CN106648116B - Virtual reality integrated system based on motion capture - Google Patents

Virtual reality integrated system based on motion capture

Info

Publication number
CN106648116B
CN106648116B (application CN201710053172.9A)
Authority
CN
China
Prior art keywords
sensor
motion capture
data
virtual reality
gun
Prior art date
Legal status: Active
Application number
CN201710053172.9A
Other languages
Chinese (zh)
Other versions
CN106648116A (en)
Inventor
隋文涛
李秀丰
周清
孔令涛
Current Assignee
Nanjing Ruichenxinchuang Network Technology Co ltd
Original Assignee
Nanjing Ruichenxinchuang Network Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Nanjing Ruichenxinchuang Network Technology Co ltd
Priority to CN201710053172.9A
Publication of CN106648116A
Application granted
Publication of CN106648116B

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/014 - Hand-worn input/output arrangements, e.g. data gloves
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT]
    • Y02D 30/00 - Reducing energy consumption in communication networks
    • Y02D 30/70 - Reducing energy consumption in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Position Input By Displaying (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A virtual reality integrated system based on motion capture comprises an inertial motion capture device, an indoor positioning device, a virtual reality device, a data glove device, an electronic simulation gun device and a backpack computer device; the backpack computer device is internally provided with a Kalman filter I. The inertial motion capture device comprises a plurality of motion capture sensors; the indoor positioning device is a UWB indoor positioning system; the data glove device comprises a glove body and a plurality of hand joint gesture sensors; the electronic simulation gun device comprises an electronic simulation gun and an electronic simulation gun data acquisition sensor. The motion capture sensors, the hand joint gesture sensors and the electron gun gesture sensor each comprise a triaxial MEMS acceleration sensor, a triaxial MEMS angular velocity sensor, a triaxial MEMS magnetometer, a data filtering sensor and a microprocessor with a built-in Kalman filter II. The invention enables people in the real world to interact with the virtual environment in an all-round way, improving the immersion and interactivity of the virtual reality system.

Description

Virtual reality integrated system based on motion capture
Technical Field
The invention relates to the fields of micro-electro-mechanical systems (MEMS) and virtual reality, and in particular to a virtual reality integrated system based on motion capture technology.
Background
Technology for tracking and restoring the posture of moving objects is widely applied in many fields, in particular aerospace navigation and human body motion posture modeling. The motion capture technologies in common use are optical motion capture and motion capture based on inertial sensors, described in turn below:
optical motion capture technology: a number of reference balls/points are attached to the moving target body, and several high-speed cameras are arranged around the object under test so that its range of movement lies within the overlapping fields of view of the cameras. The trajectories of the reference balls/points are obtained through video recognition, and the motion posture and trajectory of the target are calculated and restored through a 3-dimensional model. Optical motion capture offers high recognition accuracy and a high sampling frequency. However, it also has the following problems: the overall system is complex, requires many high-speed cameras and a computing platform, and is expensive; setup and calibration are complicated, only movement within the overlapping fields of view of the cameras can be captured, and when the movement is complex the markers are easily confused or occluded, producing erroneous results; moreover, the optical requirements on the environment are high, and the illumination intensity has a large influence on precision.
Inertial motion capture technology: with the rapid development of MEMS sensors, micro inertial sensor technology has matured and micro inertial sensors have begun to be used for motion capture. The specific method is as follows: an inertial measurement unit (IMU) is attached to the object under test and moves with it. The data of the multiple sensor nodes are acquired and processed, transmitted to an upper computer system through wireless communication, and the posture is restored by the upper computer. The advantages of inertial motion capture are that the system is relatively simple, immune to occlusion, less demanding of light and environment than optical motion capture, and widely applicable; its cost is also generally lower than that of an optical system. The inertial measurement unit comprises an accelerometer, a gyroscope and a geomagnetic sensor, and can measure the displacement and azimuth information of the object under test by double integration of the acceleration signal and integration of the gyroscope signal.
However, the gyroscope's measurement of the posture data of a moving object contains errors, and only after correction can the posture of the moving object be reflected truly. When the gyroscope measures attitude data, the error arises as follows:
First, the quantity measured by the gyroscope is an angular velocity, an instantaneous value that in most cases cannot be used directly; it must be integrated over time to obtain the angle change. The attitude angle of the object's motion is then the initial angle plus the obtained angle change.
When integrating the angular velocity over time, the smaller the integration step (dt), the more accurate the resulting angle. However, the measurement basis of the gyroscope is the sensor itself rather than an external absolute reference, and the integration step (dt) cannot be made infinitely small, so the integration error gradually accumulates over time and the measured motion posture data drifts away from the actual data.
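This drift can be made concrete with a short numerical sketch (Python, with hypothetical rate, bias and noise values rather than measured data): even a small uncorrected bias produces an angle error that grows linearly with time.

```python
import numpy as np

# Hypothetical gyroscope readout: a true rate of 10 deg/s plus a small
# constant bias of 0.1 deg/s and white noise, sampled every dt = 0.01 s.
rng = np.random.default_rng(0)
dt = 0.01                          # integration step (s)
t = np.arange(0.0, 60.0, dt)       # one minute of samples
true_rate = 10.0                   # deg/s
bias = 0.1                         # deg/s, uncorrected sensor bias
gyro = true_rate + bias + rng.normal(0.0, 0.05, t.size)

# Rectangular time integration of angular velocity -> angle change.
angle = np.cumsum(gyro) * dt
true_angle = true_rate * t

# The error grows roughly as bias * t: about 6 degrees after 60 s.
print(f"integrated-angle error after 60 s: {angle[-1] - true_angle[-1]:.2f} deg")
```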
The virtual reality technology is briefly described below.
Virtual reality technology: it involves computer graphics, human-computer interaction, sensor technology, artificial intelligence and other fields. A computer generates realistic three-dimensional visual, auditory, tactile, olfactory and other sensations, so that a person, through suitable devices, can naturally experience and interact with the virtual world as a participant. When the user moves, the computer immediately performs complex calculations and returns precise 3D real-time images to produce a sense of presence. The technology draws on the latest achievements in computer graphics, computer simulation, artificial intelligence, sensing, display, network parallel processing and other technologies, and is a high-technology simulation system generated with the aid of computer technology.
Immersion and interactivity are the two important features by which virtual reality technology is evaluated. Immersion refers to the degree of realism with which a user feels present, as the protagonist, in the simulated environment. An ideal simulation environment lets the user become fully absorbed in the three-dimensional virtual environment created by the computer, to the point of finding it hard to tell true from false. Interactivity refers to the degree to which the user can manipulate objects within the simulated environment and the naturalness of the feedback received from it.
However, most existing virtual reality technology is experienced only through virtual reality glasses or headsets, so its immersion and interactivity are not high.
The virtual reality integrated system of the invention adopts inertial motion capture, indoor positioning, virtual reality glasses, data glove, electronic simulation gun and other technologies, and is applicable to scenarios such as game experience, simulation training, indoor and outdoor performance, and medical rehabilitation.
Disclosure of Invention
The invention aims to solve the above technical problems of the prior art by providing a virtual reality integrated system based on motion capture, which enables people in the real world to interact with the virtual environment in an all-round way and improves the immersion and interactivity of the virtual reality system.
In order to solve the technical problems, the invention adopts the following technical scheme:
a virtual reality integrated system based on motion capture comprises an inertial motion capture device, an indoor positioning device, a virtual reality device, a data glove device, an electronic simulation gun device and a backpack computer device; the inertial motion capturing device, the indoor positioning device, the virtual reality device, the data glove device and the electronic simulation gun device are all in wireless connection with the backpack computer device; the backpack computer device is internally provided with a Kalman filter I and a simulation software system.
The inertial motion capture device comprises a plurality of motion capture modules which can be fixed on a human body, and each motion capture module comprises a motion capture sensor.
The indoor positioning device is a UWB indoor positioning system.
The data glove device comprises a glove body and a plurality of hand joint gesture sensors arranged in the glove body.
The electronic simulation gun device comprises an electronic simulation gun and an electronic simulation gun data acquisition sensor which is arranged in the electronic simulation gun, wherein the electronic simulation gun data acquisition sensor comprises an electronic gun posture sensor and an electronic gun operation sensor.
The motion capture sensor, the hand joint gesture sensor and the electron gun gesture sensor comprise a triaxial MEMS acceleration sensor, a triaxial MEMS angular velocity sensor, a triaxial MEMS magnetometer, a data filtering sensor and a microprocessor; the three-axis MEMS acceleration sensor, the three-axis MEMS angular velocity sensor and the three-axis MEMS magnetometer are all connected with the data filtering sensor, and the data filtering sensor is also connected with the microprocessor; the data filtering sensor can carry out primary filtering on data detected by the triaxial MEMS acceleration sensor, the triaxial MEMS angular velocity sensor and the triaxial MEMS magnetometer and then transmit the data to the microprocessors, and a Kalman filter II is arranged in each microprocessor.
The electron gun operation sensor is connected with a microprocessor in the electron gun posture sensor.
The electron gun operation sensor is one or a combination of a shooting sensor, a cartridge clip sensor, a loading sensor and a safety sensor.
The virtual reality device comprises a VR wearing device and an environment feedback device, wherein the environment feedback device is one or a combination of more of an audio system, a controllable running machine, an electrode stimulation patch and a force feedback coat.
The VR wearing device is a VR helmet or VR glasses.
The model of the microprocessor in the motion capture sensor, the hand joint gesture sensor and the electron gun gesture sensor is NXP-LPC13xx.
The motion capture sensor can collect skeleton posture data of a human body contact part and perform displacement correction on the collected skeleton posture data; the working process of the motion capture sensor is as follows:
the triaxial MEMS acceleration sensor, triaxial MEMS angular velocity sensor and triaxial MEMS magnetometer in the motion capture sensor respectively acquire the acceleration, angular velocity and geomagnetic field intensity of the body part in contact, and the data filtering sensor in the motion capture sensor performs primary filtering on the acquired acceleration, angular velocity and geomagnetic field intensity data; the Kalman filter in the microprocessor then applies a Kalman filtering algorithm to deeply filter and fuse the acceleration, angular velocity and geomagnetic field intensity data that fall within the normal range after primary filtering; ellipsoid model coefficients are obtained by least-squares fitting, the geomagnetic sensor error matrix and offset vector are derived from the ellipsoid model coefficients, and finally displacement correction and calibration are carried out on the skeleton posture data output under the geomagnetic environment.
The UWB indoor positioning system comprises a plurality of positioning anchor nodes, a plurality of mobile tags, a synchronizer and a server; the positioning anchor nodes are fixedly installed indoors, a mobile tag is worn by each target user, and data are transmitted between the mobile tags and the positioning anchor nodes via UWB; the synchronizer carries out time-correction communication with each positioning anchor node, realizing time synchronization among the anchor nodes; the server is provided with wireless access nodes, and each positioning anchor node exchanges data with the server through the wireless access nodes.
After the structure is adopted, the invention has the following beneficial effects:
1. the motion capture module is small, light and long-lasting; it does not hinder the motion of the human body when strapped on, and its high sampling frequency allows complex, high-speed motion to be collected; the motion capture module is flexible in configuration and can capture both local and whole-body motion; motion capture is not limited by the site, and the capture effect is unaffected by occlusion from objects in the real environment; the cost of the motion capture system is relatively low.
2. By adopting the indoor positioning device and a fusion algorithm, the absolute coordinate position can be fused with the motion capture coordinates, correcting the accumulated error of the gyroscope integration in the motion capture device and providing a more realistic position effect for the user.
3. The data glove device transmits the finger state data to the processor, making man-machine interaction more realistic and enhancing the user's virtual reality experience.
4. The electronic simulation gun device transmits the states of the hand-held electronic gun, such as its attitude, trigger and safety, to the processor, strengthening virtual reality interaction and man-machine interaction and enhancing the experience of virtual reality games and training.
In summary, the invention can introduce the real-world human body (including the trunk, limbs, hand-held props, etc.) and its actions into the virtual world in real time and map them to the corresponding character, and can feed the virtual environment's effects on that character back to the real-world user's perception in real time in an appropriate way, greatly improving the immersion of virtual reality while increasing the interactivity between the character and the virtual environment, so that people obtain a more authentic virtual reality experience.
Drawings
In order to illustrate the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below.
FIG. 1 shows a schematic diagram of a motion capture-based virtual reality integration system of the present invention.
FIG. 2 is a schematic diagram showing a method of fusing indoor positioning data and motion capture data according to the present invention.
Detailed Description
The invention will be described in further detail with reference to the accompanying drawings and specific preferred embodiments.
As shown in FIG. 1, the motion capture-based virtual reality integrated system comprises an inertial motion capture device, an indoor positioning device, a virtual reality device, a data glove device, an electronic simulation gun device and a backpack computer device. The inertial motion capture device, the indoor positioning device, the virtual reality device, the data glove device and the electronic simulation gun device are all in wireless connection with the backpack computer device. Wireless communication means include, but are not limited to, Bluetooth, ZigBee, WIFI and 2.4 GHz communication.
The inertial motion capture device comprises a plurality of motion capture modules which can be fixed on a human body. The number of the motion capture modules can be arbitrarily selected according to the situation, and can be 3, 6, 9, 11, 15, 17 or the like.
When the number of motion capture modules is 3, the 3 motion capture modules are fixed by straps or professional motion-capture clothing at three different positions on the user, the three positions preferably being: 1. the head, torso and buttocks; or 2. the head, one of the two upper arms (left or right), and one of the two forearms (left or right).
When the number of motion capture modules is 6, the 6 motion capture modules are preferably fixed by straps or professional motion-capture clothing to the head, torso, buttocks, legs, feet (left or right), one of the upper arms and one of the forearms, or to the head, torso, buttocks, one of the upper arms, one of the forearms and one of the hands (left or right).
When the number of motion capture modules is 9, the 9 motion capture modules are preferably fixed by straps or professional motion-capture clothing to the head, torso, buttocks, both thighs, both calves, one of the upper arms and one of the forearms, or to the head, torso, buttocks, both thighs, both calves, both upper arms and both forearms.
When the number of motion capture modules is 11, the 11 motion capture modules are preferably fixed by straps or professional motion-capture clothing to the head, torso, buttocks, both thighs, both calves, one of the feet, one of the upper arms and one of the forearms, or to the head, torso, buttocks, both thighs, both calves, both upper arms and both forearms.
When the number of motion capture modules is 15, the motion capture modules are preferably fixed to the head, torso, buttocks, both thighs, both calves, both feet, both upper arms, both forearms and both hands.
When the number of motion capture modules is 17, the motion capture modules are preferably fixed to the head, torso, buttocks, both thighs, both calves, both feet, both upper arms, both forearms, both hands and both shoulders.
Each of the motion capture modules described above includes a motion capture sensor.
The motion capture sensor includes a three-axis MEMS acceleration sensor, a three-axis MEMS angular velocity sensor (also known as a gyroscope sensor), a three-axis MEMS magnetometer (also known as an electronic compass sensor), a data filtering sensor, and a microprocessor.
The triaxial MEMS acceleration sensor, the triaxial MEMS angular velocity sensor and the triaxial MEMS magnetometer measure the acceleration signal, the angular velocity signal and the geomagnetic signal respectively.
The triaxial MEMS acceleration sensor, the triaxial MEMS angular velocity sensor and the triaxial MEMS magnetometer are all connected with the data filtering sensor, and the data filtering sensor is also connected with the microprocessor.
The data filtering sensor can carry out primary filtering on data detected by the triaxial MEMS acceleration sensor, the triaxial MEMS angular velocity sensor and the triaxial MEMS magnetometer and then transmit the data to the microprocessors, and a Kalman filter II is arranged in each microprocessor.
The microprocessor includes, but is not limited to, an MCU, DSP or FPGA, preferably of type NXP-LPC13xx. The microprocessor communicates with the triaxial MEMS acceleration sensor, the triaxial MEMS angular velocity sensor and the triaxial MEMS magnetometer through communication interfaces such as SPI (serial peripheral interface), I2C (two-wire serial bus) and USART (serial port).
The motion capture sensor can collect skeleton gesture data of a human body contact part and perform displacement correction on the collected skeleton gesture data.
The working process of the motion capture sensor is as follows:
the triaxial MEMS acceleration sensor, the triaxial MEMS angular velocity sensor and the triaxial MEMS magnetometer are used for respectively acquiring acceleration, angular velocity and geomagnetic field intensity of a human body contact part.
The data filtering sensor performs primary filtering processing on the collected acceleration, angular velocity and geomagnetic field intensity data, and then transmits the acceleration, angular velocity and geomagnetic field intensity signal data in a normal range to the microprocessor.
The microprocessor NXP-LPC13xx receives the acceleration, angular velocity and geomagnetic intensity signals and generates quaternions or Euler angles; the Kalman filter built into the microprocessor applies a Kalman filtering algorithm to deeply filter and fuse the received acceleration, angular velocity and geomagnetic field intensity data and processes them into the user's body posture information.
While performing the deep filtering and fusion, the microprocessor also analyzes the various error sources of the geomagnetic sensor and establishes a complete ellipsoid error model of the geomagnetic sensor; the ellipsoid model coefficients are obtained by least-squares fitting, the geomagnetic sensor error matrix and offset vector are derived from the ellipsoid model coefficients, and finally displacement correction and calibration are carried out on the skeleton posture data output in the geomagnetic environment.
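As an illustration of this magnetometer correction, the following Python sketch fits a simplified, axis-aligned ellipsoid to raw magnetometer samples by least squares and derives the offset vector (hard-iron error) and a diagonal scale correction; the full error model described above also covers cross-axis terms, and all names and values here are illustrative, not the patent's implementation.

```python
import numpy as np

def fit_ellipsoid(m):
    """Least-squares fit of the axis-aligned ellipsoid
    A x^2 + B y^2 + C z^2 + D x + E y + F z = 1
    to raw magnetometer samples m (N x 3). Returns the hard-iron
    offset vector and per-axis scale factors (a diagonal
    approximation of the error matrix)."""
    x, y, z = m[:, 0], m[:, 1], m[:, 2]
    design = np.column_stack([x * x, y * y, z * z, x, y, z])
    coef, *_ = np.linalg.lstsq(design, np.ones(len(m)), rcond=None)
    A, B, C, D, E, F = coef
    # Completing the square gives the ellipsoid centre (the offset).
    offset = np.array([-D / (2 * A), -E / (2 * B), -F / (2 * C)])
    g = 1 + D**2 / (4 * A) + E**2 / (4 * B) + F**2 / (4 * C)
    radii = np.sqrt(g / np.array([A, B, C]))
    scale = radii.mean() / radii       # normalise the axes to a sphere
    return offset, scale

# Usage on raw samples `raw` (N x 3):
#   offset, scale = fit_ellipsoid(raw)
#   corrected = (raw - offset) * scale
```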
Finally, the microprocessor transmits the corrected skeleton posture data and corrected user body posture information (including azimuth information, euler angles, quaternion information and the like) to the backpack type computer device in a wireless or wired mode.
The Kalman filtering algorithm is a recursive autoregressive data processing algorithm; it is mature prior art and is realized through five standard formulas. It estimates the process state through feedback control, cyclically correcting each output state result until the optimal state process data are obtained. A Kalman filtering cycle divides into two phases, time update and measurement update: the former projects the current state variable estimate and error covariance forward in time to construct the prior estimate for the next time step; the latter combines the prior estimate with the measured variable to construct an improved posterior estimate. The time update can be regarded as a prediction step and the measurement update as a correction step, so the whole estimation algorithm is essentially a numerical predictor-corrector.
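In generic form (not the specific filters I and II of the invention), one predict-correct cycle built from the five standard formulas can be sketched as follows:

```python
import numpy as np

def kalman_step(x, P, z, A, C, Q, R):
    """One time-update plus measurement-update cycle of the textbook
    Kalman filter: x is the state, P its covariance, z the measurement."""
    # Time update (prediction): project state and covariance forward.
    x_prior = A @ x                              # (1)
    P_prior = A @ P @ A.T + Q                    # (2)
    # Measurement update (correction).
    S = C @ P_prior @ C.T + R
    K = P_prior @ C.T @ np.linalg.inv(S)         # (3) Kalman gain
    x_post = x_prior + K @ (z - C @ x_prior)     # (4) corrected state
    P_post = (np.eye(len(x)) - K @ C) @ P_prior  # (5) corrected covariance
    return x_post, P_post
```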
Thanks to this Kalman filtering and data processing, the sensor can be used within a certain range in environments containing iron and weak magnetic fields: when an object carrying a weak magnetic field, such as a mobile phone, approaches the sensor, the signal acquisition of the geomagnetic sensor is unaffected and the posture data remain usable.
The motion capture module has the advantages of small volume, light weight and long battery life; it does not hinder the motion of the human body when strapped on, and its high sampling frequency allows complex, high-speed motion to be collected. The motion capture modules are flexible in configuration and can capture both local and whole-body motion; capture is not limited by the site, the capture effect is unaffected by occlusion from objects in the real environment, and the cost of the motion capture system is relatively low.
The data glove device comprises a glove body and a plurality of hand joint gesture sensors arranged in the glove body.
The number of the hand joint posture sensors can be arbitrarily selected according to the situation, and can be 6, 10 or 15, etc.
In one embodiment, the number of joint sensors is 6: 1 is fixed on the back of the hand and 1 on each of the five fingers.
In one embodiment, the number of joint sensors is 10: 1 is fixed on the back of the hand through the glove, 1 on the thumb, and 2 on each of the other four fingers.
In one embodiment, the number of joint sensors is 15: 1 is fixed on the back of the hand through the glove, 2 on the thumb, and 3 on each of the other four fingers.
The hand joint gesture sensor also comprises a triaxial MEMS acceleration sensor, a triaxial MEMS angular velocity sensor, a triaxial MEMS magnetometer, a data filtering sensor and a microprocessor. The components of the hand joint posture sensor are the same as the motion capture sensor, and the connection relationship and the working process between the components are basically similar, and will not be described in detail here.
The electronic simulation gun device comprises an electronic simulation gun, an electronic simulation gun data acquisition sensor arranged inside the gun, a wireless communication module, a power supply and the like.
The number of electronic simulation guns can be set to 1, 2, 3 or more according to the number of users; each user carries one electronic simulation gun and simulates actions such as changing magazines, loading and shooting in an open space.
The electronic simulation gun is preferably manufactured at a 1:1 scale to a real gun, with appearance, weight and manner of operation designed entirely after the real weapon, giving a highly realistic experience.
The electronic simulation gun data acquisition sensor comprises an electronic gun attitude sensor and an electronic gun operation sensor.
The electronic gun operation sensor is one or a combination of several of a shooting sensor, a cartridge clip sensor, a loading sensor, a safety sensor and the like.
The electronic gun attitude sensor also comprises a triaxial MEMS acceleration sensor, a triaxial MEMS angular velocity sensor, a triaxial MEMS magnetometer, a data filtering sensor and a microprocessor. The components of the electronic gun posture sensor are the same as the motion capture sensor, and the connection relationship and the working process between the components are basically similar, and will not be described in detail here.
However, the microprocessor in the gun attitude sensor is also connected to the gun operation sensor.
The electron gun attitude sensor measures acceleration, angular velocity and geomagnetic field intensity, and the electron gun operation sensor acquires the state of the gun. The data are input to the microprocessor for processing, which outputs the quaternion or Euler angles of each node; after data analysis and a restoration algorithm, the signal data are transmitted to the backpack computer by wire or wirelessly, and the computer, connected to the virtual reality device through a data interface, restores the state in real time.
The user holds the electronic simulation gun and, by operating the trigger, the loading mechanism, the cartridge clip and the like, simulates the loading, magazine changing and shooting of a real gun. The shooting sensor, cartridge clip sensor, loading sensor, safety sensor and the like detect the shooting, magazine changing, loading and safety operation states in real time and transmit the operation state data to the microprocessor; the microprocessor wirelessly transmits the data to the backpack computer device for processing, which maps the state of the gun into the virtual world in the virtual reality device.
The virtual reality device comprises a VR wearing device and an environment feedback device.
Wherein, VR wears the device and is VR helmet or VR glasses etc..
The environment feedback device is one or a combination of several of an audio system, a controllable running machine, electrode stimulation patches, a force feedback coat/shoes and the like. The audio system is a loudspeaker that feeds audio signals back to the human ear; the force feedback coat/shoes apply actions to certain parts of the human body through actuators, i.e. they feed force feedback signals back to the body; the electrode stimulation patches are electrode patches attached to the skin, between which a voltage is applied to stimulate the nerves or muscles between the two patches, i.e. they feed tactile signals back to the body.
The environment feedback device is worn by the target user, fixed by wearing the helmet or by straps, and is preferably connected wirelessly to the backpack computer device. The backpack computer generates a 3D virtual environment and a virtual character for the user, maps the received position information, body posture information, finger posture information and electronic simulation gun state information onto the virtual character and environment, and, according to the interaction between the virtual character and the environment, transmits the corresponding video and audio signals through different signal interfaces to the video, audio, pressure and other devices of the virtual reality glasses device.
The indoor positioning device is a UWB indoor positioning system. UWB indoor positioning systems are state of the art, see in particular the patent of application number CN201520817538.1 filed earlier by the applicant.
The UWB indoor positioning system comprises a plurality of positioning anchor nodes, a plurality of mobile tags, a synchronizer and a server; the positioning anchor nodes are fixedly installed indoors, a mobile tag is worn by each target user, and data are transmitted between the mobile tags and the positioning anchor nodes via UWB; the synchronizer carries out time-correction communication with each positioning anchor node, realizing time synchronization among the anchor nodes; the server is provided with wireless access nodes, and each positioning anchor node exchanges data with the server through the wireless access nodes.
In specific implementation, a number of positioning anchor nodes can be arranged according to the specific site area, and the user wears the mobile tag. The number of mobile tags may be 1, 2, 3 or more, preferably fixed by straps or professional binding clothing to the user's head, chest, wrist, etc., while the user walks and moves about in the site where the anchor nodes are located.
UWB technology enables dynamic, precise positioning of the target user in an indoor environment; the system has low power consumption, its low-complexity design is easy to operate, and no wiring is needed, which improves application efficiency. The device outputs the position information of the target user.
The positioning characteristic of the UWB indoor positioning system is that no accumulated error arises over long-term use. However, the device has a certain positioning error range, about ±20 cm; in real-time use this amounts to small-range jitter, the displacement is not smooth enough, and it cannot be used directly to replace the displacement data in the motion capture posture; direct replacement would produce a mismatch between posture and displacement.
As described in the Background, when the angular velocity from the triaxial MEMS angular velocity sensor of the invention, i.e. the gyroscope, is integrated over time, the accumulated integration error still grows gradually even though the data filtering sensor performs primary filtering and the Kalman filter in the microprocessor performs deep filtering, so the measured motion posture data still deviate somewhat from the actual data.
The invention further solves the problem of deviation between the measured motion gesture data and the actual data by adopting the following method.
1. Posture recombination:
the Kalman filter II is used in the UWB indoor positioning system and the backpack computer device, and can fuse positioning data in the UWB indoor positioning system with gesture data measured by an inertial motion capture sensor (comprising a hand joint gesture sensor, an electron gun gesture sensor and the like), namely, the absolute coordinate position is fused with the coordinates in the motion capture, and the accumulated error of gyroscope integration in the motion capture device is corrected, so that a more real position effect is provided for a user.
A method for fusing indoor positioning data and motion capture data comprises the following steps:
step 1, motion capture data acquisition: acquiring motion capture data of a human body through a motion capture sensor in a virtual reality integrated system; the virtual reality integrated system is provided with an inertial motion capture device, the inertial motion capture device comprises a plurality of motion capture sensors which can be fixed on a human body, and the motion capture sensors can automatically capture and collect motion data of a human body contact part, namely skeleton gesture data.
Step 2, indoor positioning data acquisition: indoor positioning data are obtained through a UWB indoor positioning system.
Step 3, obtaining fusion displacement: and (3) fusing the motion capture data acquired in the step (1) with the indoor positioning data acquired in the step (2) by adopting a Kalman filtering algorithm to obtain fusion displacement.
Assuming that the coordinate data in the motion capture data collected in the step 1 and the indoor positioning data collected in the step 2 are both a set of two-dimensional coordinate points (x, y), wherein x and y respectively represent the abscissa and ordinate of the point, the specific acquisition method of the fusion displacement comprises the following steps:
step 31, a state equation is established: taking the displacement increment of the motion capture data acquired in the step 1 as a state quantity, and establishing a state equation as follows:
$$\hat{x}_k^- = A\,\hat{x}_{k-1} + \Delta d_k + w_k$$

In the above, the vector $\hat{x}_k^-$ is the a-priori estimate of the motion capture data at time k, A is the identity matrix, $\hat{x}_{k-1}$ is the a-posteriori estimate of the motion capture data at time k-1, $\Delta d_k$ is the displacement increment of the motion capture data acquired in step 1, and $w_k$ is the process noise; its covariance matrix, which is measured experimentally and is an adjustable parameter, preferably has the form

$$Q = \begin{bmatrix} q & 0 \\ 0 & q \end{bmatrix}$$

with the matrix parameter q ranging from 0 to 500.
Step 32, establishing an observation equation: taking the indoor positioning data acquired in the step 2 as observed quantity, and establishing an observation equation shown as follows:
$$z_k = C\,\hat{x}_k + r_k$$

In the above, the vector $\hat{x}_k$ is the a-posteriori estimate of the indoor positioning data at time k, C is the observation matrix, preferably the identity matrix, $z_k$ represents the input coordinate data of the UWB indoor positioning system, and $r_k$ is the observation noise; its covariance matrix, which is measured experimentally, preferably has the form

$$R = \begin{bmatrix} r & 0 \\ 0 & r \end{bmatrix}$$

with the matrix parameter r ranging from 0 to 100.
Step 33, calculating the fusion displacement: solving the state equation established in step 31 together with the observation equation established in step 32 yields the fusion displacement.
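A compact numerical sketch of steps 31 to 33 is given below; A and C are identity matrices as stated, and the concrete values of q and r are illustrative picks from the stated ranges, not values prescribed by the patent.

```python
import numpy as np

def fuse_displacement(mocap_increments, uwb_positions, q=100.0, r=10.0):
    """Fuse per-frame motion capture displacement increments (N x 2)
    with UWB coordinates (N x 2) into a fusion displacement (N x 2)."""
    Q = np.eye(2) * q                  # process noise, 0 < q < 500
    R = np.eye(2) * r                  # observation noise, 0 < r < 100
    x = uwb_positions[0].astype(float) # start from the first UWB fix
    P = np.eye(2)
    fused = [x.copy()]
    for delta, z in zip(mocap_increments[1:], uwb_positions[1:]):
        # State equation: prior = posterior + displacement increment.
        x_prior = x + delta
        P_prior = P + Q
        # Observation equation: UWB coordinates observed directly (C = I).
        K = P_prior @ np.linalg.inv(P_prior + R)
        x = x_prior + K @ (z - x_prior)
        P = (np.eye(2) - K) @ P_prior
        fused.append(x.copy())
    return np.array(fused)
```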
Step 4, displacement deviation correction: analyzing the posture data of each bone captured by each motion capture sensor in the virtual reality integrated system, and calculating the relative displacement coordinates of each bone; and (3) carrying out displacement correction on each motion capture sensor in the virtual reality integrated system according to the fusion displacement and skeleton relative displacement coordinates obtained in the step (3) to form gesture recombination displacement.
When correcting the displacement, in order to match the fusion displacement obtained in step 3 to the original skeleton posture, the judgment is based on whether the body is in contact with the ground. When the body lands, the positions of the whole-body bones are calculated taking the landing point as the origin; if no new landing point is produced during correction, the origin is kept unchanged; if a new landing point is produced, the origin becomes the fusion displacement at the current moment.
Under the condition that a human body lands, calculating the positions of bones of the whole body by using a pose matrix by taking a landing point as an origin; wherein the pose matrix T is represented as follows:
$$T = \begin{bmatrix} n_x & o_x & a_x & p_x \\ n_y & o_y & a_y & p_y \\ n_z & o_z & a_z & p_z \\ 0 & 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} R & P \\ O & I \end{bmatrix}, \qquad O = [0\ 0\ 0], \quad I = 1$$

In the above, T represents the pose matrix, n the normal vector, o the orientation vector, a the approach vector and p the translation vector; R represents the rotation matrix, P the position matrix, O the perspective matrix and I the proportional (scaling) factor; x, y, z denote the three coordinate axis directions.

The rotation matrix R is derived from the posture data of the motion capture sensor, which is the quaternion q = (w, x, y, z). The conversion formula from the quaternion to the rotation matrix is

$$R = \begin{bmatrix} 1-2(y^2+z^2) & 2(xy-wz) & 2(xz+wy) \\ 2(xy+wz) & 1-2(x^2+z^2) & 2(yz-wx) \\ 2(xz-wy) & 2(yz+wx) & 1-2(x^2+y^2) \end{bmatrix}$$

The position matrix P is initially the origin $[0\ 0\ 0]^T$; subsequent positions are obtained by multiplying the pose matrix T with a bone parameter matrix (e.g. the right-thigh bone parameter matrix; the bone parameter matrices are fixed parameters). The O matrix and the scalar I are fixed parameters.
Step 5, forming a final output displacement: and 3, the fusion displacement obtained in the step 3 and the gesture recombination displacement formed in the step 4 form a preliminary output displacement, the preliminary output displacement is subjected to Kalman filtering, and a flash point generated in the displacement correction process is removed, so that a smooth final output displacement of each motion capture sensor is formed.
When the preliminary output displacement is subjected to Kalman filtering, a Kalman filtering state equation is as follows:
$$\begin{bmatrix} \hat{x}_k \\ \dot{\hat{x}}_k \end{bmatrix} = \begin{bmatrix} 1 & t_s \\ 0 & 1 \end{bmatrix} \begin{bmatrix} \hat{x}_{k-1} \\ \dot{\hat{x}}_{k-1} \end{bmatrix}$$

In the above, $\hat{x}_k$ is the state quantity of the preliminary output displacement at time k, $\dot{\hat{x}}_k$ is its first derivative, $\begin{bmatrix} 1 & t_s \\ 0 & 1 \end{bmatrix}$ is the state matrix, $t_s$ is the sampling period of the motion capture sensor (a fixed parameter), and $\hat{x}_{k-1}$ is the state quantity of the output displacement at time k-1.
Kalman filtering observation equation:
$$z_k = C \begin{bmatrix} \hat{x}_k \\ \dot{\hat{x}}_k \end{bmatrix}$$

In the above, $\hat{x}_k$ is the a-posteriori estimate of the preliminary output displacement and $\dot{\hat{x}}_k$ is its first derivative; C is the observation matrix, taken as $C = [1\ 0]$; $z_k$ is the observed quantity.
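A sketch of this smoothing under the constant-velocity model above, applied to one coordinate axis of the preliminary output displacement; the noise parameters q and r here are illustrative assumptions.

```python
import numpy as np

def smooth_displacement(raw, t_s=0.01, q=1.0, r=50.0):
    """Kalman-smooth one axis of the preliminary output displacement
    to suppress the jump points introduced when the origin changes."""
    A = np.array([[1.0, t_s],
                  [0.0, 1.0]])         # state matrix from the patent
    C = np.array([[1.0, 0.0]])         # only the position is observed
    Q = np.eye(2) * q
    R = np.array([[r]])
    x = np.array([raw[0], 0.0])        # position and velocity
    P = np.eye(2)
    out = [x[0]]
    for z in raw[1:]:
        x = A @ x                      # time update
        P = A @ P @ A.T + Q
        K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)
        x = x + K @ (np.array([z]) - C @ x)   # measurement update
        P = (np.eye(2) - K @ C) @ P
        out.append(x[0])
    return np.array(out)
```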
The system analyzes the bone posture data of the motion capture system (including the data collected by the motion capture sensors, the hand joint gesture sensors and the electron gun gesture sensor) and calculates the relative displacement coordinates of each bone. Displacement correction of the motion capture system is then performed by fusing the indoor positioning displacement with the relative bone coordinates.
2. And (3) output filtering:
the output displacement consists of the displacement of gesture recombination and the displacement of Kalman fusion, and because the displacement of gesture recombination and the displacement of Kalman fusion have deviation (two sets of incoherent systems have the deviation necessarily), when the landing point is generated, a flashover point is generated when the Kalman fusion displacement replaces a new origin. The existence of the flash point can cause the final effect to generate the phenomenon of the flash of the character, and the flash point needs to be eliminated or smoothed. The output Kalman filtering is the smoothing of the flash point.
The knapsack type computer device comprises a computer device, a binding belt, a knapsack, a reinforcing belt, a damping device, a buffering device and the like.
The computer device includes: the system comprises a computer host, a standard video interface, a standard audio interface, a standard USB3.0 interface, a wireless communication module, a battery power supply system, a charging system and a voltage conversion circuit.
The backpack computer device is internally provided with a Kalman filter I and a simulation software system. The backpack computer is preferably connected to all of the microprocessors described above by wireless.
The simulation software system is a mature software system and can be directly purchased and used, and the application is not described in detail.
The backpack computer device is preferably connected wirelessly to the inertial motion capture device, the indoor positioning device, the virtual reality glasses device, the data glove device and the electronic simulation gun device, and the signals of these devices are input to the backpack computer device. The Kalman filter fuses the posture output data of the motion capture with the positioning output data using a data fusion algorithm with a recursive autoregressive filtering function. From the various signals of the inertial motion capture device, indoor positioning device, data glove device and electronic simulation gun device, the backpack computer device generates a 3D virtual environment and a virtual character for the user, which are fed back, displayed and realized in the virtual reality device. The 3D virtual environment includes a virtual scene, one or more user-corresponding characters and a series of virtual objects; the three can interact with one another, producing effects identical to the real world and consistent with objective laws.
The system adopts an inertial sensor technology, wears an inertial sensor module on a body to capture human body action gesture data in real time, uploads the gesture data to an upper computer through a wireless communication technology, restores human body gestures in real time, integrates a knapsack computer technology, a virtual reality glasses technology, an indoor positioning technology, an electronic simulation gun technology, a data glove technology, an ergonomic technology, a data fusion technology and a geomagnetic anti-interference technology, and integrates a virtual reality system.
The virtual reality integrated system of the present invention is described in detail below in conjunction with a specific example.
Assume that in this embodiment a user performs individual combat training or individual tactical cooperative combat in a virtual environment. 17 motion capture modules are strapped to the user's whole body, at the head, chest, buttocks, both shoulders, both upper arms, both forearms, both hands, both thighs, both calves and both feet. The mobile tag of the UWB indoor positioning system is worn on the tactical helmet; data glove devices are worn on both hands; the user holds the electronic simulation gun, and flip-up VR glasses are worn on the head with the tactical helmet.
Each motion capture module, each hand joint gesture sensor and the electron gun gesture sensor obtain the azimuth information of their node sensors through integration of the angular velocity, and at the same time obtain the module's orientation with respect to the gravity direction and the geomagnetic direction by measuring the geomagnetic field and the gravitational acceleration. The sensors of each module transmit acceleration, angular velocity and geomagnetic information to the microprocessor; the microprocessor double-integrates the acceleration to obtain the displacement information of each part, and corrects the integration error of each module according to biomechanical constraints and judgments of external contact. The microprocessor transmits the acceleration, angular velocity, geomagnetic, displacement and azimuth information of each module sensor to the backpack computer by wire or wirelessly.
The mobile tag of the UWB indoor positioning system is worn on the user's tactical helmet, and the user moves within the site where the positioning anchor nodes and the synchronizer are deployed. The mobile tag worn on the body and the positioning anchor nodes transmit data over UWB, the synchronizer carries out timing communication with each anchor node, and each anchor node transmits data to the server through the wireless access nodes. The server outputs the absolute coordinates of the mobile tag in space by computing the time differences between the tag and the anchor nodes with an indoor positioning algorithm, and sends the position information of the mobile tag to the backpack computer device by wire or wirelessly.
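The positioning algorithm itself belongs to the referenced patent and is not reproduced here; the generic idea of multilateration from arrival-time differences can nevertheless be sketched as follows, with a hypothetical anchor layout and a plain Gauss-Newton solver.

```python
import numpy as np

C_LIGHT = 299792458.0  # m/s; UWB pulses travel at the speed of light

def tdoa_locate(anchors, tdoa, iters=20):
    """Estimate a 2-D tag position from time differences of arrival.
    anchors: (M x 2) synchronized anchor coordinates; tdoa[i] is the
    arrival-time difference between anchor i+1 and anchor 0."""
    d = np.asarray(tdoa) * C_LIGHT          # range differences (m)
    p = anchors.mean(axis=0)                # initial guess: centroid
    for _ in range(iters):
        ranges = np.linalg.norm(anchors - p, axis=1)
        residual = (ranges[1:] - ranges[0]) - d
        u = (p - anchors) / ranges[:, None] # unit vectors anchor -> tag
        J = u[1:] - u[0]                    # Jacobian of the residuals
        step, *_ = np.linalg.lstsq(J, -residual, rcond=None)
        p = p + step
    return p

# Self-check with a known tag position:
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
tag = np.array([3.0, 4.0])
ranges = np.linalg.norm(anchors - tag, axis=1)
tdoa = (ranges[1:] - ranges[0]) / C_LIGHT
print(tdoa_locate(anchors, tdoa))           # ~ [3. 4.]
```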
The virtual reality device comprises helmet-mounted flip-up VR glasses, a sound system and a number of electrode patches on the user's body. The flip-up VR glasses display the three-dimensional virtual space; the sound system feeds back the various sounds of the virtual environment, and the electrode patches feed back the various stimuli of the virtual environment to the user. The backpack computer fuses the information collected by the inertial motion capture device, the indoor positioning device, the data glove device and the electronic simulation gun device through the fusion algorithm, and the simulation software outputs signals to the virtual reality device, driving the flip-up VR glasses, the sound system and the electrode patches to act on the user and produce a deeply immersive, lifelike virtual environment.
The backpack computer runs the simulation software, and the virtual reality device generates a three-dimensional virtual space acting on the user, containing events that do not exist in the real world or occur only with small probability. For example, individual soldiers simulate the missions encountered by special forces during training and tactical coordination in a sudden armed conflict, completing the task of quelling the conflict. In the virtual environment the user can shoot at and subdue armed personnel with the electronic simulation gun in hand, and the virtual characters can attack the user or attack and injure other users. Facing armed personnel in the virtual environment, the user can dodge, run, jump, crawl, kneel and so on, while killing and subduing the virtual militants with the hand-held electronic simulation gun. Multiple users can use the gloves for sign language and tactical-action commands and communication, or communicate through a voice system. If the user is hit by other users or by armed personnel in the virtual environment, the electrode patches in the virtual reality device generate stimulation signals at the corresponding positions with intensity matching the attack, giving the user a realistic sensation of being hit.
Based on the above example and the prior art, the similarities and differences between the motion capture-based virtual reality integrated system of the invention and a common 3D role-playing game are explained.
The similarity: in both, the user manipulates a virtual character to perform certain activities and experiences in a virtual 3D world. The differences: the invention runs immersive 3D virtual reality software and controls the virtual character through the user's limb movements, finger movements, simulated gun actions and speech, just as a person operates their own body in the real world, whereas a common 3D role-playing game controls the character with a mouse and keyboard. Moreover, in a common 3D role-playing game the player can only see a flat image on a display, seeing the played character and the characters in the environment without experiencing, through the other senses, the interaction between the in-game character and its surroundings. With the virtual reality integrated system, a corresponding stereoscopic view of the 3D virtual environment is provided as the character's situation in the virtual environment changes, improving the sense of reality so that the user feels present in the scene; at the same time, through the environment feedback device, the user experiences the interaction between the virtual environment and the real character with other parts of the body.
In conclusion, the motion capture modules, hand joint gesture sensors and electron gun gesture sensor are small, light and convenient to wear; they do not hinder movement when strapped to the body, and their high sampling rate allows complex, high-speed movements to be captured and sampled. Wearing is flexible, and a suitable combination can be chosen according to actual needs; motion capture is not limited by the site, the capture effect is not affected by occlusion by real objects, and the cost of motion capture is relatively low. The indoor positioning device can capture and track the real-time positions of multiple users within the space where it is deployed and output their absolute coordinates; it adopts UWB positioning technology with a high sampling frequency, locates users in real time and can follow rapid user movements. The tag can be worn flexibly on the head, chest or wrist according to specific requirements. Deployment is simple: positioning can be completed merely by deploying several anchor nodes, a synchronizer and a small amount of auxiliary power supply in the space to be covered. Positioning is unaffected by the environment or by lighting, and can be deployed in open outdoor places; UWB indoor positioning costs are relatively low. The data glove is convenient to wear and its modules are small; it works simply by putting on the dedicated data glove carrier and connecting to the backpack computer. Its configuration is flexible, with different joints configurable according to specific needs so that the virtual experience is completed in the most suitable configuration; it is unaffected by the lighting environment and can be used in direct sunlight; and its sampling frequency is high, allowing complex, fast actions to be captured and sampled.
In addition, the electronic simulation gun, virtual reality glasses and backpack computer technologies solve the problem of restoring the wearer's posture and game state in real time, improving the user experience. The data glove technology, virtual reality glasses and backpack computer technology solve the problem of real-time restoration and display of the wearer's limbs and fingers. The data fusion and geomagnetic anti-interference technology reduces the interference of complex magnetic field environments on the electronic compass sensor, improving the fidelity to the physical environment and the user experience.
According to the invention, human motion postures in the real world and the states of handheld peripheral props can be introduced into the virtual reality in real time and mapped onto the corresponding character, while the effects that the virtual environment exerts on that character are fed back in real time, in an appropriate form, to the user's senses in the real world. This greatly improves the immersion of the virtual reality and increases the interactivity between the character and the virtual environment, making the user experience more vivid and realistic.
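By way of illustration only, the closed loop described above — capture the real-world pose, map it onto the character, and feed the environment's effects back to the user's body — could be sketched as below. Every name in this sketch (CaptureLoop, read_pose, apply_pose, poll_environment_events, render) is a hypothetical placeholder and not part of the patented system.

```python
# A minimal sketch of the capture -> map -> feedback loop, assuming hypothetical
# sensor, avatar and feedback-device objects with the methods shown here.
import time

class CaptureLoop:
    def __init__(self, sensors, avatar, feedback_devices, rate_hz=100):
        self.sensors = sensors            # motion capture, glove and gun sensors
        self.avatar = avatar              # the user's character in the simulation
        self.feedback = feedback_devices  # audio, treadmill, electrode patch, force vest
        self.dt = 1.0 / rate_hz

    def step(self):
        # 1) Capture: read the current pose of every tracked body part / prop.
        pose = {name: s.read_pose() for name, s in self.sensors.items()}
        # 2) Map: drive the virtual character with the real-world pose.
        self.avatar.apply_pose(pose)
        # 3) Feedback: push the environment's effects back to the user's senses.
        for event in self.avatar.poll_environment_events():
            for device in self.feedback:
                device.render(event)

    def run(self):
        while True:
            self.step()
            time.sleep(self.dt)
```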
The preferred embodiments of the present invention have been described in detail above, but the present invention is not limited to the specific details of these embodiments. Various equivalent modifications may be made to the technical solution of the present invention within the scope of its technical concept, and all such equivalent modifications fall within the protection scope of the present invention.

Claims (6)

1. A virtual reality integrated system based on motion capture, characterized in that: the system comprises an inertial motion capture device, an indoor positioning device, a virtual reality device, a data glove device, an electronic simulation gun device and a backpack computer device; the inertial motion capture device, the indoor positioning device, the virtual reality device, the data glove device and the electronic simulation gun device are all wirelessly connected with the backpack computer device; the backpack computer device is internally provided with a Kalman filter I and a simulation software system;
the inertial motion capture device comprises a plurality of motion capture modules which can be fixed on a human body, and each motion capture module comprises a motion capture sensor;
the indoor positioning device is a UWB indoor positioning system;
the data glove device comprises a glove body and a plurality of hand joint gesture sensors arranged in the glove body;
the electronic simulation gun device comprises an electronic simulation gun and an electronic simulation gun data acquisition sensor arranged in the electronic simulation gun, wherein the electronic simulation gun data acquisition sensor comprises an electron gun gesture sensor and an electron gun operation sensor;
the motion capture sensor, the hand joint gesture sensor and the electron gun gesture sensor comprise a triaxial MEMS acceleration sensor, a triaxial MEMS angular velocity sensor, a triaxial MEMS magnetometer, a data filtering sensor and a microprocessor; the three-axis MEMS acceleration sensor, the three-axis MEMS angular velocity sensor and the three-axis MEMS magnetometer are all connected with the data filtering sensor, and the data filtering sensor is also connected with the microprocessor; the data filtering sensor can carry out primary filtering on data detected by the triaxial MEMS acceleration sensor, the triaxial MEMS angular velocity sensor and the triaxial MEMS magnetometer and then transmit the data to the microprocessors, and a Kalman filter II is arranged in each microprocessor;
The electron gun operation sensor is connected with a microprocessor in the electron gun attitude sensor;
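As a rough illustration of the primary-filtering stage named above, the sketch below smooths raw 9-axis MEMS samples with a moving average before handing them to the microprocessor's Kalman filter II; the claim does not specify the filter type, so the moving average is purely an assumption.

```python
# A hedged sketch of one 9-axis sensor node's primary filter. The window size
# and the moving-average choice are assumptions, not taken from the patent.
from collections import deque
import numpy as np

class PrimaryFilter:
    """Moving-average prefilter over the most recent `window` 9-axis samples."""
    def __init__(self, window=8):
        self.buf = deque(maxlen=window)

    def push(self, accel, gyro, mag):
        # Stack the triaxial accelerometer, gyroscope and magnetometer readings
        # into one 9-element sample and return the windowed average.
        self.buf.append(np.concatenate([accel, gyro, mag]))
        return np.mean(self.buf, axis=0)
```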
the motion capture sensor can collect skeleton posture data of the body part it is attached to and perform displacement correction on the collected skeleton posture data; the working process of the motion capture sensor is as follows:
the triaxial MEMS acceleration sensor, the triaxial MEMS angular velocity sensor and the triaxial MEMS magnetometer in the motion capture sensor respectively acquire the acceleration, angular velocity and geomagnetic field intensity of the body part, and the data filtering sensor in the motion capture sensor performs primary filtering on the acquired acceleration, angular velocity and geomagnetic field intensity data; the Kalman filter II in the microprocessor adopts a Kalman filtering algorithm to deeply filter and fuse the acceleration, angular velocity and geomagnetic field intensity data that remain within their normal range after primary filtering; ellipsoid model coefficients are obtained by fitting with a least-squares estimation method, a geomagnetic sensor error matrix and an offset vector are derived from the ellipsoid model coefficients, and displacement correction is finally applied to the skeleton posture data output under the geomagnetic environment;
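The ellipsoid-fitting step can be illustrated with a textbook least-squares formulation; this is a sketch of the general technique, not necessarily the patent's exact algorithm. Raw magnetometer samples are fitted to the quadric x'Ax + 2b'x = 1, and the offset vector (hard-iron bias) and error matrix (soft-iron correction) are then recovered from A and b.

```python
# A hedged sketch of magnetometer calibration by least-squares ellipsoid
# fitting: fit an ellipsoid to raw samples, derive the error matrix W and
# offset c so that W @ (m - c) lies approximately on the unit sphere.
import numpy as np

def fit_ellipsoid(samples):
    """samples: (N, 3) raw magnetometer readings, N >= 9."""
    x, y, z = samples[:, 0], samples[:, 1], samples[:, 2]
    # Design matrix for the quadric x'Ax + 2b'x = 1 (9 coefficients).
    D = np.column_stack([x*x, y*y, z*z, 2*x*y, 2*x*z, 2*y*z, 2*x, 2*y, 2*z])
    p, *_ = np.linalg.lstsq(D, np.ones(len(samples)), rcond=None)
    A = np.array([[p[0], p[3], p[4]],
                  [p[3], p[1], p[5]],
                  [p[4], p[5], p[2]]])
    b = p[6:9]
    offset = -np.linalg.solve(A, b)          # hard-iron offset vector
    scale = 1.0 + b @ np.linalg.solve(A, b)  # (m-c)'A(m-c) equals this constant
    # The matrix square root of A/scale is the soft-iron error matrix W.
    evals, evecs = np.linalg.eigh(A / scale)
    W = evecs @ np.diag(np.sqrt(np.clip(evals, 0.0, None))) @ evecs.T
    return W, offset
```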
the posture data of each bone captured by each motion capture sensor in the virtual reality integrated system are analyzed, and the relative displacement coordinates of each bone are calculated; displacement correction is performed on each motion capture sensor in the virtual reality integrated system according to the fusion displacement and the relative displacement coordinates of the bones, forming a recombined posture displacement; the fusion displacement refers to the displacement obtained by fusing the collected motion capture data with the collected indoor positioning data using a Kalman filtering algorithm;
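The fusion displacement can be illustrated with a minimal linear Kalman filter, run per axis, that predicts from inertial acceleration and corrects with UWB absolute position. The constant-velocity state model and the noise values are assumptions; the claim specifies only that a Kalman filtering algorithm fuses the two data streams.

```python
# A hedged per-axis sketch of fusing inertial data with UWB positions.
# State is [position, velocity]; accelerations drive the prediction and
# UWB position fixes drive the correction. Noise values are illustrative.
import numpy as np

class FusionKF:
    def __init__(self, dt, accel_var=0.5, uwb_var=0.01):
        self.x = np.zeros(2)                        # [position, velocity]
        self.P = np.eye(2)                          # state covariance
        self.F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity model
        self.B = np.array([0.5 * dt * dt, dt])      # acceleration input map
        self.Q = accel_var * np.outer(self.B, self.B)
        self.H = np.array([[1.0, 0.0]])             # UWB observes position only
        self.R = np.array([[uwb_var]])

    def predict(self, accel):
        self.x = self.F @ self.x + self.B * accel
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, uwb_position):
        y = uwb_position - self.H @ self.x           # innovation
        S = self.H @ self.P @ self.H.T + self.R      # innovation covariance
        K = self.P @ self.H.T @ np.linalg.inv(S)     # Kalman gain
        self.x = self.x + (K @ y).ravel()
        self.P = (np.eye(2) - K @ self.H) @ self.P
```

The fused position estimate, self.x[0], plays the role of the fusion displacement along one axis; three such filters cover x, y and z.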
during displacement correction, in order to match the recombined displacement with the original skeleton posture, it is determined whether the human body is in contact with the ground; when the human body is grounded, the positions of the bones of the whole body are calculated with the landing point as the origin; if no new landing point is generated during correction, the origin is kept unchanged; if a new landing point is generated during correction, the origin is updated to the fusion displacement at the current moment;
when the human body is grounded, the position of each bone of the whole body is calculated from the landing point as the origin using pose matrices.
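A minimal sketch of the pose-matrix computation, assuming an illustrative bone chain: starting from the landing point as origin, 4x4 homogeneous transforms are accumulated along the skeleton to obtain each bone's world position.

```python
# A hedged sketch of placing whole-body bones from the landing point with
# 4x4 homogeneous pose matrices. The bone chain structure is illustrative.
import numpy as np

def pose_matrix(R, t):
    """Assemble a 4x4 homogeneous pose matrix from rotation R (3x3), offset t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def bone_positions(landing_point, chain):
    """chain: list of (R, offset) pairs ordered from the grounded foot upward.
    Returns the world position of each bone relative to the landing point."""
    T = pose_matrix(np.eye(3), landing_point)
    positions = []
    for R, offset in chain:
        T = T @ pose_matrix(R, offset)  # accumulate parent-to-child transforms
        positions.append(T[:3, 3].copy())
    return positions
```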
2. The motion capture-based virtual reality integrated system of claim 1, wherein: the electron gun operation sensor is one of, or a combination of, a shooting sensor, a cartridge clip sensor, a loading sensor and a safety catch sensor.
3. The motion capture-based virtual reality integrated system of claim 1, wherein: the virtual reality device comprises a VR wearing device and an environment feedback device, and the environment feedback device is one of, or a combination of, an audio system, a controllable treadmill, an electrode stimulation patch and a force feedback suit.
4. The motion capture-based virtual reality integrated system of claim 3, wherein: the VR wearing device is a VR helmet or VR glasses.
5. The motion capture-based virtual reality integrated system of claim 1, wherein: the microprocessor in the motion capture sensor, the hand joint gesture sensor and the electron gun gesture sensor is of model NXP-LPC13xx.
6. The motion capture-based virtual reality integrated system of claim 1, wherein: the UWB indoor positioning system comprises a plurality of positioning anchor nodes, a plurality of mobile tags, a synchronizer and a server; the positioning anchor nodes are fixedly arranged indoors, a mobile tag is worn by each target user, and data are transmitted between the mobile tags and the positioning anchor nodes over UWB; the synchronizer carries out time-correction communication with each positioning anchor node, thereby achieving time synchronization among the positioning anchor nodes; the server is provided with wireless access nodes, and each positioning anchor node exchanges data with the server through the wireless access nodes.
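For illustration, a tag position can be recovered from the anchor ranges by linearised least squares. This classic multilateration solver is an assumption: the claim specifies UWB positioning with synchronized anchor nodes, but not a particular solving method.

```python
# A hedged sketch of UWB multilateration: given N >= 4 fixed anchors and the
# measured tag-to-anchor distances, solve the tag position by linearising the
# range equations against the first anchor and applying least squares.
import numpy as np

def locate_tag(anchors, ranges):
    """anchors: (N, 3) anchor coordinates; ranges: (N,) measured distances."""
    a0, d0 = anchors[0], ranges[0]
    # |x - ai|^2 - |x - a0|^2 = di^2 - d0^2 cancels the quadratic tag terms,
    # leaving the linear system 2(a0 - ai) . x = di^2 - d0^2 - |ai|^2 + |a0|^2.
    A = 2.0 * (a0 - anchors[1:])
    b = (ranges[1:]**2 - d0**2
         - np.sum(anchors[1:]**2, axis=1) + np.sum(a0**2))
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x
```

With exactly four non-coplanar anchors the system solves exactly; additional anchors let the least squares average out ranging noise.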
CN201710053172.9A 2017-01-22 2017-01-22 Virtual reality integrated system based on motion capture Active CN106648116B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710053172.9A CN106648116B (en) 2017-01-22 2017-01-22 Virtual reality integrated system based on motion capture

Publications (2)

Publication Number Publication Date
CN106648116A CN106648116A (en) 2017-05-10
CN106648116B true CN106648116B (en) 2023-06-20

Family

ID=58841326

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710053172.9A Active CN106648116B (en) 2017-01-22 2017-01-22 Virtual reality integrated system based on motion capture

Country Status (1)

Country Link
CN (1) CN106648116B (en)

Families Citing this family (37)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107145235A (en) * 2017-05-11 2017-09-08 杭州幻行科技有限公司 A kind of virtual reality system
CN107336233B (en) * 2017-06-02 2020-10-09 南京邮电大学 Inertial-kinetic-capture-based human-robot virtual-real interaction control system
CN107045816A (en) * 2017-06-23 2017-08-15 西安天圆光电科技有限公司 Air battle dual training analogue means and method based on AR glasses and data glove
CN107469315A (en) * 2017-07-24 2017-12-15 烟台中飞海装科技有限公司 A kind of fighting training system
CN107469343B (en) * 2017-07-28 2021-01-26 深圳市瑞立视多媒体科技有限公司 Virtual reality interaction method, device and system
CN107632699B (en) * 2017-08-01 2019-10-11 东南大学 Natural human-machine interaction system based on the fusion of more perception datas
CN107576335B (en) * 2017-09-04 2020-12-25 红栗子虚拟现实(北京)科技有限公司 Inertial motion capture deformation and distortion correction method and device based on large space
CN108227928B (en) * 2018-01-10 2021-01-29 三星电子(中国)研发中心 Picking method and device in virtual reality scene
CN108489330B (en) * 2018-02-08 2020-02-21 乌鲁木齐涅墨西斯网络科技有限公司 Multi-person interactive virtual reality screening shooting training system for police and use method
CN108320608A (en) * 2018-02-11 2018-07-24 北京猫眼视觉科技有限公司 A kind of virtual reality training system and method
CN108196686B (en) * 2018-03-13 2024-01-26 北京无远弗届科技有限公司 Hand motion gesture capturing device, method and virtual reality interaction system
CN108459716B (en) * 2018-03-13 2021-06-22 北京欧雷新宇动画科技有限公司 Method for realizing multi-person cooperation to complete task in VR
CN108744528A (en) * 2018-04-08 2018-11-06 深圳市博乐信息技术有限公司 Body-sensing advertisement interactive approach, device and computer readable storage medium
TWI671740B (en) * 2018-06-07 2019-09-11 光禾感知科技股份有限公司 Indoor positioning system and method based on geomagnetic signals in combination with computer vision
CN108983636B (en) * 2018-06-20 2020-07-17 浙江大学 Man-machine intelligent symbiotic platform system
CN109011555A (en) * 2018-08-01 2018-12-18 浙江树人学院 A kind of virtual reality somatic sensation television game equipment
CN109089310A (en) * 2018-08-02 2018-12-25 凌宇科技(北京)有限公司 A kind of system for realizing the synchronous positioning of more people
TWI731263B (en) * 2018-09-06 2021-06-21 宏碁股份有限公司 Smart strap and method for defining human posture
CN109186594A (en) * 2018-09-20 2019-01-11 鎏玥(上海)科技有限公司 The method for obtaining exercise data using inertial sensor and depth camera sensor
CN110935166A (en) * 2018-09-25 2020-03-31 维亚科技国际有限公司 Virtual reality game system, processor and virtual game scene moving method
CN111643885A (en) * 2019-04-18 2020-09-11 成都奇天幻影数字娱乐有限公司 Virtual reality steering control method based on IMU
CN110333776A (en) * 2019-05-16 2019-10-15 上海精密计量测试研究所 A kind of military equipment operation training system and method based on wearable device
CN110427055A (en) * 2019-08-05 2019-11-08 厦门大学 A kind of stage follow spotlight automatic control system and method
CN110413130B (en) * 2019-08-15 2024-01-26 泉州师范学院 Virtual reality sign language learning, testing and evaluating method based on motion capture
CN111537988B (en) * 2020-03-31 2023-04-18 北京小米移动软件有限公司 Role control method, role control device, and computer-readable storage medium
CN111672089B (en) * 2020-06-22 2021-09-07 良匠实业(海南)有限公司 Electronic scoring system for multi-person confrontation type project and implementation method
CN111966213A (en) * 2020-06-29 2020-11-20 青岛小鸟看看科技有限公司 Image processing method, device, equipment and storage medium
CN111783679A (en) * 2020-07-04 2020-10-16 北京中科深智科技有限公司 Real-time whole body dynamic capture system and method based on data mixing of camera and IMU
TWM631301U (en) * 2020-07-10 2022-09-01 李志峯 Interactive platform system
CN112179204A (en) * 2020-09-14 2021-01-05 北京晶品特装科技股份有限公司 Simulation sniping gun for simulation and control method
CN112650395A (en) * 2020-12-30 2021-04-13 上海建工集团股份有限公司 Real-time updating method for virtual reality scene of architectural engineering
CN113343896A (en) * 2021-06-25 2021-09-03 浙江工业大学 Motion capture system based on real bird flight and control method thereof
US20230031480A1 (en) * 2021-07-28 2023-02-02 Htc Corporation System for tracking camera and control method thereof
CN113641103B (en) * 2021-08-13 2023-04-25 广东工业大学 Running machine control method and system of self-adaptive robot
CN116963028A (en) * 2022-04-13 2023-10-27 北京字跳网络技术有限公司 Head-mounted terminal equipment and tracking method and device thereof
WO2024040813A1 (en) * 2022-08-22 2024-02-29 深圳市韶音科技有限公司 Sensing apparatus and glove for capturing hand action
CN116661643B (en) * 2023-08-02 2023-10-03 南京禹步信息科技有限公司 Multi-user virtual-actual cooperation method and device based on VR technology, electronic equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103150016A (en) * 2013-02-20 2013-06-12 兰州交通大学 Multi-person motion capture system fusing ultra wide band positioning technology with inertia sensing technology
CN105068654A (en) * 2015-08-14 2015-11-18 济南中景电子科技有限公司 Motion capturing system and method based on CAN bus and inertial sensor
CN105222772A (en) * 2015-09-17 2016-01-06 泉州装备制造研究所 A kind of high-precision motion track detection system based on Multi-source Information Fusion
CN105252532A (en) * 2015-11-24 2016-01-20 山东大学 Method of cooperative flexible attitude control for motion capture robot

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010027015A1 (en) * 2008-09-05 2010-03-11 国立大学法人東京大学 Motion capture device
CN201829068U (en) * 2010-09-13 2011-05-11 徐龙龙 Individual training system based on virtual reality
CN103136912A (en) * 2013-03-05 2013-06-05 广西师范大学 Moving posture capture system
CN103488291B (en) * 2013-09-09 2017-05-24 北京诺亦腾科技有限公司 Immersion virtual reality system based on motion capture
US10684485B2 (en) * 2015-03-06 2020-06-16 Sony Interactive Entertainment Inc. Tracking system for head mounted display
CN105446485B (en) * 2015-11-20 2018-03-30 哈尔滨工业大学 System and method is caught based on data glove and the human hand movement function of position tracking instrument
CN106297473B (en) * 2016-10-26 2017-08-29 覃伟 Using multi-functional VR man-machine interactions and the analogy method of the simulator of external environment condition

Also Published As

Publication number Publication date
CN106648116A (en) 2017-05-10

Similar Documents

Publication Publication Date Title
CN106648116B (en) Virtual reality integrated system based on motion capture
CN206497423U (en) A kind of virtual reality integrated system with inertia action trap setting
CN106843484B (en) Method for fusing indoor positioning data and motion capture data
CN103488291B (en) Immersion virtual reality system based on motion capture
CN203405772U (en) Immersion type virtual reality system based on movement capture
US20090046056A1 (en) Human motion tracking device
CN102323854B (en) Human motion capture device
EP2915025B1 (en) Wireless wrist computing and control device and method for 3d imaging, mapping, networking and interfacing
KR100948095B1 (en) Motion-input device for a computing terminal and method of its operation
CN112102677A (en) Mixed reality high-simulation battle site emergency training platform and training method thereof
US20120156661A1 (en) Method and apparatus for gross motor virtual feedback
CN106873787A (en) A kind of gesture interaction system and method for virtual teach-in teaching
CN103019386A (en) Method for controlling human-machine interaction and application thereof
US20180216959A1 (en) A Combined Motion Capture System
US11887259B2 (en) Method, system, and apparatus for full-body tracking with magnetic fields in virtual reality and augmented reality applications
CN112256125B (en) Laser-based large-space positioning and optical-inertial-motion complementary motion capture system and method
CN106112997B (en) Ectoskeleton clothes
CN109003300B (en) Virtual reality system based on human body centroid displacement calculation algorithm
CN206011064U (en) Ectoskeleton takes
CN109102572A (en) Power transformation emulates virtual hand bone ratio in VR system and estimates method
Dallaire-Côté et al. Animated self-avatars for motor rehabilitation applications that are biomechanically accurate, low-latency and easy to use
RU173655U1 (en) SIMULATOR OF COSMIC CONDITIONS BASED ON VIRTUAL REALITY
CN206534641U (en) Ectoskeleton takes and body analogue system
Majid et al. Three Axis Kinematics Study for Motion Capture Using Augmented Reality
US11360549B2 (en) Augmented reality doll

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant