CN107961531B - Virtual social contact system based on motion capture clothing and working method thereof - Google Patents


Info

Publication number
CN107961531B
CN107961531B (application CN201711259238.6A)
Authority
CN
China
Prior art keywords
electromagnetic field
user
motion capture
real
radiator
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711259238.6A
Other languages
Chinese (zh)
Other versions
CN107961531A (en)
Inventor
周密 (Zhou Mi)
吴斌 (Wu Bin)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Siwuge Technology Co ltd
Original Assignee
Chengdu Siwuge Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Siwuge Technology Co ltd filed Critical Chengdu Siwuge Technology Co ltd
Priority to CN201711259238.6A priority Critical patent/CN107961531B/en
Publication of CN107961531A publication Critical patent/CN107961531A/en
Application granted granted Critical
Publication of CN107961531B publication Critical patent/CN107961531B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/20 Input arrangements for video game devices
    • A63F13/21 Input arrangements for video game devices characterised by their sensors, purposes or types
    • A63F13/212 Input arrangements for video game devices characterised by their sensors, purposes or types using sensors worn by the player, e.g. for measuring heart beat or leg activity
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/20 Input arrangements for video game devices
    • A63F13/22 Setup operations, e.g. calibration, key configuration or button assignment
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/40 Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
    • A63F13/42 Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
    • A63F13/428 Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle involving motion or position input signals, e.g. signals representing the rotation of an input controller or a player's arm motions sensed by accelerometers or gyroscopes
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F13/525 Changing parameters of virtual cameras
    • A63F13/5255 Changing parameters of virtual cameras according to dedicated instructions from a player, e.g. using a secondary joystick to rotate the camera around a player's character
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/80 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game specially adapted for executing a specific type of game
    • A63F2300/8082 Virtual reality

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Cardiology (AREA)
  • General Health & Medical Sciences (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Biophysics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a virtual social system based on motion capture clothing and its working method. Any user connected over the network can wear the motion capture clothing to have real-time motion information extracted and projected onto an avatar in a preset virtual scene. The user may enter the scene either with a face (or body) model captured by a third-party smartphone or with an animated model supplied by the software, and can then exchange speech and body motion with other users in the scene in real time, achieving a near-real social experience. The invention extracts real-time human motion by electromagnetic field measurement, which avoids the slow response of traditional inertial and optical techniques; the whole virtual communication system is simple in structure and uses low-cost hardware, and it needs no recalibration during use, making it extremely convenient.

Description

Virtual social contact system based on motion capture clothing and working method thereof
Technical Field
The invention relates to the field of virtual interaction systems, in particular to a virtual social system based on motion capture clothing and a working method thereof.
Background
Current social systems rely mainly on instant-messaging software such as QQ, WeChat, and Momo. Users log into the software on a computer, mobile phone, or other electronic device, search for and add contacts, and then interact with other online users through instant text, voice, or video using a keyboard, headset, camera, and the like.
Although such software provides voice and video communication, network-speed limits make smooth video hard to guarantee at all times, especially outdoors without Wi-Fi. Moreover, this software cannot provide face-to-face real-time interaction: participants can only type or speak, at most seeing each other's expression and surroundings, so deeper interaction is impossible.
In addition, with the development of technology, some systems improve real-time interactivity with human motion extraction and virtual-reality techniques. In the prior art, however, traditional inertial sensing derives each joint's relative position by computing the acceleration of the motion, so it yields only positions relative to the previous pose, is costly, and needs periodic calibration; this makes it inconvenient and suitable mainly for industrial uses such as film production (practical products include NOITOM's motion-capture equipment series). Optical sensing extracts motion by using visible or infrared light to recognize and track beacons at the joints; the beacons are easily occluded, causing missed motions, and image processing is slow and costly (practical products include Microsoft Xbox motion-sensing games and the HTC Vive).
Disclosure of Invention
Therefore, to solve the prior-art problems that a user cannot be fully immersed or interact deeply, that use is inconvenient, that response is slow, and that cost is high, the invention provides a virtual social system based on wearable motion capture clothing and a working method thereof.
The virtual social system comprises wearable motion capture clothing, a display device, and a master control device, the display device being communicatively connected to the master control device. The motion capture clothing carries a human body motion capture system comprising a plurality of class I electromagnetic field radiators arranged on the clothing, a plurality of class II electromagnetic field radiators arranged on the clothing, and a processor. The processor measures the voltage-amplitude change of the signal each class I radiator induces in the radiator at the reference-coordinate origin, and from it calculates that class I radiator's coordinates relative to the origin. From the distance and angle between each class II radiator and the class I radiators, the processor obtains the class II radiator's motion trajectory and, by coordinate-transfer calculation, its coordinates relative to the origin. From the real-time coordinates of the class I and class II radiators relative to the origin, the processor obtains the user's real-time body motion and transmits it to the master control device. The master control device comprises a controller, which projects the user's real-time motion into a preset virtual scene, enabling real-time communication between the user and other users in that scene.
Furthermore, each electromagnetic field radiator both transmits and receives. The radiator at the position to be measured radiates the signal generated by the transmit-waveform circuit, forming a local field distribution; the radiator at the measuring position receives the analog signal, which is amplified by the amplifier circuit, acquired by the data acquisition card and converted to a digital signal, digitally filtered by the processing circuit, and finally passed to the processor for calculation.
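The acquisition chain just described (radiate, receive, amplify, digitize, digitally filter) can be sketched as follows. This is a minimal illustration, not the patent's implementation: the ADC resolution, full-scale voltage, and moving-average filter are assumed example choices.

```python
# Hypothetical sketch of the digitize-then-filter stage of the acquisition
# chain: raw amplified samples are quantized by a simple ADC model, then
# smoothed by a moving-average digital filter before position calculation.

def adc_quantize(samples, full_scale=3.3, bits=12):
    """Model an A/D converter: clamp to full scale, quantize to integer codes."""
    levels = (1 << bits) - 1
    codes = []
    for v in samples:
        v = min(max(v, 0.0), full_scale)
        codes.append(round(v / full_scale * levels))
    return codes

def moving_average(codes, window=4):
    """Simple FIR smoothing, a stand-in for the patent's digital filter."""
    out = []
    for i in range(len(codes)):
        lo = max(0, i - window + 1)
        out.append(sum(codes[lo:i + 1]) / (i - lo + 1))
    return out

raw = [1.00, 1.02, 0.98, 1.50, 1.01, 0.99, 1.00]   # a noisy spike at index 3
codes = adc_quantize(raw)
smooth = moving_average(codes)                      # spike is attenuated
```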
Further, the voltage-amplitude change that each class I radiator induces in the radiator at the reference-coordinate origin is measured, and the class I radiator's coordinates relative to the origin are calculated from the correspondence between voltage amplitude and coordinates.
Further, the master control equipment is a computer, and the display equipment is a VR/AR head-mounted display.
Further, a working method based on the virtual social system comprises the following steps:
1) the user first puts on the motion capture clothing, selects a communication partner and a virtual scene through the software interface on the master control device, and initiates a connection invitation;
2) once the communication connection is established, the program initializes and the worn motion capture clothing is calibrated; after calibration, the user's real-time motion information is extracted through the clothing;
3) the real-time motion information extracted from the clothing is read continuously, and the avatar's motion in the virtual scene is updated to stay fully synchronized with the user's motion;
4) the user puts on the display device, enters the virtual scene, and begins real-time, full interaction in motion and speech with the other users.
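The four steps above can be sketched as a minimal session loop. All class and function names here (`MotionCaptureSuit`, `VirtualScene`, `run_session`) are hypothetical illustrations, not part of the patent.

```python
# Hypothetical end-to-end sketch of working steps 1)-4): connect, calibrate,
# then stream motion frames from the suit into the virtual scene.

class MotionCaptureSuit:
    def calibrate(self):
        return True  # stand-in for the one-time calibration after wearing
    def read_frame(self):
        return {"elbow": (0.1, 0.2, 0.3)}  # radiator coords vs. back origin

class VirtualScene:
    def __init__(self):
        self.avatar_pose = None
    def update_avatar(self, frame):
        self.avatar_pose = frame  # keep the avatar synchronous with the user

def run_session(suit, scene, n_frames):
    if not suit.calibrate():           # step 2: calibrate once after wearing
        raise RuntimeError("calibration failed")
    for _ in range(n_frames):          # steps 3-4: stream and mirror motion
        scene.update_avatar(suit.read_frame())
    return scene.avatar_pose

pose = run_session(MotionCaptureSuit(), VirtualScene(), n_frames=3)
```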
Further, the motion capture clothing is calibrated as follows: the 20 to 50 electromagnetic field radiators arranged on the clothing communicate and are computed one by one to determine each radiator's actual position relative to a back reference point; these positions are compared with the radiators' expected placement, and if the error exceeds a preset threshold, the controller on the clothing signals the master control device to remind the user to correct it. Once calibration is complete, no recalibration is needed for the whole communication session as long as the user does not take off the clothing.
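The threshold check in this calibration step can be illustrated with a short sketch; the radiator ids, coordinates, and 2 cm threshold are assumed example values, not taken from the patent.

```python
import math

# Illustrative sketch (not the patent's actual procedure) of the calibration
# check: compare each radiator's measured position against its expected
# placement on the garment and flag any whose error exceeds a threshold.

def calibration_errors(measured, expected, threshold_cm=2.0):
    """Return the radiator ids whose placement error exceeds the threshold."""
    bad = []
    for rid, pos in measured.items():
        err = math.dist(pos, expected[rid])
        if err > threshold_cm:
            bad.append(rid)
    return bad

expected = {"2F": (0.0, 0.0, 0.0), "2G": (5.0, 0.0, 0.0)}
measured = {"2F": (0.5, 0.0, 0.0), "2G": (9.0, 0.0, 0.0)}  # 2G shifted 4 cm
assert calibration_errors(measured, expected) == ["2G"]
```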
Furthermore, during motion, radiators placed at different moving parts of the body have different position-information update rates, set dynamically according to how frequently the part is currently moving.
Further, radiators at frequently moving parts have their position-information update rate set to 100 frames/second.
Further, radiators at infrequently moving parts have their position-information update rate set to 30 frames/second.
Further, the user selects an avatar from a model provided by a third-party smart device or from an animated model provided by the software.
Compared with the prior art, the invention has the following advantages and beneficial effects:
1. Because the electromagnetic field radiators are mounted on the tracked person's body, motion tracking is independent of the environment; the radiators are entirely on the clothing the user wears, no external equipment needs to be set up, and convenience of use is greatly improved.
2. The invention extracts real-time human motion by electromagnetic field measurement, avoiding the slow response of traditional inertial and optical techniques; the whole virtual communication system is simple in structure and uses low-cost hardware.
3. Because motion is captured as absolute coordinates relative to a fixed point on the back, errors do not accumulate, overcoming the prior art's need for periodic calibration; a single calibration when the clothing is put on suffices without time limit, which is extremely convenient.
4. A high motion-data update rate can be achieved by configuring the acquisition and processing components in software, meeting the motion-refresh requirements of ideal VR display and 3D games.
Drawings
The accompanying drawings, which are included to provide a further understanding of the embodiments of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the principles of the invention. In the drawings:
FIG. 1 is a block diagram of the system architecture of the present invention.
FIG. 2 is a block diagram of the operation of the human motion capture system of the present invention.
FIG. 3 is a flow chart of a system operation method implementation of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to examples and accompanying drawings, and the exemplary embodiments and descriptions thereof are only used for explaining the present invention and are not meant to limit the present invention.
Examples
As shown in fig. 1 and fig. 2, the virtual social system includes wearable motion capture clothing, a display device, and a master control device, the display device being communicatively connected to the master control device. The motion capture clothing carries a human body motion capture system comprising a plurality of class I electromagnetic field radiators arranged on the clothing, a plurality of class II electromagnetic field radiators arranged on the clothing, and a processor. The processor measures the voltage-amplitude change of the signal each class I radiator induces in the radiator at the reference-coordinate origin, and from it calculates that class I radiator's coordinates relative to the origin. From the distance and angle between each class II radiator and the class I radiators, the processor obtains the class II radiator's motion trajectory and, by coordinate-transfer calculation, its coordinates relative to the origin. From the real-time coordinates of the class I and class II radiators relative to the origin, the processor obtains the user's real-time body motion and transmits it to the master control device. The master control device comprises a controller, which projects the user's real-time motion into a preset virtual scene, enabling real-time communication between the user and other users in that scene.
The radiators are distributed over the clothing the user wears and are connected to the processor by cables. Each radiator both transmits and receives: the transmitting radiator at the position to be measured radiates the transmit-waveform circuit's signal to form a local field distribution; the receiving radiator at the measuring position receives the analog signal, which is amplified by the amplifier circuit, acquired by the data acquisition card and converted to a digital signal, digitally filtered, and passed into the signal-processing circuit for algorithmic calculation.
The processor unit is connected to each electromagnetic field radiator by cable. By computing the amplitude and phase of the signals each tracking radiator transmits and receives, the processor calculates the distances and angles between radiators; once the distance and angle relations among all radiators are determined, the body's motion state follows.
The specific algorithm for acquiring the real-time information of the human body actions is as follows:
Most of the electromagnetic field radiators arranged on the chest and back (the class I radiators) take the center of the back as the reference-coordinate origin. When the body stands upright and relaxed, their positions are known initial positions. During motion, because the torso's relative range of movement is limited, each of these radiators moves essentially along one dimension, so its displacement relative to the origin can be computed directly from the voltage-amplitude change of the signal it induces in the radiator at the origin.
The motion trajectories of the few remaining torso radiators and of the radiators on the limbs (the class II radiators) are determined from their distance and angle relative to the class I radiators, and their coordinates relative to the origin are then obtained by coordinate-transfer calculation.
The relative position calculation method of the class II electromagnetic field radiator and the class I electromagnetic field radiator is as follows:
According to the electromagnetic-field calculation formula (the Biot–Savart current-element law):

$$d\mathbf{B} = k\,\frac{I\,d\mathbf{l}\times\mathbf{a}_R}{R^{2}}$$

where $d\mathbf{B}$ is the magnetic flux density element in tesla (T), one tesla being one weber per square meter (Wb/m²); $d\mathbf{l}$ is the wire element along the current direction; $\mathbf{a}_R$ is the unit vector pointing from $d\mathbf{l}$ to the point $P$; $R$ is the distance from the current element $d\mathbf{l}$ to $P$; and $k$ is a proportionality constant.
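As a numeric illustration of the Biot–Savart current-element law invoked here, the sketch below evaluates its magnitude form $|dB| = k\,I\,dl\,\sin\theta / R^2$ with $k=\mu_0/4\pi$; the current, element length, and geometry are arbitrary example values, not from the patent.

```python
import math

# Numeric illustration of the Biot-Savart current-element law:
# |dB| = k * I * dl * sin(angle) / R**2, with k = mu0 / (4*pi) = 1e-7.
MU0 = 4 * math.pi * 1e-7     # vacuum permeability, T*m/A
K = MU0 / (4 * math.pi)      # proportionality constant k

def biot_savart_dB(current, dl, R, angle_rad):
    """Magnitude of the flux-density element at distance R from a current element."""
    return K * current * dl * math.sin(angle_rad) / R**2

# 1 A through a 1 mm element, observed 10 cm away, perpendicular geometry:
dB = biot_savart_dB(current=1.0, dl=1e-3, R=0.1, angle_rad=math.pi / 2)
# Doubling the distance reduces the field element by a factor of four.
```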
According to this spatial-field formula, the Biot–Savart law shows that the flux density produced at a point in space by any current element is inversely proportional to the square of the distance between them and proportional to the cross product involving the angle between the point and the current-element direction.
When one radiator transmits a signal and another receives it, then by the integral form of Faraday's law in Maxwell's equations, at a given frequency the strength (voltage amplitude) of the received signal is determined by the magnetic flux density the transmitting radiator produces through the receiving radiator's coil, and the two are linearly related. Moreover, because the antenna shape and coil turns of our radiators are fixed, their contribution to the coil's total flux (obtained by integrating the Biot–Savart law) is fixed, so the flux density depends only on the distance and included angle between the two radiators.
Taking radiator number 2H, which tracks the thigh, as an example: when 2H (a class II radiator) transmits a signal, it is received by 2G and 2F (class I radiators).
Because of the limits of human joint and muscle motion, the range over which 2H can move relative to 2G and 2F is limited: 2H can move only within 4 to 8 cm of 2G (through leg lifting and lateral eversion) and within an included angle of roughly 0 to 130 degrees; similarly, 2H can move only within 6 to 10 cm of 2F and within an included angle of roughly 0 to 130 degrees.
Therefore, within this range, when 2H transmits and 2G and 2F receive, the measured signal strengths Vout2g and Vout2f correspond to the distance r and included angle θ between 2H and each receiver:

Vout2g = f(r1, θ1), with 4 < r1 < 8 and 0 < θ1 < 130;

Vout2f = f(r2, θ2), with 6 < r2 < 10 and 0 < θ2 < 130;
where r1 is the distance between 2G and 2H and θ1 the included angle between them, and r2 is the distance between 2F and 2H and θ2 the included angle between them. The function f(r, θ) is calculated as follows:
$$V_{out} = f(r,\theta) = A\,\frac{\mu N_1 N_2}{4\pi} \oint_{c_1}\oint_{c_2} \frac{\cos\theta}{R}\, dl_1\, dl_2$$

where A is a voltage conversion coefficient, a constant determined by the radiator's circuit structure; μ is the permeability of air; N1 and N2 are the numbers of coil turns of the transmitting and receiving radiators; c1 and c2 are the integration loops of the transmit and receive coils; θ is the included angle between the transmit and receive coils; and R is the distance between the transmit and receive coil current elements.
This formula gives the receiving radiator's voltage for a known position and included angle. In use, the receiving radiator's voltage is measured; a position/angle-to-voltage matrix is then computed from the formula under the constraints on r and θ, and a Newton optimization (other mature algorithms, such as the Runge–Kutta method, solve similar problems) finds the matrix entry with minimum root-mean-square error against the measured Vout2g and Vout2f. That entry yields the r1, θ1, r2, and θ2 of the current 2H, and coordinate conversion then gives 2H's coordinates relative to the origin.
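The matrix-lookup search described here can be sketched as a brute-force grid search; the forward model `f_model` is a hypothetical stand-in for the coil-coupling formula, chosen only so the search has something to invert, and the grids follow the stated joint constraints.

```python
import math

# Illustrative sketch of the lookup described above: build a position/angle ->
# voltage table from a forward model f(r, theta) under the joint constraints,
# then pick the (r1, t1, r2, t2) grid point whose predicted voltages have the
# smallest RMS error against the measured Vout2g and Vout2f.

def f_model(r, theta_deg):
    """Assumed stand-in coupling model: falls off with r^2 and the angle."""
    return math.cos(math.radians(theta_deg) / 2) / r**2

def locate_2h(v2g, v2f, r1s, t1s, r2s, t2s):
    best, best_err = None, float("inf")
    for r1 in r1s:
        for t1 in t1s:
            for r2 in r2s:
                for t2 in t2s:
                    e1 = f_model(r1, t1) - v2g
                    e2 = f_model(r2, t2) - v2f
                    rms = math.sqrt((e1 * e1 + e2 * e2) / 2)
                    if rms < best_err:
                        best, best_err = (r1, t1, r2, t2), rms
    return best

# Constraint grids: 4 < r1 < 8, 6 < r2 < 10, 0 < theta < 130 (coarse steps).
r1s, r2s, ts = [5, 6, 7], [7, 8, 9], [30, 60, 90, 120]
truth = (6, 60, 8, 90)
v2g, v2f = f_model(6, 60), f_model(8, 90)   # simulated measured voltages
assert locate_2h(v2g, v2f, r1s, ts, r2s, ts) == truth
```

A Newton-type optimizer would replace the exhaustive loop once a differentiable forward model is available; the grid search keeps the sketch self-contained.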
When computational efficiency is insufficient, the number of receiving radiators used to locate a single measured radiator can be increased.
Other radiators are located similarly; in each case the constraints on r and θ are set by the range of motion of the joint at that position, obtainable from motion physiology. By determining the relative positions of all radiators, the processor obtains the global absolute coordinates of every marked point (each radiator's location) relative to a fixed point on the back (the coordinate origin). In actual use, the user starts from a standard pose, the system calibrates against it, and the user can then move freely without recalibrating the system.
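The coordinate-transfer idea (chaining each radiator's distance-and-angle offset from its parent radiator back to the origin) can be sketched in two dimensions; the segment lengths and angles are invented example values, and real use would be three-dimensional.

```python
import math

# Hypothetical 2-D sketch of the "coordinate transfer" step: each class II
# radiator is located by a distance and angle relative to its parent, and
# chaining these polar offsets from the back origin yields every radiator's
# absolute coordinates.

def chain_to_absolute(origin, segments):
    """segments: list of (distance, absolute_angle_deg) from parent to child."""
    x, y = origin
    coords = []
    for r, theta_deg in segments:
        t = math.radians(theta_deg)
        x, y = x + r * math.cos(t), y + r * math.sin(t)
        coords.append((x, y))
    return coords

# Back origin -> hip radiator (10 units straight down) -> thigh radiator
# (6 units, 30 degrees off vertical):
pts = chain_to_absolute((0.0, 0.0), [(10.0, -90.0), (6.0, -60.0)])
```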
Each electromagnetic radiator is a miniature position sensor comprising, stacked from top to bottom, a sealing layer, a printed-circuit-board layer, a supporting buffer layer, and an adhesive layer; the printed-circuit-board layer is connected to a coupling cable. The board carries the radiator circuit, which comprises an access port, a ground port, a capacitor, a coil, and an oscillator. The access port connects to the cable to receive signals; the ground port connects to ground; both ports join the coupling cable to connect with external equipment. The capacitor sits between the access port and the ground port. The coil, in parallel with the capacitor, radiates the relatively low-frequency part of the incoming signal to form a near-field distribution; the oscillator, connected to the access port, radiates the relatively high-frequency part to form a near-field distribution.
With this circuit structure, the sensor's position and angle can be solved inversely from changes in the measured electromagnetic field, giving the absolute position coordinates and included angle relative to the signal source. Because the electromagnetic frequency used is far below that of visible or infrared light, diffraction is excellent, and occlusion by ordinary objects, including metal, within the sensor's working range is almost negligible. Electromagnetic communication between the sensors at two joints yields their relative position and velocity, and hence the trajectory of the joint; from the pairwise position and velocity relations of a group of sensors, the absolute position of every sensor relative to the coordinate reference point can be solved.
The amplifier circuit is a hybrid integrated circuit built from Infineon analog BJT transistors; the data acquisition card uses a low-cost, high-precision TI A/D chip; filtering is done in software by a digital filter built on processor resources; the processing circuit uses an ARM7 or ARM9 series chip or a low-cost Altera FPGA; and the transmit waveform is generated by the processor chip together with a peripheral operational amplifier and a crystal oscillator circuit.
As shown in fig. 3, the communication system is implemented as follows:
1. The user first puts on the motion capture clothing, then selects a stranger or acquaintance of interest through the software interface and chooses the virtual scene in which the conversation will take place. This is done through a conventional computer interface, operating much like initiating a QQ video call; both programs then establish a TCP/IP connection over the Internet and prepare for data transmission.
2. After the other party agrees to the VR communication invitation, each user clicks confirm and calibrates the motion capture garment he or she is wearing.
At this point the computers of both parties run the same virtual-scene VR subprogram. Using the established connection, the virtual scene programs on the two computers are initialized in the same steps: the animated scene is rendered, the animated-character coordinate system is set, the characters' appearance positions are placed at adjacent virtual-space locations, and each character's appearance is set according to the users' earlier selections. The appearance can be a 3D model of the user's real face (produced, for example, with a Sony smartphone app) or an animated figure chosen by the user from the program's image library.
After the user presses the calibration key on the motion capture garment and strikes a standard pose, the 20 to 50 radiators on the garment (the number varies with the version) communicate and compute one by one to determine the position of every radiator relative to a reference point on the back. If the error is too large, the controller on the garment prompts the user through the computer program to correct it (such errors arise because each time the garment is put on, the radiators shift from their expected positions on the body). Once calibration is complete, the user does not need to recalibrate for the rest of the session as long as the garment is not taken off. The computer then reads the user's motion through the game controller and updates the animated character's motion in the VR program in real time, fully synchronized with the user's motion.
The game controller reads the user's motion as follows. The game controller communicates with the garment controller; once calibration is complete, the garment controller sends 2500 to 6000 frames of radiator position information per second (depending on configuration) to the game controller according to an enterprise-standard communication protocol. The update rate differs for radiators at different positions on the body: for example, because the elbow joint has a large range of motion, its radiator's position information is updated at 100 frames per second, while the waist, which generally moves less, is updated at 30 frames per second. In addition, during motion the update rate of each radiator is set dynamically: for a radiator that is currently moving frequently, the garment controller raises its update rate, and vice versa. After the game controller reads in the radiator position information, it passes it to the VR scene program on the computer over a USB 3.0 or HDMI interface, and the program refreshes the animated character's motion from it. Given the radiator refresh rates and the 3D computation speed of existing hardware, the character's motion refresh rate can exceed 60 frames per second (each refresh covers the motion of all body positions at once, including the combined effect of transmission-system delay and 3D computation delay), and total system latency can be kept below 7 ms. Since 7 ms is the picture-latency level achievable with a conventional game controller, and the motion capture garment's latency is comparable to a game controller's, the system latency matches existing controller technology and the user subjectively experiences smooth gameplay.
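The dynamic per-radiator update rate described above can be sketched as a simple interpolation between the 30 frames/s floor (slow parts such as the waist) and the 100 frames/s ceiling (busy parts such as the elbow). The linear mapping from a normalized activity measure is an assumption; the patent only states that the rate is raised for frequently moving radiators and lowered otherwise:

```python
def update_rate(recent_motion: float,
                min_rate: int = 30, max_rate: int = 100) -> int:
    """Map a radiator's recent motion level (0.0 = still, 1.0 = very
    active) to a per-radiator position update rate in frames/s,
    clamped to the [min_rate, max_rate] band from the description."""
    m = max(0.0, min(1.0, recent_motion))
    return round(min_rate + m * (max_rate - min_rate))

# A still waist radiator gets the floor rate, a busy elbow the ceiling,
# and intermediate activity falls in between.
rates = [update_rate(0.0), update_rate(0.5), update_rate(1.0)]
```

With 20 to 50 radiators each running between 30 and 100 frames/s, the aggregate stream lands in the 2500 to 6000 frames/s range the description quotes for the garment-to-controller link.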
Both users put on VR display helmets and enter the virtual scene, communicating verbally through microphones and through body language via the motion capture garments. With both users' fields of view fully immersed in the chosen virtual environment, a complete face-to-face communication experience is achieved.
The present invention provides a way for participants to communicate in full immersion, as if they were in the same room. Beyond games and social contact, the system and its working method also readily support company teleconferencing and remote office work.
The above-mentioned embodiments are intended to illustrate the objects, technical solutions and advantages of the present invention in further detail, and it should be understood that the above-mentioned embodiments are merely exemplary embodiments of the present invention, and are not intended to limit the scope of the present invention, and any modifications, equivalent substitutions, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (10)

1. A virtual social system based on a motion capture garment, comprising a wearable motion capture garment, a display device and a main control device, wherein the display device is in communication connection with the main control device; characterized in that the motion capture garment comprises a human body motion capture system, the human body motion capture system comprising a plurality of type I electromagnetic field radiators arranged on the garment, a plurality of type II electromagnetic field radiators arranged on the garment, and a processor; the processor measures the change in voltage amplitude of the signal received by a type I electromagnetic field radiator from the electromagnetic field radiator at the origin of the reference coordinate system, and calculates the coordinate information of the type I electromagnetic field radiator relative to the origin; the processor obtains the motion trajectory of a type II electromagnetic field radiator based on the distance and angle relation between the type II electromagnetic field radiator and the type I electromagnetic field radiators, and obtains the coordinate information of the type II electromagnetic field radiator relative to the coordinate origin through coordinate-transfer calculation based on that trajectory; the processor obtains real-time human body motion information of the user based on the real-time coordinate information of the type II and type I electromagnetic field radiators relative to the coordinate origin, and transmits the obtained real-time motion information to the main control device; and the main control device comprises a controller, which projects the obtained real-time motion of the user into a preset virtual scene, realizing real-time communication between the user and other users in the virtual scene.
2. The virtual social system as claimed in claim 1, wherein each electromagnetic field radiator is used for both transmitting and receiving: the radiator at the position being measured radiates the transmission signal generated by the transmission-waveform generating circuit into its surroundings to form a local field distribution; the radiator at the measuring position receives the signal, which is amplified by the amplifying circuit, collected by the data acquisition card and converted into a digital signal, processed and digitally filtered by the processing circuit, and then passed to the processor for calculation.
3. The virtual social system as claimed in claim 1, wherein the magnitude of the change in voltage amplitude of the signal received by the type I electromagnetic field radiator from the electromagnetic field radiator at the origin of the reference coordinate system is measured, and the coordinate information of the type I electromagnetic field radiator relative to the origin is calculated based on the correspondence between voltage amplitude and coordinates.
4. The virtual social system of claim 1 wherein said master device is a computer and said display device is a VR/AR head mounted display.
5. An operating method applied to the virtual social system according to any one of claims 1 to 4, characterized in that: the method comprises the following steps:
1) the user first puts on the motion capture garment, selects a communication partner and a virtual scene through a software interface on the main control device, and initiates a communication connection invitation;
2) the communication connection is successfully established, a program is initialized, the motion capture clothing worn by the user starts to be calibrated, and after the calibration is finished, the real-time motion information of the user is extracted through the motion capture clothing;
3) reading user real-time action information extracted from the wearable action capture clothes in real time, and updating animation image actions in the virtual scene to be completely synchronous with the actions of the user;
4) the user wears the display equipment, enters a virtual scene, and starts real-time and comprehensive interaction of actions and languages among the users.
6. The working method of the virtual social system according to claim 5, wherein the calibration of the wearable motion capture garment is implemented as follows: the 20 to 50 electromagnetic field radiators arranged on the garment communicate and compute one by one to determine the actual positions of all radiators relative to a back reference point; the actual positions are compared with the expected arrangement positions of the radiators, and if the error exceeds a preset threshold, a controller arranged on the garment sends a signal to the main control device to remind the user to correct it; after calibration is complete, the user does not need to recalibrate for the whole communication process as long as the motion capture garment is not taken off.
7. The working method of the virtual social system according to claim 5, wherein during motion, the position-information update rates of the radiators arranged at different moving parts of the human body differ and are set dynamically according to how frequently each part is currently moving.
8. The virtual social system working method according to claim 7, wherein the radiator provided corresponding to the frequently moving part has a position information update rate set to 100 frames/sec.
9. The virtual social system working method according to claim 7, wherein the radiator provided corresponding to the infrequently active part has a position information update rate set to 30 frames/sec.
10. The method of claim 5, wherein the user selects the animated character through an animation model provided by a third-party smart device or software.
CN201711259238.6A 2017-12-04 2017-12-04 Virtual social contact system based on motion capture clothing and working method thereof Active CN107961531B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711259238.6A CN107961531B (en) 2017-12-04 2017-12-04 Virtual social contact system based on motion capture clothing and working method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711259238.6A CN107961531B (en) 2017-12-04 2017-12-04 Virtual social contact system based on motion capture clothing and working method thereof

Publications (2)

Publication Number Publication Date
CN107961531A CN107961531A (en) 2018-04-27
CN107961531B true CN107961531B (en) 2020-08-11

Family

ID=61999361

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711259238.6A Active CN107961531B (en) 2017-12-04 2017-12-04 Virtual social contact system based on motion capture clothing and working method thereof

Country Status (1)

Country Link
CN (1) CN107961531B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109157827A (en) * 2018-10-18 2019-01-08 看见故事(苏州)影视文化发展有限公司 A kind of motion assistant system of motion capture technology
CN110728739B (en) * 2019-09-30 2023-04-14 杭州师范大学 Virtual human control and interaction method based on video stream
CN115088014A (en) * 2020-12-14 2022-09-20 郑州大学综合设计研究院有限公司 Method and system for real social contact by using virtual scene
CN117180720B (en) * 2023-11-07 2024-01-05 成都孚谦科技有限公司 Virtual action game interaction system and method based on somatosensory tracker technology

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104503582A (en) * 2014-12-29 2015-04-08 北京智谷睿拓技术服务有限公司 Wearable-equipment-based interaction method and interaction device, and wearable equipment
CN204347840U (en) * 2014-12-04 2015-05-20 安徽工程大学 A kind of wearable human body attitude pen recorder
CN106371584A (en) * 2016-08-19 2017-02-01 联想(北京)有限公司 Information processing method, electronic device and information processing system
CN107037886A (en) * 2017-05-27 2017-08-11 成都索微通讯技术有限公司 A kind of system and its method of work extracted for human action

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9588582B2 (en) * 2013-09-17 2017-03-07 Medibotics Llc Motion recognition clothing (TM) with two different sets of tubes spanning a body joint

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN204347840U (en) * 2014-12-04 2015-05-20 安徽工程大学 A kind of wearable human body attitude pen recorder
CN104503582A (en) * 2014-12-29 2015-04-08 北京智谷睿拓技术服务有限公司 Wearable-equipment-based interaction method and interaction device, and wearable equipment
CN106371584A (en) * 2016-08-19 2017-02-01 联想(北京)有限公司 Information processing method, electronic device and information processing system
CN107037886A (en) * 2017-05-27 2017-08-11 成都索微通讯技术有限公司 A kind of system and its method of work extracted for human action

Also Published As

Publication number Publication date
CN107961531A (en) 2018-04-27

Similar Documents

Publication Publication Date Title
CN107961531B (en) Virtual social contact system based on motion capture clothing and working method thereof
CN107820593B (en) Virtual reality interaction method, device and system
CN105608746B (en) A method of reality is subjected to Virtual Realization
KR102065687B1 (en) Wireless wrist computing and control device and method for 3d imaging, mapping, networking and interfacing
CN109325450A (en) Image processing method, device, storage medium and electronic equipment
WO2023071964A1 (en) Data processing method and apparatus, and electronic device and computer-readable storage medium
CN112513711A (en) Method and system for resolving hemispherical ambiguities using position vectors
CN105824416B (en) A method of by virtual reality technology in conjunction with cloud service technology
CN108700939A (en) System and method for augmented reality
CN107835367A (en) A kind of image processing method, device and mobile terminal
CN102945564A (en) True 3D modeling system and method based on video perspective type augmented reality
CN110866977B (en) Augmented reality processing method, device, system, storage medium and electronic equipment
CN112198959A (en) Virtual reality interaction method, device and system
CN108564643A (en) Performance based on UE engines captures system
KR20000017755A (en) Method for Acquisition of Data About Motion
CN106648088A (en) Inertial motion capture pose transient calibration method and inertial motion capture system
CN113129450A (en) Virtual fitting method, device, electronic equipment and medium
CN105739703A (en) Virtual reality somatosensory interaction system and method for wireless head-mounted display equipment
CN106843507A (en) A kind of method and system of virtual reality multi-person interactive
CN107015655A (en) Museum virtual scene AR experiences eyeglass device and its implementation
CN204406327U (en) Based on the limb rehabilitating analog simulation training system of said three-dimensional body sense video camera
CN106952334A (en) The creation method of the net model of human body and three-dimensional fitting system
CN109395375A (en) A kind of 3d gaming method of interface interacted based on augmented reality and movement
CN114022532A (en) Height measuring method, height measuring device and terminal
CN206584210U (en) A kind of many people&#39;s three-dimensional space position harvesters

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant