CN110221691B - Immersive virtual experience method, system and device - Google Patents

Immersive virtual experience method, system and device

Info

Publication number: CN110221691B (application CN201910395522.9A)
Authority: CN (China)
Prior art keywords: motion, real, virtual, scene, time
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN110221691A
Inventor: 唐亮 (Tang Liang)
Current and original assignee: Shenzhen Diantong Information Technology Co., Ltd.
Application filed by: Shenzhen Diantong Information Technology Co., Ltd.
Priority and filing date: 2019-05-13 (CN201910395522.9A)
Publication of CN110221691A: 2019-09-10
Publication of CN110221691B (grant): 2022-07-15

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from the processing unit to the output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 - Manipulating 3D models or images for computer graphics
    • G06T 19/006 - Mixed reality

Abstract

The invention belongs to the technical field of virtual experience. The method comprises the following steps: building a virtual model by scaling the real-world scene, its static objects and all moving objects in equal proportion; acquiring fixed information for all static objects and dynamic information about the motion of each moving object; combining this information to compute motion data for each moving object relative to the static objects; substituting the motion data into the corresponding virtual model to obtain a real-time motion result; and sending the real-time motion result to the user's display device so that video or virtual reality is played back synchronously with the live scene, in time order. In the immersive virtual experience method provided by the invention, off-site signal-capture equipment works together with sensor devices to capture the motion of each moving object; the resulting motion data are imported into a pre-built virtual model to produce a synchronized virtual scene, which is sent to the user's display device for playback. The method offers a strong sense of immersion and keeps the virtual reality synchronized with the real-world scene.

Description

Immersive virtual experience method, system and device
Technical Field
The invention belongs to the technical field of virtual experience, and particularly relates to an immersive virtual experience method, system and device.
Background
Generally speaking, the best viewing experience of a sports competition is on site: video alone cannot convey every angle of the athletes or the atmosphere of the crowd, so watching video offers little sense of immersion and a poor viewing effect. The prior art therefore erects multiple cameras and processes the video images they capture into a virtual reality (VR/AR, etc.) that is transmitted to the user's device.
However, in that approach the cameras are fixed outside the real scene, and the virtual environment obtained after processing is merely a stereoscopic version of video watching; it cannot deliver the deeper immersion of standing inside the scene or even running alongside a player. Moreover, turning captured video images into virtual reality demands enormous computation with insufficient precision, making playback synchronized with the live scene difficult to achieve.
Therefore, it is necessary to provide a corresponding technical solution to solve the above technical problems.
Disclosure of Invention
The invention aims to provide an immersive virtual experience method that solves the technical problems of existing approaches to watching a real-world event such as a match through virtual reality: low immersion and an enormous computational load, which make synchronous playback difficult and leave precision insufficient.
In a first implementation, an immersive virtual experience method includes the following steps: building a virtual model by scaling the real-world scene, its static objects and all moving objects in equal proportion; acquiring fixed information for all static objects in the real-world scene, together with dynamic information captured by signal-capture devices and sensed by sensor devices attached to each moving object, wherein the real-world scene is provided with a calibrator matched to the signal-capture devices, and a number of sensor devices and/or infrared sensing points and/or pulse sensing points and/or electromagnetic sensing points and/or micro-accelerometers and micro-gyroscopes are fixed on each moving object; combining the fixed and dynamic information to compute motion data for each moving object relative to the static objects in the scene, applying a predictive-compensation operation to the motion path of each skeleton in a moving object, the motion paths comprising biological skeleton motion paths in biological form and object motion paths in non-biological form; substituting the motion data into the corresponding virtual model to obtain a real-time motion result, which is stored in playable form; and sending the real-time motion result to the user's display device so that video or virtual reality is played back synchronously with the real-world scene in time order.
With reference to the first implementable manner, in a second implementable manner, the sensor device includes at least one of an infrared sensor, a pulse sensor, an electromagnetic sensor, and an inertial navigation sensor.
With reference to the first implementable manner, in a third implementable manner, the signal-capture device comprises at least one of the following: a ring light emitter, a pulse emitter, an electromagnetic emitter, a mechanical sensor and an image-capture device; the signal-capture devices are fixed close to, but outside, the real-world scene.
With reference to the first implementable manner, in a fourth implementable manner, the step of substituting the motion data into the corresponding virtual model to obtain a real-time motion result of the virtual model includes the following sub-steps: after a stereoscopic coordinate in the virtual scene is determined as the visual observation point, that point is used as the basic primitive point; sequence frames are generated from the virtually simulated real-time motion result by video techniques, losses in the real-time motion result caused by discontinuous sampling of part of the data are compensated, and motion-compensated sequence frames are inserted; the sequence frames are then encoded by video compression into streaming media or another format the terminal device can play; the compression can be performed by a central server or, as real-time edge computation, by the terminal playback device.
With reference to the first implementable manner, in a fifth implementable manner, after the step of sending the real-time motion result to the user's display device for synchronized video or virtual-reality playback, the method further includes: acquiring the user's request for a viewing fixed point and angle on the display device, and sending the user the virtual reality corresponding to that viewing fixed point and angle; if the coordinates of the visual observation point are adjusted, all data for the basic primitive point are recalculated from the new viewing fixed point and angle.
With reference to the first implementable manner, in a sixth implementable manner, the calibrator includes an L-shaped calibrator and/or a T-shaped calibrator; the L-shaped calibrator is used for static calibration; the T-shaped calibrator is used for dynamic calibration.
With reference to the first implementable manner, in a seventh implementable manner, the method further includes: acquiring real-time expression information of the moving object and adding it synchronously to the corresponding virtual model.
With reference to the first implementable manner, in an eighth implementable manner, the method further includes: acquiring sound information captured by sound-source capture devices at different positions in the real-world scene; and playing different content and volume from the acquired sound information synchronously into the virtual reality on the user's display device, according to the distance between the viewing position and each sound-source capture device.
It is another object of the invention to provide an immersive virtual experience system.
In a first particular embodiment, an immersive virtual experience system includes: a virtual modeling module for building a virtual model by scaling the real-world scene, its static objects and all moving objects in equal proportion; a dynamic capture module for acquiring fixed information for all static objects in the scene and dynamic information captured by the signal-capture devices and sensed by the sensor devices on each moving object; an operation module for combining the fixed and dynamic information to compute motion data for each moving object relative to the static objects in the scene; a data combination module for substituting the motion data into the corresponding virtual model to obtain its real-time motion result; and a sending module for sending the real-time motion result to the user's display device for video or virtual-reality playback synchronized with the real-world scene in time order.
With reference to the first specific embodiment, in a second specific embodiment, the system further includes a viewing-angle conversion module for acquiring the user's request for a viewing fixed point and angle on the display device and sending the user the corresponding virtual reality.
It is an object of another aspect of the invention to provide a computer apparatus.
A computer apparatus comprising a processor and a memory, said processor being configured to execute a computer program stored in said memory to implement the immersive virtual experience method as described in any of the first to eighth implementable modes.
In the immersive virtual experience method provided by the invention, off-site signal-capture equipment works together with sensor devices to capture the motion of each moving object: the motion result is computed from the displacement changes of the infrared sensing points relative to the calibrator and from the readings of the sensor devices at the corresponding body parts, then imported into a pre-built virtual model to produce a synchronized virtual scene, which is sent to the user's display device for playback. In a preferred implementation the user can freely set the viewing fixed point and angle, or even lock onto a particular moving object. The viewing experience obtained this way is highly immersive; the whole pipeline of capturing, checking, synthesizing and displaying the three-dimensional motion-capture data in real time takes no more than about a second, which preserves the sense of presence, while the load on, and fault tolerance required of, the system's reconstruction remain small, so the virtual reality stays synchronized with the live scene.
Drawings
To illustrate the embodiments of the present invention and the prior-art solutions more clearly, the drawings used in their description are briefly introduced below. The drawings described here show only some embodiments of the invention; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a flow chart of a method provided by the present invention;
FIG. 2 is a schematic diagram of the placement of a signal capture device according to the present invention;
FIG. 3 is a schematic view of another arrangement of the signal capture device of the present invention;
FIG. 4 is a flow chart of another implementation of the present invention;
FIG. 5 is a flow chart of another implementation of the present invention;
FIG. 6 is a flow chart of another implementation of the present invention;
FIG. 7 is a system architecture diagram of the immersive virtual experience system provided in the present invention;
FIG. 8 is a system architecture diagram of another immersive virtual experience system provided in the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Referring to fig. 1 to 6, an aspect of the embodiments of the present disclosure provides an immersive virtual experience method to solve the technical problems that conventional virtual reality offers little immersion and demands heavy computation when a real-world scene such as a match is watched, making synchronous playback difficult and precision insufficient.
An immersive virtual experience method, see fig. 1, in a first implementation, includes the following steps S101, S102, S103, S104, S105.
Step S101: and scaling the real scene, the static object and all the moving objects in equal proportion to establish a virtual model.
It should be noted that the static objects in the scene include, for example, goal posts or goals, floor lines, flagpoles, and so on; the moving objects include, for example, players, referees, balls, and the flags in the referees' hands.
The static objects in the real-world scene are placed, scaled in equal proportion, at the corresponding positions of the virtual scene in the virtual model; the position of each motion-capture device on a moving object is likewise scaled to the corresponding position on the virtual model. For example, the motion state of the capture device on a player's left knee matches the motion state of the left knee of that player's virtual model inside the virtual scene. In addition, when a moving object is detected leaving or entering the real scene, such as a player being substituted on or off or a ball flying out of bounds, the corresponding virtual model is moved accordingly in the virtual scene.
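To make the equal-proportion mapping concrete, a minimal Python sketch follows; the scale factor, units and function names are illustrative assumptions, since the patent does not prescribe an implementation:

# Minimal sketch of the equal-proportion mapping described above.
# The scale factor and anchor point are assumptions, not values from the patent.

SCALE = 0.01  # e.g. 1 virtual unit per 100 real-world centimetres (assumed)

def real_to_virtual(p_real, origin_real=(0.0, 0.0, 0.0)):
    """Map a real-world point (x, y, z) into virtual-scene coordinates
    by uniform scaling about a fixed anchor (the scene origin)."""
    return tuple(SCALE * (c - o) for c, o in zip(p_real, origin_real))

# A marker on a player's left knee, 1250 cm from the scene origin:
print(real_to_virtual((1250.0, 0.0, 45.0)))  # -> (12.5, 0.0, 0.45)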
Moreover, because the virtual scene and the virtual models are modeled in advance, the processor only has to retrieve the corresponding data when the live scene moves. With sufficient computing power, the three-dimensional motion-capture data are captured, checked, synthesized and displayed in real time, the whole process taking no more than about a second, which guarantees the telepresence of the overall solution.
Step S102: acquiring fixed information of all static objects in a scene on the spot and dynamic information of capturing by a signal capturing device and sensing the motion of a moving object by a sensor device; the field scene is provided with a calibrator matched with the signal capturing equipment; the moving object is fixed with a plurality of sensor devices, and/or infrared induction points, and/or pulse induction points, and/or electromagnetic induction points, and/or a micro-accelerometer and a micro-gyroscope.
It should be noted that the signal-capture devices can capture the motion of the infrared sensing points while numbering and marking each of them, and can capture and identify the calibrator as well as the motion trajectory of every infrared sensing point.
On the one hand, a real-world scene is generally closed, for example a football pitch, tennis court or basketball court, and entry or exit by outside personnel is heavily restricted. One or more calibrators can therefore be fixed in the scene.
Furthermore, the calibrator comprises an L-shaped calibrator and/or a T-shaped calibrator; the L-shaped calibrator is used for static calibration; the T-shaped calibrator is used for dynamic calibration.
The calibrator is fixed in the real-world scene where the signal-capture devices can capture it, and carries sensing sources at different gradients. When the signal-capture devices capture the different sources, subsequent processing can establish a three-dimensional coordinate system with the calibrator as reference, and the motion trajectory of each moving object is computed in that coordinate system. The principle is similar to existing infrared imaging technology.
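As an illustration of how a calibrator-based reference frame might be established, the following Python sketch builds an orthonormal coordinate system from three captured calibrator points; the point layout and function names are assumptions, not the patent's own algorithm:

import numpy as np

def frame_from_calibrator(origin, px, py):
    """Build an orthonormal 3-D reference frame from three captured
    calibrator points: the corner, a point along one arm (px), and a
    point along the other arm (py). Returns origin and a 3x3 rotation."""
    x = px - origin
    x /= np.linalg.norm(x)
    y = py - origin
    y -= x * np.dot(y, x)          # Gram-Schmidt: remove the x component
    y /= np.linalg.norm(y)
    z = np.cross(x, y)             # right-handed third axis
    return origin, np.column_stack([x, y, z])

def to_scene_coords(p, origin, R):
    """Express a captured world point in the calibrator-defined frame."""
    return R.T @ (np.asarray(p, float) - origin)

o, R = frame_from_calibrator(np.array([0., 0., 0.]),
                             np.array([1., 0., 0.]),
                             np.array([0., 1., 0.]))
print(to_scene_coords([2., 3., 1.], o, R))  # -> [2. 3. 1.]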
Referring to fig. 2 and 3, arrangements of 10 and of 16 signal-capture devices are shown. They cover the movement range of the moving objects in the scene as fully as possible, so that the same calibrator or infrared sensing point is captured by several devices, and the same device captures the trajectories of several calibrators or sensing points. This cross-capture makes the motion trajectories easier to correct in subsequent data processing and yields higher precision; a sketch of one such correction follows. Note that 10 and 16 are only two examples; the number of devices is not limited.
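One elementary way overlapping captures of the same numbered marker can be used for correction is to consolidate them, for instance by averaging, as in this sketch; averaging is an illustrative choice, since the patent does not specify the correction method:

from collections import defaultdict

def consolidate(observations):
    """observations: list of (marker_id, (x, y, z)) tuples, possibly
    reporting the same marker from several capture devices. Returns a
    per-marker average position, reducing per-device error."""
    acc = defaultdict(lambda: [0.0, 0.0, 0.0, 0])
    for marker_id, (x, y, z) in observations:
        a = acc[marker_id]
        a[0] += x; a[1] += y; a[2] += z; a[3] += 1
    return {m: (a[0] / a[3], a[1] / a[3], a[2] / a[3]) for m, a in acc.items()}

obs = [(7, (1.00, 2.00, 0.50)), (7, (1.02, 1.98, 0.52)), (3, (5.0, 5.0, 0.0))]
print(consolidate(obs))  # marker 7 averaged across two devices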
On the other hand, sensor devices or infrared sensing points are fixed on each moving object. To increase capture precision, lightweight infrared sensing points are attached to small objects in particular (footballs, basketballs and the like), which are then tracked by infrared-optical motion capture. On a large moving object (for example a human body), several sensor devices are fixed, chiefly on the main skeleton, and numbered accordingly; at least 17 body parts are tracked, such as the head, shoulders, upper arms, forearms, hands, chest, tailbone, thighs, lower legs and ankles. Each sensor device senses its current motion-trajectory information and sends it to an external computer device by wireless transmission.
Meanwhile, the sensor devices and signal-capture devices report their current information at fixed intervals; specifically, motion-trajectory information is captured every 0.1 or 0.2 seconds and sent to the processor.
Capturing motion information once every 0.1 or 0.2 seconds keeps the processor's computational load small, lowering the hardware requirements and the cost, while to the human eye an update every 0.1 or 0.2 seconds still appears continuous, with no frame-break stutter. Furthermore, if the signal from some capture device is missing at one transmission, which would otherwise cause an abnormal action in the virtual reality, the next computation 0.1 or 0.2 seconds later corrects it in the user's view; the occasional anomalies that inevitably occur are continuously refreshed away, so the user's real-time viewing experience remains good.
In another aspect, the sensor device includes at least one of an infrared sensor, a pulse sensor, an electromagnetic sensor, and an inertial navigation sensor.
Inertial navigation obtains an object's instantaneous velocity and instantaneous position by measuring its acceleration, motion angle and orientation and integrating them. An inertial navigation sensor comprises a data acquisition unit, which integrates an accelerometer, a gyroscope, a magnetometer and the like, a data transmission unit, and a data processing unit. The accelerometer measures the magnitude and direction of the sensor's acceleration along a given axis, but determines the sensor's attitude relative to the ground only with low precision. The gyroscope compensates for this: it measures the angle between the vertical axis of its internal rotor and the sensor within a three-dimensional coordinate system and computes the angular velocity, from which the motion state of the moving object in three-dimensional space is judged; because the rotor's vertical axis stays perpendicular to the ground, attitude precision relative to the ground is assured, although the gyroscope cannot resolve attitude among the four compass directions. The magnetometer makes up for that deficiency: acting as a small electronic compass, it measures the sensor's angle to the magnetic poles and determines the attitude in the four compass directions.
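The double integration underlying inertial navigation can be sketched as follows; the sampling rate and test data are illustrative, and a real implementation would also fold in the gyroscope and magnetometer attitude described above:

import numpy as np

def dead_reckon(accel_samples, dt, v0=None, p0=None):
    """Integrate world-frame acceleration samples (N x 3) once for
    instantaneous velocity and twice for instantaneous position."""
    v = np.zeros(3) if v0 is None else np.asarray(v0, float)
    p = np.zeros(3) if p0 is None else np.asarray(p0, float)
    track = []
    for a in accel_samples:
        v = v + np.asarray(a, float) * dt      # velocity from acceleration
        p = p + v * dt                         # position from velocity
        track.append(p.copy())
    return v, p, track

# Constant 1 m/s^2 along x for 1 s at 100 Hz (illustrative data):
v, p, _ = dead_reckon([[1.0, 0.0, 0.0]] * 100, dt=0.01)
print(v, p)   # velocity ~ [1, 0, 0] m/s, position ~ [0.5, 0, 0] m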
The data transmission unit in the inertial navigation sensor carries the acquired motion data to the data processing unit and is also the data junction for the (at least 17) data acquisition units, so it must be deployed near them. For comfort and wearability it should return data to the data processing unit over wireless links, reducing cabling and the burden on the wearer. Mainstream wireless communication technologies currently include ZigBee, Bluetooth, RFID and WiFi; the system's communication subsystem is designed according to the required data throughput.
In positioning precision and real-time performance, the inertial navigation sensors achieve a position-tracking accuracy of 5 mm, an angular-velocity range of 0°/s to 2000°/s, a yaw accuracy of 0.25°, pitch and roll accuracies of 0.1° each, and an angular resolution of 0.01°. They are light on the body, do not impede limb movement, and do not affect the athletes' performance.
Step S103: calculating by combining the fixed information and the dynamic information to obtain the motion data of the moving object relative to the static object in the scene on the spot; the method comprises the following steps of (1) adopting pre-judging compensation operation for a motion path of a bone in a motion object; the motion path comprises a biological skeleton motion path in a biological form and an object motion path in a non-biological form.
As can be appreciated, biological form generally refers to a human or other animal, such as a player or a racehorse in a race; non-biological form generally refers to the ball on the court, or the flag or whistle in a referee's hand.
The data processing module of the computer device enumerates the equipment over USB. Once enumeration succeeds, the motion-capture software runs and sends an "apply for acquisition-point list" packet down the downlink pipe. On receiving it, each sensor device reports its ID and capabilities, which are aggregated and returned to the computer device wirelessly. The software reads the "acquisition-point list and capabilities" through the driver's uplink pipe, updates the corresponding information, sends a "start acquisition" message, and the data transmission devices begin uploading motion data to the software.
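The start-up handshake can be pictured with the toy Python sketch below; the message names mirror the description, but the wire format, class names and capability strings are assumptions:

from dataclasses import dataclass

@dataclass
class SensorDevice:
    dev_id: int
    capability: str

    def on_message(self, msg):
        # Each device answers the downlink request with its ID and capability.
        if msg == "apply_for_acquisition_point_list":
            return {"id": self.dev_id, "capability": self.capability}

def start_acquisition(devices):
    """Downlink request, aggregated uplink reply, then 'start acquisition'."""
    replies = [d.on_message("apply_for_acquisition_point_list") for d in devices]
    point_list = {r["id"]: r["capability"] for r in replies}  # summary uplink
    # After 'start acquisition' is sent, every device uploads motion data.
    return point_list

sensors = [SensorDevice(i, "inertial, 6-axis") for i in range(17)]
print(len(start_acquisition(sensors)))  # -> 17 acquisition points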
In addition, the signal-capture devices capture the moving objects by infrared-optical motion capture, yielding infrared capture data, while the sensor devices yield sensed motion data for the trajectories of their own body parts. Edge computation that combines the infrared capture data with the sensed data makes the capture of each moving object more accurate.
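One simple form such edge computation could take is a complementary blend of the optical and inertial position estimates, sketched below; the weighting factor and the occlusion fallback are assumptions rather than details from the patent:

def fuse(optical_pos, inertial_pos, alpha=0.8):
    """Complementary blend of the two position estimates per axis.
    alpha weights the drift-free optical capture; (1 - alpha) keeps the
    smoother, higher-rate inertial estimate. alpha = 0.8 is assumed."""
    if optical_pos is None:          # optical marker occluded this frame
        return inertial_pos          # fall back to inertial dead reckoning
    return tuple(alpha * o + (1 - alpha) * i
                 for o, i in zip(optical_pos, inertial_pos))

print(fuse((1.00, 2.00, 0.50), (1.04, 1.98, 0.52)))
print(fuse(None, (1.04, 1.98, 0.52)))  # occlusion fallback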
Furthermore, the signal-capture device comprises at least one of the following: a ring light emitter, a pulse emitter, an electromagnetic emitter, a mechanical sensor and an image-capture device; the signal-capture devices are fixed close to, but outside, the real-world scene.
In the prior art the ring light emitter is an infrared-emitting device, especially a near-infrared emitter; the capture frame can reach 4096 x 3072 (at least 12 megapixels) with a full-frame capture frequency of 1-300 Hz. The ring light emitter radiates infrared electromagnetic waves, which are sensed by the infrared sensing points on the moving object; the image-acquisition equipment then captures images, from which the motion data of the sensing points are derived and passed to the computer device for graphics computation.
On the other hand, a predictive-compensation operation is applied to the skeleton trajectories sensed by the sensor devices: when their data undergo graphics computation, a prior-art predictive-compensation algorithm is used so that the result conforms better to the way a human body moves.
Furthermore, after a three-dimensional coordinate in the virtual scene is determined as the visual observation point, that point is used as the basic primitive point; sequence frames are generated from the virtually simulated real-time motion result by video techniques, losses caused by discontinuous sampling of part of the data are compensated, and motion-compensated sequence frames are inserted. The sequence frames are then encoded by video compression into streaming media or another format the terminal device can play; compression can be performed by a central server or, as real-time edge computation, by the terminal playback device.
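The compensation of discontinuously sampled data can be pictured with plain linear interpolation across a gap in the trajectory, as in this sketch; it assumes short gaps bounded by good samples on both sides, since the patent defers the concrete compensation algorithm to the prior art:

def fill_gaps(samples):
    """Given a list of per-frame positions where missed captures are
    None, insert linearly interpolated values so the generated sequence
    frames stay continuous."""
    out = list(samples)
    i = 0
    while i < len(out):
        if out[i] is None:
            j = i
            while j < len(out) and out[j] is None:
                j += 1
            a, b = out[i - 1], out[j]          # bracketing good samples
            for k in range(i, j):              # linear in-betweens
                t = (k - i + 1) / (j - i + 1)
                out[k] = tuple(ax + t * (bx - ax) for ax, bx in zip(a, b))
            i = j
        i += 1
    return out

print(fill_gaps([(0.0, 0.0), None, None, (3.0, 3.0)]))
# -> [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0), (3.0, 3.0)]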
Step S104: substitute the motion data into the corresponding virtual model to obtain the real-time motion result of the virtual model, and store the result in playable form.
The motion-data result computed with the predictive-compensation algorithm is substituted into the virtual model in one-to-one correspondence: the head's trajectory, for instance, drives the motion of the virtual model's head, yielding the model's real-time motion result as it changes over time. The real-time motion result is stored in the computer device, and the user may send a return-visit request so that the virtual model's motion jumps back to an earlier time node and is replayed in time-node order.
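The one-to-one substitution and the playable storage can be pictured as a joint dictionary plus a replay buffer keyed by time node, as in the sketch below; the class and joint names are illustrative assumptions:

class VirtualModel:
    """Toy stand-in for the pre-built model: joint poses keyed by the
    sensor numbering used on the real athlete (names assumed)."""
    def __init__(self):
        self.joints = {}               # joint name -> latest position
        self.history = {}              # time node -> full snapshot, for replay

    def apply(self, t, motion_data):
        """Substitute one interval's motion data, joint for joint."""
        for joint, pos in motion_data.items():
            self.joints[joint] = pos
        self.history[t] = dict(self.joints)    # store the playable result

    def replay_from(self, t0):
        """Return stored snapshots from an earlier time node onward."""
        return [(t, s) for t, s in sorted(self.history.items()) if t >= t0]

m = VirtualModel()
m.apply(0.0, {"head": (0, 0, 1.8), "left_knee": (0.2, 0, 0.5)})
m.apply(0.1, {"left_knee": (0.25, 0.05, 0.5)})
print(m.replay_from(0.0)[0][1]["head"])  # -> (0, 0, 1.8)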
Step S105: and sending the real-time motion result to a display device of a user to synchronously play the video or the virtual reality according to the time sequence.
Note that "display device" in this application covers both VR/AR devices and flat-screen players; generally, the immersive experience is better achieved with VR/AR devices.
Because the computation completes within about a second or less, the real-time motion of the virtual model is almost synchronous with the motion in the live scene, with a low delay rate. Receiving the virtual model's real-time motion through the display device, the user experiences a far more realistic match scene.
The computed real-time motion result of each interval is substituted into the virtual model one interval after another; the differences between the motion states of adjacent intervals constitute motion, and thanks to the persistence of vision of the human eye the virtual model appears to move correspondingly in the virtual scene. By continuously measuring the motion state of the moving objects in the real scene and substituting it into the pre-built virtual model, the model moves in synchrony. Through VR and similar devices, the viewing experience of a match in the live three-dimensional scene is obtained in real time without being on site, with extremely small delay, which is a better viewing experience than the prior art provides.
Further, referring to fig. 4, after the step of sending the real-time motion result to the user's display device for synchronized video or virtual-reality playback, the method further includes the following step:
step S106: acquiring a request of a user for viewing a fixed point and an angle of a display device, and sending a virtual reality corresponding to the viewing fixed point and the angle to the user; if the coordinates of the visual observation point are adjusted, all data of the basic primitive points based on the viewing fixed point and the angle are recalculated.
When the user watches through VR/AR that supports adjustable viewing fixed points, the whole virtual scene is already running; the user can tap the VR/AR picture to send the processor a request for a viewing fixed point and viewing angle, and freely choose any fixed point and angle within the virtual scene, which makes the VR/AR viewing experience more immersive.
For the computer device continuously processing data in the background, the real-time computation result of the whole virtual model at the current time node is already determined; once the user's request for a particular viewing fixed point and angle arrives, the result can be displayed through VR/AR accordingly without heavy extra computation. The user can freely change viewing position and angle within the virtual reality, or lock onto the movement of a particular moving object, giving a viewing experience very close to being there live.
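Serving a newly requested viewing fixed point and angle then amounts to recomputing only a view transform over the already-determined scene data, for example a standard look-at matrix, sketched here as an assumed illustration rather than the patent's own routine:

import numpy as np

def look_at(eye, target, up=(0.0, 0.0, 1.0)):
    """View matrix for a user-selected viewing fixed point and angle.
    The scene data stay unchanged; only this transform is recomputed."""
    eye, target = np.asarray(eye, float), np.asarray(target, float)
    f = target - eye
    f /= np.linalg.norm(f)                       # forward
    s = np.cross(f, np.asarray(up, float))
    s /= np.linalg.norm(s)                       # right
    u = np.cross(s, f)                           # true up
    view = np.eye(4)
    view[0, :3], view[1, :3], view[2, :3] = s, u, -f
    view[:3, 3] = -view[:3, :3] @ eye
    return view

# Lock onto a player at midfield from a seat behind the goal (assumed):
print(look_at(eye=(0, -50, 10), target=(0, 0, 1.8)))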
Still further, referring to fig. 5, the method further comprises the following steps:
step S107: and acquiring real-time expression information of the moving object, and synchronously adding the real-time expression information into the corresponding virtual model.
The expressions of the virtual model are modeled in advance. Several infrared sensing points are attached to the face of a moving object, in particular an athlete; when the signal-capture devices pick up a player's expression, it is recognized during the computer device's data processing and added correspondingly to the virtual model, making the user's VR/AR experience more realistic.
Still further, referring to fig. 6, the method further comprises the following steps:
step S108: acquiring sound information captured by sound source capturing equipment at different positions in a scene on the spot; and synchronously playing different contents and volume in the acquired sound information into the virtual reality of the user display equipment according to the distance between the visual angle position and the sound source capturing equipment.
The sound of the live audience can be recorded through sound-pickup devices and played synchronously in the virtual reality in real time, or preset similar sound content can be substituted. Either way, the user's VR/AR experience comes closer to physically watching on site, with a higher degree of realism.
Moreover, several sound-pickup devices can be installed at different positions in the real-world scene. Because each device sits at a different distance from a sound source, each picks up the same source at a different volume. During the user's VR/AR experience, the volume of each sound is adjusted according to the distance between the source and the viewing position in the virtual reality, so that wherever the user stands in the scene, different sounds are heard at volumes matching their distance from the source, giving a higher degree of realism.
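Distance-dependent volume can be sketched with a simple inverse-distance rolloff against each pickup device; the rolloff law and the positions below are illustrative assumptions, since the patent only requires the volume to vary with distance to the source:

import math

def mix_volumes(listener, sources, ref_dist=1.0):
    """Per-source gain from the listener's virtual position using a
    simple inverse-distance rolloff (assumed; the patent only requires
    volume to vary with distance to each sound-pickup device)."""
    gains = {}
    for name, pos in sources.items():
        d = math.dist(listener, pos)
        gains[name] = min(1.0, ref_dist / max(d, ref_dist))
    return gains

mics = {"north_stand": (0.0, 60.0, 5.0), "south_stand": (0.0, -60.0, 5.0)}
print(mix_volumes((0.0, -55.0, 1.7), mics))
# the nearer south-stand pickup plays louder than the north one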
It is another object of the invention to provide an immersive virtual experience system.
Referring to fig. 7, in a first particular embodiment, an immersive virtual experience system includes: a virtual modeling module 10 for building a virtual model by scaling the real-world scene, its static objects and all moving objects in equal proportion; a dynamic capture module 20 for acquiring fixed information for all static objects in the scene and the dynamic information captured by the signal-capture devices and sensed by the sensor devices on each moving object; an operation module 30 for combining the fixed and dynamic information to compute the motion data of each moving object relative to the static objects; a data combination module 40 for substituting the motion data into the corresponding virtual model to obtain its real-time motion result; and a sending module 50 for sending the real-time motion result to the user's display device for video or virtual-reality playback synchronized with the real-world scene in time order.
It should be noted that the operation module 30 includes two sub-modules: a motion-compensation operation module (not shown) and a video-format conversion module (not shown). The motion-compensation operation module applies the predictive-compensation operation to the motion paths of the skeletons in each moving object; for details, refer to the prior art. The video-format conversion module encodes the sequence frames, by video compression, into streaming media or another format the terminal device can play, which is then played through the video device.
Still further, referring to fig. 8, the system further includes a viewing-angle conversion module 60 for acquiring the user's request for a viewing fixed point and angle in VR/AR and sending the user the corresponding virtual reality.
It is an object of another aspect of the invention to provide a computer apparatus.
A computer apparatus comprises a processor and a memory, the processor being configured to execute a computer program stored in the memory to implement the immersive virtual experience method in any of its implementable manners.
In the immersive virtual experience method provided by the invention, off-site signal-capture equipment works together with sensor devices to capture the motion of each moving object: the motion result is computed from the displacement changes of the infrared sensing points relative to the calibrator and from the readings of the sensor devices at the corresponding body parts, then imported into a pre-built virtual model to produce a synchronized virtual scene, which is sent to the user's display device for playback.
In a preferred implementation the user can freely set the viewing fixed point and angle, or even lock onto a particular moving object. The viewing experience obtained this way is highly immersive; the whole pipeline of capturing, checking, synthesizing and displaying the three-dimensional motion-capture data in real time takes no more than about a second, preserving the sense of presence, while the load on, and fault tolerance required of, the system's reconstruction remain small, so the virtual reality stays synchronized with the live scene.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.

Claims (10)

1. An immersive virtual experience method, comprising the steps of:
building a virtual model by scaling a real-world scene, its static objects and all moving objects in equal proportion;
acquiring fixed information of all the static objects in the real-world scene and dynamic information captured by a signal-capture device and sensed by sensor devices for the motion of the moving object; wherein the real-world scene is provided with a calibrator matched with the signal-capture device, and a plurality of sensor devices, and/or infrared sensing points, and/or pulse sensing points, and/or electromagnetic sensing points, and/or micro-accelerometers and micro-gyroscopes are fixed on the moving object;
combining the fixed information and the dynamic information to compute motion data of the moving object relative to the static objects in the real-world scene; applying a predictive-compensation operation to a motion path of the moving object, wherein the motion path comprises a biological skeleton motion path in a biological form and an object motion path in a non-biological form;
substituting the motion data into the corresponding virtual model to obtain a real-time motion result of the virtual model, the real-time motion result being stored in playable form;
sending the real-time motion result to a display device of a user so that video or virtual reality is played back synchronously with the real-world scene in time order;
wherein the step of substituting the motion data into the corresponding virtual model to obtain the real-time motion result of the virtual model comprises the following sub-steps:
after a three-dimensional coordinate in a virtual scene is determined as a visual observation point, taking the visual observation point as a basic primitive point, generating sequence frames from the virtually simulated real-time motion result by video techniques, compensating the loss of the real-time motion result caused by discontinuous sampling of part of the data, and inserting motion-compensated sequence frames;
generating from the sequence frames, by video compression, a video file in streaming media or another format playable by the terminal device, the compression being performed either by a central server or by the terminal playback device as real-time edge computation.
2. The immersive virtual experience method of claim 1, wherein the sensor device comprises at least one of an infrared sensor, an impulse sensor, an electromagnetic sensor, and an inertial navigation sensor.
3. The immersive virtual experience method of claim 1, wherein the signal-capture device comprises at least one of a ring light emitter, a pulse emitter, an electromagnetic emitter, a mechanical sensor, and an image-capture device; the signal-capture device is fixed close to, but outside, the real-world scene.
4. The immersive virtual experience method of claim 1, wherein after the step of sending the real-time motion result to a display device of a user for synchronized video or virtual-reality playback, the method further comprises:
acquiring a request of the user for a viewing fixed point and angle on the display device, and sending the user the virtual reality corresponding to the viewing fixed point and angle; if the coordinates of the visual observation point are adjusted, recalculating all data of the basic primitive point from the new viewing fixed point and angle.
5. The immersive virtual experience method of claim 1, wherein the calibrator comprises an L-shaped calibrator and/or a T-shaped calibrator; the L-shaped calibrator is used for static calibration and the T-shaped calibrator for dynamic calibration.
6. The immersive virtual experience method of claim 1, further comprising the steps of:
and acquiring real-time expression information of the moving object, and synchronously adding the real-time expression information into the corresponding virtual model.
7. The immersive virtual experience method of claim 1, further comprising the steps of:
acquiring sound information captured by sound-source capture devices at different positions in the real-world scene; and synchronously playing different content and volume from the acquired sound information in the virtual reality of the user's display device, according to the distance between the viewing position and each sound-source capture device.
8. An immersive virtual experience system, comprising:
a virtual modeling module for building a virtual model by scaling a real-world scene, its static objects and all moving objects in equal proportion;
a dynamic capture module for acquiring fixed information of all the static objects in the real-world scene and dynamic information captured by the signal-capture device and sensed by the sensor devices for the motion of the moving object;
an operation module for combining the fixed information and the dynamic information to compute motion data of the moving object relative to the static objects in the real-world scene;
a data combination module for substituting the motion data into the corresponding virtual model to obtain a real-time motion result of the virtual model;
a sending module for sending the real-time motion result to a display device of a user so that video or virtual reality is played back synchronously with the real-world scene in time order;
wherein the data combination module is specifically configured to:
after a three-dimensional coordinate in a virtual scene is determined as a visual observation point, take the visual observation point as a basic primitive point, generate sequence frames from the virtually simulated real-time motion result by video techniques, compensate the loss of the real-time motion result caused by discontinuous sampling of part of the data, and insert motion-compensated sequence frames;
generate from the sequence frames, by video compression, a video file in streaming media or another format playable by the terminal device, the compression being performed either by a central server or by the terminal playback device as real-time edge computation.
9. The immersive virtual experience system of claim 8, further comprising:
and the visual angle conversion module is used for acquiring a request of a user for the viewing fixed point and the viewing angle of the display equipment and sending a virtual reality corresponding to the request to the user.
10. A computer arrangement comprising a processor and a memory, the processor being configured to execute a computer program stored in the memory to implement the immersive virtual experience method of any of claims 1-7.
CN201910395522.9A 2019-05-13 2019-05-13 Immersive virtual experience method, system and device Active CN110221691B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910395522.9A CN110221691B (en) 2019-05-13 2019-05-13 Immersive virtual experience method, system and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910395522.9A CN110221691B (en) 2019-05-13 2019-05-13 Immersive virtual experience method, system and device

Publications (2)

Publication Number Publication Date
CN110221691A (en) 2019-09-10
CN110221691B (en) 2022-07-15

Family

ID=67820960

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910395522.9A Active CN110221691B (en) 2019-05-13 2019-05-13 Immersive virtual experience method, system and device

Country Status (1)

Country Link
CN (1) CN110221691B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113515187B (en) * 2020-04-10 2024-02-13 咪咕视讯科技有限公司 Virtual reality scene generation method and network side equipment
CN111754827A (en) * 2020-05-20 2020-10-09 四川科华天府科技有限公司 Presentation system based on AR interactive teaching equipment
CN112860072A (en) * 2021-03-16 2021-05-28 河南工业职业技术学院 Virtual reality multi-person interactive cooperation method and system
CN113158906B (en) * 2021-04-23 2022-09-02 天津大学 Motion capture-based guqin experience learning system and implementation method
CN113448445B (en) * 2021-09-01 2021-11-30 深圳市诚识科技有限公司 Target position tracking method and system based on virtual reality
WO2023220908A1 (en) * 2022-05-17 2023-11-23 威刚科技股份有限公司 Live image reconstruction system

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105915849A (en) * 2016-05-09 2016-08-31 惠州Tcl移动通信有限公司 Virtual reality sports event play method and system
CN106296686A (en) * 2016-08-10 2017-01-04 深圳市望尘科技有限公司 One is static and dynamic camera combines to moving object three-dimensional reconstruction method frame by frame
CN106310660A (en) * 2016-09-18 2017-01-11 三峡大学 Mechanics-based visual virtual football control system
CN107871120A (en) * 2017-11-02 2018-04-03 汕头市同行网络科技有限公司 Competitive sports based on machine learning understand system and method
CN108307183A (en) * 2018-02-08 2018-07-20 广州华影广告有限公司 Virtual scene method for visualizing and system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9268406B2 (en) * 2011-09-30 2016-02-23 Microsoft Technology Licensing, Llc Virtual spectator experience with a personal audio/visual apparatus


Also Published As

Publication number Publication date
CN110221691A (en) 2019-09-10

Similar Documents

Publication Publication Date Title
CN110221691B (en) Immersive virtual experience method, system and device
EP3436867B1 (en) Head-mounted display tracking
CN102411783B (en) Move from motion tracking user in Video chat is applied
US9448067B2 (en) System and method for photographing moving subject by means of multiple cameras, and acquiring actual movement trajectory of subject based on photographed images
US20100194879A1 (en) Object motion capturing system and method
US20170024904A1 (en) Augmented reality vision system for tracking and geolocating objects of interest
WO2016017121A1 (en) Augmented reality display system, terminal device and augmented reality display method
JP5039808B2 (en) GAME DEVICE, GAME DEVICE CONTROL METHOD, AND PROGRAM
KR101898782B1 (en) Apparatus for tracking object
US11872468B2 (en) Sport training method and system and head-mounted VR device
CN106383596A (en) VR (virtual reality) dizzy prevention system and method based on space positioning
US11127156B2 (en) Method of device tracking, terminal device, and storage medium
US20220026981A1 (en) Information processing apparatus, method for processing information, and program
CN107193380A (en) A kind of low-cost and high-precision virtual reality positioning and interactive system
JPH10314357A (en) Play display device
TW201916666A (en) Mobile display device, image supply device, display system, and program
CZ24742U1 (en) Apparatus for recording and representation of ball drop next to the playing field by making use of cameras
WO2016033717A1 (en) Combined motion capturing system
Li Development of immersive and interactive virtual reality environment for two-player table tennis
KR20150066941A (en) Device for providing player information and method for providing player information using the same
CN112166594A (en) Video processing method and device
CN110657796B (en) Virtual reality auxiliary positioning device and method
US20240144613A1 (en) Augmented reality method for monitoring an event in a space comprising an event field in real time
TWI822380B (en) Ball tracking system and method
US20220339496A1 (en) Ball position identification system, ball position identification method and information storage medium

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant