CN113129413B - Three-dimensional engine-based virtual image feedback action system and method - Google Patents

Three-dimensional engine-based virtual image feedback action system and method

Info

Publication number
CN113129413B
CN113129413B (application CN202110447823.9A)
Authority
CN
China
Prior art keywords
dimensional image
module
image module
modeling
main
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110447823.9A
Other languages
Chinese (zh)
Other versions
CN113129413A (en)
Inventor
杨树才 (Yang Shucai)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Ea Intelligent Technology Co ltd
Original Assignee
Shanghai Ea Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Ea Intelligent Technology Co ltd filed Critical Shanghai Ea Intelligent Technology Co ltd
Priority to CN202110447823.9A priority Critical patent/CN113129413B/en
Publication of CN113129413A publication Critical patent/CN113129413A/en
Application granted granted Critical
Publication of CN113129413B publication Critical patent/CN113129413B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G06T13/20 3D [Three Dimensional] animation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses an action system and method based on three-dimensional-engine avatar feedback, relating to the technical field of image feedback. It addresses the problem that existing three-dimensional imaging technology achieves motion capture through close-range wearable equipment but cannot feed the captured motion back onto a real-time three-dimensional model, requiring post-production instead. The system comprises a three-dimensional image acquisition frame containing a main-view three-dimensional image module, a rear-view three-dimensional image module, a left-view three-dimensional image module, a right-view three-dimensional image module and a horizontal gravity sensing plate; the four image modules are identical in structure. The output end of the main-view module is connected to the body modeling operation sub-end, the output end of the rear-view module to the lower-limb modeling operation sub-end, and the output ends of the left-view and right-view modules to the upper-limb modeling operation sub-end.

Description

Three-dimensional engine-based virtual image feedback action system and method
Technical Field
The invention relates to the technical field of image feedback, in particular to an action system and method based on three-dimensional engine virtual image feedback.
Background
Three-dimensional imaging technology realizes interaction between the real and the virtual on the basis of a dynamic model.
However, existing three-dimensional imaging technology achieves motion capture through close-range wearable equipment; the captured motion cannot be fed back directly onto a real-time three-dimensional model and must instead be produced in post-processing. Existing requirements are therefore not met, and a three-dimensional-engine-based avatar feedback action system and method are proposed.
Disclosure of Invention
The invention aims to provide a three-dimensional-engine-based avatar feedback action system and method that solve the problem, identified in the background above, that existing three-dimensional imaging technology achieves motion capture through close-range wearable equipment but cannot feed the captured motion back onto a real-time three-dimensional model, requiring post-production instead.
In order to achieve the above purpose, the invention provides the following technical solution: an action system based on three-dimensional-engine avatar feedback comprises a three-dimensional image acquisition frame. The frame includes a main-view three-dimensional image module, a rear-view three-dimensional image module, a left-view three-dimensional image module, a right-view three-dimensional image module and a horizontal gravity sensing plate; the four image modules are identical in structure. The output end of the main-view module is connected to the body modeling operation sub-end, the output end of the rear-view module to the lower-limb modeling operation sub-end, and the output ends of the left-view and right-view modules to the upper-limb modeling operation sub-end.
Preferably, the input ends of the upper-limb, lower-limb and body modeling operation sub-ends are connected with the output end of the normal-state data module.
Preferably, the input ends of the upper-limb, lower-limb and body modeling operation sub-ends are connected with the output ends of the model database and the body-state database.
Preferably, the main-view three-dimensional image module comprises a dynamic capture grating, a main-shaft telephoto probe and an auxiliary-shaft micro-focus probe, whose output ends are connected with the input ends of the animation synthesis module and the scene synthesis unit.
Preferably, the input end of the animation synthesis module is connected with the output end of the light-and-shadow filling module, the input end of the scene synthesis unit is connected with the output end of the environment subtraction unit, and the input ends of both are connected with the output end of the base-origin coordinates.
Preferably, the output ends of the upper-limb, lower-limb and body modeling operation sub-ends are connected with the input end of the compressed data channel, whose output end is connected with the input end of the main-model joint-control terminal; the input end of the main-model joint-control terminal is connected with the output end of the multi-channel decompression module.
Preferably, the main-model joint-control terminal comprises an animation synthesis module whose input end is connected with the output ends of the dynamic synchronization unit, the action filtering unit and the frame-number adjustment module; the frame-number adjustment module comprises a linear optimization unit and a distortion correction unit.
A method for three-dimensional engine-based avatar feedback actions, comprising the steps of:
step one: the user stands within the monitoring range of the horizontal gravity sensing plate, facing the main-view three-dimensional image module, with the two hands corresponding to the left-view and right-view three-dimensional image modules on either side and the rear-view three-dimensional image module positioned behind and below the user;
step two: once the equipment is running, the dynamic capture grating and the main-shaft telephoto probe in each three-dimensional image module start operating; the user should remain as still as possible while real-time modeling data are acquired through the grating and the telephoto probe;
step three: the acquired data are divided into upper limbs, lower limbs and body; after user data acquisition is complete, the avatar is modeled in computer software, with each data group assigned an independent computer for modeling;
step four: during modeling, the system extracts the stored data with the highest matching degree from the model database and the body-state database according to the user's height and body-state data and applies it directly, then fine-tunes the database model through a normal-state logic algorithm to ensure the coordination of the overall model;
step five: after the sub-end computers complete static modeling, a three-axis environment coordinate frame with the horizontal gravity sensing plate as its origin is established, and the system automatically subtracts modeling data other than the user and designated objects during environment modeling;
step six: data on the three sub-end computers are transmitted to the main-model joint-control terminal through a dedicated compressed transmission channel, and the terminal splices the three sub-models into a complete three-dimensional model of the user;
step seven: after the main-end computer completes the assembly, the user can perform actions on the horizontal gravity sensing plate; the auxiliary-shaft micro-focus probe works with the dynamic capture grating and the main-shaft telephoto probe to capture detail changes in the user's limbs, while the gravity sensing plate under the feet records shifts in the center of gravity under different actions;
step eight: throughout the process the data are optimized and continuously uploaded to the main-end computer, and the virtual avatar in the computer displays the same actions in real time.
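The eight steps above can be pictured as a minimal pipeline. Everything in this sketch is a hypothetical simplification for illustration; the patent specifies no data formats or algorithms, and the "model" here is just an average of each region's captured points.

```python
# Hypothetical sketch of the capture / split / model / merge flow.
REGIONS = ("upper_limb", "lower_limb", "body")

def split_capture(samples):
    """Step three: group captured (region, xyz) samples by body region."""
    groups = {r: [] for r in REGIONS}
    for region, xyz in samples:
        groups[region].append(xyz)
    return groups

def model_region(points):
    """Stand-in for one sub-end computer's static modeling: average the
    region's points into a single representative landmark."""
    n = len(points)
    return tuple(sum(axis) / n for axis in zip(*points))

def merge_models(groups):
    """Step six: the main-model joint-control terminal splices the sub-models."""
    return {r: model_region(pts) for r, pts in groups.items() if pts}

capture = [("body", (0.0, 1.2, 0.0)), ("body", (0.0, 1.4, 0.0)),
           ("upper_limb", (0.5, 1.0, 0.0)), ("lower_limb", (0.1, 0.4, 0.0))]
model = merge_models(split_capture(capture))
```

In the real system each region would run on its own computer; here the three "sub-ends" are simply three dictionary entries.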
Compared with the prior art, the invention has the beneficial effects that:
1. Through real-time modeling data acquisition by the dynamic capture grating and the main-shaft telephoto probe, the acquired data are divided into upper limbs, lower limbs and body. After user data acquisition is complete, the avatar is modeled in computer software, with each data group assigned an independent computer. During modeling, the system extracts the stored data with the highest matching degree from the model database and the body-state database according to the user's height and body-state data, applies it directly, and then fine-tunes the database model through a normal-state logic algorithm to ensure the coordination of the overall model.
2. The normal-state data module stores a large amount of detailed data based on actual body states, so the system can match similar body-state data against the measured results and use it directly; it can then extract ready-made model data from the model database and the body-state database according to the matched data, shortening the time required for modeling.
3. The auxiliary-shaft micro-focus probe, working with the dynamic capture grating and the main-shaft telephoto probe, captures detail changes in the user's limbs, while the gravity sensing plate under the feet records shifts in the center of gravity under different actions. Throughout the process the data are optimized and continuously uploaded to the main-end computer, where the virtual avatar displays the same actions.
Drawings
FIG. 1 is an overall control flow diagram of the present invention;
FIG. 2 is a flow chart of image capturing according to the present invention;
FIG. 3 is a split modeling flow chart of the present invention;
FIG. 4 is a compressed transmission flow chart of the present invention;
fig. 5 is a terminal feedback flow chart of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments.
Referring to figs. 1-5, an embodiment of the present invention is provided: the three-dimensional image acquisition frame comprises a main-view three-dimensional image module, a rear-view three-dimensional image module, a left-view three-dimensional image module, a right-view three-dimensional image module and a horizontal gravity sensing plate; the four image modules are identical in structure. The output end of the main-view module is connected to the body modeling operation sub-end, and this module mainly observes the dynamics of the user's head and upper body. The output end of the rear-view module is connected to the lower-limb modeling operation sub-end, and this module observes the dynamics of the waist and legs. The output ends of the left-view and right-view modules are connected to the upper-limb modeling operation sub-end, and these modules observe the dynamics of the user's hands.
Further, the input ends of the upper-limb, lower-limb and body modeling operation sub-ends are connected with the output end of the normal-state data module, which stores a large amount of detailed data based on actual body states so that the system can match similar body-state data against the measured results and use it directly.
Furthermore, the input ends of the upper-limb, lower-limb and body modeling operation sub-ends are connected with the output ends of the model database and the body-state database; the system can extract ready-made model data from these databases according to the matched data, shortening the time required for modeling.
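The database lookup described above might look like the following sketch. The record fields, the distance weights and the height-scaling "fine-tune" are assumptions for illustration, not taken from the patent:

```python
# Hypothetical body-state records; only height and arm span are matched here.
BODY_STATE_DB = [
    {"height": 160.0, "arm_span": 158.0, "model_id": "m-160"},
    {"height": 175.0, "arm_span": 174.0, "model_id": "m-175"},
    {"height": 190.0, "arm_span": 191.0, "model_id": "m-190"},
]

def best_match(height, arm_span):
    """Return the record with the smallest weighted distance to the measurement."""
    return min(BODY_STATE_DB,
               key=lambda r: abs(r["height"] - height)
                             + 0.5 * abs(r["arm_span"] - arm_span))

def fine_tune(record, height):
    """Stand-in for the normal-state logic algorithm: scale the stored
    model to the user's measured height."""
    return {"model_id": record["model_id"], "scale": height / record["height"]}

match = best_match(172.0, 170.0)
tuned = fine_tune(match, 172.0)
```

For a 172 cm user the closest stored record is selected and scaled, which is the sense in which the ready-made model "shortens the time required for modeling".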
Further, the main-view three-dimensional image module comprises a dynamic capture grating, a main-shaft telephoto probe and an auxiliary-shaft micro-focus probe, whose output ends are connected with the input ends of the animation synthesis module and the scene synthesis unit; the auxiliary-shaft micro-focus probe captures finer actions.
Furthermore, the input end of the animation synthesis module is connected with the output end of the light-and-shadow filling module, the input end of the scene synthesis unit is connected with the output end of the environment subtraction unit, and the input ends of both are connected with the output end of the base-origin coordinates. After the sub-end computers complete static modeling, a three-axis environment coordinate frame with the horizontal gravity sensing plate as its origin is established, and the system automatically subtracts modeling data other than the user and designated objects during environment modeling.
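The base-origin frame and environment subtraction can be illustrated with a short sketch; the plate position, the bounding-box test and all dimensions are assumptions:

```python
# Raw sensor coordinates of the plate's center, and a hypothetical box
# around the standing user; anything outside the box is "environment".
PLATE_ORIGIN = (2.0, 0.0, 3.0)
USER_BOX = (1.0, 2.2, 1.0)   # half-width x, height y, half-depth z

def to_plate_frame(p):
    """Re-express a raw point relative to the plate origin (base-origin coordinates)."""
    return tuple(a - b for a, b in zip(p, PLATE_ORIGIN))

def subtract_environment(points):
    """Keep only points inside the user's box; the rest is subtracted."""
    kept = []
    for x, y, z in map(to_plate_frame, points):
        if abs(x) <= USER_BOX[0] and 0.0 <= y <= USER_BOX[1] and abs(z) <= USER_BOX[2]:
            kept.append((x, y, z))
    return kept

scene = [(2.1, 1.0, 3.0),   # on the user
         (5.0, 1.0, 3.0),   # background wall, subtracted
         (2.0, 1.8, 2.9)]   # on the user
user_points = subtract_environment(scene)
```

A real system would segment the user far more carefully; the point of the sketch is only that the plate anchors the coordinate frame and everything else is dropped.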
Further, the output ends of the upper-limb, lower-limb and body modeling operation sub-ends are connected with the input end of the compressed data channel, whose output end is connected with the input end of the main-model joint-control terminal; the input end of the main-model joint-control terminal is connected with the output end of the multi-channel decompression module.
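A compressed-channel round trip of the kind described might be sketched as follows, using zlib and JSON purely as stand-ins for the unspecified wire format:

```python
import json
import zlib

def compress_channel(model_data):
    """One sub-end computer: serialize and compress its model data."""
    return zlib.compress(json.dumps(model_data).encode("utf-8"))

def decompress_channels(channels):
    """Multi-channel decompression module: restore every sub-end's data
    before the joint-control terminal assembles the full model."""
    return {name: json.loads(zlib.decompress(blob).decode("utf-8"))
            for name, blob in channels.items()}

channels = {name: compress_channel({"vertices": list(range(100))})
            for name in ("upper_limb", "lower_limb", "body")}
restored = decompress_channels(channels)
```

Each of the three sub-ends gets its own channel, mirroring the "dedicated compressed transmission channel" per sub-end computer.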
Further, the main-model joint-control terminal comprises an animation synthesis module whose input end is connected with the output ends of the dynamic synchronization unit, the action filtering unit and the frame-number adjustment module; the frame-number adjustment module comprises a linear optimization unit and a distortion correction unit, which optimize the synthesized model and the transmitted data to guarantee the smoothness of the virtual action feedback.
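One plausible reading of the linear-optimization and distortion-correction units is a clamp-then-smooth pass over each motion track, as in this sketch; the threshold and window size are hypothetical:

```python
MAX_STEP = 0.5  # largest plausible per-frame displacement (assumed)

def correct_distortion(track):
    """Distortion correction: clamp physically implausible frame-to-frame jumps."""
    out = [track[0]]
    for v in track[1:]:
        prev = out[-1]
        step = max(-MAX_STEP, min(MAX_STEP, v - prev))
        out.append(prev + step)
    return out

def smooth(track):
    """Linear optimization: 3-sample moving average with edge padding."""
    padded = [track[0]] + track + [track[-1]]
    return [(padded[i - 1] + padded[i] + padded[i + 1]) / 3.0
            for i in range(1, len(padded) - 1)]

raw = [0.0, 0.1, 5.0, 0.3, 0.4]      # 5.0 simulates a capture glitch
clean = smooth(correct_distortion(raw))
```

The glitch at 5.0 is first clamped to a plausible step and then averaged away, which is one way such a module could "guarantee smoothness" of the fed-back action.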
A method for three-dimensional engine-based avatar feedback actions, comprising the steps of:
step one: the user stands within the monitoring range of the horizontal gravity sensing plate, facing the main-view three-dimensional image module, with the two hands corresponding to the left-view and right-view three-dimensional image modules on either side and the rear-view three-dimensional image module positioned behind and below the user;
step two: once the equipment is running, the dynamic capture grating and the main-shaft telephoto probe in each three-dimensional image module start operating; the user should remain as still as possible while real-time modeling data are acquired through the grating and the telephoto probe;
step three: the acquired data are divided into upper limbs, lower limbs and body; after user data acquisition is complete, the avatar is modeled in computer software, with each data group assigned an independent computer for modeling;
step four: during modeling, the system extracts the stored data with the highest matching degree from the model database and the body-state database according to the user's height and body-state data and applies it directly, then fine-tunes the database model through a normal-state logic algorithm to ensure the coordination of the overall model;
step five: after the sub-end computers complete static modeling, a three-axis environment coordinate frame with the horizontal gravity sensing plate as its origin is established, and the system automatically subtracts modeling data other than the user and designated objects during environment modeling;
step six: data on the three sub-end computers are transmitted to the main-model joint-control terminal through a dedicated compressed transmission channel, and the terminal splices the three sub-models into a complete three-dimensional model of the user;
step seven: after the main-end computer completes the assembly, the user can perform actions on the horizontal gravity sensing plate; the auxiliary-shaft micro-focus probe works with the dynamic capture grating and the main-shaft telephoto probe to capture detail changes in the user's limbs, while the gravity sensing plate under the feet records shifts in the center of gravity under different actions;
step eight: throughout the process the data are optimized and continuously uploaded to the main-end computer, and the virtual avatar in the computer displays the same actions in real time.
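The center-of-gravity measurement in step seven can be pictured with a small sketch; the plate is modeled here, purely as an assumption, as four load cells whose load-weighted average gives the center of pressure:

```python
# Hypothetical 2x2 load-cell layout: (x, y) position of each cell on the plate.
CELLS = [(-1.0, -1.0), (1.0, -1.0), (-1.0, 1.0), (1.0, 1.0)]

def center_of_pressure(loads):
    """Load-weighted average of the cell positions: where the user's
    weight is concentrated on the plate."""
    total = sum(loads)
    x = sum(w * cx for w, (cx, _) in zip(loads, CELLS)) / total
    y = sum(w * cy for w, (_, cy) in zip(loads, CELLS)) / total
    return x, y

balanced = center_of_pressure([10.0, 10.0, 10.0, 10.0])      # standing centered
leaning_right = center_of_pressure([5.0, 15.0, 5.0, 15.0])   # weight on +x cells
```

Tracking this point over time is what lets the system report "shifts in the center of gravity under different actions".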
Working principle: in use, the user stands within the monitoring range of the horizontal gravity sensing plate, facing the main-view three-dimensional image module, with the two hands corresponding to the left-view and right-view modules on either side and the rear-view module positioned behind and below the user. Once the equipment is running, the dynamic capture grating and the main-shaft telephoto probe in each three-dimensional image module start operating, and the user remains as still as possible while real-time modeling data are acquired. The acquired data are divided into upper limbs, lower limbs and body; after user data acquisition is complete, the avatar is modeled in computer software, with each data group assigned an independent computer. The system extracts the stored data with the highest matching degree from the model database and the body-state database according to the user's height and body-state data, applies it directly, and fine-tunes the database model through a normal-state logic algorithm to ensure the coordination of the overall model. After the sub-end computers complete static modeling, a three-axis environment coordinate frame with the horizontal gravity sensing plate as its origin is established, and the system automatically subtracts modeling data other than the user and designated objects during environment modeling. Data on the three sub-end computers are transmitted to the main-model joint-control terminal through a dedicated compressed transmission channel, and the terminal splices the three sub-model data sets into a complete three-dimensional model of the user. After the main-end computer completes the assembly, the user can perform actions on the horizontal gravity sensing plate; the auxiliary-shaft micro-focus probe works with the dynamic capture grating and the main-shaft telephoto probe to capture detail changes in the user's limbs, while the gravity sensing plate under the feet records shifts in the center of gravity under different actions. Throughout the process the data are optimized and continuously uploaded to the main-end computer, where the virtual avatar displays the same actions.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.

Claims (5)

1. An action system based on three-dimensional-engine avatar feedback, comprising a three-dimensional image acquisition frame, characterized in that: the three-dimensional image acquisition frame comprises a main-view three-dimensional image module, a rear-view three-dimensional image module, a left-view three-dimensional image module, a right-view three-dimensional image module and a horizontal gravity sensing plate; the four image modules are identical in structure; the output end of the main-view module is connected to the body modeling operation sub-end, the output end of the rear-view module to the lower-limb modeling operation sub-end, and the output ends of the left-view and right-view modules to the upper-limb modeling operation sub-end; the input ends of the upper-limb, lower-limb and body modeling operation sub-ends are connected with the output ends of a model database and a body-state database; and the main-view three-dimensional image module comprises a dynamic capture grating, a main-shaft telephoto probe and an auxiliary-shaft micro-focus probe, whose output ends are connected with the input ends of the animation synthesis module and the scene synthesis unit.
2. The three-dimensional-engine-based avatar feedback action system of claim 1, wherein: the input end of the animation synthesis module is connected with the output end of the light-and-shadow filling module, the input end of the scene synthesis unit is connected with the output end of the environment subtraction unit, and the input ends of the animation synthesis module and the scene synthesis unit are connected with the output end of the base-origin coordinates.
3. The three-dimensional-engine-based avatar feedback action system of claim 1, wherein: the output ends of the upper-limb, lower-limb and body modeling operation sub-ends are connected with the input end of the compressed data channel, the output end of the compressed data channel is connected with the input end of the main-model joint-control terminal, and the input end of the main-model joint-control terminal is connected with the output end of the multi-channel decompression module.
4. The three-dimensional-engine-based avatar feedback action system of claim 3, wherein: the main-model joint-control terminal comprises an animation synthesis module, the input end of which is connected with the output ends of the dynamic synchronization unit, the action filtering unit and the frame-number adjustment module; and the frame-number adjustment module comprises a linear optimization unit and a distortion correction unit.
5. A method based on three-dimensional-engine avatar feedback actions, implemented with the three-dimensional-engine avatar feedback action system of any one of claims 1-4, wherein the method comprises the following steps:
step one: the user stands within the monitoring range of the horizontal gravity sensing plate, facing the main-view three-dimensional image module, with the two hands corresponding to the left-view and right-view three-dimensional image modules on either side and the rear-view three-dimensional image module positioned behind and below the user;
step two: once the equipment is running, the dynamic capture grating and the main-shaft telephoto probe in each three-dimensional image module start operating; the user should remain as still as possible while real-time modeling data are acquired through the grating and the telephoto probe;
step three: the acquired data are divided into upper limbs, lower limbs and body; after user data acquisition is complete, the avatar is modeled in computer software, with each data group assigned an independent computer for modeling;
step four: during modeling, the system extracts the stored data with the highest matching degree from the model database and the body-state database according to the user's height and body-state data and applies it directly, then fine-tunes the database model through a normal-state logic algorithm to ensure the coordination of the overall model;
step five: after the sub-end computers complete static modeling, a three-axis environment coordinate frame with the horizontal gravity sensing plate as its origin is established, and the system automatically subtracts modeling data other than the user and designated objects during environment modeling;
step six: data on the three sub-end computers are transmitted to the main-model joint-control terminal through a dedicated compressed transmission channel, and the terminal splices the three sub-models into a complete three-dimensional model of the user;
step seven: after the main-end computer completes the assembly, the user can perform actions on the horizontal gravity sensing plate; the auxiliary-shaft micro-focus probe works with the dynamic capture grating and the main-shaft telephoto probe to capture detail changes in the user's limbs, while the gravity sensing plate under the feet records shifts in the center of gravity under different actions;
step eight: throughout the process the data are optimized and continuously uploaded to the main-end computer, and the virtual avatar in the computer displays the same actions in real time.
CN202110447823.9A 2021-04-25 2021-04-25 Three-dimensional engine-based virtual image feedback action system and method Active CN113129413B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110447823.9A CN113129413B (en) 2021-04-25 2021-04-25 Three-dimensional engine-based virtual image feedback action system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110447823.9A CN113129413B (en) 2021-04-25 2021-04-25 Three-dimensional engine-based virtual image feedback action system and method

Publications (2)

Publication Number Publication Date
CN113129413A CN113129413A (en) 2021-07-16
CN113129413B true CN113129413B (en) 2023-05-16

Family

ID=76780123

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110447823.9A Active CN113129413B (en) 2021-04-25 2021-04-25 Three-dimensional engine-based virtual image feedback action system and method

Country Status (1)

Country Link
CN (1) CN113129413B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103134444A (en) * 2013-02-01 2013-06-05 Tongji University Double-field variable-focus three-dimensional measurement system
CN110503707A (en) * 2019-07-31 2019-11-26 北京毛毛虫森林文化科技有限公司 Real-person motion capture real-time animation system and method

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104700433B (en) * 2015-03-24 2016-04-27 National University of Defense Technology Vision-based real-time full-body human motion capture method and system
KR101862131B1 (en) * 2016-06-08 2018-05-30 한국과학기술연구원 Motion capture system using a FBG sensor
CN107274465A (en) * 2017-05-31 2017-10-20 Zhuhai Kingsoft Online Game Technology Co., Ltd. Virtual reality livestream anchoring method, device, and system
CN110087059B (en) * 2018-01-26 2021-02-19 Sichuan University Interactive auto-stereoscopic display method for real three-dimensional scene

Also Published As

Publication number Publication date
CN113129413A (en) 2021-07-16

Similar Documents

Publication Publication Date Title
US7606392B2 (en) Capturing and processing facial motion data
CN203689439U (en) Intelligent holographic projection system controlled by human body
CN106600709A (en) Decoration information model-based VR virtual decoration method
CN101277454A (en) Method for generating real time tridimensional video based on binocular camera
CN202662016U (en) Real-time virtual fitting device
CN203746012U (en) Three-dimensional virtual scene human-computer interaction stereo display system
CN104702936A (en) Virtual reality interaction method based on glasses-free 3D display
CN105183161A (en) Synchronized moving method for user in real environment and virtual environment
KR20170044318A (en) Method for collaboration using head mounted display
CN107134194A (en) Immersion vehicle simulator
CN105739703A (en) Virtual reality somatosensory interaction system and method for wireless head-mounted display equipment
CN110503707A (en) Real-person motion capture real-time animation system and method
JP2022512262A (en) Image processing methods and equipment, image processing equipment and storage media
CN112070820A (en) Distributed augmented reality positioning terminal, positioning server and positioning system
CN109806580A (en) Mixed reality system and method based on wireless transmission
Gilson et al. High fidelity immersive virtual reality
CN101923729B (en) Reconstruction method of three-dimensional shape of lunar surface based on single gray level image
CN205005198U (en) Head -mounted display
CN113129413B (en) Three-dimensional engine-based virtual image feedback action system and method
CN116994720A (en) Image remote communication collaboration system based on XR technology
CN108022459A (en) Mechanical part assembly demonstration system based on AR
CN204883048U (en) Wear -type virtual reality display device
CN116543083A (en) Three-dimensional animation production system
CN207502836U (en) Augmented reality display device
CN110610536A (en) Method for displaying real scene for VR equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant