CN109885163A - Multi-user interactive collaboration method and system for virtual reality - Google Patents
Multi-user interactive collaboration method and system for virtual reality
- Publication number
- CN109885163A CN109885163A CN201910122971.6A CN201910122971A CN109885163A CN 109885163 A CN109885163 A CN 109885163A CN 201910122971 A CN201910122971 A CN 201910122971A CN 109885163 A CN109885163 A CN 109885163A
- Authority
- CN
- China
- Prior art keywords
- data
- role
- user
- scene
- skeleton
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Processing Or Creating Images (AREA)
Abstract
The present invention discloses a multi-user interactive collaboration method and system for virtual reality. The system includes motion-capture equipment for collecting user skeleton data; multiple clients that perform data modeling on the skeleton data to obtain user posture data and map it onto the initial joint-position data of each joint point of a skeleton model; and a server that binds the initial joint-position data of the skeleton model to the user's scene role and synchronously transfers the role location data to the other scene roles. Each client also updates the initial joint-position data of its scene role and combines it with the model animation of the virtual scene to form a posture-driven skeleton animation. Because the clients perform posture-data modeling, mapping, and joint-position updating, server load is greatly reduced, multi-process distributed dynamic load balancing is achieved, the multi-user concurrency bottleneck is resolved, and system scalability is improved.
Description
Technical field
The invention belongs to the technical field of virtual reality, and in particular relates to a multi-user interactive collaboration method and system for virtual reality.
Background art
With the development of virtual reality and Internet technology, distributed virtual reality (Distributed Virtual Reality, DVR), also known as distributed virtual environments (DVE, Distributed Virtual Environments), has emerged to meet the demand for interaction between geographically dispersed computing terminals.
Computing terminals distributed across different regions each construct a virtual environment locally, and then interact or roam in a shared three-dimensional virtual environment by controlling a virtual node avatar (a virtual human representing the real user).
At present, multi-user interaction in virtual reality still has the following problems: the volume of interaction data is large and its transmission is inefficient, which degrades real-time performance and, in turn, the user experience.
Summary of the invention
The present invention aims to provide a multi-user interactive collaboration method and system for virtual reality that can effectively overcome the above defects.
To achieve this goal, the technical scheme of the invention is as follows:
There are multiple scene roles in a virtual scene; each scene role corresponds to one user, and each scene role has an absolute position in the virtual scene.
(1) Data acquisition
(11) Motion capture: user skeleton data is collected by motion-capture equipment; the skeleton data includes the three-dimensional coordinates of each of the user's skeletal joint points.
(12) Posture modeling: data modeling is performed on the user skeleton data to obtain user posture data.
(13) Joint mapping: the user posture data is mapped onto the initial joint-position data of each joint point of a skeleton model.
(2) Data interaction
(21) Joint-position and scene-role binding: the initial joint-position data of the skeleton model is bound to the user's scene role to obtain role location data.
(22) Data transmission and distribution: the role location data is synchronously transferred to the other scene roles.
(3) VR display
(31) Joint displacement: the initial joint-position data of the scene role is updated according to the role location data, so that the scene role can move up and down, horizontally, and/or rotationally.
(32) Skeleton animation: while the initial joint-position data of the scene role is updated, it is combined with the model animation of the virtual scene to form a posture-driven skeleton animation, so that the scene role in the virtual scene remains consistent with the real-world motion posture of the user.
Further, the role location data includes the initial joint-position data, role posture data, and role absolute-position data.
Further, the motion-capture equipment includes a video-image recognition device or a wearable position-sensing device.
To solve the above technical problem, an embodiment of the invention also provides a multi-user interactive collaboration system for virtual reality, comprising:
motion-capture equipment for collecting user skeleton data, the skeleton data including the three-dimensional coordinates of each of the user's skeletal joint points;
multiple clients connected to the motion-capture equipment, each client comprising:
a posture-modeling unit for performing data modeling on the user skeleton data to obtain user posture data;
a position-mapping unit for mapping the user posture data onto the initial joint-position data of each joint point of a skeleton model;
a server comprising:
a role-binding unit for binding the initial joint-position data of the skeleton model to the user's scene role to obtain role location data;
a synchronous-transfer unit for synchronously transferring the role location data to the other scene roles;
each client further comprising:
a joint-position updating unit for updating the initial joint-position data of the scene role according to the role location data, so that the scene role moves up and down, horizontally, and/or rotationally;
an animation-combining unit for combining the updated joint-position data of the scene role with the model animation of the virtual scene to form a posture-driven skeleton animation, so that the scene role in the virtual scene remains consistent with the real-world motion posture of the user;
a display unit for displaying the posture-driven skeleton animation.
Further, the role location data includes the initial joint-position data, role posture data, and role absolute-position data.
Further, the motion-capture equipment includes a video-image recognition device or a wearable position-sensing device.
Based on the above technical scheme, the embodiments of the invention can produce at least the following technical effects:
1. By adding the acquisition and synchronization of virtual-reality data, the invention strengthens the combination of the virtual and the real and effectively realizes user gesture recognition and motion capture.
2. Using a role login scheme both guarantees the unique identity of each role and meets the security requirements of a multi-user interactive system; binding the interaction data to the roles solves the problem of positioning multiple users in the virtual scene.
3. The clients perform posture-data modeling, mapping, and joint-position updating, which greatly reduces server load, realizes multi-process distributed dynamic load balancing, resolves the multi-user concurrency bottleneck, and improves system scalability.
Brief description of the drawings
Fig. 1 is a flow diagram of the multi-user interactive collaboration method for virtual reality of an embodiment of the invention;
Fig. 2 is a flow diagram of the multi-user interactive collaboration method for virtual reality of an embodiment of the invention;
Fig. 3 is a structural diagram of the multi-user interactive collaboration system for virtual reality of an embodiment of the invention.
Specific embodiments
To make the objects, technical schemes, and advantages of the embodiments of the invention clearer, the technical schemes in the embodiments are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the invention rather than all of them. All other embodiments obtained by a person of ordinary skill in the art based on these embodiments without creative effort shall fall within the protection scope of the invention.
The embodiments of the invention are described in further detail below with reference to the accompanying drawings. It should be understood that the embodiments described here are only for illustrating and explaining the invention and are not intended to limit it.
As shown in Figs. 1 and 2, the multi-user interactive collaboration method for virtual reality of an embodiment of the invention comprises:
(1) Data acquisition
(11) Motion capture: user skeleton data is collected by motion-capture equipment; the skeleton data includes the three-dimensional coordinates of each of the user's skeletal joint points. The motion-capture equipment includes a video-image recognition device or a wearable position-sensing device, for example a position sensor worn at a skeletal joint such as the elbow.
(12) Posture modeling: data modeling is performed on the user skeleton data to obtain user posture data. Specifically, the user posture data is the motion-change data of each of the user's skeletal joint points, e.g. the joint action of the elbow translating from position A to position B. The posture data can be obtained, for example, by modeling rotation matrices.
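The patent does not specify how the rotation matrices are modeled; one common choice is Rodrigues' formula, which gives the rotation aligning a bone's direction vector before and after a joint movement. A sketch under that assumption:

```python
import numpy as np

def rotation_between(a, b):
    """Rotation matrix R such that R @ a_hat = b_hat (Rodrigues' formula)."""
    a = np.asarray(a, float)
    b = np.asarray(b, float)
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    v = np.cross(a, b)           # rotation axis (unnormalized)
    c = float(np.dot(a, b))      # cosine of the rotation angle
    if np.isclose(c, -1.0):      # opposite vectors: axis is ambiguous
        raise ValueError("180-degree rotation: choose an axis explicitly")
    K = np.array([[0.0, -v[2], v[1]],
                  [v[2], 0.0, -v[0]],
                  [-v[1], v[0], 0.0]])
    return np.eye(3) + K + K @ K / (1.0 + c)

# elbow-to-wrist bone direction at pose A vs. pose B
R = rotation_between([1.0, 0.0, 0.0], [0.0, 1.0, 0.0])
```

Applying `R` to the pose-A bone direction reproduces the pose-B direction, which is the per-joint motion-change data the step describes.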
(13) Joint mapping: the user posture data is mapped onto the initial joint-position data of each joint point of a skeleton model; that is, each joint of the user's skeleton is mapped to the corresponding joint point on the skeleton model, so that the user posture data is given joint-position data. Continuing the example, a change in the user's elbow posture data corresponds to the elbow joint on the skeleton model, so the elbow of the role model moves from position A' to position B'. The initial joint-position data is the position data of each joint point when the role model is in its initial, unified posture.
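The mapping step can be sketched as a name correspondence between captured joints and model joints, with the user's per-joint displacement applied to the model's initial joint positions. The joint names and the displacement representation are illustrative assumptions:

```python
# Hypothetical correspondence between captured joints and model joints.
USER_TO_MODEL = {"elbow_r": "RightForeArm", "wrist_r": "RightHand"}

def map_pose_to_model(pose_deltas, model_initial_positions):
    """Give the user posture data (per-joint displacement) joint-position
    data on the model: new position = initial position + displacement."""
    mapped = {}
    for user_joint, delta in pose_deltas.items():
        model_joint = USER_TO_MODEL[user_joint]
        init = model_initial_positions[model_joint]
        mapped[model_joint] = tuple(i + d for i, d in zip(init, delta))
    return mapped

# elbow moved from A to B -> model elbow moves from A' to B'
initial = {"RightForeArm": (1.0, 2.0, 0.0), "RightHand": (1.5, 1.0, 0.0)}
moved = map_pose_to_model({"elbow_r": (0.5, 0.25, 0.0)}, initial)
```

Here `moved["RightForeArm"]` is the model elbow's new position B' derived from its initial position A'.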
(2) Data interaction
(21) Joint-position and scene-role binding: the initial joint-position data of the skeleton model is bound to the user's scene role to obtain role location data. The role location data includes the initial joint-position data, role posture data, and role absolute-position data.
Each scene role has an absolute position in the virtual scene, i.e. world coordinates, and different scene roles have different absolute positions. The joint-position data of the skeleton model therefore needs to be bound to the user's scene role: the joint-position data is referenced to the absolute position of that scene role, yielding joint positions relative to the role's absolute position. At this point the user posture data, the joint-position data, and the role absolute-position data are all associated. Continuing the example, the role posture data of the elbow joint in the virtual scene is thus obtained.
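The binding step amounts to re-expressing joint positions relative to the role's world coordinates and packaging the pieces together. A sketch, with the dictionary layout of the role location data as an assumption:

```python
def bind_to_role(joint_positions, role_world_position):
    """Bind joint positions to a scene role: each joint position is
    re-expressed relative to the role's absolute (world) position, and
    the role location data keeps both pieces together."""
    relative = {name: tuple(p - w for p, w in zip(pos, role_world_position))
                for name, pos in joint_positions.items()}
    return {"role_world_position": role_world_position,
            "joint_relative_positions": relative}

# role standing at world coordinates (10, 0, 5); its elbow joint in world space
loc = bind_to_role({"RightForeArm": (10.5, 1.25, 5.0)}, (10.0, 0.0, 5.0))
```

The relative offsets stay valid wherever the role stands, so only the world position needs updating when the role moves through the scene.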
(22) Data transmission and distribution: the role location data is synchronously transferred to the other scene roles.
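The synchronous transfer can be sketched as the server serializing one role's location data and pushing it to every other connected client. In-memory inboxes stand in for network sockets here, and the message format is an illustrative assumption:

```python
import json

def broadcast_role_data(role_location_data, sender_id, clients):
    """Serialize one role's location data and push it to every other
    client. `clients` maps client id -> inbox list; a real system would
    write to sockets instead of lists."""
    message = json.dumps({"sender": sender_id, "data": role_location_data})
    for client_id, inbox in clients.items():
        if client_id != sender_id:       # distribute to other scene roles only
            inbox.append(message)
    return message

clients = {"user-1": [], "user-2": [], "user-3": []}
broadcast_role_data({"role_world_position": [10.0, 0.0, 5.0]},
                    "user-1", clients)
```

Because the clients have already done the modeling and mapping, the server only relays this compact role data, which is what keeps the per-user load on the server small.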
(3) VR display
(31) Joint displacement: the initial joint-position data of the scene role is updated according to the role location data, so that the scene role can move up and down, horizontally, and/or rotationally.
(32) Skeleton animation: while the initial joint-position data of the scene role is updated, it is combined with the model animation of the virtual scene to form a posture-driven skeleton animation, so that the scene role in the virtual scene remains consistent with the real-world motion posture of the user.
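On the receiving client, the update step can be sketched as overwriting the local role's joint positions with the received data before the animation system consumes them. The optional linear blending is an illustrative addition for smoothing network updates, not something the patent requires:

```python
def update_scene_role(local_joints, received_joints, blend=1.0):
    """Update a scene role's joint positions from received role location
    data. blend=1.0 snaps to the received pose; blend < 1.0 linearly
    interpolates toward it, a common way to smooth network updates."""
    for name, target in received_joints.items():
        cur = local_joints.get(name, target)
        local_joints[name] = tuple(c + (t - c) * blend
                                   for c, t in zip(cur, target))
    return local_joints

role = {"RightForeArm": (0.0, 0.0, 0.0)}
update_scene_role(role, {"RightForeArm": (1.0, 2.0, 0.0)}, blend=0.5)
```

The animation system then skins the model mesh to these updated joints each frame, producing the posture-driven skeleton animation that tracks the user's real-world motion.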
By adding the acquisition and synchronization of virtual-reality data, the invention strengthens the combination of the virtual and the real and effectively realizes user gesture recognition and motion capture. Using a role login scheme both guarantees the unique identity of each role and meets the security requirements of a multi-user interactive system; binding the interaction data to the roles solves the problem of positioning multiple users in the virtual scene.
As shown in Fig. 3, an embodiment of the invention provides a multi-user interactive collaboration system for virtual reality, comprising:
multiple motion-capture devices 100 for collecting user skeleton data, the skeleton data including the three-dimensional coordinates of each of the user's skeletal joint points;
multiple clients 200 connected to the motion-capture devices, each client comprising:
a posture-modeling unit 210 for performing data modeling on the user skeleton data to obtain user posture data;
a position-mapping unit 220 for mapping the user posture data onto the initial joint-position data of each joint point of a skeleton model;
a server 300 comprising:
a role-binding unit 310 for binding the initial joint-position data of the skeleton model to the user's scene role to obtain role location data;
a synchronous-transfer unit 320 for synchronously transferring the role location data to the other scene roles;
each client 200 further comprising:
a joint-position updating unit 230 for updating the initial joint-position data of the scene role according to the role location data, so that the scene role moves up and down, horizontally, and/or rotationally;
an animation-combining unit 240 for combining the updated joint-position data of the scene role with the model animation of the virtual scene to form a posture-driven skeleton animation, so that the scene role in the virtual scene remains consistent with the real-world motion posture of the user;
a display unit 250 for displaying the posture-driven skeleton animation.
Further, the role location data includes the initial joint-position data, role posture data, and role absolute-position data.
Further, the motion-capture equipment includes a video-image recognition device or a wearable position-sensing device.
Based on the above technical scheme, the embodiments of the invention can produce at least the following technical effects:
By adding the acquisition and synchronization of virtual-reality data, the invention strengthens the combination of the virtual and the real and effectively realizes user gesture recognition and motion capture.
Using a role login scheme both guarantees the unique identity of each role and meets the security requirements of a multi-user interactive system; binding the interaction data to the roles solves the problem of positioning multiple users in the virtual scene.
The clients perform posture-data modeling, mapping, and joint-position updating, which greatly reduces server load, realizes multi-process distributed dynamic load balancing, resolves the multi-user concurrency bottleneck, and improves system scalability.
It should be understood by those skilled in the art that the embodiments of the invention may be provided as a method, a system, or a computer program product. Accordingly, the invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, and optical memory) containing computer-usable program code.
The invention is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to embodiments of the invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data-processing device to produce a machine, such that the instructions executed by the processor create means for implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or another programmable data-processing device to work in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
These computer program instructions may also be loaded onto a computer or another programmable data-processing device, so that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, such that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
Although preferred embodiments of the invention have been described, additional changes and modifications may be made to these embodiments once a person skilled in the art learns of the basic inventive concept. The appended claims are therefore intended to be interpreted as covering the preferred embodiments and all changes and modifications falling within the scope of the invention.
Obviously, those skilled in the art can make various changes and modifications to the invention without departing from its spirit and scope. If these modifications and variations fall within the scope of the claims of the invention and their technical equivalents, the invention is also intended to include them.
Claims (6)
1. A multi-user interactive collaboration method for virtual reality, characterized in that:
there are multiple scene roles in a virtual scene, each scene role corresponds to one user, and each scene role has an absolute position in the virtual scene;
(1) data acquisition
(11) motion capture: user skeleton data is collected by motion-capture equipment, the skeleton data including the three-dimensional coordinates of each of the user's skeletal joint points;
(12) posture modeling: data modeling is performed on the user skeleton data to obtain user posture data;
(13) joint mapping: the user posture data is mapped onto the initial joint-position data of each joint point of a skeleton model;
(2) data interaction
(21) joint-position and scene-role binding: the initial joint-position data of the skeleton model is bound to the user's scene role to obtain role location data;
(22) data transmission and distribution: the role location data is synchronously transferred to the other scene roles;
(3) VR display
(31) joint displacement: the initial joint-position data of the scene role is updated according to the role location data, so that the scene role moves up and down, horizontally, and/or rotationally;
(32) skeleton animation: while the initial joint-position data of the scene role is updated, it is combined with the model animation of the virtual scene to form a posture-driven skeleton animation, the scene role in the virtual scene remaining consistent with the real-world motion posture of the user.
2. The multi-user interactive collaboration method for virtual reality according to claim 1, characterized in that the role location data includes the initial joint-position data, role posture data, and role absolute-position data.
3. The multi-user interactive collaboration method for virtual reality according to claim 1, characterized in that the motion-capture equipment includes a video-image recognition device or a wearable position-sensing device.
4. A multi-user interactive collaboration system for virtual reality, characterized by comprising:
motion-capture equipment for collecting user skeleton data, the skeleton data including the three-dimensional coordinates of each of the user's skeletal joint points;
multiple clients connected to the motion-capture equipment, each client comprising:
a posture-modeling unit for performing data modeling on the user skeleton data to obtain user posture data;
a position-mapping unit for mapping the user posture data onto the initial joint-position data of each joint point of a skeleton model;
a server comprising:
a role-binding unit for binding the initial joint-position data of the skeleton model to the user's scene role to obtain role location data;
a synchronous-transfer unit for synchronously transferring the role location data to the other scene roles;
each client further comprising:
a joint-position updating unit for updating the initial joint-position data of the scene role according to the role location data, so that the scene role moves up and down, horizontally, and/or rotationally;
an animation-combining unit for combining the updated joint-position data of the scene role with the model animation of the virtual scene to form a posture-driven skeleton animation, the scene role in the virtual scene remaining consistent with the real-world motion posture of the user;
a display unit for displaying the posture-driven skeleton animation.
5. The multi-user interactive collaboration system for virtual reality according to claim 4, characterized in that the role location data includes the initial joint-position data, role posture data, and role absolute-position data.
6. The multi-user interactive collaboration system for virtual reality according to claim 4, characterized in that the motion-capture equipment includes a video-image recognition device or a wearable position-sensing device.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910122971.6A CN109885163A (en) | 2019-02-18 | 2019-02-18 | Multi-user interactive collaboration method and system for virtual reality |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910122971.6A CN109885163A (en) | 2019-02-18 | 2019-02-18 | Multi-user interactive collaboration method and system for virtual reality |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109885163A true CN109885163A (en) | 2019-06-14 |
Family
ID=66928453
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910122971.6A Pending CN109885163A (en) | 2019-02-18 | 2019-02-18 | Multi-user interactive collaboration method and system for virtual reality |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109885163A (en) |
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110515466A (en) * | 2019-08-30 | 2019-11-29 | 贵州电网有限责任公司 | Motion capture system based on virtual reality scene |
CN110728739A (en) * | 2019-09-30 | 2020-01-24 | 杭州师范大学 | Virtual human control and interaction method based on video stream |
CN111028339A (en) * | 2019-12-06 | 2020-04-17 | 国网浙江省电力有限公司培训中心 | Behavior action modeling method and device, electronic equipment and storage medium |
CN111784809A (en) * | 2020-07-09 | 2020-10-16 | 网易(杭州)网络有限公司 | Virtual character skeleton animation control method and device, storage medium and electronic equipment |
CN111968205A (en) * | 2020-07-31 | 2020-11-20 | 深圳市木愚科技有限公司 | Driving method and system of bionic three-dimensional model |
CN112947758A (en) * | 2021-03-04 | 2021-06-11 | 北京京航计算通讯研究所 | Multi-user virtual-real cooperative system based on VR technology |
CN113034651A (en) * | 2021-03-18 | 2021-06-25 | 腾讯科技(深圳)有限公司 | Interactive animation playing method, device, equipment and storage medium |
CN113407031A (en) * | 2021-06-29 | 2021-09-17 | 国网宁夏电力有限公司 | VR interaction method, system, mobile terminal and computer readable storage medium |
CN113570690A (en) * | 2021-08-02 | 2021-10-29 | 北京慧夜科技有限公司 | Interactive animation generation model training method, interactive animation generation method and system |
CN114116081A (en) * | 2020-08-10 | 2022-03-01 | 北京字节跳动网络技术有限公司 | Interactive dynamic fluid effect processing method and device and electronic equipment |
CN114119857A (en) * | 2021-10-13 | 2022-03-01 | 北京市应急管理科学技术研究院 | Processing method, system and storage medium for synchronizing position and limb of character avatar |
CN114115534A (en) * | 2021-11-12 | 2022-03-01 | 山东大学 | Relationship enhancement system and method based on room type interactive projection |
CN114415909A (en) * | 2021-12-27 | 2022-04-29 | 宝宝巴士股份有限公司 | Node interaction method and device based on cos2dx |
CN114564259A (en) * | 2022-01-24 | 2022-05-31 | 杭州博联智能科技股份有限公司 | Method and system for generating visual interface |
CN115220578A (en) * | 2022-06-30 | 2022-10-21 | 华东交通大学 | Interactive VR system and method based on optical motion capture |
US11809616B1 (en) | 2022-06-23 | 2023-11-07 | Qing Zhang | Twin pose detection method and system based on interactive indirect inference |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101976451A (en) * | 2010-11-03 | 2011-02-16 | 北京航空航天大学 | Motion control and animation generation method based on acceleration transducer |
US20170021275A1 (en) * | 2002-07-27 | 2017-01-26 | Sony Interactive Entertainment America Llc | Method and System for Applying Gearing Effects to Visual Tracking |
CN107833271A (en) * | 2017-09-30 | 2018-03-23 | 中国科学院自动化研究所 | Bone retargeting method and device based on Kinect |
CN108011886A (en) * | 2017-12-13 | 2018-05-08 | 上海曼恒数字技术股份有限公司 | Collaborative control method, system, equipment and storage medium |
- 2019-02-18: application CN201910122971.6A filed in China; published as CN109885163A; status Pending
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170021275A1 (en) * | 2002-07-27 | 2017-01-26 | Sony Interactive Entertainment America Llc | Method and System for Applying Gearing Effects to Visual Tracking |
CN101976451A (en) * | 2010-11-03 | 2011-02-16 | 北京航空航天大学 | Motion control and animation generation method based on acceleration transducer |
CN107833271A (en) * | 2017-09-30 | 2018-03-23 | 中国科学院自动化研究所 | Bone retargeting method and device based on Kinect |
CN108011886A (en) * | 2017-12-13 | 2018-05-08 | 上海曼恒数字技术股份有限公司 | Collaborative control method, system, equipment and storage medium |
Cited By (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110515466A (en) * | 2019-08-30 | 2019-11-29 | 贵州电网有限责任公司 | Motion capture system based on virtual reality scene |
CN110515466B (en) * | 2019-08-30 | 2023-07-04 | 贵州电网有限责任公司 | Motion capture system based on virtual reality scene |
CN110728739B (en) * | 2019-09-30 | 2023-04-14 | 杭州师范大学 | Virtual human control and interaction method based on video stream |
CN110728739A (en) * | 2019-09-30 | 2020-01-24 | 杭州师范大学 | Virtual human control and interaction method based on video stream |
CN111028339A (en) * | 2019-12-06 | 2020-04-17 | 国网浙江省电力有限公司培训中心 | Behavior action modeling method and device, electronic equipment and storage medium |
CN111028339B (en) * | 2019-12-06 | 2024-03-29 | 国网浙江省电力有限公司培训中心 | Behavior modeling method and device, electronic equipment and storage medium |
CN111784809A (en) * | 2020-07-09 | 2020-10-16 | 网易(杭州)网络有限公司 | Virtual character skeleton animation control method and device, storage medium and electronic equipment |
CN111784809B (en) * | 2020-07-09 | 2023-07-28 | 网易(杭州)网络有限公司 | Virtual character skeleton animation control method and device, storage medium and electronic equipment |
CN111968205A (en) * | 2020-07-31 | 2020-11-20 | 深圳市木愚科技有限公司 | Driving method and system of bionic three-dimensional model |
CN114116081B (en) * | 2020-08-10 | 2023-10-27 | 抖音视界有限公司 | Interactive dynamic fluid effect processing method and device and electronic equipment |
CN114116081A (en) * | 2020-08-10 | 2022-03-01 | 北京字节跳动网络技术有限公司 | Interactive dynamic fluid effect processing method and device and electronic equipment |
CN112947758A (en) * | 2021-03-04 | 2021-06-11 | 北京京航计算通讯研究所 | Multi-user virtual-real cooperative system based on VR technology |
CN113034651A (en) * | 2021-03-18 | 2021-06-25 | 腾讯科技(深圳)有限公司 | Interactive animation playing method, device, equipment and storage medium |
CN113034651B (en) * | 2021-03-18 | 2023-05-23 | 腾讯科技(深圳)有限公司 | Playing method, device, equipment and storage medium of interactive animation |
CN113407031B (en) * | 2021-06-29 | 2023-04-18 | 国网宁夏电力有限公司 | VR (virtual reality) interaction method, VR interaction system, mobile terminal and computer readable storage medium |
CN113407031A (en) * | 2021-06-29 | 2021-09-17 | 国网宁夏电力有限公司 | VR interaction method, system, mobile terminal and computer readable storage medium |
CN113570690A (en) * | 2021-08-02 | 2021-10-29 | 北京慧夜科技有限公司 | Interactive animation generation model training method, interactive animation generation method and system |
CN114119857A (en) * | 2021-10-13 | 2022-03-01 | 北京市应急管理科学技术研究院 | Processing method, system and storage medium for synchronizing position and limb of character avatar |
CN114115534A (en) * | 2021-11-12 | 2022-03-01 | 山东大学 | Relationship enhancement system and method based on room type interactive projection |
CN114115534B (en) * | 2021-11-12 | 2023-12-22 | 山东大学 | Relationship enhancement system and method based on room type interactive projection |
CN114415909A (en) * | 2021-12-27 | 2022-04-29 | 宝宝巴士股份有限公司 | Node interaction method and device based on cos2dx |
CN114415909B (en) * | 2021-12-27 | 2023-12-26 | 宝宝巴士股份有限公司 | Node interaction method and device based on cocos2dx |
CN114564259A (en) * | 2022-01-24 | 2022-05-31 | 杭州博联智能科技股份有限公司 | Method and system for generating visual interface |
US11809616B1 (en) | 2022-06-23 | 2023-11-07 | Qing Zhang | Twin pose detection method and system based on interactive indirect inference |
CN115220578A (en) * | 2022-06-30 | 2022-10-21 | 华东交通大学 | Interactive VR system and method based on optical motion capture |
Similar Documents
Publication | Title |
---|---|
CN109885163A (en) | A kind of more people's interactive cooperation method and systems of virtual reality | |
EP3332565B1 (en) | Mixed reality social interaction | |
CN108986189B (en) | Method and system for capturing and live broadcasting of real-time multi-person actions based on three-dimensional animation | |
US9717988B2 (en) | Rendering system, rendering server, control method thereof, program, and recording medium | |
CN108525299B (en) | System and method for enhancing computer applications for remote services | |
US10755675B2 (en) | Image processing system, image processing method, and computer program | |
US20160225188A1 (en) | Virtual-reality presentation volume within which human participants freely move while experiencing a virtual environment | |
CN110832442A (en) | Optimized shading and adaptive mesh skin in point-of-gaze rendering systems | |
CN107122045A (en) | A kind of virtual man-machine teaching system and method based on mixed reality technology | |
CN105739703A (en) | Virtual reality somatosensory interaction system and method for wireless head-mounted display equipment | |
US20180357747A1 (en) | Adaptive mesh skinning in a foveated rendering system | |
CN104740874A (en) | Method and system for playing videos in two-dimension game scene | |
CN108595004A (en) | More people's exchange methods, device and relevant device based on Virtual Reality | |
CN108983974A (en) | AR scene process method, apparatus, equipment and computer readable storage medium | |
CN109395375A (en) | A kind of 3d gaming method of interface interacted based on augmented reality and movement | |
CN107945270A (en) | A kind of 3-dimensional digital sand table system | |
CN106125927B (en) | Image processing system and method | |
Schönauer et al. | Wide area motion tracking using consumer hardware | |
CN108983954A (en) | Data processing method, device and system based on virtual reality | |
Park et al. | AR room: Real-time framework of camera location and interaction for augmented reality services | |
CN113313796B (en) | Scene generation method, device, computer equipment and storage medium | |
Liu et al. | Thangka realization based on MR | |
CN205507685U (en) | Virtual reality exhibition of paintings system | |
CN115240272A (en) | Video-based attitude data capturing method | |
CN103593863A (en) | A three-dimensional animation production system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20190614 |