CN115494962A - Virtual human real-time interaction system and method - Google Patents
- Publication number: CN115494962A (application CN202211443555.4A)
- Authority: CN (China)
- Prior art keywords: information, virtual, client, real, virtual human
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/80—Special adaptations for executing a specific game genre or game mode
- A63F13/822—Strategy games; Role-playing games
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06F3/04815—Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F2300/00—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
- A63F2300/80—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game specially adapted for executing a specific type of game
- A63F2300/807—Role playing or strategy games
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F2300/00—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
- A63F2300/80—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game specially adapted for executing a specific type of game
- A63F2300/8082—Virtual reality
Abstract
The invention provides a virtual human real-time interaction system and method. The system comprises: a client component, comprising a plurality of sub-clients, each of which is either a virtual human client or a player client; a cloud server component, which receives data from the client component, stores and processes it, and distributes it to each sub-client; a motion capture component, connected to the virtual human client, which collects the performance information of a virtual human performer; and virtual reality hardware devices, connected to the sub-clients, through which human users participate in a virtual scene. The system and its companion method address the technical problems that virtual humans in the prior art offer only a single mode of interactive expression and lack realism: human players can interact with a virtual human in a shared virtual scene, and a real-time immersive digital entertainment space is realized through real-time feedback, motion capture, and cloud-server information transmission.
Description
Technical Field
The invention relates to the fields of virtual reality and human-computer interaction, and in particular to a virtual human real-time interaction system and method.
Background
A virtual human is a digital avatar driven by computer technology; by driving mode, virtual humans are classified as AI-driven or real-person-driven. Currently, virtual humans used for entertainment are mainly real-person-driven: the motions, expressions, and other performances of a real person are captured and recorded and used to drive the virtual human's presentation. Existing implementations rely primarily on motion capture and computer rendering, yet few consider real-time game interaction between a virtual human and its audience from the perspective of an overall system integrating these new technologies. As a result, most existing entertainment virtual humans cannot form an interactive relationship with the audience; they are presented through non-interactive media such as videos and pictures, that is, fixed, pre-recorded, one-way presentations.
At present, most virtual humans can only present performances in a fixed, preset manner and cannot interact with humans in real time. In the few scenarios that do support real-time interaction, such as virtual live streaming, the forms of interaction are limited, mainly voice and facial expressions. Virtual human interaction in the prior art therefore lacks realism and real-time responsiveness, its modes of expression are limited, and users cannot have an immersive experience.
Disclosure of Invention
To address the technical problems that the interactive expression of virtual humans in the prior art is limited and lacks realism, the invention provides a virtual human real-time interaction system and method, so that a human player and a virtual human can interact in a shared virtual scene, and a real-time immersive digital entertainment space is realized through real-time feedback, motion capture, and cloud-server information transmission.
To this end, the invention adopts the following technical scheme:
The invention provides a virtual human real-time interaction system, comprising:
a client component, comprising a plurality of sub-clients, each of which is either a virtual human client or a player client;
a cloud server component, which receives data from the client component, stores and processes it, and distributes it to each sub-client;
a motion capture component, connected to the virtual human client, which collects the performance information of a virtual human performer; and
virtual reality hardware devices, connected to the sub-clients, through which human users participate in the virtual scene;
wherein the human users comprise the virtual human performer and the players.
Further, the performance information comprises motion information, expression information, and voice information;
the motion capture component comprises:
a motion capture device for acquiring the motion information and expression information of the virtual human performer; and
a voice processing device for acquiring the voice information of the virtual human performer.
Further, the cloud server component comprises a database component for storing and packaging the data from the clients; after data packets are formed, the cloud server component distributes them.
Further, each virtual reality hardware device may be chosen, as required, to be either a device capable of interacting with the user or a display-only device.
Further, each sub-client comprises a digital scene component for generating the virtual scene.
The invention also provides a virtual human real-time interaction method based on the above system, comprising the following steps:
A1: the cloud server component receives and processes information from each sub-client, then packages the received information and distributes it to every sub-client;
A2: each sub-client receives information from the cloud server component and presents it to its user in real time through the virtual reality hardware device;
A3: the virtual human performer performs in response to the player operation information presented by the virtual reality hardware device, while the motion capture component collects the performance information and sends it to the virtual human client;
A4: each sub-client forwards the information it collects to the cloud server component in real time;
wherein steps A1 to A4 all run in real time and in no fixed order.
Further, in step A1, the information from the sub-clients comprises performance information from the virtual human client and player operation information from the player clients.
Further, the performance information comprises motion information, expression information, and voice information, wherein the motion information is processed by the virtual human client, which converts the captured motions of the virtual human performer into skeletal motion information for the three-dimensional model.
Further, when processing the motion information, the virtual human client performs a value-extraction (downsampling) step.
Further, the player client performs interpolation when presenting the motion information contained in the performance information.
The beneficial effects of the invention are as follows:
The virtual human real-time interaction system combines the virtual human field with the field of game interaction and provides a new form of real-time human-computer interaction: a human performer receives the players' operations in real time and performs feedback in real time, while the motion capture component makes the performance more lifelike. Players can therefore experience real, immersive interaction in a virtual world. This addresses the currently limited interactivity between virtual humans and humans, highlights the real-time nature of the interaction, improves its effect, and realizes the conversion of the virtual human from a fixed presentation mode to an interactive-media presentation mode.
The virtual human real-time interaction method enables a human player to interact with a virtual human in a shared virtual scene and realizes a real-time immersive digital entertainment space through real-time feedback, motion capture, and cloud-server information transmission. By introducing cloud server technology, the invention removes the spatial constraint between game players and the virtual human: human participants in different physical spaces can play with the virtual human and with other players through their clients, without being present on site.
Drawings
FIG. 1 is a diagram of the system logic in an embodiment of the present invention;
FIG. 2 is a schematic diagram of the data logic in an embodiment of the present invention;
FIG. 3 is a schematic diagram of a game mode and a virtual scene according to an embodiment of the present invention;
FIG. 4 is a schematic illustration of a field display in accordance with an embodiment of the present invention;
FIG. 5 is a schematic diagram of experimental results in an example of the present invention;
fig. 6 is a schematic flow chart of a virtual human real-time interaction method in the embodiment of the present invention.
Detailed Description
To make the technical solutions and advantages of the present invention clearer, the technical solutions of the embodiments are fully described below with reference to the accompanying drawings. It is to be understood that the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art without creative effort, based on the embodiments given here, fall within the protection scope of the present invention.
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, where identical or similar reference numerals denote identical or similar elements, or elements with identical or similar functions, throughout. The embodiments described below with reference to the drawings are illustrative, intended to explain the present invention, and are not to be construed as limiting it.
Most existing virtual human performances adopt the technical scheme of computer animation production, which limits the virtual human's real-time responsiveness and interactivity. Games are an interactive form of entertainment; interactivity is one of their essential characteristics and one of the core directions of virtual human development. Traditional computer animation generally relies on interpolated or motion-captured animation and offline rendering, which meets neither this development direction nor the audience's demand for real-time interaction with virtual humans. A technical scheme for real-time interactive virtual human games can therefore overcome the offline rendering and lack of interactivity of the traditional computer animation approach.
The aim of the invention is to establish a virtual human real-time interaction system that provides the audience with a digital entertainment space for real-time interaction with a virtual human, and realizes the conversion of the virtual human from a fixed presentation mode to an interactive-media presentation mode.
In some embodiments, the invention adopts the following technical scheme:
a virtual human real-time interaction system comprises:
a client component, comprising a plurality of sub-clients, each of which may be a virtual human client or a player client;
a cloud server component, which receives data from the client component and includes a database; the database stores and packages the data from the clients, and once data packets are formed, the cloud server component distributes them to all sub-clients;
wherein each sub-client includes a digital scene component for generating the virtual scene, the digital scene component comprising a real-time digital game module rendered through computer modeling and a game physics engine;
a motion capture component, connected to the virtual human client, which comprises a motion capture device and a voice processing device for collecting the motions, expressions, and voice of the virtual human performer; and
virtual reality hardware devices, connected to the sub-clients and used by the human users (including the virtual human performer and the players) to participate in the virtual scene; each may be chosen, as required, to be a device capable of interacting with the user or a display-only device. In this embodiment, a player can access the player client through various virtual reality hardware devices, such as VR, AR, or MR headsets, personal computers, or other types of game devices. The virtual human performer can additionally use ordinary display devices as the hardware for accessing the sub-client, because the performer does not transmit operation information through the virtual reality hardware; instead, more accurate and more lifelike performance information is collected through the motion capture component.
In practical applications, non-player characters (NPCs) in interactive scenes such as games are important modules of game interaction; in entertainment scenes in particular, a high-quality virtual human forms the core of performance interaction. Equipping the virtual human performance with the (more costly) motion capture component and real-time interaction system is therefore the most effective way to improve game immersion and experience, while game players can connect through a variety of existing commercial virtual reality devices and operate via preset motion and expression forms. This respects players' long-standing usage habits and greatly improves the real-time responsiveness and immersion of the game without raising the players' cost of participation.
The motion capture component captures the virtual human performer's motion, expression, and voice in real time: for example, optical motion capture with a Kinect device, facial expression capture with a depth camera, and voice capture with a microphone. The virtual human client processes the motion information in real time into skeleton information for the virtual human's three-dimensional model, processes the expression information into facial feature points that drive the expression animation of the virtual human model, and processes the voice information into sounds of different pitches; the virtual human client then sends this information to the cloud server component, which distributes it to the player clients. The cloud server component receives and processes the information transmitted over the internet by the virtual human client and the player clients, chiefly compressing it and storing it in the database, and distributes the states of the virtual human and the players to all sub-clients in real time. The digital scene component refers to a digital entertainment scene produced with modeling software and a game physics engine: for example, digital models built with modeling software such as 3D Studio Max, Blender, or Maya, and game scenes built with game engines such as Unity or Unreal.
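The per-frame processing just described, from a raw capture frame to the skeleton/expression/voice payload that the virtual human client forwards, can be sketched as follows. This is an illustrative Python sketch, not code from the patent: the joint names, the bone mapping, and the payload fields are all assumptions, and a real pipeline would retarget a full skeleton inside the game engine.

```python
import math

# Illustrative joint-to-bone mapping; a real system covers the whole skeleton.
BONES = {"upper_arm_l": ("shoulder_l", "elbow_l")}

def bone_rotation(parent_xyz, child_xyz):
    """Yaw/pitch (degrees) of the bone vector between two captured joints."""
    dx, dy, dz = (c - p for c, p in zip(child_xyz, parent_xyz))
    yaw = math.degrees(math.atan2(dx, dz))
    pitch = math.degrees(math.atan2(dy, math.hypot(dx, dz)))
    return round(yaw, 2), round(pitch, 2)

def process_capture_frame(joints, face_points, voice_pitch_shift):
    """Turn one raw capture frame into the skeleton/expression/voice payload
    the virtual human client would send on to the cloud server component."""
    skeleton = {bone: bone_rotation(joints[p], joints[c])
                for bone, (p, c) in BONES.items()}
    return {"skeleton": skeleton,
            "expression": face_points,          # normalized facial feature points
            "voice": {"pitch_shift": voice_pitch_shift}}

frame = process_capture_frame(
    {"shoulder_l": (0.0, 1.4, 0.0), "elbow_l": (0.3, 1.2, 0.1)},
    {"mouth_open": 0.2},
    3)
print(frame["skeleton"]["upper_arm_l"])
```

In an engine such as Unity the resulting rotations would be applied to the humanoid rig each frame, while the facial feature points drive blendshape weights.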
The interactive game scheme is implemented through game design, so that players can interact with the virtual human in real time in the same virtual digital space through virtual avatars. The virtual reality hardware devices access the clients through a distributed virtual reality system: for example, virtual reality terminals such as the Oculus Quest 2 or Pico Neo are streamed to computers connected over a network, forming the distributed virtual reality hardware through which players interact with the game.
As a preferred implementation of the virtual human real-time interaction system, the invention integrates interactive games into one-way artistic forms such as live streaming and stage performance, realizing the innovation from traditional media to interactive media.
To achieve this, the virtual human real-time game interaction system is realized with the following technical scheme:
(1) Player client step: a player streams from a virtual reality device to a computer client and enters the virtual digital game scene; the client sends the player's operation information to the cloud server and receives the virtual human information and other players' information sent by the cloud server, so that the player participates in the game together with the virtual human and the other players;
(2) Virtual human performer step: the virtual human performer wears the motion capture equipment and a virtual reality device, enters the virtual digital game scene through the virtual human client, sends motion capture information to the cloud server, and simultaneously receives the players' operation information; through the virtual reality device the performer can see the players' behavior in the game and interacts with them in real time according to the game objective;
(3) Cloud server step: the cloud server provides the information transmission medium and channel between the players and the virtual human, and realizes real-time online game interaction by receiving, processing, and distributing information to the clients.
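The cloud server step (receive, process and store, distribute) amounts to a fan-out relay backed by a database of the latest state per client. Below is a minimal in-memory Python sketch; the class and field names are invented for illustration, and a real deployment would use network sockets and a persistent database rather than Python lists and dicts.

```python
import json

class CloudRelay:
    """Illustrative receive/store/distribute loop of the cloud server component."""

    def __init__(self):
        self.state = {}    # latest payload per client id (the "database")
        self.clients = {}  # client id -> outbound packet queue

    def connect(self, client_id):
        self.clients[client_id] = []

    def receive(self, client_id, payload):
        # Store the update, then fan it out to every *other* sub-client.
        self.state[client_id] = payload
        packet = json.dumps({"from": client_id, "data": payload})
        for cid, queue in self.clients.items():
            if cid != client_id:
                queue.append(packet)

relay = CloudRelay()
for cid in ("virtual_human", "player_1", "player_2"):
    relay.connect(cid)
relay.receive("player_1", {"pos": [2.0, 0.0, 5.0]})
print(len(relay.clients["virtual_human"]))
```

The virtual human client and each player client would run `receive` over the network every frame, so that all sub-clients converge on the same scene state in real time.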
The invention further provides a virtual human real-time interaction method based on the above system. In some embodiments, as shown in fig. 6, the method comprises the following steps:
A1: the cloud server component receives and processes the performance information from the virtual human client and the player operation information from the player clients, then packages the received information and distributes it to the sub-clients;
A2: each sub-client receives information from the cloud server component and presents it to its user in real time through the virtual reality hardware device;
A3: the virtual human performer performs in response to the player operation information presented by the virtual reality hardware device, while the performance information is collected by the motion capture component and sent to the virtual human client;
A4: each sub-client forwards the information it collects to the cloud server component in real time.
It should be noted that the sequence in the figure is only one specific implementation; in practice, steps A1 to A4 all run in real time and in no fixed order.
The performance information comprises motion information, expression information, and voice information, wherein the motion information is processed by the virtual human client, which converts the captured motions of the virtual human performer into skeletal motion information for the three-dimensional model.
The motion capture data can be optimized, for example by value extraction (downsampling), to reduce the amount of data transmitted and improve transmission efficiency; the client then restores the motion capture data by interpolation.
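The optimization described here, dropping frames before upload and restoring them by interpolation at the client, can be illustrated on a one-dimensional channel (one bone angle over time). This is a hedged sketch rather than the patent's implementation: the keep-every-third-frame rate and the choice of linear interpolation are assumptions.

```python
def downsample(frames, keep_every=3):
    """Keep every Nth capture frame before upload (the 'value extraction')."""
    return frames[::keep_every]

def lerp_restore(kept, keep_every=3):
    """Client-side linear interpolation that approximately restores the
    intermediate frames between the kept keyframes."""
    restored = []
    for a, b in zip(kept, kept[1:]):
        for i in range(keep_every):
            t = i / keep_every
            restored.append(a + (b - a) * t)
    restored.append(kept[-1])
    return restored

frames = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0]   # one angle sampled per frame
kept = downsample(frames)                        # only these values are transmitted
print(lerp_restore(kept))                        # approximates the original sequence
```

Linear interpolation recovers smooth motion exactly; fast, non-linear gestures lose some detail, which is the usual trade-off against transmission volume.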
as shown in fig. 1, which illustrates the system logic of the present invention. The user is connected with the virtual human real-time game interaction system disclosed by the invention through the terminal, the virtual human real-time game interaction system comprises a motion capture component, a cloud server component, a digital scene component and virtual reality hardware equipment, and the terminal is connected with the virtual digital human. The virtual digital human receives the user information, the real human performer makes corresponding interaction, virtual human data capture is performed through voice processing, motion capture and the like, and real-time rendering and display are performed. And the user and the virtual person play game interaction in the same virtual digital scene through the terminal.
Fig. 2 illustrates the data logic of the invention. The cloud server is the data core: it receives information and performs processing, storage, and distribution. It contains a database for recording and distributing data; incoming data are passed to the database, which records and stores information such as each player's identity and position coordinates before distribution. The virtual digital human performer side mainly processes the motion capture data, chiefly matching them to the skeleton of the virtual human three-dimensional model in the game engine; the processed data are sent to the cloud server through the client, while the audience client information distributed by the server is received and presented to the performer through an audio-video monitoring setup (such as a display screen and loudspeaker), so that the performer can follow the audience's game state in real time. The audience client mainly processes the audience's operation data, chiefly converting the data format for convenient transmission, for example rewriting discrete position coordinates into matrix form; the processed data are sent to the cloud server through the client, while the information from the virtual human side and the other audience clients distributed by the server is received. The client of the invention is a program running on a personal computer; a virtual reality device streamed to the computer can serve as the client's output device.
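The format conversion mentioned for the audience client, rewriting discrete per-player position coordinates into matrix form for transmission, might look like the following Python sketch. The function names and the sorted-id index scheme are illustrative assumptions, not the patent's wire format.

```python
def pack_positions(players):
    """Rewrite per-player discrete position coordinates as one matrix
    (one row per player) plus an id index, for compact transmission."""
    ids = sorted(players)
    matrix = [list(players[pid]) for pid in ids]
    return ids, matrix

def unpack_positions(ids, matrix):
    """Inverse conversion performed on the receiving side."""
    return {pid: tuple(row) for pid, row in zip(ids, matrix)}

players = {"player_2": (4.0, 0.0, 1.5), "player_1": (2.0, 0.0, 5.0)}
ids, matrix = pack_positions(players)
assert unpack_positions(ids, matrix) == players   # lossless round trip
print(matrix)
```

Packing into a fixed-shape matrix lets the server compress and store one homogeneous block per tick instead of many small keyed records.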
The following examples are included to further illustrate the embodiments of the present invention:
Figs. 3 and 4 show "Cat Trip", a virtual human real-time interactive game made according to the invention and exhibited at the International Conference on Art and Design Education at Tsinghua University. A player enters the game through a virtual reality terminal with the avatar of a cat and, using the cat's night vision, interacts in a blacked-out virtual scene with the virtual human controlled by the performer and with the cats controlled by other players, completing the game's tasks together. Technically, the game was modeled in 3D Studio Max, the game scene was built with the Unity game engine using the C# and Lua programming languages, the database was built on a Tencent cloud server, and the motion of the virtual human was captured with a Kinect device. The game display terminals are virtual reality devices streamed from computers; when virtual reality devices are in short supply, a computer screen can be used for display directly.
The digital game scene proposed by the invention is designed mainly around the system and data modules described above. The real-time game scene follows the design principle of an asymmetric model, whose connotation mainly comprises:
(1) An asymmetric interaction pattern. The interaction between the virtual human and the audience is a one-performer-to-many-audience mode, which forms an asymmetric relationship between the virtual performer and the audience.
(2) An asymmetric visual style. Correspondingly, the visual design of the interactive virtual human game based on the system and data logic of the invention follows the asymmetric principle and highlights the leading role of the virtual human and the participant role of the audience. For example, the virtual stage design emphasizes the star image, the virtual human and the audience avatars differ greatly in scale, and the audience avatars are non-human figures; all of these belong to the design ideas of the proposed game mode.
(3) Asymmetric game rules. The virtual human and the audience have asymmetric game objectives, and the game rules designed on that basis follow the asymmetric principle; for example, the virtual human and the audience have different in-game value attributes.
(4) An asymmetric business model. A business model consistent with the invention follows the asymmetric principle, in which the audience's consumption rules take an asymmetric form: audience members do not buy one-to-one services but an asymmetric one-to-many service, experiencing the services provided by the virtual human while participating in games with other audience members, and creating commercial value together with them and with the virtual human.
"Cat Trip" was evaluated experimentally with 12 participants (6 aged 21-25, 5 aged 26-30, and 1 over 35), all of whom were experiencing the game for the first time, using the GEQ (Game Experience Questionnaire) scale (Table 1). The results, shown in fig. 5, indicate that the proposed interaction is an easy-to-use, highly immersive mode of real-time virtual human game interaction.
TABLE 1: GEQ core scale
In the description herein, reference to the terms "one embodiment", "an example", or the like means that a particular feature, structure, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, schematic uses of these terms do not necessarily refer to the same embodiment or example, and the particular features, structures, or characteristics described may be combined in any suitable manner in one or more embodiments or examples.
It should be noted that the above-mentioned embodiments are intended only to help understand the core idea of the present invention, not to limit it, and it will be apparent to those skilled in the art that modifications and equivalent alternatives made without departing from the principle of the present invention also fall within the protection scope of the appended claims.
Claims (10)
1. A virtual human real-time interaction system is characterized by comprising:
a client component comprising a plurality of sub-clients, each sub-client being a virtual human client or a player client;
a cloud server component for receiving data from the client component, storing and processing the data, and distributing the data to each sub-client;
a motion capture component connected with the virtual human client and used for acquiring performance information of a virtual human performer;
a virtual reality hardware device connected with a sub-client, through which a human user participates in the virtual scene;
wherein the human user comprises an avatar performer and a player.
2. The virtual human real-time interaction system of claim 1, wherein:
the performance information comprises action information, expression information and voice information;
the motion capture component includes:
the motion capture device is used for acquiring the motion information and the expression information of the virtual human performer;
and the voice processing device is used for acquiring the voice information of the virtual human performer.
3. The virtual human real-time interaction system as claimed in claim 1, wherein:
the cloud server component further comprises a database component, the database component being used for storing and packaging data from the clients, and the cloud server component distributing the data after they are formed into a data package.
4. The virtual human real-time interaction system as claimed in claim 1, wherein the virtual reality hardware device can be selected, as required, to be either a device capable of interacting with the user or a display-only device.
5. The virtual human real-time interaction system as claimed in claim 1, wherein each of the sub-clients comprises a digital scene component for generating a virtual scene.
6. A virtual human real-time interaction method, characterized in that, based on the virtual human real-time interaction system of any one of claims 1-5, the method comprises the following steps:
a1: the cloud server component receives and processes information from each sub-client, and packages and distributes the received information to each sub-client;
a2: the sub-client receives information from the cloud server component and presents the information to a user in real time through the virtual reality hardware equipment;
a3: the virtual human performer performs according to the player operation information presented by the virtual reality hardware equipment, and the motion capture component collects performance information and sends it to the virtual human client;
a4: the sub-client forwards the acquired information to the cloud server component in real time;
wherein steps A1 to A4 are carried out continuously in real time and in no fixed order.
7. The method according to claim 6, wherein in step A1, the information from each sub-client includes performance information from the virtual human client and player operation information from the player client.
8. The method of claim 7, wherein the performance information comprises action information, expression information, and voice information, and wherein the action information is processed by the virtual human client, which converts the collected actions of the virtual human performer into skeletal motion information for the three-dimensional model.
9. The method according to claim 8, wherein the virtual human client performs snapshot processing while processing the action information.
10. The method of claim 9, wherein the player client performs interpolation processing when presenting the action information in the performance information.
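The receive-package-distribute cycle of steps A1 to A4 can be sketched as follows. This is an illustrative sketch only, not part of the claimed system: the class name, field names, and message layout are assumptions, and the cloud server component is reduced to keeping the newest state per sub-client and fanning one serialized package back out to every sub-client each tick.

```python
import json
from dataclasses import dataclass, field


@dataclass
class CloudServer:
    """Hypothetical stand-in for the cloud server component of steps A1-A4."""
    latest: dict = field(default_factory=dict)    # client_id -> newest payload
    subscribers: list = field(default_factory=list)

    def receive(self, client_id: str, payload: dict) -> None:
        """A1 (receive): keep only the newest state from each sub-client."""
        self.latest[client_id] = payload

    def tick(self) -> str:
        """A1 (package + distribute): serialize one package, push it to all."""
        package = json.dumps({"frame": self.latest})
        for deliver in self.subscribers:
            deliver(package)
        return package


server = CloudServer()
received = []                                    # a player client's inbox
server.subscribers.append(received.append)

# The virtual human client uploads performance information (A4); a player
# client uploads operation information (A4). Field names are illustrative.
server.receive("avatar", {"action": "wave", "expression": "smile"})
server.receive("player-1", {"op": "cheer"})
package = server.tick()

print(json.loads(received[0])["frame"]["avatar"]["action"])   # wave
```

In a real deployment the per-client payloads would carry the skeletal motion, expression, and voice data of claim 2, and distribution would go over the network rather than an in-process callback; the point here is only the latest-state-then-broadcast shape of the cycle.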
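The interpolation of claims 9 and 10 can be illustrated with a minimal sketch: the virtual human client snapshots skeletal frames at a reduced rate, so the player client blends the two most recent frames to present smooth motion. The names and the quaternion-per-joint pose representation are assumptions for illustration, not details taken from the patent; spherical linear interpolation (slerp) is one standard choice for blending joint rotations.

```python
import math
from dataclasses import dataclass


@dataclass
class BoneRotation:
    """Unit quaternion (w, x, y, z) for one joint of the 3D model skeleton."""
    w: float
    x: float
    y: float
    z: float


def slerp(a: BoneRotation, b: BoneRotation, t: float) -> BoneRotation:
    """Spherical linear interpolation between two unit quaternions."""
    dot = a.w * b.w + a.x * b.x + a.y * b.y + a.z * b.z
    if dot < 0.0:                       # take the shorter arc
        b = BoneRotation(-b.w, -b.x, -b.y, -b.z)
        dot = -dot
    if dot > 0.9995:                    # nearly identical: normalized lerp
        r = BoneRotation(a.w + t * (b.w - a.w), a.x + t * (b.x - a.x),
                         a.y + t * (b.y - a.y), a.z + t * (b.z - a.z))
        n = math.sqrt(r.w ** 2 + r.x ** 2 + r.y ** 2 + r.z ** 2)
        return BoneRotation(r.w / n, r.x / n, r.y / n, r.z / n)
    theta = math.acos(dot)
    sa = math.sin((1 - t) * theta) / math.sin(theta)
    sb = math.sin(t * theta) / math.sin(theta)
    return BoneRotation(sa * a.w + sb * b.w, sa * a.x + sb * b.x,
                        sa * a.y + sb * b.y, sa * a.z + sb * b.z)


def present_pose(prev_frame: dict, next_frame: dict, t: float) -> dict:
    """Player-client side: blend two received skeletal frames at fraction t."""
    return {joint: slerp(prev_frame[joint], next_frame[joint], t)
            for joint in prev_frame}


# Two snapshot frames: identity, then a 90-degree rotation about the Z axis.
identity = BoneRotation(1.0, 0.0, 0.0, 0.0)
quarter = BoneRotation(math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4))
mid = present_pose({"spine": identity}, {"spine": quarter}, 0.5)
print(round(mid["spine"].z, 4))         # 0.3827, i.e. a 45-degree rotation
```

The halfway blend lands exactly between the two snapshots, which is what lets the player client hide the reduced snapshot rate of claim 9 behind smooth presented motion.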
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211443555.4A CN115494962A (en) | 2022-11-18 | 2022-11-18 | Virtual human real-time interaction system and method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115494962A true CN115494962A (en) | 2022-12-20 |
Family
ID=85116185
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211443555.4A Pending CN115494962A (en) | 2022-11-18 | 2022-11-18 | Virtual human real-time interaction system and method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115494962A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117319628A (en) * | 2023-09-18 | 2023-12-29 | 四开花园网络科技(广州)有限公司 | Real-time interactive naked eye 3D virtual scene system supporting outdoor LED screen |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120142415A1 (en) * | 2010-12-03 | 2012-06-07 | Lindsay L Jon | Video Show Combining Real Reality and Virtual Reality |
CN106373142A (en) * | 2016-12-07 | 2017-02-01 | 西安蒜泥电子科技有限责任公司 | Virtual character on-site interaction performance system and method |
CN107274464A (en) * | 2017-05-31 | 2017-10-20 | 珠海金山网络游戏科技有限公司 | A kind of methods, devices and systems of real-time, interactive 3D animations |
CN109874021A (en) * | 2017-12-04 | 2019-06-11 | 腾讯科技(深圳)有限公司 | Living broadcast interactive method, apparatus and system |
US20210366174A1 (en) * | 2020-05-21 | 2021-11-25 | Scott REILLY | Interactive Virtual Reality Broadcast Systems and Methods |
CN114900678A (en) * | 2022-07-15 | 2022-08-12 | 北京蔚领时代科技有限公司 | VR end-cloud combined virtual concert rendering method and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||