CN117539349A - Metaverse experience interaction system and method based on blockchain technology - Google Patents

Metaverse experience interaction system and method based on blockchain technology

Info

Publication number
CN117539349A
CN117539349A (application CN202311491910.XA)
Authority
CN
China
Prior art keywords
data
virtual
digital
interaction
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311491910.XA
Other languages
Chinese (zh)
Inventor
武立辉
伍远山
刘烈
刘毅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiuyao Tianshu Beijing Technology Co ltd
Original Assignee
Jiuyao Tianshu Beijing Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiuyao Tianshu Beijing Technology Co ltd filed Critical Jiuyao Tianshu Beijing Technology Co ltd
Priority to CN202311491910.XA
Publication of CN117539349A
Legal status: Pending (current)

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/27Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/06Buying, selling or leasing transactions
    • G06Q30/0601Electronic shopping [e-shopping]
    • G06Q30/0641Shopping interfaces
    • G06Q30/0643Graphical representation of items or shoppers
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00Animation
    • G06T13/203D [Three Dimensional] animation
    • G06T13/2053D [Three Dimensional] animation driven by audio data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00Animation
    • G06T13/203D [Three Dimensional] animation
    • G06T13/403D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174Facial expression recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • G06V40/28Recognition of hand or arm movements, e.g. recognition of deaf sign language
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/08Speech classification or search
    • G10L15/16Speech classification or search using artificial neural networks
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/08Speech classification or search
    • G10L15/18Speech classification or search using natural language modelling
    • G10L15/1822Parsing for meaning understanding
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/22Procedures used during a speech recognition process, e.g. man-machine dialogue
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L9/00Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L9/32Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system or for message authentication, e.g. authorization, entity authentication, data integrity or data verification, non-repudiation, key authentication or verification of credentials
    • H04L9/3236Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system or for message authentication, e.g. authorization, entity authentication, data integrity or data verification, non-repudiation, key authentication or verification of credentials using cryptographic hash functions
    • H04L9/3239Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system or for message authentication, e.g. authorization, entity authentication, data integrity or data verification, non-repudiation, key authentication or verification of credentials using cryptographic hash functions involving non-keyed hash functions, e.g. modification detection codes [MDCs], MD5, SHA or RIPEMD
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L9/00Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L9/50Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols using hash chains, e.g. blockchains or hash trees

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Business, Economics & Management (AREA)
  • General Health & Medical Sciences (AREA)
  • Acoustics & Sound (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Artificial Intelligence (AREA)
  • Finance (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Social Psychology (AREA)
  • Psychiatry (AREA)
  • Accounting & Taxation (AREA)
  • Computer Hardware Design (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Graphics (AREA)
  • Computing Systems (AREA)
  • General Business, Economics & Management (AREA)
  • Strategic Management (AREA)
  • Evolutionary Computation (AREA)
  • Marketing (AREA)
  • Economics (AREA)
  • Development Economics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a metaverse experience interaction system and method based on blockchain technology, wherein the method comprises the following steps: collecting touch instructions and user data; generating, according to the user data, a digital virtual person that remains dynamically synchronized with the user's expressions and actions; synchronously displaying the digital neighborhood of the metaverse together with the digital virtual person, and realizing synchronous interaction of the digital virtual person in the digital neighborhood according to the interaction data; providing a personalized AI voice assistant service for the user according to the voice interaction data, providing an optimized state space and action space for the user according to the user's interaction actions, providing social interaction functions between users, and providing commodity exhibition and transaction functions in the digital neighborhood; and recording the data generated during the metaverse experience interaction process on the blockchain. The technical scheme of the invention provides the user with a personalized and secure digital neighborhood experience while offering a highly secure and trustworthy foundation for that experience.

Description

Metaverse experience interaction system and method based on blockchain technology
Technical Field
The invention relates to the technical field of virtual reality, and in particular to a metaverse experience interaction system based on blockchain technology and a metaverse experience interaction method based on blockchain technology.
Background
With the rapid development of digital technology, the metaverse has become a new vision for virtual worlds. Conventional user interfaces, however, are generally constrained by screen size and interaction style: interface elements can only be stacked into hierarchies through carefully arranged combinations of information, which serve as a translation layer carrying communication between the user and the computer and allowing the two sides to understand each other.
Such a fixed information structure with predetermined behavior paths must be adapted to and learned by the user, which sacrifices part of the user's autonomy and limits the interaction experience between the user and the system. A new type of interaction system is therefore needed to provide users with a richer and more personalized metaverse experience.
Disclosure of Invention
In view of these problems, the invention provides a metaverse experience interaction system and a metaverse experience interaction method based on blockchain technology. A display interaction module serves as a display interface that collects user data and displays interactive content in real time, and the generated digital virtual person interacts synchronously, according to the user's interaction actions, within the digital neighborhood of the created virtual scene. This provides the user with a personalized and secure digital neighborhood experience in which social interaction and digital commodity exhibition and transaction can take place; in addition, blockchain technology provides a highly secure and trustworthy foundation for the experience interaction process.
In order to achieve the above object, the present invention provides a metaverse experience interaction system based on blockchain technology, comprising: a display interaction module, a virtual person module, an interaction simulation module, a creation interaction platform and a blockchain module;
the display interaction module is a touch display screen and is used for displaying interaction content, receiving touch control instructions and collecting face data, gesture action data, gesture interaction data and voice interaction data of a user;
the virtual person module is used for generating, according to the face data and the gesture action data of the user, a digital virtual person that remains dynamically synchronized with the user's expressions and actions;
the interaction simulation module is used for creating a virtual scene model in advance as a digital neighborhood of the metaverse, synchronously displaying the digital neighborhood together with the digital virtual person on the display interaction module in real time, and realizing synchronous interaction actions in the digital neighborhood according to the gesture interaction data and the voice interaction data;
the creation interaction platform is used for providing an AI voice assistant for the user, providing personalized service for the user according to the voice interaction data, providing an optimized state space and an optimized action space for the user according to the interaction action of the user, providing a social interaction function among the users for the user, and providing a commodity exhibition and sales transaction function in the digital neighborhood for the user;
The blockchain module is used for recording data generated by the display interaction module, the virtual person module, the interaction simulation module and the creation interaction platform on a blockchain.
In the above technical solution, preferably, the virtual person module includes an intelligent perception sub-module, an interaction control sub-module, a data processing sub-module and a character synthesis sub-module;
the intelligent perception sub-module is used for generating a virtual digital human model by a rapid digital human generation technology;
the interaction control sub-module is used for recognizing the facial expression of the user from the facial data, mapping it to corresponding facial action sequence data, and mapping the gesture action data to gesture action sequence data;
the data processing sub-module is used for generating virtual character rendering data through a graphics rendering technology according to the facial action sequence data and the gesture action sequence data, and generating virtual character interaction data according to the touch instruction, the gesture interaction data and the voice interaction data;
and the character synthesis sub-module is used for synthesizing a corresponding virtual digital character according to the virtual character rendering data and the virtual character interaction data in combination with the virtual digital human model.
In the above technical solution, preferably, the interaction simulation module includes a rendering technology sub-module, an intelligent voice sub-module and a motion capture sub-module;
the rendering technology sub-module is used for creating virtual geometric models to serve as the elements of the virtual world, rendering the different elements by a rendering technology to obtain a virtual scene model, and carrying out overall layout construction on the virtual scene model to obtain the digital neighborhood of the metaverse, so that the digital virtual person can interact in the digital neighborhood;
the intelligent voice sub-module performs voice recognition, semantic understanding and intention recognition according to the voice interaction data, and can convert text generated by the system into voice for output;
and the motion capture sub-module adopts a VNect pose estimation algorithm to perform motion capture on the gesture action data and the gesture interaction data, recognizes the whole-body skeleton pose data of the human body, and drives the skeleton pose data to the digital virtual person in real time to realize synchronous motion of the digital virtual person in the digital neighborhood.
In the above technical solution, preferably, the creation interaction platform includes an AI assistant sub-module, an intelligent recommendation sub-module, a social communication sub-module and a virtual store sub-module;
The AI assistant sub-module provides personalized guiding and navigation services for the user according to the voice interaction data, based on a self-supervised end-to-end neural network model;
the intelligent recommendation sub-module establishes a behavior mode of a current user according to behavior data of the digital virtual person in the digital neighborhood, and defines a state space and an action space which meet requirements and preferences for the user according to the behavior mode, wherein the state space is used for describing virtual environments, social relations and personal characteristics of the user in the meta universe, and the action space is used for representing a specific recommended virtual scene, commodity or social interaction mode adopted;
the social communication submodule is used for providing social interaction, communication and cooperation functions among different digital virtual persons in the digital neighborhood and realizing instant communication based on a WebSocket real-time communication protocol;
the virtual store submodule is used for constructing a virtual store, displaying various virtual commodities in the virtual store, providing a transaction mode corresponding to the virtual commodities, and transferring ownership of the virtual commodities after a user pays digital currency based on the digital virtual person.
The invention also provides a metaverse experience interaction method based on blockchain technology, which is applied to the metaverse experience interaction system based on blockchain technology disclosed in any one of the above technical solutions, and comprises the following steps:
Acquiring a touch instruction of a touch display screen, and acquiring face data, gesture action data, gesture interaction data and voice interaction data of a user;
generating a digital virtual person which keeps synchronous and dynamic with the expression and action of the user according to the face data and the gesture action data of the user;
synchronously displaying a digital neighborhood of a meta universe and the digital virtual person on the touch display screen, and realizing synchronous interaction action of the digital virtual person in the digital neighborhood according to the gesture interaction data and the voice interaction data;
providing personalized AI voice assistant service for the user according to the voice interaction data, providing optimized state space and action space for the user according to the interaction action of the user, providing social interaction function between the users for the user, and providing commodity exhibition transaction function in the digital neighborhood for the user;
and recording the data generated in the meta-universe experience interaction process on the blockchain.
In the above technical solution, preferably, the specific process of generating the digital virtual person keeping synchronous and dynamic with the expression and the action of the user according to the face data and the gesture action data of the user includes:
Generating a virtual digital human model by a rapid digital human generation technology;
the facial expression of the user is obtained according to the facial data recognition of the user, corresponding facial action sequence data is generated through mapping, and gesture action sequence data is generated according to the gesture action data mapping;
generating virtual character rendering data through a graphic rendering technology according to the facial action sequence data and the gesture action sequence data, and generating virtual character interaction data according to the touch instruction, the gesture interaction data and the voice interaction data;
and combining the virtual digital human model according to the virtual human rendering data and the virtual human interaction data to synthesize a corresponding virtual digital human.
In the above technical solution, preferably, the step of synchronously displaying the digital neighborhood of the meta-universe and the digital virtual person on the touch display screen, and implementing synchronous interaction of the digital virtual person in the digital neighborhood according to the gesture interaction data and the voice interaction data includes:
creating a virtual geometric model as an element of a virtual world, rendering different elements by using a rendering technology to obtain a virtual scene model, and carrying out overall layout construction on the virtual scene model to obtain a digital neighborhood of a meta universe, so that the digital virtual man can interact in the digital neighborhood;
Performing voice recognition, semantic understanding and intention recognition according to the voice interaction data, and converting text generated by a system into voice for output;
and performing motion capture on the gesture action data and the gesture interaction data by adopting a VNect pose estimation algorithm, identifying the whole-body skeleton pose data of the human body, and driving the skeleton pose data to the digital virtual person in real time to realize synchronous motion of the digital virtual person in the digital neighborhood.
In the above technical solution, preferably, the specific process of performing motion capture on the gesture motion data and the gesture interaction data by using a VNect pose estimation algorithm to identify and obtain the human body whole body skeleton pose data includes:
based on computer vision, performing image recognition on the gesture action data and the gesture interaction data, and extracting a human body boundary box;
performing CNN regression on the image in the human body boundary box to obtain a human body key point thermodynamic diagram;
performing time domain filtering according to the human body key point thermodynamic diagram to obtain human body key point coordinates;
binding human bones according to the human key point coordinates to obtain a three-dimensional posture estimation image;
and adopting the VNect pose estimation algorithm to identify the continuous gesture action data and gesture interaction data, so as to obtain continuous whole-body skeleton pose data of the human body.
In the above technical solution, preferably, a personalized AI voice assistant service is provided for a user according to the voice interaction data, an optimized state space and an optimized action space are provided for the user according to the interaction of the user, a social interaction function between the users is provided for the user, and a commodity exhibition and sales transaction function in the digital neighborhood is provided for the user, and the specific process includes:
providing personalized guiding and navigation services for the user according to the voice interaction data, based on a self-supervised end-to-end neural network model;
establishing a behavior mode of a current user according to behavior data of the digital virtual person in the digital neighborhood, and defining a state space and an action space which meet requirements and preferences for the user according to the behavior mode, wherein the state space is used for describing virtual environments, social relations and personal characteristics of the user in the meta universe, and the action space is used for representing a specific recommended virtual scene, commodity or social interaction mode adopted;
according to the touch instruction and the interaction action, social interaction, communication and cooperation among different digital virtual persons in the digital neighborhood are realized, and instant messaging is realized based on a WebSocket real-time communication protocol;
And constructing a virtual store, displaying various virtual commodities in the virtual store, providing a transaction mode corresponding to the virtual commodities, and transferring ownership of the virtual commodities after the user pays digital currency based on the digital virtual person.
In the above technical solution, preferably, the creating a virtual geometric model is used as an element of a virtual world, and rendering different elements by using a rendering technology to obtain a virtual scene model, and performing overall layout construction on the virtual scene model to obtain a digital neighborhood of a meta universe, where the specific process includes:
creating a geometric scene of the meta universe, importing the existing model resources, and performing geometric information analysis, material information analysis, texture map analysis and animation effect analysis on the model to obtain elements in the meta universe;
placing the modeled elements in the geometric scene, setting the position, rotation and scaling properties of each element, and constructing the overall layout of the meta universe;
setting materials and textures for elements in the meta universe, creating a renderer to render, and obtaining the digital neighborhood by realizing the required appearance effect.
Compared with the prior art, the invention has the following beneficial effects: the display interaction module serves as a display interface that collects user data and displays interactive content in real time; the generated digital virtual person interacts synchronously, according to the user's interaction actions, within the digital neighborhood of the created virtual scene, providing the user with a personalized and secure digital neighborhood experience in which social interaction and digital commodity exhibition and transaction can take place; in addition, blockchain technology provides a highly secure and trustworthy foundation for the experience interaction process.
Drawings
FIG. 1 is a schematic block diagram of a meta-universe experience interactive system based on blockchain technology according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a touch display screen displaying a virtual digital person according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a touch display screen for acquiring user information to obtain a digital virtual person according to an embodiment of the present invention;
fig. 4 is a schematic flow chart of a meta-universe experience interaction method based on a blockchain technology according to an embodiment of the present invention.
In the figure, the correspondence between each component and the reference numeral is:
1. display interaction module; 2. virtual person module; 21. intelligent perception sub-module; 22. interaction control sub-module; 23. data processing sub-module; 24. character synthesis sub-module; 3. interaction simulation module; 31. rendering technology sub-module; 32. intelligent voice sub-module; 33. motion capture sub-module; 34. media playing sub-module; 4. creation interaction platform; 41. AI assistant sub-module; 42. intelligent recommendation sub-module; 43. social communication sub-module; 44. virtual store sub-module; 5. blockchain module.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The invention is described in further detail below with reference to the attached drawing figures:
As shown in fig. 1, the metaverse experience interaction system based on blockchain technology provided by the invention comprises: a display interaction module 1, a virtual person module 2, an interaction simulation module 3, a creation interaction platform 4 and a blockchain module 5;
the display interaction module 1 is a touch display screen and is used for displaying interaction content, receiving touch control instructions and collecting face data, gesture action data, gesture interaction data and voice interaction data of a user;
the virtual person module 2 is used for generating a digital virtual person which keeps synchronous and dynamic with the expression and action of the user according to the face data and the gesture action data of the user;
the interaction simulation module 3 is used for creating a virtual scene model in advance as a digital neighborhood of the metaverse, synchronously displaying the virtual scene model together with the digital virtual person on the display interaction module 1 in real time, and realizing synchronous interaction actions in the digital neighborhood according to the gesture interaction data and the voice interaction data;
the creation interaction platform 4 is used for providing an AI voice assistant for the user, providing personalized services for the user according to the voice interaction data, providing an optimized state space and an optimized action space for the user according to the user's interaction actions, providing social interaction functions between users, and providing commodity exhibition and transaction functions in the digital neighborhood for the user;
The blockchain module 5 is used for recording data generated by the display interaction module 1, the virtual person module 2, the interaction simulation module 3 and the creation interaction platform 4 on the blockchain.
In this embodiment, the display interaction module 1 serves as a display interface that collects user data and displays interactive content in real time, and the generated digital virtual person interacts synchronously, according to the user's interaction actions, within the digital neighborhood of the created virtual scene. This provides the user with a personalized and secure digital neighborhood experience in which social interaction and digital commodity exhibition and transaction can take place; in addition, blockchain technology provides a highly secure and trustworthy foundation for the experience interaction process.
Specifically, an interactive digital-screen system that uses blockchain technology to realize the metaverse experience in a digital neighborhood provides users with a personalized and secure digital neighborhood experience, gives that experience unique digital identities and assets, offers consumers a distinctive and attractive consumption experience, and helps partners achieve their business goals and build deep emotional connections with users.
Specifically, the user is a participant in the digital human virtual interaction system based on the touch display screen and can interact through the touch display screen, including touch operations, voice instructions and gestures. Meanwhile, the user can manage and set personal information through the touch display screen, such as changing personal data, adjusting preference settings and selecting a favorite virtual character style, so that a more personalized and customized user experience can be provided.
As shown in fig. 2, the touch display screen is one of the core components of the digital human virtual interaction system. It is responsible for displaying virtual characters, the user interface and other interactive content, and provides a high-quality visual experience with clear and fine image and video display, enhancing the user's immersion and interactive experience. The touch display screen also supports multi-touch, so the user can operate directly on the screen with a finger or stylus, interacting with the virtual character, operating by gestures and selecting menus by touching the screen, which provides an intuitive and flexible interaction mode.
As shown in fig. 3, the touch display screen is further provided with a camera and a microphone: the camera collects the user's face, gesture action and gesture interaction data, and the microphone collects the user's voice interaction data. The face data include facial movements, expression changes, and action images of the eyes, mouth and other parts; the gesture action data include images of trunk, head and limb actions. The camera is preferably a 3D depth camera or a binocular vision camera.
In the above embodiment, preferably, the virtual person module 2 includes the intelligent perception sub-module 21, the interactive control sub-module 22, the data processing sub-module 23, and the character synthesizing sub-module 24;
The intelligent perception sub-module 21 is used for generating a virtual digital human model by a rapid digital human generation technology;
the interaction control sub-module 22 is configured to identify facial expressions of a user according to facial data of the user, map and generate corresponding facial motion sequence data, and map and generate gesture motion sequence data according to gesture motion data;
the data processing sub-module 23 is configured to generate virtual character rendering data according to the facial motion sequence data and the gesture motion sequence data through a graphics rendering technology, and generate virtual character interaction data according to a touch instruction, gesture interaction data and voice interaction data;
the character synthesizing sub-module 24 is configured to synthesize a corresponding virtual digital character according to the virtual character rendering data and the virtual character interaction data in combination with the virtual digital human model.
Specifically, the user's actions and expressions are captured through accurate perception by the camera and sensors and combined with the virtual digital human model to generate, in real time, virtual character rendering data and virtual character interaction data that match the user's actions and expressions. Meanwhile, a virtual character poster or virtual character video based on a preset poster template or video template is dynamically displayed on the display screen in real time, providing interactive feedback to the user. A real-time synchronized virtual digital character can thus be generated without the user wearing a dedicated motion-capture suit or helmet, which improves the user's interactive experience and provides a more personalized and customized user experience.
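For illustration only, the following minimal Python sketch shows one way the recognized expressions and gestures could be mapped to the facial and gesture action sequence data that drive the virtual digital human model; the expression labels, blendshape names and pose names are assumptions made for the example and are not specified in the disclosure.

```python
# Minimal sketch: map per-frame recognition results to avatar action sequences.
from dataclasses import dataclass
from typing import Dict, List

# Hypothetical mapping from a recognized facial expression to blendshape weights.
EXPRESSION_TO_BLENDSHAPES: Dict[str, Dict[str, float]] = {
    "neutral":  {"mouth_smile": 0.0, "brow_raise": 0.0},
    "smile":    {"mouth_smile": 0.8, "brow_raise": 0.1},
    "surprise": {"mouth_smile": 0.2, "brow_raise": 0.9},
}

# Hypothetical mapping from a recognized gesture to a named skeletal pose.
GESTURE_TO_POSE: Dict[str, str] = {"wave": "pose_wave", "point": "pose_point"}

@dataclass
class AvatarFrame:
    blendshapes: Dict[str, float]   # facial action data for this frame
    skeleton_pose: str              # gesture action data for this frame

def build_action_sequence(expressions: List[str], gestures: List[str]) -> List[AvatarFrame]:
    """Turn per-frame recognition results into an avatar action sequence."""
    frames = []
    for expr, gesture in zip(expressions, gestures):
        frames.append(AvatarFrame(
            blendshapes=EXPRESSION_TO_BLENDSHAPES.get(expr, EXPRESSION_TO_BLENDSHAPES["neutral"]),
            skeleton_pose=GESTURE_TO_POSE.get(gesture, "pose_idle"),
        ))
    return frames

if __name__ == "__main__":
    seq = build_action_sequence(["neutral", "smile"], ["wave", "wave"])
    print(seq[1])  # blendshapes and pose that would drive the avatar for frame 2
```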
In the above embodiment, preferably, the interactive simulation module 3 includes a rendering technology sub-module 31, an intelligent voice sub-module 32, and a motion capture sub-module 33;
the rendering technology submodule 31 is used for creating a virtual geometric model as an element of the virtual world, rendering different elements by utilizing a rendering technology to obtain a virtual scene model, and carrying out overall layout construction on the virtual scene model to obtain a digital neighborhood of the meta universe, so that a digital virtual person can interact in the digital neighborhood;
the intelligent voice sub-module 32 performs voice recognition, semantic understanding and intention recognition according to the voice interaction data, and can convert the text generated by the system into voice for output;
the motion capture sub-module 33 adopts a VNect pose estimation algorithm to perform motion capture on the gesture action data and the gesture interaction data, recognizes the whole-body skeleton pose data of the human body, and drives the skeleton pose data to the digital virtual person in real time to realize synchronous motion of the digital virtual person in the digital neighborhood.
Specifically, the currently mainstream Unity3D rendering technique is used to create the various elements in the virtual world, including scenes, characters, objects, special effects and the like. The processing flow is as follows: create the geometric scene of the metaverse and import existing model resources, then parse the models, including geometric information parsing, material information parsing, texture map parsing and animation effect parsing, to model and design the objects in the metaverse; place the modeled elements in the scene and set attributes such as position, rotation and scale to construct the overall layout of the metaverse; set materials and textures, assigning suitable materials and textures to the objects in the metaverse to achieve the desired appearance; finally, create a renderer (WebGLRenderer), complete the rendering, and present realistic graphics and visual effects.
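As an illustration of the layout-construction step, the following Python sketch models scene elements with position, rotation, scale, material and texture attributes and assembles them into a digital neighborhood layout; it is a plain data model under assumed names, not the Unity3D or WebGLRenderer API.

```python
# Illustrative data model for placing imported elements into the scene layout.
from dataclasses import dataclass, field
from typing import List, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class SceneElement:
    name: str
    position: Vec3 = (0.0, 0.0, 0.0)
    rotation: Vec3 = (0.0, 0.0, 0.0)   # Euler angles in degrees
    scale: Vec3 = (1.0, 1.0, 1.0)
    material: str = "default"
    texture: str = ""

@dataclass
class DigitalNeighborhood:
    elements: List[SceneElement] = field(default_factory=list)

    def place(self, element: SceneElement) -> None:
        self.elements.append(element)

    def describe(self) -> None:
        for e in self.elements:
            print(f"{e.name}: pos={e.position} rot={e.rotation} scale={e.scale} mat={e.material}")

if __name__ == "__main__":
    scene = DigitalNeighborhood()
    scene.place(SceneElement("storefront", position=(0, 0, 0), material="brick", texture="brick_diffuse.png"))
    scene.place(SceneElement("avatar_spawn", position=(2, 0, 5)))
    scene.describe()   # a real implementation would hand this layout to the renderer
```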
In addition, the system further comprises a media playing sub-module 34 for providing rich and varied media content display capabilities. The processing flow is as follows: load the media file to be played into the player, which may be an audio file, a video file or an image file; after loading, the media file needs to be decoded, and the decoding process restores the compressed media data to raw data for subsequent processing and playback, with the decoder performing the corresponding decoding operation according to the format and encoding of the media file, such as H.264, H.265 or AAC; after decoding, the media data are processed, including noise reduction, equalization, reverberation and time-domain transformation of audio to improve its quality and effect, as well as clipping, frame-rate conversion, color correction and the addition of special effects for video to improve its appearance and effect.
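A minimal decoding sketch is given below, assuming the third-party PyAV (FFmpeg) bindings as one possible decoder; the disclosure does not name a specific media library, and the file name is hypothetical.

```python
# Sketch of the "load -> decode -> process" flow using PyAV (pip install av).
import av

def decode_video_frames(path: str, max_frames: int = 10):
    """Open a media file, decode its first video stream and yield raw RGB frames."""
    container = av.open(path)
    for i, frame in enumerate(container.decode(video=0)):
        if i >= max_frames:
            break
        # Post-processing (color correction, frame-rate conversion, effects)
        # would operate on this RGB array before display.
        yield frame.to_ndarray(format="rgb24")

if __name__ == "__main__":
    for rgb in decode_video_frames("sample.mp4"):  # hypothetical file name
        print(rgb.shape)
```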
In the above embodiment, the creation interaction platform 4 preferably includes an AI assistant sub-module 41, an intelligent recommendation sub-module 42, a social communication sub-module 43 and a virtual store sub-module 44;
the AI assistant sub-module 41 provides personalized guiding and navigation services for the user according to the voice interaction data, based on a self-supervised end-to-end neural network model;
The intelligent recommendation sub-module 42 establishes a behavior mode of a current user according to behavior data of a digital virtual person in a digital neighborhood, and defines a state space and an action space for the user to meet requirements and preferences according to the behavior mode, wherein the state space is used for describing virtual environments, social relations and personal characteristics of the user in a meta universe, and the action space is used for representing a specific recommended virtual scene, commodity or social interaction mode adopted;
the social communication sub-module 43 is used for providing social interaction, communication and cooperation functions among different digital virtual persons in the digital neighborhood, and realizing instant communication based on WebSocket real-time communication protocol;
the virtual store sub-module 44 is configured to construct a virtual store, display various virtual goods in the virtual store, and provide a transaction mode corresponding to the virtual goods, and transfer ownership of the virtual goods after the user pays digital money based on the digital virtual person.
In this embodiment, the AI assistant sub-module 41 provides personalized services such as guidance and navigation to the user, including route planning, attraction introductions and real-time navigation, so that the user can better understand and use the resources in the metaverse. The AI assistant sub-module 41 can also hold natural-language conversations with the user, answer questions, take part in discussions and provide information; it simulates human conversational ability, understands the user's intent and semantics, and responds accordingly. Its technical foundation is a self-supervised end-to-end neural network model that understands and learns from the user's speech and body language, enabling smoother and more natural dialogue with the AI assistant in the virtual digital world.
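The following toy Python sketch illustrates the guide/answer routing flow of the AI assistant; the keyword-based classify_intent function is only a stand-in for the self-supervised end-to-end neural network model described above, and the intents and replies are assumptions made for the example.

```python
# Toy dialogue routing for the AI assistant (keyword rules stand in for the model).
from typing import Callable, Dict

def plan_route(text: str) -> str:
    return "Route planned to the requested location in the digital neighborhood."

def introduce_spot(text: str) -> str:
    return "This virtual scene showcases ..."

def answer_question(text: str) -> str:
    return "Here is the information you asked for ..."

INTENT_HANDLERS: Dict[str, Callable[[str], str]] = {
    "navigate": plan_route,
    "introduce": introduce_spot,
    "question": answer_question,
}

def classify_intent(text: str) -> str:
    """Placeholder for the neural intent-recognition step."""
    lowered = text.lower()
    if any(w in lowered for w in ("go to", "take me", "navigate")):
        return "navigate"
    if any(w in lowered for w in ("what is", "tell me about")):
        return "introduce"
    return "question"

def assistant_reply(user_utterance: str) -> str:
    return INTENT_HANDLERS[classify_intent(user_utterance)](user_utterance)

if __name__ == "__main__":
    print(assistant_reply("Take me to the virtual store"))
```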
The intelligent recommendation sub-module 42 continuously adjusts and optimizes its recommendation strategy by analyzing the user's feedback, behavior and other contextual relationships, so as to offer virtual scenes, activities and interactions that better match the user's needs. A behavior model of the user is built by collecting and analyzing the user's behavior data in the metaverse, including interests, preferences and interaction styles, and a reinforcement learning model is used to infer the user's potential needs and behavioral trends. Within the metaverse, a suitable state space and action space are defined for reinforcement learning: the state space describes the user's state in terms of the virtual environment, social relationships, personal characteristics and so on, while the action space represents the actions taken, such as recommending a particular virtual scene, commodity or social interaction style.
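A minimal tabular Q-learning sketch of this recommendation loop is shown below; the concrete states, actions and reward signal are placeholders assumed for the example rather than values taken from the disclosure.

```python
# Minimal tabular Q-learning loop over a coarse state/action space.
import random
from collections import defaultdict

STATES = ["browsing_scene", "chatting", "in_store"]          # illustrative states
ACTIONS = ["recommend_scene", "recommend_commodity", "recommend_social"]

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2
Q = defaultdict(float)  # (state, action) -> estimated value

def choose_action(state: str) -> str:
    if random.random() < EPSILON:                     # explore
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])  # exploit

def update(state: str, action: str, reward: float, next_state: str) -> None:
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

if __name__ == "__main__":
    # Simulated feedback: the user responds to commodity recommendations while in the store.
    for _ in range(500):
        s = random.choice(STATES)
        a = choose_action(s)
        r = 1.0 if (s == "in_store" and a == "recommend_commodity") else 0.0
        update(s, a, r, random.choice(STATES))
    print(max(ACTIONS, key=lambda a: Q[("in_store", a)]))  # -> recommend_commodity
```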
Wherein the social communication sub-module 43 provides social interaction, communication and collaboration functions between users. It allows users to create and manage personal profiles, including avatars, nicknames and personal introductions; these profiles can be shown to other users to help establish recognition and connections between users. The sub-module also provides real-time chat and messaging, so that a user can carry out instant text, voice or video communication with friends or other users; it uses the WebSocket real-time communication protocol to realize real-time message delivery and instant messaging. It further supports organizing and joining virtual activities and gatherings: users can create or take part in various virtual events, such as exhibitions, and interact and share experiences with other users in the virtual space.
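As a rough illustration of the WebSocket-based instant messaging, the sketch below runs a broadcast chat server using a recent version of the third-party Python websockets package; the host, port and broadcasting policy are assumptions made for the example.

```python
# Broadcast chat server sketch; requires `pip install websockets` (v10+ for broadcast).
import asyncio
import websockets

CONNECTED = set()

async def handler(websocket):
    """Register a client and broadcast every message it sends to all connected clients."""
    CONNECTED.add(websocket)
    try:
        async for message in websocket:
            websockets.broadcast(CONNECTED, message)
    finally:
        CONNECTED.discard(websocket)

async def main():
    async with websockets.serve(handler, "localhost", 8765):
        await asyncio.Future()  # run until cancelled

if __name__ == "__main__":
    asyncio.run(main())
```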
Wherein the virtual store sub-module 44 provides virtual commodity display, purchase and transaction functions. The virtual store can display various virtual goods, such as virtual objects, virtual scenes and virtual characters; the virtual digital person created by the user can also be presented here, and the user can find goods of interest by browsing store pages or using the search function. For each virtual commodity, the store provides a detailed description and introduction, including its features, functions and applicable scenarios, which helps the user make a purchase decision. The store also supports purchasing virtual goods and provides the corresponding transaction flow: the user can select goods, add them to a shopping cart, choose a payment method and complete the purchase. The virtual store sub-module 44 supports a variety of payment means, such as virtual currency, cryptocurrency or traditional currency. For NFT works, an NFT trading platform can be provided that supports listing NFTs and transferring ownership between users; users may publish their NFT works in the mall and transact with other users.
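The following simplified Python sketch illustrates the browse/pay/transfer flow of the virtual store; the in-memory catalog, balances and item names are assumptions made for the example, and a real deployment would settle payment and ownership through the blockchain module described below.

```python
# Simplified purchase flow: browse, pay with a digital-currency balance, transfer ownership.
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class VirtualGood:
    good_id: str
    description: str
    price: float
    owner: str   # user id of the current owner

@dataclass
class VirtualStore:
    catalog: Dict[str, VirtualGood] = field(default_factory=dict)
    balances: Dict[str, float] = field(default_factory=dict)

    def list_goods(self):
        return [(g.good_id, g.description, g.price) for g in self.catalog.values()]

    def purchase(self, buyer: str, good_id: str) -> bool:
        good = self.catalog[good_id]
        if self.balances.get(buyer, 0.0) < good.price:
            return False                                    # insufficient funds
        self.balances[buyer] -= good.price
        self.balances[good.owner] = self.balances.get(good.owner, 0.0) + good.price
        good.owner = buyer                                   # ownership transfer
        return True

if __name__ == "__main__":
    store = VirtualStore()
    store.catalog["scene-01"] = VirtualGood("scene-01", "virtual exhibition scene", 5.0, owner="seller")
    store.balances["alice"] = 10.0
    print(store.purchase("alice", "scene-01"), store.catalog["scene-01"].owner)
```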
In the above embodiment, the blockchain module 5 includes blockchain communication, blockchain data storage, a contract rule engine, a consensus mechanism, data encryption and an NFT module; the functions of each are as follows:
(1) Blockchain communication: in blockchain communications, a communication protocol needs to be defined that specifies the rules and procedures of communication between participants. The protocol may include message formats, data exchange means, authentication mechanisms, etc. These rules will ensure reliability and consistency of the communication. The present invention uses the TCP/IP protocol, which provides reliable data transmission and network connection functions, and receives the virtual digital content generated by the virtual character module.
(2) Blockchain data storage: a Merkle tree data structure is used to organize transactions or data records. A Merkle tree is a binary tree structure that recursively computes the hash values of a number of data blocks through a hash function, finally producing a root hash value. This allows the integrity of the data to be verified quickly, since any modification of the data changes the root hash value. The digital content related to the digital virtual person generated in the above embodiment is written into this module (see the illustrative sketch after item (6) below).
(3) Contract rules engine: for managing and executing rules and logic defined in the smart contracts. The module provides a mechanism to manage and execute these contract rules so that they can be run and responded to automatically. Rules in the intelligent contract can be dynamically adjusted according to business requirements, and corresponding operations are triggered under specified conditions.
(4) Consensus mechanism: this module is mainly aimed at enabling the nodes of the distributed system to reach agreement on a specific problem and ensuring data consistency, so that the system can keep operating normally even in the presence of malicious nodes or network failures. It guarantees the accuracy of the digital virtual person's data and of the user's operation data.
(5) Data encryption: the module uses a cryptographic algorithm to convert data (data such as digital virtual persons, user operations and the like) into ciphertext so as to protect confidentiality and security of the data. The present invention uses a hash function to perform the hash operation. Each block in the blockchain contains a hash value that is obtained by hashing all transactions and other data for the block. The hash function is irreversible, i.e. the original data cannot be restored from the hash value, thus protecting the confidentiality of the data.
(6) NFT (Non-Fungible Token) module: this module generates customized digital assets, created on the basis of blockchain technology, that represent unique digital items or digitized ownership. The NFT module of the present invention is used to generate digital assets for the user's virtual digital person, and these assets are owned by the user.
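To illustrate items (2), (5) and (6) together, the sketch below computes a Merkle root over recorded interaction data, chains blocks by their hash values, and keeps a simple NFT-style ownership record; it omits consensus, signatures and persistence, and all identifiers are assumptions made for the example.

```python
# Combined sketch: Merkle root over records, hash-chained blocks, NFT-style ownership.
import hashlib
import json
import time
from typing import Dict, List

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def merkle_root(records: List[str]) -> str:
    """Recursively hash pairs of leaves until a single root hash remains."""
    level = [sha256(r.encode()) for r in records] or [sha256(b"")]
    while len(level) > 1:
        if len(level) % 2:                       # duplicate the last leaf on odd levels
            level.append(level[-1])
        level = [sha256((level[i] + level[i + 1]).encode())
                 for i in range(0, len(level), 2)]
    return level[0]

class Block:
    def __init__(self, records: List[str], prev_hash: str):
        self.timestamp = time.time()
        self.records = records
        self.merkle_root = merkle_root(records)
        self.prev_hash = prev_hash
        self.hash = sha256(json.dumps(
            [self.timestamp, self.merkle_root, self.prev_hash]).encode())

class InteractionChain:
    def __init__(self):
        self.blocks: List[Block] = [Block(["genesis"], "0" * 64)]
        self.nft_owner: Dict[str, str] = {}      # asset id -> user id

    def add_block(self, records: List[str]) -> Block:
        block = Block(records, self.blocks[-1].hash)
        self.blocks.append(block)
        return block

    def mint_nft(self, asset_id: str, owner: str) -> None:
        self.nft_owner[asset_id] = owner
        self.add_block([f"mint {asset_id} to {owner}"])

    def transfer_nft(self, asset_id: str, new_owner: str) -> None:
        self.add_block([f"transfer {asset_id} {self.nft_owner[asset_id]} -> {new_owner}"])
        self.nft_owner[asset_id] = new_owner

if __name__ == "__main__":
    chain = InteractionChain()
    chain.add_block(["user gesture data hash", "avatar render data hash"])
    chain.mint_nft("avatar-001", "user-42")
    chain.transfer_nft("avatar-001", "user-43")
    print(chain.blocks[-1].hash, chain.nft_owner["avatar-001"])
```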
As shown in fig. 4, the present invention further provides a meta-universe experience interaction method based on a blockchain technology, which is applied to the meta-universe experience interaction system based on the blockchain technology disclosed in any one of the above embodiments, and includes:
acquiring a touch instruction of a touch display screen, and acquiring face data, gesture action data, gesture interaction data and voice interaction data of a user;
generating a digital virtual person which keeps synchronous and dynamic with the expression and action of the user according to the face data and the gesture action data of the user;
synchronously displaying the digital neighborhood of the metaverse and the digital virtual person on the touch display screen, and realizing synchronous interaction actions of the digital virtual person in the digital neighborhood according to the gesture interaction data and the voice interaction data;
providing a personalized AI voice assistant service for the user according to the voice interaction data, providing an optimized state space and action space for the user according to the user's interaction actions, providing social interaction functions between users, and providing commodity exhibition and transaction functions in the digital neighborhood for the user;
and recording the data generated in the meta-universe experience interaction process on the blockchain.
In this embodiment, the display interaction module 1 serves as a display interface that collects user data and displays interactive content in real time, and the generated digital virtual person interacts synchronously, according to the user's interaction actions, within the digital neighborhood of the created virtual scene. This provides the user with a personalized and secure digital neighborhood experience in which social interaction and digital commodity exhibition and transaction can take place; in addition, blockchain technology provides a highly secure and trustworthy foundation for the experience interaction process.
In the above embodiment, preferably, the specific process of generating the digital virtual person that remains dynamically synchronized with the user's expressions and actions according to the face data and the gesture action data of the user includes:
generating a virtual digital human model by a rapid digital human generation technology;
the facial expression of the user is obtained according to the facial data recognition of the user, corresponding facial action sequence data is generated through mapping, and gesture action sequence data is generated according to the gesture action data mapping;
generating virtual character rendering data through a graph rendering technology according to the facial action sequence data and the gesture action sequence data, and generating virtual character interaction data according to the touch control instruction, the gesture interaction data and the voice interaction data;
and combining the virtual digital human model according to the virtual human rendering data and the virtual human interaction data to synthesize a corresponding virtual digital human.
Specifically, the user's actions and expressions are captured through accurate perception by the camera and sensors and combined with the virtual digital human model to generate, in real time, virtual character rendering data and virtual character interaction data that match the user's actions and expressions. Meanwhile, a virtual character poster or virtual character video based on a preset poster template or video template is dynamically displayed on the display screen in real time, providing interactive feedback to the user. A real-time synchronized virtual digital character can thus be generated without the user wearing a dedicated motion-capture suit or helmet, which improves the user's interactive experience and provides a more personalized and customized user experience.
In the above embodiment, preferably, the digital neighborhood and the digital virtual person in the meta universe are synchronously displayed on the touch display screen, and according to the gesture interaction data and the voice interaction data, the synchronous interaction action of the digital virtual person in the digital neighborhood is realized, and the specific process includes:
creating a virtual geometric model as an element of a virtual world, rendering different elements by using a rendering technology to obtain a virtual scene model, and carrying out overall layout construction on the virtual scene model to obtain a digital neighborhood of a meta universe, so that a digital virtual person can interact in the digital neighborhood;
performing voice recognition, semantic understanding and intention recognition according to voice interaction data, and converting text generated by a system into voice for output;
and performing motion capture on the gesture action data and the gesture interaction data by adopting a VNect pose estimation algorithm, identifying the whole-body skeleton pose data of the human body, and driving the skeleton pose data to the digital virtual person in real time to realize synchronous motion of the digital virtual person in the digital neighborhood.
In the above embodiment, preferably, the specific process of performing motion capture on gesture motion data and gesture interaction data by using the VNect pose estimation algorithm to identify and obtain skeleton pose data of a whole body of a human body includes:
Based on computer vision, carrying out image recognition on gesture action data and gesture interaction data, and extracting a human body boundary box;
performing CNN regression on the image in the human body boundary box to obtain a human body key point thermodynamic diagram;
performing time domain filtering according to the thermodynamic diagram of the human body key points to obtain the coordinates of the human body key points;
binding human bones according to the human key point coordinates to obtain a three-dimensional posture estimation image;
and adopting the VNect pose estimation algorithm to identify the continuous gesture action data and gesture interaction data, so as to obtain continuous whole-body skeleton pose data of the human body.
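The following numeric sketch illustrates the post-CNN steps of this pipeline, extracting keypoint coordinates from per-joint heatmaps and smoothing them over consecutive frames with a simple exponential filter; the joint count and random heatmaps are placeholders, and the CNN regression and skeleton binding themselves are outside the scope of the sketch.

```python
# Heatmap -> keypoint coordinates -> temporal filtering, on placeholder data.
import numpy as np

def heatmaps_to_keypoints(heatmaps: np.ndarray) -> np.ndarray:
    """heatmaps: (num_joints, H, W) -> (num_joints, 2) pixel coordinates via argmax."""
    num_joints, h, w = heatmaps.shape
    flat_idx = heatmaps.reshape(num_joints, -1).argmax(axis=1)
    ys, xs = np.unravel_index(flat_idx, (h, w))
    return np.stack([xs, ys], axis=1).astype(float)

class TemporalFilter:
    """Simple exponential smoothing of keypoints across consecutive frames."""
    def __init__(self, alpha: float = 0.5):
        self.alpha = alpha
        self.state = None

    def __call__(self, keypoints: np.ndarray) -> np.ndarray:
        if self.state is None:
            self.state = keypoints
        else:
            self.state = self.alpha * keypoints + (1 - self.alpha) * self.state
        return self.state

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    smoother = TemporalFilter()
    for _ in range(3):                              # three simulated frames
        fake_heatmaps = rng.random((17, 64, 48))    # 17 joints, COCO-style layout
        kps = heatmaps_to_keypoints(fake_heatmaps)
        print(smoother(kps)[0])                     # smoothed coordinates of joint 0
```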
In the foregoing embodiment, preferably, a personalized AI voice assistant service is provided for a user according to voice interaction data, an optimized state space and an optimized action space are provided for the user according to interaction actions of the user, a social interaction function between users is provided for the user, and a commodity exhibition and trade function in a digital neighborhood is provided for the user, and the specific process includes:
based on a self-supervised end-to-end neural network model, providing personalized guidance and navigation services for the user according to the voice interaction data;
establishing a behavior pattern of the current user according to the behavior data of the digital virtual person in the digital neighborhood, and defining a state space and an action space meeting the user's requirements and preferences according to the behavior pattern, wherein the state space describes the user's virtual environment, social relations and personal characteristics in the meta universe, and the action space represents the specific virtual scene, commodity or social interaction mode to be recommended;
realizing social interaction, communication and cooperation among different digital virtual persons in the digital neighborhood according to the touch instruction and the interaction actions, with instant messaging implemented over the WebSocket real-time communication protocol;
and constructing a virtual store, displaying various virtual commodities in the virtual store, providing a transaction mode corresponding to the virtual commodities, and transferring ownership of a virtual commodity after the user pays digital currency via the digital virtual person (an illustrative ledger sketch follows this list).
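As a hedged illustration of how such a transaction could be recorded, the sketch below keeps a hash-chained ledger of payment and ownership-transfer events; the Ledger class and its block fields are assumptions for illustration, as the disclosure only states that the generated data is recorded on a blockchain.

```python
# Illustrative hash-chained ledger recording payment and ownership-transfer events.
# The Ledger class and its block fields are assumptions for illustration; the
# application only states that the generated data is recorded on a blockchain.
import hashlib
import json
import time


def _hash_block(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode("utf-8")).hexdigest()


class Ledger:
    def __init__(self) -> None:
        self.chain = [{"index": 0, "timestamp": time.time(), "event": "genesis", "prev_hash": "0" * 64}]

    def record(self, event: dict) -> dict:
        block = {
            "index": len(self.chain),
            "timestamp": time.time(),
            "event": event,
            "prev_hash": _hash_block(self.chain[-1]),  # link to the previous block
        }
        self.chain.append(block)
        return block

    def verify(self) -> bool:
        """Check that every block still references the hash of its predecessor."""
        return all(
            self.chain[i]["prev_hash"] == _hash_block(self.chain[i - 1])
            for i in range(1, len(self.chain))
        )


ledger = Ledger()
ledger.record({"type": "payment", "buyer": "avatar_42", "item": "virtual_poster", "amount": 5})
ledger.record({"type": "ownership_transfer", "item": "virtual_poster", "new_owner": "avatar_42"})
print(ledger.verify())  # True
```

In a deployed system these events would be written to the distributed ledger maintained by the blockchain module rather than to an in-memory list.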
In the above embodiment, preferably, the specific process of creating a virtual geometric model as an element of the virtual world, rendering different elements by using a rendering technology to obtain a virtual scene model, and performing overall layout construction on the virtual scene model to obtain the digital neighborhood of the meta universe includes the following steps (a scene-layout sketch follows the list):
creating a geometric scene of the meta universe, importing the existing model resources, and performing geometric information analysis, material information analysis, texture map analysis and animation effect analysis on the model to obtain elements in the meta universe;
placing the modeled elements in a geometric scene, setting the position, rotation and scaling properties of each element, and constructing the overall layout of the meta universe;
setting materials and textures for elements in the meta universe, creating a renderer to render, and achieving the required appearance effect to obtain the digital neighborhood.
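The following minimal sketch mirrors the layout step: scene elements carrying position, rotation and scale properties are placed into an overall neighborhood layout. SceneElement and build_neighborhood() are illustrative assumptions, not the disclosed rendering engine.

```python
# Minimal sketch of the layout step: elements carrying position / rotation / scale are
# placed into an overall neighborhood layout. SceneElement and build_neighborhood() are
# illustrative assumptions, not the disclosed rendering engine.
from dataclasses import dataclass, field
from typing import List, Tuple

Vec3 = Tuple[float, float, float]


@dataclass
class SceneElement:
    name: str
    position: Vec3 = (0.0, 0.0, 0.0)
    rotation: Vec3 = (0.0, 0.0, 0.0)  # Euler angles in degrees
    scale: Vec3 = (1.0, 1.0, 1.0)
    material: str = "default"


@dataclass
class Scene:
    elements: List[SceneElement] = field(default_factory=list)

    def place(self, element: SceneElement) -> None:
        self.elements.append(element)


def build_neighborhood() -> Scene:
    scene = Scene()
    scene.place(SceneElement("street", scale=(50.0, 1.0, 8.0)))
    scene.place(SceneElement("virtual_store", position=(10.0, 0.0, 3.0), rotation=(0.0, 90.0, 0.0)))
    scene.place(SceneElement("stage", position=(-12.0, 0.0, 0.0), material="metal"))
    return scene


print(len(build_neighborhood().elements))  # 3
```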
According to the meta-universe experience interaction method based on the blockchain technology disclosed in the above embodiment, in a specific implementation process, reference is made to the implementation of each module in the meta-universe experience interaction system based on the blockchain technology disclosed in the above embodiment, and details are not repeated here.
The above is only a preferred embodiment of the present invention and is not intended to limit the present invention; various modifications and variations may be made by those skilled in the art. Any modification, equivalent replacement or improvement made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.

Claims (10)

1. A meta-universe experience interaction system based on blockchain technology, comprising: a display interaction module, a virtual person module, an interaction simulation module, a creation interaction platform and a blockchain module;
the display interaction module is a touch display screen and is used for displaying interaction content, receiving touch control instructions and collecting face data, gesture action data, gesture interaction data and voice interaction data of a user;
the virtual person module is used for generating, according to the face data and the gesture action data of the user, a digital virtual person that remains dynamically synchronized with the user's expressions and actions;
the interaction simulation module is used for creating in advance a digital neighborhood of the meta universe from a virtual scene model, synchronously displaying the digital neighborhood and the digital virtual person on the display interaction module in real time, and realizing synchronous interaction in the digital neighborhood according to the gesture interaction data and the voice interaction data;
the creation interaction platform is used for providing an AI voice assistant for the user, providing personalized service for the user according to the voice interaction data, providing an optimized state space and an optimized action space for the user according to the interaction action of the user, providing a social interaction function among the users for the user, and providing a commodity exhibition and sales transaction function in the digital neighborhood for the user;
the blockchain module is used for recording data generated by the display interaction module, the virtual person module, the interaction simulation module and the creation interaction platform on a blockchain.
2. The blockchain technology-based meta-universe experience interaction system according to claim 1, wherein the virtual person module comprises an intelligent perception sub-module, an interaction control sub-module, a data processing sub-module and a character synthesis sub-module;
the intelligent perception sub-module is used for generating a virtual digital human model by using a rapid digital human generation technology;
the interaction control sub-module is used for recognizing the facial expression of the user from the face data, generating corresponding facial action sequence data by mapping, and generating gesture action sequence data by mapping from the gesture action data;
the data processing sub-module is used for generating virtual character rendering data through a graphics rendering technology according to the facial action sequence data and the gesture action sequence data, and for generating virtual character interaction data according to the touch instruction, the gesture interaction data and the voice interaction data;
and the character synthesis sub-module is used for synthesizing the corresponding virtual digital person by combining the virtual digital human model with the virtual character rendering data and the virtual character interaction data.
3. The blockchain technology-based meta-universe experience interaction system according to claim 2, wherein the interaction simulation module comprises a rendering technology sub-module, an intelligent voice sub-module and a motion capture sub-module;
the rendering technology sub-module is used for creating a virtual geometric model as an element of the virtual world, rendering different elements by using a rendering technology to obtain a virtual scene model, and performing overall layout construction on the virtual scene model to obtain the digital neighborhood of the meta universe, so that the digital virtual person can interact in the digital neighborhood;
the intelligent voice sub-module performs voice recognition, semantic understanding and intent recognition on the voice interaction data, and converts system-generated text into speech for output;
and the motion capture sub-module performs motion capture on the gesture action data and the gesture interaction data by using a VNect pose estimation algorithm, recognizes whole-body skeleton pose data of the human body, and drives the skeleton pose data to the digital virtual person in real time to realize synchronous motion of the digital virtual person in the digital neighborhood.
4. The blockchain technology-based meta-universe experience interaction system according to claim 3, wherein the creation interaction platform comprises an AI assistant sub-module, an intelligent recommendation sub-module, a social communication sub-module and a virtual store sub-module;
the AI assistant sub-module provides personalized guidance and navigation services for the user according to the voice interaction data, based on a self-supervised end-to-end neural network model;
the intelligent recommendation sub-module establishes a behavior pattern of the current user according to the behavior data of the digital virtual person in the digital neighborhood, and defines a state space and an action space meeting the user's requirements and preferences according to the behavior pattern, wherein the state space describes the user's virtual environment, social relations and personal characteristics in the meta universe, and the action space represents the specific virtual scene, commodity or social interaction mode to be recommended;
the social communication sub-module is used for providing social interaction, communication and cooperation functions among different digital virtual persons in the digital neighborhood, with instant messaging implemented over the WebSocket real-time communication protocol;
and the virtual store sub-module is used for constructing a virtual store, displaying various virtual commodities in the virtual store, providing a transaction mode corresponding to the virtual commodities, and transferring ownership of a virtual commodity after the user pays digital currency via the digital virtual person.
5. A meta-universe experience interaction method based on blockchain technology, applied to the blockchain technology-based meta-universe experience interaction system according to any one of claims 1 to 4, and comprising the following steps:
acquiring a touch instruction of a touch display screen, and acquiring face data, gesture action data, gesture interaction data and voice interaction data of a user;
generating, according to the face data and the gesture action data of the user, a digital virtual person that remains dynamically synchronized with the user's expressions and actions;
synchronously displaying a digital neighborhood of a meta universe and the digital virtual person on the touch display screen, and realizing synchronous interaction action of the digital virtual person in the digital neighborhood according to the gesture interaction data and the voice interaction data;
providing a personalized AI voice assistant service for the user according to the voice interaction data, providing an optimized state space and action space for the user according to the interaction actions of the user, providing a social interaction function between users for the user, and providing a commodity exhibition and sales transaction function in the digital neighborhood for the user;
and recording the data generated in the meta-universe experience interaction process on the blockchain.
6. The blockchain technology-based meta-universe experience interaction method according to claim 5, wherein the specific process of generating the digital virtual person that remains dynamically synchronized with the user's expressions and actions according to the face data and the gesture action data of the user comprises the following steps:
generating a virtual digital human model by using a rapid digital human generation technology;
recognizing the facial expression of the user from the face data, generating corresponding facial action sequence data by mapping, and generating gesture action sequence data by mapping from the gesture action data;
generating virtual character rendering data through a graphics rendering technology according to the facial action sequence data and the gesture action sequence data, and generating virtual character interaction data according to the touch instruction, the gesture interaction data and the voice interaction data;
and synthesizing the corresponding virtual digital person by combining the virtual digital human model with the virtual character rendering data and the virtual character interaction data.
7. The blockchain technology-based meta-universe experience interaction method according to claim 6, wherein the specific process of synchronously displaying the digital neighborhood of the meta universe and the digital virtual person on the touch display screen, and realizing synchronous interaction of the digital virtual person in the digital neighborhood according to the gesture interaction data and the voice interaction data, comprises the following steps:
creating a virtual geometric model as an element of the virtual world, rendering different elements by using a rendering technology to obtain a virtual scene model, and performing overall layout construction on the virtual scene model to obtain the digital neighborhood of the meta universe, so that the digital virtual person can interact in the digital neighborhood;
performing voice recognition, semantic understanding and intent recognition on the voice interaction data, and converting system-generated text into speech for output;
and performing motion capture on the gesture action data and the gesture interaction data by using a VNect pose estimation algorithm, recognizing whole-body skeleton pose data of the human body, and driving the skeleton pose data to the digital virtual person in real time to realize synchronous motion of the digital virtual person in the digital neighborhood.
8. The blockchain technology-based meta-universe experience interaction method according to claim 7, wherein the specific process of performing motion capture on the gesture action data and the gesture interaction data by using the VNect pose estimation algorithm and recognizing whole-body skeleton pose data of the human body comprises the following steps:
based on computer vision, performing image recognition on the gesture action data and the gesture interaction data, and extracting a human body bounding box;
performing CNN regression on the image within the bounding box to obtain human body key point heatmaps;
performing temporal filtering on the key point heatmaps to obtain human body key point coordinates;
binding a human skeleton according to the key point coordinates to obtain a three-dimensional pose estimate;
and applying the VNect pose estimation algorithm to consecutive gesture action data and gesture interaction data to obtain continuous whole-body skeleton pose data.
9. The blockchain technology-based meta-universe experience interaction method according to claim 8, wherein the specific process of providing a personalized AI voice assistant service for the user according to the voice interaction data, providing an optimized state space and action space for the user according to the interaction actions of the user, providing a social interaction function between users, and providing a commodity exhibition and sales transaction function in the digital neighborhood comprises the following steps:
providing personalized guidance and navigation services for the user according to the voice interaction data, based on a self-supervised end-to-end neural network model;
establishing a behavior pattern of the current user according to the behavior data of the digital virtual person in the digital neighborhood, and defining a state space and an action space meeting the user's requirements and preferences according to the behavior pattern, wherein the state space describes the user's virtual environment, social relations and personal characteristics in the meta universe, and the action space represents the specific virtual scene, commodity or social interaction mode to be recommended;
realizing social interaction, communication and cooperation among different digital virtual persons in the digital neighborhood according to the touch instruction and the interaction actions, with instant messaging implemented over the WebSocket real-time communication protocol;
and constructing a virtual store, displaying various virtual commodities in the virtual store, providing a transaction mode corresponding to the virtual commodities, and transferring ownership of a virtual commodity after the user pays digital currency via the digital virtual person.
10. The blockchain technology-based meta-universe experience interaction method according to claim 7, wherein the specific process of creating a virtual geometric model as an element of the virtual world, rendering different elements by using a rendering technology to obtain a virtual scene model, and performing overall layout construction on the virtual scene model to obtain the digital neighborhood of the meta universe comprises the following steps:
creating a geometric scene of the meta universe, importing the existing model resources, and performing geometric information analysis, material information analysis, texture map analysis and animation effect analysis on the model to obtain elements in the meta universe;
placing the modeled elements in the geometric scene, setting the position, rotation and scaling properties of each element, and constructing the overall layout of the meta universe;
setting materials and textures for elements in the meta universe, creating a renderer to render, and obtaining the digital neighborhood by realizing the required appearance effect.
CN202311491910.XA 2023-11-09 2023-11-09 Meta universe experience interaction system and method based on blockchain technology Pending CN117539349A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311491910.XA CN117539349A (en) 2023-11-09 2023-11-09 Meta universe experience interaction system and method based on blockchain technology

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311491910.XA CN117539349A (en) 2023-11-09 2023-11-09 Meta universe experience interaction system and method based on blockchain technology

Publications (1)

Publication Number Publication Date
CN117539349A true CN117539349A (en) 2024-02-09

Family

ID=89791130

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311491910.XA Pending CN117539349A (en) 2023-11-09 2023-11-09 Meta universe experience interaction system and method based on blockchain technology

Country Status (1)

Country Link
CN (1) CN117539349A (en)

Similar Documents

Publication Publication Date Title
NALBANT et al. Development and transformation in digital marketing and branding with artificial intelligence and digital technologies dynamics in the Metaverse universe
US10950020B2 (en) Real-time AR content management and intelligent data analysis system
Whyte Virtual reality and the built environment
Dionisio et al. 3D virtual worlds and the metaverse: Current status and future possibilities
US9870636B2 (en) Method for sharing emotions through the creation of three dimensional avatars and their interaction
Manolova et al. Context-aware holographic communication based on semantic knowledge extraction
Uddin et al. Unveiling the metaverse: Exploring emerging trends, multifaceted perspectives, and future challenges
JP2022500795A (en) Avatar animation
US20230090253A1 (en) Systems and methods for authoring and managing extended reality (xr) avatars
Sami et al. The metaverse: Survey, trends, novel pipeline ecosystem & future directions
Abilkaiyrkyzy et al. Metaverse key requirements and platforms survey
Seymour et al. Mapping beyond the uncanny valley: A Delphi study on aiding adoption of realistic digital faces
Chamola et al. Beyond Reality: The Pivotal Role of Generative AI in the Metaverse
CN117519477A (en) Digital human virtual interaction system and method based on display screen
Sun et al. Animating synthetic dyadic conversations with variations based on context and agent attributes
Priya et al. Augmented reality and speech control from automobile showcasing
Soliman et al. Artificial intelligence powered Metaverse: analysis, challenges and future perspectives
Cilizoglu et al. Designers' Expectations from Virtual Product Experience in Metaverse
CN117539349A (en) Meta universe experience interaction system and method based on blockchain technology
Wang et al. Evolution and innovations in animation: A comprehensive review and future directions
Murala et al. Metaverse: A Study on Immersive Technologies
Abbattista et al. SAMIR: A Smart 3D Assistant on the Web.
Malerba Exploring the Potential of the Metaverse for Value Creation: An Analysis of Opportunities, Challenges, and Societal Impact, with a Focus on the Chinese Context
Cui et al. Virtual Human: A Comprehensive Survey on Academic and Applications
Cakir et al. Audio to video: Generating a talking fake agent

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination