CN116958447A - Automatic meta-universe character generation system and method based on Internet of things - Google Patents


Info

Publication number
CN116958447A
Authority
CN
China
Prior art keywords
model
internet
outdoor activity
things
activity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311001214.6A
Other languages
Chinese (zh)
Inventor
骆俊
刘畅
陈宇劲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Solid Color Digital Technology Co ltd
Original Assignee
Shenzhen Solid Color Digital Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Solid Color Digital Technology Co ltd filed Critical Shenzhen Solid Color Digital Technology Co ltd
Priority to CN202311001214.6A priority Critical patent/CN116958447A/en
Publication of CN116958447A publication Critical patent/CN116958447A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G06T13/20 3D [Three Dimensional] animation
    • G06T13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10 Geometric effects
    • G06T15/20 Perspective computation
    • G06T15/205 Image-based rendering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

Abstract

The invention provides a system and method for automatically generating metaverse characters based on the Internet of Things. With the scheme of the embodiments, an outdoor activity plan with a strong sense of realism and immersion can be quickly generated or constructed according to a user's personalized requirements, and, combined with a metaverse platform and blockchain technology, the uniqueness, security and traceability of the activity are achieved. At the same time, rich and varied virtual characters and interaction modes can be provided to users, increasing the enjoyment of and participation in outdoor activities.

Description

Automatic meta-universe character generation system and method based on Internet of things
Technical Field
The invention relates to the technical field of the Internet of Things, and in particular to a system and method for automatically generating metaverse characters based on the Internet of Things.
Background
The metaverse is a virtual world, linked to and created by technical means, that maps onto and interacts with the real world. Building the metaverse is a process of virtualizing and digitizing the real world, and it requires major advances in content production, economic systems, user experience and physical-world content. A large number of three-dimensional models are needed during metaverse creation, such as three-dimensional environment models, three-dimensional object models and three-dimensional human body models. Existing metaverse character generation schemes suffer from low efficiency and inaccurate generation results.
Disclosure of Invention
To address these problems, the invention provides a system and method for automatically generating metaverse characters based on the Internet of Things. The scheme of the embodiments can quickly generate or construct an outdoor activity plan with a strong sense of realism and immersion according to a user's personalized requirements, provide rich and varied virtual characters and interaction modes to users, and increase the enjoyment of and participation in outdoor activities.
In view of this, one aspect of the invention provides a system for automatically generating metaverse characters based on the Internet of Things, including: a cloud server, an Internet of Things terminal, a plurality of simulation terminals, a plurality of intelligent perception terminals, and an Internet of Things server that manages the Internet of Things terminal, the simulation terminals and the intelligent perception terminals;
the Internet of Things terminal is configured to: receive outdoor activity parameters input by a user and send the outdoor activity parameters to the cloud server;
the cloud server is configured to:
select one or more matching first outdoor activity templates from a preset outdoor activity template library according to the outdoor activity parameters, and fuse the first outdoor activity templates to generate an initial outdoor activity model;
adjust the details of the initial outdoor activity model according to the outdoor activity parameters and the position information of the Internet of Things terminal, obtaining a basic outdoor activity model;
import the basic outdoor activity model into a metaverse platform, and assign the basic outdoor activity model a unique identifier and a blockchain address;
add a virtual character model to the basic outdoor activity model;
store the basic outdoor activity model and its related information in a distributed database of the metaverse platform, and send the identifier and the blockchain address of the basic outdoor activity model to the corresponding Internet of Things server;
the Internet of Things server is configured to: return the identifier and the blockchain address to the Internet of Things terminal.
Optionally, the Internet of Things terminal is configured to:
receive a first request from the user and send the first request to the Internet of Things server;
send the basic outdoor activity model to a first simulation terminal selected by the user from the simulation terminals according to the first request;
determine, according to the basic outdoor activity model, a first intelligent perception terminal from a plurality of intelligent perception terminals preconfigured in the space where the first simulation terminal is located;
the first simulation terminal is configured to: simulate a virtual scene image corresponding to the basic outdoor activity model, and project the virtual scene image to obtain a virtual scene;
the Internet of Things server is configured to:
cause the first intelligent perception terminal to collect on-site monitoring data of the virtual scene;
adjust the basic outdoor activity model, the character model and the interaction model according to the monitoring data.
Optionally, in the step of adjusting the basic outdoor activity model, the character model and the interaction model according to the monitoring data, the Internet of Things server is configured to:
extract environmental data from the monitoring data;
adjust, according to the environmental data, the appearance and behavior of the virtual character corresponding to the character model to match the actual environment, obtaining a first virtual character and a first character model;
display the adjusted first virtual character in real time in the projection space of the first simulation terminal through the first simulation terminal using real-time rendering technology, so that a participant can interact with the first virtual character;
acquire the interaction data, collected by the first intelligent perception terminal, that is generated when the participant interacts with the first virtual character by means of gestures, voice and/or a control device;
adjust the interaction model according to the interaction data to obtain a first interaction model.
Optionally, in the step of adjusting the basic outdoor activity model, the character model and the interaction model according to the monitoring data, the Internet of Things server is configured to:
evaluate the participants' activity-interest matching degree, physical matching degree, character matching degree and health matching degree according to the interaction data, obtaining first evaluation data;
modify and adjust the outdoor activity data in the basic outdoor activity model, such as the activity type, activity theme, activity flow, activity venue, activity time, activity participants, behavior guidance for the participants, activity equipment/props and emergency plans, according to the first character model, the first interaction model and the first evaluation data, obtaining a first outdoor activity model.
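A minimal sketch of how the four matching degrees might be combined into the first evaluation data; the weights, field names and function name below are illustrative assumptions, as the disclosure does not specify how the degrees are aggregated:

```python
def evaluate_participant(interaction: dict, weights: dict = None) -> dict:
    """Combine per-dimension matching degrees (each assumed in [0, 1])
    into first evaluation data. Weights are illustrative defaults."""
    weights = weights or {"interest": 0.3, "physical": 0.3,
                          "character": 0.2, "health": 0.2}
    # Missing dimensions default to 0.0 (no evidence of a match).
    scores = {k: interaction.get(k, 0.0) for k in weights}
    overall = sum(weights[k] * scores[k] for k in weights)
    return {"scores": scores, "overall": round(overall, 3)}
```

The overall score could then drive the model adjustments described above, e.g. lowering the activity intensity when the physical or health matching degree is low.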
Optionally, in the step of adding a virtual character model to the basic outdoor activity model, the cloud server is configured to:
generate or construct one or more matching virtual character models according to character parameters input by the user or characteristic data of the participants, and associate the virtual character models with the basic outdoor activity model;
generate or select one or more matching interaction models according to interaction parameters input by the user or the characteristic data, and associate the matching interaction models with the virtual character models.
Another aspect of the invention provides a method for automatically generating metaverse characters based on the Internet of Things, applied to a system for automatically generating metaverse characters based on the Internet of Things, the system comprising a cloud server, an Internet of Things terminal, a plurality of simulation terminals, a plurality of intelligent perception terminals, and an Internet of Things server that manages the Internet of Things terminal, the simulation terminals and the intelligent perception terminals; the method comprises the following steps:
the Internet of Things terminal receives outdoor activity parameters input by a user and sends the outdoor activity parameters to the cloud server;
the cloud server selects one or more matching first outdoor activity templates from a preset outdoor activity template library according to the outdoor activity parameters, and fuses the first outdoor activity templates to generate an initial outdoor activity model;
the cloud server adjusts the details of the initial outdoor activity model according to the outdoor activity parameters and the position information of the Internet of Things terminal, obtaining a basic outdoor activity model;
the cloud server imports the basic outdoor activity model into a metaverse platform and assigns it a unique identifier and a blockchain address;
the cloud server adds a virtual character model to the basic outdoor activity model;
the cloud server stores the basic outdoor activity model and its related information in a distributed database of the metaverse platform, and sends the identifier and the blockchain address of the basic outdoor activity model to the corresponding Internet of Things server;
the Internet of Things server returns the identifier and the blockchain address to the Internet of Things terminal.
Optionally, the method further includes:
the Internet of Things terminal receives a first request from the user and sends the first request to the Internet of Things server;
the Internet of Things server sends the basic outdoor activity model to a first simulation terminal selected by the user from the simulation terminals according to the first request;
the Internet of Things server determines, according to the basic outdoor activity model, a first intelligent perception terminal from a plurality of intelligent perception terminals preconfigured in the space where the first simulation terminal is located;
the first simulation terminal simulates a virtual scene image corresponding to the basic outdoor activity model and projects the virtual scene image to obtain a virtual scene;
the Internet of Things server causes the first intelligent perception terminal to collect on-site monitoring data of the virtual scene;
the Internet of Things server adjusts the basic outdoor activity model, the character model and the interaction model according to the monitoring data.
Optionally, the step in which the Internet of Things server adjusts the basic outdoor activity model, the character model and the interaction model according to the monitoring data includes:
extracting environmental data from the monitoring data;
adjusting, according to the environmental data, the appearance and behavior of the virtual character corresponding to the character model to match the actual environment, obtaining a first virtual character and a first character model;
displaying the adjusted first virtual character in real time in the projection space of the first simulation terminal through the first simulation terminal using real-time rendering technology, so that a participant can interact with the first virtual character;
the participant interacting with the first virtual character by means of gestures, voice and/or a control device;
the first intelligent perception terminal collecting interaction data between the participant and the first virtual character;
adjusting the interaction model according to the interaction data to obtain a first interaction model.
Optionally, the step in which the Internet of Things server adjusts the basic outdoor activity model, the character model and the interaction model according to the monitoring data includes:
evaluating the participants' activity-interest matching degree, physical matching degree, character matching degree and health matching degree according to the interaction data, obtaining first evaluation data;
modifying and adjusting the outdoor activity data in the basic outdoor activity model, such as the activity type, activity theme, activity flow, activity venue, activity time, activity participants, behavior guidance for the participants, activity equipment/props and emergency plans, according to the first character model, the first interaction model and the first evaluation data, obtaining a first outdoor activity model.
Optionally, the step in which the cloud server adds a virtual character model to the basic outdoor activity model includes:
generating or constructing one or more matching virtual character models according to character parameters input by the user or characteristic data of the participants, and associating the virtual character models with the basic outdoor activity model;
generating or selecting one or more matching interaction models according to interaction parameters input by the user or the characteristic data, and associating the matching interaction models with the virtual character models.
By adopting the technical scheme of the invention, the method for automatically generating metaverse characters based on the Internet of Things comprises the following steps: the Internet of Things terminal receives outdoor activity parameters input by a user and sends them to the cloud server; the cloud server selects one or more matching first outdoor activity templates from a preset outdoor activity template library according to the outdoor activity parameters and fuses them to generate an initial outdoor activity model; the cloud server adjusts the details of the initial outdoor activity model according to the outdoor activity parameters and the position information of the Internet of Things terminal, obtaining a basic outdoor activity model; the cloud server imports the basic outdoor activity model into a metaverse platform and assigns it a unique identifier and a blockchain address; the cloud server adds a virtual character model to the basic outdoor activity model; the cloud server stores the basic outdoor activity model and its related information in a distributed database of the metaverse platform, and sends the identifier and the blockchain address to the corresponding Internet of Things server; and the Internet of Things server returns the identifier and the blockchain address to the Internet of Things terminal. With the scheme of the embodiments, an outdoor activity plan with a strong sense of realism and immersion can be quickly generated or constructed according to a user's personalized requirements, and, combined with the metaverse platform and blockchain technology, the uniqueness, security and traceability of the activity are achieved. At the same time, rich and varied virtual characters and interaction modes can be provided to users, increasing the enjoyment of and participation in outdoor activities.
Drawings
FIG. 1 is a schematic block diagram of a meta-universe character automatic generation system based on the Internet of things, provided by an embodiment of the application;
fig. 2 is a flowchart of a meta-universe character automatic generation method based on the internet of things according to an embodiment of the present application.
Detailed Description
In order that the above-recited objects, features and advantages of the present application will be more clearly understood, a more particular description of the application will be rendered by reference to the appended drawings and appended detailed description. It should be noted that, without conflict, the embodiments of the present application and features in the embodiments may be combined with each other.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application, however, the present application may be practiced otherwise than as described herein, and therefore the scope of the present application is not limited to the specific embodiments disclosed below.
The terms first, second and the like in the description and in the claims and in the above-described figures are used for distinguishing between different objects and not necessarily for describing a sequential or chronological order. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements but may include other steps or elements not listed or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
The following describes a meta-universe character automatic generation system and a method based on the internet of things according to some embodiments of the present application with reference to fig. 1 to 2.
As shown in fig. 1, an embodiment of the present application provides a system for automatically generating metaverse characters based on the Internet of Things, including: a cloud server, an Internet of Things terminal, a plurality of simulation terminals, a plurality of intelligent perception terminals, and an Internet of Things server that manages the Internet of Things terminal, the simulation terminals and the intelligent perception terminals;
the Internet of Things terminal is configured to: receive outdoor activity parameters input by a user (including but not limited to the activity type, venue, time, number of people, character features, activity theme, etc.) and send the outdoor activity parameters to the cloud server;
The cloud server is configured to:
select, according to the outdoor activity parameters, one or more matching first outdoor activity templates from a preset outdoor activity template library (which stores templates for different types of outdoor activities, each containing at least information such as the activity type, scene, theme, duration, personnel, equipment and behaviors) and fuse them to generate an initial outdoor activity model. Specifically: input the outdoor activity parameters (such as the activity type, scene and number of people, represented as numerical values or labels); extract key features (such as the activity type, scene label and number of people) from the outdoor activity parameters with a feature extraction algorithm; match the extracted key features against the feature vectors of the templates in the outdoor activity template library and compute feature similarities; rank the templates by similarity and select the top N templates that best match the outdoor activity parameters; select a parameterized generation network (such as a conditional deep generative network) and feed it the top N templates together with the outdoor activity parameters to generate N parameterized new templates (conditioned on an input template, the trained network outputs a new template that resembles the input but takes the user-specified parameters as conditions, yielding a personalized template; for example, if the input template is "company marketing department outdoor expansion" and the parameter is "50 people", the network outputs a new template similar to the input but with 50 character roles); fuse the N new templates with a template fusion algorithm (such as attention-based template fusion) to obtain an overall model; verify and check the overall model to ensure that it meets the requirements of the input outdoor activity parameters; and finally output the verified initial outdoor activity model;
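The template selection and fusion flow above can be sketched roughly as follows. This is a minimal stand-in, not the disclosed implementation: a flat key-match ratio replaces the feature-vector similarity, and a dictionary merge replaces the generation-network fusion; all names and fields are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class ActivityTemplate:
    """A stored outdoor-activity template (fields are illustrative)."""
    name: str
    features: dict  # e.g. {"type": "expansion", "scene": "mountain", "people": 30}

def similarity(params: dict, template: ActivityTemplate) -> float:
    """Fraction of user parameters that the template's features match."""
    hits = sum(1 for k, v in params.items() if template.features.get(k) == v)
    return hits / max(len(params), 1)

def select_top_n(params: dict, library: list, n: int = 2) -> list:
    """Rank the template library by similarity and keep the best N."""
    ranked = sorted(library, key=lambda t: similarity(params, t), reverse=True)
    return ranked[:n]

def fuse(templates: list, params: dict) -> dict:
    """Naive fusion: merge the selected templates' features, then let the
    user's parameters override them, e.g. forcing "people": 50."""
    model = {}
    for t in templates:
        model.update(t.features)
    model.update(params)
    return model
```

For example, with a "company marketing department outdoor expansion" template for 30 people and the user parameter `{"people": 50}`, `fuse` keeps the template's scene but sets the head count to 50, mirroring the "50 roles" example in the text.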
adjust the details of the initial outdoor activity model according to the outdoor activity parameters and the position information of the Internet of Things terminal (including but not limited to modifying the activity route, the schedule of the activity, and the types and quantities of equipment/props and food), obtaining a basic outdoor activity model;
import the basic outdoor activity model into a metaverse platform, and assign the basic outdoor activity model a unique identifier and a blockchain address;
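A minimal sketch of assigning the unique identifier and blockchain address. The disclosure does not specify the derivation, so the UUID identifier and the hash-derived, Ethereum-style address below are assumptions; a real metaverse platform would obtain the address from its chain's wallet or contract API:

```python
import hashlib
import json
import uuid

def register_model(model: dict) -> tuple:
    """Assign a unique identifier and a simulated blockchain address
    to a basic outdoor activity model (illustrative scheme)."""
    identifier = uuid.uuid4().hex  # unique per registration
    # Canonical serialization so the same model content yields the same digest,
    # supporting the traceability claim.
    digest = hashlib.sha256(
        json.dumps(model, sort_keys=True).encode()
    ).hexdigest()
    address = "0x" + digest[:40]   # Ethereum-style 20-byte hex address
    return identifier, address
```

Because the address is derived from the model content, any tampering with the stored model would no longer match its recorded address, which is one plausible way to realize the uniqueness and traceability the text claims.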
add a virtual character model to the basic outdoor activity model, the virtual character model including but not limited to: appearance models, such as a facial model (including the facial mesh, expression simulation, etc.), a body model (stereoscopic models of body parts such as the head, hands and feet), a clothing model, and skin and hairstyle detail models; behavior models, such as basic actions (standing, walking, etc.), interactive actions involving the body and gestures, and facial animation (such as expression changes and lip synchronization with speech); interaction models, such as a dialogue system (capable of voice or text dialogue), an emotion system (capable of emotional responses), and autonomous control (capable of making autonomous decisions according to the circumstances); and scene adaptation models, such as spatial perception (perceiving the virtual space and avoiding obstacles) and scene interaction (interacting with the virtual environment);
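The composition of the virtual character model described above might be represented as nested data structures; every field name and default value below is illustrative, not part of the disclosure:

```python
from dataclasses import dataclass, field

@dataclass
class AppearanceModel:
    face: str = "default-face-mesh"   # facial mesh + expression simulation
    body: str = "default-body-mesh"   # head, hands, feet, etc.
    clothing: str = "casual"
    hairstyle: str = "short"

@dataclass
class BehaviorModel:
    basic_actions: list = field(default_factory=lambda: ["stand", "walk"])
    interactive_actions: list = field(default_factory=lambda: ["wave", "point"])
    facial_animation: bool = True     # expression changes, lip sync

@dataclass
class InteractionModel:
    dialogue: bool = True             # voice/text dialogue system
    emotion: bool = True              # emotional response capability
    autonomy: bool = True             # autonomous decisions per circumstances

@dataclass
class VirtualCharacter:
    appearance: AppearanceModel = field(default_factory=AppearanceModel)
    behavior: BehaviorModel = field(default_factory=BehaviorModel)
    interaction: InteractionModel = field(default_factory=InteractionModel)
```

Structuring the character this way keeps the appearance, behavior, interaction and scene-adaptation concerns separable, which matches the text's later steps that adjust each sub-model independently from monitoring data.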
store the basic outdoor activity model and its related information (including but not limited to the virtual character models and the outdoor activity parameters) in a distributed database of the metaverse platform, and send the identifier and the blockchain address of the basic outdoor activity model to the corresponding Internet of Things server;
the internet of things server is configured to: and returning the identifier and the blockchain address to the terminal of the Internet of things.
It is understood that in this embodiment the outdoor activity parameters include, but are not limited to, the activity type (different types of outdoor activities such as hiking, mountain climbing, swimming, outdoor barbecue and outdoor expansion), the activity scene (different outdoor environments such as mountains, lakes, grasslands, beaches and riverbanks), the activity duration, the number of participants, participant attributes (such as age, sex, body shape, health status and outdoor activity experience), the activity theme, activity objectives (such as fitness, holidays or expanding one's social circle), activity requirements (such as required equipment and skill level), activity intensity (mild, medium or high), budget information, and so on.
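A simple validation sketch for the outdoor activity parameters listed above. The accepted values are taken from the examples in that paragraph; the function name and dictionary keys are assumptions:

```python
# Accepted values drawn from the examples in the text.
ACTIVITY_TYPES = {"hiking", "climbing", "swimming", "barbecue", "expansion"}
INTENSITIES = {"mild", "medium", "high"}

def validate_params(params: dict) -> list:
    """Return a list of problems; an empty list means the outdoor
    activity parameters are acceptable as cloud-server input."""
    errors = []
    if params.get("activity_type") not in ACTIVITY_TYPES:
        errors.append("unknown activity type")
    if params.get("intensity") not in INTENSITIES:
        errors.append("intensity must be mild, medium or high")
    if not isinstance(params.get("participants"), int) or params["participants"] <= 0:
        errors.append("participants must be a positive integer")
    return errors
```

Validating at the Internet of Things terminal before sending to the cloud server would catch malformed input before template matching begins.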
With the scheme of the embodiments, an outdoor activity plan with a strong sense of realism and immersion can be quickly generated or constructed according to a user's personalized requirements, and, combined with a metaverse platform and blockchain technology, the uniqueness, security and traceability of the activity are achieved. At the same time, rich and varied virtual characters and interaction modes can be provided to users, increasing the enjoyment of and participation in outdoor activities.
It should be noted that the block diagram of the system for automatically generating metaverse characters based on the Internet of Things shown in fig. 1 is only schematic, and the number of modules illustrated does not limit the protection scope of the invention.
In some possible embodiments of the invention, the Internet of Things terminal is configured to:
receive a first request from the user (including a simulation terminal selection instruction) and send the first request to the Internet of Things server;
send the basic outdoor activity model to a first simulation terminal selected by the user from the simulation terminals according to the first request;
determine, according to the basic outdoor activity model, a first intelligent perception terminal from a plurality of intelligent perception terminals preconfigured in the space where the first simulation terminal is located (for example, selecting, from a plurality of high-definition cameras, a first high-definition camera that matches the spatial range covered by the basic outdoor activity model and can efficiently collect high-quality real-time images and motion data of the participants);
the first simulation terminal is configured to: simulate a virtual scene image corresponding to the basic outdoor activity model, and project the virtual scene image to obtain a virtual scene. Specifically: import the basic outdoor activity model into the first simulation terminal; using a suitable rendering engine and virtual reality development tools, the first simulation terminal adds the corresponding activity elements to the virtual scene (including temporary facilities, activity areas, props, characters, etc., and designs the actions and rules for interacting with these elements to enhance user participation and experience) according to the specific outdoor activity requirements and the scene, object and character model data in the basic outdoor activity model, constructing a virtual scene image of the basic outdoor activity model; design the user interface and control modes of the first simulation terminal so that a participant can navigate, interact with and operate the virtual scene (interaction may use a handle, a touch screen, gesture recognition, voice input, etc.); through appropriate lighting, application of materials and textures, and adjustment of physical properties, the first simulation terminal renders and simulates the virtual scene image realistically and projects it into the virtual scene; after projection is completed, test and optimize to ensure the stability, performance and user experience of the virtual scene, and make any necessary adjustments according to the test results to provide a better user experience;
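The scene-construction step might, in outline, assemble the activity elements into a renderer-agnostic scene description before handing it to a rendering engine; the field names and interaction modes below are illustrative assumptions:

```python
def build_virtual_scene(activity_model: dict) -> dict:
    """Assemble a renderer-agnostic scene description from a basic
    outdoor activity model (illustrative sketch)."""
    scene = {"terrain": activity_model.get("scene", "grassland"),
             "elements": [], "interactions": []}
    # Temporary facilities / activity areas and props become scene elements.
    for area in activity_model.get("areas", []):
        scene["elements"].append({"kind": "area", "name": area})
    for prop in activity_model.get("props", []):
        scene["elements"].append({"kind": "prop", "name": prop})
    # Characters get interaction rules (handle, gesture, voice, ...).
    for character in activity_model.get("characters", []):
        scene["elements"].append({"kind": "character", "name": character})
        scene["interactions"].append({"target": character,
                                      "modes": ["gesture", "voice", "controller"]})
    return scene
```

A real simulation terminal would translate such a description into engine-specific assets, lighting and physics settings before projecting the rendered scene.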
the Internet of Things server is configured to:
cause the first intelligent perception terminal to collect on-site monitoring data of the virtual scene;
adjust the basic outdoor activity model, the character model and the interaction model according to the monitoring data, so as to provide the virtual scene, basic outdoor activity model, character model or interaction model that best fits the actual outdoor activity, thereby ensuring that outdoor activity planning can be completed accurately and efficiently (for example, modifying a participant's three-dimensional character model from the collected participant images and motion data using image processing and motion capture technology, while adjusting the proportions, appearance and other aspects of the character model according to the participant's height, body shape and other characteristics).
In some possible embodiments of the invention, in the step of adjusting the basic outdoor activity model, the character model and the interaction model according to the monitoring data, the Internet of Things server is configured to:
extract environmental data from the monitoring data;
adjust, according to the environmental data, the appearance and behavior of the virtual character corresponding to the character model to match the actual environment, obtaining a first virtual character and a first character model (for example, in a cold environment the virtual character wears thick clothes and exhales visible breath);
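The environment-driven adjustment (e.g. thick clothes and visible breath in the cold) can be sketched as a small rule table; the temperature thresholds and field names are assumptions for illustration:

```python
def adapt_character_to_environment(character: dict, env: dict) -> dict:
    """Rule-based adaptation of a character's appearance/behavior to
    sensed environment data (thresholds are illustrative)."""
    adjusted = dict(character)
    temp = env.get("temperature_c", 20)
    if temp < 5:
        adjusted["clothing"] = "thick coat"
        adjusted["effects"] = ["visible breath"]   # exhale white breath in the cold
    elif temp > 30:
        adjusted["clothing"] = "light sportswear"
        adjusted["effects"] = ["sweat"]
    if env.get("raining"):
        adjusted.setdefault("props", []).append("raincoat")
    return adjusted
```

Applying such rules to the character model yields the "first virtual character" whose appearance matches the actual environment before it is rendered in real time.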
displaying the adjusted first virtual character in real time in the projection space of the first simulation terminal (i.e., projecting it into the virtual scene) through the first simulation terminal by means of real-time rendering technology, so that a participant (the user, or another tester added by the user) can interact with the first virtual character (the adjusted first virtual character may also be displayed in real time in the projection space of a second simulation terminal among the simulation terminals, so that the participant can interact with the first virtual character there); in this way the interaction between the participant and other participants in the actual outdoor activity can be simulated, and the participant can immersively experience and evaluate the basic outdoor activity model;
acquiring the interaction data, collected by the first intelligent perception terminal, that is generated when the participant interacts with the first virtual character via gestures, voice and/or a control device;
and adjusting the interaction model according to the interaction data to obtain a first interaction model.
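The environment-driven character adjustment above (e.g., cold weather producing thick clothing and visible breath) can be sketched as a rule-based mapping; the temperature thresholds and attribute names are illustrative assumptions:

```python
def adjust_character(character: dict, env: dict) -> dict:
    """Return a 'first character model' adapted to sensed environment data."""
    adjusted = dict(character)                 # do not mutate the input model
    temp = env.get("temperature_c", 20.0)      # assumed sensor field name
    if temp < 5.0:                             # assumed "cold" threshold
        adjusted["clothing"] = "thick"
        adjusted["breath_visible"] = True      # exhaled breath becomes visible
    elif temp > 30.0:
        adjusted["clothing"] = "light"
        adjusted["breath_visible"] = False
    else:
        adjusted.setdefault("clothing", "regular")
        adjusted["breath_visible"] = False
    return adjusted
```

A production system would drive appearance and behavior through the rendering engine rather than flat attributes; the sketch only shows the environment-to-character mapping itself.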
In this embodiment, the activity participants experience and test the virtual scene generated from the basic outdoor activity model, and the interaction model is then modified according to the interaction between the participants and the virtual characters, so that an interaction model that better fits the participants can be obtained, improving the user experience.
In some possible embodiments of the present invention, in the step of adjusting the basic outdoor activity model, the character model and the interaction model according to the monitoring data, the internet of things server is configured to:
evaluating the participants' activity-interest matching degree, physical matching degree, personality matching degree and health matching degree according to the interaction data, to obtain first evaluation data;
and modifying and adjusting outdoor activity data in the basic outdoor activity model, such as the activity type, activity theme, activity flow, activity venue, activity time, activity participants, behavior guidance for the activity participants, activity equipment/props and emergency plans, according to the first character model, the first interaction model and the first evaluation data, to obtain a first outdoor activity model.
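One minimal way to derive the first evaluation data from interaction records is to average per-dimension scores; the record format and the neutral default of 0.5 are assumptions made for illustration:

```python
def evaluate_participant(interactions: list) -> dict:
    """Aggregate interaction records into first evaluation data:
    a score in [0, 1] for each of the four matching-degree dimensions."""
    dims = ("interest", "physical", "personality", "health")
    observed = {d: [] for d in dims}
    for record in interactions:
        for d in dims:
            if d in record:
                observed[d].append(record[d])
    # Mean per dimension; 0.5 (neutral) when no signal was observed.
    return {d: (sum(v) / len(v) if v else 0.5)
            for d, v in observed.items()}
```

The resulting dictionary could then drive the modification of activity type, flow, venue and so on described in the step above.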
It can be appreciated that, in order to provide personalized services and improve the user experience of the activity participants, in this embodiment, by collecting interaction data of the participants in the virtual scene corresponding to the basic outdoor activity model, a "perfect plan" for the corresponding outdoor activity, such as the activity type, activity theme, activity flow, activity venue, activity time, activity participants, behavior guidance for the activity participants (such as individual activity behavior planning and precautions for each participant), activity equipment/props, emergency plans and the like, can be further determined, thereby obtaining a first outdoor activity model that ensures efficient, intelligent and safe organization of the outdoor activity.
In some possible embodiments of the invention, in order to provide personalized services and experiences to the participants, artificial intelligence technology is applied: by analyzing the interests, preferences and needs of the participants, relevant sub-activity content, sub-activity times, sub-activity venues, navigation paths and/or interactive tasks within sub-activities of the overall outdoor activity are recommended to the participants, and real-time feedback and guidance are provided.
In some possible embodiments of the present invention, in the step of adding a virtual character model to the basic outdoor activity model, the cloud server is configured to:
generating or constructing one or more matched virtual character models according to the character parameters input by the user or the personnel characteristic data of the participants, and associating the virtual character models with the basic outdoor activity model;
and generating or selecting one or more matched interaction models according to the interaction parameters input by the user or the personnel characteristic data, and associating the matched interaction models with the virtual character model.
It will be appreciated that, in order for the basic outdoor activity model to provide a more accurate reference, in this embodiment one or more matched virtual character models are generated or constructed based on the character parameters input by the user or the personnel characteristic data of the participants, and associated with the basic outdoor activity model; this may proceed as follows: constructing a virtual character library (containing preset virtual character resources of different genders, ages, body types, personality traits, health states, skills and the like); inputting the character parameters or personnel characteristic data (which may be characteristic data such as gender, age, height and weight, personality traits, health status, skills and the like); establishing a parameterized mapping model for the characters; mapping the input character parameters or personnel characteristic data to the best-matching character resources in the virtual character library; feeding the obtained character resources and the character parameters/personnel characteristic data into a generative adversarial network to generate a customized new virtual character; fine-tuning the new virtual character to ensure that the generated character meets the requirements of the input character parameters/personnel characteristic data; and outputting the customized virtual character model. Generating or selecting one or more matched interaction models according to the interaction parameters input by the user or the personnel characteristic data may proceed as follows: constructing a virtual character interaction model library, with different types of preset interaction modules such as action interaction, voice interaction and gaze interaction modules; inputting the interaction parameters (such as the identity/personality labels of the virtual character, the interaction scene, the expected interaction mode and the like) or personnel characteristic data; mapping the input interaction parameters/personnel characteristic data to the preset interaction modules through a machine learning model and selecting a matched interaction module; fine-tuning the parameters of the selected interaction module according to the interaction parameters/personnel characteristic data to generate a customized interaction module (for example, adjusting the voice style, action details and the like); adjusting the customized interaction module using the character parameters of the first virtual character to form a complete interaction model; loading the interaction model in the virtual scene, performing verification tests, collecting user interaction feedback, and adjusting and optimizing the interaction model to achieve continuous learning; and associating the interaction model with the virtual character model.
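The library-mapping step above (mapping input personnel characteristic data to the best-matching character resource before GAN-based customization) can be sketched as a nearest-neighbour search over shared numeric features; the feature names are illustrative assumptions:

```python
def match_character(library: list, features: dict) -> dict:
    """Map personnel characteristic data to the best-matching character
    resource in a virtual character library (nearest neighbour over the
    numeric features the two share)."""
    def distance(entry: dict) -> float:
        shared = [k for k in features
                  if k in entry and isinstance(features[k], (int, float))]
        if not shared:
            return float("inf")   # nothing comparable: worst possible match
        return sum((entry[k] - features[k]) ** 2 for k in shared) ** 0.5
    return min(library, key=distance)
```

In the disclosed flow, the matched resource would then be fed to the generative adversarial network for customization; categorical traits (personality, skills) would need an encoding step this sketch omits.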
Referring to fig. 2, another embodiment of the present invention provides a meta-universe character automatic generation method based on the internet of things, which is applied to a meta-universe character automatic generation system based on the internet of things, wherein the meta-universe character automatic generation system comprises a cloud server, an internet of things terminal, a plurality of simulation terminals, a plurality of intelligent perception terminals, and an internet of things server for managing the internet of things terminal, the simulation terminals and the intelligent perception terminals; the meta-universe character automatic generation method comprises the following steps:
the terminal of the Internet of things receives outdoor activity parameters input by a user and sends the outdoor activity parameters to the cloud server;
the cloud server selects one or more matched first outdoor activity templates from a preset outdoor activity template library according to the outdoor activity parameters (templates for different types of outdoor activities are preset in the cloud server, each containing at least information such as the activity type, scene, theme, duration, personnel, equipment and behaviors), and fuses the first outdoor activity templates to generate an initial outdoor activity model. Specifically, this may be: inputting the outdoor activity parameters (such as the activity type, scene, number of people and the like, the parameters being represented by numerical values or labels); extracting key features (such as the activity type, scene labels, number of personnel and the like) from the outdoor activity parameters through a feature extraction algorithm; matching the extracted key features against the feature vectors of the templates in the outdoor activity template library and computing feature similarities; ranking by similarity and selecting the top N templates that best match the outdoor activity parameters; selecting a parameterized generative network (such as a deep neural network capable of conditional generation), and feeding it the top N templates together with the outdoor activity parameters to generate N parameterized new templates (through a trained model, the parameterized generative network performs conditional generation from the input template and parameters, outputting a new template that resembles the input template but is conditioned on the user-specified parameters, thus becoming a personalized new template; for example, if the input template is "company marketing department outdoor team-building" and the parameter is "50 people", the generative network takes the input template as the condition, adds the "50 people" parameter condition, and outputs a new template similar to "company marketing department outdoor team-building" but with 50 roles); fusing the N generated new templates using a template fusion algorithm (such as attention-based template fusion) to obtain an overall model; verifying and checking the overall model to ensure that it meets the requirements of the input outdoor activity parameters; and finally outputting the verified initial outdoor activity model;
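The template-matching stage above (feature extraction, similarity computation, ranking, and top-N selection) can be sketched with cosine similarity over precomputed feature vectors; the template format is an assumption for illustration:

```python
def top_n_templates(templates: list, query_features: list, n: int = 3) -> list:
    """Rank outdoor activity templates by cosine similarity between their
    feature vectors and the query's, returning the N best matches."""
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = sum(x * x for x in a) ** 0.5
        nb = sum(y * y for y in b) ** 0.5
        return dot / (na * nb) if na and nb else 0.0
    ranked = sorted(templates,
                    key=lambda t: cosine(t["features"], query_features),
                    reverse=True)
    return ranked[:n]
```

The selected templates would then be passed to the parameterized generative network and fusion stages, which are outside the scope of this sketch.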
the cloud server performs detail adjustment on the initial outdoor activity model (including but not limited to modifying the activity route, activity schedule, type and quantity of equipment/props, type and quantity of food, and the like) according to the outdoor activity parameters and the position information of the internet of things terminal, to obtain a basic outdoor activity model;
the cloud server imports the basic outdoor activity model into a metauniverse platform and assigns a unique identifier and a blockchain address to the basic outdoor activity model;
the cloud server adds virtual character models to the basic outdoor activity model, including but not limited to: appearance models, such as facial models (including facial meshes, expression simulation and the like), body models including three-dimensional models of body parts (e.g., head, hands, feet), clothing models, and skin and hairstyle detail models; behavior models, such as basic actions (standing, walking and the like), interactive actions involving the body and gestures, and facial animation (such as facial expression changes and lip synchronization with speech); interaction models, such as a dialogue system (capable of voice or text dialogue interaction), an emotion system (with emotional response capability), and autonomous control (capable of autonomous decisions and actions according to the situation); and scene adaptation models, such as spatial perception (perceiving the virtual space and avoiding obstacles) and scene interaction (interacting with the virtual environment scene);
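The four model families listed above (appearance, behavior, interaction, and scene adaptation) suggest a straightforward composition; the following dataclass sketch, with assumed default values, illustrates one possible structure:

```python
from dataclasses import dataclass, field

@dataclass
class AppearanceModel:
    face: str = "default-mesh"                 # facial mesh / expressions
    body_parts: list = field(default_factory=lambda: ["head", "hands", "feet"])
    clothing: str = "default"

@dataclass
class BehaviorModel:
    basic_actions: list = field(default_factory=lambda: ["stand", "walk"])
    facial_animation: bool = True              # expression changes, lip sync

@dataclass
class InteractionModel:
    dialogue: bool = True                      # voice/text dialogue system
    emotion: bool = True                       # emotional response capability
    autonomous: bool = False                   # autonomous decision-making

@dataclass
class SceneAdaptationModel:
    spatial_perception: bool = True            # perceive space, avoid obstacles
    scene_interaction: bool = True             # interact with the environment

@dataclass
class VirtualCharacterModel:
    appearance: AppearanceModel = field(default_factory=AppearanceModel)
    behavior: BehaviorModel = field(default_factory=BehaviorModel)
    interaction: InteractionModel = field(default_factory=InteractionModel)
    scene_adaptation: SceneAdaptationModel = field(
        default_factory=SceneAdaptationModel)
```

The flags and defaults are placeholders; a real system would attach meshes, animation clips, and dialogue policies rather than booleans.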
storing the basic outdoor activity model and its related information (including but not limited to the virtual character models, the outdoor activity parameters and the like) in a distributed database of the metauniverse platform, and transmitting the identifier and blockchain address of the basic outdoor activity model to the corresponding internet of things server;
and the Internet of things server returns the identifier and the blockchain address to the Internet of things terminal.
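The identifier and blockchain-address assignment in the steps above can be sketched as follows; the UUID identifier and the Ethereum-style address derived from a content hash are illustrative choices, as the disclosure does not specify the scheme:

```python
import hashlib
import json
import uuid

def register_model(model: dict) -> tuple:
    """Assign a unique identifier and a mock blockchain address to a
    basic outdoor activity model before it is stored."""
    identifier = uuid.uuid4().hex                       # unique per call
    digest = hashlib.sha256(
        json.dumps(model, sort_keys=True).encode()).hexdigest()
    address = "0x" + digest[:40]   # Ethereum-style 20-byte hex address
    return identifier, address
```

A content-derived address gives the traceability the scheme aims for (the same model always maps to the same address), while the identifier distinguishes individual registrations.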
It is understood that, in this embodiment, the outdoor activity parameters include, but are not limited to: activity type (different types of outdoor activities such as hiking, mountain climbing, swimming, outdoor barbecue, outdoor team-building and the like), activity scene (different outdoor environment scenes such as mountains, lakes, grassland, beach/riverside and the like), activity duration, number of participants, participant attributes (such as age, gender, body shape, health status, outdoor activity experience and the like), activity theme, activity objectives (such as fitness, vacation, expanding one's social circle and the like), activity requirements (such as required equipment, skill level and the like), activity intensity (light, medium or high intensity), budget information, and the like.
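A basic validation of such outdoor activity parameters might look like the following sketch; the key names and the allowed intensity values are assumptions mirroring the enumeration above:

```python
ALLOWED_INTENSITY = {"light", "medium", "high"}

def validate_activity_params(params: dict) -> list:
    """Return a list of problems with the outdoor activity parameters;
    an empty list means they look usable."""
    problems = []
    if not params.get("activity_type"):
        problems.append("activity_type is required")
    n = params.get("num_participants", 0)
    if not isinstance(n, int) or n <= 0:
        problems.append("num_participants must be a positive integer")
    intensity = params.get("intensity")
    if intensity is not None and intensity not in ALLOWED_INTENSITY:
        problems.append("intensity must be light, medium or high")
    return problems
```

Validation of this kind would sit at the internet of things terminal or cloud server before template matching, so that malformed input is rejected early.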
The solution provided by this embodiment of the invention can quickly generate or construct a highly realistic and immersive outdoor activity plan according to a user's personalized needs, and its combination with the metauniverse platform and blockchain technology realizes the uniqueness, security and traceability of the activity. Meanwhile, rich and diverse virtual characters and interaction modes can be provided to users, increasing the fun of and engagement in outdoor activities.
In some possible embodiments of the present invention, the metauniverse character automatic generation method further includes:
the internet of things terminal receives a first request (including a simulation terminal selection instruction) from the user and sends the first request to the internet of things server;
the Internet of things server sends the basic outdoor activity model to a first simulation terminal selected by the user from the simulation terminals according to the first request;
the internet of things server determines a first intelligent perception terminal from a plurality of intelligent perception terminals preconfigured in the space where the first simulation terminal is located, according to the basic outdoor activity model (for example, selecting, from a plurality of high-definition cameras, a first high-definition camera that matches the spatial range covered by the basic outdoor activity model and can efficiently acquire high-quality real-time images and motion data of the participants);
the first simulation terminal simulates a virtual scene image corresponding to the basic outdoor activity model and projects the virtual scene image to obtain a virtual scene. Specifically: the basic outdoor activity model is imported into the first simulation terminal; using a suitable rendering engine and virtual reality development tools, the first simulation terminal adds corresponding activity elements to the virtual scene (including temporary facilities, activity areas, props, character roles and the like, with actions and rules designed for interacting with these elements to enhance user participation and experience) according to the specific outdoor activity requirements and the scene models, object models, character models and other data involved in the basic outdoor activity model, thereby constructing a virtual scene image of the basic outdoor activity model; a user interface and control scheme for the first simulation terminal are designed so that a participant can navigate, interact with and operate the virtual scene (interaction may be performed via a handheld controller, touch screen, gesture recognition, voice input and the like); the first simulation terminal achieves realistic rendering and simulation of the virtual scene image by controlling appropriate lighting, applying materials and textures, adjusting physical properties and the like, and projects the virtual scene image into the virtual scene; after projection of the virtual scene is completed, testing and optimization are performed to ensure the stability, performance and user experience of the virtual scene, and the virtual scene is adjusted and optimized as needed according to the test results to provide a better user experience;
enabling the first intelligent perception terminal to collect on-site monitoring data of the virtual scene;
the internet of things server adjusts the basic outdoor activity model, the character model and the interaction model according to the monitoring data, so as to provide the virtual scene, basic outdoor activity model, character model or interaction model that best fits the actual outdoor activity, thereby ensuring that outdoor activity planning can be completed accurately and efficiently (for example, by using the collected participant images and motion data to modify a participant's character model through image processing and motion capture technology, while adjusting the proportions, appearance and the like of the character model according to the participant's height, body shape and other characteristics).
In some possible embodiments of the present invention, the step of the internet of things server adjusting the basic outdoor activity model, the character model and the interaction model according to the monitoring data includes:
extracting environmental data from the monitored data;
according to the environment data, adjusting the appearance and behavior of the virtual character corresponding to the character model to match the actual environment, to obtain a first virtual character and a first character model (for example, in a cold environment, the virtual character wears thick clothing and exhales visible breath);
displaying the adjusted first virtual character in real time in the projection space of the first simulation terminal (i.e., projecting it into the virtual scene) through the first simulation terminal by means of real-time rendering technology, so that a participant (the user, or another tester added by the user) can interact with the first virtual character (the adjusted first virtual character may also be displayed in real time in the projection space of a second simulation terminal among the simulation terminals, so that the participant can interact with the first virtual character there); in this way the interaction between the participant and other participants in the actual outdoor activity can be simulated, and the participant can immersively experience and evaluate the basic outdoor activity model;
the participant interacts with the first virtual character by means of gestures, voice and/or a manipulation device;
the first intelligent perception terminal acquires interaction data between the participant and the first virtual role;
and adjusting the interaction model according to the interaction data to obtain a first interaction model.
In this embodiment, the activity participants experience and test the virtual scene generated from the basic outdoor activity model, and the interaction model is then modified according to the interaction between the participants and the virtual characters, so that an interaction model that better fits the participants can be obtained, improving the user experience.
In some possible embodiments of the present invention, the step of the internet of things server adjusting the basic outdoor activity model, the character model and the interaction model according to the monitoring data includes:
evaluating the participants' activity-interest matching degree, physical matching degree, personality matching degree and health matching degree according to the interaction data, to obtain first evaluation data;
and modifying and adjusting outdoor activity data in the basic outdoor activity model, such as the activity type, activity theme, activity flow, activity venue, activity time, activity participants, behavior guidance for the activity participants, activity equipment/props and emergency plans, according to the first character model, the first interaction model and the first evaluation data, to obtain a first outdoor activity model.
It can be appreciated that, in order to provide personalized services and improve the user experience of the activity participants, in this embodiment, by collecting interaction data of the participants in the virtual scene corresponding to the basic outdoor activity model, a "perfect plan" for the corresponding outdoor activity, such as the activity type, activity theme, activity flow, activity venue, activity time, activity participants, behavior guidance for the activity participants (such as individual activity behavior planning and precautions for each participant), activity equipment/props, emergency plans and the like, can be further determined, thereby obtaining a first outdoor activity model that ensures efficient, intelligent and safe organization of the outdoor activity.
In some possible embodiments of the invention, in order to provide personalized services and experiences to the participants, artificial intelligence technology is applied: by analyzing the interests, preferences and needs of the participants, relevant sub-activity content, sub-activity times, sub-activity venues, navigation paths and/or interactive tasks within sub-activities of the overall outdoor activity are recommended to the participants, and real-time feedback and guidance are provided.
In some possible embodiments of the present invention, the step of adding a virtual character model to the basic outdoor activity model by the cloud server includes:
generating or constructing one or more matched virtual character models according to the character parameters input by the user or the personnel characteristic data of the participants, and associating the virtual character models with the basic outdoor activity model;
and generating or selecting one or more matched interaction models according to the interaction parameters input by the user or the personnel characteristic data, and associating the matched interaction models with the virtual character model.
It will be appreciated that, in order for the basic outdoor activity model to provide a more accurate reference, in this embodiment one or more matched virtual character models are generated or constructed based on the character parameters input by the user or the personnel characteristic data of the participants, and associated with the basic outdoor activity model; this may proceed as follows: constructing a virtual character library (containing preset virtual character resources of different genders, ages, body types, personality traits, health states, skills and the like); inputting the character parameters or personnel characteristic data (which may be characteristic data such as gender, age, height and weight, personality traits, health status, skills and the like); establishing a parameterized mapping model for the characters; mapping the input character parameters or personnel characteristic data to the best-matching character resources in the virtual character library; feeding the obtained character resources and the character parameters/personnel characteristic data into a generative adversarial network to generate a customized new virtual character; fine-tuning the new virtual character to ensure that the generated character meets the requirements of the input character parameters/personnel characteristic data; and outputting the customized virtual character model. Generating or selecting one or more matched interaction models according to the interaction parameters input by the user or the personnel characteristic data may proceed as follows: constructing a virtual character interaction model library, with different types of preset interaction modules such as action interaction, voice interaction and gaze interaction modules; inputting the interaction parameters (such as the identity/personality labels of the virtual character, the interaction scene, the expected interaction mode and the like) or personnel characteristic data; mapping the input interaction parameters/personnel characteristic data to the preset interaction modules through a machine learning model and selecting a matched interaction module; fine-tuning the parameters of the selected interaction module according to the interaction parameters/personnel characteristic data to generate a customized interaction module (for example, adjusting the voice style, action details and the like); adjusting the customized interaction module using the character parameters of the first virtual character to form a complete interaction model; loading the interaction model in the virtual scene, performing verification tests, collecting user interaction feedback, and adjusting and optimizing the interaction model to achieve continuous learning; and associating the interaction model with the virtual character model.
It should be noted that, for simplicity of description, the foregoing method embodiments are all described as a series of acts, but it should be understood by those skilled in the art that the present application is not limited by the order of acts described, as some steps may be performed in other orders or concurrently in accordance with the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily required for the present application.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, such as the above-described division of units, merely a division of logic functions, and there may be additional manners of dividing in actual implementation, such as multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, or may be in electrical or other forms.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units described above, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable memory. Based on such understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory and comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods of the various embodiments of the present application. The aforementioned memory includes: a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or other media capable of storing program code.
Those of ordinary skill in the art will appreciate that all or part of the steps in the various methods of the above embodiments may be implemented by a program instructing associated hardware; the program may be stored in a computer-readable memory, which may include a flash disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The foregoing has outlined rather broadly the more detailed description of embodiments of the application, wherein the principles and embodiments of the application are explained in detail using specific examples, the above examples being provided solely to facilitate the understanding of the method and core concepts of the application; meanwhile, as those skilled in the art will have variations in the specific embodiments and application scope in accordance with the ideas of the present application, the present description should not be construed as limiting the present application in view of the above.
Although the present application is disclosed above, the present application is not limited thereto. Variations and modifications, including combinations of the different functions and implementation steps, as well as embodiments of the software and hardware, may be readily apparent to those skilled in the art without departing from the spirit and scope of the application.

Claims (10)

1. An internet-of-things-based metauniverse character automatic generation system, characterized by comprising: a cloud server, an internet of things terminal, a plurality of simulation terminals, a plurality of intelligent perception terminals, and an internet of things server for managing the internet of things terminal, the simulation terminals and the intelligent perception terminals;
the internet of things terminal is configured to: receiving outdoor activity parameters input by a user, and sending the outdoor activity parameters to the cloud server;
the cloud server is configured to:
selecting one or more matched first outdoor activity templates from a preset outdoor activity template library according to the outdoor activity parameters, and fusing the first outdoor activity templates to generate an initial outdoor activity model;
according to the outdoor activity parameters and the position information of the terminal of the Internet of things, carrying out detail adjustment on the initial outdoor activity model to obtain a basic outdoor activity model;
importing the basic outdoor activity model into a metauniverse platform, and assigning a unique identifier and a blockchain address to the basic outdoor activity model;
adding a virtual character model to the basic outdoor activity model;
storing the basic outdoor activity model and related information thereof in a distributed database of the metauniverse platform, and sending the identifier of the basic outdoor activity model and the blockchain address to the corresponding internet of things server;
The internet of things server is configured to: return the identifier and the blockchain address to the internet of things terminal.
2. The metaverse character automatic generation system based on the Internet of Things according to claim 1, wherein the Internet of Things terminal is further configured to: receive a first request from the user, and send the first request to the Internet of Things server;
the Internet of Things server is configured to:
send the basic outdoor activity model, according to the first request, to a first simulation terminal selected by the user from the simulation terminals;
determine, according to the basic outdoor activity model, a first intelligent perception terminal from the plurality of intelligent perception terminals preconfigured in the space where the first simulation terminal is located;
the first simulation terminal is configured to: simulate a virtual scene image corresponding to the basic outdoor activity model, and project the virtual scene image to obtain a virtual scene;
the Internet of Things server is further configured to:
cause the first intelligent perception terminal to collect monitoring data in the virtual scene field; and
adjust the basic outdoor activity model, the character model, and the interaction model according to the monitoring data.
3. The metaverse character automatic generation system based on the Internet of Things according to claim 2, wherein, in adjusting the basic outdoor activity model, the character model, and the interaction model according to the monitoring data, the Internet of Things server is configured to:
extract environmental data from the monitoring data;
adjust, according to the environmental data, the appearance and behavior of the virtual character corresponding to the character model to match the actual environment, obtaining a first virtual character and a first character model;
display the adjusted first virtual character in real time in a projection space of the first simulation terminal, through the first simulation terminal and using a real-time rendering technique, so that a participant can interact with the first virtual character;
acquire interaction data, collected by the first intelligent perception terminal, generated by the participant interacting with the first virtual character by means of gestures, voice, and/or a control device; and
adjust the interaction model according to the interaction data to obtain a first interaction model.
4. The metaverse character automatic generation system based on the Internet of Things according to claim 3, wherein, in adjusting the basic outdoor activity model, the character model, and the interaction model according to the monitoring data, the Internet of Things server is further configured to:
evaluate the participant's activity interest matching degree, physical matching degree, personality matching degree, and health matching degree according to the interaction data, to obtain first evaluation data; and
modify and adjust outdoor activity data in the basic outdoor activity model, including activity type, activity topic, activity flow, activity venue, activity time, activity participants, behavior guidance for the activity participants, activity equipment/props, and emergency plans, according to the first character model, the first interaction model, and the first evaluation data, to obtain a first outdoor activity model.
5. The metaverse character automatic generation system based on the Internet of Things according to any one of claims 1 to 4, wherein, in adding a virtual character model to the basic outdoor activity model, the cloud server is configured to:
generate or construct one or more matching virtual character models according to character parameters input by the user or personnel characteristic data of the participants, and associate the virtual character models with the basic outdoor activity model; and
generate or select one or more matching interaction models according to interaction parameters input by the user or the personnel characteristic data, and associate the matching interaction models with the virtual character models.
6. A metaverse character automatic generation method based on the Internet of Things, characterized by being applied to a metaverse character automatic generation system based on the Internet of Things, the system comprising a cloud server, an Internet of Things terminal, a plurality of simulation terminals, a plurality of intelligent perception terminals, and an Internet of Things server for managing the Internet of Things terminal, the simulation terminals, and the intelligent perception terminals; the metaverse character automatic generation method comprising the following steps:
the Internet of Things terminal receives outdoor activity parameters input by a user and sends the outdoor activity parameters to the cloud server;
the cloud server selects one or more matching first outdoor activity templates from a preset outdoor activity template library according to the outdoor activity parameters, and fuses the first outdoor activity templates to generate an initial outdoor activity model;
the cloud server performs detail adjustment on the initial outdoor activity model according to the outdoor activity parameters and position information of the Internet of Things terminal, to obtain a basic outdoor activity model;
the cloud server imports the basic outdoor activity model into a metaverse platform and assigns a unique identifier and a blockchain address to the basic outdoor activity model;
the cloud server adds a virtual character model to the basic outdoor activity model;
the cloud server stores the basic outdoor activity model and its related information in a distributed database of the metaverse platform, and sends the identifier and the blockchain address of the basic outdoor activity model to the corresponding Internet of Things server; and
the Internet of Things server returns the identifier and the blockchain address to the Internet of Things terminal.
7. The metaverse character automatic generation method based on the Internet of Things according to claim 6, further comprising:
the Internet of Things terminal receives a first request from the user and sends the first request to the Internet of Things server;
the Internet of Things server sends the basic outdoor activity model, according to the first request, to a first simulation terminal selected by the user from the simulation terminals;
the Internet of Things server determines, according to the basic outdoor activity model, a first intelligent perception terminal from the plurality of intelligent perception terminals preconfigured in the space where the first simulation terminal is located;
the first simulation terminal simulates a virtual scene image corresponding to the basic outdoor activity model, and projects the virtual scene image to obtain a virtual scene;
the Internet of Things server causes the first intelligent perception terminal to collect monitoring data in the virtual scene field; and
the Internet of Things server adjusts the basic outdoor activity model, the character model, and the interaction model according to the monitoring data.
8. The metaverse character automatic generation method based on the Internet of Things according to claim 7, wherein the step of the Internet of Things server adjusting the basic outdoor activity model, the character model, and the interaction model according to the monitoring data comprises:
extracting environmental data from the monitoring data;
adjusting, according to the environmental data, the appearance and behavior of the virtual character corresponding to the character model to match the actual environment, obtaining a first virtual character and a first character model;
displaying the adjusted first virtual character in real time in a projection space of the first simulation terminal, through the first simulation terminal and using a real-time rendering technique, so that a participant can interact with the first virtual character;
the participant interacting with the first virtual character by means of gestures, voice, and/or a control device;
the first intelligent perception terminal collecting interaction data between the participant and the first virtual character; and
adjusting the interaction model according to the interaction data to obtain a first interaction model.
9. The metaverse character automatic generation method based on the Internet of Things according to claim 8, wherein the step of the Internet of Things server adjusting the basic outdoor activity model, the character model, and the interaction model according to the monitoring data further comprises:
evaluating the participant's activity interest matching degree, physical matching degree, personality matching degree, and health matching degree according to the interaction data, to obtain first evaluation data; and
modifying and adjusting outdoor activity data in the basic outdoor activity model, including activity type, activity topic, activity flow, activity venue, activity time, activity participants, behavior guidance for the activity participants, activity equipment/props, and emergency plans, according to the first character model, the first interaction model, and the first evaluation data, to obtain a first outdoor activity model.
10. The metaverse character automatic generation method based on the Internet of Things according to any one of claims 6 to 9, wherein the step of the cloud server adding a virtual character model to the basic outdoor activity model comprises:
generating or constructing one or more matching virtual character models according to character parameters input by the user or personnel characteristic data of the participants, and associating the virtual character models with the basic outdoor activity model; and
generating or selecting one or more matching interaction models according to interaction parameters input by the user or the personnel characteristic data, and associating the matching interaction models with the virtual character models.
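The cloud-server pipeline of claim 6 (template selection, fusion, detail adjustment, and identifier/address assignment) can be illustrated with a minimal, self-contained Python sketch. Everything here is hypothetical: the template library, the field names, and the use of a SHA-256 digest as a stand-in "blockchain address" are illustrative assumptions, not the patent's implementation, which would involve real IoT terminals, a metaverse platform, and an actual blockchain.

```python
import hashlib
import uuid

# Hypothetical preset outdoor activity template library (names are illustrative).
TEMPLATE_LIBRARY = {
    "hiking": {"terrain": "trail", "equipment": ["boots", "backpack"]},
    "camping": {"terrain": "campsite", "equipment": ["tent", "stove"]},
}

def select_templates(params):
    """Select matching first outdoor activity templates from the preset library."""
    return [TEMPLATE_LIBRARY[t] for t in params["activity_types"]
            if t in TEMPLATE_LIBRARY]

def fuse_templates(templates):
    """Fuse the selected templates into an initial outdoor activity model."""
    model = {"terrain": [], "equipment": []}
    for t in templates:
        model["terrain"].append(t["terrain"])
        model["equipment"].extend(t["equipment"])
    return model

def adjust_details(model, params, location):
    """Detail-adjust the initial model using the parameters and terminal position,
    yielding a basic outdoor activity model."""
    adjusted = dict(model)
    adjusted["location"] = location
    adjusted["duration_h"] = params.get("duration_h", 2)
    return adjusted

def register_on_platform(model, database):
    """Assign a unique identifier and a blockchain-style address, then store the
    model in a dict standing in for the platform's distributed database."""
    identifier = uuid.uuid4().hex
    address = hashlib.sha256(identifier.encode()).hexdigest()  # stand-in address
    database[identifier] = {"model": model, "address": address}
    return identifier, address

# Run the sketched pipeline end to end.
db = {}
params = {"activity_types": ["hiking", "camping"], "duration_h": 4}
initial = fuse_templates(select_templates(params))
basic = adjust_details(initial, params, location=(22.54, 114.06))
ident, addr = register_on_platform(basic, db)
```

In this sketch, `ident` and `addr` are what the Internet of Things server would return to the terminal; the dict `db` plays the role of the metaverse platform's distributed database.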
CN202311001214.6A 2023-08-09 2023-08-09 Automatic meta-universe character generation system and method based on Internet of things Pending CN116958447A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311001214.6A CN116958447A (en) 2023-08-09 2023-08-09 Automatic meta-universe character generation system and method based on Internet of things

Publications (1)

Publication Number Publication Date
CN116958447A true CN116958447A (en) 2023-10-27

Family

ID=88447489


Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105879410A (en) * 2016-06-03 2016-08-24 深圳市领芯者科技有限公司 Interactive biological system and interactive method based on sensing toy
CN111612559A (en) * 2019-02-26 2020-09-01 浙江开奇科技有限公司 Implementation method of virtual intelligent interactive cloud shelf
CN114415881A (en) * 2022-01-24 2022-04-29 东北大学 Meta-universe skiing system with real-time cloud-linked elements in ski field environment
CN114548939A (en) * 2022-02-25 2022-05-27 深圳市德立信环境工程有限公司 Public welfare activity operation method and system based on meta universe and block chain
CN114692802A (en) * 2022-03-16 2022-07-01 北京上方传媒科技股份有限公司 Method and system for using two-dimension code label as meta-space ecological entry
CN114820894A (en) * 2022-05-07 2022-07-29 深圳市固有色数码技术有限公司 Virtual role generation method and system
CN115203534A (en) * 2022-06-15 2022-10-18 丁伟俊 Activity preference analysis method and system applied to digital space
CN115408622A (en) * 2022-09-05 2022-11-29 江苏银承网络科技股份有限公司 Online interaction method and device based on meta universe and storage medium
CN115712657A (en) * 2022-10-20 2023-02-24 高哲 User demand mining method and system based on meta universe
CN115953540A (en) * 2023-02-03 2023-04-11 深圳市积木数字版权研究中心有限公司 Meta-universe virtual space construction system and method based on three-dimensional panorama
CN116032967A (en) * 2022-12-30 2023-04-28 天翼物联科技有限公司 Internet of things equipment management method and device in meta universe, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination