CN117934782A - Construction method, device, equipment and storage medium of XR augmented reality scene - Google Patents

Construction method, device, equipment and storage medium of XR augmented reality scene

Info

Publication number
CN117934782A
CN117934782A (application CN202410316994.1A)
Authority
CN
China
Prior art keywords
scene
target
data
augmented reality
dynamic interaction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202410316994.1A
Other languages
Chinese (zh)
Other versions
CN117934782B (en)
Inventor
吴湛
车守刚
刘永逵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Virtual Reality Shenzhen Intelligent Technology Co ltd
Original Assignee
Virtual Reality Shenzhen Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Virtual Reality Shenzhen Intelligent Technology Co ltd filed Critical Virtual Reality Shenzhen Intelligent Technology Co ltd
Priority to CN202410316994.1A priority Critical patent/CN117934782B/en
Priority claimed from CN202410316994.1A external-priority patent/CN117934782B/en
Publication of CN117934782A publication Critical patent/CN117934782A/en
Application granted granted Critical
Publication of CN117934782B publication Critical patent/CN117934782B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The application relates to the technical field of image data processing, and discloses a construction method, device, equipment and storage medium for an XR augmented reality scene. The method comprises the following steps: performing environment scanning on a target scene area to obtain area environment data, and performing sparse coding and three-dimensional environment modeling to obtain a three-dimensional environment model; performing object scanning to obtain scene object data and performing virtual object generation to obtain a virtual object model; performing object dynamic interaction analysis to obtain virtual object dynamic interaction logic; performing scene integration and multi-task learning to obtain an initial XR augmented reality scene; acquiring physiological signals and behavior data and creating a multisensory feedback mechanism; and acquiring scene response data and performing cooperative control response optimization to obtain a target XR augmented reality scene corresponding to the target scene area, thereby improving both the construction accuracy and the rendering effect of the XR augmented reality scene.

Description

Construction method, device, equipment and storage medium of XR augmented reality scene
Technical Field
The application relates to the technical field of image data processing, and in particular to a construction method, device, equipment and storage medium of an XR augmented reality scene.
Background
With the continuous expansion of application scenarios and the growing demands of users, the traditional XR scene construction method faces a number of challenges, in particular how to efficiently create realistic three-dimensional environment models and how to realize natural, smooth user interaction within those models.
Conventional approaches often rely on complex manual operations and extensive manual intervention, which are not only inefficient but also difficult to apply in complex or dynamically changing real-world environments. In addition, the prior art often falls short in capturing subtle environmental features and textures, making a high degree of realism difficult to achieve. Traditional single-channel feedback mechanisms can no longer meet users' multi-sensory experience needs. How to create and adjust a multisensory feedback mechanism in real time according to the physiological signals and behavior data of the user, so as to provide a richer and more realistic immersive experience, has become a problem to be solved.
Disclosure of Invention
The application provides a construction method, device, equipment and storage medium of an XR augmented reality scene, so as to improve both the construction accuracy and the rendering effect of the XR augmented reality scene.
The first aspect of the present application provides a method for constructing an XR augmented reality scene, where the method for constructing an XR augmented reality scene includes:
Performing environment scanning on a target scene area to obtain area environment data, and performing sparse coding and three-dimensional environment modeling on the area environment data to obtain a three-dimensional environment model;
Object scanning is carried out on the target scene area to obtain scene object data, and virtual object generation is carried out on the scene object data through a variational autoencoder to obtain a virtual object model;
performing object dynamic interaction analysis on the virtual object model based on the graph neural network to obtain virtual object dynamic interaction logic;
according to the virtual object dynamic interaction logic, scene integration and multi-task learning are carried out on the three-dimensional environment model and the virtual object model, and an initial XR augmented reality scene corresponding to the target scene area is obtained;
Acquiring physiological signals and behavior data of a plurality of target users through a plurality of target cooperative devices according to the initial XR augmented reality scene, and creating a multisensory feedback mechanism of the initial XR augmented reality scene according to the physiological signals and the behavior data;
And acquiring scene response data of each target cooperative device through the multisensory feedback mechanism, and performing cooperative control response optimization on the initial XR augmented reality scene according to the scene response data to obtain a target XR augmented reality scene corresponding to the target scene area.
The second aspect of the present application provides an XR augmented reality scene construction apparatus, the XR augmented reality scene construction apparatus comprising:
The modeling module is used for carrying out environment scanning on a target scene area to obtain area environment data, and carrying out sparse coding and three-dimensional environment modeling on the area environment data to obtain a three-dimensional environment model;
the generating module is used for carrying out object scanning on the target scene area to obtain scene object data, and carrying out virtual object generation on the scene object data through a variational autoencoder to obtain a virtual object model;
The analysis module is used for carrying out object dynamic interaction analysis on the virtual object model based on the graph neural network to obtain virtual object dynamic interaction logic;
The integration module is used for carrying out scene integration and multi-task learning on the three-dimensional environment model and the virtual object model according to the virtual object dynamic interaction logic to obtain an initial XR augmented reality scene corresponding to the target scene area;
The creation module is used for acquiring physiological signals and behavior data of a plurality of target users through a plurality of target cooperative devices according to the initial XR augmented reality scene, and creating a multisensory feedback mechanism of the initial XR augmented reality scene according to the physiological signals and the behavior data;
and the optimization module is used for acquiring scene response data of each target cooperative device through the multisensory feedback mechanism, and carrying out cooperative control response optimization on the initial XR augmented reality scene according to the scene response data to obtain a target XR augmented reality scene corresponding to the target scene area.
A third aspect of the present application provides a computer apparatus comprising: a memory and at least one processor, the memory having instructions stored therein; the at least one processor invokes the instructions in the memory to cause the computer device to perform the XR augmented reality scenario construction method described above.
A fourth aspect of the application provides a computer readable storage medium having instructions stored therein which, when run on a computer, cause the computer to perform the above-described method of constructing an XR augmented reality scene.
According to the technical scheme provided by the application, a 3D scanning device automatically scans the environment and the objects, and, in combination with sparse coding and three-dimensional environment modeling techniques, data captured from the real environment can be processed automatically and efficiently to generate a high-precision three-dimensional environment model. This process greatly reduces the need for manual intervention and improves the efficiency and speed of scene construction as a whole. The scene object data is processed through a variational autoencoder, so that the generated virtual object model preserves object detail and texture realism while allowing innovative design and adjustment on demand. This not only increases the realism and visual quality of the virtual environment, but also provides users with a richer and more diversified experience. Object dynamic interaction analysis is performed on the virtual object model based on a graph neural network, so that the complex interaction relationships among objects can be understood in depth and natural, smooth dynamic interaction logic is generated. Objects in the virtual environment can thus respond to user operations in a more natural and realistic way, improving users' interaction satisfaction and sense of immersion. By integrating multiple target cooperative devices to acquire users' physiological signals and behavior data, combined with a multisensory feedback mechanism, the XR scene can be adjusted and optimized in real time to suit the needs of different users. Cooperative control response optimization ensures that each user receives a personalized multisensory experience, with dynamic adjustment according to user feedback so that the XR scene always remains in an optimal state. The scene integration tasks are optimized through a multi-task learning model, which can effectively handle the correlations and conflicts among tasks and optimize the allocation and use of system resources. This not only improves the rendering quality and response speed of the XR scene, but also ensures the long-term stability and reliability of the system, thereby improving both the construction accuracy and the rendering effect of the XR augmented reality scene.
Drawings
FIG. 1 is a schematic diagram of an embodiment of a method for constructing an XR augmented reality scene according to an embodiment of the application;
fig. 2 is a schematic diagram of an embodiment of an XR augmented reality scene constructing apparatus according to an embodiment of the present application.
Detailed Description
The embodiments of the application provide a construction method, device, equipment and storage medium of an XR augmented reality scene, so as to improve both the construction accuracy and the rendering effect of the XR augmented reality scene.
The terms "first," "second," "third," "fourth" and the like in the description, in the claims and in the above drawings, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate, such that the embodiments described herein may be implemented in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
For ease of understanding, a specific flow of an embodiment of the present application is described below, referring to fig. 1, and an embodiment of a method for constructing an XR augmented reality scene in an embodiment of the present application includes:
Step 101, carrying out environment scanning on a target scene area to obtain area environment data, and carrying out sparse coding and three-dimensional environment modeling on the area environment data to obtain a three-dimensional environment model;
It may be understood that the execution subject of the present application may be an XR augmented reality scene construction apparatus, and may also be a terminal or a server, which is not limited herein. The embodiments of the application are described by taking a server as the execution subject as an example.
Specifically, the 3D scanning device performs environmental scanning on the target scene area to obtain initial environmental image data. Image enhancement processing is performed on the initial environmental image data to improve the quality of the image, such as contrast and brightness. And carrying out local region segmentation on the enhanced environment image data, and dividing the whole environment image into a plurality of local environment image data by identifying different regions in the image, so as to facilitate further analysis of the characteristics of each local region. And extracting the edge direction of the local environment image data, and identifying the boundary and direction of the object in the image to obtain an edge direction characteristic image. And extracting gradient features based on the edge direction feature image, and calculating the color gradient change of each point in the image to obtain a target gradient feature image. The gradient features reflect the speed of the brightness change in the image and provide texture information of the object surface. And performing environment data conversion on the target gradient characteristic image and the local environment image data to obtain regional environment data. The entire scene is subdivided into a plurality of first environment data by region segmentation of the region environment data, each data representing a specific region in the scene. And performing sparse coding on the first environmental data, extracting the most important features, and compressing the data volume to obtain a plurality of second environmental data. And carrying out three-dimensional environment modeling on the target scene area based on the second environment data. And converting the two-dimensional image data into a three-dimensional model, and finally obtaining the three-dimensional environment model through calculation and simulation. The three-dimensional model reflects the spatial layout and structure of the target scene area and also simulates the physical characteristics of illumination, shadows, etc. in the environment.
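In one illustrative, non-limiting example, the sparse-coding step described above might be realized as in the following sketch, which assumes that each segmented first-environment-data block has been flattened into a fixed-length feature vector; the library, dimensions and variable names are assumptions of this sketch, not specifics of the application.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

# Illustrative assumption: 64 segmented regions ("first environment data"),
# each flattened into a 256-dimensional raw feature vector.
rng = np.random.default_rng(0)
first_env_data = rng.normal(size=(64, 256))

# Learn a dictionary and encode each region sparsely; the sparse codes act as
# the compressed "second environment data" that feeds three-dimensional modeling.
learner = DictionaryLearning(n_components=32,
                             transform_algorithm="lasso_lars",
                             transform_alpha=0.1,
                             random_state=0)
second_env_data = learner.fit_transform(first_env_data)  # shape (64, 32), mostly zeros

print(second_env_data.shape, float(np.mean(second_env_data == 0.0)))
```

The sparse codes retain only the most representative features of each region, which is what allows the later modeling stage to work on far less data than the raw scans.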
Step 102, carrying out object scanning on the target scene area to obtain scene object data, and carrying out virtual object generation on the scene object data through a variational autoencoder to obtain a virtual object model;
Specifically, object scanning is performed on the target scene area, and scene object data is obtained. The scene object data is input into a preset object feature extraction model, which is formed by a first convolution pooling network, a second convolution pooling network and an output layer. The scene object data is processed through the first convolution pooling network, and the first target object feature is obtained through feature point multiplication and summation operations and feature maximum value extraction. Local features are captured through the convolution operation, while the pooling operation reduces feature dimensionality and extracts the most significant features, ensuring that the model can focus on the most important parts of the image. The scene object data is similarly processed through the second convolution pooling network, whose different network structure captures and extracts the features of the scene object from another angle, yielding the second target object feature. Feature fusion is performed on the first target object feature and the second target object feature through the output layer to obtain a fused target object feature. Object space parameter learning is performed on the fused target object feature through a variational autoencoder, which deeply analyzes and encodes the object features to obtain feature parameters describing the spatial position and form of the object. The variational autoencoder can learn a latent representation of the data and, through its capability as a generative model, reconstruct the virtual object according to the learned object space feature parameters, finally obtaining the virtual object model.
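The application does not specify the layer dimensions of the two convolution pooling networks; the following sketch merely illustrates, under assumed sizes, how two branches with different receptive fields can be fused at an output layer.

```python
import torch
import torch.nn as nn

class DualBranchExtractor(nn.Module):
    """Two conv-pool branches fused at an output layer.
    All layer sizes are illustrative assumptions; the patent does not give them."""
    def __init__(self):
        super().__init__()
        # First branch: small kernels capture fine local features.
        self.branch1 = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),              # max pooling keeps the most salient responses
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        # Second branch: larger kernels and stride view the object "from another angle".
        self.branch2 = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        # Output layer: concatenation followed by a linear projection as the fusion step.
        self.fuse = nn.Linear(32, 32)

    def forward(self, x):
        f1 = self.branch1(x)                            # first target object feature
        f2 = self.branch2(x)                            # second target object feature
        return self.fuse(torch.cat([f1, f2], dim=1))    # fused target object feature

fused = DualBranchExtractor()(torch.randn(1, 3, 128, 128))
print(fused.shape)  # torch.Size([1, 32])
```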
Step 103, carrying out object dynamic interaction analysis on the virtual object model based on the graph neural network to obtain virtual object dynamic interaction logic;
it should be noted that, corresponding object dynamic interaction elements are defined according to the virtual object model, and dynamic behaviors and state changes of each virtual object possibly participating in the interaction process are identified and calibrated. And constructing an object dynamic interaction graph for the object dynamic interaction element based on the graph structure. In the graph, nodes represent dynamic interaction elements of an object, while edges represent potential interaction relationships between these elements. The process of constructing a dynamic interaction map of a target object is actually to build an understandable and analyzable structured representation for object interactions in a virtual scene, which can map out clearly the dynamic relationships and interaction logic between objects. And analyzing the object dynamic interaction path of the target object dynamic interaction graph, and identifying all possible interaction paths among objects by analyzing the graph structure. Each path represents a potential interaction means reflecting how objects interact with each other. And carrying out object dynamic interaction logic matching according to the analyzed multiple object dynamic interaction paths. And identifying and classifying the interaction logic corresponding to each path, so as to ensure that a set of logic framework can be provided for each dynamic interaction mode. The logic framework defines behavior rules and state changes when interacting between objects. And analyzing object dynamic interaction logic corresponding to each object dynamic interaction path through the graph neural network, and identifying and learning the dynamic interaction modes among the objects by utilizing the graph neural network. The graph neural network can effectively capture complex interaction relations and modes among objects through deep learning of graph structures. Based on the deep learning analysis, the dynamic interaction logic between objects is further optimized, and the dynamic interaction logic of the virtual objects is obtained.
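For illustration only, the construction of the object dynamic interaction graph and the enumeration of interaction paths might look like the following sketch; the node names and edges are invented placeholders, and a real scene would derive them from the virtual object model.

```python
import networkx as nx

# Object dynamic interaction graph: nodes are dynamic interaction elements of
# virtual objects, directed edges are potential interaction relations.
g = nx.DiGraph()
g.add_edges_from([
    ("door.open", "light.turn_on"),
    ("light.turn_on", "avatar.look_at"),
    ("door.open", "alarm.trigger"),
    ("alarm.trigger", "avatar.look_at"),
])

# Enumerate the candidate object dynamic interaction paths between two elements;
# each path is a potential interaction sequence to be matched with a logic rule.
for path in nx.all_simple_paths(g, source="door.open", target="avatar.look_at"):
    print(" -> ".join(path))
```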
Step 104, according to the virtual object dynamic interaction logic, performing scene integration and multi-task learning on the three-dimensional environment model and the virtual object model to obtain an initial XR augmented reality scene corresponding to the target scene area;
Specifically, scene integration is performed on the three-dimensional environment model and the virtual object model, generating a plurality of scene integration tasks. These tasks aim to ensure compatibility and consistency between the virtual objects and the three-dimensional environment so as to create a coherent and realistic virtual scene. Task correlation analysis is then performed to evaluate the correlation between the scene integration tasks, obtaining task correlation indexes. By identifying which tasks are closely related logically or functionally, learning resources are allocated more efficiently, ensuring the effectiveness of the learning process and of the integration strategy. The three-dimensional environment model and the virtual object model are learned and optimized according to the task correlation indexes through a multi-task learning model. Applying a multi-task learning model allows multiple integration tasks to be considered simultaneously, improving overall learning efficiency and performance by sharing learning resources, and a first scene integration strategy is output, which is based on a deep understanding of the multiple integration tasks and their interrelationships and aims to maximize the effect and realism of scene integration. Strategy optimization is performed on the first scene integration strategy according to the virtual object dynamic interaction logic, generating a second scene integration strategy. This optimization process considers the dynamic interactions between virtual objects and between the virtual objects and the environment, ensuring that the scene integration is reasonable in its static layout and can maintain consistency and fluency during dynamic interaction. According to the second scene integration strategy, final scene integration is performed on the three-dimensional environment model and the virtual object model to obtain the initial XR augmented reality scene corresponding to the target scene area.
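A minimal sketch of correlation-weighted multi-task learning is given below; the weighting scheme is an assumption of this sketch, since the application only states that the task correlation indexes steer learning and resource allocation.

```python
import torch

def integration_loss(task_losses: dict, correlation: dict) -> torch.Tensor:
    """Weight each scene-integration task loss by its task correlation index.
    The normalized-weight scheme is an illustrative assumption."""
    total = sum(correlation.values())
    return sum(correlation[name] / total * loss
               for name, loss in task_losses.items())

# Invented placeholder tasks standing in for real scene integration tasks.
losses = {"placement": torch.tensor(0.8, requires_grad=True),
          "lighting":  torch.tensor(0.3, requires_grad=True),
          "occlusion": torch.tensor(0.5, requires_grad=True)}
corr = {"placement": 0.9, "lighting": 0.6, "occlusion": 0.75}

loss = integration_loss(losses, corr)
loss.backward()  # gradients flow back to the shared scene-integration parameters
print(float(loss))
```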
Step 105, acquiring physiological signals and behavior data of a plurality of target users through a plurality of target cooperative devices according to an initial XR augmented reality scene, and creating a multisensory feedback mechanism of the initial XR augmented reality scene according to the physiological signals and the behavior data;
Specifically, according to the initial XR augmented reality scene, physiological signals and behavior data of a plurality of target users are obtained through a plurality of target cooperative devices. These data are multidimensional, including heart rate, galvanic skin response, eye tracking, motion tracking and the like, and can reflect the physical and emotional states of users experiencing XR content. Emotion computing and behavior analysis are performed on the physiological signals and behavior data, using data analysis techniques and machine learning algorithms to understand and interpret users' emotional responses and behavior patterns. Through this analysis, emotion computing results and behavior analysis results are obtained for each target user; these results reveal the user's perception, emotional changes and behavioral tendencies in a specific scene. A corresponding multisensory feedback mechanism is created for each target user according to the emotion computing results and behavior analysis results. The mechanism is designed to give feedback to the user in an optimal manner through multiple sensory channels, such as the visual, auditory and tactile channels, thereby enhancing immersion and experience satisfaction. The multisensory feedback mechanism is integrated into the initial XR augmented reality scene, so that the feedback mechanism can respond in real time to the user's physiological and behavioral changes throughout the interaction between the user and the scene.
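As a toy illustration of the emotion computing step, the signal set and thresholds below are assumptions of this sketch; a deployed system would use a trained classifier over richer physiological data.

```python
from dataclasses import dataclass

@dataclass
class PhysioSample:
    heart_rate: float        # beats per minute
    skin_conductance: float  # microsiemens (galvanic skin response)
    gaze_dwell: float        # seconds spent on the current scene element

def estimate_emotion(s: PhysioSample) -> str:
    """Threshold-based emotion computation; all thresholds are illustrative."""
    arousal = s.heart_rate > 95 or s.skin_conductance > 8.0
    engaged = s.gaze_dwell > 1.5
    if arousal and engaged:
        return "excited"
    if arousal:
        return "tense"
    return "relaxed" if engaged else "disengaged"

print(estimate_emotion(PhysioSample(102, 9.2, 2.3)))  # -> excited
```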
Step 106, acquiring scene response data of each target cooperative device through the multisensory feedback mechanism, and performing cooperative control response optimization on the initial XR augmented reality scene according to the scene response data to obtain a target XR augmented reality scene corresponding to the target scene area.
Specifically, scene response data of each target cooperative device is obtained through the multisensory feedback mechanism, based on the physiological and behavioral reactions generated when a user interacts with the XR scene. The collected data include the user's motion and location information and also cover physiological responses such as heart rate and galvanic skin activity, which together constitute a comprehensive response to the scene. Response feature extraction is performed on the scene response data, extracting from the raw data the key information that characterizes each device's user interaction. Cooperative control response analysis is then performed according to the scene response characteristics, to understand the interaction patterns among different users and their influence on the whole XR scene, obtaining an initial cooperative control response parameter set. A parameter population is initialized from the initial cooperative control response parameter set by a preset genetic algorithm, and the optimal cooperative control response parameter set is sought through population initialization, fitness calculation and crossover-based iterative optimization. In this process, fitness calculation is performed on each first cooperative control response parameter set; this calculation evaluates how well each group of parameters simulates user interaction and controls scene response, so that the parameter sets are optimized toward improving user experience. Through crossover-based iterative optimization, these parameter sets continually evolve, gradually approaching the optimal cooperative control response configuration. The process is dynamic, taking into account the interaction differences between users and the complexity of interaction between individuals and the scene, and the target cooperative control response parameter set is finally obtained through the optimization solution of the genetic algorithm. The target parameter set is applied to the initial XR augmented reality scene, and dynamic adjustment and optimization of the scene are realized through cooperative control response optimization, obtaining the target XR augmented reality scene corresponding to the target scene area.
According to the embodiments of the application, a 3D scanning device automatically scans the environment and the objects, and, in combination with sparse coding and three-dimensional environment modeling techniques, data captured from the real environment can be processed automatically and efficiently to generate a high-precision three-dimensional environment model. This process greatly reduces the need for manual intervention and improves the efficiency and speed of scene construction as a whole. The scene object data is processed through a variational autoencoder, so that the generated virtual object model preserves object detail and texture realism while allowing innovative design and adjustment on demand. This not only increases the realism and visual quality of the virtual environment, but also provides users with a richer and more diversified experience. Object dynamic interaction analysis is performed on the virtual object model based on a graph neural network, so that the complex interaction relationships among objects can be understood in depth and natural, smooth dynamic interaction logic is generated. Objects in the virtual environment can thus respond to user operations in a more natural and realistic way, improving users' interaction satisfaction and sense of immersion. By integrating multiple target cooperative devices to acquire users' physiological signals and behavior data, combined with a multisensory feedback mechanism, the XR scene can be adjusted and optimized in real time to suit the needs of different users. Cooperative control response optimization ensures that each user receives a personalized multisensory experience, with dynamic adjustment according to user feedback so that the XR scene always remains in an optimal state. The scene integration tasks are optimized through a multi-task learning model, which can effectively handle the correlations and conflicts among tasks and optimize the allocation and use of system resources. This not only improves the rendering quality and response speed of the XR scene, but also ensures the long-term stability and reliability of the system, thereby improving both the construction accuracy and the rendering effect of the XR augmented reality scene.
In a specific embodiment, the process of executing step 101 may specifically include the following steps:
(1) Performing environment scanning on a target scene area through 3D scanning equipment to obtain initial environment image data;
(2) Performing image enhancement processing on the initial environment image data to obtain enhanced environment image data, and performing local region segmentation on the enhanced environment image data to obtain local environment image data;
(3) Extracting edge directions of local environment image data to obtain edge direction feature images, and extracting gradient features of the edge direction feature images to obtain target gradient feature images;
(4) Performing environment data conversion on the target gradient characteristic image and the local environment image data to obtain regional environment data;
(5) Performing region segmentation on the region environment data to obtain a plurality of first environment data, and performing sparse coding on the plurality of first environment data to obtain a plurality of second environment data;
(6) And carrying out three-dimensional environment modeling on the target scene area according to the plurality of second environment data to obtain a three-dimensional environment model.
Specifically, the 3D scanning device performs environmental scanning on the target scene area to obtain initial environmental image data. The data contains key information such as the geometry, the size, the spatial layout and the like of the target scene. And performing image enhancement processing on the initial environment image data, and improving the image quality by adjusting the contrast and brightness of the image, applying denoising and other technologies, so that details in the image are clearer. The enhanced ambient image data is segmented into local regions, and the entire scene is divided into a number of local regions by applying an image segmentation algorithm, each region representing a specific portion of the scene. And extracting the edge direction of each local environment image data, and identifying and extracting edge information in the image by an edge detection algorithm to obtain an edge direction characteristic image. The edge direction feature image reveals the contours of the objects in the scene and their direction. And carrying out gradient feature extraction on the edge direction feature image, and obtaining a target gradient feature image by calculating the change rate of pixel intensity in the image, wherein the gradient feature image emphasizes the texture feature and the shape change in the image. And integrating the target gradient characteristic image and the local environment image data to perform environment data conversion, and converting the image data into a format more suitable for three-dimensional modeling to obtain regional environment data. By region segmentation of the region context data, a plurality of first context data is obtained, each representing a more specific region or object in the scene. In order to effectively process the data and extract the most critical information, sparse coding is performed on the plurality of first environmental data, redundant information is removed, and only the most representative features are reserved, so that a plurality of second environmental data are obtained. Sparse coding reduces complexity of data processing and improves efficiency and accuracy of a subsequent modeling process. And based on the second environment data, carrying out three-dimensional environment modeling on the target scene area by utilizing a three-dimensional modeling technology. By simulating each detail in the scene, including the shape, position, texture, etc. of the object, a three-dimensional environmental model is finally obtained.
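By way of non-limiting illustration, the edge-direction and gradient-feature extraction for one local environment image might be sketched as follows; the Sobel operator and the synthetic input are choices of this sketch, not requirements of the application.

```python
import cv2
import numpy as np

# Synthetic step-edge image standing in for one local environment image.
img = np.zeros((64, 64), np.float64)
img[:, 32:] = 255.0

gx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)  # horizontal intensity change
gy = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)  # vertical intensity change

# The angle map plays the role of the "edge direction feature image", and the
# magnitude map, which emphasises texture and shape change, plays the role of
# the "target gradient feature image".
magnitude, direction = cv2.cartToPolar(gx, gy, angleInDegrees=True)

print(magnitude.shape, float(direction.min()), float(direction.max()))
```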
In a specific embodiment, the process of executing step 102 may specifically include the following steps:
(1) Object scanning is carried out on a target scene area to obtain scene object data, the scene object data is input into a preset object feature extraction model, and the object feature extraction model comprises: a first convolutional pooled network, a second convolutional pooled network, and an output layer;
(2) Performing feature point multiplication and summation operation and feature maximum value extraction on scene object data through a first convolution pooling network to obtain first target object features;
(3) Performing feature point multiplication and summation operation and feature maximum value extraction on scene object data through a second convolution pooling network to obtain second target object features;
(4) Carrying out feature fusion on the first target object feature and the second target object feature through the output layer to obtain a fusion target object feature;
(5) And learning object space parameters of the fused target object features through the variational autoencoder to obtain object space feature parameters, and reconstructing the virtual object according to the object space feature parameters to obtain a virtual object model.
Specifically, object scanning is performed on the target scene area, and the captured scene object data contain the geometric information, surface texture and other visual features of the objects. The scene object data is input into a preset object feature extraction model composed of a first convolution pooling network, a second convolution pooling network and an output layer. The scene object data is processed through the first convolution pooling network, capturing local features of the object through feature point multiplication and summation operations and extracting the most salient parts of the features through maximum pooling, to obtain the first target object feature. Features are extracted from the scene object data through the second convolution pooling network, which captures object features from another angle by adjusting parameters such as the convolution kernel size, stride or pooling strategy, to obtain the second target object feature. Feature fusion is performed on the first target object feature and the second target object feature through the output layer: features from the two convolution pooling networks are integrated by specific algorithms, such as weighted summation, concatenation or other fusion strategies, to obtain the fused target object feature. The fused target object feature is learned through a variational autoencoder (VAE) to obtain the object space feature parameters. A variational autoencoder is a generative model that learns latent representations of data and can generate new data instances from the learned representations. In this embodiment, the VAE obtains feature parameters describing the spatial position and form of the object by learning the latent distribution of the object features. The virtual object is reconstructed according to the object space feature parameters to generate the virtual object model.
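A minimal variational autoencoder over fused object feature vectors is sketched below for illustration; all dimensions are assumed, and the linear decoder stands in for the far richer virtual object reconstruction described above.

```python
import torch
import torch.nn as nn

class ObjectVAE(nn.Module):
    """Minimal VAE over fused object feature vectors; dimensions are
    illustrative assumptions, not taken from the patent."""
    def __init__(self, feat_dim=32, latent_dim=8):
        super().__init__()
        self.enc = nn.Linear(feat_dim, 2 * latent_dim)  # outputs mean and log-variance
        self.dec = nn.Linear(latent_dim, feat_dim)

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterisation
        return self.dec(z), mu, logvar

def vae_loss(x, recon, mu, logvar):
    rec = nn.functional.mse_loss(recon, x, reduction="sum")      # reconstruction term
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())  # latent prior term
    return rec + kld

vae = ObjectVAE()
x = torch.randn(16, 32)                  # a batch of fused target object features
recon, mu, logvar = vae(x)
print(float(vae_loss(x, recon, mu, logvar)))
```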
In a specific embodiment, the process of executing step 103 may specifically include the following steps:
(1) Defining corresponding object dynamic interaction elements according to the virtual object model, and constructing an object dynamic interaction graph of the object dynamic interaction elements based on the graph structure to obtain a target object dynamic interaction graph;
(2) Analyzing the object dynamic interaction path of the target object dynamic interaction graph to obtain a plurality of object dynamic interaction paths;
(3) Performing object dynamic interaction logic matching on the target object dynamic interaction graph according to the plurality of object dynamic interaction paths to obtain object dynamic interaction logic corresponding to each object dynamic interaction path;
(4) Carrying out inter-object dynamic interaction mode analysis on object dynamic interaction logic corresponding to each object dynamic interaction path through a graph neural network to obtain an inter-object dynamic interaction mode;
(5) And carrying out interaction logic optimization on the object dynamic interaction logic corresponding to each object dynamic interaction path according to the inter-object dynamic interaction mode to obtain virtual object dynamic interaction logic.
In particular, corresponding dynamic interaction elements of the object are defined according to the virtual object model, and represent all interaction behaviors and state changes possibly participated in by the object. And constructing an object dynamic interaction graph for the object dynamic interaction element based on the graph structure. In the graph, nodes represent different dynamic interaction elements of an object, and edges represent potential interaction relationships between the elements. The goal of constructing a dynamic interaction graph of target objects is to create a structured representation that fully describes the interaction relationships between objects, which can reflect the direct interactions between objects, and also reveal the indirect associations and effects between them. And analyzing the object dynamic interaction path through the object dynamic interaction graph to obtain a plurality of object dynamic interaction paths. These paths represent a specific sequence of interactions that may occur between objects, each path being a continuous link from one interaction element to another, together forming a full view of the interactions of the objects in the scene. Through the analysis process, it can be identified which interactions between objects occur frequently and which occur infrequently, providing a basis for subsequent interaction logic matching and optimization. And carrying out object dynamic interaction logic matching according to the dynamic interaction path. For each dynamic interaction path, finding the corresponding dynamic interaction logic of the objects, namely, determining how the objects should interact under the specific interaction path. These interaction logics may be based on physical rules and may also be affected by scene settings or user behavior. And analyzing the object dynamic interaction logic corresponding to each object dynamic interaction path through the graph neural network. The graph neural network can capture a complex dynamic interaction mode between objects by learning the characteristics of nodes and edges in the graph, so that a deep law of interaction between objects is revealed. The deep learning method can learn how to effectively simulate the dynamic relationship between objects from a large amount of interaction data, and provides a data-based and intelligent understanding for each interaction. And further optimizing the object dynamic interaction logic corresponding to each object dynamic interaction path according to the inter-object dynamic interaction mode to obtain the virtual object dynamic interaction logic.
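For illustration, one round of message passing over the interaction graph might be sketched as follows; the mean-aggregation update is a simple stand-in for whatever graph neural network architecture is actually deployed, and all sizes are assumed.

```python
import torch

# Interaction-element features and directed edges of a toy interaction graph.
num_nodes, feat_dim = 4, 8
x = torch.randn(num_nodes, feat_dim)
edges = torch.tensor([[0, 0, 1, 2],   # source elements
                      [1, 2, 3, 3]])  # target elements

w = torch.nn.Linear(feat_dim, feat_dim)

def propagate(x, edges):
    """One round of mean-aggregation message passing (GCN-style stand-in)."""
    agg = torch.zeros_like(x)
    deg = torch.zeros(x.size(0), 1)
    agg.index_add_(0, edges[1], x[edges[0]])  # sum messages from source neighbours
    deg.index_add_(0, edges[1], torch.ones(edges.size(1), 1))
    return torch.relu(w(x + agg / deg.clamp(min=1)))  # update each element's state

print(propagate(x, edges).shape)  # torch.Size([4, 8])
```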
In a specific embodiment, the process of executing step 104 may specifically include the following steps:
(1) Scene integration is carried out on the three-dimensional environment model and the virtual object model, and a plurality of scene integration tasks are generated;
(2) Performing task correlation analysis on the plurality of scene integration tasks to obtain task correlation indexes;
(3) Through the multi-task learning model, performing multi-task learning and scene integration optimization on the three-dimensional environment model and the virtual object model according to task correlation indexes, and outputting a first scene integration strategy;
(4) Performing strategy optimization on the first scene integration strategy according to the virtual object dynamic interaction logic to obtain a second scene integration strategy;
(5) And carrying out scene integration on the three-dimensional environment model and the virtual object model according to the second scene integration strategy to obtain an initial XR augmented reality scene corresponding to the target scene area.
Specifically, the three-dimensional environment model and the virtual object model are subjected to scene integration, the virtual object is placed at a proper position in the three-dimensional environment, and the size, the direction and other attributes of the virtual object are adjusted to accord with the physical rule and the visual effect of an actual scene. And performing task correlation analysis on the scene integration task to obtain a task correlation index. And comprehensively learning and optimizing the three-dimensional environment model and the virtual object model according to the task correlation indexes by means of the multi-task learning model. The multi-task learning model can simultaneously consider a plurality of integrated tasks, and learning efficiency and optimization effect are improved by sharing information. When performing multitasking, the model will try to find a balance point so that all integrated tasks are satisfied as much as possible while optimizing the overall effect of the entire scene. And obtaining a first scene integration strategy through the learning and optimization of the multi-task learning model. And performing strategy optimization on the first scene integration strategy, and adjusting the position, action and interaction of the objects according to dynamic interaction logic between the virtual objects to obtain a second scene integration strategy. And according to the second scene integration strategy, performing final scene integration on the three-dimensional environment model and the virtual object model. The method comprises the steps of adjusting the position of an object, optimizing details of visual effects such as illumination and texture of a scene, ensuring continuity and reality of the whole scene in vision, and finally obtaining an initial XR augmented reality scene corresponding to a target scene area.
In a specific embodiment, the process of executing step 105 may specifically include the following steps:
(1) According to the initial XR augmented reality scene, physiological signals and behavior data of a plurality of target users are obtained through a plurality of target cooperative devices;
(2) Carrying out emotion calculation and behavior analysis on the physiological signals and the behavior data to obtain emotion calculation results and behavior analysis results of each target user;
(3) And creating a multisensory feedback mechanism of each target user according to the emotion calculation result and the behavior analysis result, and integrating the multisensory feedback mechanism into the initial XR augmented reality scene.
Specifically, physiological and behavioral data of users experiencing XR content are collected through various sensors and tracking devices, such as heart rate monitors, galvanic skin response sensors, eye trackers and motion capture devices. Emotion computing and behavior analysis are performed on the physiological signals and behavior data. The emotion computing part interprets the user's physiological data using an algorithmic model and recognizes the user's emotional state, such as happiness, sadness, tension or relaxation. At the same time, behavior analysis explores the user's behavior patterns, including how the user interacts with objects in the XR environment, how attention shifts within the virtual scene, and reactions to specific stimuli or events. A personalized multisensory feedback mechanism is created based on the emotion computing results and behavior analysis results of each target user, designing experiences that elicit positive feedback according to the user's emotional and behavioral characteristics. The multisensory feedback mechanism is integrated into the initial XR augmented reality scene. This requires the system to monitor and analyze user feedback in real time and to adjust aspects of the virtual environment quickly and accurately to accommodate each user's needs and preferences, which involves dynamic content generation and real-time rendering techniques, as well as real-time processing and analysis of user data.
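As a toy illustration of mapping computed emotion states to multisensory feedback settings, consider the following sketch; the channels and numeric values are invented placeholders, not parameters disclosed by the application.

```python
# Mapping from an estimated emotion state to per-channel feedback settings;
# every channel name and value here is an illustrative assumption.
FEEDBACK_PROFILES = {
    "tense":      {"visual_brightness": 0.6, "audio_tempo": 0.8, "haptic_gain": 0.3},
    "excited":    {"visual_brightness": 1.0, "audio_tempo": 1.2, "haptic_gain": 0.8},
    "relaxed":    {"visual_brightness": 0.8, "audio_tempo": 1.0, "haptic_gain": 0.5},
    "disengaged": {"visual_brightness": 1.0, "audio_tempo": 1.1, "haptic_gain": 0.9},
}

def feedback_for(emotion: str) -> dict:
    """Return the multisensory feedback settings to apply for this user."""
    return FEEDBACK_PROFILES.get(emotion, FEEDBACK_PROFILES["relaxed"])

print(feedback_for("tense"))
```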
In a specific embodiment, the process of executing step 106 may specifically include the following steps:
(1) Acquiring scene response data of each target cooperative device through the multisensory feedback mechanism, and extracting response characteristics of the scene response data to obtain the scene response characteristics of each target cooperative device;
(2) Performing cooperative control response analysis on the initial XR augmented reality scene through scene response characteristics of each target cooperative device to obtain an initial cooperative control response parameter set;
(3) Initializing a parameter population from the initial cooperative control response parameter set through a preset genetic algorithm to obtain a plurality of first cooperative control response parameter sets;
(4) Respectively carrying out fitness calculation on a plurality of first cooperative control response parameter sets to obtain a target fitness value of each first cooperative control response parameter set;
(5) Performing crossover-based iterative optimization on the plurality of first cooperative control response parameter sets according to the target fitness values to obtain a plurality of second cooperative control response parameter sets;
(6) And carrying out optimization solution on the plurality of second cooperative control response parameter sets to obtain a target cooperative control response parameter set, and carrying out cooperative control response optimization on the initial XR augmented reality scene through the target cooperative control response parameter set to obtain a target XR augmented reality scene corresponding to the target scene area.
Specifically, scene response data of each target cooperative device is obtained through the multisensory feedback mechanism. Through various sensor and tracking techniques, the user's behavioral and physiological responses in the virtual environment, such as eye movements, gestures and heart rate, are captured. Response feature extraction is performed on the scene response data: key features that represent user interaction patterns are extracted from the raw data through data analysis and machine learning techniques. For example, by analyzing a user's eye movement data, the scene elements of most interest to the user are determined; by analyzing gesture data, the user's interaction intention is understood. Cooperative control response analysis is performed based on the scene response characteristics of each target cooperative device, to understand how the interaction behaviors of different users affect the performance of the whole XR scene, thereby obtaining an initial cooperative control response parameter set. These parameter sets define how the scene is dynamically adjusted according to users' behaviors and responses, such as changing the layout of scene elements, adjusting lighting effects, or triggering specific events. To optimize the initial cooperative control response parameters, a parameter population is initialized through a preset genetic algorithm, generating a plurality of first cooperative control response parameter sets. The genetic algorithm is an optimization algorithm that simulates natural selection and genetic mechanisms, evolving better solutions generation by generation through crossover, mutation and selection operations. In this process, each parameter set is treated as an individual whose performance, that is, the effect of each parameter set in controlling the XR scene response, is evaluated by fitness calculation. Fitness calculation is performed on the first cooperative control response parameter sets to evaluate the performance of each parameter set, and crossover-based iterative optimization is performed according to the target fitness values. By simulating the natural selection mechanism of biological evolution, the parameter sets are continuously optimized to improve the responsiveness of the scene and the user experience. An optimal target cooperative control response parameter set is determined by optimally solving the plurality of second cooperative control response parameter sets. This parameter set most effectively optimizes the responsiveness of the initial XR augmented reality scene, so that the scene can be accurately adjusted according to the behaviors and reactions of different users.
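A compact sketch of the genetic-algorithm loop — population initialization, fitness calculation, crossover and mutation — is given below; the parameter names and the stand-in fitness function are assumptions of this sketch, since the real fitness would replay user responses against the scene.

```python
import random

random.seed(0)
PARAM_KEYS = ["light_gain", "haptic_gain", "event_rate"]  # illustrative parameters

def fitness(params):
    # Stand-in evaluation: reward parameter sets near an assumed target; a real
    # system would score how well the set controls the XR scene response.
    target = {"light_gain": 0.7, "haptic_gain": 0.4, "event_rate": 0.5}
    return -sum((params[k] - target[k]) ** 2 for k in PARAM_KEYS)

def crossover(a, b):
    return {k: random.choice((a[k], b[k])) for k in PARAM_KEYS}

def mutate(p, rate=0.2):
    return {k: min(1.0, max(0.0, v + random.uniform(-0.1, 0.1)))
            if random.random() < rate else v for k, v in p.items()}

# Initialize the parameter population, then iterate selection/crossover/mutation.
population = [{k: random.random() for k in PARAM_KEYS} for _ in range(20)]
for _ in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                       # selection of the fittest sets
    population = parents + [mutate(crossover(*random.sample(parents, 2)))
                            for _ in range(10)]

best = max(population, key=fitness)
print(best)  # target cooperative control response parameter set
```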
The method for constructing an XR augmented reality scene in the embodiment of the present application is described above, and the apparatus for constructing an XR augmented reality scene in the embodiment of the present application is described below, referring to fig. 2, and one embodiment of the apparatus for constructing an XR augmented reality scene in the embodiment of the present application includes:
The modeling module 201 is configured to perform environmental scanning on a target scene area to obtain area environment data, and perform sparse coding and three-dimensional environment modeling on the area environment data to obtain a three-dimensional environment model;
The generating module 202 is configured to perform object scanning on the target scene area to obtain scene object data, and perform virtual object generation on the scene object data through a variational autoencoder to obtain a virtual object model;
The analysis module 203 is configured to perform object dynamic interaction analysis on the virtual object model based on the graph neural network, so as to obtain virtual object dynamic interaction logic;
The integration module 204 is configured to perform scene integration and multitask learning on the three-dimensional environment model and the virtual object model according to the virtual object dynamic interaction logic, so as to obtain an initial XR augmented reality scene corresponding to the target scene area;
The creating module 205 is configured to obtain physiological signals and behavior data of a plurality of target users according to the initial XR augmented reality scene and through a plurality of target cooperative devices, and create a multisensory feedback mechanism of the initial XR augmented reality scene according to the physiological signals and the behavior data;
The optimization module 206 is configured to obtain, through the multisensory feedback mechanism, scene response data of each target cooperative device, and perform cooperative control response optimization on the initial XR augmented reality scene according to the scene response data, so as to obtain a target XR augmented reality scene corresponding to the target scene area.
Through the cooperation of the above components, a 3D scanning device automatically scans the environment and the objects, and, in combination with sparse coding and three-dimensional environment modeling techniques, data captured from the real environment can be processed automatically and efficiently to generate a high-precision three-dimensional environment model. This process greatly reduces the need for manual intervention and improves the efficiency and speed of scene construction as a whole. The scene object data is processed through a variational autoencoder, so that the generated virtual object model preserves object detail and texture realism while allowing innovative design and adjustment on demand. This not only increases the realism and visual quality of the virtual environment, but also provides users with a richer and more diversified experience. Object dynamic interaction analysis is performed on the virtual object model based on a graph neural network, so that the complex interaction relationships among objects can be understood in depth and natural, smooth dynamic interaction logic is generated. Objects in the virtual environment can thus respond to user operations in a more natural and realistic way, improving users' interaction satisfaction and sense of immersion. By integrating multiple target cooperative devices to acquire users' physiological signals and behavior data, combined with a multisensory feedback mechanism, the XR scene can be adjusted and optimized in real time to suit the needs of different users. Cooperative control response optimization ensures that each user receives a personalized multisensory experience, with dynamic adjustment according to user feedback so that the XR scene always remains in an optimal state. The scene integration tasks are optimized through a multi-task learning model, which can effectively handle the correlations and conflicts among tasks and optimize the allocation and use of system resources. This not only improves the rendering quality and response speed of the XR scene, but also ensures the long-term stability and reliability of the system, thereby improving both the construction accuracy and the rendering effect of the XR augmented reality scene.
The present application also provides a computer device, where the computer device includes a memory and a processor, where the memory stores computer readable instructions that, when executed by the processor, cause the processor to execute the steps of the method for constructing an XR augmented reality scene in the foregoing embodiments.
The present application also provides a computer readable storage medium, which may be a non-volatile or a volatile computer readable storage medium. The computer readable storage medium stores instructions which, when run on a computer, cause the computer to perform the steps of the method for constructing an XR augmented reality scene.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such an understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The above embodiments are only intended to illustrate the technical solution of the present application, not to limit it. Although the application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (10)

1. A method for constructing an XR augmented reality scene, characterized by comprising the following steps:
performing environment scanning on a target scene area to obtain area environment data, and performing sparse coding and three-dimensional environment modeling on the area environment data to obtain a three-dimensional environment model;
performing object scanning on the target scene area to obtain scene object data, and performing virtual object generation on the scene object data through a variational autoencoder to obtain a virtual object model;
performing object dynamic interaction analysis on the virtual object model based on a graph neural network to obtain virtual object dynamic interaction logic;
performing scene integration and multi-task learning on the three-dimensional environment model and the virtual object model according to the virtual object dynamic interaction logic to obtain an initial XR augmented reality scene corresponding to the target scene area;
acquiring physiological signals and behavior data of a plurality of target users through a plurality of target cooperative devices according to the initial XR augmented reality scene, and creating a multisensory feedback mechanism of the initial XR augmented reality scene according to the physiological signals and the behavior data;
and acquiring scene response data of each target cooperative device through the multisensory feedback mechanism, and performing cooperative control response optimization on the initial XR augmented reality scene according to the scene response data to obtain a target XR augmented reality scene corresponding to the target scene area.
2. The method for constructing an XR augmented reality scene according to claim 1, wherein the performing environment scanning on the target scene area to obtain area environment data, and performing sparse coding and three-dimensional environment modeling on the area environment data to obtain a three-dimensional environment model, comprises:
performing environment scanning on the target scene area through a 3D scanning device to obtain initial environment image data;
performing image enhancement processing on the initial environment image data to obtain enhanced environment image data, and performing local region segmentation on the enhanced environment image data to obtain local environment image data;
extracting the edge direction of the local environment image data to obtain an edge direction feature image, and extracting gradient features of the edge direction feature image to obtain a target gradient feature image;
performing environment data conversion on the target gradient feature image and the local environment image data to obtain area environment data;
performing region segmentation on the area environment data to obtain a plurality of first environment data, and performing sparse coding on the plurality of first environment data to obtain a plurality of second environment data;
and performing three-dimensional environment modeling on the target scene area according to the plurality of second environment data to obtain a three-dimensional environment model.
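For illustration only and not as part of the claimed subject matter, the following is a minimal Python sketch of the sparse-coding step in claim 2, assuming the segmented first environment data arrive as a feature matrix with one row per region; the dictionary size, sparsity level, and function name are hypothetical.

```python
# Hypothetical sketch of claim 2's sparse-coding step: each segmented
# region of environment data is encoded against a learned dictionary.
import numpy as np
from sklearn.decomposition import DictionaryLearning

def sparse_encode_regions(first_env_data: np.ndarray, n_atoms: int = 64):
    """first_env_data: (n_regions, n_features) matrix of segmented
    environment data; returns the sparse codes (second environment data)
    and the learned dictionary atoms."""
    learner = DictionaryLearning(
        n_components=n_atoms,
        transform_algorithm="omp",     # orthogonal matching pursuit
        transform_n_nonzero_coefs=8,   # sparsity level per region (assumed)
        max_iter=200,
    )
    second_env_data = learner.fit_transform(first_env_data)
    return second_env_data, learner.components_
```

OMP is chosen here only because it gives an explicit per-region sparsity budget; the patent does not specify the coding algorithm.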
3. The method for constructing an XR augmented reality scene according to claim 1, wherein the performing object scanning on the target scene area to obtain scene object data, and performing virtual object generation on the scene object data through a variational autoencoder to obtain a virtual object model, comprises:
performing object scanning on the target scene area to obtain scene object data, and inputting the scene object data into a preset object feature extraction model, wherein the object feature extraction model comprises: a first convolution pooling network, a second convolution pooling network, and an output layer;
performing feature point-multiplication and summation operations and feature maximum-value extraction on the scene object data through the first convolution pooling network to obtain a first target object feature;
performing feature point-multiplication and summation operations and feature maximum-value extraction on the scene object data through the second convolution pooling network to obtain a second target object feature;
performing feature fusion on the first target object feature and the second target object feature through the output layer to obtain a fused target object feature;
and learning object space parameters of the fused target object feature through the variational autoencoder to obtain object space feature parameters, and performing virtual object reconstruction according to the object space feature parameters to obtain a virtual object model.
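As a rough, non-authoritative illustration of claim 3, the PyTorch sketch below runs two convolution-pooling branches (convolution for the feature point-multiplication and summation, max pooling for the feature maximum-value extraction), fuses their outputs, and feeds a variational autoencoder that learns object space parameters and reconstructs a virtual object. All layer sizes, kernel choices, and the 64x64 output resolution are assumptions.

```python
# A minimal dual-branch feature extractor plus variational autoencoder,
# loosely following the structure described in claim 3.
import torch
import torch.nn as nn

class DualBranchVAE(nn.Module):
    def __init__(self, latent_dim: int = 32):
        super().__init__()
        def branch(k: int) -> nn.Sequential:
            # Convolution = point-multiply-and-sum; max pool = maximum extraction.
            return nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=k, padding=k // 2),
                nn.ReLU(),
                nn.AdaptiveMaxPool2d(8),
            )
        self.branch1, self.branch2 = branch(3), branch(5)
        fused = 2 * 16 * 8 * 8
        self.to_mu = nn.Linear(fused, latent_dim)      # object space mean
        self.to_logvar = nn.Linear(fused, latent_dim)  # object space variance
        self.decoder = nn.Sequential(                  # virtual object reconstruction
            nn.Linear(latent_dim, fused), nn.ReLU(),
            nn.Linear(fused, 3 * 64 * 64), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor):
        f1 = self.branch1(x).flatten(1)                # first target object feature
        f2 = self.branch2(x).flatten(1)                # second target object feature
        fused = torch.cat([f1, f2], dim=1)             # output-layer feature fusion
        mu, logvar = self.to_mu(fused), self.to_logvar(fused)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization
        return self.decoder(z).view(-1, 3, 64, 64), mu, logvar
```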
4. The method for constructing an XR augmented reality scene according to claim 2, wherein the performing object dynamic interaction analysis on the virtual object model based on the graph neural network to obtain virtual object dynamic interaction logic comprises:
defining corresponding object dynamic interaction elements according to the virtual object model, and constructing an object dynamic interaction graph of the object dynamic interaction elements based on a graph structure to obtain a target object dynamic interaction graph;
analyzing object dynamic interaction paths of the target object dynamic interaction graph to obtain a plurality of object dynamic interaction paths;
performing object dynamic interaction logic matching on the target object dynamic interaction graph according to the plurality of object dynamic interaction paths to obtain object dynamic interaction logic corresponding to each object dynamic interaction path;
performing inter-object dynamic interaction mode analysis on the object dynamic interaction logic corresponding to each object dynamic interaction path through a graph neural network to obtain an inter-object dynamic interaction mode;
and performing interaction logic optimization on the object dynamic interaction logic corresponding to each object dynamic interaction path according to the inter-object dynamic interaction mode to obtain the virtual object dynamic interaction logic.
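For illustration, a hypothetical message-passing sketch of claim 4: virtual objects are nodes of the dynamic interaction graph, candidate interaction paths are directed edges, and one round of message passing produces a score for each inter-object interaction mode. The module layout and dimensions are assumptions rather than the patented model.

```python
# One round of message passing over an object dynamic interaction graph.
import torch
import torch.nn as nn

class InteractionGNN(nn.Module):
    def __init__(self, node_dim: int = 16, hidden: int = 32):
        super().__init__()
        self.msg = nn.Linear(2 * node_dim, hidden)     # edge message function
        self.upd = nn.GRUCell(hidden, node_dim)        # node state update
        self.edge_score = nn.Linear(2 * node_dim, 1)   # interaction-mode score

    def forward(self, nodes: torch.Tensor, edges: torch.Tensor):
        """nodes: (N, node_dim) object states; edges: (E, 2) (src, dst) pairs."""
        src, dst = edges[:, 0], edges[:, 1]
        messages = torch.relu(self.msg(torch.cat([nodes[src], nodes[dst]], dim=-1)))
        agg = torch.zeros(nodes.size(0), messages.size(1), device=nodes.device)
        agg.index_add_(0, dst, messages)               # aggregate messages per node
        nodes = self.upd(agg, nodes)                   # updated object states
        # Score each candidate interaction path between object pairs.
        return torch.sigmoid(self.edge_score(torch.cat([nodes[src], nodes[dst]], dim=-1)))
```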
5. The method for constructing an XR augmented reality scene according to claim 1, wherein the performing scene integration and multi-task learning on the three-dimensional environment model and the virtual object model according to the virtual object dynamic interaction logic to obtain an initial XR augmented reality scene corresponding to the target scene area comprises:
performing scene integration on the three-dimensional environment model and the virtual object model to generate a plurality of scene integration tasks;
performing task correlation analysis on the plurality of scene integration tasks to obtain task correlation indexes;
performing, through a multi-task learning model, multi-task learning and scene integration optimization on the three-dimensional environment model and the virtual object model according to the task correlation indexes, and outputting a first scene integration strategy;
performing strategy optimization on the first scene integration strategy according to the virtual object dynamic interaction logic to obtain a second scene integration strategy;
and performing scene integration on the three-dimensional environment model and the virtual object model according to the second scene integration strategy to obtain an initial XR augmented reality scene corresponding to the target scene area.
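As a non-authoritative sketch of claim 5's multi-task learning step, the snippet below shares one encoder across several hypothetical scene-integration subtasks and weights each task's loss by a task correlation index, so strongly coupled tasks dominate the integration strategy. The task names and the weighting scheme are illustrative assumptions.

```python
# Shared-encoder multi-task model with correlation-weighted losses.
import torch
import torch.nn as nn

class SceneIntegrationMTL(nn.Module):
    def __init__(self, in_dim: int = 128,
                 tasks=("geometry_fit", "lighting", "occlusion")):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU())
        self.heads = nn.ModuleDict({t: nn.Linear(64, 1) for t in tasks})

    def forward(self, scene_feats: torch.Tensor) -> dict:
        h = self.shared(scene_feats)
        return {t: head(h) for t, head in self.heads.items()}

def weighted_mtl_loss(preds: dict, targets: dict, correlation_index: dict):
    """Weight each scene-integration task loss by its correlation index."""
    return sum(correlation_index[t] * nn.functional.mse_loss(preds[t], targets[t])
               for t in preds)
```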
6. The method for constructing an XR augmented reality scene according to claim 1, wherein the acquiring physiological signals and behavior data of a plurality of target users through a plurality of target cooperative devices according to the initial XR augmented reality scene, and creating a multisensory feedback mechanism of the initial XR augmented reality scene according to the physiological signals and the behavior data, comprises:
acquiring physiological signals and behavior data of a plurality of target users through a plurality of target cooperative devices according to the initial XR augmented reality scene;
performing emotion calculation and behavior analysis on the physiological signals and the behavior data to obtain an emotion calculation result and a behavior analysis result for each target user;
and creating a multisensory feedback mechanism for each target user according to the emotion calculation result and the behavior analysis result, and integrating the multisensory feedback mechanism into the initial XR augmented reality scene.
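Under strong simplifying assumptions, claim 6's emotion calculation could look like the sketch below: heart-rate and electrodermal features from each target cooperative device are mapped to coarse arousal and valence estimates that then parameterize the multisensory feedback mechanism. All thresholds, coefficients, and feedback fields are hypothetical.

```python
# Toy emotion estimate from physiological signals, driving feedback settings.
import numpy as np

def emotion_estimate(heart_rate: np.ndarray, eda: np.ndarray) -> dict:
    """heart_rate in bpm; eda (electrodermal activity) in microsiemens."""
    arousal = 0.6 * (heart_rate.mean() - 60.0) / 40.0 + 0.4 * (eda.mean() / 10.0)
    valence = 1.0 - np.std(heart_rate) / 20.0   # steadier pulse -> higher valence
    return {"arousal": float(np.clip(arousal, 0.0, 1.0)),
            "valence": float(np.clip(valence, 0.0, 1.0))}

def feedback_plan(emotion: dict) -> dict:
    """Map the emotion estimate to multisensory feedback intensities."""
    return {
        "haptic_gain": 0.3 + 0.7 * emotion["arousal"],
        "ambient_audio": "calming" if emotion["arousal"] > 0.7 else "neutral",
        "visual_saturation": 0.5 + 0.5 * emotion["valence"],
    }
```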
7. The method for constructing an XR augmented reality scene according to claim 1, wherein the acquiring scene response data of each target cooperative device through the multisensory feedback mechanism, and performing cooperative control response optimization on the initial XR augmented reality scene according to the scene response data to obtain a target XR augmented reality scene corresponding to the target scene area, comprises:
acquiring scene response data of each target cooperative device through the multisensory feedback mechanism, and performing response feature extraction on the scene response data to obtain scene response features of each target cooperative device;
performing cooperative control response analysis on the initial XR augmented reality scene through the scene response features of each target cooperative device to obtain an initial cooperative control response parameter set;
performing parameter cluster initialization on the initial cooperative control response parameter set through a preset genetic algorithm to obtain a plurality of first cooperative control response parameter sets;
performing fitness calculation on each of the plurality of first cooperative control response parameter sets to obtain a target fitness value of each first cooperative control response parameter set;
performing crossover iterative optimization on the plurality of first cooperative control response parameter sets according to the target fitness values to obtain a plurality of second cooperative control response parameter sets;
and performing optimization solving on the plurality of second cooperative control response parameter sets to obtain a target cooperative control response parameter set, and performing cooperative control response optimization on the initial XR augmented reality scene through the target cooperative control response parameter set to obtain a target XR augmented reality scene corresponding to the target scene area.
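For claim 7's genetic-algorithm loop, the following is a minimal sketch: candidate cooperative-control parameter sets are initialized, scored by a fitness function, and refined by selection, crossover, and mutation. The fitness function is a stand-in for the real scene-response objective, and all population settings are assumed.

```python
# Elitist genetic-algorithm loop over cooperative-control parameter sets.
import numpy as np

rng = np.random.default_rng(0)

def fitness(params: np.ndarray) -> float:
    # Placeholder objective: prefer parameters near a nominal response of 0.5.
    return -float(np.sum((params - 0.5) ** 2))

def evolve(pop_size: int = 20, n_params: int = 8,
           generations: int = 50, mut_rate: float = 0.1) -> np.ndarray:
    pop = rng.random((pop_size, n_params))                  # first parameter sets
    for _ in range(generations):
        scores = np.array([fitness(p) for p in pop])        # target fitness values
        parents = pop[np.argsort(scores)[-pop_size // 2:]]  # keep the fitter half
        mothers = parents[rng.integers(len(parents), size=pop_size)]
        fathers = parents[rng.integers(len(parents), size=pop_size)]
        cut = rng.integers(1, n_params, size=pop_size)      # crossover points
        pop = np.where(np.arange(n_params) < cut[:, None], mothers, fathers)
        mutate = rng.random(pop.shape) < mut_rate           # random mutation mask
        pop = np.where(mutate, rng.random(pop.shape), pop)
    return pop[np.argmax([fitness(p) for p in pop])]        # target parameter set
```

Keeping the fitter half each generation is a simple elitist selection; the "optimization solving" over the second parameter sets is abstracted here into the final argmax.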
8. An XR augmented reality scene construction apparatus, the XR augmented reality scene construction apparatus comprising:
The modeling module is used for carrying out environment scanning on a target scene area to obtain area environment data, and carrying out sparse coding and three-dimensional environment modeling on the area environment data to obtain a three-dimensional environment model;
The generating module is used for performing object scanning on the target scene area to obtain scene object data, and performing virtual object generation on the scene object data through a variational autoencoder to obtain a virtual object model;
The analysis module is used for carrying out object dynamic interaction analysis on the virtual object model based on the graph neural network to obtain virtual object dynamic interaction logic;
The integration module is used for performing scene integration and multi-task learning on the three-dimensional environment model and the virtual object model according to the virtual object dynamic interaction logic to obtain an initial XR augmented reality scene corresponding to the target scene area;
The creation module is used for acquiring physiological signals and behavior data of a plurality of target users through a plurality of target cooperative devices according to the initial XR augmented reality scene, and creating a multisensory feedback mechanism of the initial XR augmented reality scene according to the physiological signals and the behavior data;
and the optimization module is used for acquiring scene response data of each target cooperative device through the multisensory feedback mechanism, and performing cooperative control response optimization on the initial XR augmented reality scene according to the scene response data to obtain a target XR augmented reality scene corresponding to the target scene area.
9. A computer device, the computer device comprising: a memory and at least one processor, the memory having instructions stored therein;
the at least one processor invokes the instructions in the memory to cause the computer device to perform the method for constructing an XR augmented reality scene according to any one of claims 1 to 7.
10. A computer-readable storage medium having instructions stored thereon which, when executed by a processor, implement the method for constructing an XR augmented reality scene according to any one of claims 1 to 7.
CN202410316994.1A 2024-03-20 Construction method, device, equipment and storage medium of XR (X-ray) augmented reality scene Active CN117934782B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410316994.1A CN117934782B (en) 2024-03-20 Construction method, device, equipment and storage medium of XR (X-ray) augmented reality scene

Publications (2)

Publication Number Publication Date
CN117934782A CN117934782A (en) 2024-04-26
CN117934782B CN117934782B (en) 2024-05-31

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022139643A1 (en) * 2020-12-22 2022-06-30 Telefonaktiebolaget Lm Ericsson (Publ) Methods and devices related to extended reality
CN115376695A (en) * 2022-10-25 2022-11-22 安徽星辰智跃科技有限责任公司 Method, system and device for neuropsychological assessment and intervention based on augmented reality
CN116563740A (en) * 2023-05-15 2023-08-08 北京字跳网络技术有限公司 Control method and device based on augmented reality, electronic equipment and storage medium
CN117237574A (en) * 2023-10-11 2023-12-15 西南交通大学 Task-driven geographical digital twin scene enhancement visualization method and system
CN117392892A (en) * 2023-10-24 2024-01-12 中国人民解放军国防科技大学 XR-based simulated grenade training method, system, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant