CN117274471A - Interaction method, device, equipment and storage medium of intelligent equipment in virtual space - Google Patents

Interaction method, device, equipment and storage medium of intelligent equipment in virtual space

Info

Publication number
CN117274471A
CN117274471A (application CN202311469056.7A)
Authority
CN
China
Prior art keywords: virtual, motion data, model, intelligent terminal, data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311469056.7A
Other languages
Chinese (zh)
Inventor
夏雨
王鲁平
陆地
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Xiyi Weixiang Technology Co ltd
Original Assignee
Shenzhen Xiyi Weixiang Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Xiyi Weixiang Technology Co ltd
Priority to CN202311469056.7A
Publication of CN117274471A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00: 3D [Three Dimensional] image rendering
    • G06T15/04: Texture mapping
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application relates to the technical field of intelligent wearable devices and discloses an interaction method, apparatus, device and storage medium for intelligent devices in a virtual space. The method comprises: creating a virtual scene; importing virtual avatars into the virtual scene and acquiring their initial motion data; and normalizing each item of initial motion data with a preset data conversion algorithm to generate corresponding standard motion data, so that the virtual avatars can interact in the virtual scene. In this way, the virtual space is built in advance from spatial 3D materials, the corresponding initial motion data are collected from different intelligent terminals, and the heterogeneous initial motion data are converted into standard motion data in a uniform format according to the preset data conversion algorithm. Interaction among different types of intelligent terminals is thereby achieved, solving the problem of building a cross-platform, cross-terminal collaborative interaction scene in a virtual space.

Description

Interaction method, device, equipment and storage medium of intelligent equipment in virtual space
Technical Field
The application relates to the technical field of intelligent wearable devices, and in particular to an interaction method, apparatus and device for intelligent devices in a virtual space, and a storage medium.
Background
Many software packages and tools on the market can build virtual scenes, and some also provide collaboration capabilities on a particular platform or terminal. However, these existing tools are complex to use: building a scene often requires specialist expertise, supporting interaction across different devices requires substantial additional development work, and none of them can quickly assemble a virtual scene that is simultaneously cross-platform, cross-device and collaborative. In short, existing tools and development workflows cannot rapidly build a cross-platform, cross-device collaborative interaction virtual scene without writing code. How to construct a cross-platform, cross-terminal virtual-space collaborative interaction scene has therefore become a technical problem to be solved.
Disclosure of Invention
The application provides an interaction method, apparatus, device and storage medium for intelligent devices in a virtual space, solving the problem of constructing a cross-platform, cross-terminal collaborative interaction scene in a virtual space.
In a first aspect, the present application provides an interaction method for an intelligent device in a virtual space, the method comprising:
generating a 3D model from preset spatial 3D materials, and creating a virtual scene from the 3D model;
importing the virtual avatar of at least one intelligent terminal into the virtual scene, and acquiring initial motion data of each virtual avatar;
and normalizing each item of initial motion data based on a preset data conversion algorithm to generate corresponding standard motion data, so that the intelligent terminals can interact in the virtual scene, the standard motion data being cross-platform, cross-terminal motion data.
Further, before importing the virtual avatar of at least one intelligent terminal into the virtual scene and acquiring the initial motion data of each virtual avatar, the method comprises:
dividing the 3D model into at least one lock partition based on the object composition structure of the 3D model, and generating interaction locking logic based on each lock partition.
Further, a first 3D model exists in the virtual scene, and importing the virtual avatar of at least one intelligent terminal into the virtual scene and acquiring the initial motion data of each virtual avatar includes:
copying the first 3D model to generate a second 3D model when at least two virtual avatars in the virtual scene simultaneously issue a collaborative operation instruction to the first 3D model;
responding to the collaborative operation instruction through the first 3D model and the second 3D model respectively, so that the first 3D model and the second 3D model share the interaction locking logic;
and placing the first 3D model and the second 3D model into independent spaces respectively while keeping them synchronized, until the first 3D model or the second 3D model stops executing the collaborative operation instruction, and then acquiring the corresponding initial motion data respectively.
Further, the intelligent terminals include a first-type intelligent terminal, a second-type intelligent terminal and a third-type intelligent terminal, and the initial motion data include overall motion data and limb motion data; importing the virtual avatar of at least one intelligent terminal into the virtual scene and acquiring the initial motion data of each virtual avatar includes:
when the intelligent terminal is the first-type intelligent terminal, driving the virtual avatar to move as a whole through the first-type intelligent terminal, and acquiring the overall motion data;
when the intelligent terminal is the second-type intelligent terminal, driving the virtual avatar to perform limb motion through the second-type intelligent terminal, and acquiring the limb motion data;
and when the intelligent terminal is the third-type intelligent terminal, driving the virtual avatar to move as a whole and/or perform limb motion through the third-type intelligent terminal, and acquiring the corresponding overall motion data and/or limb motion data.
Further, before generating a 3D model from the preset spatial 3D materials and creating a virtual scene from the 3D model, the method includes:
acquiring non-3D materials, the non-3D materials including pictures, videos, texts and/or documents;
and converting the non-3D materials into the preset spatial 3D materials based on a preset 3D component packaging tool.
Further, generating a 3D model from the preset spatial 3D materials and creating a virtual scene from the 3D model includes:
constructing, in the preset space, a plane with the same proportions as the preset spatial 3D material, and converting the preset spatial 3D material into map data;
generating a material picture in the preset space based on the material properties of the preset spatial 3D material and the map data;
generating a 3D object based on the map data, the material picture and a reference direction, the reference direction being opposite to the normal of the plane;
and constructing the virtual scene from at least one such 3D object.
Further, normalizing each item of initial motion data based on a preset data conversion algorithm to generate corresponding standard motion data, so that the intelligent terminals can interact in the virtual scene, includes:
determining a corresponding preset standard data structure based on the type of the intelligent terminal;
and converting the data format of each item of initial motion data into a standard data format through the preset standard data structure to generate the corresponding standard motion data.
In a second aspect, the present application further provides an interaction apparatus for an intelligent device in a virtual space, the apparatus comprising:
a virtual scene creation module, configured to generate a 3D model from preset spatial 3D materials and create a virtual scene from the 3D model;
an initial motion data acquisition module, configured to import the virtual avatar of at least one intelligent terminal into the virtual scene and acquire the initial motion data of each virtual avatar;
and a standard motion data interaction module, configured to normalize each item of initial motion data based on a preset data conversion algorithm and generate corresponding standard motion data, so that the intelligent terminals can interact in the virtual scene, the standard motion data being cross-platform, cross-terminal motion data.
In a third aspect, the present application further provides a computer device including a memory and a processor; the memory is configured to store a computer program; the processor is configured to execute the computer program and, when executing the computer program, implement the above interaction method of an intelligent device in a virtual space.
In a fourth aspect, the present application further provides a computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to implement the interaction method of an intelligent device in a virtual space as described above.
The application discloses an interaction method, apparatus, device and storage medium for an intelligent device in a virtual space. The interaction method includes: generating a 3D model from preset spatial 3D materials and creating a virtual scene from the 3D model; importing the virtual avatar of at least one intelligent terminal into the virtual scene and acquiring the initial motion data of each virtual avatar; and normalizing each item of initial motion data based on a preset data conversion algorithm to generate corresponding standard motion data, so that the intelligent terminals can interact in the virtual scene, the standard motion data being cross-platform, cross-terminal motion data. In this way, the virtual space is built in advance from spatial 3D materials, the corresponding initial motion data are collected from different intelligent terminals, and the heterogeneous initial motion data are converted into standard motion data in a uniform format according to the preset data conversion algorithm, so that different types of intelligent terminals can interact with one another, which solves the problem of building a cross-platform, cross-terminal collaborative interaction scene in a virtual space.
Drawings
To illustrate the technical solutions of the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present application; a person of ordinary skill in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a schematic flow chart of an interaction method of an intelligent device in a virtual space according to a first embodiment of the present application;
FIG. 2 is a schematic flow chart of an interaction method of an intelligent device in a virtual space according to a second embodiment of the present application;
FIG. 3 is a schematic flow chart of an interaction method of an intelligent device in a virtual space according to a third embodiment of the present application;
FIG. 4 is a schematic block diagram of an interaction device of an intelligent device in a virtual space according to an embodiment of the present application;
fig. 5 is a schematic block diagram of a computer device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings. The described embodiments are evidently only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without inventive effort fall within the scope of protection of the present application.
The flow diagrams depicted in the figures are merely illustrative: they need not include all of the elements and operations/steps, nor be executed in the order described. For example, some operations/steps may be further divided, combined or partially merged, so the actual order of execution may change according to the actual situation.
It is to be understood that the terminology used in the description of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
The embodiments of the present application provide an interaction method, apparatus and device for an intelligent device in a virtual space, and a storage medium. The interaction method can be applied to a server: the virtual space is built in advance from spatial 3D materials, the corresponding initial motion data are collected from different intelligent terminals, and the heterogeneous initial motion data are converted into standard motion data in a uniform format according to a preset data conversion algorithm, so that different types of intelligent terminals can interact with one another, which solves the problem of building a cross-platform, cross-terminal collaborative interaction scene in a virtual space. The server may be an independent server or a server cluster.
Some embodiments of the present application are described in detail below with reference to the accompanying drawings. The following embodiments and features of the embodiments may be combined with each other without conflict.
Referring to fig. 1, fig. 1 is a schematic flowchart of an interaction method of an intelligent device in a virtual space according to a first embodiment of the present application. The method can be applied to a server: the virtual space is built in advance from spatial 3D materials, the corresponding initial motion data are collected from different intelligent terminals, and the heterogeneous initial motion data are converted into standard motion data in a uniform format according to a preset data conversion algorithm, so that different types of intelligent terminals can interact with one another, solving the problem of building a cross-platform, cross-terminal collaborative interaction scene in a virtual space.
As shown in fig. 1, the interaction method of the intelligent device in the virtual space specifically includes steps S10 to S30.
Step S10, generating a 3D model from preset spatial 3D materials, and creating a virtual scene from the 3D model;
in one embodiment, a virtual scene is created, one or more persons co-construct, and a user may arrange scene elements available for interaction in the virtual space, such as 3D models, information panels, text introduction, lights, environmental sound effects, streaming windows, etc. When a plurality of people cooperate to construct, the construction actions and the voice information of other people can be observed at the same time, so that the cooperation is better. After the creation is completed, a plurality of people can enter the scene together to carry out communication and cooperative interaction.
Step S20, importing the virtual avatar of at least one intelligent terminal into the virtual scene, and acquiring initial motion data of each virtual avatar;
In one embodiment, the intelligent terminal is a physical device, and a virtual avatar representing the intelligent terminal is created in the virtual scene (i.e. the virtual space) so that the motion data of the actual scene and of the virtual scene stay synchronized.
The motion data of the virtual avatars fall into two main types. The first is overall motion data: the displacement of the wearer in space. Its behavior in space is consistent; whatever device and input source are used, it expresses the overall motion of the user.
The second is limb motion data: relative motion with respect to the user's own coordinate system. This type of data varies with the device and input source, which leads to different forms of limb motion. For example, a handle drives the avatar's arm, a stylus drives a pen beside the avatar, while a mouse or screen touch produces no gesture motion at all. Regardless of the gesture form, however, ray picking and collision detection can be performed according to the characteristics of the device to drive state changes of the target object and achieve the interaction effect in the interaction space.
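A minimal sketch of what such a two-part motion record might look like is given below, written in TypeScript since the patent names WebXR and WebGL among its target APIs; every type and field name here is illustrative, not taken from the patent.

```typescript
// Illustrative sketch of the overall/limb split described above.
interface Pose {
  position: [number, number, number];          // scene coordinates
  rotation: [number, number, number, number];  // quaternion (x, y, z, w)
}

// Overall motion: the avatar's displacement in the shared scene.
// Every device type can produce this, whatever its input source.
interface OverallMotion {
  avatarId: string;
  pose: Pose;        // expressed in scene (world) coordinates
  timestamp: number;
}

// Limb motion: relative to the user's own coordinate system; only
// devices with tracked controllers, hands or pens can produce it.
interface LimbMotion {
  avatarId: string;
  limb: "leftHand" | "rightHand" | "pen";
  pose: Pose;        // expressed relative to the avatar's origin
  timestamp: number;
}

type MotionSample =
  | { kind: "overall"; data: OverallMotion }
  | { kind: "limb"; data: LimbMotion };
```

Keeping the two kinds in one tagged union lets a device that produces no limb motion simply never emit "limb" samples, while the rest of the pipeline stays unchanged.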
Step S30, normalizing each item of initial motion data based on a preset data conversion algorithm to generate corresponding standard motion data, so that the intelligent terminals can interact in the virtual scene, the standard motion data being cross-platform, cross-terminal motion data.
In one embodiment, the event-hierarchy part of the standard data structure defines the state-change relationships between people, between people and objects, and between objects in the interaction scene, such as gazed, gripped, contacted and activated, and unifies the input events of different devices into the same event. Taking the gazed state as an example: in an XR (Extended Reality) helmet supporting eye tracking, an object is gazed when it is hit by the eye-focus ray; in an XR helmet with a handle controller, when it is hit by a ray emitted along the forward direction of the controller; in an XR helmet with gesture control, when it is hit by a ray emitted along the forward direction of the index finger; on a PC (Personal Computer) or VR (Virtual Reality) all-in-one device operated with a mouse, when it is hit by a ray cast from the spatial position of the camera's center point towards the spatial position corresponding to the screen position over which the mouse hovers; and on mobile devices and devices that can only be controlled through a screen, when it is hit by a ray cast from the spatial position of the camera's center point towards the spatial position corresponding to the center of the screen.
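As an illustration of this unification, the sketch below maps each device class listed above to a single gaze ray; all type and field names (DeviceInput, eyeFocusRay and so on) are invented for the example and are not part of the patent or of any real XR API.

```typescript
// Hedged sketch: unifying device-specific pointing into one "gazed" event.
type Vec3 = [number, number, number];
interface Ray { origin: Vec3; direction: Vec3; }

// One variant per device class described above; field names are illustrative.
type DeviceInput =
  | { kind: "eyeTracking"; eyeFocusRay: Ray }                   // XR helmet, eye tracking
  | { kind: "handleController"; forwardRay: Ray }               // XR helmet, handle controller
  | { kind: "handGesture"; indexFingerRay: Ray }                // XR helmet, gesture control
  | { kind: "mouse"; cameraCenter: Vec3; hoverWorldPos: Vec3 }  // PC / VR all-in-one, mouse
  | { kind: "screenOnly"; cameraCenter: Vec3; screenCenterWorldPos: Vec3 }; // mobile

function rayBetween(from: Vec3, to: Vec3): Ray {
  const d: Vec3 = [to[0] - from[0], to[1] - from[1], to[2] - from[2]];
  const len = Math.hypot(d[0], d[1], d[2]);
  return { origin: from, direction: [d[0] / len, d[1] / len, d[2] / len] };
}

// Whatever the device, the result is a single ray; one downstream hit test
// against scene objects then raises the same unified "gazed" event.
function gazeRay(input: DeviceInput): Ray {
  switch (input.kind) {
    case "eyeTracking":      return input.eyeFocusRay;
    case "handleController": return input.forwardRay;
    case "handGesture":      return input.indexFingerRay;
    case "mouse":            return rayBetween(input.cameraCenter, input.hoverWorldPos);
    case "screenOnly":       return rayBetween(input.cameraCenter, input.screenCenterWorldPos);
  }
}
```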
This embodiment discloses an interaction method, apparatus, device and storage medium for an intelligent device in a virtual space. The method includes: generating a 3D model from preset spatial 3D materials and creating a virtual scene from the 3D model; importing the virtual avatar of at least one intelligent terminal into the virtual scene and acquiring the initial motion data of each virtual avatar; and normalizing each item of initial motion data based on a preset data conversion algorithm to generate corresponding standard motion data, so that the intelligent terminals can interact in the virtual scene, the standard motion data being cross-platform, cross-terminal motion data. In this way, the virtual space is built in advance from spatial 3D materials, the corresponding initial motion data are collected from different intelligent terminals, and the heterogeneous initial motion data are converted into standard motion data in a uniform format according to the preset data conversion algorithm, so that different types of intelligent terminals can interact with one another, solving the problem of building a cross-platform, cross-terminal collaborative interaction scene in a virtual space.
Based on the embodiment shown in fig. 1, before step S20, the method includes:
dividing the 3D model into at least one lock partition based on the object composition structure of the 3D model, and generating interaction locking logic based on each lock partition.
In one embodiment, an interaction locking mechanism is provided for 3D models so that several people can conveniently operate the same model object collaboratively. After a 3D model is loaded into the scene, the program divides the model into a number of lock partitions according to the model's own object composition structure (the parent-child relationships between the sub-objects within the 3D model object). The partitions keep the parent-child relationships of the original structure. While user A is operating a partition, user B can operate neither that partition nor any of its sub-partitions. For example, suppose a 3D model O1 has three sub-objects m1, m2 and m3, where m1 and m2 are children of O1 and m3 is a child of m1. If user A is operating m1, user B cannot simultaneously operate m1 or m3; if user A is operating m3, user B cannot simultaneously operate m3; and if user A is operating O1 as a whole, user B cannot operate O1 or any of its children.
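The locking rule of this example can be sketched as a small tree of lock partitions; the class and method names below are invented for illustration, not taken from the patent.

```typescript
// Illustrative sketch of the lock-partition rule: locking a partition blocks
// other users from that partition, all of its descendants, and (implicitly)
// from operating any ancestor as a whole.
class LockPartition {
  children: LockPartition[] = [];
  lockedBy: string | null = null;
  constructor(public name: string, public parent: LockPartition | null = null) {
    parent?.children.push(this);
  }
  // A user may operate this partition only if neither it, any ancestor,
  // nor any descendant is locked by someone else.
  canOperate(userId: string): boolean {
    for (let p: LockPartition | null = this.parent; p; p = p.parent)
      if (p.lockedBy && p.lockedBy !== userId) return false;
    return this.subtreeFreeFor(userId);
  }
  private subtreeFreeFor(userId: string): boolean {
    if (this.lockedBy && this.lockedBy !== userId) return false;
    return this.children.every(c => c.subtreeFreeFor(userId));
  }
}

// Reproducing the patent's example: O1 has children m1 and m2; m3 is a child of m1.
const O1 = new LockPartition("O1");
const m1 = new LockPartition("m1", O1);
const m2 = new LockPartition("m2", O1);
const m3 = new LockPartition("m3", m1);

m1.lockedBy = "userA";               // user A operates m1
console.log(m3.canOperate("userB")); // false: m3 lies inside A's locked partition
console.log(m2.canOperate("userB")); // true: m2 is unaffected
console.log(O1.canOperate("userB")); // false: O1 contains A's lock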
Referring to fig. 2, fig. 2 is a schematic flowchart of an interaction method of an intelligent device in a virtual space according to a second embodiment of the present application. As in the first embodiment, the method can be applied to a server: the virtual space is built in advance from spatial 3D materials, the corresponding initial motion data are collected from different intelligent terminals, and the heterogeneous initial motion data are converted into standard motion data in a uniform format according to a preset data conversion algorithm, so that different types of intelligent terminals can interact with one another, solving the problem of building a cross-platform, cross-terminal collaborative interaction scene in a virtual space.
Based on the above embodiment, as shown in fig. 2 of the present embodiment, step S20 includes steps S201 to S203.
Step S201, copying the first 3D model to generate a second 3D model when at least two virtual avatars in the virtual scene simultaneously issue a collaborative operation instruction to the first 3D model;
Step S202, responding to the collaborative operation instruction through the first 3D model and the second 3D model respectively, so that the first 3D model and the second 3D model share the interaction locking logic;
Step S203, placing the first 3D model and the second 3D model into independent spaces respectively while keeping them synchronized, until the first 3D model or the second 3D model stops executing the collaborative operation instruction, and then acquiring the corresponding initial motion data respectively.
In one embodiment, a collaborative replication function is provided for 3D models. While user A is operating a model O1 in the current interaction space, user B may replicate O1 as O2. O2 and O1 then receive the operations of user A and user B simultaneously and share the same interaction locking logic, and user B may carry O2 into another interaction space while the two models still change synchronously, until user A or user B actively cancels the collaborative replication state, at which point O1 and O2 separate into two independent models that no longer affect each other.
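A possible shape for this replication link is sketched below; the names and the in-memory synchronization are assumptions made for illustration, since the patent does not specify the mechanism.

```typescript
// Hedged sketch of collaborative replication: O2 is a copy of O1, both share
// one lock object and mirror each other's state until either user detaches.
interface Transform { position: [number, number, number]; }

class CollabModel {
  peer: CollabModel | null = null;  // linked copy, if any
  constructor(public id: string,
              public transform: Transform,
              public sharedLock: { lockedBy: string | null }) {}

  static replicate(original: CollabModel, copyId: string): CollabModel {
    const copy = new CollabModel(
      copyId,
      { position: [...original.transform.position] },
      original.sharedLock);          // the very same lock object is shared
    original.peer = copy;
    copy.peer = original;
    return copy;
  }

  applyOperation(t: Transform): void {
    this.transform = t;
    if (this.peer) this.peer.transform = { ...t }; // keep the pair in sync
  }

  detach(): void {                   // either user cancels the replication state
    if (!this.peer) return;
    this.peer.sharedLock = { lockedBy: null };     // the copy gets its own lock
    this.peer.peer = null;
    this.peer = null;                // from here on the two models are independent
  }
}
```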
Based on the embodiment shown in fig. 1, in this embodiment, step S20 further includes:
when the intelligent terminal is the first-type intelligent terminal, driving the virtual avatar to move as a whole through the first-type intelligent terminal, and acquiring the overall motion data;
when the intelligent terminal is the second-type intelligent terminal, driving the virtual avatar to perform limb motion through the second-type intelligent terminal, and acquiring the limb motion data;
and when the intelligent terminal is the third-type intelligent terminal, driving the virtual avatar to move as a whole and/or perform limb motion through the third-type intelligent terminal, and acquiring the corresponding overall motion data and/or limb motion data.
In one embodiment, the first-type intelligent terminal may be a PC or a mobile phone, and the second-type intelligent terminal may be the handle or hand gestures of an XR helmet. For an XR helmet, movement of the helmet itself drives the overall movement of the avatar and therefore produces overall motion data, while movement of the handle or of gestures is movement relative to the helmet and produces limb motion data. On PCs and mobile devices, keys, mice, screen touches and the like can only drive the avatar as a whole, so only overall motion data are produced.
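This device-class mapping can be summarized in a small capability table, sketched below with the patent's first/second/third terminal types; pairing the third type with a full XR helmet setup is an inference from the text, not an explicit statement in the patent.

```typescript
// Which kinds of motion data each terminal class can produce.
type TerminalType = "firstType" | "secondType" | "thirdType";

const capabilities: Record<TerminalType, { overall: boolean; limb: boolean }> = {
  firstType:  { overall: true,  limb: false }, // PC / phone: keys, mouse, touch move the whole avatar
  secondType: { overall: false, limb: true  }, // XR handle / gestures: motion relative to the helmet
  thirdType:  { overall: true,  limb: true  }, // inferred: e.g. helmet (overall) plus handle/gestures (limb)
};
```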
Based on the embodiment shown in fig. 1, before step S10, the method includes:
acquiring non-3D materials, the non-3D materials including pictures, videos, texts and/or documents;
and converting the non-3D materials into the preset spatial 3D materials based on a preset 3D component packaging tool.
Referring to fig. 3, fig. 3 is a schematic flowchart of an interaction method of an intelligent device in a virtual space according to a third embodiment of the present application. As in the previous embodiments, the method can be applied to a server: the virtual space is built in advance from spatial 3D materials, the corresponding initial motion data are collected from different intelligent terminals, and the heterogeneous initial motion data are converted into standard motion data in a uniform format according to a preset data conversion algorithm, so that different types of intelligent terminals can interact with one another, solving the problem of building a cross-platform, cross-terminal collaborative interaction scene in a virtual space.
Based on the above embodiment, as shown in fig. 3 of the present embodiment, step S10 includes steps S101 to S104.
Step S101, constructing, in the preset space, a plane with the same proportions as the preset spatial 3D material, and converting the preset spatial 3D material into map data;
Step S102, generating a material picture in the preset space based on the material properties of the preset spatial 3D material and the map data;
Step S103, generating a 3D object based on the map data, the material picture and a reference direction, the reference direction being opposite to the normal of the plane;
Step S104, constructing the virtual scene from at least one such 3D object.
In one embodiment, a 3D component packaging tool is provided for importing a user's non-3D materials, such as pictures, videos, texts and documents, and converting them into spatial 3D objects. The process is as follows: a plane with the same proportions as the material is constructed in space; the material is converted into map data in image form; and the map data are applied to the plane as its material so that the material picture is displayed in space. The plane is then extruded in the direction opposite to its normal, and outward from its center, to build the rim and thickness that complete the 3D object. The created 3D object is given physical properties and an event hierarchy to support subsequent interactive operations.
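One plausible realization of these packaging steps uses three.js as the rendering library; the patent names no library, so every API choice below is an assumption, and a production version would also need UV adjustment so the texture fits the extruded face cleanly.

```typescript
// Hedged sketch: turning a flat picture into a spatial 3D slab, per the steps above.
import * as THREE from "three";

function packageAs3D(imageUrl: string, width: number, height: number,
                     thickness = 0.02): THREE.Mesh {
  // 1. Build a plane-shaped profile with the same proportions as the material.
  const shape = new THREE.Shape();
  shape.moveTo(-width / 2, -height / 2);
  shape.lineTo(width / 2, -height / 2);
  shape.lineTo(width / 2, height / 2);
  shape.lineTo(-width / 2, height / 2);
  shape.closePath();

  // 2. Convert the material into map (texture) data in image form.
  const map = new THREE.TextureLoader().load(imageUrl);

  // 3. Extrude to give the slab a rim and thickness; the patent extrudes
  //    opposite to the plane normal, which here amounts to the depth axis.
  const geometry = new THREE.ExtrudeGeometry(shape, {
    depth: thickness,
    bevelEnabled: false,
  });
  const material = new THREE.MeshStandardMaterial({ map });
  return new THREE.Mesh(geometry, material);
}
```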
Based on any of the above embodiments, in this embodiment, step S30 includes:
determining a corresponding preset standard data structure based on the type of the intelligent terminal;
and converting the data format of each item of initial motion data into a standard data format through the preset standard data structure to generate the corresponding standard motion data.
In one embodiment, a set of standard data structures is provided to convert the motion data of different APIs (Application Program Interfaces), such as OpenXR (an interface between applications and XR runtimes), WebXR (the browser API for operating XR devices), SteamVR (a full-featured 360° room-scale virtual reality runtime), OpenGL (Open Graphics Library) and WebGL (Web Graphics Library, a 3D drawing protocol), into the same data format, and to translate the interaction event systems of different input devices and input modes into the same event system, so that cross-platform, cross-device collaborative interaction can be carried out.
The motion data part of the standard data structure in this embodiment separates the overall motion data from the limb motion data, so that devices that cannot generate limb motion data can still interact in the interaction space.
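A hedged sketch of that conversion step follows: one adapter per terminal type maps its raw samples into a single standard record. The raw field names are invented placeholders, not real OpenXR/WebXR/SteamVR structures.

```typescript
// Illustrative normalization: heterogeneous raw samples funnel through one
// adapter per terminal type into a shared standard motion record.
interface StandardMotion {
  avatarId: string;
  kind: "overall" | "limb";
  position: [number, number, number];
  rotation: [number, number, number, number]; // quaternion
  timestamp: number;
}

type Adapter = (raw: unknown, avatarId: string) => StandardMotion[];

const adapters: Record<string, Adapter> = {
  // One adapter registered per terminal type / runtime; each knows its own
  // raw layout and emits the shared format. "webxrLike" is a made-up example.
  webxrLike: (raw, avatarId) => {
    const r = raw as { pos: number[]; quat: number[]; t: number };
    return [{
      avatarId,
      kind: "overall",
      position: [r.pos[0], r.pos[1], r.pos[2]],
      rotation: [r.quat[0], r.quat[1], r.quat[2], r.quat[3]],
      timestamp: r.t,
    }];
  },
};

function normalize(terminalType: string, raw: unknown, avatarId: string): StandardMotion[] {
  const adapter = adapters[terminalType];
  if (!adapter) throw new Error(`no standard data structure for ${terminalType}`);
  return adapter(raw, avatarId);
}
```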
Referring to fig. 4, fig. 4 is a schematic block diagram of an interaction apparatus of an intelligent device in a virtual space according to an embodiment of the present application; the apparatus is configured to execute the aforementioned interaction method of an intelligent device in a virtual space. The apparatus may be deployed on a server.
As shown in fig. 4, the interaction device of the intelligent device in the virtual space includes:
a virtual scene creation module 410, configured to generate a 3D model from preset spatial 3D materials and create a virtual scene from the 3D model;
an initial motion data acquisition module 420, configured to import the virtual avatar of at least one intelligent terminal into the virtual scene and acquire the initial motion data of each virtual avatar;
and a standard motion data interaction module 430, configured to normalize each item of initial motion data based on a preset data conversion algorithm and generate corresponding standard motion data, so that the intelligent terminals can interact in the virtual scene, the standard motion data being cross-platform, cross-terminal motion data.
Further, the interaction apparatus of the intelligent device in the virtual space further includes:
an interaction locking logic generation module, configured to divide the 3D model into at least one lock partition based on the object composition structure of the 3D model and to generate interaction locking logic based on each lock partition.
Further, the initial motion data acquisition module 420 includes:
a model copying unit, configured to copy the first 3D model and generate a second 3D model when at least two virtual avatars in the virtual scene simultaneously issue a collaborative operation instruction to the first 3D model;
an interaction locking logic sharing unit, configured to respond to the collaborative operation instruction through the first 3D model and the second 3D model respectively, so that the first 3D model and the second 3D model share the interaction locking logic;
and a data acquisition unit, configured to place the first 3D model and the second 3D model into independent spaces respectively while keeping them synchronized, until the first 3D model or the second 3D model stops executing the collaborative operation instruction, and then acquire the corresponding initial motion data respectively.
Further, the initial motion data acquisition module 420 includes:
an overall motion data acquisition unit, configured to drive the virtual avatar to move as a whole through the first-type intelligent terminal when the intelligent terminal is the first-type intelligent terminal, and acquire the overall motion data;
a limb motion data acquisition unit, configured to drive the virtual avatar to perform limb motion through the second-type intelligent terminal when the intelligent terminal is the second-type intelligent terminal, and acquire the limb motion data;
and a multiple data acquisition unit, configured to drive the virtual avatar to move as a whole and/or perform limb motion through the third-type intelligent terminal when the intelligent terminal is the third-type intelligent terminal, and acquire the corresponding overall motion data and/or limb motion data.
Further, the interaction apparatus of the intelligent device in the virtual space further includes:
a non-3D material acquisition module, configured to acquire non-3D materials, the non-3D materials including pictures, videos, texts and/or documents;
and a 3D packaging module, configured to convert the non-3D materials into the preset spatial 3D materials based on a preset 3D component packaging tool.
Further, the virtual scene creation module 410 includes:
a 3D material conversion unit, configured to construct, in the preset space, a plane with the same proportions as the preset spatial 3D material and to convert the preset spatial 3D material into map data;
a material picture generation unit, configured to generate a material picture in the preset space based on the material properties of the preset spatial 3D material and the map data;
a 3D object generation unit, configured to generate a 3D object based on the map data, the material picture and a reference direction, the reference direction being opposite to the normal of the plane;
and a virtual scene construction unit, configured to construct the virtual scene from at least one such 3D object.
Further, the standard motion data interaction module 430 includes:
a standard data structure determination unit, configured to determine a corresponding preset standard data structure based on the type of the intelligent terminal;
and a standard motion data generation unit, configured to convert the data format of each item of initial motion data into a standard data format through the preset standard data structure and generate the corresponding standard motion data.
It should be noted that, for convenience and brevity of description, the specific working process of the apparatus and each module described above may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
The apparatus described above may be implemented in the form of a computer program which is executable on a computer device as shown in fig. 5.
Referring to fig. 5, fig. 5 is a schematic block diagram of a computer device according to an embodiment of the present application. The computer device may be a server.
With reference to FIG. 5, the computer device includes a processor, memory, and a network interface connected by a system bus, where the memory may include a non-volatile storage medium and an internal memory.
The non-volatile storage medium may store an operating system and a computer program. The computer program includes program instructions which, when executed, cause the processor to perform any one of the interaction methods of an intelligent device in a virtual space.
The processor is used to provide computing and control capabilities to support the operation of the entire computer device.
The internal memory provides an environment for running the computer program stored in the non-volatile storage medium; when the computer program is executed by the processor, it causes the processor to perform any one of the interaction methods of an intelligent device in a virtual space.
The network interface is used for network communication, such as transmitting assigned tasks. It will be appreciated by those skilled in the art that the structure shown in fig. 5 is merely a block diagram of part of the structure relevant to the present application and does not limit the computer device to which the present application is applied; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
It should be appreciated that the processor may be a central processing unit (Central Processing Unit, CPU), but may also be other general purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), field-programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. Wherein the general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
In one embodiment, the processor is configured to run a computer program stored in the memory to implement the following steps:
generating a 3D model from preset spatial 3D materials, and creating a virtual scene from the 3D model;
importing the virtual avatar of at least one intelligent terminal into the virtual scene, and acquiring initial motion data of each virtual avatar;
and normalizing each item of initial motion data based on a preset data conversion algorithm to generate corresponding standard motion data, so that the intelligent terminals can interact in the virtual scene, the standard motion data being cross-platform, cross-terminal motion data.
In one embodiment, before importing the virtual avatar of at least one intelligent terminal into the virtual scene and acquiring the initial motion data of each virtual avatar, the processor is configured to implement:
dividing the 3D model into at least one lock partition based on the object composition structure of the 3D model, and generating interaction locking logic based on each lock partition.
In one embodiment, a first 3D model exists in the virtual scene, and when importing the virtual avatar of at least one intelligent terminal into the virtual scene and acquiring the initial motion data of each virtual avatar, the processor is configured to implement:
copying the first 3D model to generate a second 3D model when at least two virtual avatars in the virtual scene simultaneously issue a collaborative operation instruction to the first 3D model;
responding to the collaborative operation instruction through the first 3D model and the second 3D model respectively, so that the first 3D model and the second 3D model share the interaction locking logic;
and placing the first 3D model and the second 3D model into independent spaces respectively while keeping them synchronized, until the first 3D model or the second 3D model stops executing the collaborative operation instruction, and then acquiring the corresponding initial motion data respectively.
In one embodiment, the intelligent terminals include a first-type intelligent terminal, a second-type intelligent terminal and a third-type intelligent terminal, and the initial motion data include overall motion data and limb motion data; when importing the virtual avatar of at least one intelligent terminal into the virtual scene and acquiring the initial motion data of each virtual avatar, the processor is configured to implement:
when the intelligent terminal is the first-type intelligent terminal, driving the virtual avatar to move as a whole through the first-type intelligent terminal, and acquiring the overall motion data;
when the intelligent terminal is the second-type intelligent terminal, driving the virtual avatar to perform limb motion through the second-type intelligent terminal, and acquiring the limb motion data;
and when the intelligent terminal is the third-type intelligent terminal, driving the virtual avatar to move as a whole and/or perform limb motion through the third-type intelligent terminal, and acquiring the corresponding overall motion data and/or limb motion data.
In one embodiment, before generating a 3D model from the preset spatial 3D materials and creating a virtual scene from the 3D model, the processor is configured to implement:
acquiring non-3D materials, the non-3D materials including pictures, videos, texts and/or documents;
and converting the non-3D materials into the preset spatial 3D materials based on a preset 3D component packaging tool.
In one embodiment, when generating a 3D model from the preset spatial 3D materials and creating a virtual scene from the 3D model, the processor is configured to implement:
constructing, in the preset space, a plane with the same proportions as the preset spatial 3D material, and converting the preset spatial 3D material into map data;
generating a material picture in the preset space based on the material properties of the preset spatial 3D material and the map data;
generating a 3D object based on the map data, the material picture and a reference direction, the reference direction being opposite to the normal of the plane;
and constructing the virtual scene from at least one such 3D object.
In one embodiment, when normalizing each item of initial motion data based on a preset data conversion algorithm to generate corresponding standard motion data so that the intelligent terminals can interact in the virtual scene, the processor is configured to implement:
determining a corresponding preset standard data structure based on the type of the intelligent terminal;
and converting the data format of each item of initial motion data into a standard data format through the preset standard data structure to generate the corresponding standard motion data.
An embodiment of the present application further provides a computer-readable storage medium storing a computer program. The computer program includes program instructions which, when executed by a processor, implement the interaction method of an intelligent device in a virtual space provided by any embodiment of the present application.
The computer readable storage medium may be an internal storage unit of the computer device according to the foregoing embodiment, for example, a hard disk or a memory of the computer device. The computer readable storage medium may also be an external storage device of the computer device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), or the like, which are provided on the computer device.
While the invention has been described with reference to certain preferred embodiments, it will be understood by those skilled in the art that various changes and substitutions of equivalents may be made and equivalents will be apparent to those skilled in the art without departing from the scope of the invention. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. An interaction method of an intelligent device in a virtual space, characterized by comprising:
generating a 3D model from preset spatial 3D materials, and creating a virtual scene from the 3D model;
importing the virtual avatar of at least one intelligent terminal into the virtual scene, and acquiring initial motion data of each virtual avatar;
and normalizing each item of initial motion data based on a preset data conversion algorithm to generate corresponding standard motion data, so that the intelligent terminals can interact in the virtual scene, wherein the standard motion data are cross-platform, cross-terminal motion data.
2. The interaction method of an intelligent device in a virtual space according to claim 1, characterized in that, before importing the virtual avatar of at least one intelligent terminal into the virtual scene and acquiring the initial motion data of each virtual avatar, the method comprises:
dividing the 3D model into at least one lock partition based on the object composition structure of the 3D model, and generating interaction locking logic based on each lock partition.
3. The interaction method of an intelligent device in a virtual space according to claim 2, characterized in that a first 3D model exists in the virtual scene, and importing the virtual avatar of at least one intelligent terminal into the virtual scene and acquiring the initial motion data of each virtual avatar comprises:
copying the first 3D model to generate a second 3D model when at least two virtual avatars in the virtual scene simultaneously issue a collaborative operation instruction to the first 3D model;
responding to the collaborative operation instruction through the first 3D model and the second 3D model respectively, so that the first 3D model and the second 3D model share the interaction locking logic;
and placing the first 3D model and the second 3D model into independent spaces respectively while keeping them synchronized, until the first 3D model or the second 3D model stops executing the collaborative operation instruction, and then acquiring the corresponding initial motion data respectively.
4. The interaction method of an intelligent device in a virtual space according to claim 1, characterized in that the intelligent terminals comprise a first-type intelligent terminal, a second-type intelligent terminal and a third-type intelligent terminal, the initial motion data comprise overall motion data and limb motion data, and importing the virtual avatar of at least one intelligent terminal into the virtual scene and acquiring the initial motion data of each virtual avatar comprises:
when the intelligent terminal is the first-type intelligent terminal, driving the virtual avatar to move as a whole through the first-type intelligent terminal, and acquiring the overall motion data;
when the intelligent terminal is the second-type intelligent terminal, driving the virtual avatar to perform limb motion through the second-type intelligent terminal, and acquiring the limb motion data;
and when the intelligent terminal is the third-type intelligent terminal, driving the virtual avatar to move as a whole and/or perform limb motion through the third-type intelligent terminal, and acquiring the corresponding overall motion data and/or limb motion data.
5. The interaction method of an intelligent device in a virtual space according to claim 1, characterized in that, before generating a 3D model from the preset spatial 3D materials and creating a virtual scene from the 3D model, the method comprises:
acquiring non-3D materials, the non-3D materials comprising pictures, videos, texts and/or documents;
and converting the non-3D materials into the preset spatial 3D materials based on a preset 3D component packaging tool.
6. The interaction method of an intelligent device in a virtual space according to claim 5, characterized in that generating a 3D model from the preset spatial 3D materials and creating a virtual scene from the 3D model comprises:
constructing, in the preset space, a plane with the same proportions as the preset spatial 3D material, and converting the preset spatial 3D material into map data;
generating a material picture in the preset space based on the material properties of the preset spatial 3D material and the map data;
generating a 3D object based on the map data, the material picture and a reference direction, the reference direction being opposite to the normal of the plane;
and constructing the virtual scene from at least one such 3D object.
7. The interaction method of an intelligent device in a virtual space according to any one of claims 1 to 6, characterized in that normalizing each item of initial motion data based on a preset data conversion algorithm to generate corresponding standard motion data, so that the intelligent terminals can interact in the virtual scene, comprises:
determining a corresponding preset standard data structure based on the type of the intelligent terminal;
and converting the data format of each item of initial motion data into a standard data format through the preset standard data structure to generate the corresponding standard motion data.
8. An interaction apparatus of an intelligent device in a virtual space, characterized by comprising:
a virtual scene creation module, configured to generate a 3D model from preset spatial 3D materials and create a virtual scene from the 3D model;
an initial motion data acquisition module, configured to import the virtual avatar of at least one intelligent terminal into the virtual scene and acquire the initial motion data of each virtual avatar;
and a standard motion data interaction module, configured to normalize each item of initial motion data based on a preset data conversion algorithm and generate corresponding standard motion data, so that the intelligent terminals can interact in the virtual scene, wherein the standard motion data are cross-platform, cross-terminal motion data.
9. A computer device, characterized in that the computer device comprises a memory and a processor;
the memory is configured to store a computer program;
and the processor is configured to execute the computer program and, when executing the computer program, implement the interaction method of an intelligent device in a virtual space according to any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, causes the processor to implement the interaction method of an intelligent device in a virtual space according to any one of claims 1 to 7.
CN202311469056.7A 2023-11-03 2023-11-03 Interaction method, device, equipment and storage medium of intelligent equipment in virtual space Pending CN117274471A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311469056.7A CN117274471A (en) 2023-11-03 2023-11-03 Interaction method, device, equipment and storage medium of intelligent equipment in virtual space

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311469056.7A CN117274471A (en) 2023-11-03 2023-11-03 Interaction method, device, equipment and storage medium of intelligent equipment in virtual space

Publications (1)

Publication Number Publication Date
CN117274471A true CN117274471A (en) 2023-12-22

Family

ID=89204453

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311469056.7A Pending CN117274471A (en) 2023-11-03 2023-11-03 Interaction method, device, equipment and storage medium of intelligent equipment in virtual space

Country Status (1)

Country Link
CN (1) CN117274471A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination