CN116938996A - Method, apparatus and device for a user to load models in a metaverse environment - Google Patents

Method, apparatus and device for a user to load models in a metaverse environment

Info

Publication number
CN116938996A
Authority
CN
China
Prior art keywords
user
users
model
meta
server
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210343027.5A
Other languages
Chinese (zh)
Inventor
董辰
陈梦颖
许晓东
韩书君
王碧舳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Posts and Telecommunications
Original Assignee
Beijing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Posts and Telecommunications filed Critical Beijing University of Posts and Telecommunications
Priority to CN202210343027.5A priority Critical patent/CN116938996A/en
Publication of CN116938996A publication Critical patent/CN116938996A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W4/00Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/02Services making use of location information
    • H04W4/021Services related to particular areas, e.g. point of interest [POI] services, venue services or geofences

Abstract

The disclosure provides a method, an apparatus and a device for a user to load models in a metaverse environment, and relates to the metaverse field, in particular to methods, apparatuses, devices and storage media for loading user models in a metaverse environment. The specific implementation scheme is as follows: in response to at least two users entering the range of the same local area communication network in a metaverse environment, a first server of the local area communication network loads the models of the at least two users; in response to the at least two users coming into each other's line of sight, the at least two users load each other's models through the first server. By implementing this technical scheme, the efficiency with which a user loads models in the metaverse environment can be greatly improved, the proportion of network bandwidth occupied can be reduced, and the user's experience in the metaverse environment can be improved.

Description

Method, apparatus and device for a user to load models in a metaverse environment
Technical Field
The present disclosure relates to the field of metaverse technologies, and in particular to a method, an apparatus, a device, and a storage medium for a user to load models in a metaverse environment.
Background
Pokémon GO has been popular for a long time and is a good metaverse-adjacent application. However, a user can only see himself and hunt for Pokémon alone, which reduces the fun. When a communication network is added to the metaverse, users in the metaverse can experience a variety of immersive social scenes through their own virtual avatars, communicate and have fun together in shared experiences that come close to reality, and ultimately find like-minded partners and establish social connections. How a given user loads the models of other users is a technical problem that needs to be solved.
Disclosure of Invention
The present disclosure provides a method, an apparatus, a device, and a storage medium for a user to load the models of other users in a metaverse environment.
According to a first aspect of the present disclosure, there is provided a method for a user to load models in a metaverse environment, including:
in response to at least two users entering the range of the same local area communication network in a metaverse environment, loading, by a first server of the local area communication network, the models of the at least two users;
in response to the at least two users coming into each other's line of sight, loading, by the at least two users through the first server, each other's models.
Preferably, before the at least two users load each other's models through the first server in response to coming into each other's line of sight, the method further includes:
in response to a user entering the metaverse environment, collecting, by a second server in the cloud, statistics on the previous loading data of the user's model;
sending, by the second server, the user's model to the first server according to the previous loading data of the user's model.
Preferably, the previous loading data of the user's model includes: the number of times the user's model has been loaded by other users.
Preferably, the loading, by the at least two users through the first server, of each other's models includes:
reading, by the first server, hardware configuration information of the metaverse device worn by any one of the at least two users;
sending, by the first server according to the hardware configuration information, the respective models of the remaining users other than said any one user to the metaverse device worn by said any one user;
repeating the above steps so that each of the at least two users can load the respective models of the other users.
Preferably, the sending of the respective models of the remaining users other than said any one user to the metaverse device worn by said any one user includes:
slicing the respective models of the remaining users according to the hardware configuration information of the metaverse device worn by said any one user, so that the slices conform to the receiving capability of the hardware configuration;
receiving, by the metaverse device worn by said any one user, the slices and fusing the slices into the respective models of the remaining users.
Preferably, the slicing of the respective models of the remaining users according to the hardware configuration information of the metaverse device worn by said any one user so as to conform to the receiving capability of the hardware configuration includes:
the size, type and/or function of each slice conforming to the receiving capability of the hardware configuration.
According to a second aspect of the present disclosure, there is also provided an apparatus for a user to load models in a metaverse environment, including:
a first loading module, configured to, in response to at least two users entering the range of the same local area communication network in a metaverse environment, load the models of the at least two users through a first server of the local area communication network;
a second loading module, configured to, in response to the at least two users coming into each other's line of sight, load each other's models through the first server.
Preferably, the apparatus further includes:
a statistics module, configured to, in response to a user entering the metaverse environment, collect, through a second server in the cloud, statistics on the previous loading data of the user's model;
a first sending module, configured to send the user's model to the first server according to the previous loading data of the user's model.
Preferably, the previous loading data of the user's model includes: the number of times the user's model has been loaded by other users.
Preferably, the second loading module includes:
a reading module, configured to read, through the first server, hardware configuration information of the metaverse device worn by any one of the at least two users;
a second sending module, configured to send, through the first server according to the hardware configuration information, the respective models of the remaining users other than said any one user to the metaverse device worn by said any one user;
a repetition module, configured to repeat the above steps so that each of the at least two users can load the respective models of the other users.
Preferably, the second sending module includes:
a slicing module, configured to slice the respective models of the remaining users according to the hardware configuration information of the metaverse device worn by said any one user, so that the slices conform to the receiving capability of the hardware configuration;
a fusion module, configured to receive, through the metaverse device worn by said any one user, the slices and fuse them into the respective models of the remaining users.
Preferably, the slicing module is configured such that:
the size, type and/or function of each slice conforms to the receiving capability of the hardware configuration.
According to a third aspect of the present disclosure, there is also provided an electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the method according to any one of the above technical solutions.
According to a fourth aspect of the present disclosure, there is also provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method according to any one of the above technical solutions.
According to a fifth aspect of the present disclosure, there is also provided a computer program product comprising a computer program which, when executed by a processor, implements a method according to any of the above-mentioned technical solutions.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are for a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a schematic diagram of a first embodiment of a method for a user to load models in a metaverse environment according to the present disclosure;
FIG. 2 is a schematic diagram of a second embodiment of a method for a user to load models in a metaverse environment according to the present disclosure;
FIG. 3 is a schematic diagram of the at least two users loading each other's models through the first server according to the present disclosure;
FIG. 4 is a schematic diagram of sending the respective models of the remaining users other than any one user to the metaverse device worn by that user according to the present disclosure;
FIG. 5 is a schematic diagram of a first embodiment of an apparatus for a user to load models in a metaverse environment according to the present disclosure;
FIG. 6 is a schematic diagram of a second embodiment of an apparatus for a user to load models in a metaverse environment according to the present disclosure;
FIG. 7 is a schematic diagram of a second loading module according to the present disclosure;
FIG. 8 is a schematic diagram of a second sending module according to the present disclosure;
FIG. 9 is a block diagram of an electronic device used to implement a method for a user to load models in a metaverse environment according to an embodiment of the present disclosure.
Reference numerals:
5: apparatus for a user to load models in a metaverse environment
501: statistics module; 502: first sending module
503: first loading module; 504: second loading module
5041: reading module; 5042: second sending module
5043: repetition module
50421: slicing module; 50422: fusion module
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
An intelligent network transmits service information mainly by means of artificial intelligence models: the first service information to be transmitted is compressed by an artificial intelligence model into second service information associated with that model, which greatly reduces the data traffic in the network, with a compression efficiency far higher than that of traditional compression algorithms. The sending-end device extracts the first service information using a preconfigured first model to obtain the second service information to be transmitted, and transmits the second service information to the receiving-end device. The receiving-end device receives the second service information and recovers it using a preconfigured second model to obtain third service information. The third service information recovered by the second model has a slight loss of quality compared with the original first service information, but the two are consistent in content, and the user's experience is almost unchanged. Before the sending-end device transmits the second service information to the receiving-end device, the method further includes: an updating module judges whether the receiving-end device needs to update the second model and, when it judges that an update is needed, transmits a preconfigured third model to the receiving-end device, which then updates the second model using the third model.

Processing service information with pre-trained artificial intelligence models can significantly reduce the amount of data transmitted in a communication service and greatly improve transmission efficiency. These models are relatively stable, reusable and transmissible, and their propagation and multiplexing help enhance network intelligence while reducing overhead and resource waste. A model can be divided into a number of model slices according to different division rules; the slices can be transmitted between different network nodes and assembled back into the model, and they may be stored scattered across multiple network nodes. When a network node finds that it is missing a model or a model slice, or needs to update one, it can request it from surrounding nodes that may hold that slice.
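The following is a minimal, illustrative sketch of the slice-request mechanism described above. It is not part of the patent; all names (Slice, Node, request_slice, assemble) are assumptions introduced only to make the idea concrete.

```python
from dataclasses import dataclass, field


@dataclass
class Slice:
    model_id: str
    index: int
    payload: bytes


@dataclass
class Node:
    name: str
    store: dict = field(default_factory=dict)      # (model_id, index) -> Slice
    neighbors: list = field(default_factory=list)  # other Node instances in reach

    def request_slice(self, model_id, index):
        """Return a slice, asking surrounding nodes for it if it is missing locally."""
        key = (model_id, index)
        if key in self.store:
            return self.store[key]
        for peer in self.neighbors:
            found = peer.store.get(key)
            if found is not None:
                self.store[key] = found            # cache locally for later reuse
                return found
        return None                                # no surrounding node holds the slice

    def assemble(self, model_id, total_slices):
        """Reassemble a model once all of its slices have been gathered."""
        parts = [self.request_slice(model_id, i) for i in range(total_slices)]
        if any(p is None for p in parts):
            return None
        return b"".join(p.payload for p in parts)
```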
As shown in fig. 1, according to a first aspect of the present disclosure, there is provided a method for a user to load models in a metaverse environment, including:
s103: responding to the fact that at least two users enter the same local area communication network range in a meta-universe environment, and loading models of the at least two users by a first server of the local area communication network; the term "local area communication network" refers to communication of users in an area, and the parties and surrounding people can communicate in the "area" whether or not the parties are mounted on a communication node, even if different networks are used (for example, one person is a 5G mobile 5G base station, another person is a 4G mobile 4G base station, and the parties belong to different operators, and a wifi and a cellular communication are performed). In a meta-universe environment, when each user interacts with objects in the surrounding environment, the user also needs to interact with other people in the surrounding environment; the method comprises the steps that a foundation for interaction with other users using the metauniverse is firstly that a model of other users needs to be loaded under the metauniverse environment; if there is no model for other users, no more interactions are talking about. In this embodiment, model loading between multiple users within the same local area communication network under a metauniverse environment is mainly reflected. The local area communication network is a network with a certain range which is designated in advance, and can be within the range of one network node or a plurality of network nodes. When a plurality of users are in the range of the same local area communication network, the users can be considered to be possibly required to load mutual models, and the first server is started; a model of a plurality of users is loaded by the first server. The first server is a service type computing device for managing other network nodes, servers and actual users in the local area communication network range. The first server comprises at least one server. And loading the models of the at least two users by the first server, so that the at least two users can acquire the models of each other. In this embodiment, the at least two users enter the same local area communication network through the network, that is, the users may enter the same local area communication network at remote locations, so long as the users can enter the local area communication network, the users entering the local area communication network may be regarded as other users wish to load the model of the users.
S104: in response to the at least two users coming into each other's line of sight, the at least two users load each other's models through the first server. In this embodiment the users are in the same physical area, not merely in the same local area communication network. When another user appears in a user's line of sight, that other user needs to be turned into a user who is visible in the metaverse environment, and the other user's model is obtained through the first server. For example: user A is in the metaverse environment and located on street A in the real environment. When user B is not only in the same local area communication network but also appears near street A and can be captured by the camera in the metaverse device worn by user A, that is, user B enters the line of sight of user A and user A likewise enters the line of sight of user B, the metaverse devices worn by user A and user B load each other's models through the first server so as to display each other's avatars on their respective metaverse devices. The avatars are based on the respective models stored on the first server.
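A minimal sketch of step S104 follows (names are assumptions, not the patent's API): once two users are captured in each other's headset cameras, each headset pulls the other user's model, already cached on the first server, via a delivery callback.

```python
def on_mutual_line_of_sight(cached_models, user_a, user_b, deliver):
    """deliver(target_user_id, model) pushes a model to that user's headset."""
    model_a = cached_models.get(user_a)
    model_b = cached_models.get(user_b)
    if model_a is None or model_b is None:
        return False                 # at least one model is not yet cached on the first server
    deliver(user_b, model_a)         # user B loads user A's avatar model
    deliver(user_a, model_b)         # user A loads user B's avatar model
    return True
```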
As shown in fig. 2, preferably, before the at least two users load each other's models through the first server in response to coming into each other's line of sight, the method further includes:
S101: in response to a user entering the metaverse environment, a second server in the cloud collects statistics on the previous loading data of the user's model. In this embodiment, the user first enters the metaverse environment by wearing a metaverse device. After the user has entered, the second server in the cloud can count how the user's model has previously been loaded by other users, whether elsewhere or locally. The second server here is a network node or server that hosts all of the user's data.
S102: the second server sends the user's model to the first server according to the previous loading data of the user's model. After the second server obtains the user's historical usage data, it can send the top-ranked models among those previously loaded to the first server of the local area communication network where the user is located. Models with certain other properties may also be considered for transmission to the first server. For example, if the user is near a football pitch, a football-related model that the user previously loaded may be sent to the first server of the local area communication network near the football pitch.
Preferably, the previous loading data of the user's model includes: the number of times the user's model has been loaded by other users. In this embodiment, the number of times the user's model has been loaded is used as a filtering condition: models with a high load count are sent to the first server, and the first server in turn sends them to the other users who need to load the user's model.
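A minimal sketch of steps S101/S102 under these assumptions (the class SecondServer and its attributes are illustrative, not the patent's design): the cloud-side server keeps per-model load counts and optional tags, and pushes only the top-ranked models, plus any models matching the local context, down to the first server.

```python
class SecondServer:
    def __init__(self):
        self.load_counts = {}        # model_id -> number of times loaded by other users
        self.tags = {}               # model_id -> set of descriptive tags, e.g. {"football"}
        self.models = {}             # model_id -> model bytes

    def select_for_first_server(self, top_n=3, context_tag=None):
        """Pick the most frequently loaded models, plus any matching the local context."""
        ranked = sorted(self.load_counts, key=self.load_counts.get, reverse=True)
        chosen = set(ranked[:top_n])
        if context_tag is not None:
            chosen |= {m for m, t in self.tags.items() if context_tag in t}
        return {m: self.models[m] for m in chosen if m in self.models}
```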
As shown in fig. 3, preferably, the loading, by the at least two users through the first server, of each other's models includes:
S1041: the first server reads hardware configuration information of the metaverse device worn by any one of the at least two users. The metaverse devices worn by different users have different hardware configurations, and even identical hardware can be tuned to different configurations. Therefore, before the first server sends a user's model to the other users, it first needs to obtain the hardware situation of the metaverse device worn by each user, so that a suitable way of sending the model can be chosen for the subsequent transmission.
S1042: the first server sends, according to the hardware configuration information, the respective models of the remaining users other than said any one user to the metaverse device worn by said any one user. Based on the hardware configuration information of the metaverse device worn by a user, the first server can send the models in a way that matches that configuration. For example: the hardware of metaverse device B includes a 1 GHz processor and 2 GB of memory, while another metaverse device C has a 2 GHz processor and 1 GB of memory; device C processes data quickly but has less memory, whereas device B has more memory but processes data more slowly. For device C, the first server may therefore send the user models to the user wearing device C in larger pieces, whereas for device B it may send the user models to the user wearing device B in smaller pieces.
S1043: the above steps are repeated so that each of the at least two users can load the respective models of the other users. By repeating the above steps, all users who are in the same local area communication network and within each other's line of sight can load each other's models.
As shown in fig. 4, preferably, the sending of the respective models of the remaining users other than said any one user to the metaverse device worn by said any one user includes:
S10421: slicing the respective models of the remaining users according to the hardware configuration information of the metaverse device worn by said any one user, so that the slices conform to the receiving capability of the hardware configuration. As described above, slicing a model facilitates both the transmission of the model and its reception by the differing hardware of the various metaverse devices.
S10422: the metaverse device worn by said any one user receives the slices and fuses them into the respective models of the remaining users. After receiving the model slices of several users, the metaverse device needs to fuse the slices belonging to each user into that user's model, so that the users' avatars can subsequently be loaded.
Preferably, the slicing of the respective models of the remaining users according to the hardware configuration information of the metaverse device worn by said any one user so as to conform to the receiving capability of the hardware configuration includes:
the size, type and/or function of each slice conforming to the receiving capability of the hardware configuration. A model can be cut into different slices according to the requirements of the hardware configuration. For example, if the memory of a metaverse device has a capacity of only 20 MB but a slice has a capacity of 40 MB, the 40 MB slice may cause problems when the metaverse device receives it; the slice capacity should therefore be kept below 20 MB so that the hardware device can receive it.
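Under the 20 MB example above, slicing by size and device-side fusion could be sketched as follows (a simplified illustration; the patent also allows slicing by type or function, which is not modeled here):

```python
def slice_model(model, receive_limit_bytes):
    """Cut a model into slices no larger than what the headset can receive."""
    limit = max(1, receive_limit_bytes)
    return [model[i:i + limit] for i in range(0, len(model), limit)]


def fuse_slices(slices):
    """Device-side reassembly of the received slices into the complete model."""
    return b"".join(slices)


model = bytes(40 * 1024 * 1024)                     # a 40 MB model
slices = slice_model(model, 20 * 1024 * 1024)       # headset can take at most 20 MB at a time
assert all(len(s) <= 20 * 1024 * 1024 for s in slices)
assert fuse_slices(slices) == model
```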
As shown in fig. 5, according to a second aspect of the present disclosure, there is also provided an apparatus 5 for a user to load models in a metaverse environment, including:
a first loading module 503, configured to, in response to at least two users entering the range of the same local area communication network in a metaverse environment, load the models of the at least two users through a first server of the local area communication network;
a second loading module 504, configured to, in response to the at least two users coming into each other's line of sight, load each other's models through the first server.
As shown in fig. 6, preferably, the apparatus further includes:
a statistics module 501, configured to, in response to a user entering the metaverse environment, collect, through a second server in the cloud, statistics on the previous loading data of the user's model;
a first sending module 502, configured to send the user's model to the first server according to the previous loading data of the user's model.
Preferably, the previous loading data of the user's model includes: the number of times the user's model has been loaded by other users.
As shown in fig. 7, preferably, the second loading module 504 includes:
a reading module 5041, configured to read, through the first server, hardware configuration information of the metaverse device worn by any one of the at least two users;
a second sending module 5042, configured to send, through the first server according to the hardware configuration information, the respective models of the remaining users other than said any one user to the metaverse device worn by said any one user;
a repetition module 5043, configured to repeat the above steps so that each of the at least two users can load the respective models of the other users.
As shown in fig. 8, preferably, the second sending module 5042 includes:
a slicing module 50421, configured to slice the respective models of the remaining users according to the hardware configuration information of the metaverse device worn by said any one user, so that the slices conform to the receiving capability of the hardware configuration;
a fusion module 50422, configured to receive, through the metaverse device worn by said any one user, the slices and fuse them into the respective models of the remaining users.
Preferably, the slicing module is configured such that:
the size, type and/or function of each slice conforms to the receiving capability of the hardware configuration.
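Purely as an illustration of how the apparatus 5 and its modules 501-50422 might be composed (the patent does not prescribe any particular implementation; every class and attribute name below is an assumption):

```python
from dataclasses import dataclass, field
from typing import Callable, Optional


@dataclass
class SecondSendingModule:                             # 5042: second sending module
    slicing: Optional[Callable] = None                 # 50421: slices models to fit the headset
    fusion: Optional[Callable] = None                  # 50422: device-side reassembly of slices


@dataclass
class SecondLoadingModule:                             # 504: second loading module
    reading: Optional[Callable] = None                 # 5041: reads headset hardware configuration
    sending: SecondSendingModule = field(default_factory=SecondSendingModule)
    repetition: Optional[Callable] = None              # 5043: repeats the steps for every user in range


@dataclass
class ModelLoadingApparatus:                           # 5: apparatus for a user to load models
    statistics: Optional[Callable] = None              # 501: cloud-side count of previous model loads
    first_sending: Optional[Callable] = None           # 502: pushes selected models to the first server
    first_loading: Optional[Callable] = None           # 503: loads models on entering the local network
    second_loading: SecondLoadingModule = field(default_factory=SecondLoadingModule)
```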
According to a third aspect of the present disclosure, there is also provided an electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the method according to any one of the above technical solutions.
According to a fourth aspect of the present disclosure, there is also provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method according to any one of the above technical solutions.
According to a fifth aspect of the present disclosure, there is also provided a computer program product comprising a computer program which, when executed by a processor, implements a method according to any of the above-mentioned technical solutions.
In the technical solutions of the present disclosure, the collection, storage and use of the personal information of the users involved all comply with the provisions of the relevant laws and regulations and do not violate public order and good customs.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium and a computer program product.
Fig. 9 shows a schematic block diagram of an example electronic device 900 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 9, the apparatus 900 includes a computing unit 901 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 902 or a computer program loaded from a storage unit 908 into a Random Access Memory (RAM) 903. In the RAM 903, various programs and data required for the operation of the device 900 can also be stored. The computing unit 901, the ROM 902, and the RAM 903 are connected to each other by a bus 904. An input/output (I/O) interface 905 is also connected to the bus 904.
Various components in device 900 are connected to I/O interface 905, including: an input unit 906 such as a keyboard, a mouse, or the like; an output unit 907 such as various types of displays, speakers, and the like; a storage unit 908 such as a magnetic disk, an optical disk, or the like; and a communication unit 909 such as a network card, modem, wireless communication transceiver, or the like. The communication unit 909 allows the device 900 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunications networks.
The computing unit 901 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 901 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 901 performs the various methods and processes described above, such as the method for a user to load models in a metaverse environment. For example, in some embodiments, the method for a user to load models in a metaverse environment may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 908. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 900 via the ROM 902 and/or the communication unit 909. When the computer program is loaded into the RAM 903 and executed by the computing unit 901, one or more steps of the method for a user to load models in a metaverse environment described above may be performed. Alternatively, in other embodiments, the computing unit 901 may be configured to perform the method for a user to load models in a metaverse environment in any other suitable manner (e.g., by means of firmware).
Various implementations of the systems and techniques described herein may be implemented in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, provided that the desired results of the disclosed aspects are achieved, and are not limited herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (15)

1. A method for a user to load models in a metaverse environment, comprising:
in response to at least two users entering the range of the same local area communication network in a metaverse environment, loading, by a first server of the local area communication network, the models of the at least two users;
in response to the at least two users coming into each other's line of sight, loading, by the at least two users through the first server, each other's models.
2. The method of claim 1, wherein before the at least two users load each other's models through the first server in response to coming into each other's line of sight, the method further comprises:
in response to a user entering the metaverse environment, collecting, by a second server in the cloud, statistics on the previous loading data of the user's model;
sending, by the second server, the user's model to the first server according to the previous loading data of the user's model.
3. The method of claim 2, wherein the previous loading data of the user's model comprises: the number of times the user's model has been loaded by other users.
4. The method of claim 1, wherein the loading, by the at least two users through the first server, of each other's models comprises:
reading, by the first server, hardware configuration information of the metaverse device worn by any one of the at least two users;
sending, by the first server according to the hardware configuration information, the respective models of the remaining users other than said any one user to the metaverse device worn by said any one user;
repeating the above steps so that each of the at least two users can load the respective models of the other users.
5. The method of claim 4, wherein the sending of the respective models of the remaining users other than said any one user to the metaverse device worn by said any one user comprises:
slicing the respective models of the remaining users according to the hardware configuration information of the metaverse device worn by said any one user, so that the slices conform to the receiving capability of the hardware configuration;
receiving, by the metaverse device worn by said any one user, the slices and fusing the slices into the respective models of the remaining users.
6. The method of claim 5, wherein the slicing of the respective models of the remaining users according to the hardware configuration information of the metaverse device worn by said any one user so as to conform to the receiving capability of the hardware configuration comprises:
the size, type and/or function of each slice conforming to the receiving capability of the hardware configuration.
7. An apparatus for a user to load models in a metaverse environment, comprising:
a first loading module, configured to, in response to at least two users entering the range of the same local area communication network in a metaverse environment, load the models of the at least two users through a first server of the local area communication network;
a second loading module, configured to, in response to the at least two users coming into each other's line of sight, load each other's models through the first server.
8. The apparatus of claim 7, further comprising:
a statistics module, configured to, in response to a user entering the metaverse environment, collect, through a second server in the cloud, statistics on the previous loading data of the user's model;
a first sending module, configured to send the user's model to the first server according to the previous loading data of the user's model.
9. The apparatus of claim 8, wherein the previous loading data of the user's model comprises: the number of times the user's model has been loaded by other users.
10. The apparatus of claim 7, wherein the second loading module comprises:
a reading module, configured to read, through the first server, hardware configuration information of the metaverse device worn by any one of the at least two users;
a second sending module, configured to send, through the first server according to the hardware configuration information, the respective models of the remaining users other than said any one user to the metaverse device worn by said any one user;
a repetition module, configured to repeat the above steps so that each of the at least two users can load the respective models of the other users.
11. The apparatus of claim 10, wherein the second sending module comprises:
a slicing module, configured to slice the respective models of the remaining users according to the hardware configuration information of the metaverse device worn by said any one user, so that the slices conform to the receiving capability of the hardware configuration;
a fusion module, configured to receive, through the metaverse device worn by said any one user, the slices and fuse them into the respective models of the remaining users.
12. The apparatus of claim 11, wherein the slicing module is configured such that:
the size, type and/or function of each slice conforms to the receiving capability of the hardware configuration.
13. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-6.
14. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1-6.
15. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any of claims 1-6.
CN202210343027.5A 2022-04-02 2022-04-02 Method, device and equipment for loading model by user in meta-universe environment Pending CN116938996A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210343027.5A CN116938996A (en) 2022-04-02 2022-04-02 Method, device and equipment for loading model by user in meta-universe environment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210343027.5A CN116938996A (en) 2022-04-02 2022-04-02 Method, device and equipment for loading model by user in meta-universe environment

Publications (1)

Publication Number Publication Date
CN116938996A true CN116938996A (en) 2023-10-24

Family

ID=88391108

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210343027.5A Pending CN116938996A (en) 2022-04-02 2022-04-02 Method, device and equipment for loading model by user in meta-universe environment

Country Status (1)

Country Link
CN (1) CN116938996A (en)

Similar Documents

Publication Publication Date Title
US10915822B2 (en) Complex event processing method, apparatus, and system
CN112769897B (en) Synchronization method and device of edge calculation message, electronic equipment and storage medium
CN104573109A (en) System, terminal and method for automatic recommendation based on group relation
CN112839067A (en) Data synchronization method and device
CN114253710A (en) Processing method of computing request, intelligent terminal, cloud server, equipment and medium
CN103036762A (en) Method and device for information processing in instant messaging
CN116938996A (en) Method, device and equipment for loading model by user in meta-universe environment
CN113824689B (en) Edge computing network, data transmission method, device, equipment and storage medium
CN114268799A (en) Streaming media transmission method and device, electronic equipment and medium
CN114692898A (en) MEC federal learning method, device and computer readable storage medium
CN114374703A (en) Method, device and equipment for acquiring cloud mobile phone information and storage medium
CN116614367A (en) Grouping method and model updating method of terminal equipment
CN116614382A (en) Method and device for obtaining model in meta-universe environment
CN115086300B (en) Video file scheduling method and device
CN115589391B (en) Instant messaging processing method, device and equipment based on block chain and storage medium
CN116527498A (en) Model transmission method, device, electronic equipment and storage medium
CN116996421B (en) Network quality detection method and related equipment
CN116614402A (en) Model transmission method, device, electronic equipment and storage medium
CN117221127A (en) Method and device for using artificial intelligence service at communication network end side
US11853814B2 (en) Automatically generating events
CN113805919B (en) Rendering special effect updating method and device, electronic equipment and storage medium
CN113014656B (en) Private cloud information synchronization method and device
CN114157917B (en) Video editing method and device and terminal equipment
CN117289784A (en) Method and device for obtaining model by user in meta-universe environment and electronic equipment
CN116738644A (en) Model distribution method, device, electronic equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination