CN117492860A - Scene loading method and device for digital space, terminal equipment and server - Google Patents

Scene loading method and device for digital space, terminal equipment and server

Info

Publication number
CN117492860A
CN117492860A (application CN202311414183.7A)
Authority
CN
China
Prior art keywords
scene
target user
digital
scenes
space
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311414183.7A
Other languages
Chinese (zh)
Inventor
屠正洋
余廷钊
叶琦
黄丹妮
王熙
李俊颉
孟成林
习晨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ant Blockchain Technology Shanghai Co Ltd
Original Assignee
Ant Blockchain Technology Shanghai Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ant Blockchain Technology Shanghai Co Ltd filed Critical Ant Blockchain Technology Shanghai Co Ltd
Priority to CN202311414183.7A
Publication of CN117492860A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/445Program loading or initiating
    • G06F9/44521Dynamic linking or loading; Link editing at or after load time, e.g. Java class loading
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/451Execution arrangements for user interfaces
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Hardware Design (AREA)
  • Computer Graphics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A scene loading method, apparatus, terminal device, and server for a digital space, where the digital space includes a plurality of scenes. The method includes: acquiring behavior data of a target user in the digital space and environment information corresponding to a digital object operated by the target user; determining at least one first scene to be used by the target user from the plurality of scenes according to the behavior data and the environment information; and preloading the at least one first scene in a terminal device held by the target user.

Description

Scene loading method and device for digital space, terminal equipment and server
Technical Field
The embodiments of the present specification belong to the technical field of computers, and in particular relate to a scene loading method and apparatus for a digital space, a terminal device, and a server.
Background
A digital space is a virtual environment that a computing device simulates and presents through computer graphics and computing power. A digital space may be implemented based on virtual reality (VR) technology, augmented reality (AR) technology, or other two-dimensional or three-dimensional maps. The data volume of a digital space is often large, so it is usually difficult to load the entire digital space into a terminal device held by a user.
Disclosure of Invention
The present invention aims to provide a scene loading method and apparatus for a digital space, as well as a corresponding terminal device and server.
In a first aspect, there is provided a scene loading method for a digital space, the digital space including a plurality of scenes, the method comprising: acquiring behavior data of a target user in the digital space and environment information corresponding to a digital object operated by the target user; determining at least one first scene to be used by the target user from the plurality of scenes according to the behavior data and the environment information; and preloading the at least one first scene in a terminal device held by the target user.
In a second aspect, there is provided a scene loading device of a digital space, the digital space including a plurality of scenes therein, the device comprising: the information acquisition unit is configured to acquire behavior data of a target user in a digital space and environment information corresponding to the digital object operated by the target user; a scene determination unit configured to determine at least one first scene to be used by the target user from the plurality of scenes based on the behavior data and the environmental information; and the loading processing unit is configured to preload the at least one first scene in the terminal equipment held by the target user.
In a third aspect, there is provided a terminal device deployed with the scene loading apparatus of the digital space provided in the second aspect.
In a fourth aspect, there is provided a server deployed with the scene loading device of the digital space provided in the second aspect.
In a fifth aspect, there is provided a computing device comprising a memory having executable code stored therein and a processor which, when executing the executable code, implements the method provided in the first aspect.
In a sixth aspect, there is provided a computer readable storage medium having stored thereon a computer program which, when executed in a computing device, performs the method provided in the first aspect.
According to the technical solution provided by the embodiments of the present specification, the behavior data of the target user in the digital space and the environment information corresponding to the digital object operated by the target user are considered together, so that at least one first scene to be used by the target user can be determined more accurately from the plurality of scenes included in the digital space. By preloading the at least one first scene in the terminal device held by the target user, the target user does not need to spend excessive time waiting for the corresponding scene to load whenever any of the at least one first scene is needed, which improves the user experience.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some of the embodiments described in the present disclosure; a person skilled in the art may obtain other drawings from them without inventive effort.
Fig. 1 is a schematic diagram of a technical scenario of a technical solution provided in an embodiment of the present disclosure;
FIG. 2 is a flow chart of a method for scene loading in digital space provided in an embodiment of the present disclosure;
FIG. 3 is a flow chart, exemplarily provided in an embodiment of the present disclosure, of a process for predicting the scenes a target user will use;
FIG. 4 is a schematic diagram of a process for loading a scene in a digital space as exemplarily provided in an embodiment of the present specification;
fig. 5 is a schematic structural diagram of a scene loading device in digital space according to an embodiment of the present disclosure.
Detailed Description
In order to make the technical solution in the present specification better understood by those skilled in the art, the technical solution in the embodiments of the present specification will be clearly and completely described in the following with reference to the accompanying drawings, and it is apparent that the described embodiments are only some embodiments of the present specification, not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are intended to be within the scope of the present disclosure.
Digital spaces implemented based on VR technology, AR technology, or other multidimensional maps often have a large data volume, making it difficult to fully load the entire digital space into a terminal device held by a user. In one possible implementation, a digital space with a large data volume may be partitioned into multiple scenes, each with a relatively small data volume. A single scene generally includes multiple elements, and a single element may be a multidimensional map space, text and/or patterns deployed in the multidimensional map space, a computer program associated with the text and/or patterns and used to implement a corresponding transaction, and so on. The aforementioned multidimensional map space may be a conventional two-dimensional or three-dimensional map space, or a three-dimensional map space implemented based on AR or VR technology.
Fig. 1 is a schematic diagram of a technical scenario of a technical solution provided in an embodiment of the present disclosure. Referring to fig. 1, an exemplary digital space may be divided into a plurality of scenes, such as scenes 1 to 7. The elements in scene 1 may include its corresponding multidimensional map space, the elements E01, E02, E03, and E04 implemented based on text and/or patterns, and, in addition, computer programs corresponding to each of E01 to E04 for implementing predetermined transactions. Illustratively, the predetermined transaction implemented by the computer program corresponding to element E04 may include: when the target user triggers E04, displaying a scene selection interface; determining, based on the operations performed by the target user in the scene selection interface, a scene (e.g., scene 2) that the target user needs to use; and then performing a corresponding scene switching operation, such as switching the digital object (e.g., M1) operated by the target user into scene 2. Similarly, the elements in scene 2 may include its corresponding multidimensional map space, E05, E06, and E07, together with computer programs corresponding to each of E05 to E07 for implementing predetermined transactions; the elements in scene 3 may include its multidimensional map space, E08, E09, and E10, as well as computer programs corresponding to each of E08 to E10 for implementing predetermined transactions.
A digital object may be, for example, a digital person or another object, analogous to a game character operated by a user in various games.
When the digital space is divided into a plurality of scenes, the corresponding scenes can be loaded on demand into the terminal device held by the target user, based on the target user's actual needs in the digital space. For example, in the case where the digital space is managed by a server, the target user may request to use a certain scene through the terminal device held by the target user; the server may then send the data of that scene (i.e., each element included in the scene) to the terminal device, and the terminal device loads the scene into its memory for use by the target user. For another example, in the case where the digital space is managed by the terminal device itself, the target user may request to use a certain scene through the terminal device held by the target user, and the terminal device loads that scene into its memory accordingly.
For example, please continue to refer to fig. 1. The target user U1 corresponding to the digital object M1 may, for example, trigger the element E04 in the terminal device held by U1, so that the terminal device displays the scene selection interface. The target user U1 can then perform the corresponding operation in the scene selection interface to select the scene (such as scene 2) that U1 needs to use, thereby completing the operation of requesting to use scene 2. Correspondingly, the terminal device can perform the corresponding scene switching operation, for example, loading scene 2 into its memory and switching the digital object M1 operated by the target user U1 into scene 2.
Although the data volume of a single scene is far smaller than that of the whole digital space, if a scene is loaded into the memory of the terminal device only when the target user requests to use it, the target user still needs to wait a relatively long time for the scene to finish loading. Moreover, the loaded scene may contain a large number of elements that the target user never uses, so the user experience is poor and considerable resources are wasted.
In view of this, the embodiments of the present disclosure provide a scene loading method and apparatus for a digital space, a terminal device, and a server. By comprehensively considering the behavior data of the target user in the digital space and the environment information corresponding to the digital object operated by the target user, at least one first scene to be used by the target user can be determined more accurately from the plurality of scenes contained in the digital space. By preloading the at least one first scene in the terminal device held by the target user, the target user does not need to spend excessive time waiting for the corresponding scene to load whenever any of the at least one first scene is needed, which improves the user experience while avoiding wasted resources.
The technical solution provided in the embodiments of the present specification is described in detail below in conjunction with the technical scenario shown in fig. 1.
Fig. 2 is a flowchart of a scene loading method for a digital space provided in an embodiment of the present disclosure. The digital space D involved in the method is divided into a plurality of scenes. In the case where the digital space D is managed by the terminal device, the method may be implemented independently by the terminal device; in the case where the digital space D is managed by a server, the method may be implemented by the terminal device and the server in cooperation.
The method is described below mainly by way of the example in which the digital space D includes the scenes 1 to 7 shown in fig. 1.
Referring to fig. 2, the method may include, but is not limited to, some or all of the following steps S201 to S205.
First, in step S201, the behavior data of the target user U1 in the digital space D and the environment information corresponding to the digital object M1 operated by U1 are acquired.
The aforementioned behavioral data may include, but is not limited to, at least one of the following a 1-a 3:
a1, the movement trajectory of the digital object M1 within a predetermined time period T before the current time, for example its movement trajectory within that period in the scene 1 illustrated in fig. 1.
a2, the elements operated by the target user U1 in the digital space D within a predetermined time period T before the current time, together with their corresponding modes of operation; the modes of operation may include, but are not limited to, touching, dragging, clicking, and the like.
a3, the frequency with which the target user U1 used each of the plurality of scenes in historical time intervals before the interval to which the current time belongs, for example the usage frequency of each of the scenes 1 to 7 illustrated in fig. 1.
The behavior data of the target user U1 may also include other types of data than the foregoing a1 to a3. For example, it may include task behavior data of the target user U1, which indicates the behavior pattern adopted by U1 when completing predetermined tasks within a predetermined time period T before the current time, such as exploration, collection, or delivery. For another example, it may include environmental behavior data of the target user U1 for the plurality of scenes, which indicates U1's degree of preference for each of the plurality of scenes; the preference degree can generally be classified into several categories, such as like, dislike, or participation.
The aforementioned environmental information may include, but is not limited to, at least one of the following b1 to b 3:
b1, the position information of the digital object M1 in the digital space D, which includes at least a scene identifier of the scene in which the digital object M1 is located, and optionally also the position coordinates of M1 in the multidimensional map space of that scene.
b2, scene usage information, for the time interval to which the current time belongs, of the users corresponding to other digital objects (hereinafter referred to as adjacent digital objects) adjacent in position to the digital object M1. An adjacent digital object may be any other digital object located in the same scene as M1 at the current time; for example, if M1 is located in the scene 1 illustrated in fig. 1 at the current time, its adjacent digital objects may include the digital objects M2 to M4. Alternatively, the distance between each of the other digital objects in the same scene and M1 can be calculated from their respective position coordinates at the current time, and the adjacent digital objects of M1 can then be determined from those distances. The scene usage information of an adjacent digital object indicates the scenes used by the user to which that adjacent digital object corresponds.
b3, scene usage information, for the time interval to which the current time belongs, of the users corresponding to other digital objects (hereinafter referred to as associated digital objects) having an association relationship with the digital object M1. The association relationship may include, for example, a friend relationship and/or belonging to the same organization. The scene usage information of an associated digital object indicates the scenes used by the user to which that associated digital object corresponds.
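As an illustrative sketch of how the adjacent digital objects of item b2 might be determined, the snippet below treats any digital object in the same scene as adjacent, optionally filtered by the Euclidean distance between position coordinates. The class and field names are assumptions for illustration, not part of the embodiment.

```python
import math
from dataclasses import dataclass

@dataclass
class DigitalObject:
    object_id: str
    scene_id: str
    coords: tuple  # position coordinates in the scene's multidimensional map space

def find_adjacent_objects(target, others, max_distance=None):
    """Return the digital objects adjacent to `target`.

    A candidate in a different scene is never adjacent. When
    `max_distance` is None, every object sharing the scene counts as
    adjacent; otherwise the Euclidean distance between the position
    coordinates must not exceed the threshold.
    """
    adjacent = []
    for obj in others:
        if obj.scene_id != target.scene_id:
            continue
        if max_distance is not None and math.dist(target.coords, obj.coords) > max_distance:
            continue
        adjacent.append(obj)
    return adjacent
```

Under this sketch, the scenes used by the users of the returned objects would then be collected as the scene usage information of item b2.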
Next, in step S203, at least one first scene to be used by the target user U1 is determined from the plurality of scenes based on the behavior data and the environmental information.
Referring to fig. 3, it may be realized that at least one first scene to be used by the target user U1 is determined from a plurality of scenes according to behavior data and environment information by some or all of the following steps S2031 to S2035.
In step S2031, the behavior data and the environmental information are processed through a pre-trained machine learning model, and an identification of at least one second scene is obtained, where at least part of the at least one second scene belongs to at least one first scene.
The machine learning model may be a support vector machine model, a naive bayes model, or various possible neural network models.
The machine learning model may be trained with a plurality of training samples. A single training sample consists of two parts: input data and label data. The input data may include sample behavior data and sample environment information corresponding to the aforementioned behavior data and environment information; the label data may indicate whether, given the sample behavior data and sample environment information, the corresponding user actually used each scene in the digital space within the corresponding time period.
The training process and working principle of the machine learning model can refer to the related technology, and are not repeated here.
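Purely for illustration, the toy stand-in below mimics the interface of step S2031: it is fitted on (input data, label data) samples and then predicts a set of second-scene identifiers. It conditions only on the current scene and counts how often each scene was subsequently used; a real implementation would use a support vector machine, naive Bayes model, or neural network as noted above, and the feature names here are assumptions.

```python
from collections import defaultdict

class ToySceneModel:
    """Frequency-counting stand-in for the pre-trained model of step S2031."""

    def __init__(self, threshold=0.6):
        self.threshold = threshold  # minimum usage rate to predict a scene
        self.counts = defaultdict(lambda: defaultdict(int))
        self.totals = defaultdict(int)

    def fit(self, samples):
        # Each sample: (input_data, label_data), where label_data maps a
        # scene identifier to whether the user used that scene afterwards.
        for input_data, label_data in samples:
            current = input_data["current_scene"]
            self.totals[current] += 1
            for scene_id, used in label_data.items():
                if used:
                    self.counts[current][scene_id] += 1

    def predict(self, behavior_data, environment_info):
        # This toy model ignores behavior_data and uses only the scene
        # identifier taken from the environment information (item b1).
        current = environment_info["current_scene"]
        total = self.totals.get(current, 0)
        if total == 0:
            return set()
        return {s for s, n in self.counts[current].items()
                if n / total >= self.threshold}
```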
The foregoing step S2031, taken alone and independently of the subsequent steps S2033 and S2035, is already a possible implementation of step S203; that is, the at least one second scene may be used directly as the at least one first scene. As a more preferable embodiment, the following steps S2033 and S2035 may additionally be performed on the basis of step S2031, so as to determine more accurately the at least one first scene that the target user may use.
In step S2033, behavior data and environment information are processed according to predefined business rules, and an identification of at least one third scene is obtained, where at least part of the at least one third scene belongs to at least one first scene.
The business rules include, for example, at least one of the following rules c1 to c4:
c1, proximity rule. Based on b2 in the environment information, a scene is determined to be a third scene if the user corresponding to one or more adjacent digital objects of the digital object M1 has used that scene during the current time interval.
c2, time rule. Based on a3 in the behavior data, a scene is determined to be a third scene if the target user U1's usage frequency of that scene in historical time intervals before the interval to which the current time belongs reaches a preset threshold.
c3, interest rule. Based on the environmental behavior data in the behavior data, a scene is determined to be a third scene if the target user U1's preference degree for that scene is "like".
c4, social rule. Based on b3 in the environment information, a scene is determined to be a third scene if one or more associated digital objects of the digital object M1 have used that scene within the current time interval.
The foregoing rules c1 to c4 are merely exemplary; practitioners may customize the business rules according to the characteristics of the digital space itself.
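Rules c1 to c4 reduce to straightforward set operations over the behavior data and environment information. In the hedged sketch below, the dictionary keys ("scene_frequency", "preferences", "neighbor_scenes", "friend_scenes") are hypothetical names for the quantities a3, the environmental behavior data, b2, and b3 respectively.

```python
def apply_business_rules(behavior, environment, freq_threshold=3):
    """Return the identifiers of the third scenes under rules c1 to c4."""
    third = set()
    # c1 proximity: scenes used by adjacent digital objects (item b2)
    third |= environment.get("neighbor_scenes", set())
    # c2 time: scenes whose historical usage frequency (item a3) reaches the threshold
    third |= {s for s, n in behavior.get("scene_frequency", {}).items()
              if n >= freq_threshold}
    # c3 interest: scenes whose preference degree is "like"
    third |= {s for s, p in behavior.get("preferences", {}).items()
              if p == "like"}
    # c4 social: scenes used by associated digital objects (item b3)
    third |= environment.get("friend_scenes", set())
    return third
```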
In step S2035, the identifiers of the at least one second scene and the identifiers of the at least one third scene are combined to obtain the identifiers of the at least one first scene.
Through the foregoing steps S2031 to S2035, the scene set predicted by the machine learning model and the scene set predicted by the business rules complement each other, so that the scenes the target user may use can be predicted more accurately and comprehensively.
The solution provided in fig. 3 described above is merely exemplary. For example, the intersection of the identifiers of the at least one second scene and the identifiers of the at least one third scene may instead be taken to obtain the identifiers of the at least one first scene; if the intersection is empty, this indicates that the target user U1 currently may not need to use any scene other than the one in which it is located at the current time.
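The two ways of merging the step-S2031 and step-S2033 outputs can be sketched as plain set operations (a hedged illustration, not the embodiment's literal interface):

```python
def combine_predictions(second_scenes, third_scenes, strategy="union"):
    """Merge the model-predicted and rule-predicted scene identifiers.

    "union" corresponds to the complementary merge of step S2035;
    "intersection" is the stricter variant, where an empty result
    suggests the target user needs no scene beyond the current one.
    """
    if strategy == "union":
        return second_scenes | third_scenes
    if strategy == "intersection":
        return second_scenes & third_scenes
    raise ValueError(f"unknown strategy: {strategy}")
```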
After determining at least one scenario that the target user may use in the various possible manners described above, step S205 may be performed next, where at least one first scenario is preloaded in the terminal device held by the target user U1.
In the case where the foregoing step S201 and step S203 are performed by the server, the server may transmit data of at least one first scene to the terminal device held by the target user U1, so that the terminal device preloads the at least one first scene.
When a plurality of first scenes are determined in step S203, the server or the terminal device may further determine the priority information corresponding to each of the first scenes. When steps S201 and S203 are performed by the server, the server may first determine the priority information of each first scene and send the data of the first scenes to the terminal device in order of priority, from high to low. Correspondingly, the terminal device can preload the first scenes in sequence, from the highest priority to the lowest, according to their respective priority information.
The priority information of a scene may be determined based on its importance metric value and its complexity metric value, both of which may be predefined. Generally, the importance metric value of a scene can be evaluated according to user demand and the product's design goals; the higher the importance metric value, the earlier the corresponding data should, in principle, be loaded. The complexity metric value of a scene reflects the computing resources, loading time, and other demands the scene requires; the higher the complexity metric value, the earlier the corresponding data should, in principle, be loaded. In addition, the priority information of a first scene may take into account whether that scene belongs to both the at least one second scene and the at least one third scene; for example, when a first scene belongs to both sets, its priority information may be adjusted by a preset value so that it has a higher priority and is preferentially loaded by the terminal device.
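One way to turn the importance and complexity metric values, plus the both-sets boost, into a loading order is sketched below; the weights and boost value are arbitrary assumptions for illustration.

```python
def scene_priority(importance, complexity, in_both_sets,
                   w_importance=0.6, w_complexity=0.4, boost=1.0):
    """Hypothetical priority score: higher scores load earlier."""
    score = w_importance * importance + w_complexity * complexity
    if in_both_sets:  # scene predicted by both the model and the rules
        score += boost
    return score

def order_for_preloading(scenes):
    # scenes: [(scene_id, importance, complexity, in_both_sets), ...]
    return [s[0] for s in sorted(
        scenes,
        key=lambda s: scene_priority(s[1], s[2], s[3]),
        reverse=True)]
```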
The first scene that needs to be preloaded typically includes a plurality of elements. The target user U1 may not need to use all elements in the first scene in the process of actually using the first scene. In order to avoid wasting resources, priority information of a plurality of elements in the first scene can be determined; during the preloading of the first scene, at least part of the elements can be loaded according to the priority information of the elements; in the case where the target user U1 requests to use the first scenario through the terminal device, loading of the remaining elements other than the aforementioned at least part of the plurality of elements is continued.
Referring to fig. 4, scene 2 contains a plurality of elements, specifically its corresponding multidimensional map space, the elements E05, E06, and E07 implemented using text and/or patterns, and the computer programs associated with each of E05, E06, and E07 for implementing predetermined transactions. The multidimensional map space and the elements E05, E06, and E07 may be given a higher priority, and the computer programs associated with each of E05, E06, and E07 a lower priority. In the process of preloading scene 2, the higher-priority multidimensional map space and E05, E06, and E07 are loaded into the memory of the terminal device. Correspondingly, after the digital object M1 operated by the target user U1 has been switched to scene 2, i.e., after the target user U1 requests to use scene 2, the remaining computer programs are loaded into the memory of the terminal device.
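The two-phase element loading described above can be sketched as a simple split by element kind; the kind labels below are illustrative assumptions mirroring the scene 2 example.

```python
HIGH_PRIORITY_KINDS = frozenset({"map_space", "text", "pattern"})

def split_elements_by_priority(elements):
    """Split a scene's elements into a preload list and a deferred list.

    elements: [(element_id, kind), ...]. The multidimensional map space
    and the text/pattern elements are preloaded; the associated computer
    programs are loaded only once the scene is actually entered.
    """
    preload, deferred = [], []
    for element_id, kind in elements:
        (preload if kind in HIGH_PRIORITY_KINDS else deferred).append(element_id)
    return preload, deferred
```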
The terminal device may also preload the first scene in various ways, such as modes 1 to 4 below.
Mode 1, on-demand preloading. The terminal device loads only a predetermined subset of the elements of the first scene into its memory during preloading. When the target user U1 requests to use the first scene (more specifically, after the digital object M1 operated by U1 has been switched into the first scene), the terminal device or the server processes U1's behavior data in the first scene and the environment information of M1 in the first scene through the pre-trained machine learning model and/or the predefined business rules to predict the elements U1 will use; any predicted element not yet loaded into the memory of the terminal device is then loaded.
Mode 2, hierarchical preloading. During preloading of the first scene, the terminal device loads the elements of the first scene in batches; for example, image elements are loaded sequentially in order of resolution, from low to high.
Mode 3, data compression and optimization. Efficient data compression and optimization techniques are used to reduce the amount of scene data that needs to be preloaded, for example by reducing the texture resolution of image elements or using a more efficient data format.
Mode 4, background preloading. Scenes that will be used in turn are preloaded in their order of use. For example, referring to fig. 1, if the target user U1, while talking with a non-player character (NPC) or completing a certain task in scene 2, will logically need scene 3 for a subsequent task afterwards, then scene 3 can be preloaded while U1 is still using scene 2.
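Mode 2 (hierarchical preloading) can be illustrated by grouping image elements into batches of ascending resolution, so that a usable low-resolution scene appears quickly and sharpens progressively; the tuple layout below is an assumption for illustration.

```python
from itertools import groupby

def hierarchical_batches(image_elements):
    """Group image elements into load batches by ascending resolution.

    image_elements: [(element_id, resolution), ...]. Each batch would be
    fully loaded before the next one begins.
    """
    ordered = sorted(image_elements, key=lambda e: e[1])
    return [[element_id for element_id, _ in group]
            for _, group in groupby(ordered, key=lambda e: e[1])]
```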
Based on the same conception as the foregoing method embodiment, a scene loading device 500 for a digital space is also provided in the present embodiment, where the digital space includes a plurality of scenes. Referring to fig. 5, the apparatus 500 includes: an information acquisition unit 501 configured to acquire behavior data of a target user in a digital space and environment information corresponding to a digital object operated by the target user; a scene determination unit 503 configured to determine at least one first scene to be used by the target user from the plurality of scenes based on the behavior data and the environmental information; a loading processing unit 505, configured to preload the at least one first scenario in a terminal device held by the target user.
Based on the same conception as the foregoing method embodiments, this embodiment provides a terminal device in which the scene loading apparatus 500 for a digital space provided in the foregoing embodiment is deployed.
Based on the same conception as the foregoing method embodiments, this embodiment provides a server in which the scene loading apparatus 500 for a digital space provided in the foregoing embodiment is deployed.
There is also provided in embodiments of the present specification a computer-readable storage medium having stored thereon a computer program/instructions which, when executed in a computer, cause the computer to perform the method steps performed by a terminal device or server in the respective embodiments described above.
Embodiments of the present disclosure also provide a computing device, including a memory and a processor, where the memory stores a computer program/instruction, and the processor executes the computer program/instruction to implement the method steps performed by the terminal device or the server in the foregoing embodiments.
In the 1990s, an improvement to a technology could be clearly distinguished as an improvement in hardware (e.g., an improvement to a circuit structure such as a diode, transistor, or switch) or an improvement in software (an improvement to a method flow). With the development of technology, however, many of today's improvements to method flows can be regarded as direct improvements to hardware circuit structures: designers almost always obtain the corresponding hardware circuit structure by programming an improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement of a method flow cannot be realized by a hardware entity module. For example, a programmable logic device (Programmable Logic Device, PLD), such as a field programmable gate array (Field Programmable Gate Array, FPGA), is an integrated circuit whose logic function is determined by the user's programming of the device. A designer "integrates" a digital system onto a PLD by programming, without asking a chip manufacturer to design and fabricate an application-specific integrated circuit chip. Moreover, instead of manually fabricating integrated circuit chips, such programming is nowadays mostly implemented with "logic compiler" software, which is similar to the software compiler used in program development; the source code to be compiled must likewise be written in a particular programming language, called a hardware description language (Hardware Description Language, HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language); VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are currently the most commonly used.
It will also be apparent to those skilled in the art that a hardware circuit implementing a logic method flow can readily be obtained simply by programming the method flow into an integrated circuit with one of the hardware description languages described above.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer readable medium storing computer readable program code (e.g., software or firmware) executable by the (micro)processor, or the form of logic gates, switches, an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a programmable logic controller, or an embedded microcontroller. Examples of controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320; a memory controller may also be implemented as part of the control logic of a memory. Those skilled in the art will also appreciate that, in addition to implementing the controller purely in computer readable program code, the same functionality can be achieved by logically programming the method steps so that the controller takes the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller may thus be regarded as a hardware component, and the means included in it for performing various functions may also be regarded as structures within the hardware component. Indeed, means for performing various functions may be regarded both as software modules implementing the method and as structures within the hardware component.
The system, apparatus, module, or unit set forth in the above embodiments may be implemented by a computer chip or entity, or by a product having a certain function. A typical implementation device is a server system. Of course, this application does not exclude that, as computer technology evolves, the computer implementing the functions of the above embodiments may be, for example, a personal computer, a laptop computer, an in-vehicle human-computer interaction device, a cellular telephone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
Although one or more embodiments of the present specification provide the method operation steps described in the embodiments or flowcharts, more or fewer steps may be included based on conventional or non-inventive means. The order of steps recited in the embodiments is merely one of many possible execution orders and does not represent the only one. When implemented in an actual device or end product, the steps may be executed sequentially or in parallel according to the methods shown in the embodiments or figures (for example, in a parallel-processor or multi-threaded environment, or even in a distributed data processing environment). The terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, the presence of additional identical or equivalent elements in a process, method, article, or apparatus that comprises a described element is not excluded. Likewise, words such as "first" and "second" are used only to distinguish names and do not denote any particular order.
For convenience of description, the above devices are described as divided into various modules by function. Of course, when one or more embodiments of the present specification are implemented, the functions of the modules may be implemented in one or more pieces of software and/or hardware, a module implementing one function may be implemented by a combination of several sub-modules or sub-units, and so on. The apparatus embodiments described above are merely illustrative; for example, the division into units is only a logical functional division, and other divisions are possible in an actual implementation: multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, random access memory (RAM), and/or nonvolatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer readable media include permanent and non-permanent, removable and non-removable media, and may store information by any method or technology. The information may be computer readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage, graphene storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer readable media do not include transitory computer readable media (transitory media), such as modulated data signals and carrier waves.
One skilled in the relevant art will recognize that one or more embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, one or more embodiments of the present description may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Moreover, one or more embodiments of the present description can take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
One or more embodiments of the present specification may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. One or more embodiments of the present description may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
In this specification, the embodiments are described in a progressive manner; identical or similar parts of the embodiments may be referred to one another, and each embodiment focuses on its differences from the others. In particular, the system embodiments are described relatively simply because they are substantially similar to the method embodiments; for relevant details, see the corresponding description of the method embodiments. In the description of this specification, reference to the terms "one embodiment", "some embodiments", "example", "specific example", "some examples", and the like means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present specification. Schematic uses of these terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples, and, absent contradiction, those skilled in the art may combine the various embodiments or examples described in this specification and their features.
The foregoing is merely an example of one or more embodiments of the present specification and is not intended to limit the one or more embodiments of the present specification. Various modifications and alterations to one or more embodiments of this description will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, or the like, which is within the spirit and principles of the present specification, should be included in the scope of the claims.

Claims (17)

1. A scene loading method of a digital space, the digital space comprising a plurality of scenes, the method comprising:
acquiring behavior data of a target user in a digital space and environment information corresponding to a digital object operated by the target user;
determining at least one first scene to be used by the target user from the plurality of scenes according to the behavior data and the environment information;
and preloading the at least one first scene in terminal equipment held by the target user.
2. The method of claim 1, the method being performed by a server; the preloading of the at least one first scene in the terminal device held by the target user comprising: sending data of the at least one first scene to the terminal device held by the target user, so that the terminal device preloads the at least one first scene.
3. The method of claim 1, the determining at least one first scene to be used by the target user from the plurality of scenes according to the behavior data and the environment information comprising: processing the behavior data and the environment information through a pre-trained machine learning model to obtain an identifier of at least one second scene, wherein at least part of the at least one second scene belongs to the at least one first scene.
4. The method of claim 3, wherein the determining at least one first scene to be used by the target user from the plurality of scenes according to the behavior data and the environment information further comprises: processing the behavior data and the environment information according to a predefined business rule to obtain an identifier of at least one third scene, wherein at least part of the at least one third scene belongs to the at least one first scene.
5. The method of claim 4, wherein the determining at least one first scene to be used by the target user from the plurality of scenes according to the behavior data and the environment information further comprises: obtaining the identifier of the at least one first scene by taking the union of the identifier of the at least one second scene and the identifier of the at least one third scene.
6. The method according to claim 1, wherein preloading the at least one first scenario in a terminal device held by the target user comprises:
determining priority information corresponding to each of the at least one first scene;
and preloading the at least one first scene in sequence, in the terminal device held by the target user, according to the priority information corresponding to each of the at least one first scene.
7. The method of claim 6, the priority information of the first scene being determined based on an importance metric value and a complexity metric value of the first scene.
8. The method of claim 1, the behavior data comprising at least one of: a movement track of the digital object within a preset duration before the current moment, elements in the digital space operated by the target user within the preset duration before the current moment together with the corresponding operation modes of those elements, and the frequency with which the target user used the plurality of scenes in a historical time interval before the time interval to which the current moment belongs.
9. The method of claim 1, the environment information comprising at least one of: position information of the digital object in the digital space, scene usage information, within the time interval to which the current moment belongs, of users corresponding to other digital objects positionally adjacent to the digital object, and scene usage information, within the time interval to which the current moment belongs, of users corresponding to other digital objects having an association relationship with the digital object.
10. The method of claim 1, the at least one first scene comprising a fourth scene, the fourth scene comprising a plurality of elements; wherein preloading the at least one first scene comprises: loading at least part of the plurality of elements according to priority information of the elements.
11. The method of claim 10, further comprising: loading, in the terminal device held by the target user, the remaining elements other than the at least part of the elements when the target user requests to use the fourth scene through the terminal device.
12. The method of claim 10, the elements comprising a multidimensional map space, text and/or patterns disposed in the multidimensional map space, or a computer program associated with the text and/or patterns and used to implement a corresponding transaction.
13. A scene loading device of a digital space, the digital space including a plurality of scenes therein, the device comprising:
an information acquisition unit configured to acquire behavior data of a target user in a digital space and environment information corresponding to a digital object operated by the target user;
a scene determination unit configured to determine at least one first scene to be used by the target user from the plurality of scenes based on the behavior data and the environmental information;
and the loading processing unit is configured to preload the at least one first scene in the terminal equipment held by the target user.
14. A terminal device in which the scene loading apparatus of a digital space as claimed in claim 13 is deployed.
15. A server in which the scene loading apparatus of a digital space as claimed in claim 13 is deployed.
16. A computing device comprising a memory having executable code stored therein and a processor, which when executing the executable code, implements the method of any of claims 1-11.
17. A computer readable storage medium having stored thereon a computer program which, when executed in a computing device, implements the method of any of claims 1-11.
CN202311414183.7A 2023-10-27 2023-10-27 Scene loading method and device for digital space, terminal equipment and server Pending CN117492860A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311414183.7A CN117492860A (en) 2023-10-27 2023-10-27 Scene loading method and device for digital space, terminal equipment and server


Publications (1)

Publication Number Publication Date
CN117492860A true CN117492860A (en) 2024-02-02

Family

ID=89682004

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311414183.7A Pending CN117492860A (en) 2023-10-27 2023-10-27 Scene loading method and device for digital space, terminal equipment and server

Country Status (1)

Country Link
CN (1) CN117492860A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination