CN113856197A - Object interaction method and device in virtual scene


Info

Publication number
CN113856197A
CN113856197A (application number CN202111282847.XA)
Authority
CN
China
Prior art keywords
interaction, interactive, model, target, virtual scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111282847.XA
Other languages
Chinese (zh)
Inventor
宁锌 (Ning Xin)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Mihoyo Tianming Technology Co Ltd
Original Assignee
Shanghai Mihoyo Tianming Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Mihoyo Tianming Technology Co Ltd filed Critical Shanghai Mihoyo Tianming Technology Co Ltd
Priority to CN202111282847.XA priority Critical patent/CN113856197A/en
Publication of CN113856197A publication Critical patent/CN113856197A/en
Pending legal-status Critical Current

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/50 Controlling the output signals based on the game progress
    • A63F 13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F 13/55 Controlling game characters or game objects based on the game progress
    • A63F 13/56 Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for path finding
    • A63F 13/70 Game security or game management aspects
    • A63F 13/77 Game security or game management aspects involving data related to game devices or game servers, e.g. configuration data, software version or amount of memory
    • A63F 2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F 2300/30 Features characterized by output arrangements for receiving control signals generated by the game device
    • A63F 2300/308 Details of the user interface
    • A63F 2300/50 Features characterized by details of game servers
    • A63F 2300/55 Details of game data or player data management
    • A63F 2300/552 Details of game data or player data management for downloading to client devices, e.g. using OS version, hardware or software profile of the client device
    • A63F 2300/60 Methods for processing data by generating or executing the game program
    • A63F 2300/65 Methods for processing data by generating or executing the game program for computing the condition of a game character

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Business, Economics & Management (AREA)
  • Computer Security & Cryptography (AREA)
  • General Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention relates to the field of electronic information, and discloses an object interaction method and device in a virtual scene, intended to solve the problem of slow object loading. The method comprises the following steps: acquiring scene information corresponding to a virtual scene, and determining, among the target objects, the interactive objects that meet an interaction condition, where a target object is an object contained in the virtual scene for which an object visual model has been loaded; loading an object interaction model for each interactive object, the object interaction model being used to respond to interactive operations triggered against that object; and, when an interactive object whose object interaction model has been loaded is determined to meet an unloading condition, unloading the object interaction model and caching the unloaded model in a cache space. By separating the object visual model from the object interaction model, the method loads object interaction models on demand, which greatly increases object loading speed and keeps the interface display smooth.

Description

Object interaction method and device in virtual scene
Technical Field
Embodiments of the invention relate to the field of electronic information, and in particular to an object interaction method and device in a virtual scene.
Background
With the development of virtual reality technology, the types and number of objects that can be shown in a virtual scene have increased. Moreover, as interaction technology matures, many objects contained in a virtual scene can provide interactive functions. For example, some virtual objects can respond to an interactive operation triggered by an action object in the virtual scene by displaying a state corresponding to that operation. For instance, if an animal or plant object in a virtual scene is attacked by another user, the object should present a state corresponding to the attack, such as falling over or shedding branches and leaves.
In the related art, the object model of a virtual object contains both visual-class information and interaction-class information. The visual information implements the visual display function, and the interaction information implements the interactive response function. Accordingly, when the virtual object is loaded, both kinds of information contained in the object model must be loaded at the same time.
In the course of implementing the invention, the inventor found that this existing loading approach has at least the following defect: when a virtual scene contains a large number of interactive virtual objects, their object models must be loaded one by one, and because each object model carries a large amount of information, loading is time-consuming and the interface stutters.
Disclosure of Invention
In view of the above, the present invention is proposed to provide an object interaction method and apparatus in a virtual scene that overcome, or at least partially solve, the above problems.
According to an aspect of the present invention, there is provided an object interaction method in a virtual scene, including:
acquiring scene information corresponding to the virtual scene, and determining, according to the scene information, the interactive objects among the target objects that meet an interaction condition; wherein a target object is an object contained in the virtual scene for which an object visual model has been loaded;
loading an object interaction model of the interactive object; wherein the object interaction model is used to respond to interactive operations triggered against the interactive object;
and, when it is determined that an interactive object whose object interaction model has been loaded meets an unloading condition, unloading the object interaction model of that interactive object and caching the unloaded object interaction model in a cache space.
According to still another aspect of the present invention, there is provided an object interaction apparatus in a virtual scene, including:
an acquisition module, adapted to acquire scene information corresponding to the virtual scene and to determine, according to the scene information, the interactive objects among the target objects that meet an interaction condition; wherein a target object is an object contained in the virtual scene for which an object visual model has been loaded;
a loading module, adapted to load an object interaction model of the interactive object; wherein the object interaction model is used to respond to interactive operations triggered against the interactive object;
and an unloading module, adapted to, when an interactive object whose object interaction model has been loaded meets an unloading condition, unload the object interaction model of that interactive object and cache it in a cache space.
According to still another aspect of the present invention, there is provided an electronic apparatus including a processor, a memory, a communication interface, and a communication bus, wherein the processor, the memory, and the communication interface communicate with one another via the communication bus;
the memory is used to store at least one executable instruction, and the executable instruction causes the processor to perform the operations corresponding to the object interaction method in a virtual scene described above.
According to another aspect of the embodiments of the present invention, there is provided a computer storage medium in which at least one executable instruction is stored, the executable instruction causing a processor to perform the operations corresponding to the object interaction method in a virtual scene described above.
In the object interaction method and device in a virtual scene, the interactive objects that meet the interaction condition among the target objects are determined according to the scene information corresponding to the virtual scene, where a target object is an object contained in the virtual scene for which an object visual model has been loaded; an object interaction model is loaded for each interactive object; and when an interactive object whose object interaction model has been loaded meets the unloading condition, its object interaction model is unloaded and cached in a cache space. In the invention, the object visual model is separated from the object interaction model: in the initial state each target object loads only its object visual model, which contains no interaction information and occupies few resources, so loading is faster and interface stuttering is avoided. Interactive objects that meet the interaction condition are determined dynamically from the scene information, and only then are their object interaction models loaded so that interactive operations can be answered through them. By separating the two models, the method loads object interaction models on demand, which greatly increases object loading speed and keeps the interface display smooth. Moreover, an unloaded object interaction model is cached in the cache space, which speeds up loading when the model is needed again.
The foregoing description is only an overview of the technical solutions of the present invention, and the embodiments of the present invention are described below in order to make the technical means of the present invention more clearly understood and to make the above and other objects, features, and advantages of the present invention more clearly understandable.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
FIG. 1 is a flow chart illustrating a method for object interaction in a virtual scene according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a method for object interaction in a virtual scene according to another embodiment of the present invention;
FIG. 3 is a block diagram of an object interaction device in a virtual scene according to another embodiment of the present invention;
FIG. 4 is a schematic structural diagram of an electronic device according to yet another embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
Fig. 1 shows a flowchart of an object interaction method in a virtual scene according to an embodiment of the present invention. As shown in fig. 1, the method includes:
step S110: acquiring scene information corresponding to the virtual scene, and determining an interactive object which meets interactive conditions in the target object according to the scene information; the target object is an object loaded with an object visual model contained in the virtual scene.
The scene information corresponding to the virtual scene includes content related to the loading progress of the scene, the switching state of the scene, and the states of the objects contained in the scene. In a specific implementation, any information associated with the virtual scene can serve as scene information; the invention does not limit its specific meaning. A virtual scene generally contains a plurality of virtual objects, such as character objects, plant objects, and article objects. Each object in the virtual scene has object state information, which reflects attributes such as the object's position state and interaction state. Accordingly, the scene information includes the object state information of the objects that appear directly and/or indirectly in the virtual scene. An object that appears directly in the virtual scene is one that is presented in the scene, such as a character object or plant object shown on screen. An object that appears indirectly in the virtual scene is one that is not itself presented in the scene but whose motion state affects the display state of other objects in the scene. For example, in a game-class virtual scene, a virtual object is usually set for the game user; that object may not be presented directly in the game interface, but as it moves, the display states of other objects in the interface are adjusted accordingly. The virtual object corresponding to the game user is also called a controlled object and executes operations according to the user's control.
Therefore, in this step, the scene information corresponding to the virtual scene is dynamically detected. The scene information mainly includes the state information of the objects corresponding to the virtual scene; those skilled in the art can flexibly set the types and number of such objects, which the present invention does not limit.
In addition, besides state information related to objects, the scene information may be information related to the loading progress, viewing-angle change, or switching state of the scene. For example, when the viewing angle of the virtual scene changes gradually, the display state of the virtual scene is adjusted with that change (for example, the scene image changes dynamically from far to near); accordingly, the viewing-angle change information of the virtual scene may be used as scene information.
When the interactive objects meeting the interaction condition are determined from the target objects according to the scene information, the interaction condition can be set in various ways. For example, it may be set according to the object state information of an action object corresponding to the virtual scene, and/or the object state information of a target object contained in the virtual scene. An action object corresponding to the virtual scene is an object that appears directly or indirectly in the virtual scene and that can actively trigger interactions and/or dynamically change position; a person or animal in the virtual scene, for instance, may be an action object. Accordingly, the object state information of an action object includes the information related to its interactions and/or position changes, for example: relative position information of the action object with respect to the target object, and/or operation state information of an interactive operation triggered by the action object. The operation state information describes the contents of the interactive operation, such as its operation type, operation position, and operation result. The object state information of a target object is defined analogously. In a specific implementation, the object state information of the action object can serve as the interaction condition, so that a target object is determined to meet the interaction condition when the action object approaches it or initiates an interactive operation against it; the object state information of the target object can also serve as the interaction condition, so that a target object is determined to meet the interaction condition when it is close to a preset position. In short, the invention does not limit the specific meaning of the interaction condition, and those skilled in the art can set it flexibly according to the dynamically monitored object state information of each object. For example, in an optional implementation, the interaction condition mainly includes a distance-class condition and an operation-class condition: the distance-class condition judges whether the relative distance between the action object and the target object is smaller than a second preset distance threshold, and the operation-class condition judges whether the action object has an interaction intention toward the target object. The two conditions can be used separately or in combination; when combined, the interaction condition is a combined condition containing both the distance-class and the operation-class condition.
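As an illustration only, the combined distance-class and operation-class check might look like the following Python sketch. All names and fields (SceneObject, ActionObject, pending_operation, the operation-type sets) are assumptions made for this sketch; the patent does not prescribe any concrete data structures or operation names.

```python
import math
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class SceneObject:
    object_id: str
    position: Tuple[float, float, float]      # world coordinates
    has_interaction_model: bool = False

@dataclass
class ActionObject:
    position: Tuple[float, float, float]
    pending_operation: Optional[str] = None   # e.g. "attack" or a preparatory action

# Hypothetical operation types signalling an interaction intention.
DIRECT_INTENT_OPS = {"attack"}        # first operation type: direct intention
INDIRECT_INTENT_OPS = {"raise_arm"}   # second operation type: indirect intention

def meets_interaction_condition(actor: ActionObject, target: SceneObject,
                                distance_threshold: float) -> bool:
    """Combined condition: distance-class AND operation-class must both hold."""
    if math.dist(actor.position, target.position) >= distance_threshold:
        return False                          # distance-class condition fails
    op = actor.pending_operation
    return op in DIRECT_INTENT_OPS or op in INDIRECT_INTENT_OPS
```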
In addition, in addition to setting the interaction condition according to the state information of the object corresponding to the virtual scene, the interaction condition may be set according to other types of scene information. For example, when the view angle of the virtual scene changes gradually, the change of the view angle of the scene is dynamically detected, and the interaction condition is set according to the image content displayed by the virtual scene after the change of the view angle of the scene. For example, after the scene view angle changes, if it is detected that a preset image area in the virtual scene moves to a specified position, it is determined that the target object located in the preset image area meets the interaction condition. The preset image area and the designated position can be flexibly set according to the actual situation. In short, the present invention does not limit the specific meaning of the interaction condition, and those skilled in the art can flexibly set the interaction condition according to the content related to the scene information of the virtual scene.
In addition, the object visual model of the target object is used to present visual information of the target object, such as color, shape, and material. Therefore, it is necessary to load the object visual model of the target object to ensure proper rendering of the target object.
Step S120: loading the object interaction model of the interactive object; the object interaction model is used to respond to interactive operations triggered against the interactive object.
In addition to its object visual model, the target object has an object interaction model, which provides the interaction capability. Therefore, for an interactive object meeting the interaction condition, the object interaction model must additionally be loaded to ensure that the target object can respond normally to interactive operations.
Step S130: when it is determined that an interactive object whose object interaction model has been loaded meets an unloading condition, unloading the object interaction model of that interactive object and caching it in a cache space.
The unloading condition corresponds to the interaction condition and may specifically be a distance-class or operation-class unloading condition. In short, the invention does not limit the specific meaning of the unloading condition; any information indicating that the target object no longer needs to respond to interactions can serve as the unloading condition.
In order to increase the speed of subsequent loading, in this embodiment the object interaction model of an interactive object that meets the unloading condition is cached in the cache space rather than deleted outright. Thus, when the target object meets the interaction condition again, the model can be loaded quickly from the cache space, improving response speed.
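The unload-and-cache behaviour of step S130 might be sketched as below. The dictionary-based cache keyed by object type, its FIFO eviction, and the attribute names on the object are all assumptions for illustration; the patent fixes none of these details.

```python
class InteractionModelCache:
    """A cache space for unloaded object interaction models (step S130 sketch)."""

    def __init__(self, capacity: int = 64):
        self.capacity = capacity
        self._models: dict = {}        # object_type -> cached interaction model

    def put(self, object_type: str, model) -> None:
        if object_type not in self._models and len(self._models) >= self.capacity:
            # Evict the oldest entry (insertion order); FIFO is an assumption here.
            self._models.pop(next(iter(self._models)))
        self._models[object_type] = model

    def get(self, object_type: str):
        return self._models.get(object_type)      # None if not cached

def unload_interaction_model(obj, cache: InteractionModelCache) -> None:
    """Move the interaction model of an object meeting the unloading condition
    into the cache space instead of deleting it outright."""
    if getattr(obj, "has_interaction_model", False):
        cache.put(obj.object_type, obj.interaction_model)
        obj.interaction_model = None
        obj.has_interaction_model = False
```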
Thus, in the object interaction method in a virtual scene provided by the invention, the interactive objects that meet the interaction condition among the target objects are determined according to the scene information corresponding to the virtual scene; a target object is an object contained in the virtual scene for which an object visual model has been loaded; an object interaction model is loaded for each interactive object; and when an interactive object whose object interaction model has been loaded meets the unloading condition, its object interaction model is unloaded and cached in a cache space. Because the object visual model is separated from the object interaction model, in the initial state each target object loads only its object visual model, which contains no interaction information and occupies few resources; loading is therefore faster and interface stuttering is avoided. Interactive objects are determined dynamically from the scene information, and only then are their object interaction models loaded to respond to interactive operations. Loading object interaction models on demand in this way greatly increases object loading speed and keeps the interface display smooth, and caching unloaded models in the cache space speeds up reloading.
Fig. 2 shows a flowchart of an object interaction method in a virtual scene according to another embodiment of the present invention. As shown in fig. 2, the method includes:
step S200: a target object contained in the virtual scene is determined.
Specifically, in the course of implementing the present invention, the inventor found that, under the conventional object loading approach, every object in the virtual scene that can present an interaction state must be given an object model that supports both its visual display function and its interactive operation function. Accordingly, loading such an object means loading this combined model; because the model's functions are complex, loading is time-consuming and easily causes interface stuttering.
To solve this problem, in the present embodiment the target objects contained in the virtual scene are determined in advance so that mutually separated visual and interaction models can be set for them. A target object in this embodiment is an object that can present an interaction state. Preferably, target objects mainly refer to objects capable of responding to a trigger operation of an action object by presenting a collision response state corresponding to that operation. Usually a target object cannot actively trigger an interactive operation and can only passively respond to one triggered by an action object. An action object is a virtual object, such as a movable animal-class object, that can actively trigger an interactive operation; action objects include objects that appear directly or indirectly in the virtual scene. In a specific implementation, the object description information of each object contained in the virtual scene is acquired, and the target objects are determined from that description information. Correspondingly, in subsequent steps, mutually separated object visual models and object interaction models are set for the target objects, so that loading the two models separately increases the object loading speed. Selecting specific objects as target objects via object description information therefore enables on-demand loading of object interaction models and optimizes the loading efficiency of target objects. For example, an interactable object that cannot actively trigger interactive operations and/or interacts infrequently can be set as a target object: such an object only responds passively and need not respond while no interactive operation is received, so treating it as a target object improves loading efficiency without affecting the interaction effect.
Specifically, the object description information includes at least one of: object category information, number of similar objects, object interaction mode, and historical interaction records. Accordingly, the determination of the target object may be achieved by at least one of the following:
in a first implementation manner, the object description information is object type information, and a target object and a non-target object are determined according to the object type information. For example, a virtual object whose object type information is an animal is classified as a non-target object, and a virtual object whose object type information is a plant is classified as a target object. The non-target object is loaded in a one-time loading mode, and the target object is loaded in a mode that the object visual model and the object interaction model are separated from each other. Because the plant virtual object can not actively trigger the interactive operation, the object interactive model can not be loaded at the initial loading stage, and the loading efficiency is improved. Additionally, non-target objects may include: the method comprises the steps of loading animal objects of object models simultaneously supporting a visual display function and an interactive operation function at one time, and loading only static objects (such as mountains, cliffs and the like) of the object models supporting the visual display function.
In a second implementation, the object description information is the number of similar objects, and target and non-target objects are divided according to it. For example, virtual objects whose count of same-kind objects exceeds a preset value are classified as target objects. In practice, a game-class virtual scene may contain a large number of plant objects (such as vegetation), whose sheer number makes loading time-consuming; determining them as target objects greatly reduces that loading time.
In a third implementation, the object description information is the object interaction mode, and target and non-target objects are determined according to it. For example, an object with only a single interaction mode is determined to be a target object. In practice, some objects have multiple interaction modes while others have only one; an object with a single interaction mode has a correspondingly low interaction probability, so classifying it as a target object helps improve loading efficiency.
In a fourth implementation, the object description information is a historical interaction record, and target and non-target objects are determined according to the object's history. For example, historical records of each user's interactions with various types of objects are obtained in advance, the objects are classified by interaction frequency, and objects with a low interaction frequency are classified as target objects. Since such objects are unlikely to be interacted with, their object interaction models need not be loaded until an interaction occurs, which likewise improves loading efficiency.
The above division modes can be used alone or in combination, as sketched below; the present invention does not limit these details.
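A sketch combining the four division modes follows. The field names, thresholds, and the particular category strings are illustrative assumptions, since the patent leaves all of them open; any single mode may equally be used on its own.

```python
from dataclasses import dataclass

@dataclass
class ObjectDescription:
    object_id: str
    category: str                 # mode 1: e.g. "plant", "animal", "static"
    peer_count: int               # mode 2: number of same-kind objects in the scene
    interaction_mode_count: int   # mode 3: number of distinct interaction modes
    historical_frequency: float   # mode 4: interactions per session, from history

def is_target_object(desc: ObjectDescription,
                     peer_threshold: int = 100,
                     frequency_threshold: float = 0.1) -> bool:
    """The four modes combined with OR; each may also be used alone."""
    return (desc.category == "plant"
            or desc.peer_count > peer_threshold
            or desc.interaction_mode_count == 1
            or desc.historical_frequency < frequency_threshold)
```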
In addition, step S200 is an optional step, and this step may be omitted in scenes with a small number of objects or a single object type, and all objects in the virtual scene may be directly determined as target objects.
Step S210: and setting an object visual model and an object interaction model which are separated from each other aiming at the target object.
On the one hand, the visual display characteristics of the target object are obtained, and the object visual model of the target object is generated from them. The object visual model implements the visual display function of the target object and specifically includes attribute information related to its visual effect, such as its coordinate position, color, size, rotation angle, and scale. On the other hand, the interactive response mode of the target object is acquired, and the object interaction model of the target object is generated from it. The object interaction model gives the target object the capability to respond to interactive operations and specifically includes the logic functions related to collision detection. Setting the object visual model and object interaction model of the target object separately in this way allows the object interaction model to be loaded on demand, increasing loading speed.
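The separation of the two models can be pictured with two plain data structures; every field and method below is an illustrative assumption rather than the patent's prescribed layout.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class ObjectVisualModel:
    """Visual display attributes only; carries no interaction information."""
    mesh: str                                        # asset reference (placeholder)
    color: Tuple[float, float, float] = (1.0, 1.0, 1.0)
    scale: float = 1.0
    rotation_deg: float = 0.0

@dataclass
class ObjectInteractionModel:
    """Collision-detection logic only; loaded on demand for interactive objects."""
    collider_shape: str = "box"                      # e.g. a rectangular collision box
    collider_size: Tuple[float, float, float] = (1.0, 1.0, 1.0)

    def respond(self, operation: str) -> str:
        # Produce the state corresponding to the interactive operation,
        # e.g. "state_after_attack" for a felled tree.
        return f"state_after_{operation}"
```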
In addition, in the course of implementing the present invention, the inventor found that a virtual scene may contain many target objects of the same type, so setting a separate object visual model and object interaction model for every target object consumes a large amount of system resources. To save system resources and improve storage efficiency, the target objects are optionally divided by object type, and a corresponding type visual model and type interaction model are set for each object type. The type visual model implements the visual display function of target objects of that type, and the type interaction model implements their interactive operation function. By letting target objects of the same type share one type visual model or type interaction model, storage consumption is greatly reduced, and batch loading of same-type target objects becomes convenient.
In one implementation, the target objects contained in the virtual scene are clustered to obtain at least one object type, and a type interaction model is set for each object type. For example, tree target objects are clustered into a tree object type and shrub target objects into a shrub object type, and a corresponding type interaction model is set for each.
In one implementation, clustering may be performed according to the object kind and the object visual information of each target object to obtain at least one object type. The object kind may be the tree kind mentioned above, and so on; the object visual information includes object size, object shape, and the visual modules the object contains. For example, several target objects with similar sizes or shapes are clustered into one object type: the sizes and shapes of various trees differ, yet a uniform type interaction model can be set for all trees. As another example, several target objects containing the same visual module are clustered into one object type, and the type interaction model is set according to that shared module. A visual module is a component used to render the visual effect of one part of the target object. For example, an apple-tree object includes a branch visual module for presenting the visual effect of branches, a trunk visual module for the trunk, and a fruit visual module for the fruit. Accordingly, a subject visual module can be extracted from the several visual modules of a target object, and the type interaction model set based on it. The subject visual module is typically a module common to multiple target objects, or one with a larger surface area or a more regular shape. In one implementation, a subject visual module is extracted from the visual modules of each target object, and target objects with the same subject visual module are clustered into the object type corresponding to that module. Correspondingly, for each object type, the type interaction model is set according to that object type's subject visual module. For example, with a trunk-class visual module as the subject module, the type interaction model is set based on it, so that the shape and position of the type interaction model match the trunk. Specifically, a rectangular collision box may be added at the trunk position as the type interaction model, as sketched below.
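A minimal sketch of clustering by subject visual module and fitting one shared collision box per object type. The "largest surface area" criterion for choosing the subject module, and all field names, are assumptions for this sketch.

```python
from collections import defaultdict
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class VisualModule:
    name: str                            # e.g. "trunk", "branch", "fruit"
    surface_area: float
    bounds: Tuple[float, float, float]   # bounding-box size of the module

@dataclass
class TargetObject:
    object_id: str
    visual_modules: List[VisualModule]

def cluster_by_subject_module(targets: List[TargetObject]):
    """Cluster target objects that share the same subject visual module."""
    clusters = defaultdict(list)
    for obj in targets:
        # Assumption: the subject module is the module with the largest surface area.
        subject = max(obj.visual_modules, key=lambda m: m.surface_area)
        clusters[subject.name].append(obj)
    return clusters

def type_interaction_model_for(subject: VisualModule) -> dict:
    """One shared type interaction model per object type: a rectangular collision
    box fitted to the subject module, e.g. placed at the trunk of a tree."""
    return {"collider": "box", "size": subject.bounds, "anchor": subject.name}
```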
In addition, step S210 is an optional step, and is aimed at increasing the subsequent loading speed, in other embodiments of the present invention, step S210 may also be omitted, and the object visual model and the object interaction model are generated and loaded in real time in the subsequent steps.
Step S220: acquiring the target objects contained in the virtual scene, and loading the object visual model of each target object in the virtual scene.
The virtual scene in the embodiment includes various scenes such as a virtual reality scene, a game scene, a human-computer interaction scene, and the like. Since the target objects are already divided in step S200 and step S210 in advance, in this step, the target objects included in the virtual scene can be obtained directly according to the object identifiers of the respective objects, and then the object visual model of the target objects is loaded in the virtual scene.
In addition, as mentioned above, in order to save system resources and improve storage efficiency, the target object may be divided according to object types, and a corresponding type visual model and a type interaction model are respectively set for each object type. Correspondingly, before the object visual model of the target object is loaded in the virtual scene, a type visual model corresponding to the object type of the target object is further obtained, and the object visual model of the target object is obtained according to the type visual model.
Considering that each target object in an actual scene may have its own pose state (for example, same-kind vegetation objects growing at different positions differ in growth direction, size, and coordinates), and in order to show each target object's pose accurately, this step first obtains the type visual model corresponding to the object type of the target object; then obtains the object pose information corresponding to the object identifier of the target object; and finally adjusts the pose state of the type visual model according to that pose information to obtain the object visual model of the target object. The adjusted type visual model is thus loaded as the object visual model of the target object. The object pose information is stored in association with the object identifier of the target object and describes the target object's pose state. Specifically, it includes position- and posture-related attributes such as the object's position coordinates, size, rotation angle, and scale, and may further include the object's color information, material information, and so on; in short, any attribute feature related to the object's position and posture can serve as object pose information. It can be seen that, in this embodiment, a target object is represented by an object type, which indicates the category the object belongs to, and an object identifier, which uniquely identifies the specific object. By storing the visual characteristics common to same-type target objects in the type visual model associated with the object type, and the unique pose characteristics of each target object in the object pose information associated with the object identifier, both the commonality of same-type objects and the individuality of each object are taken into account.
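How a shared type visual model plus per-object pose information might yield each object's own visual model is sketched below; the table layout and field names are assumptions, as the patent does not fix them.

```python
from dataclasses import dataclass
from typing import Dict, Tuple

@dataclass
class ObjectPose:
    """Pose information stored in association with the object identifier."""
    position: Tuple[float, float, float]
    rotation_deg: float
    scale: float

def load_visual_instance(object_type: str, object_id: str,
                         type_visual_models: Dict[str, dict],
                         pose_table: Dict[str, ObjectPose]) -> dict:
    """Instantiate the shared type visual model with this object's own pose."""
    base = type_visual_models[object_type]   # visual features common to the type
    pose = pose_table[object_id]             # pose features unique to this object
    instance = dict(base)                    # shallow copy of the shared model
    instance.update(position=pose.position,
                    rotation_deg=pose.rotation_deg,
                    scale=pose.scale)
    return instance
```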
Step S230: scene information corresponding to the virtual scene is detected.
In this embodiment, the scene information corresponding to the virtual scene mainly refers to the object state information of an action object corresponding to the virtual scene. Such an action object appears directly or indirectly in the virtual scene and mainly refers to an object that can actively trigger an interactive action, such as an animal object or character object in the scene; it may also be the controlled object corresponding to a game user in a game scene. Specifically, the object state information of the action object includes relative position information of the action object with respect to the target object, and/or operation state information of an interactive operation triggered by the action object. Since the action object can move, its relative position with respect to the target object changes constantly. In addition, interactive operations triggered by the action object are detected dynamically so as to acquire their operation state information.
Of course, those skilled in the art can understand that the scene information corresponding to the virtual scene may also be other various types of scene-related information, and the present invention is not limited thereto.
Step S240: determining, according to the scene information, the interactive objects among the target objects that meet the interaction condition.
Specifically, whether a target object meets the interaction condition is judged from the scene information, and a target object that does is determined to be an interactive object. When there are multiple target objects, all of them are interactive objects if all meet the interaction condition; if only some meet it, only those are interactive objects. Since the scene information changes dynamically, the set of interactive objects among the target objects also changes dynamically. Specifically, the interaction condition includes at least one of the following:
the first interaction condition is a distance judgment condition: the relative distance between the action object and the target object is smaller than a first preset distance threshold. Specifically, when the relative distance between the action object and the target object is determined to be smaller than a first preset distance threshold according to the relative position information of the action object relative to the target object, it is determined that the target object meets the interaction condition. Therefore, the first interaction condition is a distance condition, and is mainly determined according to the relative distance of the target object relative to the action object or the preset position. In specific implementation, if the relative distance between the action object and any one of the target objects is smaller than a first preset distance threshold, the target object with the relative distance smaller than the first preset distance threshold is determined as the interactive object meeting the interaction condition. The first preset distance threshold may be set according to a specific scene, for example, in a game scene, the first preset distance threshold may be set according to a distance value corresponding to a common attack range.
The second interaction condition is an interaction intention judgment condition: the action object has an interaction intention toward the target object, the intention being determined according to the operation type of the interactive operation triggered by the action object. In a specific implementation, information such as the operation type of the triggered interactive operation is acquired; when the operation type shows that the action object has an interaction intention toward some target object, that target object is determined to be an interactive object meeting the interaction condition. Specifically, the interactive operations triggered by the action object are detected, and the target object toward which the action object has an interaction intention is judged from the operation type and operation state. A target object against which the action object has already triggered an interaction behavior, such as one the action object has attacked, can be determined to be an interactive object with interaction intention; so can a target object that the action object has not yet attacked but is about to attack (for example, when a preparatory action such as raising an arm before an attack is performed).
The second interaction condition thus judges whether the action object has an interaction intention toward the target object. In a specific implementation, the operation types of the interactive operations related to interaction intention can be determined in advance, so that whether the action object has such an intention is judged from the operation type of the interactive operation it triggers. These operation types can be further divided into a first operation type corresponding to direct intention, such as attack-class operations, and a second operation type corresponding to indirect intention, such as the preparatory operations for attack-class operations. By detecting whether the operation type of the interactive operation the action object triggers against the target object belongs to the first or the second operation type, the action object's operation intention can be judged accurately. Setting the second operation type allows the action object's intention to be anticipated, ensuring that the object interaction model of the target object is loaded in time.
The two conditions can be used independently or in combination. When combined, a target object can be determined to be an interactive object when it satisfies both conditions, i.e., the action object is close to it and has an interaction intention toward it. Alternatively, the two conditions can be given priorities: for example, the distance judgment condition is evaluated first, and the interaction intention condition only when the distance condition is satisfied, so that intention is monitored only for the few nearby target objects, reducing the monitoring cost. Conversely, the intention condition can be evaluated first and the distance condition only for objects with interaction intention, achieving the same goal, as sketched below.
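The priority scheme (distance first, intention second) might look like the following sketch, which reuses the assumed field and operation names from the earlier condition sketch; treating intention as a single actor-wide flag rather than per target is a further simplification.

```python
import math

INTENT_OPS = {"attack", "raise_arm"}   # direct and indirect intention (assumed names)

def select_interactive_objects(actor, targets, distance_threshold: float):
    """Evaluate the cheap distance condition over all targets first, then monitor
    interaction intention only for the nearby ones, reducing the monitoring cost.
    (The reverse priority, intention first, is equally possible.)"""
    nearby = [t for t in targets
              if math.dist(actor.position, t.position) < distance_threshold]
    if actor.pending_operation in INTENT_OPS:
        return nearby                  # these targets meet both conditions
    return []
```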
Step S250: loading the object interaction model of the interactive object.
After the interactive objects are determined, their object interaction models are loaded. As described above, to improve model reuse and reduce storage space, a type interaction model is set for target objects of the same type; accordingly, before the object interaction model of an interactive object is loaded, the type interaction model corresponding to the interactive object's object type is obtained, and the object interaction model of the interactive object is derived from it.
In addition, considering that each target object has different pose information, in one implementation, the pose state of the type interaction model may be further adjusted according to the object pose information corresponding to the object identifier of the interaction object to obtain the object interaction model of the interaction object. The object pose information is the object pose information stored in association with the object identifier and acquired in step S220. Through the object pose information, the type interaction model which is generally used for the target object of the same type can be adjusted to the object interaction model matched with the specific pose state of the current target object, so that the display state of each target object is optimized on the premise of reducing the storage space as much as possible.
In particular, the type interaction model may be implemented by a collision box, which stores collision detection information. Also, to ensure interaction accuracy, the type interaction model must match the type visual model completely: for example, when the shape of the collision box's detection area conforms exactly to the shape presented by the type visual model, any edge position of the target object can respond accurately in the event of a collision, improving accuracy. In this embodiment, a shared type visual model and type interaction model are set in advance for target objects of the same type; compared with setting an object visual model and object interaction model separately for each target object, this facilitates exact shape matching between the type visual model and the type interaction model while reducing storage space. In addition, since object pose information is set for each target object, the pose states of both the type visual model and the type interaction model can be adjusted through it.
In addition, in this embodiment it is necessary to judge whether the object interaction model of the interactive object is cached in the cache space: if it is, the model is loaded from the cache space; if not, it is loaded from non-cache space. The cache space includes memory space, cache memory, and other storage that raises access speed. The non-cache space includes hard-disk space on the local terminal, or storage space on a cloud server. A hard disk offers large capacity and stable, loss-resistant data, so storing the object interaction model there is highly safe; a cloud server can synchronize data across multiple terminals, so storing the model there facilitates data synchronization and backup among terminals.
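Cache-first loading might look like the sketch below; `non_cache_store.fetch` stands in for a hard-disk or cloud-server read and is an assumed interface, as is the cache object from the earlier cache sketch.

```python
def load_interaction_model(obj, cache, non_cache_store):
    """Prefer the cache space; fall back to non-cache space (hard disk or cloud)."""
    model = cache.get(obj.object_type)        # cache space: memory / cache memory
    if model is None:
        # Non-cache space: local hard-disk space or cloud-server storage.
        model = non_cache_store.fetch(obj.object_type)
    obj.interaction_model = model
    obj.has_interaction_model = True
    return model
```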
Step S260: determining the associated objects having an association relationship with the interactive object, and loading the object interaction models of the associated objects.
The sequence of step S260 and step S250 is not limited in the present invention, and those skilled in the art can understand that step S260 and step S250 may also be executed simultaneously. In addition, step S260 is an optional step, and in other embodiments of the present invention, step S260 may also be omitted.
When the virtual scene contains multiple target objects, the influence range of an interactive operation may cover several adjacent target objects at once. In that case, to simulate the interaction state of each affected target object, the associated objects that have an association relationship with the interactive object must also be determined when the interactive object's object interaction model is loaded, so that their object interaction models can be loaded as well.
Specifically, the associated objects having an association relationship with the interactive object can be determined in various ways: for example, target objects within a first preset range of the interactive object are determined to be associated objects (i.e., target objects in the interactive object's vicinity), or target objects within a second preset range of the action object are determined to be associated objects (i.e., target objects in the action object's vicinity). The first preset range and/or the second preset range can themselves be set in several ways, for example by at least one of the following:
In a first determination mode, device attribute information of the terminal device displaying the virtual scene is acquired, an area range threshold corresponding to the device attribute information is determined, and the first preset range and/or the second preset range are determined according to that threshold. The device attribute information includes device type information (e.g., mobile device, fixed device) and/or device performance information (e.g., hardware configuration). Correspondingly, if the device type information indicates a mobile device and/or the device performance information indicates a low-end configuration, the area range threshold should be reduced to avoid stuttering caused by an excessive loading load; if the device type information indicates a fixed device and/or the device performance information indicates a high-end configuration, the area range threshold can be increased so that target objects in a larger range are loaded, making the interactive experience more realistic. By pre-storing the correspondence between device attribute information and area range thresholds, the size of the area range threshold can be determined flexibly according to the device attribute information. The area range threshold sets the size of the first preset range and/or the second preset range.
In a second determination mode, motion track information of the action object and operation position information of multiple interactive operations continuously triggered by the action object are acquired, the intended operation area of the action object is predicted from the motion track information and the operation position information, and the first preset range and/or the second preset range are determined according to the intended operation area. For example, the motion trend and motion direction of the action object can be determined from its motion track information, and the intended operation area predicted from them. As another example, the interaction trend and interaction direction of the action object can be determined from the operation positions of multiple continuously triggered interactive operations, and the intended operation area predicted from those. For instance, when the action object performs attack-type interactive operations, the intended operation area may be predicted from the attack directions of multiple such operations. By predicting the intended operation area, likely operation areas can be anticipated before the action object triggers an interactive operation, making the selection of associated objects more reasonable. In practice, in this mode the associated objects having an association relationship with the interaction object may be determined directly from the predicted intended operation area, without setting the first and second preset ranges.
In a third determination mode, a historical interaction record of the action object is obtained, the action object's interaction preference for each type of target object is determined from that record, a preference type is determined from the interaction preference, and the first preset range and/or the second preset range are determined according to the area where target objects of the preference type are located. Specifically, the historical interaction record reveals information such as the interaction frequency and interaction mode of the action object for different types of target objects, from which the interaction preference and then the preference type are derived. For example, target objects with a higher interaction frequency may be screened as associated objects. As another example, the interaction order or interaction combination of the action object across different types of target objects may be determined from the historical interaction record, and the associated objects set accordingly. The interaction order refers to the order in which the action object triggers interactive operations on multiple different target objects: for example, if the action object tends to interact in the order first-class, second-class, then third-class target object, then when the current interaction object is a second-class target object, the third-class target objects can be taken as associated objects. An interaction combination means the action object tends to interact with a combination of several target objects: for example, if an action object that interacts with target object A also interacts with target object B, then target object B may be determined as an associated object when target object A is the current interaction object.
In a fourth determination mode, the first preset range and/or the second preset range are determined according to task sequence information and/or route setting information corresponding to the action object. For example, in a game scene the action object usually executes a series of operation actions to complete a specified task set by a game level; accordingly, the first preset range and/or the second preset range can be determined from that series of operation actions. The task sequence information can therefore be acquired in advance from the game level settings, the intended operation area of the action object predicted from the task sequence information, and the associated objects screened according to that area. The route setting information may be determined from terrain information in the virtual scene: for example, if an unreachable area such as a canyon or cliff exists around the action object, the route must be determined from the reachable area and the associated objects set according to that route.
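As referenced above, a range-based selection of associated objects could be sketched as follows (assuming each object exposes a position tuple; the function name and range parameters are illustrative):

```python
import math

def find_associated_objects(interaction_obj, action_obj, targets,
                            first_range, second_range):
    """Collect associated objects: targets within first_range of the
    interaction object, or within second_range of the action object."""
    associated = []
    for t in targets:
        if t is interaction_obj:
            continue
        near_interaction = math.dist(t.position, interaction_obj.position) <= first_range
        near_action = math.dist(t.position, action_obj.position) <= second_range
        if near_interaction or near_action:
            associated.append(t)
    return associated
```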
Step S270: and responding to the interactive operation triggered by the action object, and controlling the display state of the object visual model of the target object to be a collision response state.
Specifically, in response to an interactive operation triggered by the action object, when it is determined from the object interaction model that the interactive operation collides with the interaction object, the collision response state corresponding to the operation type and/or collision position of the interactive operation is determined, and the display state of the object visual model of the interaction object is controlled to change from the initial state to that collision response state. When the object visual model has not been collided with, its display state is the initial state.
Specifically, each collision position and its corresponding collision response state need to be stored in the object interaction model. For example, for a plant-type target object, each branch is treated as a collision position, and the object interaction model stores the correspondence between each collision position and its collision response state in advance: when branch A is hit, the collision response state is that branch A and the branches beside it fall; when branch B is hit, the collision response state is that branch B and the branches beside it fall; when a fruit is hit, the collision response state is that the fruit and the branch it hangs from fall. In short, by storing each collision position and its corresponding collision response state in advance, the change of the display state of the object visual model can be controlled.
In addition, in practice the collision response state depends not only on the collision position of the interactive operation but also on its operation type; for example, interactive operations of different attack types produce different collision response states. The object interaction model therefore also needs to store the various operation types and their corresponding collision response states. To this end, the object interaction model stores a collision response mapping table that records the mapping among collision position, interactive operation, and collision response state.
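Such a mapping table could be as simple as the following sketch (the positions, operation types, and state names are hypothetical examples, not values from the patent):

```python
# Hypothetical collision response mapping table for a plant-type object:
# (collision position, operation type) -> collision response state.
COLLISION_RESPONSE_TABLE = {
    ("branch_a", "light_attack"): "branch_a_sways",
    ("branch_a", "heavy_attack"): "branch_a_and_neighbors_fall",
    ("fruit",    "light_attack"): "fruit_falls",
    ("fruit",    "heavy_attack"): "fruit_and_branch_fall",
}

def resolve_response(collision_position, operation_type):
    # Fall back to the initial state when no mapping entry exists.
    return COLLISION_RESPONSE_TABLE.get(
        (collision_position, operation_type), "initial_state")
```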
Step S280: and under the condition that the interactive object of the loaded object interactive model is determined to meet the unloading condition, unloading the object interactive model of the interactive object meeting the unloading condition, and caching the object interactive model of the interactive object meeting the unloading condition into a cache space.
Specifically, it is judged whether an interaction object of a loaded object interaction model meets the unloading condition; if so, the object interaction model of that interaction object is unloaded. The unloading condition comprises at least one of:
the first unloading condition is a distance class unloading condition, and comprises the following steps: the relative distance between the interaction object and the action object of the loaded object interaction model is larger than a second preset distance threshold. Specifically, whether the relative distance between the interaction object loaded with the object interaction model and the action object is greater than a second preset distance threshold value is judged; if so, determining the interactive objects with the relative distance larger than the second preset distance threshold value as the interactive objects meeting the unloading condition.
The second unloading condition is an interactive class unloading condition, which comprises the following steps: and the non-interactive time length between the interactive object and the action object of the loaded object interactive model is greater than a preset time length threshold value. Specifically, whether the non-interactive time length between the interactive object and the action object of the loaded object interactive model is greater than a preset time length threshold value is judged; if so, determining the interactive object with the non-interactive duration being greater than the preset duration threshold as the interactive object meeting the unloading condition.
The two unloading conditions can be used independently or in combination, and those skilled in the art can flexibly set other unloading conditions; the invention is not limited in this respect. When the two are combined, the interaction object may be determined to meet the unloading condition only when it satisfies both, i.e., it is far away and its non-interaction duration exceeds the preset duration threshold. Alternatively, priorities may be set between the two conditions: for example, the distance-class condition is checked first, and the interaction-class condition is checked only for objects that satisfy it, so that non-interaction duration is monitored only for the target objects that are far away, reducing monitoring cost. Conversely, the interaction-class condition may be checked first, and the distance-class condition checked only for objects that satisfy it, so that distance is monitored only for the target objects that have not interacted for a long time, which likewise reduces monitoring cost.
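A sketch of the prioritized combination (distance checked first, idle time examined only for far-away objects); the thresholds and the last_interaction_time attribute are assumptions:

```python
import math
import time

def meets_unload_condition(obj, action_obj,
                           distance_threshold=50.0, idle_threshold=120.0):
    """Distance-class condition is checked first; the non-interaction
    duration is examined only for objects that are already far away."""
    far_away = math.dist(obj.position, action_obj.position) > distance_threshold
    if not far_away:
        return False
    # obj.last_interaction_time is assumed to be recorded with time.monotonic()
    idle_for = time.monotonic() - obj.last_interaction_time
    return idle_for > idle_threshold
```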
The execution sequence of each step can be flexibly adjusted by those skilled in the art, and each step can be split into more steps, or combined into fewer steps, or some steps can be deleted. The first and second embodiments can be combined with each other, but the invention is not limited thereto. Moreover, the above steps may be executed in a loop, for example, after the object interaction model is unloaded, if it is detected again that the target object meets the interaction condition, the corresponding object interaction model is loaded again. In summary, the loading operation and the unloading operation of the object interaction model can be dynamically executed along with the detection result of the scene information.
In addition, when it is determined that the associated object of the loaded object interaction model meets the unloading condition, the object interaction model of the associated object meeting the unloading condition needs to be unloaded, and the object interaction model of the associated object meeting the unloading condition needs to be cached in the cache space. In a word, the unloaded object interaction model is cached to the cache space, so that the subsequent loading speed can be increased.
In addition, in an optional implementation, to save cache space, the target objects contained in the virtual scene are further divided into first-class target objects and second-class target objects, where first-class target objects have a higher response priority than second-class target objects. Correspondingly, when the object interaction model of an interaction object meeting the unloading condition is to be cached, it is further judged whether that interaction object is a first-class target object: if so, its object interaction model is cached in the cache space; if not, its object interaction model is deleted directly. Of course, after deletion the object interaction model still remains in the non-cache space and is loaded directly from there next time.
In this way, interaction objects with a higher response priority are cached in the cache space, which speeds up loading when they are loaded again. A higher response priority means the interaction object must respond immediately upon receiving an interactive operation; a late response may cause abnormal conditions such as frame stuttering or erroneous interaction results. For example, in a game scenario, when some interaction objects respond to an interactive operation, the result of that operation is related to the entity attributes of an associated entity; to keep those entity attributes accurate, the interactive operations of such objects must be responded to in time. That is, a target object with an associated entity has a higher response priority than a target object without one. For example, assume the target object is an "apple tree" with an associated entity "apple basket": whenever an apple is knocked down from the "apple tree", the entity attributes of the corresponding "apple basket" change dynamically (adjusted according to the growing number of apples in the apple basket).
The division of the target objects contained in the virtual scene into first-class and second-class target objects can therefore be realized in various ways, for example according to the number and types of associated entities of the target objects, the object types of the target objects, their interactive response modes, and/or their historical interaction frequency. For instance, target objects with a higher historical interaction frequency are determined as first-class target objects.
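A minimal sketch of the cache-or-delete decision on unload (the response_class attribute and cache API are assumptions):

```python
def on_unload(obj, model, cache):
    """Cache-or-delete decision: only first-class (high response priority)
    objects are kept in the cache space after unloading."""
    if obj.response_class == "first":
        cache.put(obj.id, model)       # cached for fast reloading
    # Second-class objects are simply dropped; their model still exists in
    # the non-cache space and will be reloaded from there next time.
```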
In addition, in yet another optional implementation, the cache space includes a plurality of cache subspaces with different access speeds, each corresponding to a different object priority. For example, the cache space is divided into several levels, and cache subspaces of different levels differ in access speed and data caching duration: the first-level cache subspace has a caching duration of a first duration and an access speed of a first rate; the second-level cache subspace has a caching duration of a second duration and an access speed of a second rate; and so on, where the first duration is shorter than the second duration and the first rate is faster than the second rate. For example, the first-level cache subspace caches only object interaction models unloaded within the last 30 minutes but offers faster access, while the second-level subspace caches models unloaded within the last 2 hours with slightly slower access. In general, a just-unloaded object interaction model has a high probability of being interacted with again, while one unloaded long ago has a low probability, so setting the caching duration and access speed of each cache level reasonably can speed up access to high-frequency interaction objects while reducing cache space. The caching duration of each cache subspace can be determined from the duration of each game level in the game scenario. Correspondingly, when caching the object interaction model of an interaction object meeting the unloading condition, the object priority of that interaction object is further determined, and its object interaction model is cached in the cache subspace corresponding to that priority. The object priority of an interaction object may be determined from information such as its object type and access frequency.
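A two-level tiered cache along these lines could look like the following sketch (the 30-minute and 2-hour lifetimes mirror the example above; the class and method names are illustrative):

```python
import time

class TieredCache:
    """Two illustrative cache levels: a fast level that keeps models for
    30 minutes and a slower level that keeps them for 2 hours."""
    def __init__(self):
        self.levels = {
            "fast": {"ttl": 30 * 60,  "store": {}},   # high-priority objects
            "slow": {"ttl": 2 * 3600, "store": {}},   # lower-priority objects
        }

    def put(self, obj_id, model, priority):
        level = self.levels["fast" if priority == "high" else "slow"]
        level["store"][obj_id] = (model, time.monotonic())

    def get(self, obj_id):
        for level in self.levels.values():
            entry = level["store"].get(obj_id)
            if entry:
                model, stored_at = entry
                if time.monotonic() - stored_at <= level["ttl"]:
                    return model
                del level["store"][obj_id]   # expired: evict
        return None
```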
In summary, by separating the object visual model from the object interaction model and dynamically detecting the object state information corresponding to the virtual scene, the interaction objects meeting the interaction condition among the target objects are determined dynamically, and on-demand loading of object interaction models is realized, which greatly improves object loading speed and keeps interface display smooth. In addition, setting a type visual model and a type interaction model shared by target objects of the same type reduces storage space, improves model reuse, and makes it easy to guarantee precise matching between the type visual model and the type interaction model, thereby improving the accuracy of interaction responses. Moreover, by setting object pose information for each target object, the pose state of every target object can be set flexibly on the basis of the type visual model and the type interaction model. Finally, caching the object interaction models of interaction objects meeting the unloading condition speeds up subsequent loading; dividing the target objects into at least two classes, dividing the cache space into several levels, and setting multiple object priorities for the target objects further improves loading speed and cache space utilization.
Of course, those skilled in the art may also cache the object interaction model of the target object in the cache space in advance to increase the first loading speed, which is not limited in the present invention.
EXAMPLE III
Fig. 3 shows an object loading apparatus in a virtual scene according to a third embodiment of the present invention, including:
an obtaining module 31, adapted to obtain scene information corresponding to the virtual scene, and determine an interactive object meeting an interactive condition in the target object according to the scene information; wherein the target object is an object loaded with an object visual model contained in the virtual scene;
a loading module 32 adapted to load an object interaction model of the interaction object; wherein the object interaction model is used for responding to an interaction operation triggered by the interaction object;
the unloading module 33 is adapted to, in a case that it is determined that the interactive object of the loaded object interactive model meets the unloading condition, unload the object interactive model of the interactive object meeting the unloading condition, and cache the object interactive model of the interactive object meeting the unloading condition in the cache space.
Optionally, the loading module is specifically adapted to:
judging whether an object interaction model of the interaction object is cached in the cache space;
and under the condition that the object interaction model of the interaction object is cached in the cache space, loading the object interaction model of the interaction object from the cache space.
Optionally, the loading module is specifically adapted to:
and loading the object interaction model of the interaction object from the non-cache space under the condition that the object interaction model of the interaction object is not cached in the cache space.
Optionally, the cache space includes: a memory space; the non-cache space includes: a hard disk space included in the local terminal, or a storage space included in the cloud server.
Optionally, the obtaining module is further adapted to:
dividing target objects contained in the virtual scene into a first class of target objects and a second class of target objects; wherein the first class of target objects has a higher response priority than the second class of target objects;
the unloading module is specifically adapted to:
judging whether the interactive object meeting the unloading condition is a first-class target object or not;
and if so, caching the object interaction model of the interaction object meeting the unloading condition to a cache space.
Optionally, the obtaining module is specifically adapted to:
and dividing the target objects contained in the virtual scene into a first class of target objects and a second class of target objects according to the object types, interactive response modes, historical interactive frequency and/or the number of associated entities of the target objects.
Optionally, the cache space includes: the access speed of each cache subspace is different, and each cache subspace corresponds to different object priorities respectively;
the unloading module is specifically adapted to:
determining the object priority of the interactive objects meeting the unloading conditions, and caching the object interaction model of the interactive objects meeting the unloading conditions to a cache subspace corresponding to the object priority.
Optionally, the scene information corresponding to the virtual scene includes at least one of: object state information of an action object corresponding to the virtual scene and object state information of a target object included in the virtual scene;
wherein the object state information of the action object corresponding to the virtual scene includes: the relative position information of the action object relative to the target object and/or the operation state information of the interactive operation triggered by the action object;
the interaction condition comprises at least one of:
the relative distance between the action object and the target object is smaller than a first preset distance threshold;
the action object has an interaction intention for the target object; wherein the interaction intention is determined according to an operation type of an interaction operation triggered by the action object;
and, the unloading condition includes:
the relative distance between the interaction object loaded with the object interaction model and the action object is larger than a second preset distance threshold value; and/or,
and the non-interactive time length between the interactive object and the action object of the loaded object interactive model is greater than a preset time length threshold value.
Optionally, if the number of target objects included in the virtual scene is multiple, the loading module is further adapted to:
determining an associated object having an association relation with the interactive object, and loading an object interaction model of the associated object;
wherein the determining of the association object having an association relationship with the interaction object comprises: determining a target object within a first preset range of the interactive object as the associated object; and/or determining a target object in a second preset range of the action object as the associated object;
and the first preset range and/or the second preset range is determined by at least one of the following ways:
acquiring equipment attribute information of terminal equipment displaying the virtual scene, determining an area range threshold corresponding to the equipment attribute information, and determining the first preset range and/or the second preset range according to the area range threshold;
acquiring motion track information of the action object and operation position information of a plurality of interactive operations continuously triggered by the action object, predicting an intention operation area of the action object according to the motion track information and the operation position information, and determining the first preset range and/or the second preset range according to the intention operation area;
acquiring a historical interaction record of the action object, determining interaction preference of the action object for various types of target objects according to the historical interaction record, determining a preference type of the action object according to the interaction preference, and determining the first preset range and/or the second preset range according to an area where the target object corresponding to the preference type is located; and the number of the first and second groups,
and determining the first preset range and/or the second preset range according to task sequence information and/or route setting information corresponding to the action object.
The specific structure and the working principle of each module may refer to the description of the corresponding part of the method embodiment, and are not described herein again.
Yet another embodiment of the present application provides a non-volatile computer storage medium, where the computer storage medium stores at least one executable instruction, and the computer executable instruction may execute the object interaction method in the virtual scene in any of the above method embodiments. The executable instructions may be specifically configured to cause a processor to perform respective operations corresponding to the above-described method embodiments.
Fig. 4 is a schematic structural diagram of an electronic device according to another embodiment of the present invention, and the specific embodiment of the present invention does not limit the specific implementation of the electronic device.
As shown in fig. 4, the electronic device may include: a processor (processor)502, a Communications Interface 506, a memory 504, and a communication bus 508.
Wherein:
the processor 502, communication interface 506, and memory 504 communicate with each other via a communication bus 508.
A communication interface 506 for communicating with network elements of other devices, such as clients or other servers.
The processor 502 is configured to execute the program 510, and may specifically execute relevant steps in the above embodiment of the object interaction method in the virtual scene.
In particular, program 510 may include program code that includes computer operating instructions.
The processor 502 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present invention. The electronic device comprises one or more processors, which may be of the same type, such as one or more CPUs, or of different types, such as one or more CPUs and one or more ASICs.
The memory 504 is used for storing the program 510. The memory 504 may comprise high-speed RAM, and may also include non-volatile memory, such as at least one disk memory.
The program 510 may be specifically configured to enable the processor 502 to execute the corresponding operations in the above method embodiments.
The algorithms and displays presented herein are not inherently related to any particular computer, virtual machine, or other apparatus. Various general purpose devices may be used with the teachings herein. The required structure for constructing such a device will be apparent from the description above. Moreover, the present invention is not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any descriptions of specific languages are provided above to disclose the best mode of the invention.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that: that the invention as claimed requires more features than are expressly recited in each claim.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, any of the claimed embodiments may be used in any combination.
The various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functionality of some or all of the components in an apparatus according to an embodiment of the invention. The present invention may also be embodied as apparatus or device programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present invention may be stored on computer-readable media or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The usage of the words first, second and third, etcetera do not indicate any ordering. These words may be interpreted as names.

Claims (12)

1. An object interaction method in a virtual scene, comprising:
acquiring scene information corresponding to the virtual scene, and determining an interactive object which meets interactive conditions in a target object according to the scene information; wherein the target object is an object loaded with an object visual model contained in the virtual scene;
loading an object interaction model of the interaction object; wherein the object interaction model is used for responding to an interaction operation triggered by the interaction object;
and under the condition that the interactive object of the loaded object interactive model is determined to meet the unloading condition, unloading the object interactive model of the interactive object meeting the unloading condition, and caching the object interactive model of the interactive object meeting the unloading condition into a cache space.
2. The method of claim 1, wherein the loading the object interaction model of the interaction object comprises:
judging whether an object interaction model of the interaction object is cached in the cache space;
and under the condition that the object interaction model of the interaction object is cached in the cache space, loading the object interaction model of the interaction object from the cache space.
3. The method of claim 2, wherein after determining whether the object interaction model of the interaction object is cached in the cache space, the method further comprises:
and loading the object interaction model of the interaction object from the non-cache space under the condition that the object interaction model of the interaction object is not cached in the cache space.
4. The method of claim 3, wherein the cache space comprises: a memory space; the non-cache space includes: a hard disk space included in the local terminal, or a storage space included in the cloud server.
5. The method of any of claims 1-4, wherein prior to said obtaining scene information corresponding to the virtual scene, the method further comprises:
dividing target objects contained in the virtual scene into a first class of target objects and a second class of target objects; wherein the first class of target objects has a higher response priority than the second class of target objects;
the caching the object interaction model of the interaction object meeting the unloading condition into a cache space comprises:
judging whether the interactive object meeting the unloading condition is a first-class target object or not;
and if so, caching the object interaction model of the interaction object meeting the unloading condition to a cache space.
6. The method of claim 5, wherein the dividing the target objects included in the virtual scene into a first class of target objects and a second class of target objects comprises:
and dividing the target objects contained in the virtual scene into a first class of target objects and a second class of target objects according to the object types, interactive response modes, historical interactive frequency and/or the number of associated entities of the target objects.
7. The method of claim 5 or 6, wherein the cache space comprises: the access speed of each cache subspace is different, and each cache subspace corresponds to different object priorities respectively;
the caching the object interaction model of the interaction object meeting the unloading condition into a cache space comprises:
determining the object priority of the interactive objects meeting the unloading conditions, and caching the object interaction model of the interactive objects meeting the unloading conditions to a cache subspace corresponding to the object priority.
8. The method of any of claims 1-7, wherein the scene information corresponding to the virtual scene comprises at least one of: object state information of an action object corresponding to the virtual scene and object state information of a target object included in the virtual scene;
wherein the object state information of the action object corresponding to the virtual scene includes: the relative position information of the action object relative to the target object and/or the operation state information of the interactive operation triggered by the action object;
the interaction condition comprises at least one of:
the relative distance between the action object and the target object is smaller than a first preset distance threshold;
the action object has an interaction intention for the target object; wherein the interaction intention is determined according to an operation type of an interaction operation triggered by the action object;
and, the unloading condition includes:
the relative distance between the interaction object loaded with the object interaction model and the action object is larger than a second preset distance threshold value; and/or,
and the non-interactive time length between the interactive object and the action object of the loaded object interactive model is greater than a preset time length threshold value.
9. The method according to any one of claims 1 to 8, wherein the number of target objects included in the virtual scene is plural, and after determining an interactive object meeting the interaction condition among the target objects according to the scene information, the method further includes:
determining an associated object having an association relation with the interactive object, and loading an object interaction model of the associated object;
wherein the determining of the association object having an association relationship with the interaction object comprises: determining a target object within a first preset range of the interactive object as the associated object; and/or determining a target object in a second preset range of the action object as the associated object;
and the first preset range and/or the second preset range is determined by at least one of the following ways:
acquiring equipment attribute information of terminal equipment displaying the virtual scene, determining an area range threshold corresponding to the equipment attribute information, and determining the first preset range and/or the second preset range according to the area range threshold;
acquiring motion track information of the action object and operation position information of a plurality of interactive operations continuously triggered by the action object, predicting an intention operation area of the action object according to the motion track information and the operation position information, and determining the first preset range and/or the second preset range according to the intention operation area;
acquiring a historical interaction record of the action object, determining interaction preference of the action object for various types of target objects according to the historical interaction record, determining a preference type of the action object according to the interaction preference, and determining the first preset range and/or the second preset range according to an area where the target object corresponding to the preference type is located; and the number of the first and second groups,
and determining the first preset range and/or the second preset range according to task sequence information and/or route setting information corresponding to the action object.
10. An object loading apparatus in a virtual scene, comprising:
the acquisition module is suitable for acquiring scene information corresponding to the virtual scene and determining an interactive object which meets interactive conditions in a target object according to the scene information; wherein the target object is an object loaded with an object visual model contained in the virtual scene;
the loading module is suitable for loading an object interaction model of the interaction object; wherein the object interaction model is used for responding to an interaction operation triggered by the interaction object;
and the unloading module is suitable for unloading the object interaction model of the interaction object meeting the unloading condition under the condition that the interaction object of the loaded object interaction model meets the unloading condition, and caching the object interaction model of the interaction object meeting the unloading condition into a cache space.
11. An electronic device, comprising: the system comprises a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface complete mutual communication through the communication bus;
the memory is used for storing at least one executable instruction, and the executable instruction causes the processor to execute the operation corresponding to the object interaction method in the virtual scene in any one of claims 1-9.
12. A computer storage medium having at least one executable instruction stored therein, the executable instruction causing a processor to perform operations corresponding to the object interaction method in a virtual scene as claimed in any one of claims 1-9.
CN202111282847.XA 2021-11-01 2021-11-01 Object interaction method and device in virtual scene Pending CN113856197A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111282847.XA CN113856197A (en) 2021-11-01 2021-11-01 Object interaction method and device in virtual scene

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111282847.XA CN113856197A (en) 2021-11-01 2021-11-01 Object interaction method and device in virtual scene

Publications (1)

Publication Number Publication Date
CN113856197A true CN113856197A (en) 2021-12-31

Family

ID=78986569

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111282847.XA Pending CN113856197A (en) 2021-11-01 2021-11-01 Object interaction method and device in virtual scene

Country Status (1)

Country Link
CN (1) CN113856197A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115348325A (en) * 2022-08-24 2022-11-15 中国电子科技集团公司第十五研究所 Multichannel real-time transmission priority management and control method and system
CN115348325B (en) * 2022-08-24 2024-01-23 中国电子科技集团公司第十五研究所 Multichannel real-time transmission priority management and control method and system

Similar Documents

Publication Publication Date Title
US10198838B2 (en) Geometric work scheduling with dynamic and probabilistic work trimming
US11907164B2 (en) File loading method and apparatus, electronic device, and storage medium
CN110377527A (en) A kind of method and relevant device of memory management
US11918900B2 (en) Scene recognition method and apparatus, terminal, and storage medium
CN114071047A (en) Frame rate control method and related device
CN113856197A (en) Object interaction method and device in virtual scene
CN109725802B (en) Page interaction method and device
CN114020355B (en) Object loading method and device based on cache space
CN113786614B (en) Object loading method and device in virtual scene
CN113221819A (en) Detection method and device for package violent sorting, computer equipment and storage medium
CN110665223B (en) Game resource caching method, decision network training method and device
CN112883043A (en) Data statistics method and system based on artificial intelligence and cloud computing and cloud center
CN108470368B (en) Method and device for determining rendering object in virtual scene and electronic equipment
US11854139B2 (en) Graphics processing unit traversal engine
CN111760273B (en) Game fragment processing method, device and equipment
US20180144521A1 (en) Geometric Work Scheduling of Irregularly Shaped Work Items
CN108255417A (en) Data access method, electronic device and readable storage medium storing program for executing
JP7397940B2 (en) Scene entity processing using flattened lists of subitems in computer games
US20220107722A1 (en) Method for dynamically showing virtual boundary, electronic device and computer readable storage medium thereof
CN117237697B (en) Small sample image detection method, system, medium and equipment
US20240086225A1 (en) Container group scheduling methods and apparatuses
CN109840123B (en) Method and device for displaying data to user
CN108304492B (en) Search list updating method, storage medium, device and system
CN114596405A (en) Scene data processing method, system, device and storage medium
CN107807855B (en) Application cleaning method and device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination