CN113786614B - Object loading method and device in virtual scene


Info

Publication number
CN113786614B
Authority
CN
China
Prior art keywords
interaction
model
information
target
virtual scene
Prior art date
Legal status
Active
Application number
CN202111105047.0A
Other languages
Chinese (zh)
Other versions
CN113786614A
Inventor
宁锌
刘晓丹
Current Assignee
Shanghai Mihoyo Tianming Technology Co Ltd
Original Assignee
Shanghai Mihoyo Tianming Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Mihoyo Tianming Technology Co Ltd
Priority to CN202111105047.0A
Publication of CN113786614A
Application granted
Publication of CN113786614B
Legal status: Active
Anticipated expiration


Classifications

    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50: Controlling the output signals based on the game progress
    • A63F13/52: Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00: Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/30: Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by output arrangements for receiving control signals generated by the game device
    • A63F2300/308: Details of the user interface

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention relates to the field of electronic information, and in particular discloses an object loading method and device in a virtual scene, for solving the problem of slow object loading. The method comprises the following steps: detecting scene information corresponding to a virtual scene; determining, according to the scene information, an interaction object that meets an interaction condition from among target objects, where a target object is an object in the virtual scene whose object visual model has been loaded; and loading an object interaction model of the interaction object, where the object interaction model is used to respond to an interaction operation triggered for the interaction object. By separating the object visual model from the object interaction model, the method allows the object interaction model to be loaded on demand, which greatly improves object loading speed and keeps interface display smooth.

Description

Object loading method and device in virtual scene
Technical Field
Embodiments of the invention relate to the field of electronic information, and in particular to an object loading method and device in a virtual scene.
Background
With the continuing development of virtual reality technology, the variety and number of objects that can be displayed in a virtual scene keep increasing. Moreover, as interactive technology matures, many objects contained in a virtual scene can implement interactive functions.
For example, some virtual objects can respond to an interaction operation triggered by an action object in the virtual scene, thereby exhibiting a state corresponding to that operation. For instance, if an animal or plant object in a virtual scene is attacked by another user, it should present a state corresponding to the attack, such as falling over or dropping branches and leaves.
In the process of implementing the invention, the inventors found that existing object loading approaches have at least the following defect: when a virtual scene contains a large number of interactable virtual objects, each virtual object has to be loaded one by one, which is time-consuming and causes interface stuttering.
Disclosure of Invention
In view of the foregoing, the present invention has been made to provide a method and apparatus for loading objects in a virtual scene that overcome, or at least partially solve, the foregoing problems.
According to one aspect of the present invention, there is provided an object loading method in a virtual scene, comprising:
detecting scene information corresponding to the virtual scene;
determining, according to the scene information, an interaction object that meets an interaction condition from among target objects, where a target object is an object in the virtual scene whose object visual model has been loaded; and
loading an object interaction model of the interaction object, where the object interaction model is used to respond to an interaction operation triggered for the interaction object.
According to another aspect of the present invention, there is provided an object loading apparatus in a virtual scene, comprising:
a detection module adapted to detect scene information corresponding to the virtual scene;
a determining module adapted to determine, according to the scene information, an interaction object that meets an interaction condition from among target objects, where a target object is an object in the virtual scene whose object visual model has been loaded; and
a loading module adapted to load an object interaction model of the interaction object, where the object interaction model is used to respond to an interaction operation triggered for the interaction object.
According to yet another aspect of the present invention, there is provided an electronic device, comprising: a processor, a memory, a communication interface, and a communication bus, where the processor, the memory, and the communication interface communicate with one another through the communication bus; and
the memory is configured to store at least one executable instruction that causes the processor to perform operations corresponding to the object loading method in a virtual scene described above.
According to still another aspect of the embodiments of the present invention, there is provided a computer storage medium having at least one executable instruction stored therein, the executable instruction causing a processor to perform operations corresponding to the object loading method in a virtual scene described above.
In the object loading method and device in a virtual scene provided above, scene information corresponding to the virtual scene is detected, an interaction object that meets an interaction condition is determined from among the target objects according to the scene information, and an object interaction model of the interaction object is loaded. In the invention, the object visual model is separated from the object interaction model: in the initial state each target object loads only its object visual model, which contains no interaction information and therefore occupies fewer resources, improving loading speed and avoiding interface stuttering. Interaction objects that meet the interaction condition are then determined dynamically according to the scene information, and their object interaction models are loaded so that interaction operations can be responded to through them. By separating the object visual model from the object interaction model, the method loads object interaction models on demand, which greatly improves object loading speed and keeps interface display smooth.
The foregoing is only an overview of the technical solutions of the present invention. In order that the technical means of the invention may be more clearly understood and implemented in accordance with the contents of the specification, and in order that the above and other objects, features, and advantages of the invention may become more readily apparent, specific embodiments of the invention are set forth below.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to designate like parts throughout the figures. In the drawings:
Fig. 1 shows a flowchart of an object loading method in a virtual scene according to a first embodiment of the present invention;
Fig. 2 shows a flowchart of an object loading method in a virtual scene according to a second embodiment of the present invention;
Fig. 3 shows a flowchart of a method of loading plant-class objects in a game-class scene provided by one example of the present invention;
Fig. 4 shows a schematic structural diagram of an object loading apparatus in a virtual scene according to a third embodiment of the present invention;
Fig. 5 shows a schematic structural diagram of an electronic device according to a fifth embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
Example 1
Fig. 1 shows a flowchart of an object loading method in a virtual scene according to an embodiment of the present invention. As shown in Fig. 1, the method includes:
step S110: scene information corresponding to the virtual scene is detected.
The virtual scene in this embodiment includes any scene presented on an electronic screen, such as a game scene, a virtual reality scene, or a human-computer interaction scene. Accordingly, the scene information corresponding to the virtual scene includes content related to the loading progress of the scene, the switching state of the scene, and the states of the objects contained in the scene. In a specific implementation, any information related to the virtual scene can serve as scene information; the invention does not limit its specific meaning. In general, a virtual scene contains multiple virtual objects, such as character objects, plant objects, and article objects. Each object in the virtual scene has object state information reflecting its position state, interaction state, and so on. Accordingly, the scene information corresponding to the virtual scene includes object state information of the objects that appear directly and/or indirectly in the virtual scene. An object that appears directly in the virtual scene is one that is presented in the scene, such as a character object or plant object rendered in it. An object that appears indirectly in the virtual scene is one that is not directly presented in the scene but whose motion state affects the display state of other objects in it. For example, in a game-class virtual scene, a corresponding virtual object is typically set for the game user; that object may not be presented directly in the game interface, but as it moves, the display states of other objects in the game interface are adjusted accordingly. The virtual object corresponding to the game user is also called a controlled object, and it performs operations according to the game user's control.
It can be seen that this step dynamically detects the scene information corresponding to the virtual scene, where the scene information mainly includes state information of the objects corresponding to the scene. Those skilled in the art can flexibly set the kind and number of objects corresponding to the virtual scene; the invention does not limit this.
In addition to state information of objects, the scene information may also be information related to the loading progress, viewing-angle change, or switching state of the scene. For example, when the viewing angle of the virtual scene changes gradually, the display state of the scene is adjusted along with that change (for instance, images in the scene shift dynamically from far to near); accordingly, the viewing-angle change information of the virtual scene can serve as the scene information described above.
Step S120: determining, according to the scene information, an interaction object that meets the interaction condition from among the target objects; a target object is an object in the virtual scene whose object visual model has been loaded.
The object visual models of the target objects are loaded into the virtual scene in advance. A target object generally refers to an object that can present an interaction state in response to an interaction operation; in other words, a target object is mainly an object with an interactive function. Accordingly, in this embodiment, the object visual model of each target object is loaded in the virtual scene in advance. There may be one or more target objects, and the object visual model presents the visual characteristics of a target object, such as its color, shape, and material.
When determining, according to the scene information, the interaction objects that meet the interaction condition from among the target objects, the interaction condition can be set in various ways. For example, it may be set according to object state information of an action object corresponding to the virtual scene and/or object state information of a target object contained in the scene. An action object corresponding to the virtual scene is an object that appears directly or indirectly in the scene and can actively trigger interaction operations and/or dynamically change position; for example, people and animals in a virtual scene can be action objects. Accordingly, the object state information of an action object includes the various kinds of information related to its interaction operations and/or position changes, for example: relative position information of the action object with respect to a target object, and/or operation state information of an interaction operation triggered by the action object. The operation state information describes the contents related to the interaction operation, such as its operation type, operation position, and operation result. The object state information of a target object is similar to that of an action object. In a specific implementation, the object state information of an action object can serve as the interaction condition, so that when the action object approaches a target object or initiates an interaction operation directed at it, that target object is determined to meet the interaction condition; the object state information of a target object can also serve as the interaction condition, so that when the target object approaches a preset position, it is determined to meet the interaction condition. In short, the invention does not limit the specific meaning of the interaction condition, and those skilled in the art can set it flexibly according to the dynamically monitored object state information of each object. For example, in one optional implementation, the interaction conditions mainly include distance-class interaction conditions and operation-class interaction conditions. A distance-class interaction condition judges whether the relative distance between the action object and a target object is smaller than a first preset distance threshold, and an operation-class interaction condition judges whether the action object has an interaction intent directed at the target object. The distance-class and operation-class conditions can each be used alone, or they can be combined; when combined, the interaction condition is a combined condition comprising both the distance-class and the operation-class interaction condition.
Besides being set according to the state information of objects corresponding to the virtual scene, the interaction condition can also be set according to other kinds of scene information. For example, when the viewing angle of the virtual scene changes gradually, the change is detected dynamically, and the interaction condition is set according to the image content displayed by the scene after the change. For instance, if after the viewing-angle change a preset image region in the virtual scene is detected to have moved to a designated position, the target objects located in that region are determined to meet the interaction condition. The preset image region and the designated position can be set flexibly according to the actual situation. In short, the invention does not limit the specific meaning of the interaction condition, and those skilled in the art can set it flexibly according to the scene information of the virtual scene.
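For illustration only (the patent prescribes no programming interface), the distance-class and operation-class conditions could be checked as in the following Python sketch; every name, type, and the threshold value are assumptions introduced here:

    import math
    from dataclasses import dataclass
    from typing import Optional, Tuple

    FIRST_DISTANCE_THRESHOLD = 5.0  # assumed value, e.g. a common attack range

    @dataclass
    class SceneObject:
        obj_id: str
        position: Tuple[float, float, float]

    @dataclass
    class InteractionOp:
        op_type: str                 # e.g. "attack" or "attack_preparation"
        target_id: Optional[str]

    def meets_interaction_condition(action_obj: SceneObject,
                                    target: SceneObject,
                                    op: Optional[InteractionOp]) -> bool:
        # Distance-class condition: relative distance below the first preset threshold.
        if math.dist(action_obj.position, target.position) < FIRST_DISTANCE_THRESHOLD:
            return True
        # Operation-class condition: the triggered operation expresses a direct or
        # preparatory interaction intent aimed at this target object.
        return (op is not None
                and op.target_id == target.obj_id
                and op.op_type in ("attack", "attack_preparation"))

Here the two condition classes are combined with a logical OR, i.e., each is used alone; a combined condition would require both to hold.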
Step S130: loading an object interaction model of the interaction object; wherein the object interaction model is for responding to an interaction operation triggered for the interaction object.
Specifically, by dynamically detecting object state information corresponding to the virtual scene, the interactive object which accords with the interactive condition in the target object can be dynamically determined, and then the object interactive model of the interactive object is dynamically loaded. The object interaction model is used for responding to interaction operation triggered by the action object aiming at the interaction object. In this embodiment, each target object has an object visual model and an object interaction model that are separated from each other, and each target object only loads the object visual model in an initial state. And the object interaction model of each target object can be loaded as required according to the detection result of the scene information corresponding to the virtual scene, so that timeliness of interaction response is ensured.
Therefore, the object visual model and the object interaction model are separated from each other in the mode, and the object state information corresponding to the virtual scene is dynamically detected, so that the interaction object which accords with the interaction condition in the target object is dynamically determined, the on-demand loading of the object interaction model is further realized, the loading speed of the target object is greatly improved, and the smoothness of interface display is ensured.
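Assuming the condition check sketched above, the overall flow of steps S110 to S130 could be driven by a per-detection-cycle loop like the following; `scene` and its attributes are hypothetical names, not part of the patent:

    def on_scene_update(scene) -> None:
        # S110: detect scene information (here: the action object and the last
        # interaction operation it triggered; `scene` is a hypothetical container).
        action_obj = scene.action_object
        op = scene.last_operation
        # S120/S130: for each target whose visual model is already loaded but
        # whose interaction model is not, load the interaction model once the
        # target meets the interaction condition (helper from the sketch above).
        for target in scene.target_objects:
            if target.interaction_model is None and \
                    meets_interaction_condition(action_obj, target, op):
                target.interaction_model = scene.load_interaction_model(target)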
Example 2
Fig. 2 shows a flowchart of an object loading method in a virtual scene according to a second embodiment of the present invention. As shown in Fig. 2, the method includes:
step S200: a target object contained in the virtual scene is determined.
Specifically, in the process of implementing the present invention, the inventor finds that, in a conventional object loading manner, for an object that can present an interaction state and is included in a virtual scene, an object model needs to be set for the object, and the object model needs to support both a visual display function and an interactive operation function of the object. Accordingly, when loading a virtual object capable of presenting an interactive state, the object model supporting both the visual display function and the interactive operation function needs to be loaded. Because the model has complex functions, loading is time-consuming and can easily cause interface jamming.
In order to solve the above-described problem, in the present embodiment, target objects included in a virtual scene are determined in advance so that a visual model and an interactive model, which are separated from each other, are set for the target objects. The target object in this embodiment is the above-mentioned object capable of presenting the interaction state. Preferably, the target object in this embodiment mainly refers to: an object of a collision response state corresponding to a trigger operation can be presented in response to the trigger operation of the action object. In general, the target object cannot actively trigger the interaction operation, and can only passively respond to the interaction operation triggered by the action object. Wherein, the action object refers to: virtual objects, such as movable animal objects, that can actively trigger interactions. Similar to the embodiments, action objects include objects that appear directly or indirectly in a virtual scene.
In implementation, object description information of each object contained in the virtual scene is obtained, and a target object contained in the virtual scene is determined according to the object description information. Correspondingly, in the subsequent step, setting an object visual model and an object interaction model which are mutually separated aiming at the target object, so as to improve the object loading speed in a mode of respectively loading the object visual model and the object interaction model. Therefore, the object description information selects a specific object as a target object, so that the on-demand loading of the object interaction model can be realized, and the loading efficiency of the target object is optimized. For example, an interactable object which cannot actively trigger the interaction operation and/or has low interaction frequency can be selected as a target object, and the object is usually only used for passively responding to the interaction operation, and no response is needed when the interaction operation is not received, so that the loading efficiency can be improved on the premise of not influencing the interaction effect by taking the object as the target object.
Specifically, the object description information includes at least one of: object category information, the number of homogeneous objects, the object interaction mode, and historical interaction records. Accordingly, the target objects can be determined in at least one of the following ways:
In a first implementation, the object description information is object category information, and target and non-target objects are determined according to it. For example, virtual objects whose category is animal are divided into non-target objects, and those whose category is plant into target objects. Non-target objects are loaded in a single pass, while target objects are loaded with the object visual model and the object interaction model separated from each other. Because plant-class virtual objects cannot actively trigger interaction operations, their object interaction models can be left unloaded during the initial loading stage, improving loading efficiency. The non-target objects may further include animal-class objects, whose dual-function object models (supporting both visual display and interactive operation) are loaded in one pass, and static-class objects (such as mountains and cliffs), for which only an object model supporting visual display is loaded.
In a second implementation, the object description information is the number of homogeneous objects, and target and non-target objects are divided according to that number. For example, virtual objects whose homogeneous count exceeds a preset value are divided into target objects. In practice, a game's virtual scene may contain a large number of plant objects (such as vegetation); precisely because they are so numerous, their loading takes considerable time, and determining them as target objects greatly shortens it.
In a third implementation, the object description information is the object interaction mode, and target and non-target objects are determined according to it. For example, objects with a single interaction mode are determined as target objects. In practice, some objects have multiple interaction modes while others have only one; the latter have a correspondingly low interaction probability, so dividing objects with a single interaction mode into target objects helps improve loading efficiency.
In a fourth implementation, the object description information is a historical interaction record, and target and non-target objects are determined according to the objects' historical interaction records. For example, historical records of various users interacting with each type of object are obtained in advance, objects are classified by interaction frequency, and those with low frequency are divided into target objects. Since objects with lower interaction frequency are less likely to be interacted with, their object interaction models need not be loaded while no interaction occurs, which likewise improves loading efficiency.
These division modes can be used alone or in combination, as sketched below; the invention does not limit the specific details.
In addition, step S200 is optional: in a scene with few objects or a single object type it may be omitted, and all objects in the virtual scene directly determined as target objects.
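As one way the four division modes could combine, consider the following sketch; the dataclass, field names, the "plant" category, and all thresholds are assumptions for illustration:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ObjectDescription:
        category: str                       # e.g. "plant", "animal", "static"
        same_type_count: int                # number of homogeneous objects in the scene
        interaction_modes: List[str] = field(default_factory=list)
        historical_interaction_rate: float = 1.0   # from pre-collected records

    def is_target_object(desc: ObjectDescription) -> bool:
        # Mode 1: object category information (plant-class objects cannot
        # actively trigger interactions).
        if desc.category == "plant":
            return True
        # Mode 2: many homogeneous objects (preset value assumed as 100).
        if desc.same_type_count > 100:
            return True
        # Mode 3: a single interaction mode implies low interaction probability.
        if len(desc.interaction_modes) == 1:
            return True
        # Mode 4: low historical interaction frequency (threshold assumed).
        return desc.historical_interaction_rate < 0.1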
Step S210: an object visual model and an object interaction model which are separated from each other are set for the target object.
In one aspect, visual display features of a target object are obtained, and an object visual model of the target object is generated according to the visual display features of the target object. The object visual model is used for realizing the visual display function of the target object, and specifically comprises attribute information related to the visual effect of the target object, such as coordinate position, color, size, rotation angle, scaling and the like of the target object. On the other hand, an interactive response mode of the target object is obtained, and an object interaction model of the target object is generated according to the interactive response mode of the target object. The object interaction model is used for enabling the target object to have the capability of responding to interaction operation, and specifically comprises logic functions related to collision detection. Therefore, the object visual model and the object interaction model of the target object are separately arranged, so that the on-demand loading of the object interaction model can be realized, and the loading speed is improved.
In addition, the inventor finds that in the process of implementing the present invention, multiple target objects of the same type may be included in the virtual scene, so that setting the object visual model and the object interaction model separately for each target object requires a lot of system resources. In order to save system resources and improve storage efficiency, optionally, the target object is further divided according to object types, and a corresponding type visual model and a corresponding type interaction model are respectively set for each object type. The type visual model is used for realizing the visual display function of the target object of the corresponding type, and the type interaction model is used for realizing the interaction operation function of the target object of the corresponding type. By multiplexing the same type of visual model or type interaction model with the same type of target object, the storage resource consumption can be greatly reduced, and the batch loading of the same type of target object can be conveniently realized.
In addition, the step S210 is an optional step, which aims to increase the subsequent loading speed, and in other embodiments of the present invention, the step S210 may be omitted, and the object visual model and the object interaction model may be generated and loaded in real time in the subsequent steps.
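A minimal sketch of the per-type model sharing, assuming a simple in-memory registry (the class and method names are invented for illustration):

    class TypeModelRegistry:
        """One shared visual model and interaction model per object type.

        Storing one model pair per type, rather than per object, is the
        multiplexing the embodiment describes; this layout is an assumption.
        """

        def __init__(self):
            self._visual_models = {}        # object type -> type visual model
            self._interaction_models = {}   # object type -> type interaction model

        def register(self, obj_type: str, visual_model, interaction_model) -> None:
            self._visual_models[obj_type] = visual_model
            self._interaction_models[obj_type] = interaction_model

        def visual_for(self, obj_type: str):
            return self._visual_models[obj_type]

        def interaction_for(self, obj_type: str):
            return self._interaction_models[obj_type]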
Step S220: and acquiring a target object contained in the virtual scene, and loading an object visual model of the target object in the virtual scene.
The virtual scene in the embodiment includes various scenes such as a virtual reality scene, a game scene, a man-machine interaction scene, and the like. Since the target objects have been divided in advance in step S200 and step S210, in this step, the target object included in the virtual scene can be obtained directly according to the object identifier of each object, and then the object visual model of the target object is loaded in the virtual scene.
In addition, as already mentioned above, in order to save system resources and improve storage efficiency, the target objects may be divided according to object types, and a corresponding type visual model and a type interaction model may be set for each object type. Correspondingly, before loading the object visual model of the target object in the virtual scene, further acquiring a type visual model corresponding to the object type of the target object, and obtaining the object visual model of the target object according to the type visual model.
Considering that each target object in an actual scene may have a unique pose state, for example, a plurality of vegetation objects of the same kind grow in different positions, and the growth direction, size, coordinates and other information are different, in order to accurately show the pose of each target object, in this step, firstly, a type visual model corresponding to the object type of the target object is acquired; then, object pose information corresponding to an object identifier of the target object is acquired; and finally, according to the object pose information, adjusting the pose state of the type visual model to obtain the object visual model of the target object. It follows that the method is capable of loading the adjusted type visual model as an object visual model of the target object. The object pose information is stored in association with the object identifier of the target object and is used for describing the pose state of the target object. Specifically, the object pose information includes: the position coordinates, the size, the rotation angle, the scale, and the like of the object are related to the position and the posture of the object. In addition, the object pose information may further include: in short, all attribute features related to the position and the posture of the object can be used as the object pose information. It can be seen that in this embodiment, a target object is represented by an object type and an object identifier, where the object type is used to indicate the category to which the object belongs, and the object identifier is used to uniquely identify a specific object. And, by storing the visual features common to the same type of target objects in the type visual model associated with the object type and storing the pose features unique to each target object in the object pose information associated with the object identification, the commonality of the similar objects and the characteristics of each object can be considered.
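The pose adjustment could look like the following sketch; the pose representation (yaw angle, uniform scale) and all names are assumptions:

    import copy
    from dataclasses import dataclass
    from typing import Optional, Tuple

    @dataclass
    class Pose:
        position: Tuple[float, float, float]
        rotation: float      # e.g. yaw in degrees; representation is assumed
        scale: float

    @dataclass
    class TypeVisualModel:
        mesh: str                       # stand-in for the shared visual data
        pose: Optional[Pose] = None     # per-object pose, unset on the shared model

    def instantiate_visual_model(type_visual: TypeVisualModel,
                                 pose_by_id: dict, obj_id: str) -> TypeVisualModel:
        # Copy the shared type visual model and apply the object pose
        # information stored in association with this object identifier.
        instance = copy.deepcopy(type_visual)
        instance.pose = pose_by_id[obj_id]
        return instance

Here `pose_by_id` stands for the store that associates object pose information with each object identifier.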
Step S230: scene information corresponding to the virtual scene is detected.
In this embodiment, the scene information corresponding to the virtual scene mainly refers to object state information of the action object corresponding to the scene. The action object appears directly or indirectly in the virtual scene and mainly refers to an object that can actively trigger interaction actions, such as an animal object or character object in the scene, or the controlled object corresponding to the game user in a game scene. Specifically, the object state information of the action object includes relative position information of the action object with respect to the target objects, and/or operation state information of interaction operations triggered by the action object. Since the action object can move, its relative position with respect to a target object changes constantly. In addition, the interaction operations triggered by the action object are detected dynamically to obtain their operation state information.
Of course, those skilled in the art will understand that the scene information corresponding to the virtual scene may also be other kinds of scene-related information; the invention does not limit this.
Step S240: and determining the interactive object which accords with the interactive condition in the target object according to the scene information.
Specifically, whether the target object meets the interaction condition is judged according to the scene information, and the target object meeting the interaction condition is determined to be the interaction object. When the number of the target objects is a plurality of the target objects, if all the target objects accord with the interaction conditions, all the target objects are interaction objects; if only part of the target objects meet the interaction conditions, the part of the target objects are interaction objects. Because the scene information is dynamically changed, the interactive objects which meet the interactive conditions in the target objects are also dynamically changed. Specifically, the interaction condition includes at least one of the following:
the first interaction condition is a distance judgment condition: the relative distance between the action object and the target object is smaller than a first preset distance threshold. Specifically, according to the relative position information of the action object relative to the target object, when the relative distance between the action object and the target object is smaller than a first preset distance threshold value, the target object is determined to accord with the interaction condition. Therefore, the first interaction condition is a distance type condition, and is mainly judged according to the relative distance between the target object and the action object or the preset position. In the implementation, if the relative distance between the action object and any target object is smaller than the first preset distance threshold, the target object with the relative distance smaller than the first preset distance threshold is determined to be the interaction object conforming to the interaction condition. The first preset distance threshold may be set according to a specific scenario, for example, in a game scenario, the first preset distance threshold may be set according to a distance value corresponding to a common attack range.
The second interaction condition is an interaction intention judgment condition: the action object has an interaction intention aiming at the target object; the interaction intention is determined according to the operation type of the interaction operation triggered by the action object. In specific implementation, information such as operation type of interaction operation triggered by the action object is obtained, and when the action object is determined to have interaction intention for any target object according to the operation type, the target object with the interaction intention is determined to be an interaction object meeting interaction conditions. Specifically, the interactive operation triggered by the action object is detected, and the action object is judged to have the target object of the interactive intention according to the operation type and the operation state of the interactive operation. Wherein, the target object of the action object which triggers the interaction behavior can be determined as the interaction object with the interaction intention, such as the target object of the action object which executes the attack operation is determined as the interaction object with the interaction intention; it is also possible to determine that an action object has not triggered an attack action and that a target object that is to trigger an attack operation (e.g., a preparatory action before an attack such as raising an arm is performed) is an interactive object having an interactive intention.
Wherein the second interaction condition is aimed at judging whether the action object has interaction intention for the target object. In particular, the operation types of various interactive operations related to the interactive intention may be predetermined, so as to determine whether the action object has the interactive intention according to the operation type of the interactive operation triggered by the action object. Among them, operation types of various kinds of interactive operations related to interactive intents can be further divided into: a first type of operation corresponding to a direct intent, such as an attack type operation; and a second operation type corresponding to the indirect intent, such as a prepare operation corresponding to the attack class operation. By detecting whether the operation type of the interaction operation triggered by the action object aiming at the target object belongs to the first operation type or the second operation type, the operation intention of the action object can be accurately judged. The intention of the action object can be prejudged in advance by setting the second operation type, so that the timely loading of the object interaction model of the target object is ensured.
The two modes can be used singly or in combination. When the two are combined, the target object can be determined to be an interactive object when the target object simultaneously meets the two conditions, namely, the distance is relatively close and the interactive intention exists. Or when the two conditions are combined, priority can be set for the two conditions, for example, the distance judgment condition is executed first, and the interaction intention judgment condition is further executed under the condition that the distance judgment condition is met, so that the interaction intention is monitored only for a plurality of target objects with relatively close distances, and the monitoring cost is reduced. For another example, the interactive intention judging condition is executed first, and the distance judging condition is further executed under the condition of having the interactive intention, so that the distance is monitored only for a plurality of target objects with the interactive intention, and the aim of reducing the monitoring cost can be achieved.
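The distance-first priority scheme could be expressed as follows, reusing the SceneObject and InteractionOp shapes from the earlier sketch (the threshold default is an assumption):

    import math

    def find_interaction_objects(action_obj, targets, op, first_threshold=5.0):
        # Priority scheme: evaluate the cheap distance condition first, then
        # check interaction intent only for the nearby targets, so intent is
        # monitored for fewer objects and monitoring cost drops.
        near = [t for t in targets
                if math.dist(action_obj.position, t.position) < first_threshold]
        return [t for t in near
                if op is not None and op.target_id == t.obj_id]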
Step S250: and loading an object interaction model of the interaction object.
After an interaction object is determined, its object interaction model is loaded. To improve model reuse and reduce storage space, type interaction models are set for target objects of the same type; correspondingly, before loading the interaction object's object interaction model, the type interaction model corresponding to the interaction object's object type is obtained, and the object interaction model is derived from it.
In addition, because each target object has its own pose information, in this step the pose state of the type interaction model can be further adjusted according to the object pose information corresponding to the interaction object's object identifier, yielding the interaction object's object interaction model. This is the same object pose information stored in association with the object identifier obtained in step S220. Through it, the type interaction model shared by same-type target objects is adjusted into an object interaction model matching the current target object's specific pose, optimizing each target object's display state while keeping storage as small as possible.
In a specific implementation, the type interaction model can be realized as a collision box, used specifically to store collision detection information. To ensure interaction accuracy, the type interaction model must match the type visual model exactly: when the shape of the collision box's detection region is fully consistent with the shape presented by the type visual model, a collision at any edge of the target object is guaranteed an accurate response, improving accuracy. In this embodiment, a shared type visual model and type interaction model are set in advance for same-type target objects, which, compared with setting separate models per object, makes it easier to match the two models' shapes precisely while reducing storage. Furthermore, object pose information is set for each target object, and the pose states of both the type visual model and the type interaction model are adjusted through it; since both adjustments use the same object pose information, the adjusted poses of the two models match exactly, further improving the accuracy of interaction responses.
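Continuing the earlier sketches (registry, pose store), the on-demand loading of a pose-matched collision box might look like this; all names are assumptions:

    import copy

    def load_interaction_model(interaction_obj, registry, pose_by_id):
        # Copy the shared type interaction model (a collision box holding
        # collision detection information) and adjust it with the same object
        # pose information used for the visual model, so the detection region
        # stays exactly aligned with the rendered shape.
        box = copy.deepcopy(registry.interaction_for(interaction_obj.obj_type))
        box.pose = pose_by_id[interaction_obj.obj_id]
        interaction_obj.interaction_model = box
        return box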
Step S260: and determining an associated object with an association relation with the interactive object, and loading an object interaction model of the associated object.
The sequence of step S260 and step S250 is not limited, and those skilled in the art can understand that step S260 and step S250 may be performed simultaneously. In addition, step S260 is an optional step, and in other embodiments of the present invention, step S260 may be omitted.
When the number of target objects included in the virtual scene is plural, since the range of influence of the interactive operation is large, it may be caused that the adjacent plural target objects are influenced by the interactive operation. At this time, in order to truly simulate the interaction state of each target object, when the object interaction model of the interaction object is loaded, the associated object having an association relationship with the interaction object needs to be further determined, so as to further load the object interaction model of the associated object.
Specifically, when determining the associated object having the association relationship with the interactive object, it may be implemented in various manners, for example, determining the target object within the first preset range of the interactive object as the associated object, where the manner is to determine the target object within the vicinity of the interactive object as the associated object; for another example, a target object within a second preset range of the action object is determined as the associated object in a manner aimed at determining the target object within the vicinity of the action object as the associated object. The first preset range and/or the second preset range may be set in various manners, for example, the first preset range and/or the second preset range may be determined by at least one of the following manners:
In a first determination mode, acquiring equipment attribute information of terminal equipment displaying a virtual scene, determining a regional range threshold corresponding to the equipment attribute information, and determining a first preset range and/or a second preset range according to the regional range threshold. Wherein, the equipment attribute information of the terminal equipment comprises: device type information (e.g., mobile device, fixed device, etc.), and/or device capability information (e.g., hardware configuration information, etc.). Correspondingly, if the equipment type information is mobile equipment and/or the equipment configuration is determined to be low according to the equipment performance information, the area range threshold value needs to be reduced so as to avoid the problem of blocking caused by overlarge loading amount; if the device type information is fixed device and/or the configuration of the device is determined to be high according to the device performance information, the area range threshold can be increased, so that a target object in a larger range is loaded, and the interaction experience is more real. Therefore, the corresponding relation between the equipment attribute information and the regional range threshold value is stored in advance, so that the size of the regional range threshold value can be flexibly determined according to the equipment attribute information. The regional scope threshold is used to set the size of the first preset scope and/or the second preset scope.
In a second determination mode, motion track information of an action object and operation position information of a plurality of interactive operations continuously triggered by the action object are obtained, an intended operation area of the action object is predicted according to the motion track information and the operation position information, and a first preset range and/or a second preset range are determined according to the intended operation area. For example, from the motion trajectory information of the action object, the motion trend and the motion direction of the action object can be determined, so that the intended operation region of the action object is predicted from the motion trend and the motion direction. For another example, according to operation position information of a plurality of interactive operations continuously triggered by the action object, an interaction trend and an interaction direction of the action object can be determined, so that an intended operation area of the action object is predicted according to the interaction trend and the interaction direction. For example, when an action object performs an attack class interaction operation, an intended operation region may be predicted according to an attack direction of a plurality of attack class interactions operations. By predicting the intended operation area, the possible operation area can be predicted before the action object triggers the interactive operation, so that the setting of the associated object is more reasonable. In practical situations, in this manner, the association object having the association relationship with the interaction object may also be determined directly according to the predicted intended operation area of the action object, without setting the first preset range and the second preset range.
In a third determination mode, a historical interaction record of the action object is obtained, interaction preference of the action object for various types of target objects is determined according to the historical interaction record, preference types of the action object are determined according to the interaction preference, and a first preset range and/or a second preset range are determined according to the region where the target object corresponding to the preference types is located. Specifically, through the historical interaction record of the action object, the interaction frequency, interaction mode and other information of the action object aiming at different types of target objects can be determined, so that the interaction preference of the action object aiming at various types of target objects is obtained, and the preference type is determined according to the interaction preference. For example, a target object with a higher interaction frequency may be screened as the associated object. For another example, the interaction order or interaction combination of the action objects for different types of target objects may also be determined according to the historical interaction record, so that the associated objects are set according to the interaction order or interaction combination. By interactive order is meant: the action object is used for triggering the sequence information of the interaction operation aiming at a plurality of different target objects, for example, the action object tends to interact according to the sequence of the first type target object, the second type target object and the third type target object, and correspondingly, if the current interaction object is the second type target object, the third type target object can be used as the associated object. By interactive combination is meant: the action object tends to interact with respect to the object combination composed of a plurality of target objects, for example, the action object interacts with the target object a and also interacts with the target object B, and then the target object B can be determined as an associated object when the target object a is the current interaction object.
In the fourth determination mode, the first preset range and/or the second preset range are determined according to task sequence information and/or route setting information corresponding to the action object. The task sequence information corresponding to the action object is used to describe each task to be executed by the action object, for example, in a game scene, the action object generally executes a series of operation actions for completing the specified task according to the setting of the game level, and accordingly, the first preset range and/or the second preset range can be determined according to the series of operation actions for completing the specified task. Therefore, the task sequence information can be obtained in advance according to the game level setting, and correspondingly, the intention operation area of the action object can be predicted according to the task sequence information, and then the associated object is screened according to the intention operation area. In addition, the route setting information may be determined based on the topographic information in the virtual scene, and if, for example, there is an unreachable area such as a canyon or cliff around the action object, it is necessary to determine a route based on the reachable area, and further set the associated object based on the route.
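A sketch of the first determination mode and the resulting associated-object selection; the threshold values and the single shared threshold for both preset ranges are simplifying assumptions:

    import math

    def area_range_threshold(device_type: str, high_end: bool) -> float:
        # First determination mode: the threshold follows the device attributes.
        # Values are assumptions; a real mapping would be stored in advance.
        if device_type == "mobile" or not high_end:
            return 3.0      # smaller range, avoiding stutter from over-loading
        return 10.0         # fixed, well-configured device: load a wider area

    def associated_objects(interaction_obj, action_obj, targets, threshold: float):
        # Targets within the first preset range (around the interaction object)
        # or the second preset range (around the action object); here the same
        # threshold sizes both ranges for simplicity.
        return [t for t in targets if t is not interaction_obj and (
            math.dist(t.position, interaction_obj.position) < threshold
            or math.dist(t.position, action_obj.position) < threshold)]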
Step S270: and responding to the interaction operation triggered by the action object, and controlling the display state of the object visual model of the target object to be a collision response state.
Specifically, in response to an interactive operation triggered by an action object, in the case that the interactive operation is determined to collide with the interactive object according to the object interaction model, a collision response state corresponding to the operation type and/or the collision position of the interactive operation is determined, and the display state of the object vision model of the interactive object is controlled to be changed from an initial state to a collision response state. When the object visual model is not collided, the corresponding display state is an initial state.
In practice, it is necessary to store each collision position and a collision response state corresponding to the collision position in the object interaction model. For example, for a plant-based target object, each branch in the plant-based target object is taken as a collision position, and the correspondence between each collision position and the collision response state is stored in the object interaction model in advance. For example, when the branch A is collided, the collision response state is that the branch A and branches beside the branch A drop; when the branch B is collided, the collision response state is that the branch B and branches beside the branch B drop; when the fruit is collided, the collision response state is that the fruit and branches beside the fruit drop. In short, by storing each collision position and its corresponding collision response state in advance, the display state of the control object visual model can be changed.
In addition, in actual cases, the collision response state is related to the operation type of the interaction operation in addition to the collision position of the interaction operation, for example, the collision response states corresponding to the interaction operations of different attack types are also different. Therefore, various operation types and their corresponding collision response states also need to be stored in the object interaction model. It can be seen that the object interaction model stores a collision response mapping table, which is used to store the mapping relationship among the collision position, the interaction operation, and the collision response state.
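The collision response mapping table could be represented as a simple lookup keyed by position and operation type; the keys and state names below are illustrative only, and a real table would come from the scene's content data:

    # (collision position, operation type) -> collision response state
    COLLISION_RESPONSES = {
        ("branch_a", "attack"): "branch_a_and_adjacent_branches_drop",
        ("branch_b", "attack"): "branch_b_and_adjacent_branches_drop",
        ("fruit",    "attack"): "fruit_and_nearby_branches_drop",
    }

    def apply_collision_response(visual_model, collision_position, op_type):
        # Look up the mapped state and switch the visual model's display state
        # away from its initial state; the attribute name is an assumption.
        state = COLLISION_RESPONSES.get((collision_position, op_type))
        if state is not None:
            visual_model.display_state = state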
Step S280: and unloading the object interaction model of the interaction object conforming to the unloading condition under the condition that the interaction object of the loaded object interaction model conforms to the unloading condition.
Specifically, judging whether the interaction object of the loaded object interaction model accords with an unloading condition or not; and if yes, unloading the object interaction model of the interaction object conforming to the unloading condition. Wherein the unloading conditions include at least one of:
the first unloading condition is a distance class unloading condition, comprising: the relative distance between the interaction object and the action object of the loaded object interaction model is larger than a second preset distance threshold. Specifically, judging whether the relative distance between the interaction object loaded with the object interaction model and the action object is larger than a second preset distance threshold value or not; if yes, determining the interactive object with the relative distance larger than a second preset distance threshold value as the interactive object conforming to the unloading condition.
The second type of unloading condition is an interaction-class unloading condition, namely: the no-interaction duration between an interaction object whose object interaction model has been loaded and the action object is greater than a preset duration threshold. Specifically, it is judged whether this no-interaction duration is greater than the preset duration threshold; if yes, the interaction object whose no-interaction duration exceeds the threshold is determined to meet the unloading condition.
The two unloading conditions can be used alone or in combination, and those skilled in the art can flexibly set other unloading conditions; the invention is not limited in this respect. When the two are combined, an interaction object is determined to meet the unloading condition only when both hold, that is, when it is far away and its no-interaction duration exceeds the preset duration threshold. Alternatively, when combining the two, priorities can be set for them. For example, the distance-class condition is judged first, and the interaction-class condition is judged further only for objects that meet it, so that the no-interaction duration is monitored only for the few target objects that are far away, reducing monitoring cost. As another example, the interaction-class condition is judged first, and the distance-class condition is judged further only for objects that meet it, so that distance is monitored only for the few target objects with long no-interaction durations, likewise reducing monitoring cost.
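A minimal Python sketch of the first ordering (distance checked first, no-interaction duration checked only for distant objects) follows; the class name, thresholds, and coordinates are assumptions of this sketch.

import math
from dataclasses import dataclass

SECOND_DISTANCE_THRESHOLD = 30.0  # second preset distance threshold (scene units)
NO_INTERACTION_THRESHOLD = 60.0   # preset duration threshold (seconds)

@dataclass
class LoadedObject:
    x: float
    y: float
    last_interaction_time: float  # timestamp of the most recent interaction

def meets_unload_condition(obj: LoadedObject, actor_xy: tuple, now: float) -> bool:
    # Distance-class condition first: it is cheap and prunes most candidates,
    # so the no-interaction timer is only monitored for distant objects.
    dist = math.hypot(obj.x - actor_xy[0], obj.y - actor_xy[1])
    if dist <= SECOND_DISTANCE_THRESHOLD:
        return False
    # Interaction-class condition, checked only for objects that are far away.
    return now - obj.last_interaction_time > NO_INTERACTION_THRESHOLD

bush = LoadedObject(x=100.0, y=0.0, last_interaction_time=0.0)
print(meets_unload_condition(bush, (0.0, 0.0), now=120.0))   # True: far and idle
print(meets_unload_condition(bush, (95.0, 0.0), now=120.0))  # False: still close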
Those skilled in the art can flexibly adjust the execution order of the above steps, split them into more steps, combine them into fewer steps, or delete some of them; the first and second embodiments may also be combined with each other, and the invention is not limited in this respect. Moreover, the above steps may be performed in a loop: for example, after an object interaction model is unloaded, if the target object is detected to meet the interaction condition again, the corresponding object interaction model is loaded again. In short, the loading and unloading of object interaction models can be performed dynamically according to the detection result of the scene information.
In summary, the method separates the object visual model from the object interaction model and dynamically detects the scene information corresponding to the virtual scene, so as to dynamically determine the interaction objects among the target objects that meet the interaction condition and load their object interaction models on demand, which greatly improves the object loading speed and ensures the smoothness of interface display. Furthermore, by setting a common type visual model and type interaction model for target objects of the same type, storage space can be reduced, the model reuse rate improved, and accurate matching between the type visual model and the type interaction model more easily guaranteed, improving the accuracy of interaction responses. Moreover, by setting object pose information for each target object, the pose state of each target object can be flexibly set on the basis of the type visual model and the type interaction model.
For ease of understanding, a specific implementation of this embodiment is described in detail below through a concrete example in which the virtual scene is a game class scene. In a game class scene, a large number of plant class objects, such as shrubs, grass, and trees, need to be loaded to simulate a real-world scene. In addition, to simulate real interaction, a plant class object that receives an interaction operation (such as a collision or an attack) needs to be able to display a corresponding interaction state, such as swaying or dropping branches. In the conventional game loading manner, the object resources of every plant class object are loaded in one pass, including the object models implementing both the visual display function and the interactive operation function, which consumes a large amount of system resources and causes the game interface to stutter. To solve this problem, fig. 3 shows a plant class object loading method in a game class scene; as shown in fig. 3, the example includes the following steps:
Step S310: loading the object visual model of each plant class object.
Specifically, when a game user enters the game world, that is, when the game interface is loaded, the object visual model of each plant class object contained in the game interface is loaded at the same time. The object visual model, also called a plant visual resource, is used to display the visual characteristics of the plant class object.
Step S320: detecting whether the action object enters the interaction range of the plant class object.
Specifically, it is detected whether the distance between the action object and the plant class object is smaller than a first preset distance threshold; if yes, the action object has entered the interaction range of the plant class object, and step S330 is executed.
Step S330: it is detected whether the action object has an interactive intention for the plant class object.
Specifically, the interaction operation triggered by the action object is detected; if it is determined from this operation that the action object has an interaction intention with respect to the plant class object, step S340 is executed. The action object is determined to have an interaction intention with respect to the plant class object when it triggers an attack-class interaction operation against the plant class object, or triggers a preparation-class interaction operation associated with an attack-class interaction operation against the plant class object. A preparation-class interaction operation is a preparatory operation performed before an attack-class interaction operation is executed.
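A minimal Python sketch of this intent check follows; the operation-type names are assumptions of this sketch.

ATTACK_OPS = {"slash", "shoot"}       # attack-class interaction operations
PREPARE_OPS = {"draw_weapon", "aim"}  # preparation operations associated with attacks

def has_interaction_intent(op_type: str) -> bool:
    # Either an attack itself or a preparation step for one signals intent.
    return op_type in ATTACK_OPS or op_type in PREPARE_OPS

print(has_interaction_intent("aim"))   # True: preparation implies intent
print(has_interaction_intent("jump"))  # False: no intent, so no loading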
In an optional implementation manner, step S330 may be omitted, and step S340 is directly performed when the detection result of step S320 is yes.
Step S340: loading the object interaction model of the plant class object.
Specifically, two object resources are set in advance for each plant class object: one is the object visual model, and the other is the object interaction model. Accordingly, the method in this example dynamically loads the object interaction model of a plant class object through a logic determination module. The logic determination module may be located inside each plant class object, so that on-demand loading of each plant class object's interaction model is implemented by the module contained in that object. Alternatively, the logic determination module may be an external control module independent of the plant class objects, which uniformly controls the loading and unloading of the object interaction model of every plant class object. The invention does not limit the specific implementation of the logic determination module.
Optionally, considering that an interaction operation typically affects the plant class objects within a certain area, when the object interaction model of one plant class object is loaded, the object interaction models of several plant class objects located around it may also be loaded simultaneously; for example, every plant class object within a preset range of that plant class object is loaded at the same time.
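The following Python sketch illustrates this neighbor loading; the radius, coordinates, and class name are assumptions of this sketch.

import math
from dataclasses import dataclass

NEIGHBOR_RADIUS = 5.0  # preset range around the interacted plant (scene units)

@dataclass
class Plant:
    x: float
    y: float
    interaction_loaded: bool = False

def load_with_neighbors(target: Plant, plants: list) -> None:
    # Load the hit plant's interaction model together with nearby plants'.
    for p in plants:
        if math.hypot(p.x - target.x, p.y - target.y) <= NEIGHBOR_RADIUS:
            p.interaction_loaded = True

grove = [Plant(0.0, 0.0), Plant(3.0, 0.0), Plant(50.0, 0.0)]
load_with_neighbors(grove[0], grove)
print([p.interaction_loaded for p in grove])  # [True, True, False]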
Step S350: responding to the interaction operation triggered by the action object, and controlling the object visual model to change its display state through the object interaction model of the plant class object.
Specifically, when an interaction operation triggered by the action object against the plant class object is detected, the operation position and/or operation type of the interaction operation are determined through the object interaction model of the plant class object, and the display state of the object visual model is controlled to change into the collision response state corresponding to that operation position and/or operation type. For example, when the collision is determined to be successful according to the collision information (i.e., the game user hits the plant class object), the corresponding items (such as fruit, branches, or leaves) are dropped according to the object entity information of the plant class object.
Step S360: judging whether the plant class object meets the unloading condition; if yes, executing step S370.
The unloading condition includes: the action object leaves the interaction range of the plant class object, and/or the no-interaction duration between the action object and the plant class object reaches a preset duration threshold. The two conditions may be used alone or in combination.
Step S370: unloading the object interaction model of the plant class object.
Specifically, when the action object has moved a certain distance away and/or has not interacted with the plant class object for a long time, the object interaction model of the plant class object is unloaded to save system resources.
It will of course be understood that in this example the above steps are performed continuously; accordingly, when the action object enters the interaction range of the plant class object again, the object interaction model of the plant class object is loaded again, thereby achieving dynamic loading and dynamic unloading of the object interaction model.
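As a worked illustration of this loop, a minimal Python sketch follows, written as an external control module (one of the two placements described in step S340). The class names, thresholds, and coordinates are assumptions of this sketch, and the optional intent check of step S330 is omitted, as permitted above.

import math
from dataclasses import dataclass

INTERACT_RANGE = 10.0  # first preset distance threshold (step S320)
UNLOAD_RANGE = 30.0    # leave-range distance used by the unloading condition
IDLE_LIMIT = 60.0      # preset no-interaction duration threshold, in seconds

@dataclass
class Plant:
    x: float
    y: float
    interaction_loaded: bool = False   # visual model is assumed always loaded
    last_interaction: float = 0.0

class InteractionModelController:
    """External control module: loads/unloads interaction models each tick."""

    def __init__(self, plants):
        self.plants = plants

    def tick(self, actor_xy, now):
        for p in self.plants:
            d = math.hypot(p.x - actor_xy[0], p.y - actor_xy[1])
            if not p.interaction_loaded and d < INTERACT_RANGE:
                p.interaction_loaded = True    # step S340: load on approach
                p.last_interaction = now
            elif p.interaction_loaded and (
                d > UNLOAD_RANGE or now - p.last_interaction > IDLE_LIMIT
            ):
                p.interaction_loaded = False   # step S370: unload

plants = [Plant(5.0, 0.0), Plant(100.0, 0.0)]
controller = InteractionModelController(plants)
controller.tick(actor_xy=(0.0, 0.0), now=0.0)
print([p.interaction_loaded for p in plants])  # [True, False]
controller.tick(actor_xy=(200.0, 0.0), now=120.0)
print([p.interaction_loaded for p in plants])  # [False, False]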
Through the above example, the object interaction models of the plant class objects in the game interface are loaded on demand, which avoids interface stuttering and saves system resources.
Example III
Fig. 4 shows an object loading device in a virtual scene according to a third embodiment of the present invention, including:
a detection module 41 adapted to detect scene information corresponding to a virtual scene;
a determining module 42 adapted to determine, according to the scene information, an interaction object that meets the interaction condition among the target objects; the target object is an object contained in the virtual scene whose object visual model has been loaded;
a loading module 43 adapted to load an object interaction model of the interaction object; wherein the object interaction model is for responding to an interaction operation triggered for the interaction object.
Optionally, the loading module 43 is further adapted to: before the scene information corresponding to the virtual scene is detected, determine the target objects contained in the virtual scene according to object description information, and load the object visual models of the target objects in the virtual scene; wherein the object description information includes at least one of: object category information, number of homogeneous objects, object interaction mode, and historical interaction record.
Optionally, the loading module 43 is specifically adapted to: acquiring a type visual model corresponding to the object type of the target object, and acquiring the object visual model of the target object according to the type visual model;
acquiring a type interaction model corresponding to the object type of the interaction object; and obtaining an object interaction model of the interaction object according to the type interaction model.
Optionally, the loading module 43 is specifically adapted to: acquiring object pose information corresponding to an object identifier of a target object; according to the object pose information, adjusting the pose state of the type visual model to obtain an object visual model;
acquiring object pose information corresponding to object identifiers of interactive objects; and adjusting the pose state of the type interaction model according to the object pose information to obtain the object interaction model.
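As an illustration of this type-model reuse, the following Python sketch stores one shared model per object category and combines it with per-object pose information keyed by object identifier; all names and values here (TYPE_MODELS, Pose, the bush entries) are assumptions of this sketch.

from dataclasses import dataclass

@dataclass(frozen=True)
class Pose:
    x: float
    y: float
    rotation_deg: float
    scale: float

# One shared type-level model per object category, stored only once.
TYPE_MODELS = {"bush": "bush_mesh_and_collider_v1"}

# Per-object pose information, keyed by object identifier.
OBJECT_POSES = {
    "bush_001": Pose(12.0, 3.0, 90.0, 1.2),
    "bush_002": Pose(40.0, 8.0, 0.0, 0.8),
}

def instantiate(object_id: str, category: str):
    """Combine the shared type model with this object's own pose state."""
    return TYPE_MODELS[category], OBJECT_POSES[object_id]

print(instantiate("bush_001", "bush"))
# ('bush_mesh_and_collider_v1', Pose(x=12.0, y=3.0, rotation_deg=90.0, scale=1.2))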
Optionally, the apparatus further comprises:
the response module 44 is adapted to, in response to an interaction operation triggered for an interaction object, determine a collision response state corresponding to an operation type and/or a collision position of the interaction operation in case that the interaction operation collides with the interaction object according to the object interaction model, and control a display state of an object visual model of the interaction object to be the collision response state.
Optionally, the scene information corresponding to the virtual scene includes at least one of: object state information of an action object corresponding to the virtual scene, and object state information of a target object included in the virtual scene.
Optionally, the object state information of the action object corresponding to the virtual scene includes: relative position information of the action object relative to the target object and/or operation state information of interaction operation triggered by the action object;
the interaction condition includes at least one of:
the relative distance between the action object and the target object is smaller than a first preset distance threshold;
the action object has an interaction intention aiming at the target object; the interaction intention is determined according to the operation type of the interaction operation triggered by the action object.
Optionally, the loading module 43 is further adapted to: and determining an associated object with an association relation with the interactive object, and loading an object interaction model of the associated object.
Optionally, the loading module 43 is specifically adapted to: determining a target object in a first preset range of the interactive object as an associated object; and/or determining the target object in the second preset range of the action object as the associated object.
Optionally, the first preset range and/or the second preset range is determined by at least one of the following means:
acquiring equipment attribute information of terminal equipment displaying a virtual scene, determining an area range threshold corresponding to the equipment attribute information, and determining a first preset range and/or a second preset range according to the area range threshold;
acquiring motion trail information of an action object and operation position information of a plurality of interactive operations continuously triggered by the action object, predicting an intended operation area of the action object according to the motion trail information and the operation position information, and determining a first preset range and/or a second preset range according to the intended operation area;
acquiring a historical interaction record of an action object, determining interaction preference of the action object for various types of target objects according to the historical interaction record, determining a preference type of the action object according to the interaction preference, and determining a first preset range and/or a second preset range according to a region where the target object corresponding to the preference type is located; and
and determining a first preset range and/or a second preset range according to the task sequence information and/or the route setting information corresponding to the action object.
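As a small illustration of the first of the four means above, the following Python sketch derives the preset range from device attribute information, so weaker terminals keep a smaller preloaded area; the tiers and thresholds are assumptions of this sketch.

RANGE_BY_TIER = {"low_end": 5.0, "mid_range": 10.0, "high_end": 20.0}

def preset_range(device_memory_gb: float) -> float:
    # Map a device attribute (here, memory size) to an area range threshold.
    if device_memory_gb < 4:
        tier = "low_end"
    elif device_memory_gb < 8:
        tier = "mid_range"
    else:
        tier = "high_end"
    return RANGE_BY_TIER[tier]

print(preset_range(6.0))  # 10.0: a mid-range terminal gets a moderate radius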
Optionally, the loading module 43 is further adapted to: after the object interaction model of the interaction object is loaded, unload the object interaction model of an interaction object that meets the unloading condition, in the case that an interaction object whose object interaction model has been loaded meets the unloading condition.
Optionally, the unloading conditions include:
the relative distance between the interaction object of the loaded object interaction model and the action object is larger than a second preset distance threshold; and/or,
the non-interactive time length between the interactive object and the action object of the loaded object interaction model is larger than a preset time length threshold.
Optionally, the virtual scene is a game class scene, and the target object includes: a plant class object; the action objects in the virtual scene include: a controlled object corresponding to a game user.
For the specific structure and working principle of each module, reference may be made to the description of the corresponding parts of the method embodiments; details are not repeated here.
Example IV
A fourth embodiment of the present invention provides a non-volatile computer storage medium in which at least one executable instruction is stored, and the executable instruction can cause a processor to perform the object loading method in a virtual scene of any of the above method embodiments. Specifically, the executable instruction may cause the processor to perform the operations corresponding to the above method embodiments.
Example five
Fig. 5 shows a schematic structural diagram of an electronic device according to a fifth embodiment of the present invention; the specific embodiments of the present invention do not limit the specific implementation of the electronic device.
As shown in fig. 5, the electronic device may include: a processor 502, a communication interface (Communications Interface) 506, a memory 504, and a communication bus 508.
Wherein:
processor 502, communication interface 506, and memory 504 communicate with each other via communication bus 508.
A communication interface 506 for communicating with network elements of other devices, such as clients or other servers.
The processor 502 is configured to execute the program 510, and may specifically perform relevant steps in the embodiment of the method for loading objects in a virtual scene.
In particular, program 510 may include program code including computer-operating instructions.
The processor 502 may be a central processing unit CPU, or a specific integrated circuit ASIC (Application Specific Integrated Circuit), or one or more integrated circuits configured to implement embodiments of the present invention. The one or more processors included in the electronic device may be the same type of processor, such as one or more CPUs; but may also be different types of processors such as one or more CPUs and one or more ASICs.
Memory 504 for storing program 510. The memory 504 may comprise high-speed RAM memory or may further comprise non-volatile memory (non-volatile memory), such as at least one disk memory.
The program 510 may be specifically configured to cause the processor 502 to perform the respective operations corresponding to the above-described method embodiments.
The algorithms and displays presented herein are not inherently related to any particular computer, virtual machine, or other apparatus. Various general purpose devices may also be used with the teachings herein. The required structure for the construction of such devices is apparent from the description above. In addition, the present invention is not directed to any particular programming language. It will be appreciated that the teachings of the present invention described herein may be implemented in a variety of programming languages, and the above description of specific languages is provided for disclosure of enablement and best mode of the present invention.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, this method of disclosure should not be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim.
Those skilled in the art will appreciate that the modules in the apparatus of the embodiments may be adaptively changed and disposed in one or more apparatuses different from the embodiments. The modules or units or components of the embodiments may be combined into one module or unit or component and, furthermore, they may be divided into a plurality of sub-modules or sub-units or sub-components. Any combination of all features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or units of any method or apparatus so disclosed, may be used in combination, except insofar as at least some of such features and/or processes or units are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings), may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features but not others included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, any of the claimed embodiments can be used in any combination.
Various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that some or all of the functions of some or all of the components in an apparatus according to embodiments of the present invention may be implemented in practice using a microprocessor or Digital Signal Processor (DSP). The present invention can also be implemented as an apparatus or device program (e.g., a computer program and a computer program product) for performing a portion or all of the methods described herein. Such a program embodying the present invention may be stored on a computer readable medium, or may have the form of one or more signals. Such signals may be downloaded from an internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, etc. do not denote any order. These words may be interpreted as names.

Claims (17)

1. An object loading method in a virtual scene, comprising:
detecting scene information corresponding to the virtual scene;
determining, according to the scene information, an interaction object which meets the interaction condition among the target objects; the target object is an object contained in the virtual scene whose object visual model has been loaded;
loading an object interaction model of the interaction object; the object interaction model is used for responding to an interaction operation triggered for the interaction object; the target object is provided with an object visual model and an object interaction model which are separated from each other.
2. The method of claim 1, wherein prior to detecting scene information corresponding to the virtual scene, further comprising:
and determining a target object contained in the virtual scene according to the object description information, and loading an object visual model of the target object in the virtual scene.
3. The method of claim 2, wherein the object description information includes at least one of: object category information, number of homogeneous objects, object interaction mode, and history interaction record.
4. The method of claim 2, wherein prior to the step of loading the object visual model of the target object in the virtual scene, further comprising: acquiring a type visual model corresponding to the object type of the target object, and acquiring the object visual model of the target object according to the type visual model;
before loading the object interaction model of the interaction object, the method further comprises the following steps: acquiring a type interaction model corresponding to the object type of the interaction object; and obtaining an object interaction model of the interaction object according to the type interaction model.
5. The method of claim 4, wherein the deriving an object visual model of the target object from the type visual model comprises: acquiring object pose information corresponding to an object identifier of the target object; according to the object pose information, adjusting the pose state of the type visual model to obtain the object visual model;
the obtaining the object interaction model of the interaction object according to the type interaction model comprises the following steps: acquiring object pose information corresponding to the object identification of the interactive object; and adjusting the pose state of the type interaction model according to the object pose information to obtain the object interaction model.
6. The method according to any one of claims 1-5, wherein after loading the object interaction model of the interaction object, further comprising:
and in response to an interaction operation triggered for the interaction object, determining a collision response state corresponding to the operation type and/or the collision position of the interaction operation under the condition that the interaction operation collides with the interaction object according to the object interaction model, and controlling the display state of the object visual model of the interaction object to be the collision response state.
7. The method of any of claims 1-5, wherein the scene information corresponding to the virtual scene includes at least one of: object state information of an action object corresponding to the virtual scene, and object state information of a target object included in the virtual scene.
8. The method of claim 7, wherein the object state information of the action object corresponding to the virtual scene comprises: relative position information of the action object relative to the target object and/or operation state information of interaction operation triggered by the action object;
the interaction condition includes at least one of:
the relative distance between the action object and the target object is smaller than a first preset distance threshold;
the action object has an interaction intention aiming at the target object; the interaction intention is determined according to the operation type of the interaction operation triggered by the action object.
9. The method according to any one of claims 1-5, wherein the number of target objects contained in the virtual scene is plural; after determining, according to the scene information, the interaction objects that meet the interaction condition among the target objects, the method further comprises:
And determining an associated object with an associated relation with the interactive object, and loading an object interaction model of the associated object.
10. The method of claim 9, wherein the determining that an association object exists in association with the interaction object comprises:
determining a target object in a first preset range of the interactive object as the associated object; and/or,
and determining a target object in a second preset range of the action object as the associated object.
11. The method of claim 10, wherein the first preset range and/or the second preset range is determined by at least one of:
acquiring equipment attribute information of terminal equipment displaying the virtual scene, determining an area range threshold corresponding to the equipment attribute information, and determining the first preset range and/or the second preset range according to the area range threshold;
acquiring motion trail information of the action object and operation position information of a plurality of interactive operations continuously triggered by the action object, predicting an intended operation area of the action object according to the motion trail information and the operation position information, and determining the first preset range and/or the second preset range according to the intended operation area;
acquiring a historical interaction record of the action object, determining interaction preference of the action object for various types of target objects according to the historical interaction record, determining a preference type of the action object according to the interaction preference, and determining the first preset range and/or the second preset range according to an area where the target object corresponding to the preference type is located; and
and determining the first preset range and/or the second preset range according to the task sequence information and/or the route setting information corresponding to the action object.
12. The method according to any one of claims 1-5, wherein after loading the object interaction model of the interaction object, further comprising:
and unloading the object interaction model of the interaction object conforming to the unloading condition under the condition that the interaction object of the loaded object interaction model conforms to the unloading condition.
13. The method of claim 12, wherein the unloading condition comprises:
the relative distance between the interaction object of the loaded object interaction model and the action object is larger than a second preset distance threshold; and/or,
the non-interactive time length between the interactive object and the action object of the loaded object interaction model is larger than a preset time length threshold.
14. The method of any of claims 1-5, wherein the virtual scene is a game-like scene, and the target object comprises: a plant class object; the action objects in the virtual scene include: a controlled object corresponding to a game user.
15. An object loading device in a virtual scene, comprising:
the detection module is suitable for detecting scene information corresponding to the virtual scene;
the determining module is suitable for determining, according to the scene information, an interaction object that meets the interaction condition among the target objects; the target object is an object contained in the virtual scene whose object visual model has been loaded;
the loading module is suitable for loading an object interaction model of the interaction object; the object interaction model is used for responding to an interaction operation triggered for the interaction object; the target object is provided with an object visual model and an object interaction model which are separated from each other.
16. An electronic device, comprising: the device comprises a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface complete communication with each other through the communication bus;
the memory is configured to store at least one executable instruction, where the executable instruction causes the processor to perform the operations corresponding to the method for loading objects in a virtual scene according to any one of claims 1-14.
17. A computer storage medium having stored therein at least one executable instruction for causing a processor to perform operations corresponding to the method of object loading in a virtual scene as claimed in any one of claims 1 to 14.
CN202111105047.0A 2021-09-18 2021-09-18 Object loading method and device in virtual scene Active CN113786614B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111105047.0A CN113786614B (en) 2021-09-18 2021-09-18 Object loading method and device in virtual scene

Publications (2)

Publication Number Publication Date
CN113786614A CN113786614A (en) 2021-12-14
CN113786614B true CN113786614B (en) 2024-03-26

Family

ID=78879022


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8060864B1 (en) * 2005-01-07 2011-11-15 Interactive TKO, Inc. System and method for live software object interaction
CN110339570A (en) * 2019-07-17 2019-10-18 网易(杭州)网络有限公司 Exchange method, device, storage medium and the electronic device of information
CN111054074A (en) * 2019-12-27 2020-04-24 网易(杭州)网络有限公司 Method and device for moving virtual object in game and electronic equipment
CN111291151A (en) * 2018-12-06 2020-06-16 阿里巴巴集团控股有限公司 Interaction method and device and computer equipment
CN112057849A (en) * 2020-09-15 2020-12-11 网易(杭州)网络有限公司 Game scene rendering method and device and electronic equipment

Also Published As

Publication number Publication date
CN113786614A (en) 2021-12-14


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant