CN112245923A - Collision detection method and device in game scene - Google Patents

Collision detection method and device in game scene

Info

Publication number
CN112245923A
Authority
CN
China
Prior art keywords
virtual object
virtual
attribute information
position information
bounding box
Prior art date
Legal status
Withdrawn
Application number
CN202011123913.4A
Other languages
Chinese (zh)
Inventor
梁文韬
李涛
Current Assignee
Zhuhai Tianyan Technology Co ltd
Original Assignee
Zhuhai Tianyan Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Zhuhai Tianyan Technology Co ltd
Priority to CN202011123913.4A
Publication of CN112245923A

Classifications

    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/55: Controlling game characters or game objects based on the game progress
    • A63F13/57: Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game
    • A63F13/577: Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game using determination of contact between game characters or objects, e.g. to avoid collision between virtual racing cars

Abstract

The embodiment of the application discloses a collision detection method and device in a game scene. The method comprises the following steps: according to attribute information of each virtual object in a game scene prestored in a collision detection system, determining first attribute information corresponding to a first virtual object and second attribute information corresponding to a second virtual object in the current game scene; the attribute information includes location information; determining the distance between the first virtual object and the second virtual object according to the first attribute information and the second attribute information; if the distance is smaller than or equal to a first preset threshold value, constructing a virtual bounding box corresponding to the first virtual object; determining first position information of the virtual bounding box in the current game scene; and sending the first position information corresponding to the virtual bounding box and the second position information corresponding to the second virtual object to the game engine. According to the technical scheme, the efficiency and the accuracy of collision detection between the virtual objects are improved, and the stable operation of the game engine is ensured.

Description

Collision detection method and device in game scene
Technical Field
The invention relates to the technical field of virtual games, in particular to a collision detection method and device in a game scene.
Background
In electronic games developed on the Unity platform, for virtual objects that may collide with each other (such as bullets, characters, etc.), whether a collision occurs is generally detected by attaching a collider (i.e., a Collider component) to each virtual object. For example, in a shooting game scene in which a bullet is shot at a struck object, colliders are attached in advance to both the bullet and the struck object, and the two colliders automatically detect whether a collision occurs, thereby detecting whether the bullet hits the struck object.
Obviously, this method is difficult to apply to a game scene with a large number of hit objects: colliders would have to be attached to each of the many hit objects, which not only adds extra system performance overhead but also, because of the excessive number of collider detections, slows down detection and lowers the accuracy of the detection results. This greatly interferes with game performance and harms the experience of game players. Therefore, a collision detection method with higher detection efficiency and more accurate detection results is needed.
Disclosure of Invention
The embodiment of the application aims to provide a collision detection method and device in a game scene, so as to solve the problems of low detection efficiency and low accuracy of existing collision detection methods.
In order to solve the above technical problem, the embodiment of the present application is implemented as follows:
in one aspect, an embodiment of the present application provides a collision detection method in a game scene, which is applied to a collision detection system, and the method includes:
according to attribute information of each virtual object in a game scene prestored in the collision detection system, determining first attribute information corresponding to a first virtual object and second attribute information corresponding to a second virtual object in the current game scene; the attribute information includes location information;
determining the distance between the first virtual object and the second virtual object according to the first attribute information and the second attribute information;
if the distance is smaller than or equal to a first preset threshold value, constructing a virtual bounding box corresponding to the first virtual object so that the first virtual object is located in an in-box area of the virtual bounding box; determining first position information of the virtual bounding box in the current game scene;
and sending the first position information corresponding to the virtual bounding box and the second position information corresponding to the second virtual object to a game engine, so that the game engine judges whether the first virtual object and the second virtual object collide based on the first position information and the second position information.
In another aspect, an embodiment of the present application provides a collision detection device in a game scene, which is applied to a collision detection system, and the device includes:
the first determining module is used for determining first attribute information corresponding to a first virtual object and second attribute information corresponding to a second virtual object in the current game scene according to the attribute information of each virtual object in the game scene, which is prestored in the collision detection system; the attribute information includes location information;
the second determining module is used for determining the distance between the first virtual object and the second virtual object according to the first attribute information and the second attribute information;
the building and determining module is used for building a virtual bounding box corresponding to the first virtual object if the distance is smaller than or equal to a first preset threshold value, so that the first virtual object is located in the box-inside area of the virtual bounding box; determining first position information of the virtual bounding box in the current game scene;
a sending module, configured to send the first location information corresponding to the virtual bounding box and the second location information corresponding to the second virtual object to a game engine, so that the game engine determines, based on the first location information and the second location information, whether a collision occurs between the first virtual object and the second virtual object.
In another aspect, an embodiment of the present application provides a collision detection apparatus in a game scene, which is applied to a collision detection system, and includes a processor and a memory electrically connected to the processor, where the memory stores a computer program, and the processor is configured to call and execute the computer program from the memory to implement:
according to attribute information of each virtual object in a game scene prestored in the collision detection system, determining first attribute information corresponding to a first virtual object and second attribute information corresponding to a second virtual object in the current game scene; the attribute information includes location information;
determining the distance between the first virtual object and the second virtual object according to the first attribute information and the second attribute information;
if the distance is smaller than or equal to a first preset threshold value, constructing a virtual bounding box corresponding to the first virtual object so that the first virtual object is located in an in-box area of the virtual bounding box; determining first position information of the virtual bounding box in the current game scene;
and sending the first position information corresponding to the virtual bounding box and the second position information corresponding to the second virtual object to a game engine, so that the game engine judges whether the first virtual object and the second virtual object collide based on the first position information and the second position information.
In another aspect, an embodiment of the present application provides a storage medium applied to a collision detection system, for storing a computer program, where the computer program is executed by a processor to implement the following processes:
according to attribute information of each virtual object in a game scene prestored in the collision detection system, determining first attribute information corresponding to a first virtual object and second attribute information corresponding to a second virtual object in the current game scene; the attribute information includes location information;
determining the distance between the first virtual object and the second virtual object according to the first attribute information and the second attribute information;
if the distance is smaller than or equal to a first preset threshold value, constructing a virtual bounding box corresponding to the first virtual object so that the first virtual object is located in an in-box area of the virtual bounding box; determining first position information of the virtual bounding box in the current game scene;
and sending the first position information corresponding to the virtual bounding box and the second position information corresponding to the second virtual object to a game engine, so that the game engine judges whether the first virtual object and the second virtual object collide based on the first position information and the second position information.
By adopting the technical scheme of the embodiment of the invention, the collision detection system determines the attribute information respectively corresponding to the first virtual object and the second virtual object in the current game scene according to the pre-stored attribute information (including position information) of each virtual object in the game scene, and further determines the distance between the first virtual object and the second virtual object according to the attribute information. Only when the distance is less than or equal to the first preset threshold is the virtual bounding box corresponding to the first virtual object constructed, and the first position information corresponding to the virtual bounding box and the second position information corresponding to the second virtual object sent to the game engine, so that the game engine judges whether the first virtual object and the second virtual object collide. With this technical scheme, a collision detection script does not need to be written into each virtual object in the game scene; detection is performed by a unified collision detection system, which effectively reduces the script burden of the game engine and improves its running performance. In addition, the collision detection system preliminarily screens the virtual objects to be detected in the distance dimension, which greatly reduces the number of collision detections in the game scene and avoids unnecessary system performance overhead. Furthermore, since game scenes change rapidly, greatly reducing the number of collision detections in the same game scene improves the collision detection efficiency for virtual objects and avoids inaccurate or incomplete detection results caused by an excessive number of detections in the same scene. The accuracy of the collision detection results is thus ensured regardless of the number of virtual objects in the game scene, and the stable operation of the game engine is guaranteed.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments described in the present application, and other drawings can be obtained by those skilled in the art without creative effort.
FIG. 1 is a schematic flow diagram of a method of collision detection in a game scene according to an embodiment of the invention;
FIG. 2 is a schematic construction diagram of a virtual bounding box according to an embodiment of the present invention;
FIG. 3 is a schematic block diagram of a collision detection apparatus in a game scene according to an embodiment of the present invention;
fig. 4 is a schematic block diagram of a collision detection device in a game scene according to an embodiment of the present invention.
Detailed Description
The embodiment of the application provides a collision detection method and device in a game scene, and aims to solve the problems of low detection efficiency and low accuracy of the existing collision detection method.
In order to make those skilled in the art better understand the technical solutions in the present application, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application. It is obvious that the described embodiments are only a part of the embodiments of the present application, not all of them. All other embodiments obtained by a person skilled in the art based on the embodiments given herein without creative effort shall fall within the protection scope of the present application.
In the collision detection method in a game scene according to one or more embodiments of the present invention, based on the entity-component-system (ECS) idea, collision detection between virtual objects in a game scene is abstracted into a unified system (i.e., the collision detection system) for execution, decoupling the association between virtual objects and collision detection scripts. Here, the "entity-component-system" idea can be understood as follows: the attribute information of each virtual object in a game scene (such as position information, shape information, size information, and the like) is embodied as components, and these components are stored in the collision detection system, so that when the collision detection system performs collision detection on the virtual objects, it can do so based on the attribute information prestored locally, thereby systematizing the collision detection of virtual objects. The collision detection method in a game scene provided by the invention is described in detail below.
First, the storage form of the attribute information of each virtual object in the game scene in the collision detection system in the embodiment of the present invention is described. Wherein the attribute information of each virtual object in the game scene may include at least one of position information, shape information, and size information. For a virtual object with a rotation attribute, the attribute information may further include rotation information, such as a rotation direction, a rotation angle, and the like. In the collision detection system, attribute information corresponding to a virtual object in each game scene may be stored for each game scene. For the same game scene with a plurality of virtual objects, the attribute information corresponding to each virtual object in the same game scene can be stored in a list form, and each entry in the list corresponds to one virtual object.
The arrangement order of the virtual objects in the list is not limited in the embodiment of the present invention; for example, the virtual objects may be arranged randomly, or arranged according to their order of appearance in the corresponding game scene. When the collision detection system performs collision detection, detection may be performed sequentially according to the arrangement order of the virtual objects in the list. For example, for a list corresponding to a certain game scene, if the first entry stores the attribute information corresponding to virtual object A, the second entry stores the attribute information corresponding to virtual object B, and so on, then when collision detection is performed, it is first detected whether virtual object A collides with another virtual object, then whether virtual object B collides with another virtual object, and this process is repeated until the virtual objects corresponding to every entry in the list have been detected.
It should be noted that the storage manner listed above for the attribute information corresponding to each virtual object is not the only one. In practical applications, the attribute information corresponding to each virtual object may also be stored in other manners, for example in a collection: each virtual object is associated with its corresponding attribute information, and the associated pairs are stored in the same collection, where each pair is an element of the collection.
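The following is a minimal, non-authoritative sketch (in Python, for illustration only) of how such component-style attribute storage could be organized per game scene; all names (AttributeInfo, CollisionDetectionSystem, the per-scene list) are assumptions and not taken from the patent.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class AttributeInfo:
    object_id: str
    position: Vec3                   # position information (always present)
    shape: Optional[str] = None      # shape information, e.g. "sphere", "cuboid"
    size: Optional[Vec3] = None      # size information, e.g. extents or radius
    rotation: Optional[Vec3] = None  # rotation information, if the object can rotate

@dataclass
class CollisionDetectionSystem:
    # One attribute list per game scene; each entry corresponds to one virtual object.
    scenes: Dict[str, List[AttributeInfo]] = field(default_factory=dict)

    def register(self, scene_id: str, info: AttributeInfo) -> None:
        self.scenes.setdefault(scene_id, []).append(info)

    def attributes_of(self, scene_id: str) -> List[AttributeInfo]:
        # Detection iterates over this list in storage order, as described above.
        return self.scenes.get(scene_id, [])
```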
Based on the attribute information corresponding to each virtual object being stored in the collision detection system in advance, the collision detection system can realize the collision detection of the virtual object according to the collision detection method provided in one or more of the following embodiments.
Fig. 1 is a schematic flow chart of a collision detection method in a game scene according to an embodiment of the present invention, as shown in fig. 1, the method includes:
s102, according to the attribute information of each virtual object in the game scene prestored in the collision detection system, determining first attribute information corresponding to a first virtual object and second attribute information corresponding to a second virtual object in the current game scene, wherein the attribute information comprises position information.
The first attribute information corresponding to the first virtual object includes position information of the first virtual object, and may further include shape information and/or size information of the first virtual object. The second attribute information corresponding to the second virtual object includes position information of the second virtual object, and may further include shape information and/or size information of the second virtual object. For a virtual object with a rotation attribute, the attribute information may further include rotation information, such as a rotation direction, a rotation angle, and the like. For example, in a shooting game scene, if the first virtual object is a hit character and the second virtual object is a bullet, the hit character can rotate (e.g., rotate its arm, head, etc.), so the corresponding first attribute information may include rotation information of the hit character.
During the running of the game, the game scene changes continuously in the form of image frames, so one image frame can be regarded as one game scene, and the current game scene is the current image frame.
And S104, determining the distance between the first virtual object and the second virtual object according to the first attribute information and the second attribute information.
In this step, the distance between the first virtual object and the second virtual object may be calculated based on the position information in the first attribute information and the position information in the second attribute information.
In one embodiment, the position information is stored in the form of coordinates. That is, if each point on a virtual object is regarded as a coordinate point, the coordinate position of each coordinate point on the virtual object can be stored separately; on this basis, the distance between two virtual objects can be determined by calculating the distance between coordinate points on the two virtual objects. Since a virtual object includes many coordinate points, the position information corresponding to each virtual object necessarily includes the coordinate positions of many coordinate points. In this case, a key coordinate point on each virtual object (e.g., the center coordinate point) may be selected, and the distance between the virtual objects is calculated as the distance between these key coordinate points.
For example, for a virtual sphere A and a virtual sphere B, the key coordinate points are their respective sphere centers: the position information corresponding to virtual sphere A includes the coordinate position of sphere center A in the current game scene, and the position information corresponding to virtual sphere B includes the coordinate position of sphere center B in the current game scene. The distance between sphere center A and sphere center B can be calculated from their respective coordinate positions, and this distance can be taken as the distance between virtual sphere A and virtual sphere B.
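A minimal sketch of the key-point distance calculation and the distance-based preliminary screening described above; the threshold value and the coordinates are assumed for illustration.

```python
import math

# First preset threshold: an assumed value for illustration only.
FIRST_PRESET_THRESHOLD = 5.0

center_a = (0.0, 1.0, 0.0)   # key coordinate point (sphere center) of virtual sphere A
center_b = (3.0, 1.0, 4.0)   # key coordinate point (sphere center) of virtual sphere B

# Distance between the two key points, taken as the distance between the two virtual objects.
distance = math.dist(center_a, center_b)

# Preliminary screening in the distance dimension: only construct a virtual
# bounding box when the distance is less than or equal to the first preset threshold.
needs_bounding_box = distance <= FIRST_PRESET_THRESHOLD
```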
S106, if the distance between the first virtual object and the second virtual object is smaller than or equal to a first preset threshold value, constructing a virtual bounding box corresponding to the first virtual object so that the first virtual object is located in a box-in area of the virtual bounding box, and determining first position information of the virtual bounding box in the current game scene.
The virtual bounding box can be a three-dimensional figure with a regular shape or an irregular three-dimensional figure. For ease of calculation, a regularly shaped virtual bounding box is preferred, such as a cube, a cuboid, or a cylinder. The in-box region of the virtual bounding box refers to the region enclosed by the edges of the solid figure corresponding to the virtual bounding box.
If the virtual bounding box is a solid figure, the first position information corresponding to the virtual bounding box may include one or more of: the position information of each edge of the solid figure, the position information of key points on each edge, the position information of key points in the in-box region, and the like. Through the first position information, the position of the virtual bounding box in the current game scene can be uniquely determined.
S108, sending the first position information corresponding to the virtual bounding box and the second position information corresponding to the second virtual object to the game engine, so that the game engine judges whether the first virtual object and the second virtual object collide with each other or not based on the first position information and the second position information.
When the game engine determines whether the first virtual object and the second virtual object collide based on the first position information and the second position information, it may be determined whether the first position information and the second position information intersect with each other, that is, whether the second virtual object is located in the virtual bounding box corresponding to the first virtual object. If the first position information and the second position information are intersected, the second virtual object is positioned in the virtual bounding box corresponding to the first virtual object, and the first virtual object and the second virtual object can be determined to be collided. If the first position information and the second position information do not intersect, it is indicated that the second virtual object is not located in the virtual bounding box corresponding to the first virtual object, and it is determined that no collision occurs between the first virtual object and the second virtual object.
In one embodiment, the game engine can determine the position of the virtual bounding box in the current game scene according to the first position information corresponding to the virtual bounding box, determine the position of the second virtual object according to the second position information, and then judge whether the second virtual object falls within the virtual bounding box, thereby judging whether the first virtual object and the second virtual object collide.
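A minimal sketch of this engine-side containment test, under the assumption that the first position information is expressed as the minimum and maximum corner points of an axis-aligned box; the patent leaves the exact representation of the first position information open.

```python
def box_contains_point(box_min, box_max, point):
    """Return True if `point` lies inside the axis-aligned box given by its
    minimum and maximum corner coordinates."""
    return all(lo <= p <= hi for lo, p, hi in zip(box_min, point, box_max))

# Assumed first position information (two corner points of the virtual bounding box)
# and second position information (the bullet's key coordinate point).
bounding_box_min = (-0.5, 0.0, -0.5)
bounding_box_max = (0.5, 2.0, 0.5)
bullet_position = (0.1, 1.2, 0.0)

collided = box_contains_point(bounding_box_min, bounding_box_max, bullet_position)  # True here
```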
By adopting the technical scheme of the embodiment of the invention, the collision detection system determines the attribute information respectively corresponding to the first virtual object and the second virtual object in the current game scene according to the pre-stored attribute information (including position information) of each virtual object in the game scene, and further determines the distance between the first virtual object and the second virtual object according to the attribute information. Only when the distance is less than or equal to the first preset threshold is the virtual bounding box corresponding to the first virtual object constructed, and the first position information corresponding to the virtual bounding box and the second position information corresponding to the second virtual object sent to the game engine, so that the game engine judges whether the first virtual object and the second virtual object collide. With this technical scheme, a collision detection script does not need to be written into each virtual object in the game scene; detection is performed by a unified collision detection system, which effectively reduces the script burden of the game engine and improves its running performance. In addition, the collision detection system preliminarily screens the virtual objects to be detected in the distance dimension, which greatly reduces the number of collision detections in the game scene and avoids unnecessary system performance overhead. Furthermore, since game scenes change rapidly, greatly reducing the number of collision detections in the same game scene improves the collision detection efficiency for virtual objects and avoids inaccurate or incomplete detection results caused by an excessive number of detections in the same scene. The accuracy of the collision detection results is thus ensured regardless of the number of virtual objects in the game scene, and the stable operation of the game engine is guaranteed.
In one embodiment, the position information corresponding to a virtual object includes the position information of one or more key points on the virtual object, where these key points can represent, to different degrees, the shape and size of the virtual object. For example, for a virtual character, the corresponding key points may include a body center point, at least one contour point on the head contour, at least one contour point on the contour of each arm, at least one contour point on the contour of each leg, and so on. From these key points, the outline, i.e., the shape and size, of the virtual character can be determined.
Therefore, when determining the distance between the first virtual object and the second virtual object, the distance may be determined according to the distance between the key points corresponding to the first virtual object and the second virtual object, respectively. Specifically, first, the position information of a first key point on the first virtual object is determined according to the third position information corresponding to the first virtual object, and the position information of a second key point on the second virtual object is determined according to the second position information corresponding to the second virtual object; then, the distance between the first key point and the second key point is calculated according to the position information of the first key point and the position information of the second key point; and the distance between the first key point and the second key point is determined to be the distance between the first virtual object and the second virtual object.
Optionally, the first key point is a center point of the first virtual object, and the second key point is a center point of the second virtual object. That is, the distance between the central points of the two virtual objects is determined as the distance between the two virtual objects, and the method is simple in calculation and can meet the distance calculation requirement to a certain extent, so that preliminary screening can be conveniently and quickly carried out based on the distance between the two virtual objects.
In one embodiment, the first virtual object includes a plurality of virtual areas, and the third position information corresponding to the first virtual object includes the position information corresponding to each virtual area. For example, if the first virtual object is a hit character that includes three virtual areas, namely a head area, a limbs area, and a trunk area, the third position information corresponding to the hit character may include the position information corresponding to the head area, the position information corresponding to the limbs area, and the position information corresponding to the trunk area. The virtual areas included in the first virtual object can be preset, and in an actual application scene the size, shape, position, and the like of each virtual area can be customized.
In this embodiment, after the position information corresponding to each virtual area in the first virtual object is determined, whether the first virtual object satisfies a preset segmentation condition may be judged according to the position information corresponding to each virtual area. If it does, the first virtual object is segmented according to a preset segmentation mode, and the subsequent collision detection process is executed based on each segmented sub-virtual object; if not, the first virtual object is not segmented, that is, the subsequent collision detection process (i.e., S104 to S108) is performed on the first virtual object as a whole.
The preset segmentation condition includes: the distance between two virtual areas is greater than or equal to a second preset threshold. When the distance between two virtual areas is greater than or equal to the second preset threshold, it indicates that the first virtual object contains virtual areas that are far apart, which further indicates that the first virtual object has a large volume and needs to be segmented into areas.
Taking the first virtual object as a hit character as an example, the first virtual object includes a head area, a limbs area, and a trunk area, where the limbs area may further include a left arm area, a right arm area, and two leg areas. Assuming that the distance between the left arm area and the two leg areas is greater than or equal to the second preset threshold, the hit character needs to be segmented into areas at this time; for example, the hit character can be segmented into a head area, a left arm area, a right arm area, and two leg areas.
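A minimal sketch of checking the preset segmentation condition over the virtual areas; the area positions and the second preset threshold are assumed values for illustration only.

```python
import math
from itertools import combinations

def satisfies_segmentation_condition(area_positions, second_preset_threshold):
    """Preset segmentation condition: some pair of virtual areas is at least the
    second preset threshold apart, suggesting the first virtual object is large."""
    return any(math.dist(p, q) >= second_preset_threshold
               for p, q in combinations(area_positions, 2))

# Illustrative positions of the virtual areas of a hit character (assumed values).
virtual_areas = {
    "head":     (0.0, 1.8, 0.0),
    "left_arm": (-0.8, 1.2, 0.0),
    "legs":     (0.0, 0.4, 0.0),
}
if satisfies_segmentation_condition(list(virtual_areas.values()), second_preset_threshold=1.2):
    # Segment the first virtual object according to the preset segmentation mode,
    # then run the subsequent collision detection on each sub-virtual object.
    pass
```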
In one embodiment, if it is determined that the first virtual object has a large volume, the first virtual object may be segmented into a plurality of sub-virtual objects before the distance between the first virtual object and the second virtual object is determined according to the first attribute information and the second attribute information. Specifically, the first virtual object may be segmented into a plurality of sub-virtual objects according to a preset segmentation mode, where the preset segmentation mode includes a segmentation size, a segmentation position, a segmentation area, and the like, and the segmentation area may be a preset virtual area included in the first virtual object.
If the first virtual object is segmented according to segmentation positions, one or more segmentation positions on the first virtual object may be preset. For example, if the first virtual object is a virtual character and the segmentation positions are preset as the head position, the limb positions, the trunk position, and so on, then when the first virtual object is segmented, it can be segmented into a head region, a limb region, a trunk region, and the like according to the preset segmentation positions, where the head region, the limb region, and the trunk region are each a sub-virtual object.
If the first virtual object is segmented according to a segmentation size, the segmentation size (or size range) corresponding to each sub-virtual object can be preset. For example, if the first virtual object is a sphere with a large radius and the segmentation size is preset to be less than or equal to R, the first virtual object may be segmented into a plurality of small spheres with a radius less than or equal to R according to the preset segmentation size, where each small sphere is a sub-virtual object.
In this embodiment, if the first virtual object is segmented into a plurality of sub-virtual objects, the distance between each sub-virtual object and the second virtual object may be determined respectively. If there is at least one target sub-virtual object whose distance from the second virtual object is less than or equal to the first preset threshold, a virtual bounding box corresponding to each target sub-virtual object is constructed, so that each target sub-virtual object is located in the in-box region of its corresponding virtual bounding box. The distance between each sub-virtual object and the second virtual object is calculated in the same way as the distance between the first virtual object and the second virtual object in the above embodiment, and is not repeated here.
In this embodiment, the first virtual object is segmented into a plurality of sub-virtual objects, and when the distance between at least one target sub-virtual object and the second virtual object is less than or equal to the first preset threshold, a virtual bounding box corresponding to that target sub-virtual object is constructed. Then, the first position information of the constructed virtual bounding box in the current game scene and the second position information corresponding to the second virtual object are sent to the game engine, so that the game engine judges whether the target sub-virtual object and the second virtual object collide based on the first position information and the second position information. Since the target sub-virtual object is a part of the first virtual object, if it is determined that a collision occurs between the target sub-virtual object and the second virtual object, the first virtual object and the second virtual object necessarily collide as well.
Still taking the shooting game scene as an example, the first virtual object is a hit character with a large volume and the second virtual object is a bullet. First, the hit character is segmented into a plurality of hit regions (i.e., sub-virtual objects): a head region, a left arm region, a right arm region, a trunk region, and two leg regions. The distance between each hit region and the bullet is then calculated, and it is judged whether the distance between at least one hit region and the bullet is less than or equal to the first preset threshold. Assuming that the distance between the left arm region and the bullet is less than or equal to the first preset threshold while the distances between the other hit regions and the bullet are greater than the first preset threshold, only the virtual bounding box corresponding to the left arm region needs to be constructed, so that the left arm region is located in the in-box region of its corresponding virtual bounding box; the virtual bounding boxes corresponding to the other hit regions do not need to be constructed.
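A minimal sketch of the per-region preliminary screening in this example: only the sub-virtual objects within the first preset threshold of the bullet are kept as targets for bounding-box construction. The region positions and the threshold value are assumed.

```python
import math

def select_target_sub_objects(sub_object_centers, bullet_center, first_preset_threshold):
    """Keep only the sub-virtual objects whose distance to the bullet is less than or
    equal to the first preset threshold; bounding boxes are built for these only."""
    return {name: center
            for name, center in sub_object_centers.items()
            if math.dist(center, bullet_center) <= first_preset_threshold}

# Hit regions (sub-virtual objects) of the segmented hit character; values are assumed.
hit_regions = {
    "head":      (0.0, 1.8, 0.0),
    "left_arm":  (-0.8, 1.4, 0.0),
    "right_arm": (0.8, 1.4, 0.0),
    "trunk":     (0.0, 1.0, 0.0),
    "legs":      (0.0, 0.4, 0.0),
}
bullet = (-1.0, 1.4, 0.2)
targets = select_target_sub_objects(hit_regions, bullet, first_preset_threshold=0.5)
# Here only "left_arm" qualifies, so only its virtual bounding box is constructed.
```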
Fig. 2 is a schematic diagram of separately constructing virtual bounding boxes for the segmented sub-virtual objects in one embodiment. As shown in fig. 2, for a hit character with a large volume, if it is determined that only the distances between the head region, the right arm region, and the right leg region and the bullet are less than or equal to the first preset threshold, then only the virtual bounding boxes corresponding to the head region, the right arm region, and the right leg region need to be constructed, and each of these sub-virtual objects is located in the in-box region of its corresponding virtual bounding box.
However, if the hit character is not segmented but is directly subjected to collision detection as a whole, the following case may occur: if the center point of the hit character in the game scene is taken as the key point and the distance between this center point and the bullet is calculated, the result may be that the hit character is far from the bullet (for example, the distance is greater than the first preset threshold), so collision detection is not performed on the hit character and the bullet. In practice, however, the left arm region of the hit character may be close to the bullet (for example, closer than the first preset threshold), and there is a possibility of collision between the left arm region and the bullet. Obviously, without segmenting the hit character, the collision detection result is inaccurate.
Therefore, by segmenting a large hit character into a plurality of sub-regions and deciding, based on the distance between each sub-region and the bullet, whether to construct the virtual bounding box corresponding to that sub-region, the construction of virtual bounding boxes becomes more targeted: only the virtual bounding boxes of some regions need to be constructed, which reduces the number of bounding boxes constructed and improves the collision detection efficiency of the system. It also avoids the inaccurate distance calculation that would result from treating the hit character as a whole when constructing the virtual bounding box, thereby ensuring the accuracy of the collision detection result.
In one embodiment, the attribute information corresponding to the virtual object includes position information and further includes at least one of rotation information, size information, and shape information, the rotation information including a rotation angle and/or a rotation direction. The virtual bounding box is a three-dimensional figure conforming to a regular shape. Therefore, when constructing the virtual bounding box corresponding to the first virtual object, in order to locate the first virtual object in the box-inside region of the virtual bounding box, the virtual bounding box can be constructed by the following method:
first, a solid figure matched with size information and/or shape information corresponding to the first virtual object is constructed. By "match" it is meant that the corresponding solid figure of the virtual bounding box is able to exactly enclose the first virtual object. For example, if the virtual bounding box is predetermined as a cube, a cube matching the size information and/or the shape information corresponding to the first virtual object is constructed, and the first virtual object can be located within the cube, i.e., the cube can enclose the first virtual object.
The size information is represented in a plurality of manners, and the manner can be specifically selected according to the type of the first virtual object. For example, if the first virtual object is a sphere or an approximate sphere, its size information may be characterized as a spherical radius; if the first virtual object is a cuboid or an approximate cuboid, the size information of the first virtual object can be represented as the side length of the cuboid; and so on. The shape information is related to the type of the first virtual object, and in the game scene, corresponding shape information can be set in advance for different types of first virtual objects.
Still taking the shooting game scene as an example, assume that the first virtual object has been segmented into a plurality of sub-virtual objects in advance, and a cube-type virtual bounding box needs to be constructed for the sub-virtual object corresponding to the head region. Since the head region is approximately spherical, a cube matching the spherical radius corresponding to the head region may be constructed, so that the head region can be located within the cube, that is, the cube can enclose the head region.
Second, the solid figure is rotated according to the rotation information corresponding to the first virtual object, so that the rotated solid figure matches the rotation information of the first virtual object.
Considering that the first virtual object may rotate in the current game scene, its rotation direction and rotation angle may both differ from those of the constructed solid figure. In this case, by rotating the solid figure, the rotation direction and rotation angle of the rotated solid figure can be made consistent with those of the first virtual object.
In an embodiment, if the first virtual object has been segmented into a plurality of sub-virtual objects in advance and a virtual bounding box is constructed for each sub-virtual object, then for each sub-virtual object the corresponding virtual bounding box needs to be rotated, so that the rotation information of the rotated virtual bounding box matches the rotation information of the sub-virtual object it encloses. As shown in fig. 2, each constructed virtual bounding box matches the rotation information of the corresponding sub-virtual object, that is, the rotation direction and rotation angle are the same, so that the sub-virtual object can be enclosed while the redundant space inside the virtual bounding box is kept as small as possible.
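A minimal sketch of the two construction steps (match the solid figure to the object's size, then rotate it to match the object's rotation information). Representing the box by a center, half-extents, and Euler angles is an assumption made purely for illustration.

```python
from dataclasses import dataclass
from typing import Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class VirtualBoundingBox:
    center: Vec3
    half_extents: Vec3   # size of the solid figure, matched to the object's size/shape
    rotation: Vec3       # Euler angles, matched to the object's rotation information

def construct_bounding_box(obj_center: Vec3, obj_half_extents: Vec3,
                           obj_rotation: Vec3, margin: float = 0.05) -> VirtualBoundingBox:
    # Step 1: construct a solid figure just large enough to enclose the object ("match").
    padded = tuple(e + margin for e in obj_half_extents)
    # Step 2: give the solid figure the same rotation direction and angle as the object,
    # keeping the redundant space inside the box as small as possible.
    return VirtualBoundingBox(center=obj_center, half_extents=padded, rotation=obj_rotation)

head_box = construct_bounding_box((0.0, 1.6, 0.0), (0.15, 0.15, 0.15), (0.0, 30.0, 0.0))
```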
It should be noted that, in the same game scene, the virtual bounding boxes corresponding to different virtual objects, or to different sub-virtual objects on the same virtual object, may be the same type of solid figure or different types of solid figures. As shown in fig. 2, the virtual bounding box corresponding to the head region is a cube, while the virtual bounding boxes corresponding to the right arm region and the right leg region are cuboids.
When constructing the virtual bounding box, the collision detection system may determine which type of virtual bounding box to construct based on the shape information of the first virtual object (or sub-virtual object). Optionally, a mapping relationship between the shape information of the first virtual object and/or the sub-virtual objects and the type of virtual bounding box may be stored in the collision detection system in advance; since the virtual bounding box is a solid figure, its type is the shape of that solid figure. The collision detection system can then determine which type of virtual bounding box to construct for a virtual object according to the shape information in the attribute information corresponding to that virtual object and the pre-stored mapping relationship.
For example, in the above mapping relationship, if the shape of the first virtual object and/or the sub-virtual object is a sphere, the type of the corresponding virtual bounding box may be a sphere or a cube; if the shape of the first virtual object and/or the sub-virtual object is a long strip (such as the shape of a limb area), the type of the corresponding virtual bounding box can be a cuboid; and so on.
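A minimal sketch of such a pre-stored mapping between shape information and bounding-box type; the keys and values simply mirror the examples above and are otherwise assumptions.

```python
# Assumed mapping between shape information and bounding-box type.
SHAPE_TO_BOX_TYPE = {
    "sphere": "cube",
    "approximate_sphere": "cube",
    "strip": "cuboid",   # e.g. the shape of a limb region
}

def box_type_for(shape_info: str, default: str = "cuboid") -> str:
    """Look up which type of virtual bounding box to construct for a given shape."""
    return SHAPE_TO_BOX_TYPE.get(shape_info, default)
```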
In one embodiment, after the first position information corresponding to the virtual bounding box and the second position information corresponding to the second virtual object are sent to the game engine, the collision detection system may obtain a determination result, which is fed back by the game engine, as to whether a collision occurs between the first virtual object and the second virtual object; and if the judgment result is that the collision occurs, deleting the attribute information of the first virtual object in the current game scene, which is stored in the collision detection system.
As mentioned in the above embodiment, the attribute information corresponding to the virtual object may be stored in the collision detection system in a list form, and if the determination result obtained by the collision detection system is that a collision occurs, the first virtual object and the attribute information thereof corresponding to the determination result are deleted from the list, so as to avoid repeated detection on the same first virtual object in the current game scene.
Since the collision detection is for every two virtual objects, in another embodiment, in addition to deleting the attribute information corresponding to the first virtual object that has collided, the attribute information corresponding to the second virtual object that has collided with the first virtual object may also be deleted at the same time. In practical application, based on the type of the game scene, the collision detection requirement and other factors, the attribute information corresponding to two virtual objects with collision can be simultaneously deleted, or the attribute information corresponding to only one virtual object with collision can be deleted.
For example, if each virtual object can collide only once in the game scene, that is, once the first virtual object and the second virtual object have collided they will not collide again, then the attribute information corresponding to both the first virtual object and the second virtual object is deleted at the same time. If the second virtual object can collide multiple times, for example a virtual character that may collide with a plurality of virtual objects, then when a collision of the virtual character is detected, only the attribute information corresponding to the virtual object that collided with the virtual character may be deleted.
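A minimal sketch of deleting attribute entries from the scene list after the game engine feeds back that a collision occurred; the Entry structure and the identifiers are assumptions for illustration.

```python
from collections import namedtuple

Entry = namedtuple("Entry", "object_id attribute_info")

def on_collision_feedback(scene_entries, first_id, second_id, delete_both=False):
    """Remove the collided objects' entries from the scene list so the same object
    is not detected repeatedly in the current game scene."""
    to_remove = {first_id, second_id} if delete_both else {first_id}
    scene_entries[:] = [e for e in scene_entries if e.object_id not in to_remove]

scene = [Entry("hit_character_1", None), Entry("bullet_7", None)]
on_collision_feedback(scene, "hit_character_1", "bullet_7", delete_both=True)  # scene is now empty
```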
In this embodiment, after the attribute information of the first virtual object in the current game scene stored in the collision detection system is deleted, collision detection may be performed on the next virtual object according to the order of the attribute information corresponding to each virtual object stored in the list, until all virtual objects in the current game scene have been detected. Because the collision detection system performs preliminary screening based on the distance between every two virtual objects, its collision detection efficiency is hardly affected even if a large number of virtual objects exist in the list.
When entering the next game scene, the collision detection system performs collision detection on each virtual object in the next game scene, and the detection method is the same as the collision detection method of the current game scene described in the above embodiments, and is not described here again.
In one embodiment, the state of the virtual objects in the current game scene may change, including the generation of a new virtual object, the deletion of an existing virtual object, a change in the attribute information of an existing virtual object, and so on. Based on this, the collision detection system can monitor state change information of the virtual objects in the current game scene, where the state change information includes at least one of: generation of a new virtual object, deletion of an existing virtual object, and a change in the attribute information of an existing virtual object. When state change information is detected, the target virtual object corresponding to the state change information is determined, and the attribute information corresponding to the target virtual object in the collision detection system is updated.
Specifically, if it is detected that a new virtual object is generated in the current game scene, the attribute information corresponding to the new virtual object is stored in the collision detection system, for example added to the list corresponding to the current game scene. If it is detected that a virtual object in the current game scene has been deleted, the attribute information corresponding to that virtual object is deleted from the list corresponding to the current game scene. If it is detected that the attribute information corresponding to a virtual object in the current game scene has changed, for example its position has changed or its size has increased, the attribute information corresponding to that virtual object in the list corresponding to the current game scene is changed synchronously.
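A minimal sketch of applying the three kinds of state change information to the stored attribute list; the change-type labels and the Entry structure are assumptions, not part of the patent.

```python
from collections import namedtuple

Entry = namedtuple("Entry", "object_id attribute_info")

def apply_state_change(scene_entries, change_type, object_id, new_entry=None):
    """Keep the attribute information stored in the collision detection system in
    sync with the state changes of virtual objects in the current game scene."""
    if change_type == "created":        # a new virtual object is generated
        scene_entries.append(new_entry)
    elif change_type == "deleted":      # an existing virtual object is deleted
        scene_entries[:] = [e for e in scene_entries if e.object_id != object_id]
    elif change_type == "changed":      # attribute information of an existing object changed
        scene_entries[:] = [new_entry if e.object_id == object_id else e
                            for e in scene_entries]
```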
In this embodiment, the corresponding attribute information in the collision detection system can be updated synchronously based on the state change information of the virtual objects in the current game scene, so that the accuracy of the attribute information of each virtual object stored in the collision detection system is ensured, and the accuracy of the collision detection result is ensured.
In one embodiment, when preliminarily screening the virtual objects to be detected, in addition to judging whether the distance between two virtual objects is less than or equal to the first preset threshold, screening may also be performed based on the relative rotation information between the two virtual objects. Specifically, if the relative rotation information between the first virtual object and the second virtual object indicates that the included angle between the rotation direction of the first virtual object and the rotation direction of the second virtual object is greater than or equal to a preset angle, it is determined that the first virtual object and the second virtual object are unlikely to collide, and collision detection need not be performed on them. That is, collision detection is performed on the first virtual object and the second virtual object only when the distance between them is less than or equal to the first preset threshold and the included angle between the rotation direction of the first virtual object and the rotation direction of the second virtual object is less than the preset angle.
Taking a shooting game scene as an example, the first virtual object is a hit character and the second virtual object is a bullet. If the rotation direction of the hit character is toward the left while the rotation direction of the bullet is toward the right, that is, the two rotation directions are opposite (the included angle is greater than the preset angle), then no collision can occur even if the hit character and the bullet are very close, so collision detection does not need to be performed on them.
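A minimal sketch of this rotation-direction screening, under the assumption that rotation directions are represented as direction vectors and compared by their included angle.

```python
import math

def rotation_directions_compatible(dir_a, dir_b, preset_angle_deg):
    """Return False when the included angle between the two rotation directions is
    greater than or equal to the preset angle, so the pair can be skipped."""
    dot = sum(a * b for a, b in zip(dir_a, dir_b))
    norm = math.hypot(*dir_a) * math.hypot(*dir_b)
    angle_deg = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
    return angle_deg < preset_angle_deg

# Hit character rotating toward the left, bullet toward the right: opposite directions,
# so collision detection is skipped even if the two objects are close.
skip_pair = not rotation_directions_compatible((-1.0, 0.0, 0.0), (1.0, 0.0, 0.0),
                                               preset_angle_deg=90.0)
```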
In this embodiment, the collision detection objects of the collision detection system are further reduced by screening based on the relative rotation information between the two virtual objects, so that the number of collision detections in a game scene is reduced, and unnecessary system performance overhead is avoided.
In summary, particular embodiments of the present subject matter have been described. Other embodiments are within the scope of the following claims. In some cases, the actions recited in the claims can be performed in a different order and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may be advantageous.
Based on the same idea as the collision detection method in a game scene provided in the embodiments of the present application, an embodiment of the present application further provides a collision detection device in a game scene.
Fig. 3 is a schematic block diagram of a collision detection apparatus in a game scene according to an embodiment of the present invention, as shown in fig. 3, the apparatus is applied to a collision detection system, and includes:
a first determining module 310, configured to determine, according to attribute information of each virtual object in a game scene pre-stored in the collision detection system, first attribute information corresponding to a first virtual object and second attribute information corresponding to a second virtual object in a current game scene; the attribute information includes location information;
a second determining module 320, configured to determine a distance between the first virtual object and the second virtual object according to the first attribute information and the second attribute information;
a constructing and determining module 330, configured to construct a virtual bounding box corresponding to the first virtual object if the distance is smaller than or equal to a first preset threshold, so that the first virtual object is located in a box-inside area of the virtual bounding box; determining first position information of the virtual bounding box in the current game scene;
a sending module 340, configured to send the first position information corresponding to the virtual bounding box and the second position information corresponding to the second virtual object to a game engine, so that the game engine determines, based on the first position information and the second position information, whether a collision occurs between the first virtual object and the second virtual object.
In one embodiment, the apparatus further comprises:
a dividing module, configured to divide the first virtual object according to a preset dividing manner to obtain multiple sub-virtual objects before determining a distance between the first virtual object and the second virtual object according to the first attribute information and the second attribute information; the preset segmentation mode comprises a segmentation size and/or a segmentation position;
accordingly, the second determining module 320 includes:
a first determining unit configured to determine a distance between each of the sub-virtual objects and the second virtual object, respectively;
the construction and determination module 330 includes:
a first constructing unit, configured to, if there is at least one target sub-virtual object whose distance from the second virtual object is smaller than or equal to the first preset threshold, construct a virtual bounding box corresponding to each target sub-virtual object, so that each target sub-virtual object is located in the in-box area of its corresponding virtual bounding box.
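Continuing the earlier sketch (and reusing its VirtualObject, euclidean and build_bounding_box helpers), the segmentation branch might look as follows; splitting uniformly along a single axis is purely an illustrative choice, since the preset segmentation size and position are left open by the embodiment.

```python
def split_object(obj, parts=2, axis=0):
    """Split a virtual object into `parts` sub-virtual objects along one axis."""
    subs = []
    step = obj.size[axis] / parts
    for i in range(parts):
        offset = (i - (parts - 1) / 2.0) * step
        position = list(obj.position)
        position[axis] += offset
        size = list(obj.size)
        size[axis] = step
        subs.append(VirtualObject(f"{obj.object_id}#{i}",
                                  tuple(position), tuple(size)))
    return subs

def boxes_for_near_sub_objects(first, second, first_threshold):
    """Build a bounding box only for each target sub-virtual object whose
    distance to the second object is within the first preset threshold."""
    return [build_bounding_box(sub)
            for sub in split_object(first)
            if euclidean(sub.position, second.position) <= first_threshold]
```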
In one embodiment, the first virtual object includes a plurality of virtual areas; the third position information corresponding to the first virtual object comprises position information corresponding to each virtual area;
the device further comprises:
the judging module is used for judging whether the first virtual object meets a preset segmentation condition or not according to the position information corresponding to each virtual area before the first virtual object is segmented according to the preset segmentation mode; the preset segmentation conditions include: the distance between two virtual areas is greater than or equal to a second preset threshold value;
and the execution module is used for executing the step of segmenting the first virtual object according to the preset segmentation mode if the first virtual object meets the preset segmentation condition.
In one embodiment, the second determining module 320 includes:
a second determining unit, configured to determine, according to third location information corresponding to the first virtual object, location information of a first key point on the first virtual object; determining the position information of a second key point on the second virtual object according to the second position information corresponding to the second virtual object;
a calculating unit, configured to calculate a distance between the first keypoint and the second keypoint according to the position information of the first keypoint and the position information of the second keypoint;
a third determining unit, configured to determine that a distance between the first keypoint and the second keypoint is a distance between the first virtual object and the second virtual object.
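A minimal sketch of this key-point distance step follows; treating the stored position itself as the key point is an assumption made for illustration, as the embodiment does not prescribe how key points are selected.

```python
import math

def object_distance(first_key_point, second_key_point):
    """Distance between one key point on each object; the distance between
    the two key points is taken as the distance between the two objects."""
    return math.dist(first_key_point, second_key_point)

# e.g. a point on the hit character vs. the tip of the bullet
print(object_distance((0.0, 1.6, 0.0), (3.0, 1.6, 4.0)))  # 5.0
```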
In one embodiment, the attribute information further includes at least one of rotation information, size information, shape information; the rotation information comprises a rotation angle and/or a rotation direction; the virtual bounding box is a three-dimensional figure conforming to a regular shape;
the construction and determination module 330 includes:
the second construction unit is used for constructing a three-dimensional graph matched with the size information and/or the shape information corresponding to the first virtual object;
and the rotating unit is used for rotating the three-dimensional graph according to the rotating information corresponding to the first virtual object so as to enable the rotated three-dimensional graph to be matched with the rotating information of the first virtual object.
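The construction-and-rotation step can be sketched as building an axis-aligned box from the size information and then rotating its corners to match the rotation information. Restricting the rotation to the vertical axis (yaw only) and reusing the VirtualObject fields from the earlier sketch are simplifying assumptions.

```python
import math

def rotate_y(point, angle_deg):
    """Rotate a point around the vertical (y) axis."""
    a = math.radians(angle_deg)
    x, y, z = point
    return (x * math.cos(a) + z * math.sin(a),
            y,
            -x * math.sin(a) + z * math.cos(a))

def oriented_box_corners(obj, yaw_deg):
    """Build a box matching the object's size information, then rotate it so
    that it matches the object's rotation information."""
    hx, hy, hz = (s / 2.0 for s in obj.size)
    local_corners = [(sx * hx, sy * hy, sz * hz)
                     for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)]
    cx, cy, cz = obj.position
    return [tuple(c + r for c, r in zip((cx, cy, cz), rotate_y(p, yaw_deg)))
            for p in local_corners]
```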
In one embodiment, the apparatus further comprises:
an obtaining module, configured to obtain a result of determination, fed back by a game engine, as to whether a collision occurs between the first virtual object and the second virtual object after the first position information corresponding to the virtual bounding box and the second position information corresponding to the second virtual object are sent to the game engine;
and the deleting module is used for deleting the attribute information of the first virtual object in the current game scene, which is stored in the collision detection system, if the judgment result shows that the collision occurs.
In one embodiment, the apparatus further comprises:
the monitoring module is used for monitoring the state change information of the virtual object in the current game scene; the state change information comprises at least one item of attribute information of generating a new virtual object, deleting an existing virtual object and changing the existing virtual object;
the updating module is used for determining a target virtual object corresponding to the state change information when the state change information is monitored; and updating the attribute information corresponding to the target virtual object in the collision detection system.
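These two bookkeeping behaviours (dropping a collided object's record and applying monitored state changes) could be implemented as small handlers on top of the CollisionDetectionSystem sketch shown earlier; the event names "created", "changed" and "deleted" are assumptions, not terms taken from the patent.

```python
def on_collision_result(system, first_id, collided):
    """If the game engine reports that a collision occurred, delete the first
    object's attribute information from the collision detection system."""
    if collided:
        system.attributes.pop(first_id, None)

def on_state_change(system, event, obj):
    """Apply monitored state-change information to the stored attributes."""
    if event == "deleted":                      # an existing object was removed
        system.attributes.pop(obj.object_id, None)
    else:                                       # "created" or "changed"
        system.attributes[obj.object_id] = obj
```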
By adopting the apparatus provided by the embodiment of the present application, the collision detection system determines, according to the pre-stored attribute information (including position information) of each virtual object in the game scene, the attribute information respectively corresponding to the first virtual object and the second virtual object in the current game scene, and then determines the distance between the two objects from that attribute information. Only when the distance is smaller than or equal to the first preset threshold does the system construct a virtual bounding box corresponding to the first virtual object and send the first position information of the bounding box, together with the second position information of the second virtual object, to the game engine, which then judges whether the two objects collide.

In this way, no collision detection script needs to be written into each virtual object in the game scene; a unified collision detection system detects all virtual objects, which effectively reduces the script burden on the game engine and improves its running performance. In addition, because the collision detection system performs a preliminary screening of the objects to be detected in the distance dimension, the number of collision detections in the game scene is greatly reduced and unnecessary system performance overhead is avoided. Furthermore, since game scenes change rapidly, greatly reducing the number of collision detections in the same scene not only improves detection efficiency but also avoids inaccurate or incomplete results caused by performing too many detections in one scene. The accuracy of the collision detection results is thus ensured regardless of the number of virtual objects in the scene, and the game engine runs stably.
It should be understood by those skilled in the art that the collision detection apparatus in the game scene in fig. 3 can be used to implement the collision detection method in the game scene, and the detailed description thereof should be similar to the description of the foregoing method, and is not repeated herein in order to avoid complexity.
Based on the same idea, an embodiment of the present application further provides a collision detection device in a game scene, as shown in Fig. 4. Collision detection devices in a game scene may vary significantly due to differences in configuration or performance, and may include one or more processors 401 and a memory 402, where one or more applications or data may be stored in the memory 402. The memory 402 may be transient storage or persistent storage. An application stored in the memory 402 may include one or more modules (not shown), and each module may include a series of computer-executable instructions for the collision detection device in a game scene. Still further, the processor 401 may be configured to communicate with the memory 402 and execute, on the collision detection device, the series of computer-executable instructions in the memory 402. The collision detection device in a game scene may also include one or more power supplies 403, one or more wired or wireless network interfaces 404, one or more input/output interfaces 405, and one or more keyboards 406.
In particular, in this embodiment, the collision detection device in the game scenario includes a memory, and one or more programs, where the one or more programs are stored in the memory, and the one or more programs may include one or more modules, and each module may include a series of computer-executable instructions for the collision detection device in the game scenario, and the one or more programs configured to be executed by the one or more processors include computer-executable instructions for:
according to attribute information of each virtual object in a game scene prestored in the collision detection system, determining first attribute information corresponding to a first virtual object and second attribute information corresponding to a second virtual object in the current game scene; the attribute information includes location information;
determining the distance between the first virtual object and the second virtual object according to the first attribute information and the second attribute information;
if the distance is smaller than or equal to a first preset threshold value, constructing a virtual bounding box corresponding to the first virtual object so that the first virtual object is located in an in-box area of the virtual bounding box; determining first position information of the virtual bounding box in the current game scene;
and sending the first position information corresponding to the virtual bounding box and the second position information corresponding to the second virtual object to a game engine, so that the game engine judges whether the first virtual object and the second virtual object collide based on the first position information and the second position information.
An embodiment of the present application further provides a storage medium storing one or more computer programs. The one or more computer programs include instructions that, when executed by an electronic device comprising multiple application programs, enable the electronic device to perform each process of the foregoing collision detection method in a game scene and achieve the same technical effect; details are not repeated here to avoid repetition.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. One typical implementation device is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being divided into various units by function, and are described separately. Of course, the functionality of the units may be implemented in one or more software and/or hardware when implementing the present application.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a/an ..." does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (12)

1. A collision detection method in a game scene is applied to a collision detection system, and is characterized by comprising the following steps:
according to attribute information of each virtual object in a game scene prestored in the collision detection system, determining first attribute information corresponding to a first virtual object and second attribute information corresponding to a second virtual object in the current game scene; the attribute information includes location information;
determining the distance between the first virtual object and the second virtual object according to the first attribute information and the second attribute information;
if the distance is smaller than or equal to a first preset threshold value, constructing a virtual bounding box corresponding to the first virtual object so that the first virtual object is located in an in-box area of the virtual bounding box; determining first position information of the virtual bounding box in the current game scene;
and sending the first position information corresponding to the virtual bounding box and the second position information corresponding to the second virtual object to a game engine, so that the game engine judges whether the first virtual object and the second virtual object collide based on the first position information and the second position information.
2. The method of claim 1, wherein before determining the distance between the first virtual object and the second virtual object based on the first attribute information and the second attribute information, the method further comprises:
segmenting the first virtual object according to a preset segmentation mode to obtain a plurality of sub-virtual objects; the preset segmentation mode comprises a segmentation size and/or a segmentation position;
accordingly, the determining the distance between the first virtual object and the second virtual object, and the constructing, if the distance is smaller than or equal to the first preset threshold, a virtual bounding box corresponding to the first virtual object so that the first virtual object is located in an in-box area of the virtual bounding box, include:
respectively determining the distance between each sub-virtual object and the second virtual object;
if the distance between at least one target sub-virtual object and the second virtual object is smaller than or equal to the first preset threshold, constructing virtual bounding boxes corresponding to the target sub-virtual objects respectively, so that the target sub-virtual objects are located in the box-in areas of the corresponding virtual bounding boxes respectively.
3. The method of claim 2, wherein the first virtual object comprises a plurality of virtual areas; the third position information corresponding to the first virtual object comprises position information corresponding to each virtual area;
before the segmenting the first virtual object according to the preset segmentation mode, the method further includes:
judging whether the first virtual object meets a preset segmentation condition or not according to the position information corresponding to each virtual area; the preset segmentation conditions include: the distance between two virtual areas is greater than or equal to a second preset threshold value;
if so, executing the step of segmenting the first virtual object according to a preset segmentation mode.
4. The method of claim 1, wherein determining the distance between the first virtual object and the second virtual object based on the first attribute information and the second attribute information comprises:
determining position information of a first key point on the first virtual object according to third position information corresponding to the first virtual object; determining the position information of a second key point on the second virtual object according to the second position information corresponding to the second virtual object;
calculating the distance between the first key point and the second key point according to the position information of the first key point and the position information of the second key point;
determining a distance between the first keypoint and the second keypoint as a distance between the first virtual object and the second virtual object.
5. The method of claim 1, wherein the attribute information further comprises at least one of rotation information, size information, shape information; the rotation information comprises a rotation angle and/or a rotation direction; the virtual bounding box is a three-dimensional figure conforming to a regular shape;
the constructing a virtual bounding box corresponding to the first virtual object so that the first virtual object is located in an in-box area of the virtual bounding box includes:
constructing a three-dimensional graph matched with the size information and/or the shape information corresponding to the first virtual object;
and rotating the three-dimensional graph according to the rotation information corresponding to the first virtual object, so that the rotated three-dimensional graph is matched with the rotation information of the first virtual object.
6. The method of claim 1, wherein after sending the first location information corresponding to the virtual bounding box and the second location information corresponding to the second virtual object to a game engine, the method further comprises:
obtaining a judgment result fed back by the game engine and aiming at whether the first virtual object and the second virtual object collide or not;
and if the judgment result is that the collision occurs, deleting the attribute information of the first virtual object in the current game scene, which is stored in the collision detection system.
7. The method of claim 1, further comprising:
monitoring state change information of the virtual object in the current game scene; the state change information comprises at least one item of attribute information of generating a new virtual object, deleting an existing virtual object and changing the existing virtual object;
when the state change information is monitored, determining a target virtual object corresponding to the state change information; and updating the attribute information corresponding to the target virtual object in the collision detection system.
8. A collision detection device in a game scene is applied to a collision detection system, and is characterized by comprising:
the first determining module is used for determining first attribute information corresponding to a first virtual object and second attribute information corresponding to a second virtual object in the current game scene according to the attribute information of each virtual object in the game scene, which is prestored in the collision detection system; the attribute information includes location information;
a second determining module, configured to determine a distance between the first virtual object and the second virtual object according to the first attribute information and the second attribute information;
the building and determining module is used for building a virtual bounding box corresponding to the first virtual object if the distance is smaller than or equal to a first preset threshold value, so that the first virtual object is located in the in-box area of the virtual bounding box; determining first position information of the virtual bounding box in the current game scene;
a sending module, configured to send the first location information corresponding to the virtual bounding box and the second location information corresponding to the second virtual object to a game engine, so that the game engine determines, based on the first location information and the second location information, whether a collision occurs between the first virtual object and the second virtual object.
9. The apparatus of claim 8, further comprising:
a dividing module, configured to segment the first virtual object according to a preset segmentation mode to obtain multiple sub-virtual objects before the distance between the first virtual object and the second virtual object is determined according to the first attribute information and the second attribute information; the preset segmentation mode comprises a segmentation size and/or a segmentation position;
accordingly, the second determining module comprises:
a first determining unit configured to determine a distance between each of the sub-virtual objects and the second virtual object, respectively;
the construction and determination module comprises:
a first constructing unit, configured to, if there is at least one target sub-virtual object whose distance from the second virtual object is smaller than or equal to the first preset threshold, construct a virtual bounding box corresponding to each target sub-virtual object, so that each target sub-virtual object is located in the in-box area of the corresponding virtual bounding box.
10. The apparatus of claim 9, wherein the first virtual object comprises a plurality of virtual areas; the third position information corresponding to the first virtual object comprises position information corresponding to each virtual area;
the device further comprises:
the judging module is used for judging whether the first virtual object meets a preset segmentation condition or not according to the position information corresponding to each virtual area before the first virtual object is segmented according to the preset segmentation mode; the preset segmentation conditions include: the distance between two virtual areas is greater than or equal to a second preset threshold value;
and the execution module is used for executing the step of segmenting the first virtual object according to the preset segmentation mode if the first virtual object meets the preset segmentation condition.
11. A collision detection device in a game scene, applied to a collision detection system, comprising a processor and a memory electrically connected to the processor, wherein the memory stores a computer program, and the processor is configured to call and execute the computer program from the memory to realize:
according to attribute information of each virtual object in a game scene prestored in the collision detection system, determining first attribute information corresponding to a first virtual object and second attribute information corresponding to a second virtual object in the current game scene; the attribute information includes location information;
determining the distance between the first virtual object and the second virtual object according to the first attribute information and the second attribute information;
if the distance is smaller than or equal to a first preset threshold value, constructing a virtual bounding box corresponding to the first virtual object so that the first virtual object is located in an in-box area of the virtual bounding box; determining first position information of the virtual bounding box in the current game scene;
and sending the first position information corresponding to the virtual bounding box and the second position information corresponding to the second virtual object to a game engine, so that the game engine judges whether the first virtual object and the second virtual object collide based on the first position information and the second position information.
12. A storage medium for use in a collision detection system, wherein the storage medium is used to store a computer program which, when executed by a processor, implements the following process:
according to attribute information of each virtual object in a game scene prestored in the collision detection system, determining first attribute information corresponding to a first virtual object and second attribute information corresponding to a second virtual object in the current game scene; the attribute information includes location information;
determining the distance between the first virtual object and the second virtual object according to the first attribute information and the second attribute information;
if the distance is smaller than or equal to a first preset threshold value, constructing a virtual bounding box corresponding to the first virtual object so that the first virtual object is located in an in-box area of the virtual bounding box; determining first position information of the virtual bounding box in the current game scene;
and sending the first position information corresponding to the virtual bounding box and the second position information corresponding to the second virtual object to a game engine, so that the game engine judges whether the first virtual object and the second virtual object collide based on the first position information and the second position information.
CN202011123913.4A 2020-10-20 2020-10-20 Collision detection method and device in game scene Withdrawn CN112245923A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011123913.4A CN112245923A (en) 2020-10-20 2020-10-20 Collision detection method and device in game scene

Publications (1)

Publication Number Publication Date
CN112245923A true CN112245923A (en) 2021-01-22

Family

ID=74244353

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011123913.4A Withdrawn CN112245923A (en) 2020-10-20 2020-10-20 Collision detection method and device in game scene

Country Status (1)

Country Link
CN (1) CN112245923A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105498211A (en) * 2015-12-11 2016-04-20 网易(杭州)网络有限公司 Method and device for processing position relation in game
CN109448080A (en) * 2018-09-27 2019-03-08 深圳点猫科技有限公司 Language carries out method, the electronic equipment of collision detection to skeleton cartoon based on programming
CN110160579A (en) * 2019-05-29 2019-08-23 腾讯科技(深圳)有限公司 A kind of method and relevant apparatus of object detection
CN110209202A (en) * 2019-06-26 2019-09-06 深圳市道通智能航空技术有限公司 A kind of feas ible space generation method, device, aircraft and aerocraft system
CN110270094A (en) * 2019-07-17 2019-09-24 珠海天燕科技有限公司 A kind of method and device of game sound intermediate frequency control

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113077548A (en) * 2021-04-26 2021-07-06 北京百度网讯科技有限公司 Collision detection method, device, equipment and storage medium for object
WO2022227489A1 (en) * 2021-04-26 2022-11-03 北京百度网讯科技有限公司 Collision detection method and apparatus for objects, and device and storage medium
CN113077548B (en) * 2021-04-26 2024-01-05 北京百度网讯科技有限公司 Collision detection method, device, equipment and storage medium for object
JP7422222B2 (en) 2021-04-26 2024-01-25 ベイジン バイドゥ ネットコム サイエンス テクノロジー カンパニー リミテッド Collision detection method, apparatus, electronic device, storage medium and computer program for object
CN113559518A (en) * 2021-07-30 2021-10-29 网易(杭州)网络有限公司 Interaction detection method and device of virtual model, electronic equipment and storage medium
CN114374540A (en) * 2021-12-15 2022-04-19 广州趣丸网络科技有限公司 Network interaction control method and device
CN114374540B (en) * 2021-12-15 2024-02-02 广州趣丸网络科技有限公司 Control method and device for network interaction
CN115531877A (en) * 2022-11-21 2022-12-30 北京蔚领时代科技有限公司 Method and system for measuring distance in virtual engine
CN115531877B (en) * 2022-11-21 2023-03-07 北京蔚领时代科技有限公司 Method and system for measuring distance in virtual engine

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication (Application publication date: 20210122)