CN115115773B - Collision detection method, device, equipment and storage medium

Collision detection method, device, equipment and storage medium

Info

Publication number
CN115115773B
Authority
CN
China
Prior art keywords
distance
bounding box
point
distance field
directed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210473814.1A
Other languages
Chinese (zh)
Other versions
CN115115773A (en)
Inventor
刘鹏飞
张宇晴
金小刚
叶劲峰
廖詩颺
寇启龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202210473814.1A
Publication of CN115115773A
Application granted
Publication of CN115115773B
Status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/005 Tree description, e.g. octree, quadtree
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2210/00 Indexing scheme for image generation or computer graphics
    • G06T 2210/21 Collision detection, intersection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Architecture (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present application discloses a collision detection method, apparatus, device, and storage medium, relating to the field of computer technology. The method comprises the following steps: determining a sampling point in the intersection region of the bounding box of a first object and the bounding box of a second object; determining a first distance and a second distance based on the values of the sampling point in a first directed distance field and a second directed distance field, respectively; if the first distance is greater than or equal to the second distance, moving the sampling point towards the zero-valued surface of the second directed distance field until the sampling point lies on that surface; then moving the sampling point along the zero-valued surface of the second directed distance field, and determining that the first object and the second object collide if the sampling point reaches the zero-valued surface of the first directed distance field. The method determines that two objects intersect from the intersection of the zero-valued surfaces of their directed distance fields, which is robust, accelerates collision detection, and improves its precision.

Description

Collision detection method, device, equipment and storage medium
Technical Field
Embodiments of the present application relate to the field of computer technology, and in particular to a collision detection method, apparatus, device, and storage medium.
Background
With the development of computer technology, the demand for object simulation keeps growing in industries such as film, manufacturing, and games, and collision detection is a necessary part of object simulation. When a collision between two objects is detected, the next action of the objects can be controlled according to the collision detection information.
In the related art, objects under test are usually simplified into basic geometric shapes, and contact information between them is obtained by detecting whether those basic shapes collide. When an object under test is non-convex, it is instead represented by a directed distance field, and the directed distance field of such an irregular object is typically discretized and stored in voxels or grids.
However, existing collision detection methods based on directed distance fields only handle collisions of points, edges, and faces against a directed distance field; such methods have poor precision and low robustness.
Disclosure of Invention
The embodiment of the application provides a collision detection method, a device, equipment and a storage medium. The technical scheme is as follows:
According to an aspect of an embodiment of the present application, there is provided a collision detection method including:
determining a sampling point in an intersection region of a bounding box of the first object and a bounding box of the second object;
determining a first distance and a second distance based on the values of the sampling point in the first directed distance field and the second directed distance field, respectively; wherein the first directed distance field is the directed distance field of the first object, the second directed distance field is the directed distance field of the second object, the first distance is the distance between the sampling point and the surface of the first object, and the second distance is the distance between the sampling point and the surface of the second object;
if the first distance is greater than or equal to the second distance, moving the sampling point towards the zero-valued surface of the second directed distance field until the sampling point is located on the zero-valued surface of the second directed distance field; wherein the zero-valued surface of the second directed distance field corresponds to the surface of the second object;
moving the sampling point on the zero-valued surface of the second directed distance field, and if the sampling point moves to the zero-valued surface of the first directed distance field, determining that the first object and the second object collide; wherein the zero-valued surface of the first directed distance field corresponds to the surface of the first object.
According to an aspect of an embodiment of the present application, there is provided a collision detection apparatus including:
The sampling point determining module is used for determining a sampling point in the intersection region of the bounding box of the first object and the bounding box of the second object;
The distance determining module is used for determining a first distance and a second distance based on the values of the sampling point in the first directed distance field and the second directed distance field, respectively; wherein the first directed distance field is the directed distance field of the first object, the second directed distance field is the directed distance field of the second object, the first distance is the distance between the sampling point and the surface of the first object, and the second distance is the distance between the sampling point and the surface of the second object;
The sampling point moving module is used for moving the sampling point towards the zero-valued surface of the second directed distance field if the first distance is greater than or equal to the second distance, until the sampling point is located on the zero-valued surface of the second directed distance field; wherein the zero-valued surface of the second directed distance field corresponds to the surface of the second object;
The collision determining module is used for moving the sampling point on the zero-valued surface of the second directed distance field, and determining that the first object and the second object collide if the sampling point moves to the zero-valued surface of the first directed distance field; wherein the zero-valued surface of the first directed distance field corresponds to the surface of the first object.
According to an aspect of an embodiment of the present application, there is provided a computer device including a processor and a memory, the memory having stored therein a computer program that is loaded and executed by the processor to implement the above-described method.
According to an aspect of an embodiment of the present application, there is provided a computer-readable storage medium having stored therein a computer program loaded and executed by a processor to implement the above-described method.
According to one aspect of an embodiment of the present application, there is provided a computer program product comprising computer instructions stored in a computer-readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium and executes them, so that the computer device performs the above method.
The technical scheme provided by the embodiment of the application can have the following beneficial effects:
A sampling point is selected in the intersection region of the bounding boxes of the two objects and moved towards the zero-valued surface of the nearer, second directed distance field; once the sampling point lies on the zero-valued surface of the second directed distance field, it is kept moving along that surface, and when it reaches the zero-valued surface of the first directed distance field, the two objects are determined to collide. Determining that two objects intersect from the intersection of the zero-valued surfaces of their directed distance fields is robust, accelerates collision detection, and improves its precision.
Drawings
FIG. 1 is a schematic illustration of the spatial division provided by one embodiment of the present application;
FIG. 2 is a schematic illustration of a bounding box provided by one embodiment of the present application;
FIG. 3 is a schematic illustration of a bounding box provided by another embodiment of the present application;
FIG. 4 is a flow chart of a collision detection method provided by one embodiment of the present application;
FIG. 5 is a schematic illustration of an object provided by one embodiment of the application;
FIG. 6 is a schematic illustration of an object provided by another embodiment of the present application;
FIG. 7 is a schematic illustration of an intersection area provided by one embodiment of the present application;
FIG. 8 is a schematic diagram of a sampling point provided by an embodiment of the present application;
FIG. 9 is a schematic diagram of a directed distance field provided by one embodiment of the present application;
FIG. 10 is a schematic diagram of a directed distance field provided by another embodiment of the present application;
FIG. 11 is a schematic diagram of a directed distance field provided by another embodiment of the present application;
FIG. 12 is a flow chart of a collision detection method provided by another embodiment of the present application;
FIG. 13 is a flow chart of a collision detection method provided by another embodiment of the present application;
FIG. 14 is a schematic view of a directed distance field provided by another embodiment of the present application;
FIG. 15 is a schematic view of a directed distance field provided by another embodiment of the present application;
FIG. 16 is a flow chart of a collision detection method provided by another embodiment of the present application;
FIG. 17 is a schematic diagram of a bounding box provided by one embodiment of the present application;
FIG. 18 is a schematic illustration of a bounding box provided by another embodiment of the present application;
FIG. 19 is a schematic view of a bounding box provided by another embodiment of the present application;
FIG. 20 is a schematic illustration of intersection region partitioning provided by one embodiment of the present application;
FIG. 21 is a flow chart of a collision detection method provided by another embodiment of the present application;
FIG. 22 is a schematic illustration of an object provided in one embodiment of the application;
FIG. 23 is a schematic illustration of a bounding box provided by another embodiment of the present application;
FIG. 24 is a schematic illustration of an object collision provided by an embodiment of the present application;
FIG. 25 is a schematic view of a cuboid colliding with a directed distance field according to one embodiment of the present application;
FIG. 26 is a schematic view of a cone colliding with a directed distance field according to one embodiment of the present application;
FIG. 27 is a schematic view of a capsule colliding with a directed distance field according to one embodiment of the present application;
FIG. 28 is a schematic view of a cylinder colliding with a directed distance field according to one embodiment of the present application;
FIG. 29 is a schematic view of an object represented by a directed distance field provided by one embodiment of the present application;
FIG. 30 is a schematic representation of an object represented by a triangular surface provided in accordance with one embodiment of the present application;
FIG. 31 is a schematic illustration of an object collision provided by an embodiment of the present application;
FIG. 32 is a schematic illustration of an object collision provided by another embodiment of the present application;
FIG. 33 is a schematic view of an object collision provided by another embodiment of the present application;
FIG. 34 is a block diagram of a collision detection apparatus provided by one embodiment of the present application;
FIG. 35 is a block diagram of a collision detection apparatus provided by another embodiment of the present application;
FIG. 36 is a block diagram of a computer device according to one embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail with reference to the accompanying drawings.
Before describing the technical solution of the present application, some background knowledge related to the application is introduced. The following related technologies may be combined with the technical solutions of the embodiments of the present application as desired, and all such combinations fall within the protection scope of the embodiments of the present application. Embodiments of the present application include at least some of the following content.
(1) Collision detection
In a virtual environment, objects may frequently collide due to user interaction and object motion. To maintain the realism of the virtual environment, these collisions must be detected in time, the corresponding collision responses computed, and the rendering results updated; otherwise, objects may penetrate each other, destroying the realism of the virtual environment and the user's immersion.
An object is the target of collision detection. Collision detection is applied in many fields, such as games, computer animation, physical simulation, computational geometry, robotics, CAD (Computer-Aided Design), and CAM (Computer-Aided Manufacturing), and is an important subject in these research fields; an object may be any concrete entity to which collision detection is applied.
In some embodiments, in the field of games, objects appear as virtual objects in a virtual environment; optionally, an object is one or more of a virtual tree, a virtual animal, a virtual character, etc. In some embodiments, the virtual environment is three-dimensional. Good collision detection requires that a virtual character move smoothly through the scene: steps below a certain height are climbed automatically, while steps that are too high block the character; gentle slopes can be walked up, while slopes that are too steep block the character. When the forward direction is blocked, the character should slide in a reasonable direction as much as possible rather than being forced to stop. While meeting these requirements, detection must also be accurate and stable enough to prevent the character from passing through a wall and falling out of the scene in special situations.
In some other embodiments, collision detection is applied in the field of simulation, and an object is the entity being simulated. For example, in a car-crash simulation experiment, the object may be a car model.
The collision detection problem in a conventional virtual environment can be described in simplified terms as follows: the input to the collision detection system consists of the geometric models of static environmental objects and dynamic moving objects, both of which are sets of basic geometric elements. Environmental objects may be rigid or soft; their positions and orientations do not change, but soft objects may deform under external forces. A moving object can move freely in the virtual environment; its direction and speed depend entirely on the simulation process or on an input device controlled by the user, so its motion cannot be expressed as an equation over time, and only the rotation and translation of its motion at a given time sample relative to the previous time sample or to a fixed reference can be obtained. The task is to determine whether two geometric objects interfere at a given moment, i.e. whether their intersection is non-empty; if a collision occurs, the basic geometric elements involved at the collision points must also be determined.
By detection manner, collision detection can be classified into discrete-point collision detection and continuous collision detection (CCD). Discrete-point collision detection takes two static objects at a certain time T and checks whether they overlap; if they do not overlap it returns the distance between their closest points, and if they do it returns the overlap depth, overlap direction, etc. Continuous collision detection takes the positions of two objects at two times T1 and T2, checks whether they collide while moving from T1 to T2, and, if so, returns the position and normal of the first contact point. Continuous collision detection is the most natural form of collision detection: it simplifies the programming of collision response logic and easily prevents objects from overlapping or passing through each other. Discrete-point collision detection is less friendly: if many triangles already overlap when a collision is detected, especially for triangle mesh objects, separating the two overlapping objects and moving them apart in a reasonable way is a challenge.
Although continuous collision detection is the most natural mode, it is very complex to implement and computationally expensive, so most mature physics and collision detection engines still adopt discrete-point collision detection and use relatively small simulation time steps to keep objects from overlapping deeply or passing through each other.
The collision detection method provided by the embodiments of the present application can quickly determine contact information from the directed distance fields of two objects via the gradient direction; it is simple to implement and inexpensive.
(2) Collision detection algorithm
By spatial dimension, collision detection algorithms can be divided into two-dimensional and three-dimensional collision detection; since the embodiments of the present application study collision detection on three-dimensional models, three-dimensional algorithms are considered here. Of course, the technical solution provided by the present application is also applicable to two-dimensional scenes. By spatial domain, collision detection algorithms can be classified into graphics-space and image-space algorithms; graphics-space algorithms include space-partitioning-based and convex-body-based collision detection algorithms. Image-space collision detection algorithms are not described in detail in the present application.
Space-partitioning-based collision detection algorithms: these algorithms divide the space containing the models in a three-dimensional scene into unit cells and perform collision detection on objects that fall in the same cell. Common space-partitioning schemes include uniform grids, BSP (Binary Space Partitioning) trees, and octrees. They are mainly suited to objects distributed uniformly in three-dimensional space; when many objects crowd the same local region, detection efficiency is low.
Convex-body-based collision detection algorithms: these algorithms use the distance between models and the geometric information of model vertices to determine whether neighboring objects collide. Widely used algorithms include the Lin-Canny collision detection algorithm and the V-Clip algorithm, which optimizes Lin-Canny.
Only the octree algorithm used in the embodiments of the present application is described below. Of course, other algorithms may also be used in the embodiments of the present application; this is not limited here.
Octree algorithm: a common method and data structure for processing three-dimensional space. Storing voxels in an octree effectively reduces their high memory requirements and makes the discretized model easier to understand. A good octree data structure can search neighboring nodes efficiently and conveniently, which speeds up voxelization inside an object and collision detection. Referring to fig. 1, a schematic diagram of space partitioning according to an embodiment of the present application is shown. Space 110 is a space to be tested; when space 110 overlaps other spaces, it is divided by the octree algorithm into 8 subspaces 120 (only one subspace 120 is shown in fig. 1). Each subspace 120 is tested, and when a subspace 120 is found to overlap other spaces but not completely, it is again divided by the octree algorithm into 8 subspaces 130 (only one subspace 130 is shown in fig. 1). Subspace 130 is tested in the same way, and so on, until every resulting subspace either completely overlaps other spaces or does not overlap them at all.
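As an illustration of the recursive scheme above, the following is a minimal Python sketch of octree-style subdivision for overlap testing. The `disjoint` and `fully_inside` predicates and the `max_depth` cutoff are assumptions of this example, not part of the patent.

```python
# Minimal sketch of octree-style subdivision for overlap testing.
# `disjoint` / `fully_inside` stand in for the "completely
# non-coincident" / "completely coincident" tests described above.

def subdivide(box):
    """Split an axis-aligned box ((min), (max)) into 8 child octants."""
    (x0, y0, z0), (x1, y1, z1) = box
    mx, my, mz = (x0 + x1) / 2, (y0 + y1) / 2, (z0 + z1) / 2
    xs, ys, zs = (x0, mx, x1), (y0, my, y1), (z0, mz, z1)
    return [((xs[i], ys[j], zs[k]), (xs[i+1], ys[j+1], zs[k+1]))
            for i in range(2) for j in range(2) for k in range(2)]

def octree_overlap_cells(box, other, disjoint, fully_inside, max_depth=5):
    """Collect leaf cells of `box` that partially overlap `other`."""
    if disjoint(box, other) or fully_inside(box, other):
        return []                      # fully out or fully in: stop here
    if max_depth == 0:
        return [box]                   # partially overlapping leaf cell
    cells = []
    for child in subdivide(box):       # recurse into the 8 octants
        cells += octree_overlap_cells(child, other, disjoint,
                                      fully_inside, max_depth - 1)
    return cells
```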
(3) Bounding box
Bounding box technology is one of the most common acceleration methods in collision detection: when testing two complex objects for collision, the simple volumes enclosing the objects are tested first, and the enclosed objects are tested further only if their bounding volumes collide. Using bounding boxes reduces the amount of computation and speeds up detection. Only the AABB (Axis-Aligned Bounding Box) and OBB (Oriented Bounding Box) used in the embodiments of the present application are briefly described below.
Axis-aligned bounding box: an AABB in a three-dimensional scene is a simple cuboid whose sides are parallel to the three coordinate axes. It is simple to construct but has large spatial redundancy; it is mainly suitable for enclosing convex objects and less suitable for rotating, moving, or deforming objects. Referring to fig. 2, a schematic diagram of a bounding box according to an embodiment of the present application is shown. In space 200, the axis-aligned bounding box of object 210 is 220, and each side of axis-aligned bounding box 220 is parallel to a coordinate axis. Only two dimensions are illustrated in the figure.
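For reference, below is a minimal sketch of the standard AABB overlap test, assuming the usual min/max-corner representation; two AABBs overlap exactly when their intervals overlap on all three axes.

```python
# Two AABBs overlap iff their intervals overlap on every axis.
def aabb_overlap(a_min, a_max, b_min, b_max):
    return all(a_min[i] <= b_max[i] and b_min[i] <= a_max[i]
               for i in range(3))

# Example: unit box vs. a box shifted by 0.5 on x -> they overlap.
print(aabb_overlap((0, 0, 0), (1, 1, 1), (0.5, 0, 0), (1.5, 1, 1)))  # True
```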
Oriented bounding box: a box constructed according to the orientation of the enclosed object. This lets the OBB fit the model as tightly as possible and reduces wasted space, but since the box is not axis-aligned, its axes must first be computed during construction. Referring to fig. 3, a schematic diagram of a bounding box according to another embodiment of the present application is shown. In space 300, the oriented bounding box of object 310 is 320, and the sides of oriented bounding box 320 are not parallel to the coordinate axes.
In addition, there is the discrete oriented polytope bounding box: a convex hull that contains the object and whose face normal vectors all come from a fixed set of directions.
In the prior art there is still no method that automatically generates effective bounding boxes for the parts of an object in three-dimensional space; even where some methods can generate bounding boxes, it is difficult to completely wrap every part of the object in a bounding box of minimal size.
(4) Directed distance field
The model representations in wide use today are mainly polygon meshes, point clouds, and directed distance fields. A polygon mesh is the collection of vertices, edges, and faces that make up an object, defining its shape and outline. A point cloud is a massive set of points expressing the spatial distribution and surface characteristics of a target under a common spatial reference frame; once the spatial coordinates of sampling points on the object surface are obtained, the resulting point set is called a "point cloud".
A distance field is in effect a scalar field giving the shortest distance from any point in space to a given object surface. In graphics, a directed (signed) distance field is typically employed to indicate whether a point is outside or inside the object: the value of the directed distance field is positive when the point lies outside the object, negative when the point lies inside the object, and 0 when the point lies on the object surface. The embodiments of the present application do not restrict the sign convention; in a possible embodiment, the sign of the directed distance field is positive inside the object and negative outside.
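To make the first sign convention concrete, here is the classic analytic signed distance function of a sphere (a textbook formula, used here only as an illustration):

```python
import math

def sphere_sdf(p, center, radius):
    """Signed distance from point p to a sphere: negative inside,
    zero on the surface, positive outside."""
    dx, dy, dz = (p[i] - center[i] for i in range(3))
    return math.sqrt(dx*dx + dy*dy + dz*dz) - radius

print(sphere_sdf((2, 0, 0), (0, 0, 0), 1.0))  #  1.0 (outside)
print(sphere_sdf((1, 0, 0), (0, 0, 0), 1.0))  #  0.0 (on the surface)
print(sphere_sdf((0, 0, 0), (0, 0, 0), 1.0))  # -1.0 (inside)
```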
A distance field defines the minimum distance from every point in the field to a closed surface. Representing a closed surface with a distance field has the advantage of placing no restrictions on topology. In addition, the distance and normal estimates required for collision detection and response are very fast and independent of the geometric complexity of the object. To reduce storage requirements or generation time, the distance field can be solved less precisely at reduced accuracy, so the distance field method can trade accuracy against computational efficiency. Various data structures can represent distance fields, such as uniform grids, octrees, and BSP trees. Optionally, the directed distance field is a grid-volume distance field whose representation quality is controlled by the volume texture resolution, or by the distance field resolution ratio.
Many algorithms generate directed distance fields. For simple basic shapes such as spheres and cuboids they are easy to compute analytically from a distance equation (an expression for the shortest distance from any point in the scene to the object), and more complex shapes can be built by applying Boolean operations to distance equations using the idea of constructive solid geometry. Expressing simple objects by distance equations and composing complex ones through Boolean constructive-solid-geometry logic, however, generally suits directed distance field descriptions of small scenes only. In practice one often faces more complex scenes and models, and such complex objects are usually represented by meshes. Several algorithms generate directed distance fields from model meshes, including brute-force distance field generation, HP-adaptive distance field generation, and fast marching distance field generation.
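As a sketch of the constructive-solid-geometry idea just mentioned, Boolean operations on signed distance functions reduce to min/max combinations. This is standard graphics practice rather than the patent's own method, and the shapes chosen here are illustrative:

```python
import math

def sphere(p, c, r):
    return math.dist(p, c) - r

def box(p, half):  # axis-aligned box centered at the origin
    q = [abs(p[i]) - half[i] for i in range(3)]
    outside = math.sqrt(sum(max(v, 0.0) ** 2 for v in q))
    inside = min(max(q[0], max(q[1], q[2])), 0.0)
    return outside + inside

def union(d1, d2):        return min(d1, d2)
def intersection(d1, d2): return max(d1, d2)
def subtraction(d1, d2):  return max(d1, -d2)  # d1 minus d2

# Distance to "unit box with a spherical bite taken out", at one point:
p = (0.9, 0.0, 0.0)
print(subtraction(box(p, (1, 1, 1)), sphere(p, (1, 0, 0), 0.5)))
```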
In the prior art, the collision detection method based on the directed distance field is limited to the point-to-directed distance field, the edge-to-directed distance field and the face-to-directed distance field, and the more complex the detection primitive is, the more easily the detection primitive is trapped into local optimum, so that no robust and efficient algorithm for collision detection of the directed distance field and the directed distance field exists at present. The embodiment of the application provides a collision detection method based on a directed distance field and a directed distance field, which well perfects the collision detection method related to the directed distance field.
(5) Bounding box generation
A VAE (Variational Autoencoder) is a deep-learning generative model based on variational inference. A VAE contains latent variables and is trained with neural networks to obtain two functions (also called the inference network and the generation network) so as to generate data not contained in the input data. In a possible embodiment, the object is sampled, the information of the sampled points is fed into a VAE neural network, and the network generates a set of parameterized cuboids from the input points. In a possible embodiment, the output cuboids are optimized with a loss function.
In the embodiments of the present application, a VAE model outputs a set of candidate cuboid bounding boxes, which are then adjusted manually to obtain the optimal cuboid bounding boxes for an object.
In the method provided by the embodiments of the present application, each step may be executed by a computer device, i.e. any electronic device capable of storing and processing data, such as a server or a terminal device. The server may be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN (Content Delivery Network), big data, and artificial intelligence platforms. The terminal device may be, but is not limited to, a smartphone, tablet computer, notebook computer, or desktop computer. In the following method embodiments, for convenience of description, the executing entity of each step is simply described as a computer device.
Referring to fig. 4, a flowchart of a collision detection method according to an embodiment of the application is shown. The subject of execution of the steps of the method may be a computer device. The method may comprise at least one of the following steps (410-440):
In step 410, a sampling point is determined in an intersection region of a bounding box of a first object and a bounding box of a second object.
An object here is either an entire object or a part of an object.
In some embodiments, the object is a part of a larger object; that is, it belongs to and is contained in that object. Optionally, the object is a component of the larger object. Fig. 5 schematically shows an object: in space 50, object 51 has several constituent parts, of which the head 52 and the front chest 53 are each objects of object 51.
In some embodiments, the object is an entire object in itself. Fig. 6 shows another schematic diagram of objects: in space 60, object (puppy) 61 and object (puppy) 62 are both objects in the sense of the embodiments of the present application.
In some embodiments, the first object and the second object are parts of different objects. In one possible embodiment, as shown in fig. 5, the first object is the head 52 or the front chest 53 (the second object is not shown); in another possible embodiment, the second object is the head 52 or the front chest 53 (the first object is not shown).
In some embodiments, the first object and the second object are different whole objects, such as the first object 61 and the second object 62 shown in fig. 6.
In some embodiments, the bounding box of the first object is a bounding box that completely encloses the first object. The intersection region is the intersection of the spaces of the two bounding boxes; its size and shape depend on the shapes of the bounding boxes, and it may be a regular or an irregular volume. Optionally, the bounding boxes are of any of various types, such as AABBs or OBBs. Fig. 7 shows an intersection region: in space 70, the first object 72 is the head of object 71, and bounding box 73 of the first object 72 and bounding box 74 of the second object (not shown) have an intersection region 75.
The sampling point is the initial point for collision detection of objects in space. Fig. 8 shows sampling points: in space 70, points a, b, c, d, and e are all possible sampling points in intersection region 75.
In some embodiments, the sampling point is the midpoint of the intersection region of the bounding box of the first object and the bounding box of the second object, where the midpoint refers to the geometric center of the intersection region. Optionally, this midpoint is determined as the sampling point; as shown in fig. 8, point a is the midpoint (geometric center) of intersection region 75 and is taken as the sampling point.
In some embodiments, the sampling point is any point at which the bounding box of the first object intersects the bounding box of the second object. Optionally, such an intersection point is determined as the sampling point; as shown in fig. 8, points b and c are points at which the two bounding boxes intersect, and either of them may be taken as the sampling point.
In some embodiments, the sampling point is a target vertex in the intersection region of the two bounding boxes. Optionally, the target vertex is any vertex of either bounding box that falls within the intersection region and is determined as the sampling point; as shown in fig. 8, of the 8 vertices of the bounding box of the first object only one (point e) falls in intersection region 75, and of the 8 vertices of the bounding box of the second object only one (point d) falls in intersection region 75, so either of points e and d may be taken as the target vertex and used as the sampling point.
In some embodiments, the sampling point is any point of the intersection region of the two bounding boxes: as shown in fig. 8, points a, b, c, d, and e in intersection region 75 are all possible sampling points, and any other point falling in intersection region 75 may also serve as the sampling point.
Considering that, if the directed distance field is non-convex, the collision detection result based on a single sampling point may not be optimal, multiple sampling points can be selected for collision detection, which effectively avoids a local optimum being mistaken for the global optimum.
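A minimal sketch of step 410 under the assumption that both bounding boxes are AABBs (other bounding-box types would need a different intersection routine); the helper names are illustrative:

```python
def aabb_intersection(a_min, a_max, b_min, b_max):
    """Intersection AABB of two overlapping AABBs, or None if disjoint."""
    lo = tuple(max(a_min[i], b_min[i]) for i in range(3))
    hi = tuple(min(a_max[i], b_max[i]) for i in range(3))
    if any(lo[i] > hi[i] for i in range(3)):
        return None
    return lo, hi

def candidate_sampling_points(a_min, a_max, b_min, b_max):
    """Midpoint of the intersection region plus any bounding-box
    vertices falling inside it (two of the options described above)."""
    region = aabb_intersection(a_min, a_max, b_min, b_max)
    if region is None:
        return []
    lo, hi = region
    pts = [tuple((lo[i] + hi[i]) / 2 for i in range(3))]  # geometric center
    for bmin, bmax in ((a_min, a_max), (b_min, b_max)):
        for vx in (bmin[0], bmax[0]):
            for vy in (bmin[1], bmax[1]):
                for vz in (bmin[2], bmax[2]):
                    v = (vx, vy, vz)
                    if all(lo[i] <= v[i] <= hi[i] for i in range(3)):
                        pts.append(v)                     # vertex in region
    return pts
```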
In step 420, a first distance and a second distance are determined based on the values of the sampling point in the first directed distance field and the second directed distance field, respectively.
The first directed distance field is the directed distance field of the first object. In some embodiments, the distance field of the first object reflects the distance from a point in space to the first object, and the directed distance field reflects the signed distance from the point to the first object. Optionally, the value of a point in the first directed distance field is negative when the point is inside the first object, positive when it is outside, and zero when it is on the surface of the first object. Fig. 9 illustrates a directed distance field: in space 90, within the first directed distance field 92 of bounding box 91 of the first object (the object is not shown; only the field inside bounding box 91 is shown), the solid line 93 corresponds to the surface of the first object, and points on solid line 93 are at distance zero from that surface. In some embodiments, points inside solid line 93 have negative values in first directed distance field 92, and points outside solid line 93 have positive values.
The representation of a directed distance field may be at least one of: analytic (function-based), grid-based, or neural-network-based. In a possible embodiment, the first directed distance field is represented analytically: for each point in space, a distance function yields the distance from the point to the surface of the first object, and the directed distance field of the first object is obtained from the point's position and its distance to the surface. In a possible embodiment, the first directed distance field is grid-based: space is divided into grid cells at some resolution, and the directed distance field of the first object is obtained from the position of each cell and the distance from its center to the object surface. In a possible embodiment, the first directed distance field is represented by a neural network.
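As an illustration of the grid-based representation, below is a minimal trilinear-interpolation lookup into a dense SDF grid; the uniform-grid layout and cell size are assumptions of this sketch:

```python
import numpy as np

def sample_grid_sdf(grid, origin, cell, p):
    """Trilinearly interpolate a dense SDF `grid` (nx, ny, nz) whose
    node (i, j, k) sits at origin + cell * (i, j, k), at point p."""
    u = (np.asarray(p, dtype=float) - origin) / cell   # grid coordinates
    i0 = np.clip(np.floor(u).astype(int), 0, np.array(grid.shape) - 2)
    f = u - i0                                         # fractional part
    d = 0.0
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((f[0] if dx else 1 - f[0]) *
                     (f[1] if dy else 1 - f[1]) *
                     (f[2] if dz else 1 - f[2]))
                d += w * grid[i0[0] + dx, i0[1] + dy, i0[2] + dz]
    return d
```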
A directed distance field generated analytically simulates the surface of the first object (i.e. the zero-valued surface described below) most faithfully. A grid-based directed distance field speeds up the generation of the first directed distance field and reduces computation. A neural-network-based directed distance field suits scenarios with low real-time requirements on collision detection.
The second directed distance field is the directed distance field of the second object; it is analogous to the first directed distance field and is not described again here. As shown in fig. 9, in space 90, within the second directed distance field 95 of bounding box 94 of the second object (the object is not shown; only the field inside bounding box 94 is shown), the solid line 96 corresponds to the surface of the second object, and points on solid line 96 are at distance zero from that surface. In some embodiments, points inside solid line 96 have negative values in second directed distance field 95, and points outside solid line 96 have positive values.
The first distance is the distance between the sampling point and the surface of the first object; in a possible embodiment it is the minimum such distance. In some embodiments, when the sampling point is outside the first object, the first distance equals the value of the sampling point in the first directed distance field; when the sampling point is inside the first object, the first distance is the negation of that value. Fig. 10 illustrates this: sampling point a is the midpoint of the intersection region of the two bounding boxes, point p1 is the point on the surface of the first object closest to it, the distance between a and p1 is the first distance, and the line through a and p1 is perpendicular to the surface of the first object.
The second distance is the distance between the sampling point and the surface of the second object; in a possible embodiment it is the minimum such distance. In some embodiments, when the sampling point is outside the second object, the second distance equals the value of the sampling point in the second directed distance field; when the sampling point is inside the second object, the second distance is the negation of that value. As shown in fig. 10, point p2 is the point on the surface of the second object closest to sampling point a, the distance between a and p2 is the second distance, and the line through a and p2 is perpendicular to the surface of the second object.
In some embodiments, the sampling point is the midpoint of the intersection region of the two bounding boxes, and the first and second distances are determined from the values of the sampling point in the first and second directed distance fields, respectively. As shown in fig. 10, point a is the sampling point and lies between the surface of the first object and the surface of the second object, so its value in the first directed distance field is positive and its value in the second directed distance field is also positive. The first distance is the value of point a in the first directed distance field, i.e. the distance between point a and point p1; the second distance is the value of point a in the second directed distance field, i.e. the distance between point a and point p2.
In some embodiments, the sampling point is a target vertex in the intersection region of the two bounding boxes, and the first and second distances are determined from the values of the sampling point in the first and second directed distance fields, respectively. As shown in fig. 10, point e is the sampling point, a vertex of the bounding box of the first object lying in the intersection region; its value in the first directed distance field is positive and its value in the second directed distance field is negative. The first distance is the value of point e in the first directed distance field, i.e. the distance between point e and point p3. The second distance is the negation of the value of point e in the second directed distance field, i.e. the distance between point e and point p4.
In some embodiments, when the sampling point is inside the second object, the first distance and the second distance need not be determined. As shown in fig. 10, when the sampling point is point e, a vertex of the bounding box of the first object in the intersection region, and point e lies inside the second object, the first and second distances need not be determined.
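Putting step 420 together, a short sketch of extracting the two comparison distances from the SDF values; `sdf1` and `sdf2` are assumed callables returning signed values under the outside-positive convention:

```python
def comparison_distances(sdf1, sdf2, p):
    """First/second distance of step 420: the unsigned distance from
    sampling point p to each object's surface, i.e. |sdf(p)|."""
    d1, d2 = sdf1(p), sdf2(p)
    if d2 < 0:
        # Sampling point already inside the second object; the method
        # can skip the comparison and move straight toward the
        # zero-valued surface of the first directed distance field.
        return None
    return abs(d1), abs(d2)
```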
In step 430, if the first distance is greater than or equal to the second distance, the sampling point is moved toward the zero-valued surface of the second directed distance field until it lies on that surface.
The zero-valued surface of the second directed distance field corresponds to the surface of the second object, which has been described in more detail above and will not be described in detail here.
In some embodiments, when the representation of the second directed distance field is analytic, the zero-valued surface of the second directed distance field is exactly the surface of the second object. In some embodiments, when the representation is grid-based, the zero-valued surface may lie slightly inside or slightly outside the second object. Although in these two cases the zero-valued surface is not exactly equivalent to the object surface, it can approximate the surface arbitrarily closely as the resolution increases and the cell size shrinks, so the zero-valued surface of the second directed distance field can still be considered to correspond to the surface of the second object.
In some embodiments, if the first distance is greater than or equal to the second distance, the sampling point is moved toward the zero-valued surface of the second directed distance field in any direction, including but not limited to the gradient direction. The gradient direction is the direction in which the distance the sampling point must travel to reach the zero-valued surface of the second directed distance field is minimal.
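When the field is not analytic, the gradient can be estimated numerically; a common central-difference sketch follows (the step size `h` is an assumed tuning parameter):

```python
def sdf_gradient(sdf, p, h=1e-4):
    """Central-difference estimate of the gradient of an SDF at p.
    For a true distance field the gradient has unit length and points
    away from the surface, so -grad(sdf) at an outside point heads
    toward the zero-valued surface along the shortest path."""
    g = []
    for i in range(3):
        lo = list(p); hi = list(p)
        lo[i] -= h; hi[i] += h
        g.append((sdf(tuple(hi)) - sdf(tuple(lo))) / (2 * h))
    return tuple(g)
```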
As shown in fig. 10, when the sampling point is point a, the first distance is the distance between a and p1 and the second distance is the distance between a and p2; the first distance is greater than the second, so the sampling point is moved toward the zero-valued surface of the second directed distance field until it lies on that surface. When the sampling point (point a) reaches any position on the zero-valued surface of the second directed distance field, the movement stops.
Likewise, when the sampling point is point e, the first distance is the distance between e and p3 and the second distance is the distance between e and p4; if the first distance is greater than the second, the sampling point is moved toward the zero-valued surface of the second directed distance field and stops when it reaches any position on that surface.
Conversely, if the first distance is less than the second distance, the sampling point is moved toward the zero-valued surface of the first directed distance field until it lies on that surface. In some embodiments the sampling point moves in an arbitrary direction; in some embodiments it moves along its gradient direction in the first directed distance field. The details mirror the case above and are not repeated here.
In some embodiments, when the sampling point is inside the second object, the first and second distances need not be determined: the sampling point is simply moved toward the zero-valued surface of the first directed distance field, without comparing its distances to the two zero-valued surfaces, until it reaches the zero-valued surface of the second directed distance field. The direction toward the zero-valued surface of the first directed distance field may be any direction from the sampling point to that surface; in some embodiments it is the gradient direction of the sampling point in the first directed distance field, optionally the direction of gradient descent. As shown in fig. 10, point e is the sampling point, a vertex of the bounding box of the first object in the intersection region, and lies inside the second object; the first and second distances need not be determined, and point e moves in its gradient-descent direction in the first directed distance field (the direction from e toward p3) until it reaches the zero-valued surface of the second directed distance field (i.e. point e moves to point p5, which lies on that surface).
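Step 430 can be realized with a sphere-tracing-style projection: repeatedly step along the gradient by the current signed value until the value vanishes. A minimal self-contained sketch, with tolerance and iteration cap as assumed parameters:

```python
def project_to_zero_surface(sdf, p, tol=1e-6, max_iter=100, h=1e-4):
    """Move p onto the zero-valued surface of `sdf` by stepping along
    the gradient by the signed distance at each iteration."""
    def grad(q):
        return tuple(
            (sdf(tuple(q[j] + (h if j == i else 0.0) for j in range(3)))
             - sdf(tuple(q[j] - (h if j == i else 0.0) for j in range(3))))
            / (2 * h)
            for i in range(3))
    p = tuple(p)
    for _ in range(max_iter):
        d = sdf(p)
        if abs(d) < tol:
            return p                      # reached the zero-valued surface
        g = grad(p)
        n = max(sum(c * c for c in g) ** 0.5, 1e-12)
        p = tuple(p[i] - d * g[i] / n for i in range(3))
    return p
```

Note that stepping by the signed value handles both sides of the surface: an outside point (positive value) moves against the gradient, and an inside point (negative value) moves along it, both toward the zero-valued surface.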
In step 440, the sampling point is moved on the zero-valued surface of the second directed distance field, and if it moves to the zero-valued surface of the first directed distance field, it is determined that the first object and the second object collide.
The zero-valued surface of the first directed distance field corresponds to the surface of the first object, which has been described in more detail above and will not be described in detail here.
The sampling point is moved along the zero-valued surface of the second directed distance field; if it reaches the zero-valued surface of the first directed distance field, the two zero-valued surfaces intersect, and it is determined that the first object and the second object collide. As shown in fig. 9, sampling point a is moved toward the zero-valued surface of the second directed distance field until it reaches some position on it, and is then moved along zero-valued surface 96; when it reaches point q1 it lies on zero-valued surface 93 of the first directed distance field, showing that first directed distance field 92 intersects second directed distance field 95. Since the zero-valued surface of the first directed distance field corresponds to the surface of the first object and that of the second directed distance field to the surface of the second object, the collision of the two objects can be determined from the intersection of the two zero-valued surfaces.
In a possible embodiment, when the representation of the directed distance field is analytic, the set of points at zero distance from the object surface is the zero-valued surface of the directed distance field, which then coincides exactly with the object surface; moving the sampling point on the zero-valued surface is moving it on the object surface itself.
In a possible embodiment, when the representation of the directed distance field is grid-based, the grid volume has no zero-valued surface in the strict sense; in the embodiments of the present application, the set of centers of all grid cells closest to the object surface may be taken as the zero-valued surface, in which case the sampling point moves within this set of cell centers. When the resolution is high, the cell centers closest to the object surface lie essentially on the surface, so this is not elaborated further.
If the sampling point cannot move to the zero-valued surface of the first directed distance field, it is determined that the first object and the second object do not collide. Fig. 11 illustrates such a directed distance field. As shown in fig. 11, sampling point a is moved in any direction towards the zero-valued surface of the second directed distance field and reaches some position on it; the sampling point is then moved along that surface. Point q2 is the point on the zero-valued surface of the second directed distance field closest to the first directed distance field, and when the sampling point reaches point q2 it still cannot move onto the zero-valued surface of the first directed distance field, which shows that the two zero-valued surfaces do not intersect. Since the first directed distance field corresponds to the surface of the first object and the second directed distance field corresponds to the surface of the second object, it can be determined from the non-intersection of the two fields that the first object does not collide with the second object.
By selecting a sampling point and judging whether the zero-valued surfaces of the two directed distance fields (corresponding to the surfaces of the two objects) intersect, it is determined whether the two objects collide; this further reduces the amount of calculation and increases the collision detection speed.
According to the technical scheme provided by the embodiments of the application, a sampling point is selected in the intersection region of the bounding boxes of the two objects and moved towards the zero-valued surface of the second directed distance field, which is the nearer of the two; once the sampling point is located on the zero-valued surface of the second directed distance field, it is kept moving on that surface, and when it moves to the zero-valued surface of the first directed distance field, it is determined that the two objects collide. Determining the intersection of two objects from the intersection of the zero-valued surfaces of their directed distance fields speeds up collision detection and improves collision detection precision.
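To make the procedure of the above steps concrete, the following is a minimal Python sketch of the zero-valued-surface walk, assuming analytic sphere SDFs as stand-ins for the two objects; the helper names, step sizes and tolerances are illustrative assumptions and not part of the claimed method:

```python
import numpy as np

def sphere_sdf(center, radius):
    """Analytic signed distance field of a sphere (negative inside)."""
    c = np.asarray(center, dtype=float)
    return lambda p: np.linalg.norm(p - c) - radius

def numerical_gradient(sdf, p, h=1e-5):
    """Central-difference estimate of the SDF gradient at p."""
    g = np.zeros(3)
    for i in range(3):
        e = np.zeros(3); e[i] = h
        g[i] = (sdf(p + e) - sdf(p - e)) / (2.0 * h)
    return g

def project_to_surface(sdf, p, iters=20):
    """Move p onto the zero-valued surface of sdf along its gradient."""
    for _ in range(iters):
        d = sdf(p)
        if abs(d) < 1e-7:
            break
        g = numerical_gradient(sdf, p)
        p = p - d * g / max(np.dot(g, g), 1e-12)  # Newton-style step toward the surface
    return p

def sdf_collision_test(sdf_first, sdf_second, seed, steps=200, lr=0.05, eps=1e-4):
    """Walk a sample point over the zero surface of sdf_second while decreasing
    its value in sdf_first; collision if it reaches sdf_first's zero surface."""
    p = project_to_surface(sdf_second, np.asarray(seed, dtype=float))
    for _ in range(steps):
        if sdf_first(p) <= eps:               # reached the first field's zero surface
            return True
        g = numerical_gradient(sdf_first, p)
        n = numerical_gradient(sdf_second, p)
        n = n / max(np.linalg.norm(n), 1e-12)
        t = g - np.dot(g, n) * n              # tangential component: stay on the surface
        if np.linalg.norm(t) < 1e-9:          # local minimum of sdf_first on the surface
            return False
        p = project_to_surface(sdf_second, p - lr * t)
    return sdf_first(p) <= eps

# Two overlapping spheres: the walk should find a point lying on both surfaces.
a = sphere_sdf((0.0, 0.0, 0.0), 1.0)
b = sphere_sdf((1.5, 0.0, 0.0), 1.0)
print(sdf_collision_test(a, b, seed=(0.75, 0.5, 0.0)))  # expected: True
```

The tangential step g − (g·n)n keeps the walk on the zero-valued surface of the second field while decreasing the value of the first field, mirroring the movement described above.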
Referring to fig. 12, a flowchart of a collision detection method according to another embodiment of the present application is shown. The subject of execution of the steps of the method may be a computer device. The method may comprise at least one of the following steps (510-550):
in step 510, a sampling point is determined in an intersection region of a bounding box of the first object and a bounding box of the second object.
In step 520, a first distance and a second distance are determined based on the values of the sample points in the first directed-distance field and the second directed-distance field, respectively.
In step 530, if the first distance is greater than or equal to the second distance, the point on the zero-valued surface of the second directed distance field at the smallest distance from the sampling point is determined as the target point.
The target point is the point on the zero-valued surface of the second directed distance field closest to the sampling point, so the line connecting the target point and the sampling point is necessarily perpendicular to the surface of the second object. Dropping a perpendicular from the sampling point to the surface of the second object, the foot of the perpendicular on that surface is the target point.
In some embodiments, if the first distance is greater than or equal to the second distance, a target point on the zero-valued surface of the second directed distance field having the smallest distance to the sampling point is determined. As shown in fig. 10, the first distance is the distance between the point a and the point p1, the second distance is the distance between the point a and the point p2, the first distance is greater than the second distance, the straight line between the point a and the point p2 is perpendicular to the surface of the second object, and the target point is the point p2.
In some embodiments, if the first distance is greater than or equal to the second distance, a target point on the zero-valued surface of the second directed distance field having the smallest distance to the sampling point is determined. As shown in fig. 10, the first distance is the distance between the point e and the point p3, the second distance is the distance between the point e and the point p4, the first distance is greater than the second distance, the straight line between the point e and the point p4 is perpendicular to the surface of the second object, and the target point is the point p4.
Step 540, moving the sampling point according to the gradient direction of the second directed distance field until the sampling point moves to the target point, wherein the gradient direction is the direction in which the sampling point points to the target point.
The gradient direction is the direction in which the sampling point points to the target point. When the sampling point is located outside the zero-valued surface of the second directed distance field, the gradient direction points from the sampling point outside the surface to the target point on the surface, i.e., it is the gradient descent direction. When the sampling point is located inside the zero-valued surface, the gradient direction points from the sampling point inside the surface to the target point on the surface, i.e., it is the gradient ascent direction.
In some embodiments, if the first distance is greater than or equal to the second distance, the sampling point is moved in the gradient direction of the second directed distance field until it reaches the target point, the gradient direction here being the gradient descent direction. The gradient descent direction of the sampling point in the second directed distance field is the direction through the sampling point that is perpendicular to the surface of the second object and points towards the interior of the second object. As shown in fig. 10, the first distance is greater than the second distance, so the sampling point is moved towards the zero-valued surface of the second directed distance field until it reaches the target point on that surface. The gradient descent direction is shown in fig. 10 by the arrow from point a (the sampling point) to point p2 (the target point). When point a reaches point p2, the movement stops; p2 lies on the zero-valued surface of the second directed distance field.
In some embodiments, if the first distance is greater than or equal to the second distance, the sampling point is moved in the gradient direction of the second directed distance field until it reaches the target point, the gradient direction here being the gradient ascent direction. The gradient ascent direction of the sampling point in the second directed distance field is the direction through the sampling point that is perpendicular to the surface of the second object and points towards the exterior of the second object. As shown in fig. 10, the first distance is greater than the second distance, so the sampling point is moved towards the zero-valued surface of the second directed distance field until it reaches the target point on that surface. The gradient ascent direction is shown in fig. 10 by the arrow from point e (the sampling point) to point p4 (the target point). When point e reaches point p4, the movement stops; p4 lies on the zero-valued surface of the second directed distance field.
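As an illustrative sketch of this step, assuming an exact SDF (whose gradient has unit length), a single Newton-style step p − f(p)·∇f(p)/|∇f(p)| moves the sampling point straight to the target point, covering both the gradient descent (outside) and gradient ascent (inside) cases; the sphere field below is an assumption of the sketch:

```python
import numpy as np

def step_to_target(sdf, grad, p):
    """One exact step: for a true SDF, p - f(p) * grad(p) lands on the nearest
    surface point (the target point). When f > 0 (outside) this moves against
    the gradient (descent); when f < 0 (inside) it moves along +grad (ascent)."""
    g = grad(p)
    return p - sdf(p) * g / np.linalg.norm(g)

# Unit sphere at the origin: f(p) = |p| - 1, grad f(p) = p / |p|.
f = lambda p: np.linalg.norm(p) - 1.0
df = lambda p: p / np.linalg.norm(p)

outside = np.array([2.0, 0.0, 0.0])  # f = +1, steps inward to (1, 0, 0)
inside = np.array([0.5, 0.0, 0.0])   # f = -0.5, steps outward to (1, 0, 0)
print(step_to_target(f, df, outside), step_to_target(f, df, inside))
```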
In some embodiments, the directed distance field is represented as an analytic function expression, and the gradient direction can be followed by gradient descent with a controlled step size, iterating until convergence. In some embodiments, the directed distance field is represented as a grid, and the gradient direction can be determined by sampling the values at neighboring grid locations.
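For the grid representation, a hedged sketch of gradient estimation by sampling the surrounding grid values (central differences over the six axis neighbors; a grid spacing of 1 and the sphere test field are assumptions):

```python
import numpy as np

def grid_gradient(field, i, j, k):
    """Estimate the gradient at cell (i, j, k) of a dense SDF grid by
    central differences over the six axis neighbors (cell size = 1)."""
    gx = (field[i + 1, j, k] - field[i - 1, j, k]) / 2.0
    gy = (field[i, j + 1, k] - field[i, j - 1, k]) / 2.0
    gz = (field[i, j, k + 1] - field[i, j, k - 1]) / 2.0
    return np.array([gx, gy, gz])

# Fill a small grid with the SDF of a sphere of radius 6 centered in the grid.
n = 16
idx = np.indices((n, n, n)).astype(float)
center = (n - 1) / 2.0
field = np.sqrt(((idx - center) ** 2).sum(axis=0)) - 6.0

g = grid_gradient(field, 12, 8, 8)
print(g / np.linalg.norm(g))  # dominated by the x component, pointing away from the center
```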
Step 550, moving the sampling point on the zero-valued surface of the second directed distance field, and if the sampling point moves to the zero-valued surface of the first directed distance field, determining that the first object and the second object collide.
The collision detection method provided by the application moves the sampling point to the zero-valued surface of the second directed distance field along the gradient direction, which minimizes the movement of the sampling point, reduces the amount of computation, and improves detection speed and efficiency.
Referring to fig. 13, a flowchart of a collision detection method according to another embodiment of the present application is shown. The subject of execution of the steps of the method may be a computer device. The method may comprise at least one of the following steps (610-670):
In step 610, a sampling point is determined in an intersection region of a bounding box of the first object and a bounding box of the second object.
In step 620, a first distance and a second distance are determined based on the values of the sample points in the first directed-distance field and the second directed-distance field, respectively.
If the first distance is greater than or equal to the second distance, the sample point is moved toward the zero-valued surface of the second directed distance field until the sample point is located at the zero-valued surface of the second directed distance field, step 630.
Step 640, moving the sampling point on the zero-valued surface of the second directed distance field, and if the sampling point moves to the zero-valued surface of the first directed distance field, determining that the first object and the second object collide.
Step 650 moves the sample point over the zero-valued surface of the second directed distance field and determines the sample point as a first depth point when the value of the sample point in the first directed distance field satisfies a first condition.
Optionally, the first condition is that the value of the sampling point in the first directed distance field reaches a minimum.
In a possible embodiment, the sampling point is determined to be the first depth point when its value in the first directed distance field is at a minimum. In a possible embodiment, the sampling point is determined to be the first depth point when the absolute value of its value in the first directed distance field is at a maximum. In a possible embodiment, the sampling point is determined to be the first depth point when the negative of its value in the first directed distance field is at a maximum.
Fig. 14 illustrates two directed distance fields. When the sampling point moves along the zero-valued surface of the second directed distance field to position q1, it also lies on the zero-valued surface of the first directed distance field, so the two fields are determined to intersect. The sampling point is then moved further over the zero-valued surface of the second directed distance field, and point K1 is determined as the first depth point when the value of the sampling point in the first directed distance field reaches its minimum. Point K1 is inside the first directed distance field, so its value in that field is negative; K1 is the first depth point when the negative of that value is at a maximum, i.e., when the distance from K1 to the zero-valued surface of the first directed distance field is at a maximum.
Step 660, moving the sample point over the zero-valued surface of the first directed distance field, determining the sample point as a second depth point when the value of the sample point in the second directed distance field satisfies a second condition.
Optionally, the second condition is that the value of the sampling point in the second directed distance field reaches a minimum.
In a possible embodiment, the sampling point is determined to be the second depth point when its value in the second directed distance field is at a minimum. In a possible embodiment, the sampling point is determined to be the second depth point when the absolute value of its value in the second directed distance field is at a maximum. In a possible embodiment, the sampling point is determined to be the second depth point when the negative of its value in the second directed distance field is at a maximum.
As shown in fig. 14, when the sampling point moves along the zero-valued surface of the first directed distance field to position q1, it also lies on the zero-valued surface of the second directed distance field, so the two fields are determined to intersect. The sampling point is then moved further over the zero-valued surface of the first directed distance field, and point K2 is determined as the second depth point when the value of the sampling point in the second directed distance field reaches its minimum. Point K2 is inside the second directed distance field, so its value in that field is negative; K2 is the second depth point when the negative of that value is at a maximum, i.e., when the distance from K2 to the zero-valued surface of the second directed distance field is at a maximum.
Step 670, determining a distance between the first depth point and the second depth point as a contact depth of the first object and the second object.
The contact depth is the deepest extent to which the first object and the second object interpenetrate.
As shown in fig. 14, a point K1 is a first depth point, and a point K2 is a second depth point. The distance from the point K1 to the point K2 is determined as the contact depth of the first object and the second object.
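A minimal sketch of steps 650–670, again using analytic sphere SDFs as assumed stand-ins: each depth point is found by sliding a point over one field's zero-valued surface while minimizing its value in the other field, and the contact depth is the distance between the two depth points (for two unit spheres with centers 1.5 apart, the expected depth is 1 + 1 − 1.5 = 0.5); the step size, iteration count and seed are illustrative assumptions:

```python
import numpy as np

def grad(sdf, p, h=1e-5):
    """Central-difference gradient of an SDF at p."""
    g = np.zeros(3)
    for i in range(3):
        e = np.zeros(3); e[i] = h
        g[i] = (sdf(p + e) - sdf(p - e)) / (2 * h)
    return g

def project(sdf, p, iters=25):
    """Pull p back onto the zero-valued surface of sdf."""
    for _ in range(iters):
        g = grad(sdf, p)
        p = p - sdf(p) * g / max(np.dot(g, g), 1e-12)
    return p

def depth_point(sdf_on, sdf_min, seed, steps=300, lr=0.02):
    """Slide a point over the zero surface of sdf_on to minimise its value in
    sdf_min; the minimiser is the depth point (first condition: minimum value)."""
    p = project(sdf_on, np.asarray(seed, dtype=float))
    for _ in range(steps):
        g = grad(sdf_min, p)
        n = grad(sdf_on, p); n /= max(np.linalg.norm(n), 1e-12)
        t = g - np.dot(g, n) * n          # keep the motion tangent to the surface
        p = project(sdf_on, p - lr * t)
    return p

sdf_a = lambda p: np.linalg.norm(p) - 1.0                          # first object
sdf_b = lambda p: np.linalg.norm(p - np.array([1.5, 0, 0])) - 1.0  # second object

k1 = depth_point(sdf_b, sdf_a, seed=(0.9, 0.6, 0.0))  # deepest point of b's surface inside a
k2 = depth_point(sdf_a, sdf_b, seed=(0.9, 0.6, 0.0))  # deepest point of a's surface inside b
print(np.linalg.norm(k1 - k2))  # contact depth, approximately 0.5
```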
In some embodiments, the directed distance field is represented as a grid, in which case the zero-valued surface of the grid body does not coincide exactly with the surface of the object; as the resolution is increased (i.e., the grid size is reduced), the zero-valued surface can be made to approach the object surface arbitrarily closely. In some embodiments, the contact depth is adjusted using the geometric information of the grid cells at the depth points, giving an adjusted contact depth. In a possible embodiment, the voxel at the first depth point has a first depth generated from the geometric information of that voxel, and the voxel at the second depth point has a second depth generated from the geometric information of that voxel. The contact depth is adjusted according to the first depth and the second depth, giving the contact depth of the first object and the second object as reflected by the grid-body directed distance field. In a possible embodiment, the contact depth of the first object with the second object is the distance from the first depth point to the second depth point minus the first depth and the second depth. In a possible embodiment, it is that distance plus the first depth and the second depth.
In a possible embodiment, the straight line through the first depth point and the second depth point is determined as the collision normal corresponding to the first object and the second object. Fig. 15 schematically shows two directed distance fields; straight line L1 passes through the first depth point K1 and the second depth point K2, and L1 is the collision normal corresponding to the first object and the second object. Accordingly, the direction in which the first object collides with the second object can be taken as the direction along L1 pointing towards the interior of the second object, and the direction in which the second object collides with the first object as the direction along L1 pointing towards the interior of the first object.
In a possible embodiment, the straight line through the first depth point and perpendicular to the surface of the first object is determined as the collision normal corresponding to the first object, and the straight line through the second depth point and perpendicular to the surface of the second object as the collision normal corresponding to the second object. As shown in fig. 15, straight line L2 passes through point K1 and is perpendicular to the zero-valued surface of the first directed distance field; that is, the collision normal corresponding to the first object is the line along the gradient direction of point K1 in the first directed distance field. Accordingly, the direction in which the second object collides with the first object can be taken as the direction along L2 pointing towards the interior of the first directed distance field, i.e., the gradient descent direction of point K1 in the first directed distance field. Straight line L3 passes through point K2 and is perpendicular to the zero-valued surface of the second directed distance field; that is, the collision normal corresponding to the second object is the line along the gradient direction of point K2 in the second directed distance field. Accordingly, the direction in which the first object collides with the second object can be taken as the direction along L3 pointing towards the interior of the second directed distance field, i.e., the gradient descent direction of point K2 in the second directed distance field.
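Continuing the depth-point sketch above, both normal conventions can be computed in a few lines (the names k1, k2, grad, sdf_a and sdf_b come from that sketch and are assumptions of the illustration, not prescribed by the method):

```python
# Continuing the sketch above: two interchangeable normal conventions.
normal_line = (k2 - k1) / np.linalg.norm(k2 - k1)              # line through both depth points (L1)
normal_a = grad(sdf_a, k1) / np.linalg.norm(grad(sdf_a, k1))   # perpendicular to the first object's surface at K1 (L2)
normal_b = grad(sdf_b, k2) / np.linalg.norm(grad(sdf_b, k2))   # perpendicular to the second object's surface at K2 (L3)
```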
According to the technical scheme provided by the embodiment of the application, the contact depth and the collision normal are determined from the first depth point and the second depth point, so that the specific information of the collision between the two objects is known accurately, the subsequent positions of the two objects can be adjusted conveniently, and the collision detection is more accurate.
Referring to fig. 16, a flowchart of a collision detection method according to another embodiment of the present application is shown. The subject of execution of the steps of the method may be a computer device. The method may comprise at least one of the following steps (710-770):
at step 710, a global bounding box of the first object and a global bounding box of the second object are generated.
The overall bounding box is defined relative to the object: generating the overall bounding box of the first object and the overall bounding box of the second object means that each object is completely enclosed by its corresponding overall bounding box.
In some embodiments, the bounding box fits the object tightly, leaving no excess space, while still completely enclosing it. In some embodiments, there is extra space between the bounding box and the object, but the bounding box still completely encloses it.
In a possible embodiment, the overall bounding box is an axis-aligned bounding box, just surrounding the object. As shown in fig. 2, an overall bounding box 220 of the object 210 is generated, the overall bounding box 220 being an axis-aligned bounding box.
In some embodiments, an axis-aligned bounding box of a first object and an axis-aligned bounding box of a second object are generated.
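A minimal sketch of generating an axis-aligned overall bounding box from an object's vertices (the vertex array is an assumed input):

```python
import numpy as np

def axis_aligned_bounding_box(vertices):
    """Overall AABB of an object: per-axis minima and maxima of its vertices,
    so the box just encloses the object."""
    v = np.asarray(vertices, dtype=float)
    return v.min(axis=0), v.max(axis=0)

verts = np.array([[0.0, 0.0, 0.0], [1.0, 2.0, 0.5], [-0.5, 1.0, 3.0]])
lo, hi = axis_aligned_bounding_box(verts)
print(lo, hi)  # [-0.5  0.  0.] [1. 2. 3.]
```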
In step 720, bounding boxes corresponding to the objects of the first object are generated, and bounding boxes corresponding to the objects of the second object are generated.
The bounding box of each object (part) encloses that object. Fig. 17 exemplarily shows such bounding boxes. In fig. 17, a bounding box is generated for each object of the overall object; for example, the bounding box of the head (one object) is bounding box s0, and s0 wraps the head.
In some embodiments, the object is a non-convex model; the non-convex model is segmented into a plurality of convex models (objects), and collision detection is then performed on the segmented models. By performing collision detection per part, the time for collision detection can be shortened and its efficiency improved.
Regarding the order of the foregoing step 710 and step 720: in a possible embodiment, the bounding boxes of the individual objects are generated first, and the overall bounding box is then generated from them. Fig. 18 illustrates this: the bounding box s2 of each object is generated first (s2 labels only one of the object bounding boxes), and the overall bounding box s1 is then generated from the object bounding boxes.
In a possible embodiment, the overall bounding box is generated first and the bounding boxes of the individual objects are generated afterwards. Fig. 19 illustrates this: the overall bounding box s3 of the object is generated first, and the bounding box s4 of each object (s4 labels only one of the bounding boxes) is generated afterwards.
In a possible embodiment, the bounding boxes of the individual objects are generated first, and the overall bounding box of the object is generated afterwards. As shown in fig. 19, the bounding boxes s4 of the respective objects (s4 labels only one of them) are generated first, and the overall bounding box s3 of the object is then generated.
In step 730, in a case where the entire bounding box of the first object and the entire bounding box of the second object intersect, an intersection region of the bounding box of the first object and the bounding box of the second object is determined in an intersection region of the entire bounding box of the first object and the entire bounding box of the second object.
Collision detection is performed on the bounding boxes within the intersection region of the overall bounding box of the first object and the overall bounding box of the second object, and the intersecting bounding boxes of the first object and the second object are determined. In some embodiments, whether two OBB (oriented bounding box) bounding boxes in that intersection region intersect can be determined using only the vertex information of the two bounding boxes.
As shown in fig. 18, the region J is an intersection region of the entire bounding box of the first object and the entire bounding box of the second object. The bounding box H1 is a bounding box of the first object, and the bounding box H2 is a bounding box of the second object, the bounding box H1 and the bounding box H2 intersecting in an intersection region of the entire bounding box of the first object and the entire bounding box of the second object.
In a possible embodiment, the AABBs (axis-aligned bounding boxes) of the two objects are detected first, and if the AABBs intersect, the intersection region R is calculated. R can be expressed as: R = {(x, y, z) | l_x ≤ x ≤ r_x, l_y ≤ y ≤ r_y, l_z ≤ z ≤ r_z}.
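The intersection region of two AABBs is itself an AABB, computed componentwise; a short sketch under the assumption that each box is given by its lower and upper corners:

```python
import numpy as np

def aabb_intersection(lo_a, hi_a, lo_b, hi_b):
    """Intersection region R of two AABBs, or None if they do not overlap.
    R = {(x, y, z) | l_x <= x <= r_x, l_y <= y <= r_y, l_z <= z <= r_z}."""
    lo = np.maximum(lo_a, lo_b)  # componentwise max of the lower corners
    hi = np.minimum(hi_a, hi_b)  # componentwise min of the upper corners
    return (lo, hi) if np.all(lo <= hi) else None

print(aabb_intersection(np.zeros(3), np.ones(3) * 2,
                        np.ones(3), np.ones(3) * 3))  # ([1,1,1], [2,2,2])
```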
An intersection region of the overall bounding box of the first object and the overall bounding box of the second object is divided into a plurality of sub-regions.
For a target sub-region of the plurality of sub-regions, a first maximum distance and a first minimum distance of the target sub-region are obtained, as well as a second maximum distance and a second minimum distance. The first maximum distance and first minimum distance are the maximum and minimum distances from the target sub-region to the bounding box of the first object, and the second maximum distance and second minimum distance are the maximum and minimum distances from the target sub-region to the bounding box of the second object.
In some embodiments, the distances are obtained using a distance function. The input of the distance function of a geometry is a position in space, and the output is the distance from this position to the surface of the geometry. Further, an interval distance function {d_min, d_max} = disI(R) is built from the distance function of the geometry: the input of disI is a spatial region, and the output is the range of distances from that region to the surface of the geometry. Illustratively, the distance is signed: when a point in the region is inside the geometry, the output distance is negative; when a point in the region is outside the geometry, the output distance is positive.
Fig. 20 exemplarily shows the division of the intersection region. The intersection region J of the overall bounding box of the first object and the overall bounding box of the second object is divided into 8 sub-regions; only A1, A2 and A3 are shown in fig. 20, and the remaining 5 are not shown. Optionally, sub-region A1 is taken as the target sub-region. The first maximum distance and first minimum distance from sub-region A1 to the bounding box 73 of the first object are obtained via the interval distance function; illustratively, the first maximum distance is 5 cm and the first minimum distance is 3 cm. The second maximum distance and second minimum distance from sub-region A1 to the bounding box 74 of the second object are obtained via the interval distance function; illustratively, the second maximum distance is 6 cm and the second minimum distance is 4 cm.
If the first minimum distance and the second minimum distance satisfy a third condition, it is determined that the bounding box of the first object and the bounding box of the second object do not intersect in the target sub-region. In some embodiments, the third condition is that the first minimum distance is not less than zero or the second minimum distance is not less than zero. In some embodiments, as shown in fig. 20, the first minimum distance and the second minimum distance of region A1 are both greater than 0, so the third condition is satisfied, and it is determined that the bounding box 73 of the first object and the bounding box 74 of the second object do not intersect in sub-region A1.
If the first maximum distance and the second maximum distance satisfy a fourth condition, it is determined that the bounding box of the first object and the bounding box of the second object intersect in the target sub-region, and the target sub-region is determined as an intersecting sub-region. In some embodiments, the fourth condition is that both the first maximum distance and the second maximum distance are less than zero.
If the first minimum distance and the second minimum distance do not satisfy the third condition, and the first maximum distance and the second maximum distance do not satisfy the fourth condition, the target sub-region is divided into a plurality of sub-regions, and for each new target sub-region among them, the first maximum and minimum distances to the bounding box of the first object and the second maximum and minimum distances to the bounding box of the second object are obtained again. That is, the current target sub-region continues to be divided when neither condition allows it to be rejected or accepted outright. In some embodiments, as shown in fig. 20, the area below area A1 is a sub-region obtained by dividing the intersection region J of the two overall bounding boxes; taking it as the target sub-region, the first maximum distance and first minimum distance to the bounding box 73 of the first object are obtained via the interval distance function, illustratively 3 cm and -3 cm. The second maximum distance and second minimum distance to the bounding box 74 of the second object are obtained via the interval distance function, illustratively 2 cm and -5 cm. The first maximum distance and second maximum distance do not satisfy the fourth condition (both less than zero), nor do the first minimum distance and second minimum distance satisfy the third condition (either not less than zero), so this sub-region is divided into a plurality of sub-regions; B1 and B2 after the division are shown in the figure, and the rest are not shown.
An intersection region of the bounding box of the first object and the bounding box of the second object is determined from the intersection sub-regions. In some embodiments, the intersection region 75 of the bounding box of the first object and the bounding box of the second object is determined from the intersection sub-regions.
In a possible embodiment, the intersection region of the bounding box of the first object and the bounding box of the second object is determined using an octree algorithm.
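A hedged sketch of this subdivision: the interval distance {d_min, d_max} over a box is bounded conservatively from the value at the box center plus or minus the half-diagonal (valid for a 1-Lipschitz signed distance function), and the octree recursion applies the third and fourth conditions; the sphere SDFs standing in for the part bounding boxes and the recursion depth are illustrative assumptions:

```python
import numpy as np

def interval_distance(sdf, lo, hi):
    """Conservative {d_min, d_max} of an SDF over the box [lo, hi]: for a
    1-Lipschitz SDF, the value anywhere in the box differs from the value
    at the centre by at most the half-diagonal."""
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    d = sdf((lo + hi) / 2)
    r = np.linalg.norm(hi - lo) / 2
    return d - r, d + r

def intersecting_subregions(sdf_a, sdf_b, lo, hi, depth=4, out=None):
    """Octree refinement: discard a region provably outside either volume
    (third condition), keep a region provably inside both (fourth condition),
    otherwise split it into 8 children and recurse."""
    if out is None:
        out = []
    amin, amax = interval_distance(sdf_a, lo, hi)
    bmin, bmax = interval_distance(sdf_b, lo, hi)
    if amin >= 0 or bmin >= 0:                 # third condition: no intersection here
        return out
    if (amax < 0 and bmax < 0) or depth == 0:  # fourth condition (or budget spent): keep
        out.append((lo, hi))
        return out
    mid = (np.asarray(lo, float) + np.asarray(hi, float)) / 2
    for corner in range(8):                    # split into 8 sub-regions
        pick = np.array([(corner >> i) & 1 for i in range(3)])
        clo = np.where(pick, mid, lo)
        chi = np.where(pick, hi, mid)
        intersecting_subregions(sdf_a, sdf_b, clo, chi, depth - 1, out)
    return out

# Two unit-sphere volumes standing in for the part bounding boxes.
sa = lambda p: np.linalg.norm(p) - 1.0
sb = lambda p: np.linalg.norm(p - np.array([1.0, 0.0, 0.0])) - 1.0
regions = intersecting_subregions(sa, sb, np.array([0., -1., -1.]), np.array([1., 1., 1.]))
print(len(regions))  # number of candidate intersecting sub-regions
```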
In step 740, a sampling point is determined in an intersection region of the bounding box of the first object and the bounding box of the second object.
Step 750, determining a first distance and a second distance based on values of the sample points in the first directed distance field and the second directed distance field, respectively.
If the first distance is greater than or equal to the second distance, step 760, the sample point is moved toward the zero-valued surface of the second directed distance field until the sample point is located at the zero-valued surface of the second directed distance field.
Step 770, moving the sample point over the null plane of the second directed distance field, and if the sample point moves to the null plane of the first directed distance field, determining that the first object and the second object collide.
According to the technical scheme provided by the embodiment of the application, determining the intersection region of the bounding boxes within the intersection region of the overall bounding box of the first object and the overall bounding box of the second object further narrows the region to be checked for collision and speeds up detection.
Referring to fig. 21, a flowchart of a collision detection method according to an embodiment of the present application is shown. The subject of execution of the steps of the method may be a computer device. The method may comprise at least one of the following steps (810-890):
at step 810, a global bounding box of the first object and a global bounding box of the second object are generated.
Step 820 samples a plurality of location points on a surface of a first object.
The location points are located on the first object surface, i.e. a plurality of location points are selected on the first object surface.
Fig. 22 illustrates an object, point D being one of the possible location points (location points of other parts of the object are not shown).
In step 830, n bounding boxes corresponding to the initialization of the first object are generated according to the plurality of location points, where n is a positive integer.
An initialized bounding box may or may not completely enclose the object. Fig. 23 exemplarily shows such a bounding box: the initialized bounding box s10 of the front chest of the object (one object of the overall object) does not completely enclose the front chest.
The n initialized bounding boxes corresponding to the first object are generated according to the positions and direction information of the plurality of position points. In a possible embodiment, the normal at each position point is determined, and the direction information of the position point is obtained from the normal.
The plurality of position points are input into a structure abstraction network to obtain a plurality of cuboids. In a possible embodiment, the structure abstraction network is a deep learning network; optionally, it is a deep learning network using a variational autoencoder, and the set of sampled position points is used as the input of the network. A set of cuboids {C_i}, i = 1, …, M, can be generated for an object through the network, with each cuboid representing one part (one object) of the object.
In step 840, the n bounding boxes are adjusted according to the position points on the surface of the first object that are not contained by any bounding box, giving the bounding boxes corresponding to the respective objects of the first object.
For a target position point on the surface of the first object that is not contained by any bounding box, n probability values corresponding to the target position point are acquired, the i-th probability value indicating the probability that the target position point is assigned to the i-th bounding box of the n bounding boxes. The target bounding box corresponding to the maximum of the n probability values is determined from the n bounding boxes, and is adjusted so that it contains the target position point. When no position point on the surface of the first object remains uncontained by the bounding boxes, the n adjusted bounding boxes are determined as the bounding boxes corresponding to the respective objects of the first object.
In a possible embodiment, each cuboid output by the structure abstraction network represents one part of the object (one object) and has its own rotation, translation and scaling parameters. The network also records an allocation matrix describing the probabilities that a point p_n belongs to the part represented by cuboid C_m. The generated cuboids are expanded using this allocation matrix: for each point that lies inside no cuboid, the cuboid most likely to contain it is selected and expanded to just contain the point.
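An illustrative sketch of the expansion step, simplified to axis-aligned cuboids (the network's cuboids additionally carry rotation parameters; the box layout and probability values here are assumptions of the sketch):

```python
import numpy as np

def expand_box_to_contain(lo, hi, point):
    """Expand an axis-aligned box just enough to contain an uncovered
    surface point (rotated cuboids are simplified to AABBs here)."""
    return np.minimum(lo, point), np.maximum(hi, point)

def assign_and_expand(boxes, point, probs):
    """Pick the box with the highest assignment probability for the point
    and expand it so that it just contains the point."""
    i = int(np.argmax(probs))  # target bounding box with the maximum probability
    boxes[i] = expand_box_to_contain(*boxes[i], np.asarray(point, float))
    return boxes

boxes = [(np.zeros(3), np.ones(3)), (np.ones(3) * 2, np.ones(3) * 3)]
boxes = assign_and_expand(boxes, point=(1.2, 0.5, 0.5), probs=[0.8, 0.2])
print(boxes[0])  # first box grown to ([0, 0, 0], [1.2, 1, 1])
```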
In step 850, in the case where the entire bounding box of the first object and the entire bounding box of the second object intersect, in the intersection region of the entire bounding box of the first object and the entire bounding box of the second object, the intersection region of the bounding box of the first object and the bounding box of the second object is determined.
In step 860, a sampling point is determined in an intersection region of the bounding box of the first object and the bounding box of the second object.
In step 870, the first distance and the second distance are determined based on the values of the sample points in the first directed-distance field and the second directed-distance field, respectively.
If the first distance is greater than or equal to the second distance, the sample point is moved toward the zero-valued surface of the second directed distance field until the sample point is located at the zero-valued surface of the second directed distance field, step 880.
Step 890, moving the sample point over the null plane of the second directed distance field, and if the sample point moves to the null plane of the first directed distance field, determining that the first object and the second object collide.
According to the technical scheme provided by the embodiment of the application, a plurality of cuboid bounding boxes are obtained using a structure abstraction network, and the output probability values allow the cuboid bounding boxes to be adjusted in time so that the adjusted bounding boxes completely enclose the objects. Generating and adjusting the bounding boxes with the network, instead of tuning them manually, further increases the speed of bounding box generation while keeping the generated bounding boxes close to optimal.
In one possible embodiment, a test of collision detection between two directed distance fields is performed.
FIG. 24 illustrates a schematic view of an object collision provided by an embodiment of the present application. In this test, models of two trees are used as the collision detection objects, and grid-body directed distance fields with a resolution of 256³ are employed. The upper tree is moved towards the lower tree and stops when a collision is detected. The collision position is shown enlarged at u0.
In one possible embodiment, a test of collision detection between a distance function and a grid-body directed distance field is performed.
Fig. 25 to 28 show collisions between primitives represented by distance functions and directed distance fields. Fig. 25 illustrates the collision between a cuboid represented by a distance function and a directed distance field, with a specific cuboid r1 colliding with a tree t1 represented by a directed distance field as shown. Fig. 26 illustrates the collision between a cone represented by a distance function and a directed distance field, with a specific cone r2 colliding with a tree t2 represented by a directed distance field as shown. Fig. 27 illustrates the collision between a capsule body represented by a distance function and a directed distance field, with a specific capsule body r3 colliding with a tree t3 represented by a directed distance field as shown. Fig. 28 illustrates the collision between a cylinder represented by a distance function and a directed distance field, with a particular cylinder r4 colliding with a tree t4 represented by a directed distance field as shown.
In one possible embodiment, for a non-convex model, collision tests between the non-convex model and basic geometries are performed, representing the model with a polygon mesh and with a grid-body directed distance field respectively.
Taking a rabbit as an example, fig. 29 and fig. 30 both show model rendering results provided by an embodiment of the present application. Fig. 29 shows the object represented by a grid-body directed distance field: rabbit r5 is represented at a resolution of 256³. By contrast, fig. 30 shows the object represented by triangular faces: rabbit r6 is represented by 58000 triangular faces.
In a specific collision detection procedure, it is assumed that the rabbit makes a free-fall motion, first striking the tree and then falling to the ground. The collision detection over the whole process can be divided into two parts: the collision between the rabbit and the tree as a collision between non-convex objects, and the collision between the rabbit and the ground as a collision between a non-convex object and a basic geometry. Figs. 31 to 33 exemplarily show object collisions: fig. 31 shows the collision of rabbit r7 with tree t7, fig. 32 shows the collision of rabbit r8 with tree root t8, and fig. 33 shows the collision of rabbit r9 with the ground (ground not shown).
The average time and time variance spent in collision detection between different objects are calculated separately. The results were as follows:
table 1 representation of different models time performance at different stages
According to the statistics, the technical scheme provided by the embodiment of the application clearly outperforms the polygon-based collision detection method in time performance. Moreover, polygon-based collision detection is affected by the motion pose of the object: over the whole motion the rabbit keeps rotating and rolling, so the time variance of the polygon collision detection algorithm is large. The contact area between the models differs from pose to pose, changing the number of triangular faces to be tested and therefore the detection time. By contrast, with the technical scheme of the application the number of gradient descent steps varies little, so the detection time spent is relatively stable.
The technical scheme of the application is also applied to the field of games and is used for detecting the collision of a plurality of virtual objects in the game virtual environment.
In a virtual environment, virtual objects include, but are not limited to, virtual characters (at least one of a virtual character controlled by a player and a virtual character controlled by AI (artificial intelligence)) and various virtual items in the virtual environment (e.g., virtual doors, virtual steps, virtual walls, virtual floors, etc.).
In a possible embodiment, collision detection is performed on two virtual characters in a virtual environment; for example, during a fight between two virtual characters, it is to be determined whether the hand of the first virtual character and the foot of the second virtual character collide. This collision detection may include the following steps:
S1, taking a hand of a first virtual character as a first object, taking a foot of a second virtual character as a second object, generating bounding boxes for the hand of the first virtual character and the foot of the second virtual character respectively, and determining sampling points in an intersection area of the bounding boxes of the hand of the first virtual character and the foot of the second virtual character.
S2, determining a first distance and a second distance based on values of the sampling points in the first directed distance field and the second directed distance field respectively; wherein the first directed distance field is a directed distance field of a hand of the first avatar and the second directed distance field is a directed distance field of a foot of the second avatar, the first distance being a distance between the sampling point and a surface of the hand of the first avatar and the second distance being a distance between the sampling point and a surface of the foot of the second avatar.
S3, if the first distance is greater than or equal to the second distance, moving the sampling point towards the zero-value surface of the second directed distance field until the sampling point is positioned on the zero-value surface of the second directed distance field; wherein the zero-valued surface of the second directed distance field corresponds to a surface of a foot of the second avatar.
S4, moving a sampling point on the zero-value surface of the second directed distance field, and if the sampling point moves to the zero-value surface of the first directed distance field, determining that the hand of the first virtual character collides with the foot of the second virtual character; wherein the zero-valued surface of the first directed distance field corresponds to a surface of a hand of the first virtual character.
And S5, if the sampling point cannot move to the zero value surface of the first directed distance field, determining that the hand of the first virtual character and the foot of the second virtual character do not collide.
Optionally, after it is determined that the hand of the first virtual character collides with the foot of the second virtual character, the positions of the hand and the foot within the game scene are reset so that they do not continue to interpenetrate.
Optionally, when it is determined that the hand of the first virtual character and the foot of the second virtual character do not collide, their positions within the game scene are left unchanged.
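As a toy usage example of steps S1 to S4, assuming the sdf_collision_test helper from the first sketch is available, and standing in small spheres for the hand and foot shapes (all positions and radii are illustrative assumptions):

```python
import numpy as np
# Reuses sdf_collision_test from the first sketch above.
hand = lambda p: np.linalg.norm(p - np.array([0.0, 1.0, 0.0])) - 0.2   # first character's hand
foot = lambda p: np.linalg.norm(p - np.array([0.0, 1.1, 0.0])) - 0.15  # second character's foot
seed = np.array([0.0, 1.05, 0.0])  # a sampling point in the bounding boxes' intersection region
print(sdf_collision_test(hand, foot, seed))  # True: the surfaces intersect, so a hit is registered
```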
The technical scheme provided by the embodiment of the application can be applied to the field of games, and the position of the virtual object can be controlled and adjusted in time by collision detection of the virtual object in the game scene, so that a realistic game interface is presented for a game user, and the game experience of the user is further improved.
The following are examples of the apparatus of the present application that may be used to perform the method embodiments of the present application. For details not disclosed in the embodiments of the apparatus of the present application, please refer to the embodiments of the method of the present application.
Referring to fig. 34, a block diagram of a collision detection apparatus according to an embodiment of the present application is shown. The device has the function of realizing the method example, and the function can be realized by hardware or can be realized by executing corresponding software by hardware. The apparatus may be the computer device described above or may be provided in a computer device. As shown in fig. 34, the apparatus 900 may include: a sample point determination module 910, a distance determination module 920, a sample point movement module 930, and a collision determination module 940.
The sampling point determining module 910 is configured to determine a sampling point in an intersection area of a bounding box of the first object and a bounding box of the second object.
The distance determining module 920 is configured to determine a first distance and a second distance based on values of the sampling point corresponding to the first directed distance field and the second directed distance field, respectively; wherein the first directed distance field is a directed distance field of the first object, the second directed distance field is a directed distance field of the second object, the first distance is a distance between the sampling point and a surface of the first object, and the second distance is a distance between the sampling point and a surface of the second object.
The sample point moving module 930 is configured to move the sample point toward the zero-valued surface of the second directed distance field if the first distance is greater than or equal to the second distance until the sample point is located on the zero-valued surface of the second directed distance field; wherein a zero-valued face of the second directed distance field corresponds to a surface of the second object.
The collision determination module 940 is configured to move the sampling point on the zero-valued surface of the second directed distance field, and determine that the first object and the second object collide if the sampling point moves to the zero-valued surface of the first directed distance field; wherein a zero-valued face of the first directed distance field corresponds to a surface of the first object.
In some embodiments, the sample point moving module 930 is configured to determine a target point with a smallest distance between the target point and the sample point on the zero-valued surface of the second directed distance field.
The sample point moving module 930 is further configured to move the sample point according to a gradient direction of the second directed distance field until the sample point moves to the target point, where the gradient direction is a direction in which the sample point points to the target point.
In some embodiments, as shown in fig. 35, the apparatus further comprises a depth determination module 950.
The depth determining module 950 is configured to move the sampling point on a zero-valued surface of the second directed distance field, and determine the sampling point as a first depth point when a value of the sampling point in the first directed distance field satisfies a first condition.
The depth determining module 950 is configured to move the sampling point on a zero-valued surface of the first directed distance field, and determine the sampling point as a second depth point when a value of the sampling point in the second directed distance field satisfies a second condition.
The depth determining module 950 is configured to determine a distance between the first depth point and the second depth point as a contact depth of the first object and the second object.
In some embodiments, as shown in fig. 35, the apparatus further comprises a normal determination module 960.
The normal determining module 960 is configured to determine a straight line where the first depth point and the second depth point are located as a collision normal line corresponding to the first object and the second object.
Or, the normal determining module 960 is configured to determine a straight line passing through the first depth point and perpendicular to the surface of the first object as a collision normal corresponding to the first object, and determine a straight line passing through the second depth point and perpendicular to the surface of the second object as a collision normal corresponding to the second object.
In some embodiments, the collision determination module 940 is further configured to determine that the first object and the second object do not collide if the sampling point cannot move to a zero-valued surface of the first directed distance field.
In some embodiments, the sampling point determining module 910 is configured to determine a midpoint of an intersection region of a bounding box of the first object and a bounding box of the second object as the sampling point.
Or, the sampling point determining module 910 is configured to determine, as the sampling point, any point of intersection points of the bounding box of the first object and the bounding box of the second object.
Or, the sampling point determining module 910 is configured to determine, as the sampling point, a target vertex in an intersection area of the bounding box of the first object and the bounding box of the second object, where the target vertex is any point of vertices of the bounding box of the first object and the bounding box of the second object.
In some embodiments, the apparatus further comprises a bounding box generation module 970, a bounding box determination module 980, as shown in fig. 35.
The bounding box generating module 970 is configured to generate an overall bounding box of the first object and an overall bounding box of the second object, the overall bounding boxes completely enclosing the respective objects.
The bounding box generating module 970 is configured to generate a bounding box corresponding to each object of the first object and a bounding box corresponding to each object of the second object.
The bounding box determining module 980 is configured to determine, in a case where the entire bounding box of the first object and the entire bounding box of the second object intersect, an intersection area of the bounding box of the first object and the bounding box of the second object in an intersection area of the entire bounding box of the first object and the entire bounding box of the second object.
In some embodiments, the bounding box determination module 980 is configured to divide an intersection region of the entire bounding box of the first object and the entire bounding box of the second object into a plurality of sub-regions.
The bounding box determining module 980 is configured to obtain, for a target sub-region of the plurality of sub-regions, a first maximum distance and a first minimum distance of the target sub-region, as well as a second maximum distance and a second minimum distance; the first maximum distance and first minimum distance are the maximum and minimum distances from the target sub-region to the bounding box of the first object, and the second maximum distance and second minimum distance are the maximum and minimum distances from the target sub-region to the bounding box of the second object.
The bounding box determining module 980 is configured to determine that the bounding box of the first object and the bounding box of the second object do not intersect in the target sub-region if the first minimum distance and the second minimum distance satisfy a third condition.
The bounding box determining module 980 is configured to determine that the bounding box of the first object and the bounding box of the second object intersect at the target sub-region if the first maximum distance and the second maximum distance satisfy a fourth condition, and determine the target sub-region as an intersecting sub-region.
The bounding box determining module 980 is configured to divide the target sub-area into a plurality of sub-areas if the first minimum distance and the second minimum distance do not satisfy the third condition and the first maximum distance and the second maximum distance do not satisfy the fourth condition, and acquire, again from the target sub-area of the plurality of sub-areas, a first maximum distance and a first minimum distance of the target sub-area to a bounding box of the first object, and a second maximum distance and a second minimum distance of the target sub-area to a bounding box of the second object.
The bounding box determining module 980 is configured to determine, according to the intersection sub-region, an intersection region of a bounding box of the first object and a bounding box of the second object.
In some embodiments, as shown in fig. 35, the bounding box generation module 970 includes a location point sampling sub-module 972, a bounding box generation sub-module 974, and a bounding box adjustment sub-module 976.
The location point sampling submodule 972 is used for sampling a plurality of location points on the surface of the first object;
The bounding box generating sub-module 974 is configured to generate n bounding boxes corresponding to the initialization of the first object according to the plurality of location points, where n is a positive integer.
The bounding box adjustment sub-module 976 is configured to adjust the n bounding boxes according to the location points on the surface of the first object that are not included by the bounding box, so as to obtain bounding boxes that respectively correspond to the objects of the first object.
In some embodiments, the bounding box adjustment sub-module 976 is configured to, for a target location point on the surface of the first object that is not included by the bounding box, obtain n probability values corresponding to the target location point, where the i probability value is used to indicate a probability that the target location point is divided into an i bounding box of the n bounding boxes.
The bounding box adjustment sub-module 976 is configured to determine, from the n bounding boxes, a target bounding box corresponding to a maximum probability value of the n probability values.
The bounding box adjustment sub-module 976 is configured to adjust the target bounding box such that the target bounding box contains the target location point.
The bounding box adjustment sub-module 976 is configured to determine the n bounding boxes after adjustment as bounding boxes corresponding to respective objects of the first object, when there are no location points on the surface of the first object that are not included by the bounding box.
According to the technical solution provided by the embodiments of the application, a sampling point is selected in the intersection region of the bounding boxes of two objects and moved towards the zero-value surface of the nearer directed distance field (the second directed distance field). Once the sampling point lies on the zero-value surface of the second directed distance field, it is kept moving on that surface; when the sampling point on the zero-value surface of the second directed distance field reaches the zero-value surface of the first directed distance field, the two objects are determined to collide. Using the intersection of the zero-value surfaces of the directed distance fields to determine whether two objects intersect speeds up collision detection and improves detection precision.
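As a minimal sketch of this pipeline (not the patent's definitive implementation), the two movement phases can be written as a projection step followed by a tangential walk; the projected-gradient discretization, step sizes, and termination tests below are assumptions, and sdf_a/sdf_b and grad_a/grad_b stand for the two directed distance fields and their normalized gradients.

    import numpy as np

    def detect_collision(sdf_a, sdf_b, grad_a, grad_b, x, tol=1e-4, max_iters=200):
        """Hypothetical sketch: sdf_* map a 3D point to a signed distance,
        grad_* return unit gradients of the corresponding field."""
        x = np.asarray(x, dtype=float)
        # Walk on the nearer surface: swap so that field B is the closer one.
        if abs(sdf_a(x)) < abs(sdf_b(x)):
            sdf_a, sdf_b = sdf_b, sdf_a
            grad_a, grad_b = grad_b, grad_a
        # Phase 1: move the sampling point onto B's zero-value surface by
        # stepping along the gradient by the signed distance.
        for _ in range(max_iters):
            d = sdf_b(x)
            if abs(d) < tol:
                break
            x = x - d * grad_b(x)
        # Phase 2: slide on B's zero-value surface while descending A's distance.
        for _ in range(max_iters):
            if abs(sdf_a(x)) < tol:
                return True, x                      # reached A's zero-value surface
            step = -sdf_a(x) * grad_a(x)            # pull toward A's surface
            n = grad_b(x)
            step = step - np.dot(step, n) * n       # keep the step tangent to B
            if np.linalg.norm(step) < tol:
                return False, x                     # tangential minimum: surfaces do not meet
            x = x + step
            x = x - sdf_b(x) * grad_b(x)            # re-project onto B's zero-value surface
        return False, x

In this sketch, phase 1 realizes the movement toward the nearer zero-value surface, and phase 2 keeps the point on that surface while descending the other field; returning True corresponds to the two zero-value surfaces intersecting, i.e., a collision.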
It should be noted that, when the apparatus provided in the foregoing embodiments implements its functions, the division into the functional modules described above is merely an example. In practical applications, the functions may be allocated to different functional modules as required; that is, the internal structure of the device may be divided into different functional modules to implement all or part of the functions described above. In addition, the apparatus embodiments and the method embodiments provided above belong to the same concept; for their specific implementation processes, refer to the method embodiments, which are not repeated here.
Fig. 36 is a schematic structural diagram of a computer device according to an embodiment of the present application. The computer device 1000 includes a central processing unit (CPU) 1001, a system memory 1004 including a random access memory (RAM) 1002 and a read-only memory (ROM) 1003, and a system bus 1005 connecting the system memory 1004 and the central processing unit 1001. The computer device 1000 also includes a basic input/output system (I/O system) 1006, which facilitates the transfer of information between the various components within the computer, and a mass storage device 1007 for storing an operating system 1013, application programs 1014, and other program modules 1015.
The basic input/output system 1006 includes a display 1008 for displaying information and an input device 1009, such as a mouse or a keyboard, through which a user enters information. The display 1008 and the input device 1009 are both connected to the central processing unit 1001 through an input/output controller 1010 connected to the system bus 1005. The input/output controller 1010 may also receive and process input from a number of other devices, such as a keyboard, a mouse, or an electronic stylus, and may similarly provide output to a display screen, a printer, or another type of output device.
The mass storage device 1007 is connected to the central processing unit 1001 through a mass storage controller (not shown) connected to the system bus 1005. The mass storage device 1007 and its associated computer-readable media provide non-volatile storage for the computer device 1000. That is, the mass storage device 1007 may include a computer-readable medium (not shown) such as a hard disk or a compact disc read-only memory (CD-ROM) drive.
Computer-readable media may include computer storage media and communication media without loss of generality. Computer storage media include volatile and non-volatile, removable and non-removable media implemented in any method or technology for the storage of information such as computer-readable instructions, data structures, program modules, or other data. Computer storage media include RAM, ROM, erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other solid-state memory technology, CD-ROM, digital versatile disc (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage, or other magnetic storage devices. Of course, those skilled in the art will recognize that computer storage media are not limited to the above. The system memory 1004 and the mass storage device 1007 described above may be collectively referred to as memory.
According to various embodiments of the application, the computer device 1000 may also operate through a remote computer connected via a network, such as the Internet. That is, the computer device 1000 may be connected to the network 1012 through a network interface unit 1011 connected to the system bus 1005, or the network interface unit 1011 may be used to connect to other types of networks or remote computer systems (not shown).
In an exemplary embodiment, a computer-readable storage medium is also provided, in which a computer program is stored; the computer program, when executed by a processor, implements the above collision detection method.
Alternatively, the computer-readable storage medium may include: ROM (read-only memory), RAM (random access memory), SSD (solid state drive), an optical disk, or the like. The random access memory may include ReRAM (resistive random access memory) and DRAM (dynamic random access memory).
In an exemplary embodiment, a computer program product is also provided, the computer program product comprising computer instructions stored in a computer readable storage medium. The processor of the terminal device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions so that the terminal device performs the collision detection method described above.
It should be understood that references herein to "a plurality" mean two or more. The term "and/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean that A exists alone, A and B both exist, or B exists alone. The character "/" generally indicates an "or" relationship between the objects before and after it. In addition, the step numbers described herein merely illustrate one possible execution order of the steps; in some other embodiments, the steps may be executed out of the numbered order, for example, two differently numbered steps may be executed simultaneously, or two differently numbered steps may be executed in an order opposite to that shown, which is not limited herein.
The foregoing description of the exemplary embodiments of the application is not intended to limit the application to the particular embodiments disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the application.

Claims (14)

1. A collision detection method, the method comprising:
determining a sampling point in an intersection region of a bounding box of a first object and a bounding box of a second object;
determining a first distance and a second distance based on values of the sampling point in a first directed distance field and a second directed distance field, respectively; wherein the first directed distance field is a directed distance field of the first object, the second directed distance field is a directed distance field of the second object, the first distance is a distance between the sampling point and a surface of the first object, and the second distance is a distance between the sampling point and a surface of the second object;
If the first distance is greater than or equal to the second distance, moving the sampling point towards the zero-value surface of the second directed distance field until the sampling point is located on the zero-value surface of the second directed distance field; wherein a zero-valued face of the second directed distance field corresponds to a surface of the second object;
Moving the sampling point on the zero-value surface of the second directed distance field, and if the sampling point moves to the zero-value surface of the first directed distance field, determining that the first object and the second object collide; wherein a zero-valued face of the first directed distance field corresponds to a surface of the first object.
2. The method of claim 1, wherein the moving the sample point toward the zero-valued surface of the second directed distance field until the sample point is located at the zero-valued surface of the second directed distance field comprises:
determining a target point with the smallest distance between the zero value surface of the second directed distance field and the sampling point;
and moving the sampling point according to the gradient direction of the second directed distance field until the sampling point moves to the target point, wherein the gradient direction is the direction in which the sampling point points to the target point.
3. The method of claim 1, wherein after said determining that the first object and the second object collide, the method further comprises:
moving the sampling point on the zero-value surface of the second directed distance field, and determining the sampling point as a first depth point when the value of the sampling point in the first directed distance field reaches a minimum;
moving the sampling point on the zero-value surface of the first directed distance field, and determining the sampling point as a second depth point when the value of the sampling point in the second directed distance field reaches a minimum;
and determining the distance between the first depth point and the second depth point as the contact depth of the first object and the second object.
4. A method according to claim 3, characterized in that the method further comprises:
Determining a straight line where the first depth point and the second depth point are located as a collision normal corresponding to the first object and the second object;
or,
And determining a straight line passing through the first depth point and perpendicular to the surface of the first object as a collision normal corresponding to the first object, and determining a straight line passing through the second depth point and perpendicular to the surface of the second object as a collision normal corresponding to the second object.
5. The method of claim 1, wherein after the moving the sampling point on the zero-value surface of the second directed distance field, the method further comprises:
And if the sampling point cannot move to the zero value surface of the first directed distance field, determining that the first object and the second object do not collide.
6. The method of claim 1, wherein determining the sampling point in the intersection region of the bounding box of the first object and the bounding box of the second object comprises:
Determining a midpoint of an intersection region of the bounding box of the first object and the bounding box of the second object as the sampling point;
or,
determining any one of the intersection points of the bounding box of the first object and the bounding box of the second object as the sampling point;
or,
and determining a target vertex in the intersection region of the bounding box of the first object and the bounding box of the second object as the sampling point, wherein the target vertex is any one of the vertices of the bounding box of the first object and the bounding box of the second object.
7. The method according to claim 1, wherein the method further comprises:
generating an overall bounding box of the first object and an overall bounding box of the second object, wherein the first object comprises one or more objects and the second object comprises one or more objects;
generating bounding boxes respectively corresponding to all objects of the first object and bounding boxes respectively corresponding to all objects of the second object;
in a case where the overall bounding box of the first object and the overall bounding box of the second object intersect, determining the intersection region of the bounding box of the first object and the bounding box of the second object within the intersection region of the overall bounding box of the first object and the overall bounding box of the second object.
8. The method of claim 7, wherein the determining the intersection region of the bounding box of the first object and the bounding box of the second object within the intersection region of the overall bounding box of the first object and the overall bounding box of the second object comprises:
dividing an intersection region of the overall bounding box of the first object and the overall bounding box of the second object into a plurality of sub-regions;
for a target sub-region of the plurality of sub-regions, acquiring a first maximum distance, a first minimum distance, a second maximum distance, and a second minimum distance of the target sub-region, where the first maximum distance is the maximum distance from the target sub-region to the bounding box of the first object, the first minimum distance is the minimum distance from the target sub-region to the bounding box of the first object, the second maximum distance is the maximum distance from the target sub-region to the bounding box of the second object, and the second minimum distance is the minimum distance from the target sub-region to the bounding box of the second object;
If the first minimum distance and the second minimum distance meet a third condition, determining that the bounding box of the first object and the bounding box of the second object are not intersected in the target subarea;
If the first maximum distance and the second maximum distance meet a fourth condition, determining that the bounding box of the first object and the bounding box of the second object intersect at the target sub-region, and determining the target sub-region as an intersecting sub-region;
if the first minimum distance and the second minimum distance do not meet the third condition and the first maximum distance and the second maximum distance do not meet the fourth condition, dividing the target sub-region into a plurality of sub-regions, and acquiring again, for a target sub-region of the plurality of sub-regions, the first maximum distance and the first minimum distance from the target sub-region to the bounding box of the first object and the second maximum distance and the second minimum distance from the target sub-region to the bounding box of the second object;
and determining the intersection region of the bounding box of the first object and the bounding box of the second object according to the intersection sub-region.
9. The method of claim 7, wherein generating bounding boxes for respective objects of the first object comprises:
sampling a plurality of location points on a surface of the first object;
Generating n bounding boxes which are initialized corresponding to the first object according to the plurality of position points, wherein n is a positive integer;
and adjusting the n bounding boxes according to the position points, which are not contained by the bounding boxes, on the surface of the first object to obtain bounding boxes corresponding to all the objects of the first object.
10. The method according to claim 9, wherein the adjusting the n bounding boxes according to the location points on the surface of the first object that are not included by the bounding box, to obtain bounding boxes corresponding to the objects of the first object respectively, includes:
for a target position point on the surface of the first object that is not contained by any bounding box, acquiring n probability values corresponding to the target position point, wherein the i-th probability value indicates the probability that the target position point is assigned to the i-th bounding box of the n bounding boxes;
determining a target bounding box corresponding to the maximum probability value in the n probability values from the n bounding boxes;
Adjusting the target bounding box so that the target bounding box contains the target position point;
and when no position point that is not contained by the bounding boxes exists on the surface of the first object, determining the adjusted n bounding boxes as the bounding boxes respectively corresponding to the objects of the first object.
11. A collision detection apparatus, characterized in that the apparatus comprises:
the sampling point determining module is used for determining a sampling point in an intersection region of a bounding box of a first object and a bounding box of a second object;
the distance determining module is used for determining a first distance and a second distance based on values of the sampling point in a first directed distance field and a second directed distance field, respectively; wherein the first directed distance field is a directed distance field of the first object, the second directed distance field is a directed distance field of the second object, the first distance is a distance between the sampling point and a surface of the first object, and the second distance is a distance between the sampling point and a surface of the second object;
The sampling point moving module is used for moving the sampling point towards the zero-value surface of the second directed distance field if the first distance is greater than or equal to the second distance until the sampling point is positioned on the zero-value surface of the second directed distance field; wherein a zero-valued face of the second directed distance field corresponds to a surface of the second object;
The collision determining module is used for moving the sampling point on the zero-value surface of the second directed distance field, and determining that the first object and the second object collide if the sampling point moves to the zero-value surface of the first directed distance field; wherein a zero-valued face of the first directed distance field corresponds to a surface of the first object.
12. A computer device comprising a processor and a memory, the memory having stored therein a computer program that is loaded and executed by the processor to implement the method of any of claims 1 to 10.
13. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored therein a computer program, which is loaded and executed by a processor to implement the method of any one of claims 1 to 10.
14. A computer program product comprising computer instructions stored in a computer readable storage medium, from which a processor reads and executes the computer instructions to implement the method of any one of claims 1 to 10.
CN202210473814.1A 2022-04-29 2022-04-29 Collision detection method, device, equipment and storage medium Active CN115115773B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210473814.1A CN115115773B (en) 2022-04-29 2022-04-29 Collision detection method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210473814.1A CN115115773B (en) 2022-04-29 2022-04-29 Collision detection method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN115115773A CN115115773A (en) 2022-09-27
CN115115773B (en) 2024-07-16

Family

ID=83327116

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210473814.1A Active CN115115773B (en) 2022-04-29 2022-04-29 Collision detection method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115115773B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115952569B (en) * 2023-03-14 2023-06-16 安世亚太科技股份有限公司 Simulation method, simulation device, electronic equipment and computer readable storage medium
CN116612825B (en) * 2023-07-19 2023-10-13 四川省产品质量监督检验检测院 Method for detecting collision point and calculating collision volume of molecular electrostatic potential isosurface point cloud
CN117224951B (en) * 2023-11-02 2024-05-28 深圳市洲禹科技有限公司 Pedestrian behavior prediction method and device based on perception and electronic equipment

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6396492B1 (en) * 1999-08-06 2002-05-28 Mitsubishi Electric Research Laboratories, Inc Detail-directed hierarchical distance fields
US7555163B2 (en) * 2004-12-16 2009-06-30 Sony Corporation Systems and methods for representing signed distance functions
CN107907593B (en) * 2017-11-22 2020-09-22 中南大学 Manipulator anti-collision method in ultrasonic detection
CN110180182B (en) * 2019-04-28 2021-03-26 腾讯科技(深圳)有限公司 Collision detection method, collision detection device, storage medium, and electronic device
CN110992456B (en) * 2019-11-19 2021-09-07 浙江大学 Avalanche simulation method based on position dynamics

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Liu, Pengfei, et al.; Real-time collision detection between general SDFs; 2024-04-25; pp. 1-13 *

Also Published As

Publication number Publication date
CN115115773A (en) 2022-09-27

Similar Documents

Publication Publication Date Title
CN115115773B (en) Collision detection method, device, equipment and storage medium
US11461958B2 (en) Scene data obtaining method and model training method, apparatus and computer readable storage medium using the same
Ho et al. Efficient point-based rendering techniques for haptic display of virtual objects
US8903693B2 (en) Boundary handling for particle-based simulation
US6809738B2 (en) Performing memory management operations to provide displays of complex virtual environments
JP2625621B2 (en) How to create an object
CN103236079B (en) Improved three-dimensional model voxelization-based inner sphere construction method
CN111652908A (en) Operation collision detection method for virtual reality scene
US9971335B2 (en) Hybrid dynamic tree data structure and accessibility mapping for computer numerical controlled machining path planning
Otaduy et al. CLODs: Dual Hierarchies for Multiresolution Collision Detection.
WO2016097373A1 (en) Rendering based generation of occlusion culling models
CN110717967A (en) Large-scene-model-oriented web-side dynamic rendering LOD processing method
US20030117398A1 (en) Systems and methods for rendering frames of complex virtual environments
Lin et al. Collision detection
Hadap et al. Collision detection and proximity queries
Hastings et al. Optimization of large-scale, real-time simulations by spatial hashing
CN117152237A (en) Distance field generation method and device, electronic equipment and storage medium
Horvat et al. Ray-casting point-in-polyhedron test
Echegaray et al. A methodology for optimal voxel size computation in collision detection algorithms for virtual reality
Ulyanov et al. Interactive vizualization of constructive solid geometry scenes on graphic processors
CN113591208A (en) Oversized model lightweight method based on ship feature extraction and electronic equipment
Naim et al. Collision detection and force response in highly-detailed point-based hapto-visual virtual environments
Woulfe et al. A framework for benchmarking interactive collision detection
CN117235824B (en) Coplanarity fitting method, apparatus, device and computer readable storage medium
Sajo et al. Controlling the Accuracy and Efficiency of Collision Detection in 2d Games using Hitboxes

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant