CN115317916A - Method and device for detecting overlapped objects in virtual scene and electronic equipment - Google Patents


Info

Publication number: CN115317916A
Authority: CN (China)
Prior art keywords: target object, ray, virtual scene, detection, determining
Legal status: Pending
Application number: CN202210725183.8A
Other languages: Chinese (zh)
Inventor: Wang Xiangkun (王向坤)
Current Assignee: Netease Hangzhou Network Co Ltd
Original Assignee: Netease Hangzhou Network Co Ltd
Application filed by Netease Hangzhou Network Co Ltd
Priority to CN202210725183.8A
Publication of CN115317916A


Classifications

    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/55: Controlling game characters or game objects based on the game progress
    • A63F13/57: Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game
    • A63F13/577: Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game using determination of contact between game characters or objects, e.g. to avoid collision between virtual racing cars

Abstract

The invention provides a method and a device for detecting an overlapped object in a virtual scene, and an electronic device. Collision detection parameters are obtained, where the collision detection parameters include a parameter for detecting whether a target object in the virtual scene collides with a first object in the virtual scene; a detection ray is generated on the outer surface of the target object based on the collision detection parameters, where the detection ray does not intersect the target object; it is then determined whether the detection ray collides with the first object, and if so, the target object is determined to be an overlapping object of the first object. The method generates detection rays on the outer surface of an object in the virtual scene and then determines whether the current object is an overlapped object according to whether its detection rays collide with other objects in the virtual scene. Compared with performing collision detection directly between objects in the virtual scene, the method has low overhead; and because it is a programmed operation, it improves the efficiency of overlap detection compared with manual inspection.

Description

Method and device for detecting overlapped objects in virtual scene and electronic equipment
Technical Field
The invention relates to the technical field of software design, in particular to a method and a device for detecting an overlapped object in a virtual scene and electronic equipment.
Background
In the Unity engine environment, a game scene is generally built either by scene-editing staff arranging the scene by hand or by means of scene editing tools. On the tool side, game scene objects are mostly generated by terrain-based point-scattering tools or by the terrain editing tool provided with the engine. However, after static objects are generated in a scene with such tools, the objects often cross and overlap one another in the game scene, which affects the player's game experience. In the related art, to avoid cross-overlap between objects, collision detection is usually performed on the objects while the tool scatters points so that cross-overlapping objects can be removed; however, this approach needs multithreading to reach an acceptable computation efficiency, so its overhead is high. Alternatively, cross-overlapping objects in the scene can be removed manually, which is highly controllable but inefficient.
Disclosure of Invention
The invention aims to provide a method and a device for detecting an overlapped object in a virtual scene and electronic equipment, so as to improve the detection efficiency of the overlapped object in the virtual scene.
In a first aspect, the present invention provides a method for detecting an overlapping object in a virtual scene, where the method includes: acquiring collision detection parameters; wherein the collision detection parameters include: detecting a parameter of whether a target object in the virtual scene collides with a first object in the virtual scene; the first object is an object except the target object in the virtual scene; generating a detection ray on the outer surface of the target object based on the collision detection parameter; the detection ray does not intersect with the target object; it is determined whether the detection ray collides with the first object, and if so, the target object is determined as an overlapping object of the first object.
In an alternative embodiment, the step of generating a detection ray on the outer surface of the target object based on the collision detection parameter includes: determining ray points around the target object based on the collision detection parameters; detection rays are generated on the outer surface of the target object according to ray points around the target object.
In an alternative embodiment, the step of determining ray points around the target object based on the collision detection parameters includes: obtaining initial coordinates of ray points around the target object according to the collision detection parameters and the position coordinates of the plurality of preset points; and determining the position of the ray points around the target object based on the initial coordinates of the ray points around the target object and the position coordinates of the target object in the virtual scene.
In an alternative embodiment, the preset points include: presetting a plurality of vertexes contained in a circumscribed polygon of a unit circle; the step of obtaining initial coordinates of the ray points around the target object according to the collision detection parameters and the position coordinates of the plurality of preset points includes: multiplying the collision detection parameters with the position coordinates of the multiple vertexes to obtain initial coordinates of ray points around the target object; the vertex position coordinates are obtained by establishing a rectangular coordinate system with the center of a preset unit circle as the origin.
In an optional embodiment, the step of determining the position of the ray point around the target object based on the initial coordinate of the ray point around the target object and the position coordinate of the target object in the virtual scene includes: performing linear operation on the initial coordinates of the ray points around the target object and the position coordinates of the target object in the virtual scene to obtain the position of the ray point corresponding to the target object; and adjusting the position of the ray point corresponding to the target object to obtain the final position of the ray point around the target object.
In an optional embodiment, the step of adjusting the position of the ray point corresponding to the target object to obtain the final position of the ray point around the target object includes: determining the relative distance between a ray point corresponding to the target object and the target object; and carrying out scaling processing and object size following processing on the relative distance to obtain the final position of the ray point around the target object.
In an optional embodiment, the step of generating a detection ray on the outer surface of the target object according to the ray points around the target object includes: for each ray point around the target object, shifting the current ray point by a specified distance in the vertical direction, and emitting a ray in the vertical direction with the shifted position as the emission starting point, to obtain the detection ray corresponding to the current ray point.
In an alternative embodiment, the step of determining whether the detection ray collides with the first object and determining the target object as an overlapping object of the first object if the collision occurs includes: judging whether a detection ray colliding with the first object exists in the detection rays generated by the outer surface of the target object; if so, determining the target object as an overlapping object of the first object; if not, the target object is determined to be non-overlapping with the first object.
In an optional embodiment, after the step of determining the target object as the overlapping object of the first object, the method further includes: and hiding the overlapped objects, and determining whether to remove the overlapped objects according to the display effect of the virtual scene after hiding.
In an optional embodiment, before the step of determining the target object as the overlapping object of the first object, the method further includes: subtracting the vertical coordinate of the center of the target object from the vertical coordinate of the center of the first object to obtain a distance difference value; judging whether the distance difference value is smaller than a preset distance threshold value or not; if the distance difference value is smaller than a preset distance threshold value, determining the target object as an overlapped object of the first object; and if the distance difference is not smaller than the preset distance threshold, determining that the target object is not overlapped with the first object.
In an optional embodiment, before the step of obtaining the collision detection parameter, the method further includes: selecting a second object from the virtual scene; the second object is any object in the virtual scene; obtaining a collision detection parameter in response to an adjustment operation on a collision range of the second object; and the collision range corresponding to the collision detection parameters enables the detection ray corresponding to the second object not to intersect with the second object.
In a second aspect, the present invention provides an apparatus for detecting an overlapping object in a virtual scene, the apparatus comprising: the parameter acquisition module is used for acquiring collision detection parameters; wherein the collision detection parameters include: detecting a parameter of whether a target object in the virtual scene collides with a first object in the virtual scene; the first object is an object except the target object in the virtual scene; the ray generation module is used for generating detection rays on the outer surface of the target object based on the collision detection parameters; the detection ray does not intersect with the target object; and the overlapping detection module is used for determining whether the detection ray collides with the first object or not, and if so, determining the target object as an overlapping object of the first object.
In a third aspect, the present invention provides an electronic device, which includes a processor and a memory, where the memory stores machine executable instructions capable of being executed by the processor, and the processor executes the machine executable instructions to implement the method for detecting an overlapped object in the virtual scene.
In a fourth aspect, the present invention provides a computer-readable storage medium storing computer-executable instructions that, when invoked and executed by a processor, cause the processor to implement a method for detecting overlapping objects in a virtual scene as described above.
The embodiment of the invention has the following beneficial effects:
the invention provides a method and a device for detecting an overlapped object in a virtual scene and electronic equipment, wherein collision detection parameters are firstly acquired, and the collision detection parameters comprise: detecting a parameter of whether a target object in the virtual scene collides with a first object in the virtual scene; generating a detection ray on the outer surface of the target object based on the collision detection parameters, wherein the detection ray does not intersect with the target object; it is then determined whether the detected ray collides with the first object, and if so, the target object is determined to be an overlapping object of the first object. In the method, the detection ray generated on the outer surface of the object in the virtual scene can be controlled according to the collision detection parameter, and then whether the current object is an overlapped object is determined according to whether the detection ray of the current object collides with other objects in the virtual scene.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the invention as set forth hereinafter.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 is a flowchart of a method for detecting an overlapped object in a virtual scene according to an embodiment of the present invention;
fig. 2 is a flowchart of another method for detecting an overlapped object in a virtual scene according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a polygon circumscribed by a predetermined unit circle according to an embodiment of the present invention;
fig. 4 is a flowchart of another method for detecting an overlapped object in a virtual scene according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a device for detecting an overlapped object in a virtual scene according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the Unity engine environment, game scenes are mostly built by scene-editing staff or by means of scene editing tools. On the tool side, game scene objects are mostly generated by terrain-based point-scattering tools or by the terrain editing tool provided with the engine. After static objects are generated in a scene with such tools, objects in the scene often cross and overlap one another (which can also be understood as an interference phenomenon between two objects), affecting the player experience.
In the related art, common ways of avoiding cross-overlap of objects produced by a terrain point-scattering tool include the following three schemes. In the first scheme, the size of each object in the game scene is acquired, self cross-overlap is avoided when the object's points are scattered (the scattered points usually refer to the points at which objects are generated on the scene terrain), the scattered points are recorded, and newly scattered points avoid the original point set by comparison with the point set already held in memory. In the second scheme, collision detection is performed on the objects, and cross-overlapping objects are removed. In the third scheme, local object masking or manual removal is carried out in areas where objects cross and overlap.
However, the first scheme achieves a good effect for detecting self cross-overlap and for objects of small size, but its processing efficiency is low; for objects of different types or of larger size, and when the scattering density is high and object sizes are randomly generated within a certain range, many object overlaps still occur in the game scene. The second scheme performs collision detection between pairs of scene objects and can therefore remove cross-overlapping objects, but collision detection among many objects costs a great deal of performance, is hard to scale to a large number of objects, and its processing effect is poor. The third scheme can achieve a good effect and offers the best controllability, but manually detecting and removing objects in a large scene requires considerable labour and is inefficient.
Based on the above, embodiments of the present invention provide a method and an apparatus for detecting an overlapping object in a virtual scene, and an electronic device. To facilitate understanding of the embodiments of the present invention, a method for detecting an overlapped object in a virtual scene is first described in detail. As shown in fig. 1, the method includes the following specific steps:
step S102, obtaining collision detection parameters; wherein the collision detection parameters include: detecting a parameter of whether a target object in the virtual scene collides with a first object in the virtual scene; the first object is an object in the virtual scene except the target object.
In a specific implementation, the virtual scene generally includes a plurality of objects, and the objects may be buildings, scenery (e.g., trees, flowers, grass, etc.), or objects such as vehicles. The objects in the virtual scene are usually static objects baked by the engine; specifically, the static objects can be obtained by scattering points in the virtual scene with a preset terrain point-scattering tool.
The target object may be any object in the virtual scene, and the collision detection parameter may be a parameter set by the user according to a self collision range of the target object (the parameter may make the detection ray generated by the target object not intersect with the target object), may also be a parameter set by the user according to a self collision range of a virtual object other than the target object, and may also be a parameter set by the user according to research and development requirements.
Step S104, generating detection rays on the outer surface of the target object based on the collision detection parameters; the detection ray does not intersect the target object.
According to the collision detection parameters, a plurality of ray points distributed around the target object can be obtained, and rays are then emitted from these ray points, so that a plurality of detection rays corresponding to the target object are generated on the outer surface of the target object.
And step S106, determining whether the detection ray collides with the first object, and if so, determining the target object as an overlapped object of the first object.
The outer surface of the target object usually corresponds to a plurality of detection rays, the number of the detection rays can be determined according to the detection precision, and generally, the larger the number of the detection rays, the higher the detection precision. In a specific implementation, the detection ray collides with the first object, i.e., the detection ray intersects the first object. Specifically, if one or more detection rays of the plurality of detection rays generated from the outer surface of the target object intersect with the first object, the target object is considered to be overlapped with the first object, and at this time, the target object may be determined as an overlapped object of the first object. In a specific implementation, when the target object is determined to be an overlapping object, the overlapping object may be further processed, and the processing on the overlapping object may be: removing the overlapped object, masking the overlapped object, or adjusting the position of the overlapped object, etc.
If there is no detection ray that intersects the first object among the plurality of detection rays generated from the outer surface of the target object, the target object does not overlap the first object, and no processing is required.
In a particular implementation, the first object is not a terrain object in the virtual scene; a terrain object is typically the terrain of the virtual scene, cannot be removed, and the target object is allowed to overlap it.
The embodiment of the invention provides a method for detecting an overlapped object in a virtual scene, which comprises the following steps of firstly obtaining collision detection parameters: detecting a parameter of whether a target object in the virtual scene collides with a first object in the virtual scene; generating a detection ray on the outer surface of the target object based on the collision detection parameters, wherein the detection ray does not intersect with the target object; it is then determined whether the detected ray collides with the first object, and if so, the target object is determined as an overlapping object of the first object. In the method, the detection ray generated on the outer surface of the object in the virtual scene can be controlled according to the collision detection parameter, and then whether the current object is an overlapped object is determined according to whether the detection ray of the current object collides with other objects in the virtual scene.
The embodiment of the present invention further provides another method for detecting an overlapped object in a virtual scene, which is implemented on the basis of the above embodiment, and the method mainly describes a specific process of generating a detection ray on an outer surface of a target object based on a collision detection parameter (specifically, the method is implemented by the following steps S204 to S206), as shown in fig. 2, the method includes the following specific steps:
step S202, collision detection parameters are acquired.
The collision detection parameter may be a parameter corresponding to the self collision range of the target object, or a parameter corresponding to the self collision range of a designated object selected by the user from among the plurality of objects contained in the virtual scene, where the designated object may be the first object in a set of objects selected from the virtual scene, or an object randomly drawn from the selected set of objects.
In a specific implementation, the collision detection parameter may be determined by the following steps 10-11:
step 10, selecting a second object from the virtual scene; and the second object is any object in the virtual scene. The second object may be a target object and may be another object in the virtual scene.
Step 11, responding to the adjustment operation of the collision range of the second object, and obtaining a collision detection parameter; and the collision range corresponding to the collision detection parameter enables the detection ray corresponding to the second object not to intersect with the second object.
In a specific implementation, in order to improve detection efficiency, only one object needs to be selected from the virtual scene to determine the collision detection parameters. After the second object is selected, a visual range display is added to it, and the user can adjust the collision range of the second object (that is, adjust the value of the collision detection parameter) through this display, so that the collision range corresponding to the collision detection parameter is larger than the second object's own collision range, i.e., the collision range corresponding to the collision detection parameter can enclose the second object. In practice, the user can drag a slider provided in the graphical user interface to adjust a suitable range (the adjusted range corresponds to the collision detection parameter); the script automatically reads this range and can scale adaptively with the size of the object, which gives the method high adaptability. For example, when the second object is a cuboid, the collision detection parameter may be the radius of the cuboid's circumscribed circle.
It should be noted that an object in the virtual scene may carry a BoxCollider collision volume, a MeshCollider collision volume, or a combination of the two; a BoxCollider generally refers to the box-shaped (cuboid) collision volume type in the engine, while a MeshCollider generally refers to the collision volume type that follows the outer surface of the model. The virtual scene may be a game scene.
For example, assuming that an object in the virtual scene carries a BoxCollider collision volume, when rays are used to detect whether the object is an overlapping object, the object's own collision volume must be handled first to avoid self-collision (self-collision here means that a detection ray corresponding to the object intersects the object itself). To obtain a more accurate collision detection parameter and avoid hitting the object's own collision volume, the visual range adjustment of virtual objects can be carried out with the Gizmos function built into the Unity engine; at the same time, to improve efficiency, the range-adjustment script is loaded only onto the second object selected in the virtual scene, and the specific value of the collision detection parameter is then obtained by manually adjusting the size of the second object's collision range, so that self collision detection is avoided.
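For illustration, a minimal C# sketch of such a range-adjustment component follows; the class name, field name and single-radius form are assumptions for this sketch, not the patent's actual script, and only Gizmos.DrawWireSphere and OnDrawGizmosSelected are standard Unity API.

```csharp
using UnityEngine;

// Hypothetical range-adjustment script attached only to the selected second object.
// The public radius plays the role of the collision detection parameter and is tuned
// until the drawn range fully encloses the object, avoiding self collision detection.
public class CollisionRangeGizmo : MonoBehaviour
{
    // Collision detection parameter: radius of the detection range around the object.
    public float detectionRadius = 1.0f;

    // Drawn by the editor for the selected object; provides the visual range display
    // that the user adjusts in the Scene view.
    private void OnDrawGizmosSelected()
    {
        Gizmos.color = Color.yellow;
        Gizmos.DrawWireSphere(transform.position, detectionRadius);
    }
}
```

Another script could then read the adjusted value through GetComponent<CollisionRangeGizmo>() and use it as the collision detection parameter, in line with the GetComponent<> usage mentioned below.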
Step S204, based on the collision detection parameters, determining ray points around the target object.
The collision detection parameters may prevent the target object from self-collision, i.e., prevent the detection rays generated from the outer surface of the target object from intersecting the target object. Thus, a plurality of ray points need to be set around the target object in accordance with the collision detection parameters to generate a detection ray from the ray points. In a specific implementation, the step S204 can be implemented by the following steps 20 to 21:
and 20, obtaining initial coordinates of ray points around the target object according to the collision detection parameters and the position coordinates of the plurality of preset points.
The preset points can be set by a user according to requirements. For example, the plurality of preset points may be a plurality of points set on the preset unit circle, may be all vertices of the preset polygon, or may be all vertices of a polygon circumscribing the preset unit circle. Meanwhile, a plurality of preset points are placed in the rectangular coordinate system, and the position coordinate corresponding to each preset point can be obtained.
In a specific implementation, it is assumed that the preset points include: presetting a plurality of vertexes contained in a circumscribed polygon of a unit circle; the position coordinates of each vertex are obtained under a rectangular coordinate system established by taking the circle center of a preset unit circle as an origin, the horizontal direction is an X axis, the vertical direction is a Y axis, and the direction perpendicular to the plane of the X axis and the plane of the Y axis is a Z axis. Specifically, the collision detection parameter may be multiplied by the position coordinates of the plurality of vertices to obtain initial coordinates of ray points around the target object; wherein one vertex corresponds to one ray point.
The number of the sides of the circumscribed polygon can be set according to research and development requirements, as shown in fig. 3, the circumscribed polygon of the preset unit circle is a schematic diagram, the leftmost diagram in fig. 3 is a schematic diagram of a circumscribed regular hexagon of the preset unit circle, the middle diagram is a schematic diagram of a circumscribed regular dodecagon of the preset unit circle, and the rightmost diagram is a schematic diagram of a circumscribed regular octadecagon of the preset unit circle. Specifically, when the number of the sides of the circumscribed polygon is small, the detection efficiency can be improved, but the detection area is large, the number of ray points is small, and accurate detection is difficult; when the number of sides of the circumscribed polygon is too large, the detection accuracy increases, but the efficiency is low due to the excessive ray point detection.
In some alternative embodiments, balancing accuracy against efficiency, the regular dodecagon circumscribing the preset unit circle may be selected to provide the ray points. When the circumscribed regular dodecagon is chosen, its twelve vertices can be used as the twelve preset points, and a rectangular coordinate system is established with the centre of the preset unit circle as the origin; in this coordinate system, the position coordinate set corresponding to the twelve preset points can be expressed as follows:
{ ( sec15° · cos(30°·k), 0, sec15° · sin(30°·k) ) | k = 0, 1, …, 11 }

where sec15° = 1/cos15° ≈ 1.035 is the distance from the centre of the unit circle to each vertex of its circumscribed regular dodecagon.
the position coordinate set comprises twelve elements, and each element corresponds to the position coordinate of one preset point.
And step 21, determining the positions of the ray points around the target object based on the initial coordinates of the ray points around the target object and the position coordinates of the target object in the virtual scene.
After the initial coordinates of the ray points around the target object are obtained, the initial coordinates are introduced to the position of the target object in the virtual scene, so that the ray points can be arranged around the target object in the virtual scene. In a particular implementation, determining the location of ray points around the target object may be achieved by the following steps 30-31:
and step 30, performing linear operation on the initial coordinates of the ray points around the target object and the position coordinates of the target object in the virtual scene to obtain the position of the ray point corresponding to the target object.
The position coordinates of the target object in the virtual scene refer to the position coordinates corresponding to the center of the target object in the world coordinate system. The linear operation may be to add the initial coordinates of the ray points around the target object to the position coordinates of the target object in the virtual scene, that is, to change the origin of the rectangular coordinate system where the initial coordinates are located to the origin of the world coordinate system, so as to obtain coordinate values of the ray points around the target object corresponding to the world coordinate system, and determine the coordinate value of each ray point in the world coordinate system as the position of the ray point corresponding to the target object.
And step 31, adjusting the positions of the ray points corresponding to the target object to obtain the final positions of the ray points around the target object.
In a specific implementation, in order to ensure the detection accuracy and prevent the detection ray from intersecting with the target object, the position of the ray point of the target object needs to be adjusted, and the adjusted position is determined as the final position of the ray point around the target object. Specifically, the relative distance between the ray point corresponding to the target object and the target object may be determined first; and carrying out scaling processing and object size following processing on the relative distance to obtain the final position of the ray point around the target object.
The relative distance between the ray point and the target object may be a distance between the ray point and the center of the target object, or a distance between the ray point and the surface of the target object. By performing the scaling processing and the object size following processing, it is possible to make the ray point outside the target object and the detection ray generated by the ray point not intersect the target object.
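As a hedged C# sketch of these coordinate steps (the twelve-vertex layout in the horizontal plane, the sizeScale factor standing in for the scaling and object-size-following processing, and all names are illustrative assumptions rather than the patent's code):

```csharp
using UnityEngine;

public static class RayPointBuilder
{
    // Computes world-space ray point positions around the target object.
    // detectionParam is the collision detection parameter obtained earlier;
    // sizeScale stands in for the scaling and object-size-following adjustment.
    public static Vector3[] BuildRayPoints(Transform target, float detectionParam, float sizeScale = 1.0f)
    {
        const int vertexCount = 12; // circumscribed regular dodecagon
        // Distance from the centre of a unit circle to a vertex of its circumscribed regular polygon.
        float circumRadius = 1.0f / Mathf.Cos(Mathf.PI / vertexCount);
        var points = new Vector3[vertexCount];

        for (int i = 0; i < vertexCount; i++)
        {
            float angle = i * 2.0f * Mathf.PI / vertexCount;
            // Initial coordinates: preset vertex multiplied by the collision detection parameter.
            Vector3 initial = new Vector3(Mathf.Cos(angle), 0f, Mathf.Sin(angle)) * (circumRadius * detectionParam);
            // Linear operation: translate from the local coordinate system into world space.
            Vector3 worldPoint = initial + target.position;
            // Adjustment: scale the relative distance to the target so the point stays outside it.
            points[i] = target.position + (worldPoint - target.position) * sizeScale;
        }
        return points;
    }
}
```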
Step S206, generating detection rays on the outer surface of the target object according to ray points around the target object; the detection ray does not intersect the target object.
In a specific implementation, after obtaining the ray points around the target object, the detection rays may be emitted from the ray points to the outside of the target object. For each ray point around the target object, shifting the current ray point by a specified distance in the vertical direction (the specified distance may be set according to research and development requirements, for example, the specified distance may be set to-2000 or 1000, etc.), and taking the shifted position as an emission starting point, emitting a ray in the vertical direction, to obtain a detection ray corresponding to the current ray point, where the current ray point is any one ray point around the target object.
In practical application, each ray point around the target object may be moved downward by a specified distance along the vertical direction, and then the ray is emitted in the vertical upward direction with the shifted position as the emission starting point, so as to obtain the detection ray corresponding to each ray point. And each ray point around the target object can also be moved upwards by a specified distance along the vertical direction, and then the ray is emitted in the vertical downward direction by taking the shifted position as an emission starting point to obtain the detection ray corresponding to each ray point.
Specifically, after the ray points around the target object are obtained, the ray emission function (Ray) of the Unity engine is used: each ray point is offset in the vertical direction (i.e., along the Y axis) to obtain a ray emission starting point, and a ray is then emitted from that point in the vertical direction. The vertical offset keeps the emission starting point away from the target object, preventing the ray from being emitted from inside the target object and causing detection errors.
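A short C# sketch of this emission step (the offset and distance values are illustrative placeholders; Physics.RaycastAll, Vector3 and RaycastHit are the standard Unity API):

```csharp
using UnityEngine;

public static class OverlapRayEmitter
{
    // Shifts a ray point downward by a specified distance and casts a ray straight up,
    // returning every collider the detection ray passes through.
    public static RaycastHit[] EmitDetectionRay(Vector3 rayPoint, float verticalOffset = 2000f, float maxDistance = 4000f)
    {
        // Offset the emission starting point so the ray never starts inside the target object.
        Vector3 origin = rayPoint + Vector3.down * verticalOffset;
        // Emit the ray in the vertical (upward) direction.
        return Physics.RaycastAll(origin, Vector3.up, maxDistance);
    }
}
```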
Step S208, determining whether the detection ray collides with the first object, and if so, determining the target object as an overlapping object of the first object.
In a specific implementation, collision information needs to be acquired for each detection ray of the target object; the collision information corresponding to a given detection ray records the collisions of that ray with objects in the virtual scene other than the target object itself. Specifically, if a detection ray intersects an object in the virtual scene (which can also be understood as a collision), the name and physical information of the collided object are saved in the collision information corresponding to that detection ray.
From the collision information corresponding to the detection rays around the target object, it can be judged whether any detection ray of the target object intersects the first object; if so, the target object overlaps the first object and can be determined as an overlapping object of the first object.
In a specific implementation, after the collision information corresponding to the detection rays has been determined, the terrain information in the collision information needs to be filtered out, i.e., terrain objects are removed from the collision information, because a terrain object is a fixed object in the virtual scene and requires no subsequent processing.
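Continuing the sketch above, the hit information can be filtered and the overlap decision made roughly as follows; identifying terrain objects through the Terrain component is an assumption of this sketch, since the patent only states that terrain objects are filtered out of the collision information.

```csharp
using UnityEngine;

public static class OverlapChecker
{
    // Returns true if any detection ray of the target object hits another object
    // that is neither the target itself nor a terrain object.
    public static bool IsOverlapping(GameObject target, Vector3[] rayPoints)
    {
        foreach (Vector3 point in rayPoints)
        {
            foreach (RaycastHit hit in OverlapRayEmitter.EmitDetectionRay(point))
            {
                GameObject other = hit.collider.gameObject;
                if (other == target)
                    continue;                                // ignore self-collision
                if (other.GetComponent<Terrain>() != null)
                    continue;                                // filter terrain objects out of the collision information
                return true;                                 // the detection ray collided with a first object
            }
        }
        return false;
    }
}
```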
In the above method for detecting overlapped objects in a virtual scene, a plurality of ray points are generated around each object in the virtual scene under the control of the collision detection parameters, detection rays are then generated from these ray points, and the overlapped objects in the virtual scene are determined by acquiring the information of the objects hit by the detection rays.
The embodiment of the present invention further provides another method for detecting an overlapped object in a virtual scene, which is implemented on the basis of the above embodiment, and the method mainly describes a specific process of determining whether a detection ray collides with a first object, and if so, determining a target object as an overlapped object of the first object (which is implemented by the following steps S406 to S416), as shown in fig. 4, the method includes the following specific steps:
step S402, collision detection parameters are acquired.
In a specific implementation, after the collision range has been adjusted through the visual parameter adjustment, the collision detection parameters can be read with the GetComponent<>() function of the Unity engine.
In step S404, a detection ray is generated on the outer surface of the target object based on the collision detection parameters.
After the collision detection parameters are obtained, a circular region with the target object as the center and the collision detection parameters as the radius can be obtained. Then, based on the position coordinates of all vertexes of the circumscribed polygon of the preset unit circle, multiplying the position coordinates of all vertexes of the circumscribed polygon with the collision detection parameters to obtain the coordinates of all vertexes of the circumscribed polygon of the circular area with the target object as the center, namely the coordinates of a plurality of ray points around the target object; wherein, a vertex of the circumscribed polygon corresponds to a ray point.
Step S406, judging whether a detection ray colliding with the first object exists in the detection rays generated by the outer surface of the target object; if so, go to step S408; otherwise, step S416 is performed.
The first object is an object other than the target object in the virtual scene. In a specific implementation, collision information of each detection ray of the target object needs to be acquired, where the collision information corresponding to the detection ray includes information that the detection ray collides with other objects in the virtual scene except for the own object, and the information includes names of the collided objects.
In step S408, the vertical coordinate of the center of the target object is subtracted from the vertical coordinate of the center of the first object to obtain a distance difference.
In a virtual scene, multiple objects may be stacked in the vertical direction, such as buildings, vessels and floats on the sea surface. To obtain a correct processing result in the vertical direction of the virtual scene, and to prevent ships, buildings and other objects on the water surface from affecting the generation of objects in the corresponding terrain area, the invention introduces a distance judgement coefficient for objects in the vertical direction (equivalent to the preset distance threshold): the absolute distance between two objects in the vertical direction is obtained and compared with the user-defined distance, so that objects in the virtual scene are judged a second time and an accurate vertical criterion is established.
In a specific implementation, the difference between the vertical coordinate of the centre of the target object and the vertical coordinate of the centre of the first object is calculated, giving the relative distance between the two objects in the vertical direction.
Step S410, judging whether the distance difference value is smaller than a preset distance threshold value; if so, go to step S412; otherwise, step S416 is performed.
The preset distance threshold value can be set according to research and development requirements, and can also be adjusted in real time according to requirements.
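A small C# sketch of this vertical check (the absolute-value form follows the "absolute distance" wording above, and the default threshold is an arbitrary placeholder):

```csharp
using UnityEngine;

public static class VerticalDistanceFilter
{
    // Returns true only when the two object centres are closer than the preset
    // distance threshold along the vertical (Y) axis.
    public static bool WithinVerticalThreshold(Transform target, Transform first, float distanceThreshold = 5.0f)
    {
        float difference = Mathf.Abs(target.position.y - first.position.y);
        return difference < distanceThreshold;
    }
}
```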
Step S412, determining the target object as an overlapped object of the first object; step S414 is performed.
Step S414, hiding the overlapped object, and determining whether to remove the overlapped object according to the display effect of the virtual scene after the hiding.
In a specific implementation, after the target object is determined as an overlapped object of the first object, the overlapped object may first be hidden, and whether to remove it is decided according to the display effect of the virtual scene after hiding. Specifically, the Unity engine's built-in SetActive() function can be used to hide the object; SetActive() has high processing performance and allows the display effect after hiding to be checked quickly in the virtual scene.
In practical application, the user checks the display effect of the virtual scene after hiding. If the display effect meets the preset requirement, a removal button in the graphical user interface can be triggered and the hidden overlapped object is removed. If the display effect does not meet the preset requirement, the hidden target object can be shown again in the graphical user interface, the detection rays or the vertical distance of the target object can be readjusted, and a masking button displayed in the graphical user interface can then be triggered to mask the target object; the user checks the display effect of the virtual scene after masking, and if it meets the requirement, the target object is removed through the removal button.
In order to reduce the memory occupied by the scene, when the hidden object is removed, the DestroyImmediate() function of the Unity engine can be used to remove it and clean up the hidden object's data in the virtual scene.
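A minimal sketch of the hide-then-remove flow (the wrapper method names are assumptions; SetActive() and DestroyImmediate() are the Unity functions named in the description):

```csharp
using UnityEngine;

public static class OverlapCleanup
{
    // Hides an overlapping object so the display effect of the scene can be checked.
    public static void Hide(GameObject overlapObject)
    {
        overlapObject.SetActive(false);
    }

    // Shows the object again if the display effect after hiding is unsatisfactory.
    public static void Restore(GameObject overlapObject)
    {
        overlapObject.SetActive(true);
    }

    // Removes the hidden object and its data from the scene once the user confirms.
    public static void Remove(GameObject overlapObject)
    {
        Object.DestroyImmediate(overlapObject);
    }
}
```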
In step S416, it is determined that the target object does not overlap the first object.
In the method for detecting the overlapped object in the virtual scene, the detection ray can be generated on the outer surface of the object in the virtual scene, and then whether the current object is the overlapped object or not is determined according to whether the detection ray of the current object collides with other objects in the virtual scene, and the overlapped object is processed. The method can basically remove the cross overlapping objects existing in the virtual scene, greatly improve the generation quality and the processing efficiency of the objects in the virtual scene, and can obtain good scene arrangement effect.
Corresponding to the above method embodiment, an embodiment of the present invention provides an apparatus for detecting an overlapped object in a virtual scene, as shown in fig. 5, where the apparatus includes:
a parameter obtaining module 50 for obtaining collision detection parameters; wherein the collision detection parameters include: detecting a parameter of whether a target object in the virtual scene collides with a first object in the virtual scene; the first object is an object in the virtual scene except the target object.
A ray generation module 51 for generating a detection ray on the outer surface of the target object based on the collision detection parameter; the detection ray does not intersect the target object.
And the overlap detection module 52 is configured to determine whether the detection ray collides with the first object, and if so, determine the target object as an overlap object of the first object.
The detection device for the overlapped objects in the virtual scene firstly obtains collision detection parameters, wherein the collision detection parameters comprise: detecting a parameter of whether a target object in the virtual scene collides with a first object in the virtual scene; generating a detection ray on the outer surface of the target object based on the collision detection parameters, wherein the detection ray does not intersect with the target object; it is then determined whether the detected ray collides with the first object, and if so, the target object is determined as an overlapping object of the first object. In the method, the detection ray generated on the outer surface of the object in the virtual scene can be controlled according to the collision detection parameter, and then whether the current object is an overlapped object is determined according to whether the detection ray of the current object collides with other objects in the virtual scene.
Specifically, the above-mentioned ray generation module 51 includes: a ray point generating unit for determining ray points around the target object based on the collision detection parameters; and the ray generating unit is used for generating detection rays on the outer surface of the target object according to ray points around the target object.
Further, the ray point generating unit is configured to: obtaining initial coordinates of ray points around the target object according to the collision detection parameters and the position coordinates of the plurality of preset points; and determining the position of the ray points around the target object based on the initial coordinates of the ray points around the target object and the position coordinates of the target object in the virtual scene.
In a specific implementation, the preset points include: presetting a plurality of vertexes contained in a circumscribed polygon of a unit circle; the ray point generating unit is configured to: multiplying the collision detection parameters by the position coordinates of the plurality of vertexes to obtain initial coordinates of ray points around the target object; the position coordinates of the vertexes are obtained under the condition that a rectangular coordinate system is established by taking the center of a preset unit circle as an origin.
In a specific implementation, the ray point generating unit is further configured to: performing linear operation on the initial coordinates of the ray points around the target object and the position coordinates of the target object in the virtual scene to obtain the position of the ray point corresponding to the target object; and adjusting the position of the ray point corresponding to the target object to obtain the final position of the ray point around the target object.
Specifically, the ray point generating unit is further configured to: determining the relative distance between a ray point corresponding to the target object and the target object; and carrying out scaling processing and object size following processing on the relative distance to obtain the final position of the ray point around the target object.
Further, the above-mentioned ray generation unit is configured to: for each ray point around the target object, shift the current ray point by a specified distance in the vertical direction, and emit a ray in the vertical direction with the shifted position as the emission starting point, to obtain the detection ray corresponding to the current ray point.
In a specific implementation, the overlap detection module 52 is configured to: judging whether a detection ray colliding with the first object exists in the detection rays generated by the outer surface of the target object; if so, determining the target object as an overlapping object of the first object; if not, determining that the target object does not overlap with the first object.
In some embodiments, the apparatus further comprises an object handling module configured to: after the target object is determined as the overlapped object of the first object, hide the overlapped object, and determine whether to remove the overlapped object according to the display effect of the virtual scene after hiding.
Further, the apparatus further comprises a distance detection module configured to: before the target object is determined as the overlapped object of the first object, subtracting the vertical coordinate of the center of the target object from the vertical coordinate of the center of the first object to obtain a distance difference value; judging whether the distance difference value is smaller than a preset distance threshold value or not; if the distance difference value is smaller than a preset distance threshold value, determining the target object as an overlapped object of the first object; and if the distance difference is not smaller than the preset distance threshold, determining that the target object is not overlapped with the first object.
In a specific implementation, the apparatus further includes a parameter determining module, configured to: before acquiring collision detection parameters, selecting a second object from the virtual scene; the second object is any object in the virtual scene; obtaining a collision detection parameter in response to an adjustment operation on a collision range of the second object; and the collision range corresponding to the collision detection parameter enables the detection ray corresponding to the second object not to intersect with the second object.
The implementation principle and the generated technical effects of the detection device for the overlapped objects in the virtual scene provided by the embodiment of the invention are the same as those of the method embodiment, and for brief description, the corresponding contents in the method embodiment can be referred to where the embodiment of the device is not mentioned.
An embodiment of the present invention further provides an electronic device, as shown in fig. 6, where the electronic device includes a processor and a memory, where the memory stores machine executable instructions capable of being executed by the processor, and the processor executes the machine executable instructions to implement the method for detecting an overlapped object in the virtual scene.
Specifically, the method for detecting an overlapped object in the virtual scene includes: acquiring collision detection parameters; wherein the collision detection parameters include: detecting a parameter of whether a target object in the virtual scene collides with a first object in the virtual scene; the first object is an object except the target object in the virtual scene; generating a detection ray on the outer surface of the target object based on the collision detection parameter; the detection ray does not intersect with the target object; it is determined whether the detection ray collides with the first object, and if so, the target object is determined as an overlapping object of the first object.
The method for detecting the overlapped objects in the virtual scene can control the outer surface of the object of the virtual scene to generate the detection ray according to the collision detection parameter, and then determine whether the current object is the overlapped object according to whether the detection ray of the current object collides with other objects in the virtual scene, wherein the method is low in consumption compared with a method for performing collision detection on the objects in the virtual scene, and the method is based on programmed operation and improves the detection efficiency of the overlapped objects in the virtual scene compared with a manual detection method.
In an alternative embodiment, the step of generating a detection ray on the outer surface of the target object based on the collision detection parameter includes: determining ray points around the target object based on the collision detection parameters; detection rays are generated on the outer surface of the target object according to ray points around the target object.
In an alternative embodiment, the step of determining ray points around the target object based on the collision detection parameters includes: obtaining initial coordinates of ray points around the target object according to the collision detection parameters and the position coordinates of the plurality of preset points; and determining the positions of the ray points around the target object based on the initial coordinates of the ray points around the target object and the position coordinates of the target object in the virtual scene.
In an alternative embodiment, the preset points include: presetting a plurality of vertexes contained in a circumscribed polygon of a unit circle; the step of obtaining initial coordinates of the ray points around the target object according to the collision detection parameters and the position coordinates of the plurality of preset points includes: multiplying the collision detection parameters by the position coordinates of the plurality of vertexes to obtain initial coordinates of ray points around the target object; the position coordinates of the vertexes are obtained under the condition that a rectangular coordinate system is established by taking the center of a preset unit circle as an origin.
In an optional embodiment, the step of determining the position of the ray point around the target object based on the initial coordinate of the ray point around the target object and the position coordinate of the target object in the virtual scene includes: performing linear operation on the initial coordinates of the ray points around the target object and the position coordinates of the target object in the virtual scene to obtain the position of the ray point corresponding to the target object; and adjusting the position of the ray point corresponding to the target object to obtain the final position of the ray point around the target object.
In an optional embodiment, the step of adjusting the position of the ray point corresponding to the target object to obtain the final position of the ray point around the target object includes: determining the relative distance between a ray point corresponding to the target object and the target object; and carrying out scaling processing and object size following processing on the relative distance to obtain the final position of the ray point around the target object.
In an optional embodiment, the step of generating a detection ray on the outer surface of the target object according to the ray points around the target object includes: for each ray point around the target object, offsetting the current ray point by a specified distance in the vertical direction, and emitting a ray in the vertical direction with the offset position as the emission starting point, to obtain the detection ray corresponding to the current ray point.
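A sketch of the ray construction under the assumption that the vertical offset raises each ray point above the target object and the detection ray is then emitted straight down; the embodiment only specifies that both the offset and the emission direction are vertical, so the sign of the offset is an illustrative choice.

def build_detection_rays(ray_points_xz, base_y, vertical_offset):
    rays = []
    for (x, z) in ray_points_xz:
        # Offset the ray point by a specified distance in the vertical direction
        # and use the offset position as the emission starting point.
        origin = (x, base_y + vertical_offset, z)
        direction = (0.0, -1.0, 0.0)  # assumed: emit the ray vertically downwards
        rays.append((origin, direction))
    return rays

The resulting (origin, direction) pairs can be fed directly to the is_overlapping sketch above.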
In an alternative embodiment, the step of determining whether the detection ray collides with the first object and, if a collision occurs, determining the target object as an overlapping object of the first object includes: judging whether any of the detection rays generated on the outer surface of the target object collides with the first object; if so, determining the target object as an overlapping object of the first object; and if not, determining that the target object does not overlap the first object.
In an optional embodiment, after the target object is determined as the overlapped object of the first object, the overlapped object is hidden, and whether to remove the overlapped object is determined according to a display effect of the virtual scene after the hiding.
In an optional embodiment, before the step of determining the target object as the overlapping object of the first object, the method further includes: subtracting the vertical coordinate of the center of the target object from the vertical coordinate of the center of the first object to obtain a distance difference value; judging whether the distance difference value is smaller than a preset distance threshold value or not; if the distance difference value is smaller than a preset distance threshold value, determining the target object as an overlapped object of the first object; and if the distance difference is not smaller than the preset distance threshold, determining that the target object is not overlapped with the first object.
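For completeness, the optional height pre-check can be written as a one-line filter; dist_threshold is a scene-specific tuning value not prescribed by the embodiment, and the subtraction order follows the wording above.

def passes_height_precheck(first_center_y, target_center_y, dist_threshold):
    # Distance difference: vertical coordinate of the first object's center minus
    # that of the target object's center; only pairs below the threshold are
    # still considered as potentially overlapping.
    return (first_center_y - target_center_y) < dist_threshold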
In an optional embodiment, before the step of obtaining the collision detection parameters, the method further includes: selecting a second object from the virtual scene, the second object being any object in the virtual scene; and obtaining the collision detection parameters in response to an adjustment operation on a collision range of the second object, where the collision range corresponding to the collision detection parameters is such that the detection ray corresponding to the second object does not intersect the second object.
Further, the electronic device shown in fig. 6 further includes a bus 102 and a communication interface 103, and the processor 101, the communication interface 103, and the memory 100 are connected through the bus 102.
The memory 100 may include a high-speed Random Access Memory (RAM) and may further include a non-volatile memory, such as at least one magnetic disk memory. The communication connection between the network element of the system and at least one other network element is realized through at least one communication interface 103 (which may be wired or wireless), using the Internet, a wide area network, a local area network, a metropolitan area network, or the like. The bus 102 may be an ISA bus, a PCI bus, an EISA bus, or the like, and may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one double-headed arrow is shown in FIG. 6, but this does not mean that there is only one bus or only one type of bus.
The processor 101 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware or by instructions in the form of software in the processor 101. The processor 101 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA), another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The various methods, steps, and logic blocks disclosed in the embodiments of the present invention may be implemented or performed by such a processor. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present invention may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software module may be located in a storage medium well known in the art, such as a random access memory, a flash memory, a read-only memory, a programmable or electrically erasable programmable memory, or a register. The storage medium is located in the memory 100, and the processor 101 reads the information in the memory 100 and completes the steps of the method of the foregoing embodiments in combination with its hardware.
An embodiment of the present invention further provides a computer-readable storage medium storing computer-executable instructions which, when called and executed by a processor, cause the processor to implement the method for detecting overlapped objects in a virtual scene; for the specific implementation, reference may be made to the method embodiments, and details are not repeated here.
The functions, if implemented in the form of software functional units and sold or used as an independent product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a terminal device, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program codes, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
In the description of the present invention, it should be noted that the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc., indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience of description and simplicity of description, but do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
Finally, it should be noted that the above-mentioned embodiments are only specific embodiments of the present invention, used to illustrate the technical solutions of the present invention rather than to limit them, and the protection scope of the present invention is not limited thereto. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person skilled in the art may still modify the technical solutions described in the foregoing embodiments, readily conceive of changes to them, or make equivalent substitutions for some of their technical features within the technical scope of the present disclosure; such modifications, changes, or substitutions do not cause the corresponding technical solutions to depart from the spirit and scope of the embodiments of the present invention, and shall all be covered within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (14)

1. A method for detecting overlapping objects in a virtual scene, the method comprising:
acquiring collision detection parameters; wherein the collision detection parameters include: detecting a parameter of whether a target object in a virtual scene collides with a first object in the virtual scene; the first object is an object in the virtual scene except the target object;
generating a detection ray on the outer surface of the target object based on the collision detection parameter; the detection ray does not intersect with the target object;
and determining whether the detection ray collides with the first object, and if so, determining the target object as an overlapped object of the first object.
2. The method of claim 1, wherein the step of generating detection rays on the outer surface of the target object based on the collision detection parameters comprises:
determining ray points around the target object based on the collision detection parameters;
and generating detection rays on the outer surface of the target object according to ray points around the target object.
3. The method of claim 2, wherein the step of determining ray points around the target object based on the collision detection parameters comprises:
obtaining initial coordinates of ray points around the target object according to the collision detection parameters and the position coordinates of the preset points;
determining the position of the ray points around the target object based on the initial coordinates of the ray points around the target object and the position coordinates of the target object in the virtual scene.
4. The method of claim 3, wherein the plurality of preset points comprises: presetting a plurality of vertexes contained in a circumscribed polygon of a unit circle;
the step of obtaining initial coordinates of ray points around the target object according to the collision detection parameters and the position coordinates of the preset points comprises:
multiplying the collision detection parameters by the position coordinates of the plurality of vertexes to obtain initial coordinates of ray points around the target object; and the position coordinates of the vertex are obtained under the condition that a rectangular coordinate system is established by taking the center of the preset unit circle as an origin.
5. The method of claim 3, wherein the step of determining the position of the ray points around the target object based on the initial coordinates of the ray points around the target object and the position coordinates of the target object in the virtual scene comprises:
performing linear operation on the initial coordinates of the ray points around the target object and the position coordinates of the target object in the virtual scene to obtain the position of the ray point corresponding to the target object;
and adjusting the position of the ray point corresponding to the target object to obtain the final position of the ray point around the target object.
6. The method according to claim 5, wherein the step of adjusting the position of the ray point corresponding to the target object to obtain the final position of the ray points around the target object comprises:
determining the relative distance between the ray point corresponding to the target object and the target object;
and carrying out scaling processing and object size following processing on the relative distance to obtain the final position of the ray point around the target object.
7. The method of claim 2, wherein the step of generating detection rays at the outer surface of the target object from ray points around the target object comprises:
and aiming at each ray point around the target object, shifting the current ray point to a specified distance in the vertical direction, and taking the shifted position as a transmission starting point to transmit rays in the vertical direction to obtain the detection rays corresponding to the current ray point.
8. The method of claim 1, wherein the step of determining whether the detected ray collides with the first object and, if so, determining the target object as an overlapping object of the first object comprises:
judging whether a detection ray colliding with the first object exists in detection rays generated by the outer surface of the target object;
if so, determining the target object as an overlapping object of the first object;
if not, determining that the target object is not overlapped with the first object.
9. The method of any one of claims 1-8, wherein after the step of determining the target object as an overlapping object of the first object, the method further comprises:
and hiding the overlapped object, and determining whether to remove the overlapped object according to the display effect of the virtual scene after hiding.
10. The method of claim 8, wherein the step of determining the target object as an overlapping object of the first object is preceded by the method further comprising:
subtracting the vertical coordinate of the center of the target object from the vertical coordinate of the center of the first object to obtain a distance difference value;
judging whether the distance difference value is smaller than a preset distance threshold value or not;
if the distance difference value is smaller than the preset distance threshold value, determining the target object as an overlapped object of the first object;
and if the distance difference is not smaller than the preset distance threshold, determining that the target object is not overlapped with the first object.
11. The method of claim 1, wherein the step of obtaining collision detection parameters is preceded by the method further comprising:
selecting a second object from the virtual scene; wherein the second object is any object in the virtual scene;
responding to the adjustment operation of the collision range of the second object to obtain a collision detection parameter; and the collision range corresponding to the collision detection parameters enables the detection ray corresponding to the second object not to intersect with the second object.
12. An apparatus for detecting overlapping objects in a virtual scene, the apparatus comprising:
the parameter acquisition module is used for acquiring collision detection parameters; wherein the collision detection parameters include: detecting a parameter of whether a target object in a virtual scene collides with a first object in the virtual scene; the first object is an object in the virtual scene except the target object;
a ray generation module for generating detection rays on the outer surface of the target object based on the collision detection parameters; the detection ray does not intersect with the target object;
and the overlapping detection module is used for determining whether the detection ray collides with the first object or not, and if so, determining the target object as an overlapping object of the first object.
13. An electronic device comprising a processor and a memory, the memory storing machine executable instructions executable by the processor, the processor executing the machine executable instructions to implement the method of detecting overlapping objects in a virtual scene of any one of claims 1 to 11.
14. A computer-readable storage medium having stored thereon computer-executable instructions which, when invoked and executed by a processor, cause the processor to implement the method of detecting overlapping objects in a virtual scene of any of claims 1 to 11.
CN202210725183.8A 2022-06-23 2022-06-23 Method and device for detecting overlapped objects in virtual scene and electronic equipment Pending CN115317916A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210725183.8A CN115317916A (en) 2022-06-23 2022-06-23 Method and device for detecting overlapped objects in virtual scene and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210725183.8A CN115317916A (en) 2022-06-23 2022-06-23 Method and device for detecting overlapped objects in virtual scene and electronic equipment

Publications (1)

Publication Number Publication Date
CN115317916A true CN115317916A (en) 2022-11-11

Family

ID=83916375

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210725183.8A Pending CN115317916A (en) 2022-06-23 2022-06-23 Method and device for detecting overlapped objects in virtual scene and electronic equipment

Country Status (1)

Country Link
CN (1) CN115317916A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117132624A (en) * 2023-10-27 2023-11-28 济南作为科技有限公司 Method, device, equipment and storage medium for detecting occlusion of following camera
CN117132624B (en) * 2023-10-27 2024-01-30 济南作为科技有限公司 Method, device, equipment and storage medium for detecting occlusion of following camera
CN117152327A (en) * 2023-10-31 2023-12-01 腾讯科技(深圳)有限公司 Parameter adjusting method and related device
CN117152327B (en) * 2023-10-31 2024-02-09 腾讯科技(深圳)有限公司 Parameter adjusting method and related device

Similar Documents

Publication Publication Date Title
CN115317916A (en) Method and device for detecting overlapped objects in virtual scene and electronic equipment
CN110073417B (en) Method and apparatus for placing virtual objects of augmented or mixed reality applications in a real world 3D environment
US20150109290A1 (en) Device and method for removing noise points in point clouds
CN105243662B (en) The determination method and terminal device of a kind of terminal position
CN109035423B (en) Floor segmentation method and device of virtual three-dimensional model of house
CN106919883B (en) QR code positioning method and device
CN106980851B (en) Method and device for positioning data matrix DM code
CN111080762B (en) Virtual model rendering method and device
CN111369680B (en) Method and device for generating three-dimensional image of building
CN110458954B (en) Contour line generation method, device and equipment
CN114972621A (en) Three-dimensional building contour extraction method and device, electronic equipment and storage medium
CN108744520B (en) Method and device for determining placement position of game model and electronic equipment
CN109949421B (en) Triangular net cutting method and device
CN112365572A (en) Rendering method based on tessellation and related product thereof
CN113117334B (en) Method and related device for determining visible area of target point
CN108628896B (en) Sign-in behavior heat processing method and device
CN109427084B (en) Map display method, device, terminal and storage medium
CN107393019B (en) Particle-based cloth simulation method and device
US9761046B2 (en) Computing device and simulation method for processing an object
CN113628286A (en) Video color gamut detection method and device, computing equipment and computer storage medium
CN108984262B (en) Three-dimensional pointer creating method and device and electronic equipment
CN113368498B (en) Model generation method and device and electronic equipment
CN111429581A (en) Method and device for determining outline of game model and adding special effect of game
CN111402370A (en) Method and device for detecting floating object
CN113642281A (en) PCB design drawing detection method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination