CN117899488A - Collision detection method and device for virtual character movement, electronic equipment and storage medium - Google Patents

Collision detection method and device for virtual character movement, electronic equipment and storage medium

Info

Publication number
CN117899488A
CN117899488A (application CN202311758145.3A)
Authority
CN
China
Prior art keywords
position information
linked list
coordinate
coordinate linked
virtual scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311758145.3A
Other languages
Chinese (zh)
Inventor
贾明峥
傅左涛
解文昭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xingzhen Technology Shanghai Co ltd
Original Assignee
Xingzhen Technology Shanghai Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xingzhen Technology Shanghai Co ltd filed Critical Xingzhen Technology Shanghai Co ltd
Priority to CN202311758145.3A priority Critical patent/CN117899488A/en
Publication of CN117899488A publication Critical patent/CN117899488A/en
Pending legal-status Critical Current

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/55 Controlling game characters or game objects based on the game progress
    • A63F13/57 Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game
    • A63F13/577 Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game using determination of contact between game characters or objects, e.g. to avoid collision between virtual racing cars
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/55 Controlling game characters or game objects based on the game progress
    • A63F13/56 Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for path finding

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The disclosure relates to a collision detection method and device for virtual character movement, an electronic device, and a storage medium. The method comprises: displaying a virtual scene on a preset page, wherein the virtual scene comprises a first object and a second object; if the first object and/or the second object are detected to move in the virtual scene, updating the position information in a position coordinate linked list based on the movement of the first object and/or the movement of the second object; and in the moving process of the first object and/or the second object, if the difference between the first position information of the first object and the first position information of the second object satisfies a first preset difference, and the difference between the second position information of the first object and the second position information of the second object satisfies the first preset difference, determining that a collision is generated between the first object and the second object. By comparing the differences between the position information in the linked lists, the per-frame comparisons required by grid-based spatial collision detection can be avoided, thereby reducing the computational cost of collision detection.

Description

Collision detection method and device for virtual character movement, electronic equipment and storage medium
Technical Field
The disclosure relates to the technical field of internet, and in particular relates to a collision detection method and device for virtual character movement, electronic equipment and a storage medium.
Background
With the rapid development of computer technology, computers have brought great convenience to people's lives and greatly improved quality of life. Alongside this convenience, people also need suitable entertainment for recreation. As a result, a wide variety of game applications have emerged.
In games, collision detection should be accurate, efficient, and very fast. Most current collision detection algorithms are based on spatial-partitioning techniques, which means that collision detection is tightly coupled to the polygon management pipeline of the scene. In general, the amount of computation needed to determine whether a polygon of an object passes through a polygon in the scene can be large, and how to reduce this amount of computation is an urgent problem to be solved.
Disclosure of Invention
The disclosure provides a collision detection method, a device, an electronic device and a storage medium for virtual character movement, and the technical scheme of the disclosure is as follows:
According to a first aspect of an embodiment of the present disclosure, there is provided a collision detection method for virtual character movement, including:
Displaying a virtual scene on a preset page; the virtual scene comprises a first object and a second object;
If the first object and/or the second object are detected to move in the virtual scene, updating the position information in the position coordinate linked list based on the movement of the first object and/or the movement of the second object; the position coordinate linked list comprises a first coordinate linked list and a second coordinate linked list; the first coordinate linked list comprises first position information after all objects in the virtual scene are ordered, and the second coordinate linked list comprises second position information after all objects in the virtual scene are ordered;
In the moving process of the first object and/or the second object, if the difference between the first position information of the first object and the first position information of the second object satisfies a first preset difference, and the difference between the second position information of the first object and the second position information of the second object satisfies the first preset difference, determining that a collision is generated between the first object and the second object.
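As a concrete illustration of this determination step, the sketch below checks whether the differences on both coordinates fall within the preset difference. This is a minimal, hypothetical Python rendering: the function name, the use of (x, y) tuples, and treating the preset difference as a single numeric threshold are assumptions, not part of the disclosure.

```python
def is_collision(first_pos, second_pos, preset_gap):
    # first_pos / second_pos: (x, y) tuples for the first and second object.
    # A collision is reported only when the difference on the first
    # coordinate AND the difference on the second coordinate both fall
    # within the preset difference, as in the determination step above.
    dx = abs(first_pos[0] - second_pos[0])
    dy = abs(first_pos[1] - second_pos[1])
    return dx <= preset_gap and dy <= preset_gap
```

In practice, the preset difference would typically be derived from the bounding sizes of the two objects.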
In some possible embodiments, the first object moves in the virtual scene while the second object is stationary;
If the first object and/or the second object are detected to move in the virtual scene, updating the position information in the position coordinate linked list based on the movement of the first object and/or the movement of the second object, including:
if the first object is detected to move in the virtual scene, determining current first position information and current second position information of the first object;
Determining the first position information of the first object in the first coordinate linked list based on the identification of the first object, and determining the second position information of the first object in the second coordinate linked list based on the identification of the first object;
Updating the first position information of the first object to the current first position information of the first object in the first coordinate linked list;
and updating the second position information of the first object to the current second position information of the first object in the second coordinate linked list.
In some possible embodiments, the method further comprises:
If the updated first position information of the first object exceeds the adjacent first position information in the first coordinate linked list, exchanging the ordering positions of the updated first position information of the first object and the adjacent first position information in the first coordinate linked list; in the first coordinate linked list, each piece of first position information carries the identification of the object;
If the updated second position information of the first object exceeds the adjacent second position information in the second coordinate linked list, exchanging the ordering positions of the updated second position information of the first object and the adjacent second position information in the second coordinate linked list; in the second coordinate linked list, each piece of second position information carries the identification of the object.
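The adjacent-swap maintenance described above can be sketched as follows, here using a Python list of (identifier, value) pairs in place of an actual linked list. The function name and data layout are illustrative assumptions.

```python
def update_and_reorder(coord_list, obj_id, new_value):
    # coord_list: [(object identifier, coordinate value)] kept sorted by value.
    idx = next(i for i, (oid, _) in enumerate(coord_list) if oid == obj_id)
    coord_list[idx] = (obj_id, new_value)
    # Swap forward while the updated value has overtaken the next neighbour.
    while idx + 1 < len(coord_list) and coord_list[idx][1] > coord_list[idx + 1][1]:
        coord_list[idx], coord_list[idx + 1] = coord_list[idx + 1], coord_list[idx]
        idx += 1
    # Swap backward while the updated value has fallen below the previous neighbour.
    while idx > 0 and coord_list[idx][1] < coord_list[idx - 1][1]:
        coord_list[idx], coord_list[idx - 1] = coord_list[idx - 1], coord_list[idx]
        idx -= 1
    return coord_list
```

Because an object typically moves only a small distance per frame, each update usually needs at most one or two swaps, which is what makes maintaining the sorted lists cheaper than re-scanning a spatial grid.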
In some possible embodiments, during the movement of the first object and/or the second object, if the difference between the first position information of the first object and the first position information of the second object satisfies the first preset difference, and the difference between the second position information of the first object and the second position information of the second object satisfies the first preset difference, determining that a collision occurs between the first object and the second object includes:
In the moving process of the first object, if the difference between the first position information of the first object and the adjacent first position information in the first coordinate linked list meets a first preset difference, determining the identification of a second object corresponding to the adjacent first position information;
Determining second position information of the second object in a second coordinate linked list according to the identification of the second object;
If the difference between the second position information of the first object and the second position information of the second object in the second coordinate linked list satisfies the first preset difference, determining that a collision is generated between the first object and the second object.
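The three steps above can be sketched together. Hypothetical names and data layout are used: the first coordinate linked list is rendered as a sorted sequence of (identifier, x) pairs, and the second coordinate linked list is abbreviated to an identifier-to-y mapping, since only the lookup by identifier matters here.

```python
def detect_collisions(x_list, y_map, moving_id, preset_gap):
    # x_list: [(object identifier, x)] sorted by x; y_map: identifier -> y.
    idx = next(i for i, (oid, _) in enumerate(x_list) if oid == moving_id)
    x, y = x_list[idx][1], y_map[moving_id]
    hits = []
    # Step 1: only the adjacent entries in the first coordinate list are examined.
    for j in (idx - 1, idx + 1):
        if 0 <= j < len(x_list):
            other_id, other_x = x_list[j]
            # Steps 2-3: look up the neighbour's second position information by
            # its identifier and confirm the difference on both coordinates.
            if abs(x - other_x) <= preset_gap and abs(y - y_map[other_id]) <= preset_gap:
                hits.append(other_id)
    return hits
```

Only the one or two neighbours of the moving object are ever compared, rather than every object in every grid cell each frame.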
In some possible embodiments, the method further comprises:
In the moving process of the first object, if the difference between the first position information of the first object and the adjacent first position information in the first coordinate linked list meets a first preset difference, determining the identification of a second object corresponding to the adjacent first position information;
Determining second position information of the second object in a second coordinate linked list according to the identification of the second object;
If the difference between the second position information of the first object and the second position information of the second object in the second coordinate linked list does not meet the first preset difference, determining that collision does not occur between the first object and the second object.
In some possible embodiments, the method further comprises:
In the moving process of the first object and/or the second object, if the difference between the first position information of the first object and the first position information of the second object satisfies the first preset difference, and the difference between the second position information of the first object and the second position information of the second object satisfies a second preset difference, determining that a collision is about to be generated between the first object and the second object;
displaying early warning information; the early warning information is used for giving early warning that a collision is about to be generated between the first object and the second object; the second preset difference is greater than the first preset difference.
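The early-warning variant above can be sketched as a three-way classification. This is an assumed rendering: the state names and the choice of which axis uses the larger second preset difference follow the claim wording but are otherwise illustrative.

```python
def classify_proximity(first_pos, second_pos, first_gap, second_gap):
    # second_gap is the larger, early-warning threshold (second_gap > first_gap).
    dx = abs(first_pos[0] - second_pos[0])
    dy = abs(first_pos[1] - second_pos[1])
    if dx <= first_gap and dy <= first_gap:
        return "collision"
    if dx <= first_gap and dy <= second_gap:
        return "warning"   # close enough to display early warning information
    return "clear"
```

Because the warning band is wider than the collision band, the alert fires a few frames before contact, giving the game time to display the early warning information.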
In some possible embodiments,
The first object is a first virtual character;
The second object is a second virtual character, a stationary entity in the virtual scene, or an edge in the virtual scene.
According to a second aspect of the embodiments of the present disclosure, there is provided a collision detection apparatus for virtual character movement, including:
The scene display module is configured to display a virtual scene on a preset page; the virtual scene comprises a first object and a second object;
The linked list information updating module is configured to execute updating of the position information in the position coordinate linked list based on the movement of the first object and/or the movement of the second object if the movement of the first object and/or the movement of the second object in the virtual scene is detected; the position coordinate linked list comprises a first coordinate linked list and a second coordinate linked list; the first coordinate linked list comprises first position information after all objects in the virtual scene are ordered, and the second coordinate linked list comprises second position information after all objects in the virtual scene are ordered;
the collision detection module is configured to determine, in the moving process of the first object and/or the second object, that a collision is generated between the first object and the second object if the difference between the first position information of the first object and the first position information of the second object satisfies a first preset difference and the difference between the second position information of the first object and the second position information of the second object satisfies the first preset difference.
In some possible embodiments, the first object moves in the virtual scene while the second object is stationary;
the linked list information updating module is configured to execute:
if the first object is detected to move in the virtual scene, determining current first position information and current second position information of the first object;
Determining the first position information of the first object in the first coordinate linked list based on the identification of the first object, and determining the second position information of the first object in the second coordinate linked list based on the identification of the first object;
Updating the first position information of the first object to the current first position information of the first object in the first coordinate linked list;
and updating the second position information of the first object to the current second position information of the first object in the second coordinate linked list.
In some possible embodiments, the linked list information update module is configured to perform:
If the updated first position information of the first object exceeds the adjacent first position information in the first coordinate linked list, exchanging the ordering positions of the updated first position information of the first object and the adjacent first position information in the first coordinate linked list; in the first coordinate linked list, each piece of first position information carries the identification of the object;
If the updated second position information of the first object exceeds the adjacent second position information in the second coordinate linked list, exchanging the ordering positions of the updated second position information of the first object and the adjacent second position information in the second coordinate linked list; in the second coordinate linked list, each piece of second position information carries the identification of the object.
In some possible embodiments, the collision detection module is configured to perform:
In the moving process of the first object, if the difference between the first position information of the first object and the adjacent first position information in the first coordinate linked list meets a first preset difference, determining the identification of a second object corresponding to the adjacent first position information;
Determining second position information of the second object in a second coordinate linked list according to the identification of the second object;
If the difference between the second position information of the first object and the second position information of the second object in the second coordinate linked list satisfies the first preset difference, determining that a collision is generated between the first object and the second object.
In some possible embodiments, the collision detection module is configured to perform:
In the moving process of the first object, if the difference between the first position information of the first object and the adjacent first position information in the first coordinate linked list meets a first preset difference, determining the identification of a second object corresponding to the adjacent first position information;
Determining second position information of the second object in a second coordinate linked list according to the identification of the second object;
If the difference between the second position information of the first object and the second position information of the second object in the second coordinate linked list does not meet the first preset difference, determining that collision does not occur between the first object and the second object.
In some possible embodiments, the collision detection module is configured to perform:
In the moving process of the first object and/or the second object, if the difference between the first position information of the first object and the first position information of the second object satisfies the first preset difference, and the difference between the second position information of the first object and the second position information of the second object satisfies a second preset difference, determining that a collision is about to be generated between the first object and the second object;
displaying early warning information; the early warning information is used for giving early warning that a collision is about to be generated between the first object and the second object; the second preset difference is greater than the first preset difference.
In some possible embodiments,
The first object is a first virtual character;
The second object is a second virtual character, a stationary entity in the virtual scene, or an edge in the virtual scene.
According to a third aspect of embodiments of the present disclosure, there is provided an electronic device, comprising: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to execute instructions to implement the method as in any of the first aspects above.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium storing instructions which, when executed by a processor of an electronic device, cause the electronic device to perform the method of any one of the first aspects of the embodiments of the present disclosure.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program product comprising a computer program stored in a readable storage medium, the computer program being read from the readable storage medium by at least one processor of the computer device and executed, such that the computer device performs the method of any one of the first aspects of embodiments of the present disclosure.
The technical scheme provided by the embodiment of the disclosure at least brings the following beneficial effects:
A virtual scene is displayed on a preset page, wherein the virtual scene comprises a first object and a second object. If the first object and/or the second object are detected to move in the virtual scene, the position information in the position coordinate linked list is updated based on the movement of the first object and/or the movement of the second object, wherein the position coordinate linked list comprises a first coordinate linked list and a second coordinate linked list, the first coordinate linked list comprises the first position information of all objects in the virtual scene after ordering, and the second coordinate linked list comprises the second position information of all objects in the virtual scene after ordering. In the movement process of the first object and/or the second object, if the difference between the first position information of the first object and the first position information of the second object satisfies the first preset difference, and the difference between the second position information of the first object and the second position information of the second object satisfies the first preset difference, it is determined that a collision is generated between the first object and the second object. In the embodiment of the application, comparing the differences between the position information in the linked lists avoids the per-frame comparisons required by grid-based spatial collision detection, so that the computational cost of collision detection can be reduced.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic view showing an application environment of a collision detection method of virtual character movement according to an exemplary embodiment;
FIG. 2 is a flow chart illustrating a collision detection method for virtual character movement, according to an exemplary embodiment;
FIG. 3 is a first schematic diagram of a preset page shown in accordance with an exemplary embodiment;
FIG. 4 is a second schematic diagram of a preset page shown according to an exemplary embodiment;
FIG. 5 is a flow chart illustrating a method for updating a linked list of location coordinates during movement of a virtual character in accordance with an exemplary embodiment;
FIG. 6 is a third schematic diagram of a preset page shown according to an exemplary embodiment;
FIG. 7 is a flowchart illustrating a method for updating a linked list of location coordinates during movement of a virtual character in accordance with an exemplary embodiment;
FIG. 8 is a schematic diagram illustrating a scene area corresponding to a target angle in accordance with an exemplary embodiment;
FIG. 9 is a flowchart illustrating a collision detection method for virtual character movement, according to an exemplary embodiment;
Fig. 10 is a block diagram of an electronic device for collision detection for virtual character movement, according to an example embodiment.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the foregoing figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate, such that the embodiments of the disclosure described herein may be capable of operation in sequences other than those illustrated or described herein. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the accompanying claims.
It should be noted that, the user information (including, but not limited to, user equipment information, user personal information, etc.) and the data (including, but not limited to, data for display, analyzed data, etc.) related to the present disclosure are information and data authorized by the user or sufficiently authorized by each party.
Referring to fig. 1, fig. 1 is a schematic view illustrating an application environment of a collision detection method of virtual character movement according to an exemplary embodiment, and as shown in fig. 1, the application environment may include a server 011, a first client 012, and a second client 013.
In some possible embodiments, the server 011 may be a standalone physical server, a server cluster or distributed system formed by multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN (Content Delivery Network), and big data and artificial intelligence platforms. Operating systems running on the server may include, but are not limited to, Android, iOS, Linux, Windows, Unix, and the like. The server 011 may display a virtual scene on a preset page, where the virtual scene includes a first object and a second object; if the first object and/or the second object are detected to move in the virtual scene, the position information in the position coordinate linked list is updated based on the movement of the first object and/or the movement of the second object, where the position coordinate linked list includes a first coordinate linked list and a second coordinate linked list, the first coordinate linked list includes the first position information of all objects in the virtual scene after ordering, and the second coordinate linked list includes the second position information of all objects in the virtual scene after ordering; and in the movement process of the first object and/or the second object, if the difference between the first position information of the first object and the first position information of the second object satisfies the first preset difference, and the difference between the second position information of the first object and the second position information of the second object satisfies the first preset difference, it is determined that a collision is generated between the first object and the second object.
In the embodiment of the application, comparing the differences between the position information in the linked lists avoids the per-frame comparisons required by grid-based spatial collision detection, so that the computational cost of collision detection can be reduced.
In some possible embodiments, the first client 012 may include, but is not limited to, a smart phone, a desktop computer, a tablet computer, a notebook computer, a smart speaker, a digital assistant, an augmented reality (AR)/virtual reality (VR) device, a smart wearable device, or the like. It may also be software running on such a device, such as an application or applet. Optionally, the operating system running on the client may include, but is not limited to, Android, iOS, Linux, Windows, Unix, and the like.
In some possible embodiments, the second client 013 may include, but is not limited to, a smart phone, a desktop computer, a tablet computer, a notebook computer, a smart speaker, a digital assistant, an augmented reality (AR)/virtual reality (VR) device, a smart wearable device, or the like. It may also be software running on such a device, such as an application or applet. Optionally, the operating system running on the client may include, but is not limited to, Android, iOS, Linux, Windows, Unix, and the like.
In some possible embodiments, the first client 012 or the second client 013 displays a virtual scene on a preset page, where the virtual scene includes a first object and a second object; if the first object and/or the second object are detected to move in the virtual scene, the position information in the position coordinate linked list is updated based on the movement of the first object and/or the movement of the second object, where the position coordinate linked list includes a first coordinate linked list and a second coordinate linked list, the first coordinate linked list includes the first position information of all objects in the virtual scene after ordering, and the second coordinate linked list includes the second position information of all objects in the virtual scene after ordering; and if the difference between the first position information of the first object and the first position information of the second object satisfies the first preset difference, and the difference between the second position information of the first object and the second position information of the second object satisfies the first preset difference, it is determined that a collision is generated between the first object and the second object. In the embodiment of the application, comparing the differences between the position information in the linked lists avoids the per-frame comparisons required by grid-based spatial collision detection, so that the computational cost of collision detection can be reduced.
In an exemplary embodiment, the databases corresponding to the client and the server may be node devices in a blockchain system, and the obtained and generated information can be shared with other node devices in the blockchain system, thereby realizing information sharing among multiple node devices. The multiple node devices in the blockchain system can be configured with the same blockchain, which consists of multiple blocks; adjacent blocks are linked to one another, so that tampering with the data in any block can be detected through the next block, which prevents the data in the blockchain from being tampered with and ensures its safety and reliability.
Fig. 2 is a flowchart illustrating a collision detection method of virtual character movement according to an exemplary embodiment. It is noted that the present specification provides the method operation steps as described in the embodiments or flowcharts, but more or fewer operation steps may be included based on conventional or non-inventive labor. The order of steps recited in the embodiments is merely one of many possible execution orders and does not represent the only order of execution. In an actual system or product, the methods illustrated in the embodiments or figures may be executed sequentially or in parallel (e.g., in a parallel-processor or multi-threaded processing environment). As shown in fig. 2, the flowchart includes at least the following steps S201 to S205:
In step S201, displaying a virtual scene on a preset page; the virtual scene includes a first object and a second object.
In the embodiment of the application, the preset page may be a page in an application program or a webpage on a website. Optionally, the application may be a short-video application, a music application, a social application, a gaming application, or the like. Likewise, the website may be a short-video website, a music website, a social networking website, a gaming website, or the like.
In the embodiment of the present application, a virtual scene generally refers to a world formed by computer-generated images, sounds, and other elements that simulate a real or non-real environment. These elements may represent real objects, imaginary creatures, or even inanimate objects, and present a vivid and attractive visual experience through elaborate design and rendering techniques. Such a design is useful in fields such as game development, movie production, and user interface design, as it can provide rich interaction possibilities and diversified story lines, bringing an immersive experience to users. The embodiment of the application does not limit the virtual scene, which may include a virtual scene in a game.
In the embodiment of the application, the first object is a first virtual character in the virtual scene, and the second object may be a second virtual character in the virtual scene, a stationary entity in the virtual scene, an edge of the virtual scene, or the like.
In the embodiment of the application, a virtual character refers to a simulated entity that does not exist in reality, that is, an entity or a personified character realized through technology. It can also be understood as a fictional character that does not exist in reality, such as a character in a drama, a movie, a comic, or another authored work.
On the premise that the virtual scene is a virtual scene in a game, optionally, the virtual character may be a player character created by a user. Through the character, the user can carry out various communication activities, such as buying and selling, trading, playing games, and chatting; the character has a basic name similar to a person, can take part in various activities in the game according to the rules of the game, and can carry out a series of actions according to received instructions. Alternatively, the virtual character may be a non-player character created by a developer.
Alternatively, the virtual character may be in a human form or an animal form.
In the embodiment of the application, a stationary entity refers to an entity that remains at a fixed position in the virtual scene, such as a building (including a house, a shop, or a sculpture), a plant, and the like in the virtual scene.
In the embodiment of the application, the edge in the virtual scene can be the edge of the map represented by the virtual scene.
Fig. 3 is a schematic diagram of a preset page according to an exemplary embodiment. As shown in fig. 3, it includes a preset page 300, a virtual scene 301 rendered on the preset page according to scene data of the virtual scene, a first virtual character 302 and a second virtual character 303 rendered in the virtual scene according to character data of the virtual characters, and a stationary entity 304 rendered in the virtual scene according to entity data of the stationary entity.
In an alternative embodiment, the virtual character may be mobile in the virtual scene, i.e. the virtual character may move in the virtual scene according to the received movement instructions. In another alternative embodiment, the virtual character may be stationary in the virtual scene.
The embodiment of the application relates to a first client and a second client, wherein the first client and the second client correspond to the same game application program.
Alternatively, the first client may be a client corresponding to a first account that creates the first virtual character 302. Specifically, a first user of a first terminal where the first client is located may create the first virtual character through the first account on the first client, and control the first virtual character to perform operations.
Alternatively, the second client may be a client corresponding to a second account that creates the second virtual character 303. Specifically, a second user of a second terminal where the second client is located may create the second virtual character through the second account on the second client, and control the second virtual character to perform operations.
In this embodiment of the present application, after the first user logs in to the game application through the first client and the second user logs in to the game application through the second client, when the first virtual character 302 corresponding to the first client and the second virtual character 303 corresponding to the second client are in the same virtual space (the same game area), or in the same piece of virtual scene in the same virtual space, the first client may display the second virtual character 303 corresponding to the second client, and the second client may display the first virtual character 302 corresponding to the first client. In other words, both the first user and the second user can see the virtual character of the other party.
In step S203, if the movement of the first object and/or the second object in the virtual scene is detected, updating the position information in the position coordinate linked list based on the movement of the first object and/or the movement of the second object; the position coordinate linked list comprises a first coordinate linked list and a second coordinate linked list; the first coordinate linked list comprises first position information after all objects in the virtual scene are ordered, and the second coordinate linked list comprises second position information after all objects in the virtual scene are ordered.
Fig. 4 is a schematic diagram two of a preset page according to an exemplary embodiment. As shown in fig. 4, it includes a first object A, a second object B, and a second object C in a virtual scene 301. The first object A may correspond to the first virtual character 302 in fig. 3, the second object B may correspond to the second virtual character 303 in fig. 3, and the second object C may correspond to the stationary entity 304 in fig. 3.
In the embodiment of the present application, in the case that the virtual scene is a two-dimensional virtual scene, a two-dimensional coordinate system as shown in fig. 4 may be established (in the real virtual scene, the two-dimensional coordinate system is not displayed). All objects in the virtual scene have their own coordinate positions based on this unified two-dimensional coordinate system. For example, the coordinate position of the first object A is (x1, y1), the coordinate position of the second object B is (x2, y2), and the coordinate position of the second object C is (x3, y3).
For subsequent collision detection during object movement, a position coordinate linked list may be constructed based on the coordinate positions of all objects of the virtual scene.
When the coordinate system is a two-dimensional coordinate system, the position coordinate linked list includes a first coordinate linked list corresponding to the X axis and a second coordinate linked list corresponding to the Y axis, or a first coordinate linked list corresponding to the Y axis and a second coordinate linked list corresponding to the X axis. In the following, the case where the position coordinate linked list includes a first coordinate linked list corresponding to the X axis and a second coordinate linked list corresponding to the Y axis is taken as an example for description; other embodiments may refer to this embodiment and are not described again herein.
In the embodiment of the application, the first coordinate linked list comprises first position information corresponding to the X axis of all objects, and the second coordinate linked list comprises second position information corresponding to the Y axis of all objects.
In order to determine whether any two objects (such as the first object and the second object) are rapidly approaching on a certain axis, the first position information of all the objects in the first coordinate linked list may be ordered, for example, from small to large according to the value of the first position information. Likewise, the second position information of all objects in the second coordinate linked list may be ordered, for example, from small to large according to the value of the second position information.
Continuing with the illustration based on FIG. 4, assuming the coordinate position of the first object A is (5, 1), the coordinate position of the second object B is (11, 4), and the coordinate position of the second object C is (7, 5), then the first coordinate linked list (5, 7, 11) and the second coordinate linked list (1, 4, 5) can be obtained from the coordinate positions of the first object A, the second object B, and the second object C.
In order to enable the first position information in the first coordinate linked list and the second position information in the second coordinate linked list to correspond to the object, each first position information and each second position information may carry an identification of the object.
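Based on the Fig. 4 example, the construction and sorting of the two coordinate linked lists can be sketched as follows. This is a minimal Python sketch, not the patent's implementation: plain sorted lists of (value, identifier) pairs stand in for the coordinate linked lists, and the object identifiers are illustrative.

```python
# Objects of the virtual scene from the Fig. 4 example: identifier -> (x, y).
objects = {"A": (5, 1), "B": (11, 4), "C": (7, 5)}

# First coordinate linked list: X values sorted from small to large,
# each entry carrying the identifier of its object.
x_list = sorted((x, obj_id) for obj_id, (x, y) in objects.items())

# Second coordinate linked list: Y values sorted from small to large.
y_list = sorted((y, obj_id) for obj_id, (x, y) in objects.items())

print([value for value, _ in x_list])  # [5, 7, 11]
print([value for value, _ in y_list])  # [1, 4, 5]
```

Because every entry carries the identifier of its object, each piece of position information can be mapped back to the object it belongs to.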
The following describes in particular how the location information in the location coordinate linked list is updated as an object moves in the virtual scene.
FIG. 5 is a flow chart illustrating a method for updating a linked list of location coordinates during movement of a virtual character according to an exemplary embodiment, comprising:
in step S501, if it is detected that the first object moves in the virtual scene, current first position information and current second position information of the first object are determined.
Optionally, when the first object is detected to move in the virtual scene, the position change of the first object in the moving process can be determined in real time, so as to obtain the current first position information and the current second position information of the first object.
Optionally, when the first object is detected to move in the virtual scene, a position change of the first object in the moving process may be determined according to a preset time interval, so as to obtain current first position information and current second position information of the first object. The preset time interval is preset, for example, may be 0.1 seconds.
Fig. 6 is a schematic diagram three of a preset page, shown in fig. 6, including a first object a, a second object B, and C in a virtual scene 301, according to an exemplary embodiment. Wherein the first object a is moving and the second object B is stationary. As shown in fig. 6, the coordinate position of the first object a at time t1 is (5, 1), and the coordinate position at time t2 is (6, 2).
In step S503, first position information of the first object in the first coordinate linked list is determined based on the identification of the first object, and second position information of the first object in the second coordinate linked list is determined based on the identification of the first object.
In the embodiment of the application, the first position information of the first object is determined to be 5 in the first coordinate linked list based on the identification of the first object, and the second position information of the first object is determined to be 1 in the second coordinate linked list based on the identification of the first object.
In step S505, the first position information of the first object in the first coordinate linked list is updated to the current first position information of the first object.
Subsequently, the first position information 5 of the first object in the first coordinate linked list may be updated to the current first position information 6 of the first object, so that an updated first coordinate linked list (6, 7, 11) is obtained.
In step S507, the second position information of the first object in the second coordinate linked list is updated to the current second position information of the first object.
Subsequently, the second position information 1 of the first object in the second coordinate linked list may be updated to the current second position information 2 of the first object, so that an updated second coordinate linked list (2, 4, 5) is obtained.
Subsequently, the updating of the position information of the first coordinate linked list and the second coordinate linked list according to steps S501-S507 may continue with the movement of the first object. Assuming that the coordinate position of the first object A at the time t3 is (6.5, 2.5), the first position information 6 of the first object may be updated to the current first position information 6.5 of the first object, so as to obtain an updated first coordinate linked list (6.5, 7, 11), and the second position information 2 of the first object in the second coordinate linked list may be updated to the current second position information 2.5 of the first object, so as to obtain an updated second coordinate linked list (2.5, 4, 5).
Similarly, assuming that the second object is moving and the first object is stationary, or that the first object and the second object are both moving, the location information in the first coordinate linked list and the second coordinate linked list may be updated according to steps S501-S507, so as to obtain the updated current first coordinate linked list and second coordinate linked list.
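The update flow of steps S501-S507 can be sketched as follows. This is a minimal Python sketch in which sorted lists of (value, identifier) pairs stand in for the coordinate linked lists; the function name `update_position` is illustrative, not from the patent.

```python
def update_position(x_list, y_list, obj_id, new_x, new_y):
    """Steps S501-S507: locate the moving object's entries by its identifier
    and overwrite them with the current first and second position information."""
    for lst, new_value in ((x_list, new_x), (y_list, new_y)):
        for i, (value, oid) in enumerate(lst):
            if oid == obj_id:
                lst[i] = (new_value, oid)
                break
    return x_list, y_list

# Fig. 6 example: at time t2 the first object A moves from (5, 1) to (6, 2).
x_list = [(5, "A"), (7, "C"), (11, "B")]
y_list = [(1, "A"), (4, "B"), (5, "C")]
update_position(x_list, y_list, "A", 6, 2)
print(x_list)  # [(6, 'A'), (7, 'C'), (11, 'B')]
print(y_list)  # [(2, 'A'), (4, 'B'), (5, 'C')]
```

The same call applies when the second object moves, or when both objects move, by invoking it once per moving object.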
Optionally, still taking the case where the first object moves and the second object is stationary as an example: since the first position information in the first coordinate linked list and the second position information in the second coordinate linked list are both ordered by numerical value from small to large, when the numerical values of the position information in the first coordinate linked list and the second coordinate linked list are updated, the positions of the position information within the linked lists also need to be updated under certain conditions.
FIG. 7 is a flowchart illustrating a method for updating a linked list of location coordinates during movement of a virtual character, according to an exemplary embodiment, comprising:
In step S701, if the updated first position information of the first object exceeds the adjacent first position information in the first coordinate linked list, exchanging the updated first position information of the first object and the positions of the adjacent first position information in the first coordinate linked list; in the first coordinate linked list, each piece of first position information carries an identification of an object.
Assuming that the coordinate position of the first object A at the time t4 is (7.5, 4.5), the first position information 6.5 of the first object in the first coordinate linked list may be updated to the current first position information 7.5 of the first object, so that an updated first coordinate linked list (7.5, 7, 11) is obtained. In this case, the first position information 7.5 of the first object exceeds the next adjacent first position information 7, and therefore the updated first position information of the first object and the adjacent first position information are exchanged in the first coordinate linked list to obtain a new first coordinate linked list (7, 7.5, 11).
In step S703, if the updated second position information of the first object exceeds the adjacent second position information in the second coordinate linked list, exchanging the updated second position information of the first object and the ordering positions of the adjacent second position information in the second coordinate linked list; in the second coordinate linked list, each piece of second position information carries the identification of the object.
Then, the second position information 2.5 of the first object in the second coordinate linked list may be updated to the current second position information 4.5 of the first object, so that an updated second coordinate linked list (4.5, 4, 5) is obtained. In this case, the second position information 4.5 of the first object exceeds the next adjacent second position information 4, so the updated second position information of the first object and the next adjacent second position information can be exchanged in the second coordinate linked list, resulting in a new second coordinate linked list (4, 4.5, 5).
Optionally, when the position information in the position coordinate linked list is updated at the same time, the positions of the position information in the first coordinate linked list may be exchanged according to the actual situation, or the positions of the position information in the second coordinate linked list may be exchanged.
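The neighbor-swap maintenance of steps S701 and S703 can be sketched as follows (a minimal Python sketch; sorted lists of (value, identifier) pairs stand in for the coordinate linked lists, and the function name `restore_order` is illustrative). For small per-frame movements a single swap usually suffices, but the loops handle larger jumps as well:

```python
def restore_order(lst, i):
    """Steps S701/S703: after entry i of a sorted list is updated, swap it with
    its neighbors until the small-to-large ordering holds again."""
    while i + 1 < len(lst) and lst[i][0] > lst[i + 1][0]:
        lst[i], lst[i + 1] = lst[i + 1], lst[i]
        i += 1
    while i > 0 and lst[i][0] < lst[i - 1][0]:
        lst[i], lst[i - 1] = lst[i - 1], lst[i]
        i -= 1
    return i  # the entry's new index in the list

# t4 example: A's first position information becomes 7.5 and exceeds its neighbor 7.
x_list = [(7.5, "A"), (7, "C"), (11, "B")]
restore_order(x_list, 0)
print(x_list)  # [(7, 'C'), (7.5, 'A'), (11, 'B')]
```

Keeping both lists sorted in this incremental way is what makes the later adjacency-based collision check possible.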
In step S205, during the movement of the first object and/or the second object, if the difference between the first position information of the first object and the first position information of the second object satisfies the first preset difference, and the difference between the second position information of the first object and the second position information of the second object satisfies the first preset difference, it is determined that a collision occurs between the first object and the second object.
In an alternative embodiment, during the movement of the first object and/or the second object, the gap between the first position information of the first object and the first position information of the second object may be determined first. If this gap satisfies the first preset gap, the gap between the second position information of the first object and the second position information of the second object is then determined. If this gap also satisfies the first preset gap, it is determined that a collision occurs between the first object and the second object. Optionally, upon determining that a collision occurs between the first object and the second object, a collision special effect may be displayed.
In another alternative embodiment, during the movement of the first object and/or the second object, the gap between the second position information of the first object and the second position information of the second object may be determined first. If this gap satisfies the first preset gap, the gap between the first position information of the first object and the first position information of the second object is then determined. If this gap also satisfies the first preset gap, it is determined that a collision occurs between the first object and the second object. Optionally, upon determining that a collision occurs between the first object and the second object, a collision special effect may be displayed.
A gap satisfying the first preset gap means that the gap between the two pieces of position information is smaller than or equal to the first preset gap.
The following explains, in one of the two confirmation orders, how to determine whether a collision occurs between the first object and the second object.
Fig. 8 is a flowchart illustrating a collision detection method of virtual character movement, according to an exemplary embodiment, comprising:
In step S801, in the moving process of the first object, if the difference between the first position information of the first object and the adjacent first position information in the first coordinate linked list satisfies the first preset difference, the identifier of the second object corresponding to the adjacent first position information is determined.
Optionally, in the moving process of the first object, if a difference between the first position information of the first object and the adjacent first position information in the first coordinate linked list is smaller than or equal to a first preset difference, determining an identifier of a second object corresponding to the adjacent first position information.
In step S803, second location information of the second object in the second coordinate linked list is determined according to the identification of the second object.
Then, second position information of the second object in the second coordinate linked list and second position information of the first object in the second coordinate linked list can be determined according to the identification of the second object.
In step S805, if the gap between the second position information of the first object and the second position information of the second object in the second coordinate linked list satisfies the first preset gap, it is determined that a collision occurs between the first object and the second object.
Optionally, if the gap between the second position information of the first object and the second position information of the second object in the second coordinate linked list is smaller than or equal to the first preset gap, it is determined that a collision occurs between the first object and the second object.
Correspondingly, in the moving process of the first object, if the gap between the first position information of the first object and the adjacent first position information in the first coordinate linked list satisfies the first preset gap, the identification of the second object corresponding to the adjacent first position information is determined, and the second position information of the second object in the second coordinate linked list is determined according to the identification of the second object. If the gap between the second position information of the first object and the second position information of the second object in the second coordinate linked list does not satisfy the first preset gap, it is determined that no collision occurs between the first object and the second object.
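The detection flow of steps S801-S805 can be sketched as follows (a minimal Python sketch; sorted lists of (value, identifier) pairs stand in for the coordinate linked lists, and the function name `detect_collision` and the threshold value are illustrative). Only the moving object's adjacent entries in the first coordinate linked list are examined, and the gap on the second axis is checked only for those candidates:

```python
def detect_collision(x_list, y_list, obj_id, preset_gap):
    """Steps S801-S805: if an adjacent entry in the first coordinate linked list is
    within the preset gap on X, confirm the gap on Y before reporting a collision."""
    x_pos = {oid: value for value, oid in x_list}
    y_pos = {oid: value for value, oid in y_list}
    idx = next(i for i, (value, oid) in enumerate(x_list) if oid == obj_id)
    collisions = []
    for j in (idx - 1, idx + 1):  # only the adjacent first position information
        if 0 <= j < len(x_list):
            other = x_list[j][1]
            if (abs(x_pos[obj_id] - x_pos[other]) <= preset_gap
                    and abs(y_pos[obj_id] - y_pos[other]) <= preset_gap):
                collisions.append(other)
    return collisions

# Continuing the t4 example: A at (7.5, 4.5), C at (7, 5), B at (11, 4).
x_list = [(7, "C"), (7.5, "A"), (11, "B")]
y_list = [(4, "B"), (4.5, "A"), (5, "C")]
print(detect_collision(x_list, y_list, "A", 1.0))  # ['C']
```

Because the first list is sorted, any object whose X gap satisfies the threshold must be adjacent to the moving object in that list, so non-adjacent objects such as B are skipped without any comparison.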
In the embodiment of the application, a collision between the first object and the second object can also be pre-warned. Specifically, during the movement of the first object and/or the second object, if the gap between the first position information of the first object and the first position information of the second object satisfies the first preset gap, and the gap between the second position information of the first object and the second position information of the second object satisfies a second preset gap, it is determined that a collision is about to occur between the first object and the second object, and early warning information is displayed. The early warning information is used for giving an early warning that a collision is about to occur between the first object and the second object; the second preset gap is greater than the first preset gap.
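The early-warning condition described above can be sketched as follows (a minimal Python sketch; the function name `check_warning` and the example coordinates and thresholds are illustrative, not values from the patent):

```python
def check_warning(first_obj_pos, second_obj_pos, first_gap, second_gap):
    """Early warning: the gap on the first axis satisfies the first preset gap
    while the gap on the second axis satisfies the larger second preset gap,
    i.e. the objects are close on X and closing on Y but not yet colliding."""
    dx = abs(first_obj_pos[0] - second_obj_pos[0])
    dy = abs(first_obj_pos[1] - second_obj_pos[1])
    return dx <= first_gap and dy <= second_gap

# A at (7.5, 4.5) approaching C at (7, 6.3): within the first gap on X,
# within the looser second gap on Y -> early warning is displayed.
print(check_warning((7.5, 4.5), (7, 6.3), 1.0, 2.0))  # True
```

Using a second, larger threshold in this way gives the application a chance to display the early warning information one or more frames before the collision condition itself is met.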
In summary, by comparing the gaps between the position information in the linked lists, the per-frame comparisons required by grid-based spatial collision detection can be reduced, thereby reducing the overhead of collision detection.
Fig. 9 is a block diagram of a collision detection apparatus for virtual character movement, according to an exemplary embodiment. The apparatus has the function of implementing the method in the foregoing method embodiments, and the function may be implemented by hardware or by hardware executing corresponding software. Referring to fig. 9, the apparatus includes a scene display module 901, a linked list information update module 902, and a collision detection module 903:
A scene display module 901 configured to perform displaying a virtual scene on a preset page; the virtual scene comprises a first object and a second object;
A linked list information updating module 902 configured to perform updating of the position information in the position coordinate linked list based on the movement of the first object and/or the movement of the second object if the movement of the first object and/or the second object in the virtual scene is detected; the position coordinate linked list comprises a first coordinate linked list and a second coordinate linked list; the first coordinate linked list comprises first position information after all objects in the virtual scene are ordered, and the second coordinate linked list comprises second position information after all objects in the virtual scene are ordered;
The collision detection module 903 is configured to determine that a collision occurs between the first object and the second object if, during movement of the first object and/or the second object, the gap between the first position information of the first object and the first position information of the second object satisfies a first preset gap and the gap between the second position information of the first object and the second position information of the second object satisfies the first preset gap.
In some possible embodiments, in the virtual scene, the first object moves and the second object is stationary;
the linked list information updating module is configured to execute:
if the first object is detected to move in the virtual scene, determining current first position information and current second position information of the first object;
Determining first position information of the first object in the first coordinate linked list based on the identification of the first object, and determining second position information of the first object in the second coordinate linked list based on the identification of the first object;
Updating the first position information of the first object into the current first position information of the first object in the first coordinate linked list;
and updating the second position information of the first object to the current second position information of the first object in the second coordinate linked list.
In some possible embodiments, the linked list information update module is configured to perform:
If the updated first position information of the first object exceeds the adjacent first position information in the first coordinate linked list, exchanging the positions of the updated first position information of the first object and the adjacent first position information in the first coordinate linked list; in the first coordinate linked list, each piece of first position information carries an object identifier;
If the updated second position information of the first object exceeds the adjacent second position information in the second coordinate linked list, exchanging the ordering positions of the updated second position information of the first object and the adjacent second position information in the second coordinate linked list; in the second coordinate linked list, each piece of second position information carries the identification of the object.
In some possible embodiments, the collision detection module is configured to perform:
In the moving process of the first object, if the difference between the first position information of the first object and the adjacent first position information in the first coordinate linked list meets a first preset difference, determining the identification of a second object corresponding to the adjacent first position information;
Determining second position information of the second object in a second coordinate linked list according to the identification of the second object;
If the gap between the second position information of the first object and the second position information of the second object in the second coordinate linked list satisfies the first preset gap, determining that a collision occurs between the first object and the second object.
In some possible embodiments, the collision detection module is configured to perform:
In the moving process of the first object, if the difference between the first position information of the first object and the adjacent first position information in the first coordinate linked list meets a first preset difference, determining the identification of a second object corresponding to the adjacent first position information;
Determining second position information of the second object in a second coordinate linked list according to the identification of the second object;
If the difference between the second position information of the first object and the second position information of the second object in the second coordinate linked list does not meet the first preset difference, determining that collision does not occur between the first object and the second object.
In some possible embodiments, the collision detection module is configured to perform:
In the moving process of the first object and/or the second object, if the gap between the first position information of the first object and the first position information of the second object meets a first preset gap, and the gap between the second position information of the first object and the second position information of the second object meets a second preset gap, determining that collision is generated between the first object and the second object;
displaying early warning information; the early warning information is used for early warning that collision is generated between the first object and the second object; the second preset gap is greater than the first preset gap.
In some possible embodiments,
The first object is a first virtual character;
The second object is a second virtual character, a stationary entity in the virtual scene, or an edge in the virtual scene.
It should be noted that, when the apparatus provided in the foregoing embodiment implements its functions, the division into the foregoing functional modules is merely used as an example. In practical applications, the foregoing functions may be allocated to different functional modules as required; that is, the internal structure of the device may be divided into different functional modules to implement all or part of the functions described above. In addition, the apparatus embodiment and the method embodiments provided above belong to the same concept; for the specific implementation process of the apparatus, refer to the method embodiments, and details are not repeated herein.
Fig. 10 is a block diagram illustrating an apparatus 3000 for collision detection for virtual character movement according to an exemplary embodiment. For example, apparatus 3000 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, or the like.
Referring to fig. 10, the apparatus 3000 may include one or more of the following components: a processing component 3002, a memory 3004, a power component 3006, a multimedia component 3008, an audio component 3010, an input/output (I/O) interface 3012, a sensor component 3014, and a communications component 3016.
The processing component 3002 generally controls overall operations of the device 3000, such as operations associated with display, phone calls, data communications, camera operations, and recording operations. The processing assembly 3002 may include one or more processors 3020 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 3002 may include one or more modules to facilitate interactions between the processing component 3002 and other components. For example, the processing component 3002 may include a multimedia module to facilitate interaction between the multimedia component 3008 and the processing component 3002.
The memory 3004 is configured to store various types of data to support operations at the device 3000. Examples of such data include instructions for any application or method operating on the device 3000, contact data, phonebook data, messages, pictures, videos, and the like. The memory 3004 may be implemented by any type of volatile or non-volatile memory device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disk.
The power component 3006 provides power to the various components of the device 3000. The power component 3006 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device 3000.
The multimedia component 3008 includes a screen that provides an output interface between the device 3000 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may sense not only the boundary of a touch or swipe action, but also the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 3008 includes a front camera and/or a rear camera. The front camera and/or the rear camera may receive external multimedia data when the device 3000 is in an operational mode, such as a photographing mode or a video mode. Each of the front camera and the rear camera may be a fixed optical lens system or have focus and optical zoom capability.
The audio component 3010 is configured to output and/or input audio signals. For example, audio component 3010 includes a Microphone (MIC) configured to receive external audio signals when device 3000 is in an operational mode, such as a call mode, a recording mode, and a speech recognition mode. The received audio signals may be further stored in the memory 3004 or transmitted via the communication component 3016. In some embodiments, the audio component 3010 further comprises a speaker for outputting audio signals.
The I/O interface 3012 provides an interface between the processing component 3002 and a peripheral interface module, which may be a keyboard, click wheel, button, or the like. These buttons may include, but are not limited to: homepage button, volume button, start button, and lock button.
The sensor component 3014 includes one or more sensors for providing status assessments of various aspects of the device 3000. For example, the sensor component 3014 may detect the on/off state of the device 3000 and the relative positioning of components, such as the display and keypad of the device 3000. The sensor component 3014 may also detect a change in position of the device 3000 or a component of the device 3000, the presence or absence of user contact with the device 3000, the orientation or acceleration/deceleration of the device 3000, and a change in temperature of the device 3000. The sensor component 3014 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 3014 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 3014 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 3016 is configured to facilitate wired or wireless communication between the device 3000 and other devices. The device 3000 may access a wireless network based on a communication standard, such as WiFi, an operator network (e.g., 2G, 3G, 4G, or 5G), or a combination thereof. In one exemplary embodiment, the communication component 3016 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 3016 further includes a Near Field Communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra-Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 3000 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements for executing the methods described above.
Embodiments of the present invention also provide a computer-readable storage medium that may be provided in an electronic device to store at least one instruction or at least one program for implementing the collision detection method for virtual character movement, the at least one instruction or the at least one program being loaded and executed by a processor to implement the collision detection method for virtual character movement provided in the above method embodiments.
In an exemplary embodiment, a storage medium is also provided, such as the memory 3004 comprising instructions executable by the processor 3020 of the apparatus 3000 to perform the above-described method. Alternatively, the storage medium may be a non-transitory computer-readable storage medium, which may be, for example, a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
Embodiments of the present invention also provide a computer-readable storage medium, instructions in which, when executed by a processor of an electronic device, enable the electronic device to perform the method of any one of the first aspects of the embodiments of the present disclosure.
Embodiments of the present invention also provide a computer program product comprising a computer program stored in a readable storage medium, from which at least one processor of the computer device reads and executes the computer program, causing the computer device to perform the method of any of the first aspects of the embodiments of the present disclosure.
It should be noted that the sequence of the embodiments of the present invention is for description only and does not represent the relative merits of the embodiments. The foregoing description has been directed to specific embodiments of this specification; other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, parallel processing of the collision detection of virtual character movement is also possible or may be advantageous.
In this specification, each embodiment is described in a progressive manner; identical and similar parts of the embodiments may be referred to one another, and each embodiment mainly describes its differences from the other embodiments. In particular, for the device embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and for relevant parts reference is made to the description of the method embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program for instructing relevant hardware, where the program may be stored in a computer readable storage medium, and the storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The foregoing description of the preferred embodiments of the invention is not intended to limit the invention to the precise form disclosed; any modifications, equivalents, and alternatives falling within the spirit and scope of the invention are intended to be included within the scope of the invention.

Claims (10)

1. A collision detection method for movement of a virtual character, comprising:
Displaying a virtual scene on a preset page; the virtual scene comprises a first object and a second object;
If the first object and/or the second object are detected to move in the virtual scene, updating position information in a position coordinate linked list based on the movement of the first object and/or the movement of the second object; the position coordinate linked list comprises a first coordinate linked list and a second coordinate linked list; the first coordinate linked list comprises the sorted first position information of all objects in the virtual scene, and the second coordinate linked list comprises the sorted second position information of all objects in the virtual scene;
And in the moving process of the first object and/or the second object, if the gap between the first position information of the first object and the first position information of the second object meets a first preset gap, and the gap between the second position information of the first object and the second position information of the second object meets the first preset gap, determining that a collision is generated between the first object and the second object.
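The collision condition of claim 1 can be shown with a short sketch (purely illustrative, not part of the claims; all names are hypothetical): a collision is determined only when the gap on both coordinate axes is within the first preset gap.

```python
# Purely illustrative sketch of the collision condition in claim 1.
# first_obj / second_obj are hypothetical (x, y) tuples standing in for the
# first and second position information; preset_gap is the first preset gap.
def collides(first_obj, second_obj, preset_gap):
    dx = abs(first_obj[0] - second_obj[0])  # gap between first position information
    dy = abs(first_obj[1] - second_obj[1])  # gap between second position information
    # A collision is generated only when BOTH gaps meet the preset gap.
    return dx <= preset_gap and dy <= preset_gap

print(collides((1.0, 2.0), (1.5, 2.4), 1.0))  # True: both gaps within 1.0
print(collides((1.0, 2.0), (3.0, 2.4), 1.0))  # False: the x-gap exceeds 1.0
```

The per-axis test is what makes the sorted coordinate linked lists useful: each axis can be checked independently against its own list.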
2. The collision detection method for virtual character movement according to claim 1, wherein the first object moves and the second object is stationary in the virtual scene;
If the first object and/or the second object are detected to move in the virtual scene, updating the position information in the position coordinate linked list based on the movement of the first object and/or the movement of the second object, including:
If the first object is detected to move in the virtual scene, determining current first position information and current second position information of the first object;
Determining first position information of the first object in the first coordinate linked list based on the identification of the first object, and determining second position information of the first object in the second coordinate linked list based on the identification of the first object;
Updating the first position information of the first object in the first coordinate linked list to the current first position information of the first object;
And updating the second position information of the first object in the second coordinate linked list to the current second position information of the first object.
3. The collision detection method for virtual character movement according to claim 2, wherein the method further comprises:
If the updated first position information of the first object exceeds the adjacent first position information in the first coordinate linked list, exchanging the sorting positions of the updated first position information of the first object and the adjacent first position information in the first coordinate linked list; in the first coordinate linked list, each piece of first position information carries the identification of an object;
If the updated second position information of the first object exceeds the adjacent second position information in the second coordinate linked list, exchanging the sorting positions of the updated second position information of the first object and the adjacent second position information in the second coordinate linked list; and in the second coordinate linked list, each piece of second position information carries the identification of an object.
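The neighbor-swap maintenance of claims 2 and 3 resembles a single insertion-sort pass. A purely illustrative sketch (hypothetical names; a plain Python list of [coordinate, identification] entries stands in for a coordinate linked list):

```python
def update_coordinate(axis_list, obj_id, new_coord):
    """axis_list: list of [coord, obj_id] entries kept sorted by coord.
    Updates obj_id's coordinate, then bubbles the entry toward its sorted
    position by swapping with any neighbor it now exceeds (claims 2-3)."""
    i = next(k for k, e in enumerate(axis_list) if e[1] == obj_id)
    axis_list[i][0] = new_coord
    # Swap forward while the updated coordinate exceeds the next neighbor.
    while i + 1 < len(axis_list) and axis_list[i][0] > axis_list[i + 1][0]:
        axis_list[i], axis_list[i + 1] = axis_list[i + 1], axis_list[i]
        i += 1
    # Swap backward while the updated coordinate falls below the previous one.
    while i > 0 and axis_list[i][0] < axis_list[i - 1][0]:
        axis_list[i], axis_list[i - 1] = axis_list[i - 1], axis_list[i]
        i -= 1
    return axis_list

x_list = [[0.0, "a"], [2.0, "b"], [5.0, "c"]]
update_coordinate(x_list, "a", 3.0)
print(x_list)  # [[2.0, 'b'], [3.0, 'a'], [5.0, 'c']]
```

Because objects usually move a short distance per frame, each update typically needs at most one or two swaps, keeping both lists sorted cheaply.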
4. The collision detection method for virtual character movement according to claim 1 or 2, wherein the determining that a collision is generated between the first object and the second object if the gap between the first position information of the first object and the first position information of the second object meets a first preset gap and the gap between the second position information of the first object and the second position information of the second object meets the first preset gap during the movement of the first object and/or the second object comprises:
In the moving process of the first object, if the gap between the first position information of the first object and the adjacent first position information in the first coordinate linked list meets the first preset gap, determining the identification of a second object corresponding to the adjacent first position information;
Determining second position information of the second object in the second coordinate linked list according to the identification of the second object;
And if the gap between the second position information of the first object and the second position information of the second object in the second coordinate linked list meets the first preset gap, determining that a collision is generated between the first object and the second object.
5. The collision detection method for virtual character movement according to claim 1 or 2, wherein the method further comprises:
In the moving process of the first object, if the gap between the first position information of the first object and the adjacent first position information in the first coordinate linked list meets the first preset gap, determining the identification of a second object corresponding to the adjacent first position information;
Determining second position information of the second object in the second coordinate linked list according to the identification of the second object;
And if the gap between the second position information of the first object and the second position information of the second object in the second coordinate linked list does not meet the first preset gap, determining that no collision is generated between the first object and the second object.
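The two-stage test of claims 4 and 5 can be sketched as follows (purely illustrative, hypothetical names; a sorted list and a dictionary stand in for the first and second coordinate linked lists): only objects adjacent in the first coordinate linked list are candidates, and the second coordinate then confirms or rules out a collision.

```python
def check_neighbor_collision(x_list, y_coords, first_id, preset_gap):
    """x_list: sorted list of (x, id) tuples; y_coords: dict id -> y.
    Returns the identification of a second object colliding with first_id,
    or None if the second-coordinate gap rules the candidate out (claims 4-5)."""
    idx = next(k for k, (_, oid) in enumerate(x_list) if oid == first_id)
    x1 = x_list[idx][0]
    for j in (idx - 1, idx + 1):  # only the adjacent first position information
        if 0 <= j < len(x_list):
            x2, second_id = x_list[j]
            if abs(x1 - x2) <= preset_gap:  # first-coordinate gap test
                # second-coordinate gap test confirms the collision (claim 4)
                if abs(y_coords[first_id] - y_coords[second_id]) <= preset_gap:
                    return second_id
    return None  # claim 5: no collision is generated

x_list = [(0.0, "a"), (0.8, "b"), (5.0, "c")]
y_coords = {"a": 1.0, "b": 1.5, "c": 9.0}
print(check_neighbor_collision(x_list, y_coords, "a", 1.0))  # b
print(check_neighbor_collision(x_list, y_coords, "c", 1.0))  # None
```

Restricting the test to adjacent entries is what keeps per-move detection cheap: distant objects never need to be compared.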
6. The collision detection method for virtual character movement according to claim 1, wherein the method further comprises:
In the moving process of the first object and/or the second object, if the gap between the first position information of the first object and the first position information of the second object meets a first preset gap, and the gap between the second position information of the first object and the second position information of the second object meets a second preset gap, determining that a collision is generated between the first object and the second object;
Displaying early warning information; the early warning information is used for early warning that a collision is generated between the first object and the second object; and the second preset gap is greater than the first preset gap.
7. The collision detection method for virtual character movement according to claim 1, wherein,
The first object is a first virtual character;
the second object is a second virtual character, a static entity in the virtual scene, or an edge in the virtual scene.
8. A collision detecting apparatus for movement of a virtual character, comprising:
The scene display module is configured to display a virtual scene on a preset page; the virtual scene comprises a first object and a second object;
A linked list information updating module configured to update position information in a position coordinate linked list based on the movement of the first object and/or the movement of the second object if the movement of the first object and/or the second object in the virtual scene is detected; the position coordinate linked list comprises a first coordinate linked list and a second coordinate linked list; the first coordinate linked list comprises the sorted first position information of all objects in the virtual scene, and the second coordinate linked list comprises the sorted second position information of all objects in the virtual scene;
the collision detection module is configured to determine that a collision is generated between the first object and the second object if, in the moving process of the first object and/or the second object, the gap between the first position information of the first object and the first position information of the second object meets a first preset gap and the gap between the second position information of the first object and the second position information of the second object meets the first preset gap.
9. An electronic device, comprising:
A processor;
A memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the collision detection method of virtual character movement of any one of claims 1 to 7.
10. A computer readable storage medium, characterized in that instructions in the computer readable storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the collision detection method of virtual character movement according to any of claims 1 to 7.
CN202311758145.3A 2023-12-19 2023-12-19 Collision detection method and device for virtual character movement, electronic equipment and storage medium Pending CN117899488A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311758145.3A CN117899488A (en) 2023-12-19 2023-12-19 Collision detection method and device for virtual character movement, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117899488A true CN117899488A (en) 2024-04-19

Family

ID=90695762

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311758145.3A Pending CN117899488A (en) 2023-12-19 2023-12-19 Collision detection method and device for virtual character movement, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117899488A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination