CN115193038A - Interaction control method and device, electronic equipment and storage medium - Google Patents

Interaction control method and device, electronic equipment and storage medium

Info

Publication number
CN115193038A
CN115193038A
Authority
CN
China
Prior art keywords
target, virtual, virtual object, sub, determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210885198.0A
Other languages
Chinese (zh)
Inventor
冯启迪
谭思远
Current Assignee
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd filed Critical Beijing Zitiao Network Technology Co Ltd
Priority to CN202210885198.0A priority Critical patent/CN115193038A/en
Publication of CN115193038A publication Critical patent/CN115193038A/en
Pending legal-status Critical Current

Classifications

    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50: Controlling the output signals based on the game progress
    • A63F13/52: Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F13/525: Changing parameters of virtual cameras

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The disclosure provides an interaction control method and apparatus, an electronic device, and a storage medium. The method is applied to a terminal device that displays a virtual scene in its interface, the virtual scene being divided into a plurality of sub-areas and containing a plurality of virtual objects associated with the sub-areas. The interaction control method comprises the following steps: receiving a target trigger operation of a user, and determining a target sub-area corresponding to the triggered target position and at least one candidate virtual object based on that position and the current pose of a virtual camera in the virtual scene; in response to an associated virtual object existing in the target sub-area, determining the associated virtual object as the target virtual object corresponding to the target trigger operation; in response to no associated virtual object existing in the target sub-area, selecting the target virtual object corresponding to the target trigger operation from the at least one candidate virtual object based on a preset rule; and displaying the target virtual object in a first preset form.

Description

Interaction control method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of games, and in particular, to an interaction control method, an interaction control apparatus, an electronic device, and a storage medium.
Background
Currently, many scenarios involve frequent user interaction with virtual objects in a virtual scene. For example, in a game, a user moves a virtual object to a new position in the game's virtual scene; to do so, the user usually needs to first select the model corresponding to the virtual object and then drag or click it to the corresponding position for placement. However, the models corresponding to different virtual objects may occlude one another in the virtual scene, so that the user cannot select the intended virtual object.
Disclosure of Invention
The embodiment of the disclosure at least provides an interaction control method, an interaction control device, electronic equipment and a storage medium.
In a first aspect, an embodiment of the present disclosure provides an interaction control method, which is applied to a terminal device; displaying a virtual scene in an interface of the terminal equipment, wherein the virtual scene is divided into a plurality of sub-areas; the virtual scene comprises a plurality of virtual objects associated with the sub-regions; the interaction control method comprises the following steps:
receiving a target trigger operation of a user, and determining a target sub-area corresponding to a target position and at least one candidate virtual object based on the triggered target position and the current pose of a virtual camera in the virtual scene;
in response to the existence of the associated virtual object in the target sub-region, determining the associated virtual object as a target virtual object corresponding to the target trigger operation;
in response to the target sub-area having no associated virtual object, selecting a target virtual object corresponding to the target trigger operation from the at least one candidate virtual object based on a preset rule;
and displaying the target virtual object in a first preset form.
In an optional embodiment, the determining, based on the triggered target position and the current pose of the virtual camera in the virtual scene, a target sub-region corresponding to the target position and at least one candidate virtual object includes:
determining a target virtual ray located in the virtual scene based on the triggered target position and the current pose of the virtual camera;
determining the target sub-region and the at least one candidate virtual object from the sub-regions and the virtual objects based on the target virtual ray, first position information of the plurality of sub-regions in the virtual scene, respectively, and second position information of the plurality of virtual objects in the virtual scene, respectively.
In an optional embodiment, the determining a target virtual ray located in the virtual scene based on the triggered target position and the current pose of the virtual camera includes:
determining a projection plane located in the virtual scene based on the current pose of the virtual camera;
determining a projection point corresponding to the target position on the projection plane based on the triggered projection relation among the target position, the projection plane and a camera plane corresponding to the virtual camera;
determining the target virtual ray based on the optical center position of the virtual camera and the position of the projection point.
In an optional embodiment, before the determining, in response to the existence of the associated virtual object in the target sub-region, the associated virtual object as the target virtual object corresponding to the target trigger operation, the method further includes:
and determining whether the target sub-area has an associated virtual object or not based on the position of the target sub-area in the virtual scene and the positions of the virtual objects in the virtual scene.
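The association check described above can be sketched minimally in Python. All names here are hypothetical; the patent does not prescribe a data structure, but a lookup from sub-region to the object anchored on it is one natural realization:

```python
# Hypothetical sketch: each sub-region is keyed by its grid coordinate, and the
# virtual object "anchored" on a sub-region is recorded in a lookup table.
from typing import Optional


class Scene:
    def __init__(self) -> None:
        # grid coordinate -> id of the virtual object anchored there (if any)
        self.anchor_of: dict[tuple[int, int], str] = {}

    def place(self, region: tuple[int, int], obj_id: str) -> None:
        self.anchor_of[region] = obj_id

    def associated_object(self, region: tuple[int, int]) -> Optional[str]:
        # The object associated with the target sub-region, or None if the
        # sub-region has no associated virtual object.
        return self.anchor_of.get(region)


scene = Scene()
scene.place((2, 3), "knight")
print(scene.associated_object((2, 3)))  # knight
print(scene.associated_object((0, 0)))  # None
```

Whether the association is stored this way or recomputed from object positions on each trigger (as the claim suggests) is an implementation choice; the dictionary variant trades memory for a constant-time check.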
In an optional embodiment, the selecting, from the at least one candidate virtual object, a target virtual object corresponding to the target triggering operation based on a preset rule includes:
determining the candidate virtual object which is firstly penetrated by the target virtual ray in the at least one candidate virtual object as a target virtual object corresponding to the target trigger operation;
alternatively,
determining the penetration distance of the at least one candidate virtual object by the target virtual ray; and determining the candidate virtual object with the maximum penetration distance as a target virtual object corresponding to the target trigger operation.
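The two preset rules above can be illustrated with a small ray-versus-bounding-box sketch. This is our own illustration under assumed geometry (axis-aligned boxes standing in for object models); the patent does not fix a representation:

```python
from dataclasses import dataclass


@dataclass
class Box:
    """Axis-aligned bounding box standing in for a candidate object's model."""
    name: str
    lo: tuple  # (x, y, z) minimum corner
    hi: tuple  # (x, y, z) maximum corner


def ray_span(origin, direction, box):
    """Return (t_entry, t_exit) of the ray through the box, or None on a miss."""
    t0, t1 = 0.0, float("inf")
    for o, d, lo, hi in zip(origin, direction, box.lo, box.hi):
        if abs(d) < 1e-12:  # ray parallel to this slab
            if not lo <= o <= hi:
                return None
            continue
        ta, tb = (lo - o) / d, (hi - o) / d
        t0, t1 = max(t0, min(ta, tb)), min(t1, max(ta, tb))
    return (t0, t1) if t0 <= t1 else None


def first_penetrated(origin, direction, boxes):
    """Rule 1: the candidate the target virtual ray enters first."""
    hits = [(s[0], b.name) for b in boxes if (s := ray_span(origin, direction, b))]
    return min(hits)[1] if hits else None


def max_penetration(origin, direction, boxes):
    """Rule 2: the candidate the ray travels the longest distance through."""
    hits = [(s[1] - s[0], b.name) for b in boxes if (s := ray_span(origin, direction, b))]
    return max(hits)[1] if hits else None


boxes = [Box("A", (1, -1, -1), (2, 1, 1)), Box("B", (3, -1, -1), (6, 1, 1))]
print(first_penetrated((0, 0, 0), (1, 0, 0), boxes))  # A (entered at t=1)
print(max_penetration((0, 0, 0), (1, 0, 0), boxes))   # B (3 units traversed vs 1)
```

The example shows how the two rules can disagree: the nearer box wins under rule 1, while the deeper box wins under rule 2.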
In an optional embodiment, the selecting, from the at least one candidate virtual object, a target virtual object corresponding to the target triggering operation based on a preset rule includes:
determining the relative position relationship between the sub-regions respectively associated with the at least one candidate virtual object and the target sub-region;
and determining the candidate virtual object whose associated sub-region has a preset relative position relationship with the target sub-region as the target virtual object corresponding to the target trigger operation.
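The relative-position rule can be sketched as follows. The preset offset below is an assumed example (the patent leaves the concrete relationship to the implementation), and all names are hypothetical:

```python
# Hypothetical sketch: pick the candidate whose associated (home) sub-region
# sits at a preset offset from the clicked target sub-region. The offset is an
# assumed example, e.g. "one row behind the clicked square".
PRESET_OFFSET = (0, 1)


def pick_by_relative_position(target_region, candidates):
    """candidates: iterable of (object_name, home_region) pairs, regions as (col, row)."""
    for name, (col, row) in candidates:
        if (col - target_region[0], row - target_region[1]) == PRESET_OFFSET:
            return name
    return None  # no candidate in the preset relative position


print(pick_by_relative_position((2, 2), [("archer", (1, 2)), ("knight", (2, 3))]))  # knight
```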
In an optional embodiment, the method further comprises:
and in response to determining that no candidate virtual object corresponding to the target position exists, taking the target sub-area as the target corresponding to the target trigger operation, and displaying the target sub-area in a second preset form.
In a second aspect, an embodiment of the present disclosure further provides an interaction control apparatus, which is applied to a terminal device; displaying a virtual scene in an interface of the terminal equipment, wherein the virtual scene is divided into a plurality of sub-areas; the virtual scene comprises a plurality of virtual objects associated with the sub-regions; the interaction control device includes:
the first determination module is used for receiving a target trigger operation of a user, and determining a target sub-area corresponding to a target position and at least one candidate virtual object based on the triggered target position and the current pose of a virtual camera in the virtual scene;
a second determining module, configured to determine, in response to a presence of an associated virtual object in the target sub-region, the associated virtual object as a target virtual object corresponding to the target trigger operation;
a third determining module, configured to select, in response to that there is no associated virtual object in the target sub-region, a target virtual object corresponding to the target trigger operation from the at least one candidate virtual object based on a preset rule;
the first display module is used for displaying the target virtual object in a first preset form.
In an optional embodiment, when the first determining module determines the target sub-region corresponding to the target position and the at least one candidate virtual object based on the triggered target position and the current pose of the virtual camera in the virtual scene, the apparatus further includes: a fourth determination module to:
determining a target virtual ray located in the virtual scene based on the triggered target position and the current pose of the virtual camera;
determining the target sub-region and the at least one candidate virtual object from the sub-regions and the virtual objects based on the target virtual ray, first position information of the plurality of sub-regions in the virtual scene, respectively, and second position information of the plurality of virtual objects in the virtual scene, respectively.
In an optional embodiment, when determining a target virtual ray located in the virtual scene based on the triggered target position and the current pose of the virtual camera, the fourth determining module is further configured to:
determining a projection plane located in the virtual scene based on the current pose of the virtual camera;
determining a projection point corresponding to the target position on the projection plane based on the triggered projection relation among the target position, the projection plane and a camera plane corresponding to the virtual camera;
determining the target virtual ray based on the optical center position of the virtual camera and the position of the projection point.
In an optional embodiment, before the second determining module determines, in response to the existence of the associated virtual object in the target sub-region, the associated virtual object as the target virtual object corresponding to the target trigger operation, the apparatus further includes: an association determination module to:
and determining whether the target sub-area has an associated virtual object or not based on the position of the target sub-area in the virtual scene and the positions of the virtual objects in the virtual scene.
In an optional embodiment, when the target virtual object corresponding to the target trigger operation is selected from the at least one candidate virtual object based on a preset rule, the third determining module is configured to:
determining the candidate virtual object which is firstly penetrated by the target virtual ray in the at least one candidate virtual object as a target virtual object corresponding to the target trigger operation;
alternatively,
determining the penetration distance of the at least one candidate virtual object by the target virtual ray; and determining the candidate virtual object with the maximum penetration distance as a target virtual object corresponding to the target trigger operation.
In an optional embodiment, when the target virtual object corresponding to the target trigger operation is selected from the at least one candidate virtual object based on a preset rule, the third determining module is further specifically configured to:
determining the relative position relationship between the sub-regions respectively associated with the at least one candidate virtual object and the target sub-region;
and determining the candidate virtual object whose associated sub-region has a preset relative position relationship with the target sub-region as the target virtual object corresponding to the target trigger operation.
In an alternative embodiment, the apparatus further comprises: a second display module to:
and in response to determining that no candidate virtual object corresponding to the target position exists, taking the target sub-area as the target corresponding to the target trigger operation, and displaying the target sub-area in a second preset form.
In a third aspect, an embodiment of the present disclosure further provides an electronic device including a processor and a memory, where the memory stores machine-readable instructions executable by the processor, and the processor is configured to execute the machine-readable instructions stored in the memory; when the machine-readable instructions are executed by the processor, the processor performs the steps of the first aspect, or of any one of the possible implementations of the first aspect.
In a fourth aspect, an embodiment of the present disclosure further provides a computer-readable storage medium having a computer program stored thereon, where the computer program, when executed, performs the steps of the first aspect or of any one of the possible implementations of the first aspect.
The interaction control method provided by the embodiments of the present disclosure divides a virtual scene into a plurality of sub-regions and associates virtual objects within the virtual scene with sub-regions. After a target trigger operation of a user is received, a target sub-area corresponding to the triggered target position and at least one candidate virtual object are determined based on that position and the current pose of a virtual camera in the virtual scene. When an associated virtual object exists in the target sub-region, the virtual object associated with the target sub-region is determined as the target virtual object; when no associated virtual object exists in the target sub-region, the target virtual object is determined according to a preset rule from the at least one candidate virtual object corresponding to the target trigger operation. The method does not simply take the virtual object directly hit by the target trigger operation as the target virtual object; instead, based on the association relationship between the sub-areas and the virtual objects, it determines the target virtual object by checking whether the target sub-area corresponding to the triggered target position has an associated virtual object. As a result, even if virtual objects occlude one another, a virtual object occluded by other virtual objects can still be triggered; at the same time, the user can accurately select a virtual object with a larger volume, the situation where a target sub-area without an associated virtual object is selected while a candidate virtual object exists is avoided, and the problem in the related art that the user cannot accurately select a virtual object is solved.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to illustrate the technical solutions of the embodiments of the present disclosure more clearly, the drawings required in the embodiments are briefly described below. The drawings, which are incorporated in and form a part of the specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the technical solutions of the present disclosure. It should be appreciated that the following drawings depict only certain embodiments of the disclosure and are therefore not to be considered limiting of its scope; those skilled in the art can derive additional related drawings from them without inventive effort.
FIG. 1 illustrates a flow chart of an interaction control method provided by some embodiments of the present disclosure;
FIG. 2 illustrates an example diagram of a virtual camera shooting a virtual scene provided by some embodiments of the present disclosure;
FIG. 3 illustrates one of the exemplary diagrams of a virtual scene provided by some embodiments of the present disclosure;
FIG. 4 illustrates a second exemplary view of a virtual scene provided by some embodiments of the present disclosure;
FIG. 5 illustrates an example diagram of determining whether an associated virtual object exists for a target sub-region provided by some embodiments of the present disclosure;
FIG. 6 illustrates an example diagram of a target virtual ray penetration sequence provided by some embodiments of the present disclosure;
FIG. 7 illustrates an example diagram of determining a target virtual object provided by some embodiments of the present disclosure;
FIG. 8 illustrates a schematic diagram of an interactive control device provided by some embodiments of the present disclosure;
fig. 9 illustrates a schematic diagram of an electronic device provided by some embodiments of the present disclosure.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present disclosure clearer, the technical solutions of the embodiments are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure. The components of the embodiments of the present disclosure, as generally described and illustrated herein, may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments is not intended to limit the scope of the disclosure as claimed, but merely represents selected embodiments of the disclosure. All other embodiments obtained by a person skilled in the art from the embodiments of the disclosure without creative effort shall fall within the protection scope of the disclosure.
Research shows that in scenarios where a user interacts with virtual objects in a virtual scene, such as games, the user frequently interacts with those virtual objects. For example, when moving a virtual object to a new position in the virtual scene, the user usually needs to first select the model of the virtual object and then drag or click it to the corresponding position for placement. However, the models of the virtual objects differ in size, and one model may occlude another, so that the user cannot select the intended virtual object.
Taking an auto-chess game as an example: in such a game, the virtual scene is divided into a plurality of sub-areas resembling squares of a chessboard; the user can select a virtual object on a square by clicking and control the selected virtual object to move between squares. However, as mobile devices have become widespread, auto-chess games are increasingly played on mobile phones and are limited by the phone's screen size. With the large number of squares on an auto-chess board, the screen area that can be allotted to each square is relatively small, so a virtual object with a large model may, while occupying its own square, extend beyond that square's boundary and occlude other squares, making the occluded squares, or the virtual objects on them, difficult for the user to select.
Based on the above research, the present disclosure provides an interaction control method that does not take the virtual object directly hit by a target trigger operation as the target virtual object; instead, based on the association relationship between the sub-regions and the virtual objects, it determines the target virtual object to be triggered by checking whether the target sub-region corresponding to the triggered target position has an associated virtual object. In this way, even if virtual objects occlude one another, a virtual object occluded by other virtual objects can still be triggered, overcoming the problem in the related art that the user cannot trigger an occluded virtual object.
The drawbacks described above were identified by the inventors only after practice and careful study; therefore, both the discovery of these problems and the solutions proposed below for them should be regarded as the inventors' contribution to the present disclosure.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
To facilitate understanding of the present embodiments, an interaction control method disclosed in the embodiments of the present disclosure is first described in detail. The execution subject of the interaction control method provided in the embodiments of the present disclosure is generally a computer device with certain computing capability, for example a terminal device, including a touch terminal or a personal computer (PC) terminal, or a server or other processing device. The touch terminal includes, for example, smartphones and tablet computers; the PC terminal includes, for example, desktop and notebook computers.
An application program implementing the method is installed and loaded on the terminal device. The application program may also be executed on a server, with the server performing the related data processing and the terminal device, acting as the display device, displaying the data transmitted by the server. When the application program runs on the terminal device, a virtual scene is displayed in the interface of the terminal device; the virtual scene contains virtual objects.
The interaction control method provided by the embodiments of the present disclosure can be applied to any virtual scene in which interaction control is performed on virtual objects, such as a game or a virtual community built on a virtual scene. The method is suitable not only for auto-chess games but also for other games in which different virtual objects occlude one another and thus cannot be selected.
Taking a game as an example of applying the interaction control method provided by the embodiments of the present disclosure: when the game runs on the terminal device, the virtual scene of the game can be displayed in the interface of the terminal device. The virtual scene contains virtual objects; a virtual object may be, but is not limited to, one manipulated by the user or a non-player character (NPC), and may include at least one of a virtual character, a virtual animal, a user-controllable virtual object, and a non-player character configured according to the game's requirements, without particular limitation here.
In addition, virtual objects may further include virtual buildings, virtual plants, virtual props, and the like. For example, when the target game is an auto-chess game, the game's virtual scene includes board squares for placing virtual objects; each square is a sub-area with a certain length and width in the virtual scene. The sub-areas include those corresponding to the user's own team and those corresponding to the enemy team. The user can place virtual objects in the sub-areas corresponding to the user's own team and, through target trigger operations, can move, use skills with, unload, sell, upgrade, and otherwise operate the target virtual object corresponding to the target trigger operation.
The following describes an interaction control method provided in the embodiments of the present disclosure.
Referring to fig. 1, a flowchart of an interaction control method provided in the embodiment of the present disclosure is shown, where the method is applied to a terminal device, a virtual scene is displayed in an interface of the terminal device, and the virtual scene is divided into a plurality of sub-areas; the virtual scene comprises a plurality of virtual objects associated with the sub-regions; the interaction control method comprises the following steps of S101-S104, wherein:
s101: receiving a target trigger operation of a user, and determining a target sub-area corresponding to a target position and at least one candidate virtual object based on the triggered target position and the current pose of a virtual camera in the virtual scene.
S102: in response to the existence of the associated virtual object in the target sub-region, determining the associated virtual object as a target virtual object corresponding to the target trigger operation.
S103: in response to the fact that no associated virtual object exists in the target sub-area, selecting a target virtual object corresponding to the target trigger operation from the at least one candidate virtual object based on a preset rule.
S104: and displaying the target virtual object in a first preset form.
In one embodiment provided by the present disclosure, a virtual scene is divided into a plurality of sub-regions, and virtual objects within the virtual scene are associated with sub-regions. After a target trigger operation of a user is received, a target sub-area corresponding to the triggered target position and at least one candidate virtual object are determined based on that position and the current pose of a virtual camera in the virtual scene. When an associated virtual object exists in the target sub-region, the virtual object associated with the target sub-region is determined as the target virtual object; when no associated virtual object exists, the target virtual object is determined according to a preset rule from the at least one candidate virtual object corresponding to the target trigger operation. In this way, the virtual object directly hit by the target trigger operation is not simply taken as the target virtual object; instead, based on the association relationship between the sub-areas and the virtual objects, the target virtual object to be triggered is determined by judging whether the target sub-area corresponding to the triggered target position has an associated virtual object. Even if virtual objects occlude one another, an occluded virtual object can still be triggered; at the same time, the user can accurately select a virtual object with a larger volume, the situation where a target sub-area without an associated virtual object is selected while a candidate virtual object exists is avoided, and the problem in the related art that the user cannot accurately select a virtual object is solved.
The following describes each of the above-mentioned steps S101 to S104 in detail.
For S101 above, the user's target trigger operation may be a click operation, a drag operation, or the like on a sub-area or a virtual object in the virtual scene; the target position corresponding to the target trigger operation is determined in the virtual scene according to that operation. The target position refers to the position in the interface triggered by the user.
For example, taking a mobile phone as the terminal device: when the target trigger operation is a click operation, the user may click any position in the virtual scene displayed on the graphical user interface with a finger; the clicked position is then determined through an internal interface and used as the target position.
In addition, when the target trigger operation is a drag operation, the target position may be the starting point of the drag, for example, when the target virtual object at the target position is to be dragged to another sub-region; the target position may also be the end point of the drag, for example, when a skill that can be applied to a virtual object is triggered by dragging the skill's icon onto the corresponding virtual object.
The embodiment of the present disclosure provides a specific manner for determining a target sub-area and at least one candidate virtual object corresponding to a target position according to a triggered target position and a current pose of a virtual camera in a virtual scene, including S1011 to S1012:
S1011: Determining a target virtual ray located in the virtual scene based on the triggered target position and the current pose of the virtual camera;
the virtual camera can be understood as a virtual camera arranged in a model space where a virtual scene is located, and the virtual camera shoots a certain coordinate point in the model space at a certain shooting angle to obtain the virtual scene, namely a picture displayed in an interface of the terminal device. The coordinate position and the shooting angle of the virtual camera in the model space are determined according to the current pose of the virtual camera, pictures of a virtual scene under different visual angles can be obtained by adjusting the current pose of the virtual camera, the virtual camera can move along with the virtual object in the pose relative to the virtual object, and therefore a fixed visual angle moving along with the virtual object is obtained.
And after the triggered target position and the current pose of the virtual camera are determined, determining a target virtual ray in the virtual scene.
Specifically, in one embodiment of the present disclosure, a projection plane located in the virtual scene is determined based on the current pose of the virtual camera; a projection point corresponding to the target position is determined on the projection plane based on the projection relation among the triggered target position, the projection plane, and the camera plane corresponding to the virtual camera; and the target virtual ray is determined based on the optical center position of the virtual camera and the position of the projection point.
Illustratively, in the exemplary diagram of the virtual camera capturing the virtual scene shown in fig. 2, there is a model space corresponding to the virtual scene. The virtual camera is located in this model space, where its coordinate position and shooting angle can be determined; shooting according to that coordinate position and shooting angle yields the picture of the virtual scene under that pose. The projection plane is any plane located in the model space corresponding to the virtual scene and perpendicular to the optical axis of the virtual camera. The projection plane and the picture of the virtual scene shot by the virtual camera have a projection relation, and according to this relation a projection point corresponding to the target position can be determined on the projection plane.
After the projection point is determined, a target virtual ray is determined from the optical center position of the virtual camera in the virtual scene and the position of the projection point in the virtual scene: the ray takes the optical center position as its endpoint and extends along the line connecting the optical center position and the projection point position.
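The construction described above can be sketched as a standard pinhole-model unprojection. The following is a minimal illustrative sketch, not the disclosed implementation: the function name `screen_to_ray`, the normalized screen coordinates in [0, 1]², and the use of a vertical field of view are all assumptions made for the example.

```python
import math

def screen_to_ray(cam_pos, forward, right, up, fov_y_deg, aspect, u, v):
    """Build a world-space ray from the camera's optical center through the
    projection point corresponding to a screen coordinate (u, v) in [0, 1]^2.
    Assumes forward/right/up are the unit axes of the camera's current pose."""
    half_h = math.tan(math.radians(fov_y_deg) / 2.0)  # plane half-height at unit depth
    half_w = half_h * aspect
    x = (2.0 * u - 1.0) * half_w   # map u to [-half_w, half_w]
    y = (1.0 - 2.0 * v) * half_h   # map v to [-half_h, half_h]; v grows downward
    # Projection point: one unit along the optical axis, offset within the plane.
    point = [cam_pos[i] + forward[i] + x * right[i] + y * up[i] for i in range(3)]
    # The ray takes the optical center as its endpoint and extends through the point.
    d = [point[i] - cam_pos[i] for i in range(3)]
    norm = math.sqrt(sum(c * c for c in d))
    return cam_pos, [c / norm for c in d]
```

Clicking the screen center (u = v = 0.5) yields a ray straight along the optical axis, which matches the intuition that the center of the picture lies on the camera's forward direction.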
S1012: determining the target sub-region and the at least one candidate virtual object from the sub-regions and the virtual objects based on the target virtual ray, first position information of a plurality of sub-regions in the virtual scene, respectively, and second position information of a plurality of virtual objects in the virtual scene, respectively.
Here, collision detection may be performed on the sub-regions and the target virtual ray based on first position information of the plurality of sub-regions in the virtual scene, respectively, and a target sub-region that collides with the target virtual ray may be determined from the plurality of sub-regions.
The collision detection may also be performed on the virtual object and the target virtual ray based on the position information of the virtual model corresponding to each of the plurality of virtual objects in the virtual scene, so as to determine at least one candidate virtual object penetrated by the target virtual ray.
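The collision detection between the target virtual ray and a sub-region or a virtual object can be illustrated with the classic slab method for ray versus axis-aligned-box intersection. This is a generic sketch under the assumption that each model is approximated by an axis-aligned bounding box; the disclosure itself does not fix a particular collision algorithm.

```python
def ray_aabb(origin, direction, box_min, box_max):
    """Slab-method collision test between a ray and an axis-aligned box.
    Returns (t_entry, t_exit) along the ray, or None if there is no hit."""
    t_entry, t_exit = 0.0, float("inf")
    for i in range(3):
        if abs(direction[i]) < 1e-12:
            # Ray parallel to this slab: a hit is possible only if the origin
            # already lies between the two slab planes.
            if origin[i] < box_min[i] or origin[i] > box_max[i]:
                return None
            continue
        t1 = (box_min[i] - origin[i]) / direction[i]
        t2 = (box_max[i] - origin[i]) / direction[i]
        if t1 > t2:
            t1, t2 = t2, t1
        t_entry, t_exit = max(t_entry, t1), min(t_exit, t2)
        if t_entry > t_exit:
            return None  # slab intervals do not overlap: miss
    return t_entry, t_exit
```

A returned interval also gives the penetration distance (t_exit minus t_entry), which is reused later by rule M2.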
Illustratively, in the first exemplary diagram of the virtual scene of a game shown in fig. 3, the virtual scene includes a plurality of sub-areas for placing virtual objects, some for placing virtual objects of my formation and some for placing virtual objects of the enemy formation. The sub-areas for my formation form a placement area in a certain sequence, for example 32 sub-areas arranged in 4 rows × 8 columns. Each sub-area corresponds to first position information indicating the position of that sub-area in the virtual scene, and each virtual object in the virtual scene corresponds to second position information indicating its specific position in the virtual scene. The three-dimensional model of a virtual object includes, for example, a plurality of faces forming a mesh, each face composed of at least three vertices and the edges connecting them; different faces combine to form the surface of the three-dimensional model, and adjacent faces share vertices and edges. The second position information is, for example, the position in the virtual scene of a certain point of the three-dimensional model corresponding to the virtual object, such as a certain vertex or the central point of the model.
After the target virtual ray, the first position information of the multiple sub-regions in the virtual scene, and the second position information of the virtual objects in the virtual scene are obtained, the sub-region in which the collision point of the target virtual ray falls can be determined and taken as the target sub-region, so that the target sub-region corresponding to the user's target trigger operation is obtained. Meanwhile, every virtual object lying along the extension of the target virtual ray, from the optical center of the virtual camera until the ray collides with a sub-region, is determined as a candidate virtual object, regardless of whether the virtual objects occlude one another.
In another embodiment, since the target virtual ray only extends from the optical center of the virtual camera toward the target position, it is unknown during the extension whether the model about to be collided with is the model of the target sub-region corresponding to the target position or the three-dimensional model of a virtual object. The virtual objects and the sub-regions may therefore be set at different levels, for example the virtual objects at an object level and the sub-regions at a battlefield level: when the model collided with belongs to the object level, the target virtual ray penetrates the virtual object and continues extending; when the model collided with belongs to the battlefield level, the ray stops extending.
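The level-based behavior described above (penetrate object-level models, stop at the first battlefield-level model) can be sketched as a filter over the ray's ordered hit list. The layer names and the `(distance, layer, identifier)` hit representation are assumptions for illustration only.

```python
OBJECT_LAYER = "object"            # assumed name for the virtual-object level
BATTLEFIELD_LAYER = "battlefield"  # assumed name for the sub-region level

def collect_along_ray(hits):
    """hits: (distance, layer, identifier) tuples for every model the target
    virtual ray would collide with. The ray penetrates object-level models,
    collecting them as candidates, and stops extending at the first
    battlefield-level model, which becomes the target sub-region."""
    candidates, target_region = [], None
    for _, layer, ident in sorted(hits):  # traverse in order of distance
        if layer == OBJECT_LAYER:
            candidates.append(ident)      # penetrated: keep as candidate
        elif layer == BATTLEFIELD_LAYER:
            target_region = ident         # collided with a sub-region: stop
            break
    return candidates, target_region
```

Note that an object-level hit lying beyond the first battlefield-level hit is never collected, which mirrors the ray stopping at the sub-region.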
After the target sub-region and the at least one candidate virtual object are determined, S102 or S103 is executed according to whether an associated virtual object exists in the target sub-region.
In the above S102, when an associated virtual object exists in the target sub-region, the associated virtual object is determined as the target virtual object corresponding to the target trigger operation.
For example, since the target sub-area is determined based on the user's target trigger operation, when an associated virtual object exists in the target sub-area, that associated virtual object may be considered the virtual object the user wants to select. In the second exemplary diagram of the virtual scene of a game shown in fig. 4, the target sub-area corresponding to the target trigger operation is determined to be the A1 sub-area. Although the virtual object on the A1 sub-area is blocked by the virtual object in the lower A2 sub-area, the user's trigger operation landed on A1, so in response to that operation the virtual object on the A1 sub-area is taken as what the user actually selected. Therefore, when an associated virtual object exists in the target sub-area, the target sub-area is used as the criterion for determining the target virtual object corresponding to the target trigger operation.
In addition, when determining whether the associated virtual object exists in the target sub-region, the embodiment of the present disclosure may determine whether the associated virtual object exists in the target sub-region based on the position of the target sub-region in the virtual scene and the positions of the plurality of virtual objects in the virtual scene.
For example, in the exemplary diagram of fig. 5, which illustrates determining whether the target sub-region has the associated virtual object, the position of the virtual object in the virtual scene is represented as the position of a determination region of the virtual object in the virtual scene, where the determination region is a regular rectangle for determining which sub-region of the plurality of sub-regions the virtual object is located in.
Here, the determination region of the virtual object is only used as an auxiliary determination for determining the association relationship between the virtual object and the plurality of sub-regions in the embodiment of the present disclosure, and the present disclosure does not set any limit to the determination method.
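As one possible auxiliary determination, assuming the sub-regions form a uniform grid and the determination rectangle is represented by its center point, the association between a virtual object and a sub-region could be computed as follows; the grid layout and function names are illustrative assumptions, not the disclosed method.

```python
def region_index(center, grid_origin, cell_w, cell_h, rows, cols):
    """Map the center of a virtual object's determination rectangle to the
    (row, col) index of the sub-region containing it, or None if outside."""
    col = int((center[0] - grid_origin[0]) // cell_w)
    row = int((center[1] - grid_origin[1]) // cell_h)
    return (row, col) if 0 <= row < rows and 0 <= col < cols else None

def associated_object(target_region, object_centers,
                      grid_origin, cell_w, cell_h, rows, cols):
    """Return the identifier of the virtual object associated with the target
    sub-region, or None when the sub-region has no associated object."""
    for ident, center in object_centers.items():
        if region_index(center, grid_origin, cell_w, cell_h,
                        rows, cols) == target_region:
            return ident
    return None
```

With a 4 × 8 grid of unit cells, an object centered at (2.5, 1.5) falls in sub-region (1, 2), so querying that sub-region finds it, while an empty sub-region yields None and the flow falls through to S103.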
In the above S103, when no virtual object is associated with the target sub-region, the target virtual object is determined from the at least one candidate virtual object according to a preset rule.
Specifically, in an embodiment provided by the present disclosure, when determining the target virtual object according to the preset rule, at least one of the following three ways M1 to M3 is included, but not limited to:
m1: and determining the candidate virtual object which is firstly penetrated by the target virtual ray in the at least one candidate virtual object as the target virtual object corresponding to the target trigger operation.
For example, when the target virtual ray yields multiple candidate virtual objects, the penetration order of the target virtual ray is obtained. In the exemplary diagram of the target virtual ray penetration sequence shown in fig. 6, three candidate virtual objects are determined, that is, the target virtual ray emitted from the virtual camera penetrates three virtual objects, and the extending direction of the target virtual ray is perpendicular to the projection plane. In order of increasing distance from the virtual camera, the three candidate virtual objects are a1, a2, and a3; according to the preset rule, the virtual object closest to the virtual camera, namely the virtual object a1 displayed at the forefront of the interface, is determined as the target virtual object.
Here, when no associated virtual object exists in the target sub-region, the virtual object itself may be regarded as what the user wants to select; therefore, among the multiple candidate virtual objects, the one closest to the user, namely the one not occluded by other virtual objects, is determined as the target virtual object according to the triggered target position.
In addition, the a3 virtual object with the farthest distance may be determined as the target virtual object, or other determination methods may be set according to actual conditions, and the present disclosure does not limit this.
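A minimal sketch of M1, assuming each candidate is paired with the entry distance at which the target virtual ray first penetrates it:

```python
def pick_first_penetrated(candidates):
    """candidates: (entry_distance, identifier) pairs for the objects the
    target virtual ray penetrates. M1 picks the first-penetrated (nearest)
    candidate; substituting max for min would instead pick the farthest,
    as the alternative in the text allows."""
    return min(candidates)[1]
```

Applied to the fig. 6 example, with a1 nearest and a3 farthest, this returns a1, the object displayed at the forefront of the interface.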
M2: determining the distance by which the target virtual ray penetrates each of the at least one candidate virtual object, and determining the candidate virtual object with the largest penetration distance as the target virtual object corresponding to the target trigger operation.
Illustratively, because a virtual object is a three-dimensional model, the target virtual ray traverses a certain distance when penetrating it. In three-dimensional modeling, the trunk of a model is generally thicker than a weapon or a decoration, so penetrating the trunk yields a correspondingly larger penetration distance; the target virtual object the user wants to select can therefore be determined from the penetration distances of the multiple virtual objects penetrated by the target virtual ray.
For example, a larger penetration distance generally indicates a position closer to the trunk of the virtual object; therefore, the virtual object with the largest penetration distance is the one most likely to have been actually selected by the user, and it is determined as the target virtual object.
In addition, the virtual object with the minimum penetration distance may be determined as the target virtual object, or other determination methods may be set according to actual conditions, and the disclosure does not limit this.
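A minimal sketch of M2, assuming each candidate carries the ray parameters at which the target virtual ray enters and exits its model (for example, from a slab-method intersection):

```python
def pick_by_penetration(candidates):
    """candidates: (t_entry, t_exit, identifier) triples from the ray/model
    intersection. M2 picks the candidate the ray traverses for the longest
    distance, on the assumption that a thicker hit is closer to the trunk."""
    return max(candidates, key=lambda c: c[1] - c[0])[2]
```

Replacing max with min gives the variant mentioned above that selects the smallest penetration distance instead.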
M3: determining the relative position relationship between the sub-regions respectively associated with the at least one candidate virtual object and the target sub-region; and determining the relative position relationship as a candidate virtual object associated with a sub-region of a preset relative position relationship as a target virtual object corresponding to the target trigger operation.
For example, the relative positional relationship between the sub-region associated with a candidate virtual object and the target sub-region may be expressed in eight orientations: directly above, directly below, directly left, directly right, upper left, lower left, upper right, and lower right of the target sub-region. A selection sequence can then be determined according to which orientation each associated sub-region occupies. Owing to the shooting angle of the virtual camera, the target sub-region is most easily occluded by the virtual object in the sub-region directly below it; that is, when no associated virtual object exists in the target sub-region, the target virtual object the user wants to select is generally considered to be located directly below the target sub-region. Therefore, among the multiple candidate virtual objects, the virtual object whose associated sub-region is directly below the target sub-region is selected as the target virtual object. If no candidate virtual object is associated with the sub-region directly below, the target virtual object is determined in the order of lower left, lower right, directly left, directly right, upper left, upper right, and directly above; alternatively, when no virtual object is associated with the sub-region directly below the target sub-region, the determination may fall back to the methods M1 and M2 described above.
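The orientation-priority selection of M3 can be sketched as a lookup over the eight neighboring sub-regions. The exact priority list shown is an assumption reconstructed from the description (directly below first, then the remaining orientations); rows are assumed to grow downward on the screen.

```python
# (row_offset, col_offset) of a candidate's sub-region relative to the target
# sub-region, in assumed priority order: directly below first.
ORIENTATION_PRIORITY = [
    (1, 0),    # directly below
    (1, -1),   # lower left
    (1, 1),    # lower right
    (0, -1),   # directly left
    (0, 1),    # directly right
    (-1, -1),  # upper left
    (-1, 1),   # upper right
    (-1, 0),   # directly above
]

def pick_by_orientation(target_region, candidate_regions):
    """candidate_regions maps (row, col) sub-region indices to the identifier
    of the candidate virtual object associated with that sub-region. Returns
    the candidate in the highest-priority orientation, or None when no
    candidate occupies any of the eight neighboring orientations."""
    tr, tc = target_region
    for dr, dc in ORIENTATION_PRIORITY:
        ident = candidate_regions.get((tr + dr, tc + dc))
        if ident is not None:
            return ident
    return None
```

A None result corresponds to the fallback case in which M1 or M2 is used instead.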
Here, the above-mentioned manners of determining the target virtual object by M1, M2, and M3 may be used alone or in any combination according to actual situations, and these three determination manners may be preset in the game setting as setting options, and the user may set the manner of determining the target virtual object by himself according to his own usage habits.
Exemplarily, when only one candidate virtual object exists and no associated virtual object exists in the target sub-region, as shown in the exemplary diagram for determining a target virtual object in fig. 7, the sub-region A1 is the target sub-region corresponding to the target position. The target virtual ray generated according to the target position of the user's target trigger operation penetrates the virtual object on the sub-region A2. It is first determined whether an associated virtual object exists in the sub-region A1; after it is determined that none exists, the virtual object on the sub-region A2 penetrated by the target virtual ray is determined as the target virtual object.
For the above S104, after the target virtual object corresponding to the target trigger operation is determined, the target virtual object is displayed according to the first preset form.
For example, when multiple candidate virtual objects exist in a virtual scene, the candidate virtual objects may occlude the target sub-region corresponding to the target position triggered by the user, making it difficult for the user to tell which candidate virtual object has been selected. Therefore, the edge of the determined target virtual object may be rendered in red so that the user can see it intuitively. In addition, the target virtual object may be enlarged or levitated, or a selection marker may be displayed above it, for example an arrow, a finger, or a sword pointing at the target virtual object.
Here, the first preset form serves as feedback given to the user after the target virtual object corresponding to the target trigger operation is determined, so that the user can clearly tell whether it is the virtual object they wanted to select before performing subsequent operations.
In addition, after responding to the target trigger operation of the user, there may be a case where there is no candidate virtual object corresponding to the target position.
To address this situation, in an embodiment provided by the present disclosure, in response to there being no candidate virtual object corresponding to the target position, the target sub-area is taken as the target corresponding to the target trigger operation, and the target sub-area is displayed in a second preset form.
For example, after responding to the user's target trigger operation, if the target virtual ray does not penetrate any virtual object and the target sub-region has no associated virtual object, the target sub-region is taken as the target corresponding to the target trigger operation. For example, a misoperation may occur while the user is arranging virtual objects: the user intends to move a target virtual object into another sub-area but first clicks a wrong target position, so that no target virtual object is selected, and then clicks the other sub-area. Since no associated virtual object exists in that sub-area, its edge flashes red, reminding the user that the operation is invalid.
The embodiment of the present disclosure further provides a specific example of interaction control, in which a mobile phone serves as the execution subject. A target game is deployed in the mobile phone; the virtual scene displayed by the target game includes virtual objects and sub-areas for placing them, and the user performs operations such as moving, detaching, selling, and upgrading on a selected target virtual object through target trigger operations. Using ray technology, a first target virtual ray is emitted from the optical center position of the virtual camera toward the target position corresponding to the target trigger operation; any virtual object in the extending direction of the first target virtual ray is penetrated, and each penetrated virtual object is determined as a candidate virtual object. The first target virtual ray stops when it collides with the target sub-region corresponding to the target position. Meanwhile, the virtual objects and the sub-regions in the virtual scene are divided into levels: the virtual objects correspond to an object level, and the sub-regions correspond to a battlefield level.
After responding to a target trigger operation of the user, the target virtual object is determined through the following steps S1 to S2.
S1: according to the battlefield level, after a ray is shot from a camera according to the target position to obtain a corresponding target sub-area, a virtual object on the target sub-area is determined as a target virtual object.
S2: and for the object level, when the target virtual object is not determined in the step S1, emitting a second target virtual ray, stopping the second target virtual ray when the second target virtual ray collides with the virtual object of the object level, and determining the collided virtual object as the target virtual object.
It will be understood by those skilled in the art that, in the method of the present disclosure, the order in which the steps are written does not imply a strict order of execution or any limitation on the implementation; the specific order of execution of the steps should be determined by their functions and possible inherent logic.
Based on the same inventive concept, an interaction control device corresponding to the interaction control method is also provided in the embodiments of the present disclosure, and since the principle of solving the problem of the device in the embodiments of the present disclosure is similar to the interaction control method described above in the embodiments of the present disclosure, the implementation of the device may refer to the implementation of the method, and the repeated parts are not described again.
Referring to fig. 8, a schematic diagram of an interaction control apparatus provided in an embodiment of the present disclosure is shown; the interaction control apparatus is applied to a terminal device. A virtual scene is displayed in an interface of the terminal device, the virtual scene is divided into a plurality of sub-areas, and the virtual scene includes a plurality of virtual objects associated with the sub-areas. The interaction control apparatus includes: a first determination module 81, a second determination module 82, a third determination module 83, and a first presentation module 84; wherein:
a first determining module 81, configured to receive a target triggering operation of a user, and determine, based on a triggered target position and a current pose of a virtual camera in the virtual scene, a target sub-region and at least one candidate virtual object corresponding to the target position;
a second determining module 82, configured to, in response to that there is an associated virtual object in the target sub-region, determine the associated virtual object as a target virtual object corresponding to the target trigger operation;
a third determining module 83, configured to select, in response to that there is no associated virtual object in the target sub-region, a target virtual object corresponding to the target trigger operation from the at least one candidate virtual object based on a preset rule;
a first presentation module 84, configured to present the target virtual object in a first preset form.
In an optional embodiment, when the first determining module 81 determines the target sub-region and the at least one candidate virtual object corresponding to the target position based on the triggered target position and the current pose of the virtual camera in the virtual scene, the apparatus further includes a fourth determining module 85 configured to:
determining a target virtual ray located in the virtual scene based on the triggered target position and the current pose of the virtual camera;
determining the target sub-region and the at least one candidate virtual object from the sub-regions and the virtual objects based on the target virtual ray, first position information of the plurality of sub-regions in the virtual scene, respectively, and second position information of the plurality of virtual objects in the virtual scene, respectively.
In an optional embodiment, when determining the target virtual ray located in the virtual scene based on the triggered target position and the current pose of the virtual camera, the fourth determining module 85 is further specifically configured to:
determining a projection plane located in the virtual scene based on the current pose of the virtual camera;
determining a projection point corresponding to the target position on the projection plane based on the projection relation among the triggered target position, the projection plane, and the camera plane corresponding to the virtual camera;
determining the target virtual ray based on the optical center position of the virtual camera and the position of the projection point.
In an optional embodiment, before the second determining module 82 determines, in response to the existence of the associated virtual object in the target sub-region, the associated virtual object as the target virtual object corresponding to the target trigger operation, the apparatus further includes: an association determination module 86 configured to:
determining whether the target sub-area has an associated virtual object based on the position of the target sub-area in the virtual scene and the positions of the plurality of virtual objects in the virtual scene respectively.
In an optional embodiment, the third determining module 83, when the target virtual object corresponding to the target triggering operation is selected from the at least one candidate virtual object based on a preset rule, is configured to:
determining the candidate virtual object which is firstly penetrated by the target virtual ray in the at least one candidate virtual object as a target virtual object corresponding to the target trigger operation;
or,
determining the penetration distance of the at least one candidate virtual object by the target virtual ray; and determining the candidate virtual object with the maximum penetration distance as a target virtual object corresponding to the target trigger operation.
In an optional implementation manner, when the target virtual object corresponding to the target trigger operation is selected from the at least one candidate virtual object based on the preset rule, the third determining module 83 is further specifically configured to:
determining the relative position relationship between the sub-regions respectively associated with the at least one candidate virtual object and the target sub-region;
and determining the relative position relationship as a candidate virtual object associated with a sub-region of a preset relative position relationship as a target virtual object corresponding to the target trigger operation.
In an alternative embodiment, the apparatus further comprises: a second display module 87 for:
and in response to the fact that the alternative virtual object corresponding to the target position does not exist, taking the target sub-area as a target corresponding to the target trigger operation, and displaying the target sub-area in a second preset form.
An embodiment of the present disclosure further provides an electronic device, as shown in fig. 9, which is a schematic structural diagram of the electronic device provided in the embodiment of the present disclosure, and the electronic device includes:
a processor 91 and a memory 92; the memory 92 stores machine-readable instructions executable by the processor 91, the processor 91 being configured to execute the machine-readable instructions stored in the memory 92, the processor 91 performing the following steps when the machine-readable instructions are executed by the processor 91:
receiving target trigger operation of a user, and determining a target sub-area corresponding to a target position and at least one alternative virtual object based on the triggered target position and the current pose of a virtual camera in the virtual scene;
in response to the existence of the associated virtual object in the target sub-region, determining the associated virtual object as a target virtual object corresponding to the target trigger operation;
in response to the fact that no associated virtual object exists in the target sub-area, selecting a target virtual object corresponding to the target trigger operation from the at least one candidate virtual object based on a preset rule;
and displaying the target virtual object in a first preset form.
The memory 92 includes a memory 921 and an external memory 922; the memory 921 is also referred to as an internal memory, and temporarily stores operation data in the processor 91 and data exchanged with an external memory 922 such as a hard disk, and the processor 91 exchanges data with the external memory 922 through the memory 921.
For the specific execution process of the instruction, reference may be made to the steps of the interaction control method described in the embodiments of the present disclosure, and details are not described here.
The embodiments of the present disclosure also provide a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program performs the steps of the interaction control method described in the above method embodiments. The storage medium may be a volatile or non-volatile computer-readable storage medium.
The embodiments of the present disclosure also provide a computer program product, where the computer program product carries a program code, and instructions included in the program code may be used to execute the steps of the interaction control method in the foregoing method embodiments, which may be referred to specifically in the foregoing method embodiments, and are not described herein again.
The computer program product may be implemented by hardware, software or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium, and in another alternative embodiment, the computer program product is embodied in a Software product, such as a Software Development Kit (SDK), or the like.
It can be clearly understood by those skilled in the art that, for convenience and simplicity of description, the specific working process of the system and the apparatus described above may refer to the corresponding process in the foregoing method embodiment, and details are not described herein again. In the several embodiments provided in the present disclosure, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in software functional units and sold or used as a stand-alone product, may be stored in a non-transitory computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing an electronic device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present disclosure. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that the above-mentioned embodiments are merely specific embodiments of the present disclosure, used to illustrate rather than limit its technical solutions, and the scope of the present disclosure is not limited thereto. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person familiar with the art can still, within the technical scope of the present disclosure, modify the technical solutions described in the foregoing embodiments or readily conceive of variations thereof, or make equivalent substitutions for some of their technical features; such modifications, variations and substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the embodiments of the present disclosure, and shall all be covered within the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (10)

1. An interaction control method, applied to a terminal device, wherein a virtual scene is displayed in an interface of the terminal device, the virtual scene is divided into a plurality of sub-regions, and the virtual scene comprises a plurality of virtual objects associated with the sub-regions; the interaction control method comprising:
receiving a target trigger operation of a user, and determining, based on the triggered target position and a current pose of a virtual camera in the virtual scene, a target sub-region corresponding to the target position and at least one candidate virtual object;
in response to an associated virtual object existing in the target sub-region, determining the associated virtual object as a target virtual object corresponding to the target trigger operation;
in response to no associated virtual object existing in the target sub-region, selecting a target virtual object corresponding to the target trigger operation from the at least one candidate virtual object based on a preset rule;
and displaying the target virtual object in a first preset form.
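The selection flow recited in claim 1 can be sketched as follows. This is an illustrative sketch only, not the patented implementation; all names (`SubRegion`, `resolve_target`, `preset_rule`) are hypothetical:

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class SubRegion:
    # Virtual object already associated with this sub-region, if any.
    associated_object: Optional[str] = None

def resolve_target(region: SubRegion, candidates: List[str],
                   preset_rule: Callable[[List[str]], str]) -> str:
    """Resolve the virtual object targeted by a trigger operation."""
    if region.associated_object is not None:
        # The target sub-region has an associated object: it wins outright.
        return region.associated_object
    # Otherwise fall back to the preset rule over the candidate objects.
    return preset_rule(candidates)
```

For example, with a target sub-region that already has an associated object, the preset rule is never consulted; only an empty sub-region triggers the candidate fallback.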
2. The method of claim 1, wherein the determining, based on the triggered target position and the current pose of the virtual camera in the virtual scene, a target sub-region corresponding to the target position and at least one candidate virtual object comprises:
determining a target virtual ray located in the virtual scene based on the triggered target position and the current pose of the virtual camera;
determining the target sub-region and the at least one candidate virtual object from the sub-regions and the virtual objects based on the target virtual ray, first position information of each of the plurality of sub-regions in the virtual scene, and second position information of each of the plurality of virtual objects in the virtual scene.
3. The method of claim 2, wherein the determining a target virtual ray located in the virtual scene based on the triggered target position and the current pose of the virtual camera comprises:
determining a projection plane located in the virtual scene based on the current pose of the virtual camera;
determining, on the projection plane, a projection point corresponding to the target position based on a projection relation among the triggered target position, the projection plane and a camera plane corresponding to the virtual camera;
determining the target virtual ray based on the optical center position of the virtual camera and the position of the projection point.
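The ray construction of claim 3 might be sketched as below, under a pinhole-camera assumption. All function and parameter names are hypothetical and not part of the claims; the screen offset is assumed to already be expressed relative to the image centre in camera units:

```python
import math
from typing import Tuple

Vec3 = Tuple[float, float, float]

def screen_to_ray(offset: Tuple[float, float],
                  origin: Vec3,
                  right: Vec3, up: Vec3, forward: Vec3,
                  focal_len: float) -> Tuple[Vec3, Vec3]:
    """Build a pick ray from a screen-space offset and the camera pose.

    `right`/`up`/`forward` are the camera's unit axes in world space;
    `origin` is the optical centre of the virtual camera.
    """
    u, v = offset
    # Projection point on a plane focal_len in front of the optical centre.
    point = tuple(origin[i] + focal_len * forward[i]
                  + u * right[i] + v * up[i] for i in range(3))
    # Ray direction: optical centre -> projection point, normalised.
    d = [point[i] - origin[i] for i in range(3)]
    n = math.sqrt(sum(x * x for x in d))
    return origin, tuple(x / n for x in d)
```

A centred tap (offset `(0, 0)`) yields a ray straight along the camera's forward axis, matching the intuition that the projection point for the image centre lies on the optical axis.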
4. The method according to any one of claims 1 to 3, wherein before the determining the associated virtual object as the target virtual object corresponding to the target trigger operation, the method further comprises:
and determining whether the target sub-area has an associated virtual object or not based on the position of the target sub-area in the virtual scene and the positions of the virtual objects in the virtual scene.
5. The method according to claim 2 or 3, wherein the selecting a target virtual object corresponding to the target trigger operation from the at least one candidate virtual object based on a preset rule comprises:
determining the candidate virtual object which is firstly penetrated by the target virtual ray in the at least one candidate virtual object as a target virtual object corresponding to the target trigger operation;
or,
determining the penetration distance of the at least one candidate virtual object penetrated by the target virtual ray respectively; and determining the candidate virtual object with the maximum penetration distance as a target virtual object corresponding to the target trigger operation.
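Assuming each candidate's intersection interval along the target virtual ray is already known, the two preset rules of claim 5 might be sketched as follows (hypothetical names, not the patented implementation):

```python
from typing import Dict, Tuple

# (t_enter, t_exit): ray parameters where the ray enters and exits the object.
Hit = Tuple[float, float]

def first_penetrated(hits: Dict[str, Hit]) -> str:
    """Candidate the target virtual ray penetrates first (smallest t_enter)."""
    return min(hits, key=lambda name: hits[name][0])

def max_penetration(hits: Dict[str, Hit]) -> str:
    """Candidate with the largest penetration distance (longest chord)."""
    return max(hits, key=lambda name: hits[name][1] - hits[name][0])
```

The two rules can disagree: an object close to the camera but grazed by the ray wins under the first rule, while a farther object that the ray traverses lengthwise wins under the second.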
6. The method according to claim 1, wherein the selecting a target virtual object corresponding to the target trigger operation from the at least one candidate virtual object based on a preset rule comprises:
determining a relative position relationship between the target sub-region and each sub-region associated with the at least one candidate virtual object;
and determining a candidate virtual object associated with a sub-region whose relative position relationship is a preset relative position relationship as the target virtual object corresponding to the target trigger operation.
7. The method of claim 1, further comprising:
and in response to no candidate virtual object corresponding to the target position existing, taking the target sub-region as a target corresponding to the target trigger operation, and displaying the target sub-region in a second preset form.
8. An interaction control device, applied to a terminal device, wherein a virtual scene is displayed in an interface of the terminal device, the virtual scene is divided into a plurality of sub-regions, and the virtual scene comprises a plurality of virtual objects associated with the sub-regions; the interaction control device comprising:
a first determining module, configured to receive a target trigger operation of a user, and determine, based on the triggered target position and a current pose of a virtual camera in the virtual scene, a target sub-region corresponding to the target position and at least one candidate virtual object;
a second determining module, configured to determine, in response to an associated virtual object existing in the target sub-region, the associated virtual object as a target virtual object corresponding to the target trigger operation;
a third determining module, configured to select, in response to that there is no associated virtual object in the target sub-region, a target virtual object corresponding to the target trigger operation from the at least one candidate virtual object based on a preset rule;
a first display module, configured to display the target virtual object in a first preset form.
9. An electronic device, comprising: a processor and a memory storing machine-readable instructions executable by the processor; wherein, when the machine-readable instructions are executed by the processor, the processor performs the steps of the interaction control method according to any one of claims 1 to 7.
10. A computer-readable storage medium, having stored thereon a computer program, which, when executed by an electronic device, performs the steps of the interaction control method according to any one of claims 1 to 7.
CN202210885198.0A 2022-07-26 2022-07-26 Interaction control method and device, electronic equipment and storage medium Pending CN115193038A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210885198.0A CN115193038A (en) 2022-07-26 2022-07-26 Interaction control method and device, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN115193038A true CN115193038A (en) 2022-10-18

Family

ID=83584758

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210885198.0A Pending CN115193038A (en) 2022-07-26 2022-07-26 Interaction control method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115193038A (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6323895B1 (en) * 1997-06-13 2001-11-27 Namco Ltd. Image generating system and information storage medium capable of changing viewpoint or line-of sight direction of virtual camera for enabling player to see two objects without interposition
JP2020062116A (en) * 2018-10-15 2020-04-23 株式会社 ディー・エヌ・エー System, method, and program for providing content using augmented reality technique
CN111833428A (en) * 2019-03-27 2020-10-27 杭州海康威视系统技术有限公司 Visual domain determining method, device and equipment
US20200357164A1 (en) * 2017-03-17 2020-11-12 Unity IPR ApS Method and system for automated camera collision and composition preservation
CN112148125A (en) * 2020-09-23 2020-12-29 北京市商汤科技开发有限公司 AR interaction state control method, device, equipment and storage medium
CN112807684A (en) * 2020-12-31 2021-05-18 上海米哈游天命科技有限公司 Obstruction information acquisition method, device, equipment and storage medium
WO2021227682A1 (en) * 2020-05-15 2021-11-18 腾讯科技(深圳)有限公司 Virtual object controlling method, apparatus and device and medium
WO2021249134A1 (en) * 2020-06-10 2021-12-16 腾讯科技(深圳)有限公司 Method for interaction with virtual object and related device
CN114442888A (en) * 2022-02-08 2022-05-06 联想(北京)有限公司 Object determination method and device and electronic equipment


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZHANG Jinling; JIA Qingxuan; SUN Hanxu; LIU Yahui: "Multi-level occlusion algorithm in augmented reality", Journal of Hunan University (Natural Sciences), no. 05, 25 May 2009 (2009-05-25) *
DUAN Ruiqing; YU Ye; LIU Xiaoping: "A measurement and positioning method based on virtual real-scene space", Journal of Hefei University of Technology (Natural Science), no. 06, 28 June 2012 (2012-06-28) *

Similar Documents

Publication Publication Date Title
KR102625233B1 (en) Method for controlling virtual objects, and related devices
WO2020238592A1 (en) Method and apparatus for generating mark information in virtual environment, electronic device, and storage medium
JP6529659B2 (en) Information processing method, terminal and computer storage medium
CN108144293B (en) Information processing method, information processing device, electronic equipment and storage medium
US20200330866A1 (en) Location indication information display method, electronic apparatus, and storage medium
CN109925720B (en) Information processing method and device
CN110639203B (en) Control response method and device in game
CN113440846B (en) Game display control method and device, storage medium and electronic equipment
CN110052021B (en) Game object processing method, mobile terminal device, electronic device, and storage medium
WO2022088941A1 (en) Virtual key position adjusting method and apparatus, and device, storage medium and program product
WO2023109328A1 (en) Game control method and apparatus
CN111475089B (en) Task display method, device, terminal and storage medium
WO2021227684A1 (en) Method for selecting virtual objects, apparatus, terminal and storage medium
CN113559501A (en) Method and device for selecting virtual units in game, storage medium and electronic equipment
CN113457157A (en) Method and device for switching virtual props in game and touch terminal
CN115193038A (en) Interaction control method and device, electronic equipment and storage medium
JP7423137B2 (en) Operation presentation method, device, terminal and computer program
CN112221123B (en) Virtual object switching method and device, computer equipment and storage medium
JP2016018363A (en) Game program for display-controlling object arranged on virtual space plane
CN113440835A (en) Control method and device of virtual unit, processor and electronic device
CN113694514A (en) Object control method and device
CN113457144A (en) Method and device for selecting virtual units in game, storage medium and electronic equipment
JP2016130888A (en) Computer program for icon selection, portable terminal, and computer mounting method
CN109117076B (en) Game unit selection method, storage medium and computer equipment
CN110825280A (en) Method, apparatus and computer-readable storage medium for controlling position movement of virtual object

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination