CN117815663A - Interaction method, device, terminal equipment and computer readable storage medium


Info

Publication number
CN117815663A
Authority
CN
China
Prior art keywords
target
screening condition
state
virtual character
interaction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311855052.2A
Other languages
Chinese (zh)
Inventor
张桂苗
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202311855052.2A
Publication of CN117815663A
Legal status: Pending


Abstract

The application provides an interaction method, an interaction device, a terminal device and a computer readable storage medium. A graphical user interface is provided by the terminal device, and the method comprises: displaying at least one screening condition component in the graphical user interface, wherein each screening condition component corresponds to a distinguishing mode; in response to a triggering operation on a target screening condition component among the at least one screening condition component, determining a target screening condition according to the target distinguishing mode corresponding to the target screening condition component and the current character state of a target virtual character; and determining a first interactive object satisfying the target screening condition from the plurality of interactive objects, and controlling the target virtual character and the first interactive object to be displayed in the virtual scene. By providing screening condition components corresponding to different screening conditions, the method adapts to the interactive objects that a user wants to display or screen in different scenarios, so that the interactive objects satisfying the target screening condition can be displayed.

Description

Interaction method, device, terminal equipment and computer readable storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to an interaction method, an interaction device, a terminal device, and a computer readable storage medium.
Background
In the related art, a large number of interactive objects are often displayed simultaneously in a virtual game scene. When an interactive operation with a specific interactive object is required, the user usually has to click on that object to trigger the operation, and the click is interfered with by the models of other interactive objects, making it difficult for the user to hit the object to be interacted with.
In general, when a large number of interactive objects are displayed simultaneously in a virtual game scene, all of them can be hidden with a specific key combination, but the problem remains that an interactive operation with a specific interactive object still cannot be performed precisely.
Disclosure of Invention
In view of this, an object of the present application is to provide an interaction method, an interaction device, a terminal device and a computer readable storage medium.
In view of the above object, in a first aspect, the present application provides an interaction method, in which a graphical user interface is provided by a terminal device and at least part of a virtual scene is displayed in the graphical user interface, the virtual scene including a plurality of interactive objects and a target virtual character controlled by the terminal device, the method comprising:
displaying at least one screening condition component in the graphical user interface, wherein each screening condition component corresponds to a distinguishing mode;
in response to a triggering operation on a target screening condition component among the at least one screening condition component, determining a target screening condition according to the target distinguishing mode corresponding to the target screening condition component and the current character state of the target virtual character;
and determining a first interactive object satisfying the target screening condition from the plurality of interactive objects, and controlling the target virtual character and the first interactive object to be displayed in the virtual scene.
In a second aspect, the present application provides an interaction apparatus, for which a graphical user interface is provided by a terminal device and at least part of a virtual scene is presented in the graphical user interface, the virtual scene including a plurality of interactive objects and a target virtual character controlled by the terminal device, the apparatus comprising:
a first display module configured to display at least one screening condition component in the graphical user interface, wherein each screening condition component corresponds to a distinguishing mode;
a determining module configured to, in response to a triggering operation on a target screening condition component among the at least one screening condition component, determine a target screening condition according to the target distinguishing mode corresponding to the target screening condition component and the current character state of the target virtual character;
and a second display module configured to determine a first interactive object satisfying the target screening condition from the plurality of interactive objects, and to control the display of the target virtual character and the first interactive object in the virtual scene.
In a third aspect, the present application provides a terminal device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the interaction method according to the first aspect when executing the program.
In a fourth aspect, the present application provides a computer-readable storage medium storing computer instructions for causing a computer to perform the interaction method of the first aspect.
From the foregoing, it can be seen that the interaction method, apparatus, terminal device and computer readable storage medium provided in the present application provide a graphical user interface through a terminal device, where at least part of a virtual scene is displayed in the graphical user interface, the virtual scene including a plurality of interactive objects and a target virtual character controlled by the terminal device. At least one screening condition component is displayed in the graphical user interface, each screening condition component corresponding to a distinguishing mode; in response to a triggering operation on a target screening condition component among the at least one screening condition component, a target screening condition is determined according to the target distinguishing mode corresponding to the target screening condition component and the current character state of the target virtual character; and a first interactive object satisfying the target screening condition is determined from the plurality of interactive objects, and the target virtual character and the first interactive object are controlled to be displayed in the virtual scene. By providing screening condition components corresponding to different screening conditions, the method adapts to the interactive objects that a user wants to display or screen in different scenarios, so that the interactive objects satisfying the target screening condition can be displayed.
Drawings
In order to more clearly illustrate the technical solutions of the present application or the related art, the drawings required in the description of the embodiments or the related art are briefly introduced below. It is apparent that the drawings in the following description show only embodiments of the present application, and that other drawings can be obtained from them by those of ordinary skill in the art without inventive effort.
Fig. 1 shows an exemplary application scenario of an interaction method provided in an embodiment of the present application.
Fig. 2 shows an exemplary flowchart of an interaction method provided in an embodiment of the present application.
FIG. 3 illustrates an exemplary schematic diagram of screening interaction components in a virtual scene according to an embodiment of the present application.
Fig. 4 shows an exemplary schematic diagram of screening condition components corresponding to the screening condition area according to an embodiment of the present application.
FIG. 5 illustrates an exemplary diagram of triggering a first interactive object according to an embodiment of the present application.
FIG. 6 illustrates an exemplary diagram of triggering a first interactive object to perform an interactive operation according to an embodiment of the present application.
Fig. 7 shows an exemplary schematic diagram of a sliding track in a virtual scene according to an embodiment of the application.
FIG. 8 illustrates an exemplary schematic diagram of an endpoint of a sliding track moving into an object interaction response region corresponding to a first interaction object according to an embodiment of the present application.
FIG. 9 illustrates an exemplary diagram of screening a first interactive object that meets team screening criteria in accordance with an embodiment of the present application.
Fig. 10 illustrates an exemplary schematic diagram of screening a first interactive object that meets a helper screening criteria according to an embodiment of the present application.
Fig. 11 illustrates an exemplary schematic diagram of screening a first interactive object for compliance with pet or ride screening conditions in accordance with an embodiment of the present application.
Fig. 12 is a schematic structural diagram of an interaction device according to an embodiment of the present application.
Fig. 13 shows an exemplary structural schematic diagram of a terminal device provided in an embodiment of the present application.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail below with reference to the accompanying drawings.
It should be noted that unless otherwise defined, technical or scientific terms used in the embodiments of the present application should be given the ordinary meaning understood by one of ordinary skill in the art to which the present application belongs. The terms "first", "second" and the like used in the embodiments of the present application do not denote any order, quantity or importance, but are merely used to distinguish one element from another. The word "comprising", "comprises" or the like means that the element or item preceding the word covers the elements or items listed after the word and their equivalents, without excluding other elements or items. The terms "connected", "coupled" and the like are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. "Upper", "lower", "left", "right" and the like merely indicate relative positional relationships, which may change accordingly when the absolute position of the described object changes.
As described in the background, in the related art a large number of virtual characters are often displayed simultaneously in a virtual game scene. When a user wants to interact with a specific virtual character, the user usually has to click on that character to trigger the interaction, and the click is interfered with by the models of other virtual characters, making it difficult for the user to hit the character to be interacted with.
For example, in a game scene there are several overlapping models of interactive objects: model A corresponds to the interactive object the user wants to interact with, but model A lies beneath model B and is completely occluded by it, so the user cannot accurately click model A to interact with that object.
For another example, a user who has no teammates wants to interact with interactive objects that already have teams, so as to join one of them; the other teamless users in the game scene are then merely interference.
In general, when a large number of interactive objects are displayed simultaneously in a virtual game scene, all of them can be hidden with a specific key combination, but the problem remains that an interactive operation with a specific interactive object still cannot be performed precisely.
As such, the present application provides an interaction method, apparatus, terminal device and computer readable storage medium. A graphical user interface is provided by the terminal device, at least part of a virtual scene is displayed in the graphical user interface, and the virtual scene includes a plurality of interactive objects and a target virtual character controlled by the terminal device. At least one screening condition component is displayed in the graphical user interface, each screening condition component corresponding to a distinguishing mode; in response to a triggering operation on a target screening condition component among the at least one screening condition component, a target screening condition is determined according to the target distinguishing mode corresponding to the target screening condition component and the current character state of the target virtual character; and a first interactive object satisfying the target screening condition is determined from the plurality of interactive objects, and the target virtual character and the first interactive object are controlled to be displayed in the virtual scene. By providing screening condition components corresponding to different screening conditions, the method adapts to the interactive objects that a user wants to display or screen in different scenarios, so that the interactive objects satisfying the target screening condition can be displayed.
Fig. 1 shows an exemplary application scenario of an interaction method provided in an embodiment of the present application.
Referring to fig. 1, in this application scenario, a local terminal device 101 and a server 102 are included. The local terminal device 101 and the server 102 may be connected through a wired or wireless communication network, so as to implement data interaction.
The local terminal device 101 may be a terminal device near the user side with data transmission and multimedia input/output functions, such as a desktop computer, a mobile phone, a laptop, a tablet computer, a media player, a vehicle-mounted computer, an intelligent wearable device, a personal digital assistant (PDA), or any other terminal device capable of implementing the above functions. The terminal device may include a processor and a display screen with a touch input function: the processor processes operation data, generates the graphical user interface, and controls its display on the display screen, while the display screen presents the visual operation interface.
The server 102 may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs (Content Delivery Networks), big data and artificial intelligence platforms.
In some exemplary embodiments, the interaction method may run on the local terminal device 101 or the server 102.
When the interaction method runs on the server 102, the server 102 provides an interaction service to the user of a terminal device in which a client communicating with the server 102 is installed, and the user can designate a target program through the client. The server 102 displays at least one screening condition component in the graphical user interface, each screening condition component corresponding to a distinguishing mode; in response to a triggering operation on a target screening condition component among the at least one screening condition component, determines a target screening condition according to the target distinguishing mode corresponding to the target screening condition component and the current character state of the target virtual character; and determines a first interactive object satisfying the target screening condition from the plurality of interactive objects, controlling the target virtual character and the first interactive object to be displayed in the virtual scene. The server 102 may also send the determined first interactive object to the client, which presents it to the user. The terminal device may be the aforementioned local terminal device 101.
When the interaction method is run on the server 102, the method may be implemented and executed based on a cloud interaction system.
The cloud interaction system comprises a client device and a cloud server.
In some example embodiments, various cloud applications may run under the cloud interaction system, for example cloud games. Taking cloud games as an example, a cloud game is a game mode based on cloud computing. In the running mode of a cloud game, the body that runs the game program is separated from the body that presents the game pictures: the storage and execution of the in-game control method are completed on the cloud game server, while the client device only receives and sends data and presents the game pictures. The client device may be a display device with data transmission functions near the user side, such as a mobile terminal, a television, a computer or a palmtop computer, while the cloud game server that performs the information processing is in the cloud. When playing, the player operates the client device to send operation instructions to the cloud game server; the cloud game server runs the game according to the instructions, encodes and compresses data such as game pictures, and returns them to the client device through the network, where the client device finally decodes the data and outputs the game pictures.
The above embodiments have been described taking the case where the interaction method runs on the server 102 as an example, but the disclosure is not limited thereto; in some exemplary embodiments the interaction method may also run on the local terminal device 101.
The local terminal device 101 may include a display screen and a processor. A client is installed in the local terminal device 101, and the user can specify a target program through the client. The processor displays at least one screening condition component in the graphical user interface, each screening condition component corresponding to a distinguishing mode; in response to a triggering operation on a target screening condition component among the at least one screening condition component, determines a target screening condition according to the target distinguishing mode corresponding to the target screening condition component and the current character state of the target virtual character; and determines a first interactive object satisfying the target screening condition from the plurality of interactive objects, controlling the target virtual character and the first interactive object to be displayed in the virtual scene. The processor may also send the determined first interactive object to the client, which presents it to the user via the display screen.
In some exemplary embodiments, taking a visualization operation as an example, the local terminal device 101 stores a visualization operation program and presents the visualization operation screen. The local terminal device 101 interacts with the user through a graphical user interface; that is, conventionally, the visualization operation program is downloaded, installed and run through the terminal device. The local terminal device 101 may provide the graphical user interface to the user in a variety of ways: for example, it may be rendered on a display screen of the terminal, or provided to the user by holographic projection. For example, the local terminal device 101 may include a display screen for presenting the graphical user interface, which includes the game visuals, and a processor for running the visualization operation program, generating the graphical user interface, and controlling its display on the display screen.
In some exemplary embodiments, the embodiments of the present disclosure provide an interaction method, in which a graphical user interface is provided through a terminal device, where the terminal device may be the aforementioned local terminal device 101 or may be a client device in the aforementioned cloud interaction system.
An interaction method according to an exemplary embodiment of the present disclosure is described below in connection with the application scenario of fig. 1. It should be noted that the above application scenario is only shown for the convenience of understanding the spirit and principles of the present disclosure, and the embodiments of the present disclosure are not limited in any way in this respect. Rather, embodiments of the present disclosure may be applied to any scenario where applicable.
Fig. 2 shows an exemplary flowchart of an interaction method provided in an embodiment of the present application.
Referring to fig. 2, in the interaction method provided by an embodiment of the present application, a graphical user interface is provided by a terminal device, at least part of a virtual scene is displayed in the graphical user interface, and the virtual scene includes a plurality of interactive objects and a target virtual character controlled by the terminal device. The method specifically includes the following steps:
S202: displaying at least one screening condition component in the graphical user interface, wherein each screening condition component corresponds to a distinguishing mode.
S204: in response to a triggering operation on a target screening condition component among the at least one screening condition component, determining a target screening condition according to the target distinguishing mode corresponding to the target screening condition component and the current character state of the target virtual character.
S206: determining a first interactive object satisfying the target screening condition from the plurality of interactive objects, and controlling the target virtual character and the first interactive object to be displayed in the virtual scene.
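Steps S202 to S206 amount to mapping the pair (target distinguishing mode, current character state) to a concrete screening condition and then partitioning the interactive objects by that condition. The following is a minimal, hypothetical Python sketch of that pipeline; all identifiers (InteractiveObject, determine_target_condition, and so on) are illustrative assumptions, not names taken from the patent.

from dataclasses import dataclass

@dataclass
class InteractiveObject:
    name: str
    in_team: bool = False
    visible: bool = True

@dataclass
class TargetCharacter:
    in_team: bool = False

def determine_target_condition(mode: str, character: TargetCharacter):
    """S204: derive the target screening condition from the target
    distinguishing mode and the current character state."""
    if mode == "team":
        # A character without a team screens for teamed objects, and vice versa.
        wanted = not character.in_team
        return lambda obj: obj.in_team == wanted
    raise ValueError(f"unknown distinguishing mode: {mode}")

def apply_screening(objects, condition):
    """S206: objects meeting the condition stay visible (first interactive
    objects); the rest are hidden (second interactive objects)."""
    first = []
    for obj in objects:
        obj.visible = condition(obj)
        if obj.visible:
            first.append(obj)
    return first

# An unteamed target character triggers the "team" screening component.
scene = [InteractiveObject("W"), InteractiveObject("X", in_team=True),
         InteractiveObject("Y"), InteractiveObject("Z", in_team=True)]
cond = determine_target_condition("team", TargetCharacter(in_team=False))
print([o.name for o in apply_screening(scene, cond)])  # ['X', 'Z']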
In some embodiments, a graphical user interface may be provided by a terminal device in which at least a portion of a virtual scene may be presented, the virtual scene including a plurality of interactive objects and a target virtual character controlled by the terminal device.
A screening interaction component may be presented in the graphical user interface. When the user wants to interact with specific interactive objects in the virtual scene, the screening interaction component can be triggered to display at least one screening condition component in the graphical user interface, so that a first interactive object meeting the user's interaction requirement is screened out.
FIG. 3 illustrates an exemplary schematic diagram of screening interaction components in a virtual scene according to an embodiment of the present application.
Specifically, the triggering operation may include a long-press operation or a first click operation. Referring to fig. 3, a screening interaction component, such as M in fig. 3, is displayed in the graphical user interface. When the screening interaction component is long-pressed, a screening condition area may be displayed in the graphical user interface; the screening condition area includes a plurality of screening condition components, each corresponding to a component interaction response area, and each screening condition component corresponds to a screening condition.
Likewise, when the screening interaction component is clicked, the screening condition area with its screening condition components and their component interaction response areas may be displayed in the graphical user interface. The first triggering operation may be delivered to the client in the form of an instruction signal, which may include a key instruction signal generated by triggering a target key of a target peripheral; the target key may be a single key or a combination of several keys on the peripheral. For example, a key instruction signal may be generated by pressing the "K" key on a keyboard, by simultaneously pressing the "X" and "Y" keys on a gamepad, or by moving the mouse to hover over the screening interaction component and clicking. When the triggering operation is performed with the mouse, the specific operations include, but are not limited to, a double-click operation, a drag operation and the like.
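As a rough illustration of how such instruction signals could be routed, the sketch below maps hypothetical peripheral signals (a single key, a key combination, a mouse click) to the action of opening the screening condition area. The bindings follow the examples in the text but are assumptions for illustration, not the patent's concrete key assignments.

# Hypothetical signal-to-action bindings; the key names are illustrative only.
TRIGGER_BINDINGS = {
    ("key", "K"): "open_screening_area",                      # single key
    ("combo", frozenset({"X", "Y"})): "open_screening_area",  # key combination
    ("mouse", "click_on_component"): "open_screening_area",   # hover + click
}

def dispatch(signal):
    if TRIGGER_BINDINGS.get(signal) == "open_screening_area":
        print("display screening condition area with components A, B, C")

dispatch(("key", "K"))
dispatch(("combo", frozenset({"X", "Y"})))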
Fig. 4 shows an exemplary schematic diagram of screening criteria components corresponding to screening criteria areas according to an embodiment of the present application.
Referring to fig. 4, for example, after the screening interaction component M is triggered, a screening condition area containing three different screening condition components, for example screening condition components A, B and C, may be displayed in the graphical user interface.
In some embodiments, a second triggering operation on the target screening condition component, continuous with the triggering operation, may be a first hover operation. After the screening interaction component is triggered and screening condition components A, B and C are displayed, it is determined whether the duration of the first hover operation within the response area of any screening condition component reaches a first preset hover time. Specifically, when the user inputs a touch signal, the graphical user interface can determine the touch point corresponding to the touch signal, for example G in fig. 4, and then determine whether the touch point G lies in the response area of any screening condition component and stays there for the first preset hover time; for example, when it hovers in the response area of screening condition component A for the first preset time, screening condition component A is taken as the target screening condition component.
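A minimal sketch of that hover test follows, assuming rectangular response areas and a touch point sampled over time; the class and parameter names are hypothetical.

from dataclasses import dataclass

@dataclass
class Rect:
    x: float
    y: float
    w: float
    h: float
    def contains(self, px: float, py: float) -> bool:
        return self.x <= px <= self.x + self.w and self.y <= py <= self.y + self.h

class HoverSelector:
    """Reports a screening condition component as the target once the touch
    point has stayed in its response area for the preset hover time."""
    def __init__(self, areas: dict, preset_hover_time: float = 0.5):
        self.areas = areas                  # component name -> Rect
        self.preset = preset_hover_time
        self.current = None                 # component currently hovered
        self.entered_at = None

    def update(self, px, py, now):
        inside = next((n for n, r in self.areas.items() if r.contains(px, py)), None)
        if inside != self.current:
            self.current, self.entered_at = inside, now
            return None
        if inside is not None and now - self.entered_at >= self.preset:
            return inside                   # target screening condition component
        return None

sel = HoverSelector({"A": Rect(0, 0, 10, 10), "B": Rect(10, 0, 10, 10)})
for t in (0.0, 0.3, 0.6):                   # touch point G rests inside A
    hit = sel.update(5, 5, t)
print(hit)  # A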
FIG. 5 illustrates an exemplary diagram of triggering a first interactive object according to an embodiment of the present application.
Referring to fig. 5, the screening condition a corresponding to screening condition component A is then triggered, so that the first interactive objects satisfying the screening condition corresponding to the target screening condition component are displayed. For example, fig. 4 displays interactive objects W, X, Y and Z; when screening condition a is triggered and interactive objects X and Z satisfy it, only interactive objects X and Z are displayed in the virtual scene, and interactive objects W and Y are hidden. The interactive objects still displayed after screening serve as first interactive objects, and the hidden ones serve as second interactive objects. An interactive object may be a virtual character, a pet, a ride (mount), a virtual building (such as a weapon shop, a pharmacy or a restaurant), a transaction node and the like, where a transaction node may be, for example, a stall set up by a virtual character in the virtual scene.
FIG. 6 illustrates an exemplary diagram of triggering a first interactive object to perform an interactive operation, according to an embodiment of the present application.
Referring to fig. 6, after the first interactive objects are obtained by screening, a third interactive object may be determined from the first interactive objects through a selection operation continuous with the triggering operation, and the target virtual character may then be controlled to perform a target interactive behavior on the third interactive object, thereby determining an interaction result. The selection operation may be a second hover operation, for which it is determined whether its duration within the object interaction response area of a first interactive object reaches a second preset hover time. Specifically, when the user inputs a touch signal, the graphical user interface can determine the corresponding touch point, for example G in fig. 6, and then determine whether the touch point G lies in the object interaction response area of a first interactive object and stays there for the second preset hover time; for example, when it hovers in the object interaction response area of first interactive object X for the second preset time, the target interactive behavior corresponding to the target screening condition is performed on first interactive object X, so as to determine the corresponding interaction result.
It should be noted that the target interactive behavior may be determined by the target distinguishing mode and the current state of the target virtual character.
Referring again to fig. 4, after the screening interaction component M is triggered, the screening condition area with screening condition components A, B and C is displayed in the graphical user interface.
In some embodiments, the second triggering operation may be a second click operation. After the screening interaction component is triggered by the first triggering operation and screening condition components A, B and C are displayed, any screening condition component in the screening condition area may be taken as the target screening condition component in response to a second click operation on it. Specifically, when the user clicks the response area of screening condition component A, screening condition component A is taken as the target screening condition component.
Referring to fig. 5, the screening condition a corresponding to screening condition component A is then triggered, so that the first interactive objects satisfying the screening condition corresponding to the target screening condition component are displayed. For example, fig. 4 displays interactive objects W, X, Y and Z; when screening condition a is triggered and interactive objects X and Z satisfy it, only interactive objects X and Z are displayed in the virtual scene, and interactive objects W and Y are hidden.
Referring to fig. 6, after the first interactive objects are obtained by screening, a third interactive object may be determined from them through a selection operation continuous with the triggering operation, and the target virtual character may be controlled to perform a target interactive behavior on the third interactive object, thereby determining an interaction result. The selection operation may be a third click operation; for example, in response to a third click operation in the object interaction response area of first interactive object X, the target interactive behavior corresponding to the target screening condition may be performed on first interactive object X, so as to determine the corresponding interaction result.
In some embodiments, the second triggering operation may also be a first sliding operation. Specifically, while the triggering operation remains uninterrupted, the sliding track corresponding to the first sliding operation can be determined in response to a first sliding operation in the screening condition area, and it is then determined whether the sliding track crosses the boundary between the component interaction response area of any screening condition component and the other areas, the other areas being the areas outside the component interaction response areas. In response to the sliding track crossing the boundary between the response area of a screening condition component and the other areas, that screening condition component is taken as the target screening condition component, and a first interactive object satisfying the target distinguishing mode is determined from the interactive objects in the virtual scene according to the target distinguishing mode corresponding to the target screening condition component and the current state of the target virtual character.
Fig. 7 shows an exemplary schematic diagram of a sliding track in a virtual scene according to an embodiment of the application.
For example, when the sliding track, such as N in fig. 7, slides through screening condition component A into the component response area of screening condition component B, and then slides across the boundary of that area into the part of the virtual scene outside the screening condition area, screening condition component B is triggered, and a first interactive object satisfying the target distinguishing mode is determined from the interactive objects in the virtual scene according to the target distinguishing mode corresponding to the target screening condition component and the current state of the target virtual character. In other words, as long as the sliding track moves within the screening condition area, no screening condition is triggered; a screening condition is triggered only when the track slides out into the virtual scene. In fig. 7, for example, the sliding track N slides over the component response areas of screening condition components A and B, then leaves the component response area of B and reaches a position outside the screening condition area, such as position P; at that moment screening condition component B is taken as the target screening condition component, and a first interactive object satisfying the target distinguishing mode is determined from the interactive objects in the virtual scene according to the target distinguishing mode corresponding to screening condition component B and the current state of the target virtual character.
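A sketch of this boundary test, under the assumption of rectangular areas: nothing triggers while the track stays inside the screening condition area, and the component whose response area was touched last becomes the target once the track leaves that area. The function and area names are illustrative.

def contains(rect, px, py):
    x, y, w, h = rect
    return x <= px <= x + w and y <= py <= y + h

def component_on_slide_exit(track, screening_area, component_areas):
    """track: sequence of (x, y) points of the sliding operation.
    Returns the last screening condition component whose response area the
    track touched, once the track leaves the screening condition area;
    returns None while it stays inside."""
    last_touched = None
    for px, py in track:
        hit = next((n for n, r in component_areas.items()
                    if contains(r, px, py)), None)
        if hit is not None:
            last_touched = hit              # still inside a component area
        elif not contains(screening_area, px, py) and last_touched is not None:
            return last_touched             # slid out: trigger this component
    return None

# Track N crosses A, then B, then leaves the screening area at point P.
areas = {"A": (0, 0, 10, 10), "B": (10, 0, 10, 10)}
track = [(5, 5), (15, 5), (25, 5)]
print(component_on_slide_exit(track, (0, 0, 20, 10), areas))  # B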
Referring to fig. 5, the distinguishing mode B corresponding to screening condition component B is then triggered, so that the first interactive objects satisfying the screening condition corresponding to the target screening condition component are displayed. For example, fig. 7 displays interactive objects W, X, Y and Z; when distinguishing mode B is triggered and interactive objects X and Z satisfy it, only interactive objects X and Z are displayed in the virtual scene, and interactive objects W and Y are hidden.
Further, after the first interactive objects are obtained by screening, a third interactive object can be determined from them through a selection operation continuous with the triggering operation, and the target virtual character can then be controlled to perform the target interactive behavior on the third interactive object, thereby determining the interaction result. Here the selection operation may be a second sliding operation. After it is determined that the sliding track has crossed the boundary between the response area of a screening condition component and the other areas, the end point of the sliding track, e.g. position P in fig. 7, may be determined.
Fig. 8 illustrates an exemplary schematic diagram of the end point of a sliding track moving into the object interaction response area corresponding to a first interactive object according to an embodiment of the present application.
Still further, referring to fig. 8, when the end point is moved into the object interaction response area of a first interactive object, for example when position P is moved into the object interaction response area of interactive object X, the target interactive behavior corresponding to the target distinguishing mode may be performed on that first interactive object to determine the interaction result; that is, the target interactive behavior corresponding to distinguishing mode B is performed on interactive object X, thereby determining the interaction result.
It should be noted that, similarly to the way the target screening condition component is determined by the hover operation in the foregoing embodiments, the hover determination can be combined with the first sliding operation to determine the target screening condition component. Specifically, while the long-press operation remains uninterrupted, in response to a first sliding operation in the screening condition area, it is determined whether the hover time of the touch point corresponding to the first sliding operation within the component interaction response area of any screening condition component reaches a preset time. If it does, that screening condition component can be taken as the target screening condition component, and a first interactive object satisfying the target distinguishing mode is determined from the interactive objects in the virtual scene according to the target distinguishing mode corresponding to the target screening condition component and the current state of the target virtual character.
In practical applications a number of different distinguishing modes may be provided and configured according to the actual situation; for example, the distinguishing modes may cover teams, helpers (camps), and special props (such as rides and pets), which are not specifically limited herein.
In some embodiments, the target distinguishing mode may include distinguishing by team state; the target distinguishing mode may then be "team", and the current character state may include a teamed state and an unteamed state. Specifically, in response to a triggering operation on the team screening component, the team state of the target virtual character can be determined, i.e. whether the target virtual character controlled by the user is in any team. Being in a team here means being in some team regardless of whether other virtual characters are in it besides the target virtual character; even a team containing only the target virtual character counts as the teamed state. It can then be determined whether interactive objects in the unteamed state are to be taken as the target screening condition.
Further, if the target virtual character controlled by the user is not in any team, that is, the target virtual character is determined to be in the unteamed state, interactive objects in the teamed state are taken as the target screening condition, and a third interactive object in the teamed state indicated by the target distinguishing mode may be determined from the interactive objects in the virtual scene. In other words, the target distinguishing mode screens the third interactive object according to whether the target virtual character is in a team: if the target virtual character is in no team, it can be assumed to want to join one, so the target distinguishing mode correspondingly screens out third interactive objects that are in a team.
Referring to fig. 8, if screening condition B corresponds to the target distinguishing mode of distinguishing by team state, then when the end point P of the sliding track is moved to the position corresponding to first virtual character X, the team information of the team that first virtual character X belongs to, together with the character information of all virtual characters in that team, may be displayed in the graphical user interface, for example at the Q mark in fig. 8. The character information at least includes the name information and combat power information of each virtual character; the team information at least includes target location information indicating where the team is heading, the average combat power of all virtual characters in the team, and/or target task information indicating the task the team is performing. For example, first virtual character X is in a team named XXX with an average combat power of 1000 and target location information LLL; the team consists of virtual characters X and K, where the combat power of X is 1100 and that of K is 900.
Still further, after the end point P of the sliding track has been moved to the position corresponding to first virtual character X, the selection operation may be terminated, and a request to join the team may be sent to first virtual character X, or to another virtual character in the team, for example virtual character K. When first virtual character X approves the join request sent by the target virtual character, the target virtual character can be added to the team of first virtual character X, achieving the teaming goal.
Conversely, if the target virtual character controlled by the user is in a team, a third interactive object not in any team, as indicated by the target distinguishing mode, may be determined from the interactive objects in the virtual scene; that is, when the target virtual character is determined to be in the teamed state, interactive objects in the unteamed state are taken as the target screening condition. In other words, if the target virtual character is in a team, it can be assumed to want to invite virtual characters not in any team to join its own team, so the target distinguishing mode correspondingly screens out second virtual characters that are in no team.
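Both branches of the team-state logic — screen teamed objects when unteamed, screen unteamed objects when teamed — and the request each branch produces can be put in one place. A hypothetical sketch under that reading, where send stands in for the game's messaging layer:

def team_condition(target_in_team: bool):
    """Unteamed target -> screen objects that are in a team (to join one);
    teamed target -> screen objects that are in no team (to invite them)."""
    if target_in_team:
        return lambda obj: not obj["in_team"]
    return lambda obj: obj["in_team"]

def send(recipient, request):
    print(f"-> {recipient['name']}: {request}")

def team_interaction(target_in_team: bool, third_object: dict):
    """Target interactive behavior once a third interactive object is chosen."""
    if target_in_team:
        send(third_object, "team_invitation_request")   # invite them to our team
    else:
        send(third_object, "join_team_request")         # apply to join their team

objects = [{"name": "W", "in_team": False}, {"name": "X", "in_team": True}]
cond = team_condition(target_in_team=False)
candidates = [o for o in objects if cond(o)]            # only X passes
team_interaction(False, candidates[0])                  # -> X: join_team_request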
FIG. 9 illustrates an exemplary diagram of screening a first interactive object that meets team screening criteria in accordance with an embodiment of the present application.
Referring to fig. 9, if screening condition B corresponds to the target screening condition of distinguishing by team state, then when the end point P of the sliding track is moved to the position corresponding to second virtual character X, the character information of second virtual character X, for example its name and its combat power of 1100, may be displayed in the graphical user interface at the R mark in fig. 9.
Still further, after the end point P of the sliding track has been moved to the position corresponding to second virtual character X, the third triggering operation may be terminated, and a team invitation request may be sent to second virtual character X. When second virtual character X approves the team invitation request sent by the target virtual character, second virtual character X can be added to the team of the target virtual character, achieving the teaming goal.
In some embodiments, the target distinguishing mode may further include distinguishing by helper state, for example the distinguishing mode "camp", the current character state including a helper-joined state and a helper-not-joined state. In response to the target virtual character being in the helper-joined state, interactive objects in the helper-not-joined state may be taken as the target screening condition. Whether the target virtual character controlled by the user is in any helper means being in some helper regardless of whether other virtual characters are in it besides the target virtual character; even a helper containing only the target virtual character counts as the helper-joined state. It can then be determined whether interactive objects in the helper-not-joined state are to be taken as the target screening condition.
Further, a join-helper request may be sent to the third interactive object, or to other interactive objects in the same helper as the third interactive object. When the third interactive object approves the join-helper request sent by the target virtual character, the target virtual character can be added to the helper of the third interactive object.
Further, if the target virtual character controlled by the user is not in any helper, that is, the target virtual character is determined to be in the helper-not-joined state, interactive objects in the helper-joined state are taken as the target screening condition, and a third interactive object in the helper-joined state indicated by the target distinguishing mode may be determined from the interactive objects in the virtual scene. In other words, the target distinguishing mode screens the third interactive object according to whether the target virtual character is in a helper: if the target virtual character is in no helper, it can be assumed to want to join one, so the target distinguishing mode correspondingly screens out third interactive objects that are in a helper.
Further, a helper invitation request may be sent to the third interactive object; when the third interactive object approves the helper invitation request sent by the target virtual character, the third interactive object can be made to join the helper of the target virtual character.
Conversely, if the target virtual character controlled by the user is in a helper, a third interactive object not in any helper, as indicated by the target distinguishing mode, may be determined from the interactive objects in the virtual scene; that is, when the target virtual character is determined to be in the helper-joined state, interactive objects in the helper-not-joined state are taken as the target screening condition. In other words, if the target virtual character is in a helper, it can be assumed to want to invite virtual characters not in any helper to join its own helper, so the target distinguishing mode correspondingly screens out second virtual characters that are in no helper.
In some embodiments, the target distinguishing mode may further include distinguishing by the same helper, for example the distinguishing mode "camp", the current character state including the helper-joined state. Specifically, in response to a triggering operation on the helper screening component, the helper state of the target virtual character is determined; if the target virtual character is in the helper-joined state, interactive objects in the same helper as the target virtual character are taken as the target screening condition, and the interactive objects in the same helper as the target virtual character are then screened out. That is, through this target distinguishing mode, third interactive objects in the same camp as the target virtual character can be screened out.
Fig. 10 illustrates an exemplary schematic diagram of screening a first interactive object that meets a helper screening criteria according to an embodiment of the present application.
Referring to fig. 10, further, in response to the target screening condition being interactive objects in the same helper camp as the target virtual character, the display of the camp information corresponding to the third interactive object and of the interactive function components in the graphical user interface may be controlled. For example, if distinguishing mode B corresponds to the target distinguishing mode of distinguishing by helper camp, then when the end point P of the sliding track is moved to the position corresponding to first interactive object X, the camp information of first interactive object X, for example its position of deputy group leader in the camp, and interactive function components for functions such as teaming, private chat, appointment and removal can be displayed in the graphical user interface. Clicking the corresponding interactive function component triggers the corresponding interactive function for first interactive object X; that is, first interactive object X is determined to be the third interactive object. For example, clicking the teaming interactive function component, if the target virtual character is in a team, sends team invitation information to first interactive object X, and if first interactive object X confirms the invitation, first interactive object X is added to the team of the target virtual character.
If the target virtual character is not in any team while first interactive object X is, a teaming application can be sent to first interactive object X, and if first interactive object X confirms the application, the target virtual character is added to the team of first interactive object X.
As another example, clicking the private chat interactive function component can call up a dialog box with first interactive object X, through which text, voice or image information can be sent to first interactive object X.
Still further, the target virtual character may be controlled to change the position of the third interactive object within the same helper camp, or to remove the third interactive object from it. For example, clicking the appointment interactive function component can adjust the position of first interactive object X in the camp, for example from deputy group leader to group leader; clicking the removal interactive function component can remove first interactive object X from the current camp, so that it is no longer in the same camp as the target virtual character.
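The four interactive function components described for a same-camp member can be sketched as a single dispatcher. The action names (teaming, private chat, appointment, removal) follow the text; everything else is an assumption for illustration.

def send(recipient, message):
    print(f"-> {recipient['name']}: {message}")

def on_function_component(action, target_char, member):
    """Hypothetical handlers for the interactive function components of fig. 10."""
    if action == "teaming":
        if target_char["in_team"]:
            send(member, "team_invitation")        # invite member to our team
        else:
            send(member, "join_team_application")  # apply to join member's team
    elif action == "private_chat":
        send(member, "open_dialog")                # text / voice / image messages
    elif action == "appointment":
        member["position"] = "group leader"        # e.g. deputy group leader -> group leader
    elif action == "removal":
        member["camp"] = None                      # no longer in the same camp

member = {"name": "X", "position": "deputy group leader", "camp": "C1"}
on_function_component("appointment", {"in_team": True}, member)
print(member["position"])  # group leader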
Fig. 11 illustrates an exemplary schematic diagram of screening a first interactive object for compliance with pet or ride screening conditions in accordance with an embodiment of the present application.
In some embodiments, the target distinguishing mode may further include distinguishing by pet or ride state; this distinguishing mode can screen interactive objects by special props, the current character state including an owned-pet state and an unowned-pet state, or an owned-ride state and an unowned-ride state. Specifically, in response to a triggering operation on the pet screening component, the pet state of the target virtual character is determined: in response to the target virtual character being in the owned-pet state, interactive objects in the unowned-pet state are taken as the target screening condition; or, in response to the target virtual character being in the unowned-pet state, interactive objects in the owned-pet state are taken as the target screening condition.
Alternatively, in response to the target virtual character being in the owned-ride state, interactive objects in the unowned-ride state are taken as the target screening condition; or, in response to the target virtual character being in the unowned-ride state, interactive objects in the owned-ride state are taken as the target screening condition.
That is, through this target distinguishing mode, third interactive objects that own, or do not own, a pet or ride can be screened out.
Referring to fig. 11, further, in response to the target screening condition being interactive objects in the unowned-pet (or owned-pet) state, the target virtual character is controlled to issue a pet-selling request (or a pet-purchase or pet-viewing request) to the third interactive object. Likewise, in response to the target screening condition being interactive objects in the unowned-ride (or owned-ride) state, the target virtual character is controlled to issue a ride-selling request (or a ride-purchase or ride-viewing request) to the third interactive object.
For example, when the target screening condition is determined to be interactive objects in the unowned pet state, the target virtual character may be controlled to issue a pet selling request to the third interactive object; that is, the target virtual character is controlled to issue a transaction application to the third interactive object so as to sell at least one pet owned by the target virtual character to the third interactive object.
Likewise, when the target screening condition is determined to be interactive objects in the unowned riding state, the target virtual character may be controlled to issue a riding selling request to the third interactive object; that is, the target virtual character is controlled to issue a transaction application to the third interactive object so as to sell at least one riding owned by the target virtual character to the third interactive object.
Of course, when the target screening condition is determined to be interactive objects in the owned pet state, the target virtual character may be controlled to issue a pet purchase or viewing request to the third interactive object; for example, the target virtual character is controlled to issue a transaction application to the third interactive object, so that the target virtual character purchases at least one pet from the third interactive object, or views pet information corresponding to at least one pet owned by the third interactive object.
Similarly, when the target screening condition is determined to be interactive objects in the owned riding state, the target virtual character may be controlled to issue a riding purchase or viewing request to the third interactive object; for example, the target virtual character is controlled to issue a transaction application to the third interactive object, so that the target virtual character purchases at least one riding from the third interactive object, or views riding information corresponding to at least one riding owned by the third interactive object.
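A minimal sketch of how the target interaction behaviour could follow from the screening condition (assumed names, not this application's actual implementation): objects screened as owners become counterparties for purchase or viewing requests, non-owners for selling requests.

```typescript
// Illustrative sketch; names are assumptions.

type ResourceKind = "pet" | "riding";
type RequestType = "sell" | "purchase" | "view";

// Requests available against the screened (third) interactive object.
function availableRequests(screenedObjectsOwnResource: boolean): RequestType[] {
  // Screened objects own the resource -> the target character may buy or view.
  // Screened objects do not own it -> the target character may sell its own.
  return screenedObjectsOwnResource ? ["purchase", "view"] : ["sell"];
}

function sendRequest(
  targetId: string,
  thirdObjectId: string,
  kind: ResourceKind,
  request: RequestType,
): void {
  // Stand-in for issuing the transaction application described above.
  console.log(`${targetId} -> ${thirdObjectId}: ${kind} ${request} request`);
}

// Example: a pet owner screened for non-owners and may sell.
for (const r of availableRequests(false)) {
  sendRequest("target", "X", "pet", r); // target -> X: pet sell request
}
```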
Specifically, for example, if distinguishing mode B, which distinguishes according to pet or riding state, is the target distinguishing mode, then when the end point P of the sliding track moves to the position corresponding to the first interactive object X, character information corresponding to the first interactive object X may be displayed in the graphical user interface, and resource information corresponding to the target virtual resource may also be displayed; that is, the first interactive object X is determined as the third interactive object. For example, if the target virtual resource is a riding or a pet, the resource information may be the corresponding image of the riding or pet, such as the mark S in fig. 11.
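The selection by sliding can be read as a simple hit test on the slide end point; below is a sketch under that assumption, with hypothetical names.

```typescript
// Illustrative sketch; names are assumptions.

interface Point { x: number; y: number; }
interface Rect { x: number; y: number; w: number; h: number; }

interface FirstInteractiveObject {
  id: string;
  bounds: Rect;           // screen-space bounds of the object's model
  resourceImage?: string; // e.g. the pet/riding image shown as mark S
}

function contains(r: Rect, p: Point): boolean {
  return p.x >= r.x && p.x <= r.x + r.w && p.y >= r.y && p.y <= r.y + r.h;
}

// When the slide end point P lands on a first interactive object, that
// object becomes the third interactive object and its info is displayed.
function pickThirdObject(
  endPoint: Point,
  candidates: FirstInteractiveObject[],
): FirstInteractiveObject | undefined {
  return candidates.find((obj) => contains(obj.bounds, endPoint));
}
```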
Still further, in response to the target screening condition being interactive objects in the owned pet state, the target virtual character may be controlled to issue a pet purchase request to the virtual character; if the virtual character confirms the pet purchase request, a first purchase interface is controlled to be displayed in the graphical user interface for purchasing the target pet owned by the virtual character.
Alternatively, in response to the target screening condition being interactive objects in the owned pet state, the target virtual character may be controlled to send a pet viewing request to the virtual character; in response to the virtual character confirming the pet viewing request, a first display interface is controlled to be displayed in the graphical user interface for viewing the pet information of each dimension corresponding to the target pet owned by the virtual character.
Similarly, for riding, in response to the target screening condition being interactive objects in the owned riding state, the target virtual character may be controlled to issue a riding purchase request to the virtual character; if the virtual character confirms the riding purchase request, a third purchase interface is controlled to be displayed in the graphical user interface for purchasing the target riding owned by the virtual character.
Alternatively, in response to the target screening condition being interactive objects in the owned riding state, the target virtual character may be controlled to send a riding viewing request to the virtual character; in response to the virtual character confirming the riding viewing request, a third display interface is controlled to be displayed in the graphical user interface for viewing the riding information of each dimension corresponding to the target riding owned by the virtual character.
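A hedged sketch of the request-confirm-display flow under the interface numbering used above (pets: first interfaces; ridings: third interfaces); the function names are assumptions.

```typescript
// Illustrative sketch; names are assumptions.

type ResourceKind = "pet" | "riding";
type Action = "purchase" | "view";

function interfaceFor(kind: ResourceKind, action: Action): string {
  if (kind === "pet") {
    return action === "purchase" ? "first purchase interface" : "first display interface";
  }
  return action === "purchase" ? "third purchase interface" : "third display interface";
}

// Only a confirmed request opens the corresponding interface in the GUI.
function handleConfirmedRequest(
  kind: ResourceKind,
  action: Action,
  confirmed: boolean,
  show: (ui: string) => void,
): void {
  if (confirmed) {
    show(interfaceFor(kind, action));
  }
}

// Example: a confirmed riding viewing request opens the third display interface.
handleConfirmedRequest("riding", "view", true, (ui) => console.log(ui));
```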
If the third interactive object is not a virtual character but a virtual store, then in response to the target screening condition being interactive objects in the owned pet state, the target virtual character may be controlled to issue a pet purchase request to the virtual store; if the virtual store confirms the pet purchase request, a second purchase interface is controlled to be displayed in the graphical user interface for acquiring all pets owned by the virtual store. For example, if the pet that the target virtual character wants to purchase is a type-A pet, the second purchase interface displays all type-A pets sold by the virtual store as well as all type-A pets that other virtual characters want to sell through the virtual store. The second purchase interface thus differs from the first purchase interface in that multiple identical pets may be displayed: a virtual character typically owns only one of a given pet, whereas the virtual store may list several identical ones.
Likewise, in response to the target screening condition being interactive objects in the owned pet state, the target virtual character may be controlled to issue a pet viewing request to the virtual store; if the virtual store confirms the pet viewing request, a second viewing interface is controlled to be displayed in the graphical user interface for displaying all pets owned by the virtual store. For example, if the pet that the target virtual character wants to view is a type-A pet, the second viewing interface displays all type-A pets sold by the virtual store as well as all type-A pets that other virtual characters want to sell through the virtual store; like the second purchase interface, it differs from the first viewing interface in that multiple identical pets may be displayed. Of course, corresponding to the viewing request, the virtual store may provide a viewing function, such as a pet image containing the pet information of the pet that the target virtual character wants to view, even if the corresponding pet has not yet been put on sale; a selling function, however, is not provided.
Similarly, in response to the target screening condition being interactive objects in the owned riding state, the target virtual character may be controlled to issue a riding purchase request to the virtual store; if the virtual store confirms the riding purchase request, a fourth purchase interface is controlled to be displayed in the graphical user interface for acquiring all ridings owned by the virtual store. For example, if the riding that the target virtual character wants to purchase is a type-A riding, the fourth purchase interface displays all type-A ridings sold by the virtual store as well as all type-A ridings that other virtual characters want to sell through the virtual store. The fourth purchase interface thus differs from the third purchase interface in that multiple identical ridings may be displayed: a virtual character typically owns only one of a given riding, whereas the virtual store may list several identical ones.
And in response to the target screening condition being interactive objects in the owned riding state, the target virtual character may be controlled to issue a riding viewing request to the virtual store; if the virtual store confirms the riding viewing request, a fourth viewing interface is controlled to be displayed in the graphical user interface for displaying all ridings owned by the virtual store. For example, if the riding that the target virtual character wants to view is a type-A riding, the fourth viewing interface displays all type-A ridings sold by the virtual store as well as all type-A ridings that other virtual characters want to sell through the virtual store; it differs from the third viewing interface in that multiple identical ridings may be displayed. Of course, corresponding to the viewing request, the virtual store may provide a viewing function, such as a riding image containing the riding information of the riding that the target virtual character wants to view, even if the corresponding riding has not yet been put on sale; a selling function, however, is not provided.
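Why the store interfaces can show duplicates while the character interfaces cannot is essentially an aggregation rule; a sketch, with assumed names:

```typescript
// Illustrative sketch; names are assumptions.

interface Listing {
  itemType: string; // e.g. "type-A pet" or "type-A riding"
  sellerId: string; // the store itself or a consigning character
}

// The store interface aggregates the store's own stock with items other
// characters sell through the store, so identical items may repeat.
function storeListings(
  storeStock: Listing[],
  consignments: Listing[],
  wantedType: string,
): Listing[] {
  return [...storeStock, ...consignments].filter((l) => l.itemType === wantedType);
}

// A character interface, by contrast, holds at most one of a given item.
function characterListing(owned: Listing[], wantedType: string): Listing | undefined {
  return owned.find((l) => l.itemType === wantedType);
}
```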
In some embodiments, a purchase interface and a viewing interface may also be displayed simultaneously. For example, by clicking the mark S in fig. 11, a purchase interface, e.g., a mall interface, and a viewing interface, e.g., a graphic interface containing the pet or riding image, may both be displayed in the graphical user interface. In response to a determination operation for either the purchase interface or the viewing interface, the other interface is hidden in the graphical user interface. That is, if a click on the purchase interface, i.e., the mall interface, is determined, the viewing interface, i.e., the graphic interface, is hidden; conversely, if a click on the viewing interface, i.e., the graphic interface, is determined, the purchase interface, i.e., the mall interface, is hidden.
It should be noted that after the selection operation terminates, it may also be determined whether the pet or riding is in an acquirable state, for example, whether the pet or riding has been put on the mall. If the pet or riding is in an acquirable state, i.e., it has been put on the mall, the purchase interface for acquiring the pet or riding, i.e., the mall interface, may be controlled to be displayed in the graphical user interface. Alternatively, if the pet or riding is not in an acquirable state, e.g., it has not yet been put on the mall, the viewing interface for displaying the pet or riding, i.e., the graphic interface, may be controlled to be displayed in the graphical user interface.
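The mall-versus-graphic choice and the mutual hiding described above reduce to one availability check plus an exclusive toggle; a hedged sketch with assumed names:

```typescript
// Illustrative sketch; names are assumptions.

interface ResourceInfo {
  id: string;
  listedInMall: boolean; // whether the pet/riding has been put on the mall
}

type InterfaceId = "mallInterface" | "graphicInterface";

// Acquirable resources open the mall (purchase) interface,
// others the graphic (viewing) interface.
function interfaceAfterSelection(resource: ResourceInfo): InterfaceId {
  return resource.listedInMall ? "mallInterface" : "graphicInterface";
}

// Determining one interface hides the other.
function onDetermine(chosen: InterfaceId): { visible: InterfaceId; hidden: InterfaceId } {
  const hidden: InterfaceId =
    chosen === "mallInterface" ? "graphicInterface" : "mallInterface";
  return { visible: chosen, hidden };
}
```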
Of course, the pet information corresponding to a pet, or the riding information corresponding to a riding, may also be displayed by clicking the mark S in fig. 11; for example, the pet's name, quality, attributes, skills and other information may be further displayed.
In some embodiments, while the first interactive objects are displayed, the second interactive objects that do not meet the target screening condition, that is, the interactive objects other than the first interactive objects, are hidden in the virtual scene, ensuring that the user can select, through the triggering operation, a first interactive object that would otherwise be blocked by other interactive objects.
It should be noted that if, during operation, the selection operation continuous with the triggering operation is interrupted, the triggering operation is deemed terminated. In that case, even if the first interactive objects have been screened out, or a third interactive object has been screened from the first interactive objects, the target virtual character, the first interactive objects and the second interactive objects are all controlled to be displayed in the virtual scene; that is, the hidden interactive objects are displayed again, screening corresponding to any distinguishing mode is no longer performed because the triggering operation has terminated, and all interactive objects present before the screening condition component was triggered are displayed in the interface.
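The hide-then-restore behaviour from the two preceding paragraphs can be sketched as visibility flags (assumed data shapes, not this application's actual implementation):

```typescript
// Illustrative sketch; names are assumptions.

interface SceneObject {
  id: string;
  visible: boolean;
}

// While screening is active, only first interactive objects stay visible,
// so previously blocked objects become selectable.
function applyScreening(allObjects: SceneObject[], firstIds: Set<string>): void {
  for (const obj of allObjects) {
    obj.visible = firstIds.has(obj.id);
  }
}

// Interrupting the continuous selection terminates the trigger: the screen is
// cancelled and every object present before the trigger is shown again.
function onTriggerTerminated(allObjects: SceneObject[]): void {
  for (const obj of allObjects) {
    obj.visible = true;
  }
}
```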
From the foregoing, it can be seen that the interaction method, apparatus, terminal device and computer readable storage medium provided in the present application provide a graphical user interface through a terminal device, where at least part of a virtual scene is displayed in the graphical user interface, the virtual scene including a plurality of interactive objects and a target virtual character controlled by the terminal device. At least one screening condition component is displayed in the graphical user interface, each screening condition component corresponding to a distinguishing mode; in response to a triggering operation for a target screening condition component in the at least one screening condition component, a target screening condition is determined according to the target distinguishing mode corresponding to the target screening condition component and the current character state of the target virtual character; and a first interactive object satisfying the target screening condition is determined from the plurality of interactive objects, and the target virtual character and the first interactive object are controlled to be displayed in the virtual scene. By providing screening condition components corresponding to different screening conditions, the interactive objects that the user wants to display or screen in different scenes can be accommodated, and the interactive objects satisfying the target screening condition can then be displayed.
It should be noted that, the method of the embodiments of the present application may be performed by a single device, for example, a computer or a server. The method of the embodiment can also be applied to a distributed scene, and is completed by mutually matching a plurality of devices. In the case of such a distributed scenario, one of the devices may perform only one or more steps of the methods of embodiments of the present application, and the devices may interact with each other to complete the methods.
It should be noted that some embodiments of the present application are described above. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments described above and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
Fig. 12 is a schematic structural diagram of an interaction device according to an embodiment of the present application.
Based on the same inventive concept, the application also provides an interaction device corresponding to the method of any embodiment.
Referring to fig. 12, a graphical user interface is provided through a terminal device, at least part of a virtual scene being displayed in the graphical user interface, the virtual scene including a plurality of interactive objects and a target virtual character controlled by the terminal device. The interaction apparatus includes: a first display module, a determining module and a second display module; wherein,
a first display module configured to display at least one screening criteria component in the graphical user interface, wherein each screening criteria component corresponds to a manner of distinction;
the determining module is configured to respond to the triggering operation aiming at the target screening condition component in the at least one screening condition component, and determine a target screening condition according to a target distinguishing mode corresponding to the target screening condition component and the current character state of the target virtual character;
and a second display module configured to determine a first interactive object satisfying the target screening condition from the plurality of interactive objects, and control the display of the target virtual character and the first interactive object in the virtual scene.
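A structural sketch of the three modules and how a trigger flows through them; the interface names below are illustrative assumptions, not the reference numerals of fig. 12.

```typescript
// Illustrative sketch; names are assumptions.

interface FirstDisplayModule {
  showScreeningComponents(): void;
}

interface DeterminingModule {
  // Maps a triggered component plus the current character state to a condition.
  determineCondition(componentId: string, characterState: string): string;
}

interface SecondDisplayModule {
  // Displays the target character and the matching first interactive objects.
  showMatches(condition: string): void;
}

class InteractionDevice {
  constructor(
    private readonly first: FirstDisplayModule,
    private readonly determining: DeterminingModule,
    private readonly second: SecondDisplayModule,
  ) {}

  start(): void {
    this.first.showScreeningComponents();
  }

  onComponentTriggered(componentId: string, characterState: string): void {
    const condition = this.determining.determineCondition(componentId, characterState);
    this.second.showMatches(condition);
  }
}
```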
In one possible implementation, the second display module is further configured to:
and controlling not to display a second interactive object which does not meet the target screening condition in the interactive objects in the virtual scene.
In one possible implementation, the at least one filtering condition component includes: a team screening component;
the distinguishing mode comprises the following steps: distinguishing according to team formation states;
the current character state includes: an enqueued state and a non-enqueued state;
the determination module is further configured to:
responding to the triggering operation aiming at the team screening component, and determining a team forming state corresponding to the target virtual character;
in response to the target virtual character being in the enqueued state, determining that the interactive object in the non-enqueued state is to be used as a target screening condition, or in response to the target virtual character being in the non-enqueued state, determining that the interactive object in the enqueued state is to be used as a target screening condition.
In one possible implementation, the at least one filtering condition component includes: a helper screening component;
the distinguishing mode comprises the following steps: distinguishing according to the helper state;
the current character state includes: a helper-added state and a helper-not-added state;
The determination module is further configured to:
responding to the triggering operation aiming at the helper screening component, and determining a helper state corresponding to the target virtual character;
and determining that the interactive object in the helper-not-added state is used as a target screening condition in response to the target virtual character being in the helper-added state, or determining that the interactive object in the helper-added state is used as a target screening condition in response to the target virtual character being in the helper-not-added state.
In one possible implementation, the at least one filtering condition component includes: a helper screening component;
the distinguishing mode comprises the following steps: distinguishing according to helper camp;
the current character state includes: the helper-added state;
the determination module is further configured to:
responding to the triggering operation aiming at the helper screening component, and determining a helper state corresponding to the target virtual character;
and determining an interactive object which is in the same helper camp as the target virtual character as a target screening condition in response to the target virtual character being in the helper-added state.
In one possible implementation, the at least one filtering condition component includes: a pet screening component;
The distinguishing mode comprises the following steps: distinguishing according to the states of the pets;
the current character state includes: an owned pet state and an unowned pet state;
the determination module is further configured to:
responding to the triggering operation aiming at the pet screening component, and determining the pet state corresponding to the target virtual character;
and determining that the interactive object in the unowned pet state is used as a target screening condition in response to the target virtual character being in the owned pet state, or determining that the interactive object in the owned pet state is used as the target screening condition in response to the target virtual character being in the unowned pet state.
In one possible implementation, the at least one filtering condition component includes: a riding screening component;
the distinguishing mode comprises the following steps: distinguishing according to riding states;
the current character state includes: an owned riding state and an unowned riding state;
the determination module is further configured to:
responding to the triggering operation aiming at the riding screening component, and determining the riding state corresponding to the target virtual character;
in response to the target virtual character being in the owned riding state, determining interactive objects in the unowned riding state as the target screening condition, or in response to the target virtual character being in the unowned riding state, determining interactive objects in the owned riding state as the target screening condition.
In one possible implementation manner, the apparatus further includes: an interaction module;
the interaction module is further configured to:
responding to a selection operation continuous with the triggering operation, and determining a third interaction object from the first interaction objects according to the selection operation;
and controlling the target virtual character to execute target interaction behavior on the third interaction object, wherein the target interaction behavior is determined by the target screening condition.
In one possible implementation, the selecting operation includes: a sliding operation from the target screening condition component to the third interactive object.
In one possible implementation, the interaction module is further configured to:
in response to the target screening condition being interactive objects in the non-team-formed/team-formed state, controlling the target virtual character to send a team invitation request to the third interactive object/controlling the target virtual character to send a team joining request to the third interactive object; or
in response to the target screening condition being interactive objects in the helper-not-added/helper-added state, controlling the target virtual character to send a helper invitation request to the third interactive object/controlling the target virtual character to send a helper joining request to the third interactive object.
In one possible implementation, the interaction module is further configured to:
and in response to the target screening condition being interactive objects in the same helper camp as the target virtual character, controlling the target virtual character to change the position of the third interactive object within the same helper camp, or controlling the target virtual character to remove the third interactive object from the same helper camp.
In one possible implementation, the interaction module is further configured to:
in response to the target screening condition being interactive objects in the unowned/owned pet state, controlling the target virtual character to send a pet selling request to the third interactive object/controlling the target virtual character to send a pet purchase request or a pet viewing request to the third interactive object; or
in response to the target screening condition being interactive objects in the unowned/owned riding state, controlling the target virtual character to send a riding selling request to the third interactive object/controlling the target virtual character to send a riding purchase request or a riding viewing request to the third interactive object.
In one possible implementation manner, the third interaction object includes: a virtual character;
the interaction module is further configured to:
responding to the target screening condition as an interactive object in the state of having the pet, and controlling the target virtual character to send a pet purchase request to the virtual character;
and in response to the virtual character confirming the pet purchase request, controlling to display a first purchase interface for acquiring the target pet owned by the virtual character in a graphical user interface.
In one possible implementation manner, the third interaction object includes: a virtual character;
the interaction module is further configured to:
responding to the target screening condition as an interactive object in the state of having the pet, and controlling the target virtual character to send a pet viewing request to the virtual character;
and responding to the virtual character to confirm the pet viewing request, and controlling to display a first display interface for displaying the target pet owned by the virtual character in a graphical user interface.
In one possible implementation manner, the third interaction object includes: a virtual store;
The interaction module is further configured to:
responding to the target screening condition to be an interactive object in the state of having the pet, and controlling the target virtual character to send a pet purchase request to the virtual store;
and in response to the virtual store confirming the pet purchase request, controlling to display a second purchase interface for acquiring all pets owned by the virtual store in a graphical user interface.
In one possible implementation manner, the third interaction object includes: a virtual store;
the interaction module is further configured to:
responding to the target screening condition to be an interactive object in the state of having the pet, and controlling the target virtual character to send a pet viewing request to the virtual store;
and in response to the virtual store confirming the pet view request, controlling to display a second display interface for displaying all pets owned by the virtual store in a graphical user interface.
In one possible implementation manner, the third interaction object includes: a virtual character;
the interaction module is further configured to:
responding to the target screening condition as an interaction object in an already-owned riding state, and controlling the target virtual character to send a riding purchase request to the virtual character;
And in response to the virtual character confirming the riding purchase request, controlling to display a third purchase interface for acquiring the target riding owned by the virtual character in a graphical user interface.
In one possible implementation manner, the third interaction object includes: a virtual character;
the interaction module is further configured to:
responding to the target screening condition as an interaction object in an already-owned riding state, and controlling the target virtual character to send a riding viewing request to the virtual character;
and in response to the virtual character confirming the riding viewing request, controlling to display a third display interface for displaying the target riding owned by the virtual character in a graphical user interface.
In one possible implementation manner, the third interaction object includes: a virtual store;
the interaction module is further configured to:
responding to the target screening condition as an interaction object in an already-owned riding state, and controlling the target virtual character to send a riding purchase request to the virtual store;
in response to the virtual store confirming the ride purchase request, control displays a fourth purchase interface in a graphical user interface for acquiring all rides owned by the virtual store.
In one possible implementation manner, the third interaction object includes: a virtual store;
the interaction module is further configured to:
responding to the target screening condition as an interaction object in an already-owned riding state, and controlling the target virtual character to send a riding viewing request to the virtual store;
in response to the virtual store confirming the ride-view request, control displays a fourth presentation interface in a graphical user interface for presenting all rides owned by the virtual store.
In one possible implementation, the first display module is further configured to:
displaying a screening interaction component in the graphical user interface;
and responding to the triggering operation aiming at the screening interaction component, and displaying at least one screening condition component in the graphical user interface.
For convenience of description, the above devices are described as being functionally divided into various modules, respectively. Of course, the functions of each module may be implemented in the same piece or pieces of software and/or hardware when implementing the present application.
The device of the foregoing embodiment is configured to implement the corresponding interaction method in any of the foregoing embodiments, and has the beneficial effects of the corresponding method embodiment, which is not described herein.
Fig. 13 shows an exemplary structural schematic diagram of a terminal device provided in an embodiment of the present application.
Based on the same inventive concept, the application also provides a terminal device corresponding to the method of any embodiment, which comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the processor realizes the interaction method of any embodiment when executing the program. Fig. 13 shows a more specific hardware structure of a terminal device according to this embodiment, where the device may include: processor 1310, memory 1320, input/output interface 1330, communication interface 1340, and bus 1350. Wherein processor 1310, memory 1320, input/output interface 1330, and communication interface 1340 implement a communication connection between each other within the device via bus 1350.
The processor 1310 may be implemented by a general-purpose CPU (Central Processing Unit), microprocessor, application specific integrated circuit (Application Specific Integrated Circuit, ASIC), or one or more integrated circuits, etc. for executing relevant programs to implement the technical solutions provided in the embodiments of the present disclosure.
The memory 1320 may be implemented in the form of ROM (Read Only Memory), RAM (Random Access Memory), static storage device, dynamic storage device, or the like. Memory 1320 may store an operating system and other application programs, and when the embodiments of the present specification are implemented in software or firmware, the associated program code is stored in memory 1320 and executed by processor 1310.
The input/output interface 1330 is used to connect with an input/output module to realize information input and output. The input/output module may be configured as a component in a device (not shown in the figure) or may be external to the device to provide corresponding functionality. Wherein the input devices may include a keyboard, mouse, touch screen, microphone, various types of sensors, etc., and the output devices may include a display, speaker, vibrator, indicator lights, etc.
Communication interface 1340 is provided to connect communication modules (not shown) to enable communication interactions of the device with other devices. The communication module may implement communication through a wired manner (such as USB, network cable, etc.), or may implement communication through a wireless manner (such as mobile network, WIFI, bluetooth, etc.).
Bus 1350 includes a path to transfer information between elements of the device (e.g., processor 1310, memory 1320, input/output interface 1330, and communication interface 1340).
It should be noted that although the above-described device only shows processor 1310, memory 1320, input/output interface 1330, communication interface 1340, and bus 1350, the device may include other components necessary to achieve proper operation in an implementation. Furthermore, it will be understood by those skilled in the art that the above-described apparatus may include only the components necessary to implement the embodiments of the present description, and not all the components shown in the drawings.
The terminal device of the foregoing embodiment is configured to implement the corresponding interaction method in any of the foregoing embodiments, and has the beneficial effects of the corresponding method embodiment, which is not described herein.
The memory 1320 stores machine-readable instructions executable by the processor 1310. When the terminal device runs, the processor 1310 communicates with the memory 1320 over the bus 1350, and executes the following instructions:
displaying at least one screening condition component in the graphical user interface, wherein each screening condition component corresponds to a distinguishing mode;
Responding to the triggering operation aiming at a target screening condition component in the at least one screening condition component, and determining a target screening condition according to a target distinguishing mode corresponding to the target screening condition component and the current character state of the target virtual character;
and determining a first interaction object meeting the target screening condition from the plurality of interaction objects, and controlling the target virtual character and the first interaction object to be displayed in the virtual scene.
In a possible implementation manner, in an instruction executed by the processor 1310, the method further includes:
and controlling not to display a second interactive object which does not meet the target screening condition in the interactive objects in the virtual scene.
In a possible embodiment, the at least one filtering condition component includes: a team screening component;
the distinguishing mode comprises the following steps: distinguishing according to team formation states; the current character state includes: an enqueued state and a non-enqueued state; in the instructions executed by the processor 1310, the determining, in response to the triggering operation of the target screening condition component in the at least one screening condition component, a target screening condition according to the target distinguishing mode corresponding to the target screening condition component and the current character state of the target virtual character includes:
Responding to the triggering operation aiming at the team screening component, and determining a team forming state corresponding to the target virtual character;
in response to the target virtual character being in the enqueued state, determining that the interactive object in the non-enqueued state is to be used as a target screening condition, or in response to the target virtual character being in the non-enqueued state, determining that the interactive object in the enqueued state is to be used as a target screening condition.
In a possible embodiment, the at least one filtering condition component includes: a helper screening component; the distinguishing mode comprises the following steps: distinguishing according to the helper state; the current character state includes: a helper-added state and a helper-not-added state; in the instructions executed by the processor 1310, the determining, in response to the triggering operation of the target screening condition component in the at least one screening condition component, a target screening condition according to the target distinguishing mode corresponding to the target screening condition component and the current character state of the target virtual character includes:
responding to the triggering operation aiming at the helper screening component, and determining a helper state corresponding to the target virtual character;
and determining that the interactive object in the helper-not-added state is used as a target screening condition in response to the target virtual character being in the helper-added state, or determining that the interactive object in the helper-added state is used as a target screening condition in response to the target virtual character being in the helper-not-added state.
In a possible embodiment, the at least one filtering condition component includes: a helper screening component; the distinguishing mode comprises the following steps: distinguishing according to helper camp; the current character state includes: the helper-added state; in the instructions executed by the processor 1310, the determining, in response to the triggering operation of the target screening condition component in the at least one screening condition component, a target screening condition according to the target distinguishing mode corresponding to the target screening condition component and the current character state of the target virtual character includes:
responding to the triggering operation aiming at the helper screening component, and determining a helper state corresponding to the target virtual character;
and determining an interactive object which is in the same helper camp as the target virtual character as a target screening condition in response to the target virtual character being in the helper-added state.
In a possible embodiment, the at least one filtering condition component includes: a pet screening component; the distinguishing mode comprises the following steps: distinguishing according to the states of the pets; the current character state includes: an owned pet state and an unowned pet state; in the instructions executed by the processor 1310, the determining, in response to the triggering operation of the target screening condition component in the at least one screening condition component, a target screening condition according to the target distinguishing mode corresponding to the target screening condition component and the current character state of the target virtual character includes:
Responding to the triggering operation aiming at the pet screening component, and determining the pet state corresponding to the target virtual character;
and determining that the interactive object in the unowned pet state is used as a target screening condition in response to the target virtual character being in the owned pet state, or determining that the interactive object in the owned pet state is used as the target screening condition in response to the target virtual character being in the unowned pet state.
In a possible embodiment, the at least one filtering condition component includes: a riding screening component; the distinguishing mode comprises the following steps: distinguishing according to riding states; the current character state includes: an owned riding state and an unowned riding state; in the instructions executed by the processor 1310, the determining, in response to the triggering operation of the target screening condition component in the at least one screening condition component, a target screening condition according to the target distinguishing mode corresponding to the target screening condition component and the current character state of the target virtual character includes:
responding to the triggering operation aiming at the riding screening component, and determining the riding state corresponding to the target virtual character;
in response to the target virtual character being in the owned riding state, determining interactive objects in the unowned riding state as the target screening condition, or in response to the target virtual character being in the unowned riding state, determining interactive objects in the owned riding state as the target screening condition.
In a possible implementation manner, in an instruction executed by the processor 1310, the method further includes:
responding to a selection operation continuous with the triggering operation, and determining a third interaction object from the first interaction objects according to the selection operation;
and controlling the target virtual character to execute target interaction behavior on the third interaction object, wherein the target interaction behavior is determined by the target screening condition.
In a possible implementation, among the instructions executed by the processor 1310, the selecting operation includes: a sliding operation from the target screening condition component to the third interactive object.
In a possible implementation manner, in the instructions executed by the processor 1310, the controlling the target virtual character to perform a target interaction behavior on the third interaction object includes:
in response to the target screening condition being interactive objects in the non-team-formed/team-formed state, controlling the target virtual character to send a team invitation request to the third interactive object/controlling the target virtual character to send a team joining request to the third interactive object; or
in response to the target screening condition being interactive objects in the helper-not-added/helper-added state, controlling the target virtual character to send a helper invitation request to the third interactive object/controlling the target virtual character to send a helper joining request to the third interactive object.
In a possible implementation manner, in the instructions executed by the processor 1310, the controlling the target virtual character to perform a target interaction behavior on the third interaction object includes:
and in response to the target screening condition being interactive objects in the same helper camp as the target virtual character, controlling the target virtual character to change the position of the third interactive object within the same helper camp, or controlling the target virtual character to remove the third interactive object from the same helper camp.
In a possible implementation manner, in the instructions executed by the processor 1310, the controlling the target virtual character to perform a target interaction behavior on the third interaction object includes:
in response to the target screening condition being interactive objects in the unowned/owned pet state, controlling the target virtual character to send a pet selling request to the third interactive object/controlling the target virtual character to send a pet purchase request or a pet viewing request to the third interactive object; or
in response to the target screening condition being interactive objects in the unowned/owned riding state, controlling the target virtual character to send a riding selling request to the third interactive object/controlling the target virtual character to send a riding purchase request or a riding viewing request to the third interactive object.
In a possible implementation manner, the third interaction object includes: a virtual character; in the instructions executed by the processor 1310, the controlling, in response to the target screening condition being the interactive object in the state of having the pet, the target virtual character to issue a pet purchase request to the third interactive object includes:
responding to the target screening condition as an interactive object in the state of having the pet, and controlling the target virtual character to send a pet purchase request to the virtual character;
and in response to the virtual character confirming the pet purchase request, controlling to display a first purchase interface for acquiring the target pet owned by the virtual character in a graphical user interface.
In one possible implementation manner, the third interaction object includes: a virtual character; in the instructions executed by the processor 1310, the controlling, in response to the target screening condition being the interactive object in the state of having the pet, the target virtual character to issue a pet viewing request to the third interactive object includes:
responding to the target screening condition as an interactive object in the state of having the pet, and controlling the target virtual character to send a pet viewing request to the virtual character;
And responding to the virtual character to confirm the pet viewing request, and controlling to display a first display interface for displaying the target pet owned by the virtual character in a graphical user interface.
In one possible implementation manner, the third interaction object includes: a virtual store; in the instructions executed by the processor 1310, the controlling, in response to the target screening condition being the interactive object in the state of having the pet, the target virtual character to issue a pet purchase request to the third interactive object includes:
responding to the target screening condition to be an interactive object in the state of having the pet, and controlling the target virtual character to send a pet purchase request to the virtual store;
and in response to the virtual store confirming the pet purchase request, controlling to display a second purchase interface for acquiring all pets owned by the virtual store in a graphical user interface.
In one possible implementation manner, the third interaction object includes: a virtual store; in the instructions executed by the processor 1310, the controlling, in response to the target screening condition being the interactive object in the state of having the pet, the target virtual character to issue a pet viewing request to the third interactive object includes:
Responding to the target screening condition to be an interactive object in the state of having the pet, and controlling the target virtual character to send a pet viewing request to the virtual store;
and in response to the virtual store confirming the pet view request, controlling to display a second display interface for displaying all pets owned by the virtual store in a graphical user interface.
In one possible implementation manner, the third interaction object includes: a virtual character; in the instructions executed by the processor 1310, the controlling, in response to the target screening condition being the interactive object in the already-owned riding state, the target virtual character to issue a riding purchase request to the third interactive object includes:
responding to the target screening condition as an interaction object in an already-owned riding state, and controlling the target virtual character to send a riding purchase request to the virtual character;
and in response to the virtual character confirming the riding purchase request, controlling to display a third purchase interface for acquiring the target riding owned by the virtual character in a graphical user interface.
In one possible implementation manner, the third interaction object includes: a virtual character; in the instructions executed by the processor 1310, the controlling, in response to the target screening condition being the interactive object in the already-owned riding state, the target virtual character to issue a riding viewing request to the third interactive object includes:
Responding to the target screening condition as an interaction object in an already-owned riding state, and controlling the target virtual character to send a riding viewing request to the virtual character;
and in response to the virtual character confirming the riding viewing request, controlling to display a third display interface for displaying the target riding owned by the virtual character in a graphical user interface.
In one possible implementation manner, the third interaction object includes: a virtual store; in the instructions executed by the processor 1310, the controlling, in response to the target screening condition being the interactive object in the already-owned riding state, the target virtual character to issue a riding purchase request to the third interactive object includes:
responding to the target screening condition as an interaction object in an already-owned riding state, and controlling the target virtual character to send a riding purchase request to the virtual store;
in response to the virtual store confirming the ride purchase request, control displays a fourth purchase interface in a graphical user interface for acquiring all rides owned by the virtual store.
In one possible implementation manner, the third interaction object includes: a virtual store; in the instructions executed by the processor 1310, the controlling, in response to the target screening condition being the interactive object in the already-owned riding state, the target virtual character to issue a riding viewing request to the third interactive object includes:
Responding to the target screening condition as an interaction object in an already-owned riding state, and controlling the target virtual character to send a riding viewing request to the virtual store;
in response to the virtual store confirming the ride-view request, control displays a fourth presentation interface in a graphical user interface for presenting all rides owned by the virtual store.
In one possible implementation, the displaying at least one filtering condition component in the graphical user interface in the instructions executed by the processor 1310 includes:
displaying a screening interaction component in the graphical user interface;
and responding to the triggering operation aiming at the screening interaction component, and displaying at least one screening condition component in the graphical user interface.
Based on the same inventive concept, corresponding to any of the above embodiments of the method, the present application further provides a non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the interaction method as described in any of the above embodiments.
The computer readable media of the present embodiments, including both permanent and non-permanent, removable and non-removable media, may be used to implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of storage media for a computer include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information that can be accessed by a computing device.
The storage medium of the foregoing embodiments stores computer instructions for causing the computer to execute the interaction method described in any of the foregoing embodiments, and has the beneficial effects of the corresponding method embodiments, which are not described herein.
Based on the same inventive concept, corresponding to the interaction method described in any of the above embodiments, the present disclosure also provides a computer program product comprising computer program instructions. In some embodiments, the computer program instructions may be executed by one or more processors of a computer to cause the computer and/or the processor to perform the described interaction method. For each step in the embodiments of the interaction method, the processor executing that step may belong to the execution subject corresponding to the step.
The computer program product of the above embodiment is configured to enable the computer and/or the processor to perform the interaction method according to any one of the above embodiments, and has the beneficial effects of corresponding method embodiments, which are not described herein again.
It can be appreciated that before using the technical solutions of the embodiments in the present application, the user is informed about the type, the use range, the use scenario, etc. of the related personal information in an appropriate manner, and the authorization of the user is obtained.
For example, in response to receiving an active request from a user, prompt information is sent to the user to explicitly prompt the user that the requested operation will require obtaining and using the user's personal information. The user can thus autonomously choose, according to the prompt information, whether to provide personal information to software or hardware, such as a terminal device, application program, server, or storage medium, that performs the operations of the technical solution.
As an alternative but non-limiting implementation, in response to receiving an active request from the user, the prompt information may be sent to the user by way of, for example, a popup window, in which the prompt information may be presented as text. In addition, the popup window may carry a selection control allowing the user to choose "agree" or "disagree" to providing personal information to the terminal device.
It will be appreciated that the above-described notification and user authorization acquisition process is merely illustrative, and not limiting of the implementation of the present application, and that other ways of satisfying relevant legal regulations may be applied to the implementation of the present application.
It will be appreciated by those skilled in the art that embodiments of the present application may be implemented as a system, method, or computer program product. Thus, the present application may be embodied in the form of: all hardware, all software (including firmware, resident software, micro-code, etc.), or a combination of hardware and software, generally referred to herein as a "circuit," "module," or "system." Furthermore, in some embodiments, the present application may also be embodied in the form of a computer program product in one or more computer-readable media containing computer-readable program code.
Any combination of one or more computer readable media may be employed. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium include: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In this document, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations of the present application may be written in one or more programming languages, including object oriented programming languages such as Java, Smalltalk, or C++, and conventional procedural programming languages such as the "C" programming language or similar languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the remote computer scenario, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the internet using an internet service provider).
It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer readable medium that can direct a computer or other programmable apparatus to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instruction means which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer program instructions may also be loaded onto a computer, other programmable apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may in fact be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending on the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or by combinations of special purpose hardware and computer instructions.
It should be noted that although the above detailed description mentions several modules or units of a device for action execution, such a division is not mandatory. Indeed, in accordance with embodiments of the present application, the features and functions of two or more modules or units described above may be embodied in one module or unit. Conversely, the features and functions of one module or unit described above may be further divided and embodied in a plurality of modules or units.
Those of ordinary skill in the art will appreciate that the discussion of any of the embodiments above is merely exemplary and is not intended to suggest that the scope of the application (including the claims) is limited to these examples. Within the idea of the present application, the technical features of the above embodiments, or of different embodiments, may also be combined, and the steps may be implemented in any order. Many other variations of the different aspects of the embodiments of the present application exist as described above, which are not provided in detail for the sake of brevity.
Additionally, well-known power/ground connections to integrated circuit (IC) chips and other components may or may not be shown in the provided figures, in order to simplify the illustration and discussion and so as not to obscure the embodiments of the present application. Furthermore, devices may be shown in block diagram form in order to avoid obscuring the embodiments of the present application; this also takes into account the fact that specifics of implementing such block diagram devices are highly dependent on the platform on which the embodiments are to be implemented (i.e., such specifics should be well within the purview of one skilled in the art). Where specific details (e.g., circuits) are set forth in order to describe example embodiments of the application, it should be apparent to one skilled in the art that the embodiments can be practiced without, or with variation of, these specific details. Accordingly, the description is to be regarded as illustrative rather than restrictive.
While the present application has been described in conjunction with specific embodiments thereof, many alternatives, modifications, and variations of those embodiments will be apparent to those skilled in the art in light of the foregoing description. For example, other memory architectures (e.g., dynamic RAM (DRAM)) may use the embodiments discussed.
The present embodiments are intended to embrace all such alternatives, modifications, and variations that fall within the broad scope of the appended claims. Accordingly, any omissions, modifications, equivalents, improvements, and the like that are within the spirit and principles of the embodiments are intended to be included within the scope of the present application.

Claims (24)

1. An interaction method, characterized in that a graphical user interface is provided by a terminal device, at least part of a virtual scene is presented in the graphical user interface, and the virtual scene comprises a plurality of interactive objects and a target virtual character controlled by the terminal device, the method comprising:
displaying at least one screening condition component in the graphical user interface, wherein each screening condition component corresponds to a distinguishing mode;
in response to a triggering operation on a target screening condition component of the at least one screening condition component, determining a target screening condition according to a target distinguishing mode corresponding to the target screening condition component and a current character state of the target virtual character; and
determining a first interactive object satisfying the target screening condition from the plurality of interactive objects, and controlling the target virtual character and the first interactive object to be displayed in the virtual scene.
2. The method according to claim 1, wherein the method further comprises:
controlling a second interactive object, among the plurality of interactive objects, that does not satisfy the target screening condition not to be displayed in the virtual scene.
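By way of example only, the screening flow of claims 1 and 2 can be sketched as follows in TypeScript. All type and function names are hypothetical; the claims do not prescribe any particular data structures:

```typescript
// Hypothetical sketch: a screening condition component derives a target
// screening condition from its distinguishing mode and the controlled
// character's current state, then only matching objects remain visible.
type CharacterState = Record<string, boolean>; // e.g. { teamed: true }

interface InteractiveObject {
  id: string;
  state: CharacterState;
  visible: boolean;
}

interface ScreeningConditionComponent {
  // The distinguishing mode: builds a predicate from the character's state.
  deriveCondition(current: CharacterState): (o: InteractiveObject) => boolean;
}

function onComponentTriggered(
  component: ScreeningConditionComponent,
  characterState: CharacterState,
  objects: InteractiveObject[]
): InteractiveObject[] {
  const satisfiesCondition = component.deriveCondition(characterState);
  for (const obj of objects) {
    obj.visible = satisfiesCondition(obj); // claim 2: non-matching objects hidden
  }
  return objects.filter((o) => o.visible); // the first interactive objects
}
```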
3. The method of claim 1, wherein the at least one screening condition component comprises: a team screening component;
the distinguishing mode comprises: distinguishing according to team state;
the current character state comprises: a teamed state and an un-teamed state; and
the determining, in response to the triggering operation on the target screening condition component of the at least one screening condition component, a target screening condition according to the target distinguishing mode corresponding to the target screening condition component and the current character state of the target virtual character comprises:
in response to the triggering operation on the team screening component, determining the team state corresponding to the target virtual character; and
in response to the target virtual character being in the teamed state, determining interactive objects in the un-teamed state as the target screening condition, or in response to the target virtual character being in the un-teamed state, determining interactive objects in the teamed state as the target screening condition.
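By way of example only, the complementary-state rule of claim 3 reduces to a single comparison; this TypeScript sketch uses hypothetical names:

```typescript
// A character already in a team screens for un-teamed objects, and vice versa.
function deriveTeamCondition(characterIsTeamed: boolean) {
  return (objectIsTeamed: boolean): boolean => objectIsTeamed !== characterIsTeamed;
}

// Usage: a teamed character sees only un-teamed objects.
const screenFor = deriveTeamCondition(true);
console.log(screenFor(false)); // true: un-teamed object satisfies the condition
console.log(screenFor(true));  // false: teamed object is screened out
```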
4. The method of claim 1, wherein the at least one screening condition component comprises: a helper screening component;
the distinguishing mode comprises: distinguishing according to helper state;
the current character state comprises: a helper-joined state and a helper-not-joined state; and
the determining, in response to the triggering operation on the target screening condition component of the at least one screening condition component, a target screening condition according to the target distinguishing mode corresponding to the target screening condition component and the current character state of the target virtual character comprises:
in response to the triggering operation on the helper screening component, determining the helper state corresponding to the target virtual character; and
in response to the target virtual character being in the helper-joined state, determining interactive objects in the helper-not-joined state as the target screening condition, or in response to the target virtual character being in the helper-not-joined state, determining interactive objects in the helper-joined state as the target screening condition.
5. The method of claim 1, wherein the at least one screening condition component comprises: a helper screening component;
the distinguishing mode comprises: distinguishing according to helper affiliation;
the current character state comprises: a helper-joined state; and
the determining, in response to the triggering operation on the target screening condition component of the at least one screening condition component, a target screening condition according to the target distinguishing mode corresponding to the target screening condition component and the current character state of the target virtual character comprises:
in response to the triggering operation on the helper screening component, determining the helper state corresponding to the target virtual character; and
in response to the target virtual character being in the helper-joined state, determining interactive objects in the same helper as the target virtual character as the target screening condition.
6. The method of claim 1, wherein the at least one screening condition component comprises: a pet screening component;
the distinguishing mode comprises: distinguishing according to pet state;
the current character state comprises: a pet-owned state and a pet-not-owned state; and
the determining, in response to the triggering operation on the target screening condition component of the at least one screening condition component, a target screening condition according to the target distinguishing mode corresponding to the target screening condition component and the current character state of the target virtual character comprises:
in response to the triggering operation on the pet screening component, determining the pet state corresponding to the target virtual character; and
in response to the target virtual character being in the pet-owned state, determining interactive objects in the pet-not-owned state as the target screening condition, or in response to the target virtual character being in the pet-not-owned state, determining interactive objects in the pet-owned state as the target screening condition.
7. The method of claim 1, wherein the at least one screening condition component comprises: a riding screening component;
the distinguishing mode comprises: distinguishing according to riding state;
the current character state comprises: a riding-owned state and a riding-not-owned state; and
the determining, in response to the triggering operation on the target screening condition component of the at least one screening condition component, a target screening condition according to the target distinguishing mode corresponding to the target screening condition component and the current character state of the target virtual character comprises:
in response to the triggering operation on the riding screening component, determining the riding state corresponding to the target virtual character; and
in response to the target virtual character being in the riding-owned state, determining interactive objects in the riding-not-owned state as the target screening condition, or in response to the target virtual character being in the riding-not-owned state, determining interactive objects in the riding-owned state as the target screening condition.
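By way of example only, claims 4, 6, and 7 apply the same complementary-state pattern to the helper, pet, and riding dimensions, while claim 5 instead screens for membership in the same helper. A hypothetical TypeScript condensation:

```typescript
// Claims 4, 6, 7: screen for the state complementary to the character's own.
type Dimension = "helper" | "pet" | "riding";

function deriveComplementCondition(dimension: Dimension, characterHas: boolean) {
  return (objectState: Record<Dimension, boolean>): boolean =>
    objectState[dimension] !== characterHas;
}

// Claim 5 differs: when the character has joined a helper, screen for
// objects belonging to the *same* helper rather than a complementary state.
function deriveSameHelperCondition(characterHelperId: string) {
  return (objectHelperId: string | null): boolean =>
    objectHelperId === characterHelperId;
}
```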
8. The method according to any one of claims 3-7, further comprising:
in response to a selection operation continuous with the triggering operation, determining a third interactive object from the first interactive objects according to the selection operation; and
controlling the target virtual character to perform a target interaction behavior on the third interactive object, wherein the target interaction behavior is determined by the target screening condition.
9. The method of claim 8, wherein the selection operation comprises: a sliding operation from the target screening condition component to the third interactive object.
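By way of example only, the continuous selection of claims 8 and 9 can be sketched as a slide gesture that starts on the screening condition component and is released over a displayed object; the handler and type names below are assumptions:

```typescript
// Hypothetical slide-to-select handler: a slide beginning on the screening
// condition component and ending on a visible first interactive object
// selects it as the third interactive object and triggers the behavior.
interface SelectableObject {
  id: string;
  visible: boolean;
}

function onSlideEnd(
  slideStartedOnComponent: boolean,
  objectUnderRelease: SelectableObject | null,
  performTargetBehavior: (target: SelectableObject) => void
): void {
  if (slideStartedOnComponent && objectUnderRelease !== null && objectUnderRelease.visible) {
    performTargetBehavior(objectUnderRelease); // behavior per claims 10-12
  }
}
```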
10. The method of claim 8, wherein the controlling the target virtual character to perform a target interaction behavior on the third interactive object comprises:
in response to the target screening condition being interactive objects in the un-teamed state, controlling the target virtual character to send a team invitation request to the third interactive object, or in response to the target screening condition being interactive objects in the teamed state, controlling the target virtual character to send a team joining request to the third interactive object; or
in response to the target screening condition being interactive objects in the helper-not-joined state, controlling the target virtual character to send a helper invitation request to the third interactive object, or in response to the target screening condition being interactive objects in the helper-joined state, controlling the target virtual character to send a helper joining request to the third interactive object.
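By way of example only, claim 10 amounts to a mapping from the chosen screening condition to the request to be sent; a hypothetical TypeScript rendering:

```typescript
// The screening condition selected earlier determines the outgoing request.
type SocialCondition = "unTeamed" | "teamed" | "helperNotJoined" | "helperJoined";
type SocialRequest =
  | "teamInvitation" | "teamJoining"
  | "helperInvitation" | "helperJoining";

function requestForCondition(condition: SocialCondition): SocialRequest {
  switch (condition) {
    case "unTeamed":        return "teamInvitation";  // invite loose players
    case "teamed":          return "teamJoining";     // ask to join their team
    case "helperNotJoined": return "helperInvitation";
    case "helperJoined":    return "helperJoining";
  }
}
```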
11. The method of claim 8, wherein the controlling the target virtual character to perform a target interaction behavior on the third interactive object comprises:
in response to the target screening condition being interactive objects in the same helper as the target virtual character, controlling the target virtual character to change a position of the third interactive object within the same helper, or controlling the target virtual character to remove the third interactive object from the same helper.
12. The method of claim 8, wherein the controlling the target virtual character to perform a target interaction behavior on the third interactive object comprises:
in response to the target screening condition being interactive objects in the pet-not-owned state, controlling the target virtual character to send a pet selling request to the third interactive object, or in response to the target screening condition being interactive objects in the pet-owned state, controlling the target virtual character to send a pet purchase request or a pet viewing request to the third interactive object; or
in response to the target screening condition being interactive objects in the riding-not-owned state, controlling the target virtual character to send a riding selling request to the third interactive object, or in response to the target screening condition being interactive objects in the riding-owned state, controlling the target virtual character to send a riding purchase request or a riding viewing request to the third interactive object.
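By way of example only, claim 12 extends the same mapping idea to pet and riding trades; the condition and request names below are hypothetical:

```typescript
// Pet and riding conditions select sell, purchase, or view requests.
type TradeCondition = "petNotOwned" | "petOwned" | "ridingNotOwned" | "ridingOwned";

function tradeRequestsFor(condition: TradeCondition): string[] {
  switch (condition) {
    case "petNotOwned":    return ["petSelling"];
    case "petOwned":       return ["petPurchase", "petViewing"];
    case "ridingNotOwned": return ["ridingSelling"];
    case "ridingOwned":    return ["ridingPurchase", "ridingViewing"];
  }
}
```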
13. The method of claim 12, wherein the third interactive object comprises: a virtual character; and
the controlling, in response to the target screening condition being interactive objects in the pet-owned state, the target virtual character to send a pet purchase request to the third interactive object comprises:
in response to the target screening condition being interactive objects in the pet-owned state, controlling the target virtual character to send the pet purchase request to the virtual character; and
in response to the virtual character confirming the pet purchase request, controlling display, in the graphical user interface, of a first purchase interface for acquiring a target pet owned by the virtual character.
14. The method of claim 12, wherein the third interactive object comprises: a virtual character; and
the controlling, in response to the target screening condition being interactive objects in the pet-owned state, the target virtual character to send a pet viewing request to the third interactive object comprises:
in response to the target screening condition being interactive objects in the pet-owned state, controlling the target virtual character to send the pet viewing request to the virtual character; and
in response to the virtual character confirming the pet viewing request, controlling display, in the graphical user interface, of a first display interface for displaying a target pet owned by the virtual character.
15. The method of claim 12, wherein the third interactive object comprises: a virtual store; and
the controlling, in response to the target screening condition being interactive objects in the pet-owned state, the target virtual character to send a pet purchase request to the third interactive object comprises:
in response to the target screening condition being interactive objects in the pet-owned state, controlling the target virtual character to send the pet purchase request to the virtual store; and
in response to the virtual store confirming the pet purchase request, controlling display, in the graphical user interface, of a second purchase interface for acquiring all pets owned by the virtual store.
16. The method of claim 12, wherein the third interactive object comprises: a virtual store; and
the controlling, in response to the target screening condition being interactive objects in the pet-owned state, the target virtual character to send a pet viewing request to the third interactive object comprises:
in response to the target screening condition being interactive objects in the pet-owned state, controlling the target virtual character to send the pet viewing request to the virtual store; and
in response to the virtual store confirming the pet viewing request, controlling display, in the graphical user interface, of a second display interface for displaying all pets owned by the virtual store.
17. The method of claim 12, wherein the third interactive object comprises: a virtual character; and
the controlling, in response to the target screening condition being interactive objects in the riding-owned state, the target virtual character to send a riding purchase request to the third interactive object comprises:
in response to the target screening condition being interactive objects in the riding-owned state, controlling the target virtual character to send the riding purchase request to the virtual character; and
in response to the virtual character confirming the riding purchase request, controlling display, in the graphical user interface, of a third purchase interface for acquiring a target riding owned by the virtual character.
18. The method of claim 12, wherein the third interactive object comprises: a virtual character; and
the controlling, in response to the target screening condition being interactive objects in the riding-owned state, the target virtual character to send a riding viewing request to the third interactive object comprises:
in response to the target screening condition being interactive objects in the riding-owned state, controlling the target virtual character to send the riding viewing request to the virtual character; and
in response to the virtual character confirming the riding viewing request, controlling display, in the graphical user interface, of a third display interface for displaying a target riding owned by the virtual character.
19. The method of claim 12, wherein the third interactive object comprises: a virtual store; and
the controlling, in response to the target screening condition being interactive objects in the riding-owned state, the target virtual character to send a riding purchase request to the third interactive object comprises:
in response to the target screening condition being interactive objects in the riding-owned state, controlling the target virtual character to send the riding purchase request to the virtual store; and
in response to the virtual store confirming the riding purchase request, controlling display, in the graphical user interface, of a fourth purchase interface for acquiring all ridings owned by the virtual store.
20. The method of claim 12, wherein the third interactive object comprises: a virtual store; and
the controlling, in response to the target screening condition being interactive objects in the riding-owned state, the target virtual character to send a riding viewing request to the third interactive object comprises:
in response to the target screening condition being interactive objects in the riding-owned state, controlling the target virtual character to send the riding viewing request to the virtual store; and
in response to the virtual store confirming the riding viewing request, controlling display, in the graphical user interface, of a fourth display interface for displaying all ridings owned by the virtual store.
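By way of example only, claims 13 through 20 share one request-confirm-display skeleton that varies only in the counterpart (virtual character or virtual store), the item (pet or riding), and the action (purchase or viewing); a hypothetical TypeScript condensation:

```typescript
// One skeleton for claims 13-20: send a request, wait for the counterpart's
// confirmation, then display the matching purchase or display interface.
type Counterpart = "virtualCharacter" | "virtualStore";
type TradeAction = "purchase" | "viewing";
type TradeItem = "pet" | "riding";

async function sendTradeRequest(
  counterpart: Counterpart,
  action: TradeAction,
  item: TradeItem,
  confirmRequest: () => Promise<boolean>, // counterpart's confirmation
  showInterface: (interfaceName: string) => void
): Promise<void> {
  if (await confirmRequest()) {
    // e.g. "pet-purchase-virtualCharacter" maps to the first purchase interface
    showInterface(`${item}-${action}-${counterpart}`);
  }
}
```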
21. The method of claim 1, wherein the displaying at least one screening condition component in the graphical user interface comprises:
displaying a screening interaction component in the graphical user interface; and
in response to a triggering operation on the screening interaction component, displaying the at least one screening condition component in the graphical user interface.
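By way of example only, claim 21 keeps the individual screening condition components collapsed behind a single screening interaction component until it is triggered; a minimal hypothetical sketch:

```typescript
// Hypothetical UI state for claim 21: the condition components stay hidden
// until the screening interaction component receives a triggering operation.
interface ScreeningUi {
  conditionComponentsVisible: boolean;
}

function onScreeningInteractionTriggered(ui: ScreeningUi): void {
  ui.conditionComponentsVisible = true; // now display the condition components
}
```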
22. An interaction apparatus, characterized in that a graphical user interface is provided by a terminal device, at least part of a virtual scene is presented in the graphical user interface, and the virtual scene comprises a plurality of interactive objects and a target virtual character controlled by the terminal device, the apparatus comprising:
a first display module configured to display at least one screening condition component in the graphical user interface, wherein each screening condition component corresponds to a distinguishing mode;
a determining module configured to determine, in response to a triggering operation on a target screening condition component of the at least one screening condition component, a target screening condition according to a target distinguishing mode corresponding to the target screening condition component and a current character state of the target virtual character; and
a second display module configured to determine a first interactive object satisfying the target screening condition from the plurality of interactive objects, and control the target virtual character and the first interactive object to be displayed in the virtual scene.
23. A terminal device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the method of any one of claims 1 to 21.
24. A computer readable storage medium storing computer instructions for causing a computer to implement the method of any one of claims 1 to 21.
CN202311855052.2A 2023-12-28 2023-12-28 Interaction method, device, terminal equipment and computer readable storage medium Pending CN117815663A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311855052.2A CN117815663A (en) 2023-12-28 2023-12-28 Interaction method, device, terminal equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311855052.2A CN117815663A (en) 2023-12-28 2023-12-28 Interaction method, device, terminal equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN117815663A true CN117815663A (en) 2024-04-05

Family

ID=90518518

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311855052.2A Pending CN117815663A (en) 2023-12-28 2023-12-28 Interaction method, device, terminal equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN117815663A (en)

Similar Documents

Publication Publication Date Title
US10846937B2 (en) Three-dimensional virtual environment
US11931655B2 (en) Single user multiple presence in multi-user game
US10970934B2 (en) Integrated operating environment
US20190332400A1 (en) System and method for cross-platform sharing of virtual assistants
US9860282B2 (en) Real-time synchronous communication with persons appearing in image and video files
JP6189738B2 (en) Friction-free social sharing with cloud-based game slice generation and instant playback
US9098874B2 (en) System and method of determining view information of an instance of an online game executed on an online game server
JP2019050010A (en) Methods and systems for providing functional extensions to landing page of creative
KR20120050980A (en) Spatial interfaces for realtime networked communications
EP3245781A1 (en) Recommended roster based on customer relationship management data
US11004121B2 (en) Managing ephemeral locations in a virtual universe
US20160321762A1 (en) Location-based group media social networks, program products, and associated methods of use
WO2019099912A1 (en) Integrated operating environment
US20170277412A1 (en) Method for use of virtual reality in a contact center environment
CN117815663A (en) Interaction method, device, terminal equipment and computer readable storage medium
US11695843B2 (en) User advanced media presentations on mobile devices using multiple different social media apps
CN113099257A (en) Network friend-making interaction method and device, terminal equipment and storage medium
KR102479764B1 (en) Method and apparatus for generating a game party
US20240005608A1 (en) Travel in Artificial Reality
Brown et al. Towards a service framework for remote sales support via augmented reality
Vieira Creation of dynamic virtual tours in multimedia spaces
US20240013495A1 (en) Systems and methods for the interactive rendering of a virtual environment on a user device with limited computational capacity
CN114816082A (en) Input control method and device applied to cloud application and electronic equipment
CN112130726A (en) Page operation method and device, electronic equipment and computer readable medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination