WO2024082883A1 - Virtual object interaction method, apparatus, device, and computer-readable storage medium (虚拟对象的交互方法、装置、设备及计算机可读存储介质)


Info

Publication number: WO2024082883A1
Authority: WO (WIPO PCT)
Prior art keywords: virtual object, action, interaction, target, virtual
Application number: PCT/CN2023/118735
Other languages: English (en), French (fr)
Inventor: 陈腾
Original Assignee: 腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Co., Ltd.)
Application filed by 腾讯科技(深圳)有限公司
Publication of WO2024082883A1

Classifications

    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/40: Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
    • A63F 13/42: Processing input control signals of video game devices by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
    • A63F 13/55: Controlling game characters or game objects based on the game progress

Description

  • the embodiments of the present application relate to the field of Internet technology, and in particular to a virtual object interaction method, apparatus, device, and computer-readable storage medium.
  • the embodiments of the present application provide a virtual object interaction method, apparatus, device, and computer-readable storage medium; the technical solution includes but is not limited to the following aspects.
  • an embodiment of the present application provides a virtual object interaction method, the method is executed by a terminal device, and the method includes: displaying a virtual scene, wherein the virtual scene includes a first virtual object and at least one candidate virtual object; displaying an interactive action selection page based on a drag operation on a second virtual object among the at least one candidate virtual object, wherein the interactive action selection page includes a plurality of candidate interactive actions; and, based on a second operation on a target interaction action among the plurality of candidate interaction actions, displaying a target page, in which the first virtual object and the second virtual object interact in the virtual scene according to the target interaction action.
  • an embodiment of the present application provides a virtual object interaction device, the device comprising:
  • a display module configured to display a virtual scene, wherein the virtual scene includes a first virtual object and at least one candidate virtual object;
  • the display module is further configured to display an interactive action selection page based on a drag operation on a second virtual object among the at least one candidate virtual object, wherein the interactive action selection page includes a plurality of candidate interactive actions;
  • the display module is further used to display a target page based on a second operation on a target interaction action among the multiple candidate interaction actions, in which the first virtual object and the second virtual object interact in the virtual scene according to the target interaction action.
  • an embodiment of the present application provides a computer device, comprising a processor and a memory, wherein the memory stores at least one program code, and the at least one program code is loaded and executed by the processor so that the computer device implements any of the virtual object interaction methods described above.
  • a non-transitory computer-readable storage medium is also provided, wherein at least one program code is stored in the non-transitory computer-readable storage medium, and the at least one program code is loaded and executed by a processor so that a computer implements any of the above-mentioned virtual object interaction methods.
  • a computer program or a computer program product is also provided, wherein at least one computer instruction is stored in the computer program or the computer program product, and the at least one computer instruction is loaded and executed by a processor so that the computer implements any of the above-mentioned virtual object interaction methods.
  • the technical solution provided by the embodiment of the present application displays an interactive action selection page by dragging the second virtual object, and then a target interactive action is selected in the interactive action selection page, so that the first virtual object and the second virtual object interact according to the target interactive action.
  • the method fully considers the positions of the first virtual object and the second virtual object in the virtual scene, making the interaction process of the virtual objects more concise and improving the interaction efficiency and flexibility of the virtual objects, thereby improving the user's immersion in virtual social interaction.
  • because the interaction process of the virtual objects is more concise, the number of user operations is reduced, which reduces the number of times the terminal device responds to the user's operations and thus saves the terminal device's overhead.
  • FIG. 1 is a schematic diagram of an implementation environment of a virtual object interaction method provided in an embodiment of the present application.
  • FIG. 2 is a flow chart of a virtual object interaction method provided in an embodiment of the present application.
  • FIG. 3 is a schematic diagram showing a virtual scene according to an embodiment of the present application.
  • FIG. 4 is a schematic diagram showing another virtual scene provided in an embodiment of the present application.
  • FIG. 5 is a schematic diagram showing another virtual scene provided in an embodiment of the present application.
  • FIG. 6 is a schematic diagram of a range defining box after a second virtual object is dragged according to an embodiment of the present application.
  • FIG. 7 is a schematic diagram of another range defining box of a second virtual object after being dragged according to an embodiment of the present application.
  • FIG. 8 is a schematic diagram of displaying a prompt message provided by an embodiment of the present application.
  • FIG. 9 is a schematic diagram of displaying a target object at a target position of a first virtual object provided by an embodiment of the present application.
  • FIG. 10 is a schematic diagram showing an interactive action selection page provided in an embodiment of the present application.
  • FIG. 11 is a schematic diagram showing a target page according to an embodiment of the present application.
  • FIG. 12 is a schematic diagram showing another target page provided in an embodiment of the present application.
  • FIG. 13 is a schematic diagram showing a target page according to an embodiment of the present application.
  • FIG. 14 is a flow chart of a virtual object interaction method provided in an embodiment of the present application.
  • FIG. 15 is a schematic diagram of the structure of a virtual object interaction device provided in an embodiment of the present application.
  • FIG. 16 is a schematic diagram of the structure of a terminal device provided in an embodiment of the present application.
  • FIG. 17 is a schematic diagram of the structure of a server provided in an embodiment of the present application.
  • virtual social interaction: users customize their own 2D (two-dimensional) or 3D (three-dimensional) virtual objects (including but not limited to humanoid models, models of other forms, etc.) and use their own virtual objects to engage in social chats with other people's virtual objects.
  • FIG. 1 is a schematic diagram of an implementation environment of a virtual object interaction method provided in an embodiment of the present application. As shown in FIG. 1, the implementation environment includes: a terminal device 101 and a server 102.
  • the terminal device 101 may be at least one of a smart phone, a game console, a desktop computer, a tablet computer, an e-book reader, and a laptop computer.
  • the terminal device 101 is used to execute the virtual object interaction method provided in the embodiment of the present application.
  • the terminal device 101 may generally refer to one of a plurality of terminal devices; this embodiment is illustrated only by taking the terminal device 101 as an example. Those skilled in the art will appreciate that the number of terminal devices 101 may be greater or smaller; for example, there may be only one terminal device 101, or there may be dozens, hundreds, or more. The embodiments of the present application do not limit the number and device type of the terminal devices.
  • the server 102 is a single server, or a server cluster consisting of multiple servers, or any one of a cloud computing platform and a virtualization center, which is not limited in the present embodiment.
  • the server 102 communicates with the terminal device 101 via a wired network or a wireless network.
  • the server 102 has data receiving functions, data processing functions, and data sending functions.
  • the server 102 may also have other functions, which are not limited in the embodiments of the present application.
  • the above terminal device 101 and server 102 are only for illustration; other existing or future terminal devices or servers, if applicable to the present application, should also be included in the scope of protection of the present application and are incorporated herein by reference.
  • the embodiment of the present application provides a virtual object interaction method, which can be applied to the implementation environment shown in FIG. 1 above. Taking the flowchart of the virtual object interaction method shown in FIG. 2 as an example, the method can be executed by the terminal device 101 in FIG. 1. As shown in FIG. 2, the method includes the following steps:
  • step 201: a virtual scene is displayed, in which a first virtual object and at least one candidate virtual object are displayed.
  • an application capable of providing a virtual scene is installed and run in a terminal device.
  • the application may refer to an application that needs to be downloaded and installed (also referred to as a host program), or may refer to an embedded program that depends on the host program to run, such as a mini-program.
  • An embedded program is an application that is developed based on a programming language and depends on the host program to run.
  • An embedded program does not need to be downloaded and installed; it only needs to be dynamically loaded in the host program to run. Users can find the embedded program they need by searching, scanning a code, and so on, and can use the embedded program by clicking on it in the host program. After use, the embedded program is closed, so it does not occupy the terminal's memory, which is very convenient.
  • in response to an operation instruction for the application, the application is opened and a virtual scene is displayed, and a first virtual object and at least one candidate virtual object are displayed in the virtual scene; that is, the virtual scene includes the first virtual object and the at least one candidate virtual object.
  • the user corresponding to the candidate virtual object may be a friend user of the user corresponding to the first virtual object, or may not be a friend user of the user corresponding to the first virtual object.
  • the operation instruction for the application may be a click operation on the icon of the application, or may be other operations, which are not limited in the embodiments of the present application.
  • FIG. 3 is a schematic diagram of displaying a virtual scene provided by an embodiment of the present application.
  • the virtual scene displays a first virtual object 301, a candidate virtual object one 302, a candidate virtual object two 303, a candidate virtual object three 304, and a candidate virtual object four 305.
  • the virtual scene may also include a scene identifier, such as the "status square" shown in FIG3, which is used to indicate that the user is currently in the virtual scene.
  • the user can also enlarge or reduce the virtual scene. When the virtual scene is enlarged, the area of the virtual scene displayed in the display page of the terminal device is smaller, and fewer virtual objects are displayed in the virtual scene; when the virtual scene is reduced, the area of the virtual scene displayed in the display page of the terminal device is larger, and more virtual objects are displayed in the virtual scene.
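  • as an illustration only, the following TypeScript sketch (all type and function names are hypothetical, not from the patent) shows the relationship described above: a larger zoom factor shrinks the visible window of the scene, so fewer virtual objects fall inside it and are displayed:

```typescript
// Minimal sketch: zooming in (zoom > 1) shrinks the visible window of the
// scene, so fewer objects fall inside it; zooming out shows more objects.
interface Vec2 { x: number; y: number; }
interface VirtualObject { id: string; position: Vec2; }

function visibleObjects(
  objects: VirtualObject[],
  center: Vec2,       // point of the scene at the middle of the display page
  baseWidth: number,  // scene width visible at zoom = 1
  baseHeight: number, // scene height visible at zoom = 1
  zoom: number,       // > 1: scene enlarged, visible area smaller
): VirtualObject[] {
  const halfW = baseWidth / (2 * zoom);
  const halfH = baseHeight / (2 * zoom);
  return objects.filter(o =>
    Math.abs(o.position.x - center.x) <= halfW &&
    Math.abs(o.position.y - center.y) <= halfH,
  );
}
```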
  • step 202: in response to a first operation on a second virtual object among the at least one candidate virtual object, the second virtual object is set to a draggable state.
  • exemplarily, the terminal device can detect a first operation. When the first operation is detected and the first operation is directed to one or some candidate virtual objects among the at least one candidate virtual object, the one or more candidate virtual objects targeted by the first operation are used as second virtual objects, so that in response to the first operation on the second virtual object among the at least one candidate virtual object, the subsequent process of setting the second virtual object to a draggable state can be executed.
  • the first operation on the second virtual object may refer to a long press operation on the second virtual object.
  • the target duration is set based on experience or adjusted according to the implementation environment, which is not limited in the embodiments of the present application.
  • the target duration is 1 second.
  • Selecting the second virtual object may refer to the operation of clicking (single click, double click or other click methods) the second virtual object, or it may be the operation of selecting the second virtual object by voice (such as sending a voice message of "Select X", where X is the name of the second virtual object).
  • the embodiments of the present application do not limit the method of selecting the second virtual object.
  • in some implementations, the long press is detected as follows: a first time at which the selection operation on the second virtual object is received is determined; a second time is determined according to the target duration and the first time (for example, the sum of the target duration and the first time is taken as the second time); and when the second virtual object is still in the selected state at the second time, it indicates that a first operation on the second virtual object among the at least one candidate virtual object has been detected, and the second virtual object is set to a draggable state.
  • for example, a selection operation for the second virtual object is received at 11:21:25 (i.e., the first time), the target duration is 1 second, and the second time is therefore 11:21:26. If the second virtual object is still selected at 11:21:26, the second virtual object is set to a draggable state.
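  • as an illustration only, the timing check described above could be sketched as follows in TypeScript (the names and timer mechanism are assumptions, not from the patent):

```typescript
// Minimal sketch of the long-press check: record the first time when the
// selection starts; if the object is still selected at the second time
// (first time + target duration), treat it as the first operation.
const TARGET_DURATION_MS = 1000; // exemplary target duration of 1 second

interface SelectableObject { selected: boolean; draggable: boolean; }

// called at the first time, when the selection operation is received
function onSelectStart(obj: SelectableObject): void {
  obj.selected = true;
  // the timer fires at the second time = first time + target duration
  setTimeout(() => {
    if (obj.selected) {
      obj.draggable = true; // still selected: long press (first operation) detected
    }
  }, TARGET_DURATION_MS);
}

// called when the user releases before the second time
function onSelectEnd(obj: SelectableObject): void {
  obj.selected = false;
}
```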
  • in some implementations, the virtual scene also displays the action identifier of the action currently performed by each candidate virtual object.
  • the action identifier can be an image of the action, or the name of the action, or other identifiers that can uniquely represent the action, which is not limited in the embodiments of the present application.
  • exemplarily, 307 is the bubble corresponding to candidate virtual object one, 308 is the bubble corresponding to candidate virtual object two, 309 is the bubble corresponding to candidate virtual object three, and 310 is the bubble corresponding to candidate virtual object four.
  • in this case, the method provided by the embodiment of the present application also includes: in response to a first operation on a second virtual object among the at least one candidate virtual object, canceling the display of the action identifier of the action currently executed by the second virtual object.
  • FIG. 4 is a schematic diagram of displaying another virtual scene provided by an embodiment of the present application. In FIG. 4, candidate virtual object three is the second virtual object; based on the first operation on candidate virtual object three, the display of the bubble corresponding to candidate virtual object three is canceled, that is, the display of the action identifier of the action currently executed by candidate virtual object three is canceled.
  • in other implementations, the virtual scene also displays action identifiers of the actions currently performed by each virtual object. In this case, the method provided in an embodiment of the present application also includes: in response to a first operation on a second virtual object among the at least one candidate virtual object, canceling the display of the action identifiers of the actions currently performed by each virtual object.
  • FIG. 5 is another schematic diagram of displaying a virtual scene provided in an embodiment of the present application. In FIG. 5, candidate virtual object three is the second virtual object; based on the first operation on candidate virtual object three, the display of the action identifiers of the actions currently performed by each virtual object is canceled.
  • it should be noted that step 202 is an optional step.
  • in some implementations, step 202 needs to be performed to set the second virtual object to a draggable state according to the above-mentioned first operation, so as to facilitate the user's subsequent dragging of the second virtual object.
  • in other implementations, step 202 does not need to be performed, and the user can directly drag the second virtual object.
  • step 203: based on the drag operation on the second virtual object, an interactive action selection page is displayed, and a plurality of candidate interactive actions are displayed on the interactive action selection page.
  • exemplarily, the terminal device can detect a drag operation. When the drag operation is detected and the drag operation is directed to one or some candidate virtual objects among the at least one candidate virtual object, the one or more candidate virtual objects targeted by the drag operation are used as the second virtual object, so that based on the drag operation on the second virtual object among the at least one candidate virtual object, the subsequent process of displaying an interactive action selection page can be executed, wherein the interactive action selection page includes multiple candidate interactive actions.
  • the first operation in step 202 and the drag operation in step 203 are for the same second virtual object.
  • the first operation and the drag operation are continuous operations, that is, the end time of the first operation is the same as the start time of the drag operation. Then, the user can continuously perform the drag operation without letting go after performing the first operation.
  • the first operation and the drag operation are discontinuous operations, that is, the end time of the first operation is earlier than the start time of the drag operation. Then, the user can let go first after performing the first operation, and then perform the drag operation.
  • the process of displaying the interactive action selection page includes: based on the drag operation on the second virtual object, determining a range defining box after the second virtual object is dragged; based on the intersection of the range defining box after the second virtual object is dragged and the range defining box of the first virtual object, displaying the interactive action selection page.
  • the range defining box after the second virtual object is dragged is used to indicate an area covering the second virtual object
  • the range defining box of the first virtual object is used to indicate an area covering the first virtual object.
  • the embodiment of the present application does not limit the process of determining the range defining box after the second virtual object is dragged based on the drag operation on the second virtual object. In one implementation, the process includes: determining the center position of the second virtual object after being dragged; determining a reference area with the center position of the second virtual object after being dragged as the center; and using the reference area as the range defining box after the second virtual object is dragged.
  • a rectangle is determined with the center position of the second virtual object after being dragged as the center, the first length as the width, and the second length as the height, and the area corresponding to the rectangle is used as the reference area.
  • the first length and the second length are set based on experience or adjusted according to the implementation environment, and the embodiment of the present application does not limit this.
  • FIG. 6 is a schematic diagram of a range defining box after the second virtual object is dragged, provided in an embodiment of the present application.
  • a circle is determined with the center position of the second virtual object after being dragged as the center and the third length as the radius, and the area corresponding to the circle is used as the reference area.
  • the third length is set based on experience or adjusted according to the implementation environment, and this embodiment of the application does not limit this.
  • FIG. 7 is a schematic diagram of another range defining box after the second virtual object is dragged, provided in an embodiment of the present application.
  • the above examples use a rectangular or circular reference area. However, the embodiment of the present application does not limit the shape of the reference area, and the shape of the reference area may also be another possible shape such as a triangle.
  • the range defining box may take a variety of forms, such as a transparent form invisible to the user, or a non-transparent form visible to the user, such as a form filled with shading. It should be noted that the process of determining the range defining box of the first virtual object is similar to the process of determining the range defining box after the second virtual object is dragged, and will not be repeated here.
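  • as an illustration only, the following TypeScript sketch (hypothetical types and names) shows one way to compute a rectangular or circular range defining box centered on the dragged object and to test whether two such boxes intersect:

```typescript
// Minimal sketch of range defining boxes and their intersection test.
interface Vec2 { x: number; y: number; }
interface Rect { center: Vec2; width: number; height: number; }

// rectangular reference area: centered on the dragged object's center,
// with the first length as width and the second length as height
function rectangularBox(center: Vec2, firstLength: number, secondLength: number): Rect {
  return { center, width: firstLength, height: secondLength };
}

// axis-aligned rectangles overlap when their centers are closer than the
// sum of their half-extents on both axes
function boxesIntersect(a: Rect, b: Rect): boolean {
  return (
    Math.abs(a.center.x - b.center.x) <= (a.width + b.width) / 2 &&
    Math.abs(a.center.y - b.center.y) <= (a.height + b.height) / 2
  );
}

// circular variant: centered on the dragged object's center, with the
// third length as radius; two circles overlap when the distance between
// their centers is at most the sum of the radii
function circlesIntersect(c1: Vec2, r1: number, c2: Vec2, r2: number): boolean {
  return Math.hypot(c1.x - c2.x, c1.y - c2.y) <= r1 + r2;
}
```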
  • in some implementations, the virtual scene also displays an action identifier of the action currently being performed by the first virtual object. In this case, the method provided in the embodiment of the present application further includes: based on the intersection of the range defining box after the second virtual object is dragged and the range defining box of the first virtual object, canceling the display of the action identifier of the action currently being performed by the first virtual object.
  • the method for canceling the display of the action identifier of the action currently being performed by the first virtual object can refer to the method for canceling the display of the action identifier of the action currently being performed by the second virtual object corresponding to FIG. 4 above, which will not be described in detail here.
  • Implementation method 1: based on the intersection of the range defining box after the second virtual object is dragged and the range defining box of the first virtual object, a prompt message is displayed, the prompt message being used to indicate cancellation of dragging the second virtual object; in response to the cancellation of dragging the second virtual object, an interactive action selection page is displayed. Canceling dragging means stopping dragging: in response to the second virtual object no longer being dragged (i.e., the terminal device detects that the drag operation has stopped), the interactive action selection page is displayed. The position of the second virtual object in the virtual scene before being dragged is different from its position in the virtual scene after the dragging stops.
  • the prompt information can be any content, and the embodiments of the present application do not limit this.
  • exemplarily, the prompt information is "Let go and select a two-player action".
  • FIG. 8 is a schematic diagram of displaying prompt information provided by an embodiment of the present application. In FIG. 8, the range defining box after the second virtual object is dragged intersects with the range defining box of the first virtual object; therefore, the display of the action identifier of the action currently performed by the first virtual object is canceled, and the prompt information "Let go and select a two-player action" is displayed.
  • Implementation method 2: based on the intersection of the range defining box of the second virtual object after being dragged and the range defining box of the first virtual object, a target object is displayed at a target position of the first virtual object, the target object being used to indicate cancellation of dragging the second virtual object; in response to the cancellation of dragging the second virtual object (i.e., the terminal device detects that the drag operation has stopped), the interactive action selection page is displayed.
  • the target position is an arbitrary position, and the target object is an arbitrary object, and the embodiments of the present application do not limit this.
  • exemplarily, the target position is at the feet of the first virtual object, and the target object is a circle; that is, a circle is displayed at the feet of the first virtual object.
  • FIG. 9 is a schematic diagram of displaying a target object at the target position of the first virtual object provided by an embodiment of the present application.
  • the shape of the target object shown in FIG. 9 is only an example and is not used to limit the shape of the target object.
  • the shape of the target object can be set according to actual needs.
  • the shape of the target object can also be a shape filled with a shadow.
  • an interactive action selection page is displayed, and at least one candidate interactive action is displayed in the interactive action selection page.
  • FIG. 10 is a schematic diagram of displaying an interactive action selection page provided in an embodiment of the present application, in which 1001 is the interactive action selection page and 1002 is a plurality of candidate interactive actions.
  • the interactive action selection page may also include a page identifier, which is used to indicate that the interactive action selection page is currently displayed. Exemplarily, the page identifier is "Select a two-person action" as shown in FIG. 10.
  • step 204: based on a second operation on a target interaction action among the multiple candidate interaction actions, a target page is displayed, and the first virtual object and the second virtual object in the target page interact in the virtual scene according to the target interaction action.
  • exemplarily, the terminal device can detect the second operation. When the second operation is detected and the second operation is directed to one or some candidate interaction actions among the multiple candidate interaction actions, the one or more candidate interaction actions targeted by the second operation are used as target interaction actions, so that based on the second operation on the target interaction action among the multiple candidate interaction actions, the subsequent process of displaying the target page can be executed.
  • the target interaction action is any one of the multiple candidate interaction actions.
  • the second operation on the target interaction action refers to a selection operation on the target interaction action.
  • the selection operation can be referred to in the description of step 202 above, which will not be described here.
  • in some implementations, the process of displaying the target page includes: based on the second operation on the target interactive action, generating an action data acquisition request, the action data acquisition request including the action identifier of the target interactive action, the object identifier of the second virtual object, and the object identifier of the first virtual object; sending the action data acquisition request to the server, the action data acquisition request being used to obtain the action data for the first virtual object and the second virtual object interacting according to the target interactive action; receiving the action data returned by the server based on the action data acquisition request; and running the action data and, in response to completion of the running of the action data, displaying the target page, in which the first virtual object and the second virtual object interact in the virtual scene according to the target interactive action. Exemplarily, the target page still displays the second virtual object at its position before being dragged and the action identifier of the action performed by the second virtual object before being dragged.
  • the action identifier of the target interactive action may be the action name of the target interactive action, or other identifiers that can uniquely represent the target interactive action, which is not limited in the embodiments of the present application.
  • the object identifier of the virtual object may be the user name of the user corresponding to the virtual object, or the account of the user corresponding to the virtual object in the application, or other identifiers that can uniquely represent the virtual object, which is not limited in the embodiments of the present application.
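  • as an illustration only, a minimal TypeScript sketch of the action data acquisition request might look as follows; the endpoint "/action-data" and all field names are assumptions, since the patent only specifies which identifiers the request carries:

```typescript
// Minimal sketch of generating and sending the action data acquisition
// request described above. Endpoint and field names are assumptions.
interface ActionDataRequest {
  actionId: string;      // action identifier of the target interactive action
  firstObjectId: string; // object identifier of the first virtual object
  secondObjectId: string; // object identifier of the second virtual object
  textContent?: string;  // optional text entered in the text input control
}

async function fetchActionData(req: ActionDataRequest): Promise<unknown> {
  const resp = await fetch("/action-data", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(req),
  });
  return resp.json(); // action data for the two objects interacting
}
```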
  • FIG. 11 is a schematic diagram of a target page provided by an embodiment of the present application. In FIG. 11, the second virtual object is candidate virtual object three, the target interaction action is drinking coffee, and the first virtual object and the second virtual object are drinking coffee in the virtual scene. The target page also displays the second virtual object at its position before being dragged, and the action identifier of the action performed by the second virtual object before being dragged.
  • in some implementations, the interactive action selection page also displays a text input control, and the text input control is used to obtain text content; for example, 1003 in FIG. 10 is a text input control.
  • in this case, the process of displaying the target page includes: based on the second operation for the target interactive action among the multiple candidate interactive actions and the text content input in the text input control, displaying the target page, in which the first virtual object and the second virtual object interact in the virtual scene according to the target interactive action, and the target page displays the text content.
  • FIG. 12 is a schematic diagram of displaying another target page provided by an embodiment of the present application, wherein the second virtual object is candidate virtual object three, the target interaction action is drinking coffee, the first virtual object and the second virtual object are drinking coffee in the virtual scene, and the text content "Let's drink coffee together" is displayed.
  • in some implementations, the interactive action selection page also displays a confirmation control; for example, 1004 in FIG. 10 is a confirmation control. In this case, the process of displaying the target page based on the second operation on the target interactive action among the multiple candidate interactive actions includes: displaying the target page based on the second operation on the target interactive action among the multiple candidate interactive actions and a third operation on the confirmation control.
  • the third operation on the confirmation control may be a selection operation on the confirmation control, and the timing of the third operation on the confirmation control is later than the timing of the second operation on the target interactive action among the multiple candidate interactive actions.
  • in this case, the process of displaying the target page may include: based on the second operation for the target interactive action among the multiple candidate interactive actions, the text content entered in the text input control, and the third operation for the confirmation control, displaying the target page.
  • the timing of the third operation for the confirmation control is later than the timing of the second operation for the target interactive action, and later than the timing of entering the text content in the text input control.
  • the timing of the second operation for the target interactive action may be earlier than the timing of entering the text content in the text input control, or later than the timing of entering the text content in the text input control, and the embodiments of the present application are not limited to this.
  • a process of displaying a target page includes: generating an action data acquisition request based on a second operation on a target interactive action among multiple candidate interactive actions, text content entered in a text input control, and a third operation on a confirmation control, the action data acquisition request including an action identifier of the target interactive action, an object identifier of the second virtual object, an object identifier of the first virtual object, and text content; sending an action data acquisition request to a server; receiving action data returned by the server based on the action data acquisition request; running the action data, and displaying a target page in response to completion of the running of the action data, in which the first virtual object and the second virtual object interact in the virtual scene according to the target interactive action, and text content is displayed on the target page.
  • in some implementations, the process of displaying the target page includes: based on the second operation on the target interaction action among the multiple candidate interaction actions, sending an interaction message to the terminal device used by the user corresponding to the second virtual object, the interaction message including the action identifier of the target interaction action and being used to indicate that the first virtual object and the second virtual object interact according to the target interaction action; and, based on receiving a confirmation message sent by the terminal device used by the user corresponding to the second virtual object, displaying the target page, in which the first virtual object and the second virtual object interact in the virtual scene according to the target interaction action. Exemplarily, the second virtual object at its position before being dragged may no longer be displayed in the target page.
  • the process of sending an interaction message to a terminal device used by a user corresponding to the second virtual object based on a second operation on a target interaction action among multiple candidate interaction actions includes: obtaining a friend list of the user corresponding to the first virtual object based on the second operation on the target interaction action among multiple candidate interaction actions; and sending an interaction message to the terminal device used by the user corresponding to the second virtual object based on the fact that the user corresponding to the second virtual object exists in the friend list of the user corresponding to the first virtual object.
  • in some implementations, the process of determining whether the user corresponding to the second virtual object exists in the friend list of the user corresponding to the first virtual object includes: determining the user ID of the user corresponding to the second virtual object; determining the user IDs of the users included in the friend list of the user corresponding to the first virtual object; and, based on the user ID of the user corresponding to the second virtual object being among the user IDs included in the friend list, determining that the user corresponding to the second virtual object exists in the friend list of the user corresponding to the first virtual object.
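  • as an illustration only, the friend-list check described above could be sketched as follows in TypeScript (all names are assumptions):

```typescript
// Minimal sketch of the friend-list check: the interaction message is sent
// only when the second object's user ID appears in the friend list of the
// user corresponding to the first virtual object.
interface InteractionMessage { actionId: string; toUserId: string; }

function sendIfFriend(
  friendListUserIds: string[],            // user IDs in the first user's friend list
  secondUserId: string,                   // user ID of the second object's user
  actionId: string,                       // action identifier of the target action
  send: (msg: InteractionMessage) => void, // transport, e.g. via the server
): void {
  if (friendListUserIds.includes(secondUserId)) {
    send({ actionId, toUserId: secondUserId });
  }
}
```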
  • FIG. 13 is a schematic diagram of a target page provided by an embodiment of the present application. In FIG. 13, the second virtual object is candidate virtual object three, the target interactive action is drinking coffee, and the target page also displays the text "Let's drink coffee together". In addition, the candidate virtual object three at its position before being dragged, and the action identifier of the action it performed before being dragged, are both no longer displayed.
  • the above method displays an interactive action selection page by dragging the second virtual object, and then selects the target interactive action in the interactive action selection page, so that the first virtual object and the second virtual object interact according to the target interactive action.
  • the method fully considers the position of the first virtual object and the second virtual object in the virtual scene, making the interactive process of the virtual objects more concise, improving the interactive efficiency of the virtual objects, improving the flexibility of the interaction, and thus improving the user's immersion in virtual social interaction.
  • FIG. 14 is a flow chart of a virtual object interaction method provided in an embodiment of the present application, which involves three execution entities: a user, a terminal device, and a server.
  • the user selects the second virtual object and keeps it selected for the target duration.
  • the target duration is set based on experience or adjusted according to the implementation environment, and the embodiment of the present application does not limit this. Exemplarily, the target duration is 1 second.
  • the terminal device sets the second virtual object to a drag mode, so that the second virtual object can be moved to any position.
  • the user drags the second virtual object so that the range defining box of the dragged second virtual object intersects with the range defining box of the first virtual object.
  • the terminal device displays the target object at the target position of the first virtual object.
  • the target position can be any position, and the target object can be any object, which is not limited in the embodiment of the present application. Exemplarily, the target position is at the feet of the first virtual object, and the target object is a circle.
  • the user cancels dragging the second virtual object and cancels selection of the second virtual object.
  • the terminal device displays an interactive action selection page, which displays at least one candidate interactive action, a text input control and a confirmation control; the text input control is used for the user to input text content; the text content can be any content, and the embodiment of the present application does not limit this.
  • the user selects a target interaction action from at least one candidate interaction action, enters text content in a text input control, and selects a confirmation control; the timing of selecting the target interaction action and the timing of entering text content in the text input control are before the timing of selecting the confirmation control; the timing of selecting the target interaction action may be before the timing of entering text content in the text input control or after the timing of entering text content in the text input control, and the embodiments of the present application are not limited to this.
  • the terminal device transmits the object identifier of the second virtual object, the action identifier of the target interactive action, the object identifier of the first virtual object, and the text content to the server, so that the server obtains action data according to the object identifier of the second virtual object, the action identifier of the target interactive action, and the object identifier of the first virtual object, and the action data is the action data of the first virtual object and the second virtual object interacting according to the target interactive action.
  • the server returns the action data.
  • the terminal device runs the action data, and after the running of the action data is completed, a target page is displayed.
  • the first virtual object and the second virtual object in the target page interact in the virtual scene according to the target interaction action, and text content is displayed in the target page.
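  • as an illustration only, the terminal-side tail of this flow (confirm, fetch action data, run it, then display the target page) could be sketched as follows in TypeScript, with all names assumed:

```typescript
// Minimal sketch of the terminal-side tail of the FIG. 14 flow: after the
// user confirms, send the identifiers and text to the server, run the
// returned action data, then display the target page.
interface ConfirmPayload {
  actionId: string;
  firstObjectId: string;
  secondObjectId: string;
  textContent: string;
}

interface ActionData { run(): Promise<void>; }

async function onConfirm(
  payload: ConfirmPayload,
  getActionData: (p: ConfirmPayload) => Promise<ActionData>, // server round trip
  showTargetPage: () => void,
): Promise<void> {
  const actionData = await getActionData(payload);
  await actionData.run(); // run the action data returned by the server
  showTargetPage();       // display the target page once running completes
}
```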
  • FIG. 15 is a schematic diagram of the structure of a virtual object interaction device provided in an embodiment of the present application. As shown in FIG. 15, the device includes:
  • a display module 1501, configured to display a virtual scene, wherein the virtual scene displays a first virtual object and at least one candidate virtual object;
  • a control module 1502 configured to set a second virtual object in a draggable state in response to a first operation on a second virtual object among the at least one candidate virtual object;
  • the display module 1501 is further configured to display an interactive action selection page based on a drag operation on a second virtual object among the at least one candidate virtual object, wherein the interactive action selection page displays a plurality of candidate interactive actions;
  • the display module 1501 is further configured to display a target page based on a second operation on a target interaction action among multiple candidate interaction actions, in which the first virtual object and the second virtual object interact in the virtual scene according to the target interaction action.
  • control module 1502 is an optional module. That is, the device provided in the embodiment of the present application may only include the above-mentioned display module 1501. In some implementations, the control module 1502 may also be included.
  • in some implementations, the device further includes: a determination module, configured to determine, based on the drag operation on the second virtual object, a range defining box after the second virtual object is dragged, the range defining box being used to indicate an area covering the second virtual object. The steps performed by the determination module may also be completed by the display module 1501; that is, the display module 1501 is configured to determine, based on the drag operation on the second virtual object, the range defining box after the second virtual object is dragged, the range defining box being used to indicate an area covering the second virtual object;
  • the display module 1501 is configured to display the interactive action selection page based on the intersection of the range defining box of the second virtual object after being dragged and the range defining box of the first virtual object.
  • in some implementations, the display module 1501 is used to display prompt information based on the intersection of the range defining box after the second virtual object is dragged and the range defining box of the first virtual object, where the prompt information is used to indicate cancellation of dragging (i.e., stopping dragging) the second virtual object; and, in response to the cancellation of dragging the second virtual object (i.e., the second virtual object stops being dragged), to display an interactive action selection page.
  • in some implementations, the display module 1501 is used to display a target object at a target position of the first virtual object based on the intersection of the range defining box after the second virtual object is dragged and the range defining box of the first virtual object, wherein the target object is used to indicate cancellation of dragging (i.e., stopping dragging) the second virtual object; and, in response to the cancellation of dragging the second virtual object (i.e., the second virtual object stops being dragged), to display an interactive action selection page.
  • a determination module is used to determine the center position of the second virtual object after it is dragged based on a drag operation on the second virtual object; determine a reference area with the center position of the second virtual object after it is dragged as the center; and use the reference area as a range defining box after the second virtual object is dragged.
  • the steps performed by the determination module can also be completed by the display module 1501, that is, the display module 1501 is used to determine the center position of the second virtual object after being dragged based on the dragging operation on the second virtual object; determine a reference area with the center position of the second virtual object after being dragged as the center; and use the reference area as a range defining box after the second virtual object is dragged.
  • in some implementations, the virtual scene also displays an action identifier of the action currently performed by the first virtual object; the control module 1502 is further used to cancel the display of the action identifier of the action currently performed by the first virtual object based on the intersection of the range defining box after the second virtual object is dragged and the range defining box of the first virtual object.
  • in some implementations, the virtual scene also displays action identifiers of the actions currently performed by each candidate virtual object; the control module 1502 is also used to cancel the display of the action identifier of the action currently performed by the second virtual object in response to a first operation on a second virtual object among the at least one candidate virtual object; or, in response to the first operation on the second virtual object among the at least one candidate virtual object, to cancel the display of the action identifiers of the actions currently performed by each candidate virtual object.
  • in some implementations, the interactive action selection page also displays a text input control, which is used to obtain text content; the display module 1501 is used to display a target page based on the second operation for the target interactive action among the multiple candidate interactive actions and the text content entered in the text input control, in which the first virtual object and the second virtual object interact in the virtual scene according to the target interactive action, and the target page displays the text content.
  • the device further includes:
  • a generating module configured to generate an action data acquisition request based on a second operation on a target interaction action among the plurality of candidate interaction actions, wherein the action data acquisition request includes an action identifier of the target interaction action, an object identifier of the second virtual object, and an object identifier of the first virtual object;
  • a sending module used for sending an action data acquisition request to a server, where the action data acquisition request is used for acquiring action data when the first virtual object and the second virtual object interact according to a target interaction action;
  • a receiving module, used for receiving the action data returned by the server based on the action data acquisition request;
  • a running module, used to run the action data;
  • the display module 1501 is used to display the target page in response to the completion of the running of the action data.
  • the steps performed by the above-mentioned generation module, sending module, receiving module and running module can also be completed by the display module 1501. That is, the display module 1501 is used to generate an action data acquisition request based on a second operation on a target interactive action among multiple candidate interactive actions, the action data acquisition request including an action identifier of the target interactive action, an object identifier of the second virtual object and an object identifier of the first virtual object; send the action data acquisition request to the server, the action data acquisition request is used to acquire action data when the first virtual object and the second virtual object interact according to the target interactive action; receive the action data returned by the server based on the action data acquisition request; run the action data; and display the target page in response to the completion of the running of the action data.
  • a sending module is used to send an interaction message to a terminal device used by a user corresponding to the second virtual object based on a second operation on a target interaction action among multiple candidate interaction actions, the interaction message includes an action identifier of the target interaction action, and the interaction message is used to instruct the first virtual object and the second virtual object to interact according to the target interaction action.
  • the steps performed by the sending module can be completed by the display module 1501, that is, the display module 1501 is used to send an interaction message to a terminal device used by a user corresponding to the second virtual object based on a second operation on a target interaction action among multiple candidate interaction actions, the interaction message includes an action identifier of the target interaction action, and the interaction message is used to instruct the first virtual object and the second virtual object to interact according to the target interaction action;
  • the display module 1501 is configured to display a target page based on receiving a confirmation message sent by a terminal device used by a user corresponding to the second virtual object.
  • in some implementations, the sending module is configured to: based on the second operation on the target interaction action among the multiple candidate interaction actions, obtain a friend list of the user corresponding to the first virtual object; and, based on the user corresponding to the second virtual object existing in the friend list of the user corresponding to the first virtual object, send an interaction message to the terminal device used by the user corresponding to the second virtual object.
  • the steps performed by the sending module can be completed by the display module 1501, that is, the display module 1501 is used to obtain the friend list of the user corresponding to the first virtual object based on the second operation for the target interaction action among multiple candidate interaction actions; based on the fact that the user corresponding to the second virtual object exists in the friend list of the user corresponding to the first virtual object, send an interaction message to the terminal device used by the user corresponding to the second virtual object.
  • it should be noted that, when the above-mentioned device realizes its functions, the division into the above-mentioned functional modules is used only as an example for illustration. In practical applications, the above-mentioned functions can be assigned to different functional modules as needed; that is, the internal structure of the device can be divided into different functional modules to complete all or part of the functions described above.
  • the device and method embodiments provided in the above embodiments belong to the same concept, and their specific implementation process and the technical effects produced are detailed in the method embodiments, which will not be repeated here.
  • FIG. 16 shows a block diagram of a terminal device 1600 provided by an exemplary embodiment of the present application.
  • the terminal device 1600 may be a portable mobile terminal, such as a smart phone, a tablet computer, an MP3 player (Moving Picture Experts Group Audio Layer III), an MP4 player (Moving Picture Experts Group Audio Layer IV), a laptop computer or a desktop computer.
  • the terminal device 1600 may also be referred to as a user device, a portable terminal, a laptop terminal, a desktop terminal or other names.
  • the terminal device 1600 includes: a processor 1601 and a memory 1602 .
  • the processor 1601 may include one or more processing cores, such as a 4-core processor, an 8-core processor, etc.
  • the processor 1601 may be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), and PLA (Programmable Logic Array).
  • the processor 1601 may also include a main processor and a coprocessor.
  • the main processor is a processor for processing data in the awake state, also known as a CPU (Central Processing Unit); the coprocessor is a low-power processor for processing data in the standby state.
  • the processor 1601 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content to be displayed on the display screen.
  • the processor 1601 may also include an AI (Artificial Intelligence) processor, which is used to process computing operations related to machine learning.
  • the memory 1602 may include one or more computer-readable storage media, which may be non-transitory (also referred to as non-temporary).
  • the memory 1602 may also include a high-speed random access memory, and a non-volatile memory, such as one or more disk storage devices, flash memory storage devices.
  • the non-transitory computer-readable storage medium in the memory 1602 is used to store at least one instruction, which is used to be executed by the processor 1601 to implement the virtual object interaction method provided in the method embodiment of the present application.
  • the terminal device 1600 may further optionally include: a peripheral device interface 1603 and at least one peripheral device.
  • the processor 1601, the memory 1602 and the peripheral device interface 1603 may be connected via a bus or a signal line.
  • Each peripheral device may be connected to the peripheral device interface 1603 via a bus, a signal line or a circuit board.
  • the peripheral device includes: at least one of a radio frequency circuit 1604, a display screen 1605, a camera assembly 1606, an audio circuit 1607 and a power supply 1609.
  • the peripheral device interface 1603 may be used to connect at least one peripheral device related to I/O (Input/Output) to the processor 1601 and the memory 1602.
  • the processor 1601, the memory 1602, and the peripheral device interface 1603 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1601, the memory 1602, and the peripheral device interface 1603 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
  • the RF circuit 1604 is used to receive and transmit RF (Radio Frequency) signals, also known as electromagnetic signals.
  • the RF circuit 1604 communicates with the communication network and other communication devices through electromagnetic signals.
  • the RF circuit 1604 converts electrical signals into electromagnetic signals for transmission, or converts received electromagnetic signals into electrical signals.
  • the RF circuit 1604 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so on.
  • the radio frequency circuit 1604 can communicate with other terminal devices through at least one wireless communication protocol.
  • the wireless communication protocol includes but is not limited to: World Wide Web, metropolitan area network, intranet, various generations of mobile communication networks (2G, 3G, 4G and 5G), wireless local area network and/or WiFi (Wireless Fidelity) network.
  • the radio frequency circuit 1604 may also include NFC (Near Field Communication) related circuits, which are not limited in this application.
  • the display screen 1605 is used to display a UI (User Interface).
  • the UI may include graphics, text, icons, videos, and any combination thereof.
  • when the display screen 1605 is a touch display screen, the display screen 1605 also has the ability to collect touch signals on or above the surface of the display screen 1605.
  • the touch signal may be input as a control signal to the processor 1601 for processing.
  • the display screen 1605 may also be used to provide virtual buttons and/or virtual keyboards, also known as soft buttons and/or soft keyboards.
  • there may be one display screen 1605, disposed on the front panel of the terminal device 1600; in other embodiments, there may be at least two display screens 1605, disposed on different surfaces of the terminal device 1600 or in a folded design; in still other embodiments, the display screen 1605 may be a flexible display screen disposed on a curved or folded surface of the terminal device 1600; the display screen 1605 may even be configured as a non-rectangular irregular figure, i.e., a special-shaped screen.
  • the display screen 1605 can be made of materials such as LCD (Liquid Crystal Display) and OLED (Organic Light-Emitting Diode).
  • the camera assembly 1606 is used to capture images or videos.
  • the camera assembly 1606 includes a front camera and a rear camera.
  • the front camera is set on the front panel of the terminal device 1600, and the rear camera is set on the back of the terminal device 1600.
  • there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so as to fuse the main camera and the depth-of-field camera for a background blur function, fuse the main camera and the wide-angle camera for panoramic shooting and VR (Virtual Reality) shooting, or realize other fused shooting functions.
  • the camera assembly 1606 may also include a flash.
  • the flash can be a single-color temperature flash or a dual-color temperature flash.
  • a dual-color temperature flash refers to a combination of a warm light flash and a cold light flash, which can be used for light compensation at different color temperatures.
  • the audio circuit 1607 may include a microphone and a speaker.
  • the microphone is used to collect sound waves from the user and the environment, and convert the sound waves into electrical signals and input them into the processor 1601 for processing, or input them into the radio frequency circuit 1604 to achieve voice communication.
  • the microphone may also be an array microphone or an omnidirectional acquisition microphone.
  • the speaker is used to convert the electrical signal from the processor 1601 or the radio frequency circuit 1604 into sound waves.
  • the speaker may be a traditional film speaker or a piezoelectric ceramic speaker.
  • when the speaker is a piezoelectric ceramic speaker, it can not only convert the electrical signal into sound waves audible to humans, but also convert the electrical signal into sound waves inaudible to humans for purposes such as ranging.
  • the audio circuit 1607 may also include a headphone jack.
  • the power supply 1609 is used to power various components in the terminal device 1600.
  • the power supply 1609 can be an alternating current, a direct current, a disposable battery, or a rechargeable battery.
  • the rechargeable battery can be a wired rechargeable battery or a wireless rechargeable battery.
  • a wired rechargeable battery is a battery charged through a wired line, and a wireless rechargeable battery is a battery charged through a wireless coil.
  • the rechargeable battery can also be used to support fast charging technology.
  • the terminal device 1600 further includes one or more sensors 1610, including but not limited to: an acceleration sensor 1611, a gyroscope sensor 1612, a pressure sensor 1613, an optical sensor 1615, and a proximity sensor 1616.
  • the acceleration sensor 1611 can detect the magnitude of acceleration on the three coordinate axes of the coordinate system established by the terminal device 1600.
  • the acceleration sensor 1611 can be used to detect the components of gravity acceleration on the three coordinate axes.
  • the processor 1601 can control the display screen 1605 to display the user interface in a horizontal view or a vertical view according to the gravity acceleration signal collected by the acceleration sensor 1611.
  • the acceleration sensor 1611 can also be used for collecting game or user motion data.
  • the gyroscope sensor 1612 can detect the body direction and rotation angle of the terminal device 1600, and the gyroscope sensor 1612 can cooperate with the acceleration sensor 1611 to collect the user's 3D actions on the terminal device 1600.
  • the processor 1601 can implement the following functions based on the data collected by the gyroscope sensor 1612: motion sensing (such as changing the UI according to the user's tilt operation), image stabilization during shooting, game control, and inertial navigation.
  • the pressure sensor 1613 can be set on the side frame of the terminal device 1600 and/or the lower layer of the display screen 1605.
  • when the pressure sensor 1613 is set on the side frame of the terminal device 1600, the processor 1601 performs left/right-hand recognition or shortcut operations according to the holding signal collected by the pressure sensor 1613.
  • when the pressure sensor 1613 is set on the lower layer of the display screen 1605, the processor 1601 controls the operability controls on the UI according to the user's pressure operation on the display screen 1605.
  • the operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
  • the optical sensor 1615 is used to collect the ambient light intensity.
  • the processor 1601 can control the display brightness of the display screen 1605 according to the ambient light intensity collected by the optical sensor 1615. Specifically, when the ambient light intensity is high, the display brightness of the display screen 1605 is increased; when the ambient light intensity is low, the display brightness of the display screen 1605 is reduced.
  • the processor 1601 can also dynamically adjust the shooting parameters of the camera component 1606 according to the ambient light intensity collected by the optical sensor 1615.
  • the proximity sensor 1616, also called a distance sensor, is usually arranged on the front panel of the terminal device 1600.
  • the proximity sensor 1616 is used to collect the distance between the user and the front of the terminal device 1600.
  • when the proximity sensor 1616 detects that the distance between the user and the front of the terminal device 1600 is gradually decreasing, the processor 1601 controls the display screen 1605 to switch from the screen-on state to the screen-off state; when the proximity sensor 1616 detects that the distance between the user and the front of the terminal device 1600 is gradually increasing, the processor 1601 controls the display screen 1605 to switch from the screen-off state to the screen-on state.
  • the structure shown in FIG. 16 does not limit the terminal device 1600, which may include more or fewer components than shown in the figure, combine certain components, or adopt a different component arrangement.
  • FIG. 17 is a schematic diagram of the structure of the server provided in the embodiments of the present application.
  • the server 1700 may have relatively large differences due to different configurations or performances, and may include one or more processors (Central Processing Units, CPU) 1701 and one or more memories 1702, wherein the one or more memories 1702 store at least one program code, and the at least one program code is loaded and executed by the one or more processors 1701 to implement the virtual object interaction method provided by the above-mentioned various method embodiments.
  • the server 1700 may also have components such as a wired or wireless network interface, a keyboard, and an input and output interface for input and output.
  • the server 1700 may also include other components for implementing device functions, which will not be described in detail here.
  • in an exemplary embodiment, a non-transitory computer-readable storage medium is also provided, in which at least one program code is stored.
  • the at least one program code is loaded and executed by a processor to enable a computer to implement any of the above-mentioned virtual object interaction methods.
  • the above non-transitory computer-readable storage medium can be a read-only memory (ROM), a random access memory (RAM), a compact disc read-only memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, etc.
  • a computer program or a computer program product is also provided, wherein at least one computer instruction is stored in the computer program or the computer program product, and the at least one computer instruction is loaded and executed by a processor to enable a computer to implement any of the above-mentioned virtual object interaction methods.
  • the information (including but not limited to user device information, user personal information, etc.), data (including but not limited to data used for analysis, stored data, displayed data, etc.) and signals involved in this application are all authorized by the user or fully authorized by all parties, and the collection, use and processing of relevant data comply with the relevant laws, regulations and standards of relevant countries and regions.
  • the virtual scenes involved in this application are all obtained with full authorization.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Provided are a virtual object interaction method, apparatus and device, and a computer-readable storage medium, belonging to the field of Internet technology. The method comprises: displaying a virtual scene, the virtual scene comprising a first virtual object and at least one candidate virtual object (201); displaying an interaction action selection page based on a drag operation on a second virtual object among the at least one candidate virtual object, the interaction action selection page comprising a plurality of candidate interaction actions (203); and displaying a target page based on a second operation on a target interaction action among the plurality of candidate interaction actions, wherein in the target page the first virtual object and the second virtual object interact in the virtual scene according to the target interaction action (204).

Description

Virtual object interaction method, apparatus, device and computer-readable storage medium
This application claims priority to Chinese Patent Application No. 202211275400.4, filed on October 18, 2022 and entitled "Virtual object interaction method, apparatus, device and computer-readable storage medium", the entire contents of which are incorporated herein by reference.
Technical Field
The embodiments of the present application relate to the field of Internet technology, and in particular to a virtual object interaction method, apparatus, device and computer-readable storage medium.
Background
With the continuous development of Internet technology, people have increasingly high requirements for forms of entertainment. For example, during game interaction, a user can interact by controlling a virtual object in a virtual scene.
Summary
The embodiments of the present application provide a virtual object interaction method, apparatus, device and computer-readable storage medium; the technical solution includes but is not limited to the following aspects.
In one aspect, an embodiment of the present application provides a virtual object interaction method, the method being executed by a terminal device and comprising:
displaying a virtual scene, the virtual scene comprising a first virtual object and at least one candidate virtual object;
displaying an interaction action selection page based on a drag operation on a second virtual object among the at least one candidate virtual object, the interaction action selection page comprising a plurality of candidate interaction actions;
displaying a target page based on a second operation on a target interaction action among the plurality of candidate interaction actions, wherein in the target page the first virtual object and the second virtual object interact in the virtual scene according to the target interaction action.
In another aspect, an embodiment of the present application provides a virtual object interaction apparatus, the apparatus comprising:
a display module, configured to display a virtual scene, the virtual scene comprising a first virtual object and at least one candidate virtual object;
the display module being further configured to display an interaction action selection page based on a drag operation on a second virtual object among the at least one candidate virtual object, the interaction action selection page comprising a plurality of candidate interaction actions;
the display module being further configured to display a target page based on a second operation on a target interaction action among the plurality of candidate interaction actions, wherein in the target page the first virtual object and the second virtual object interact in the virtual scene according to the target interaction action.
In another aspect, an embodiment of the present application provides a computer device, the computer device comprising a processor and a memory, the memory storing at least one piece of program code, and the at least one piece of program code being loaded and executed by the processor to cause the computer device to implement any of the virtual object interaction methods described above.
In another aspect, a non-transitory computer-readable storage medium is provided, the non-transitory computer-readable storage medium storing at least one piece of program code, the at least one piece of program code being loaded and executed by a processor to cause a computer to implement any of the virtual object interaction methods described above.
In another aspect, a computer program or computer program product is provided, the computer program or computer program product storing at least one computer instruction, the at least one computer instruction being loaded and executed by a processor to cause a computer to implement any of the virtual object interaction methods described above.
The technical solution provided by the embodiments of the present application displays an interaction action selection page by dragging the second virtual object, and then selects a target interaction action in the interaction action selection page, so that the first virtual object and the second virtual object interact according to the target interaction action. This method fully considers the positions of the first virtual object and the second virtual object in the virtual scene, making the interaction process between virtual objects more concise, improving the interaction efficiency and flexibility of virtual objects, and thereby improving the user's sense of immersion in virtual social interaction. Moreover, precisely because the interaction process is more concise, the number of user operations is reduced, which reduces the number of times the terminal device has to respond to user operations and thus saves terminal device overhead.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of an implementation environment of a virtual object interaction method provided by an embodiment of the present application;
FIG. 2 is a flowchart of a virtual object interaction method provided by an embodiment of the present application;
FIG. 3 is a schematic display diagram of a virtual scene provided by an embodiment of the present application;
FIG. 4 is a schematic display diagram of another virtual scene provided by an embodiment of the present application;
FIG. 5 is a schematic display diagram of another virtual scene provided by an embodiment of the present application;
FIG. 6 is a schematic diagram of a bounding box of a second virtual object after being dragged, provided by an embodiment of the present application;
FIG. 7 is a schematic diagram of another bounding box of a second virtual object after being dragged, provided by an embodiment of the present application;
FIG. 8 is a schematic display diagram of prompt information provided by an embodiment of the present application;
FIG. 9 is a schematic display diagram of displaying a target object at a target position of a first virtual object, provided by an embodiment of the present application;
FIG. 10 is a schematic display diagram of an interaction action selection page provided by an embodiment of the present application;
FIG. 11 is a schematic display diagram of a target page provided by an embodiment of the present application;
FIG. 12 is a schematic display diagram of another target page provided by an embodiment of the present application;
FIG. 13 is a schematic display diagram of a target page provided by an embodiment of the present application;
FIG. 14 is a flowchart of a virtual object interaction method provided by an embodiment of the present application;
FIG. 15 is a schematic structural diagram of a virtual object interaction apparatus provided by an embodiment of the present application;
FIG. 16 is a schematic structural diagram of a terminal device provided by an embodiment of the present application;
FIG. 17 is a schematic structural diagram of a server provided by an embodiment of the present application.
Detailed Description
To make the objectives, technical solutions and advantages of the present application clearer, the embodiments of the present application are further described in detail below with reference to the accompanying drawings.
Definitions of abbreviations or key terms involved in the embodiments of the present application:
Virtual social interaction: a user customizes his or her own 2D (two-dimensional) or 3D (three-dimensional) virtual object (including but not limited to a humanoid model or a model of another form), and uses this virtual object to socially chat with the virtual objects of other users.
FIG. 1 is a schematic diagram of an implementation environment of a virtual object interaction method provided by an embodiment of the present application. As shown in FIG. 1, the implementation environment includes a terminal device 101 and a server 102.
The terminal device 101 may be at least one of a smartphone, a game console, a desktop computer, a tablet computer, an e-book reader and a laptop portable computer. The terminal device 101 is configured to execute the virtual object interaction method provided by the embodiments of the present application.
The terminal device 101 may generally refer to one of a plurality of terminal devices; this embodiment is illustrated only with the terminal device 101. Those skilled in the art will appreciate that the number of terminal devices 101 may be larger or smaller. For example, there may be only one terminal device 101, or there may be dozens, hundreds or more; the embodiments of the present application do not limit the number or device types of the terminal devices.
The server 102 is a single server, a server cluster composed of multiple servers, or any one of a cloud computing platform and a virtualization center, which is not limited in the embodiments of the present application. The server 102 is communicatively connected to the terminal device 101 through a wired or wireless network. The server 102 has a data receiving function, a data processing function and a data sending function. Of course, the server 102 may also have other functions, which are not limited in the embodiments of the present application.
Those skilled in the art should understand that the above terminal device 101 and server 102 are only examples; other existing or future terminal devices or servers, if applicable to the present application, should also be included within the protection scope of the present application and are hereby incorporated by reference.
An embodiment of the present application provides a virtual object interaction method, which can be applied to the implementation environment shown in FIG. 1 above. Taking the flowchart of a virtual object interaction method shown in FIG. 2 as an example, the method can be executed by the terminal device 101 in FIG. 1. As shown in FIG. 2, the method includes the following steps:
In step 201, a virtual scene is displayed, with a first virtual object and at least one candidate virtual object displayed in the virtual scene.
In the exemplary embodiments of the present application, an application program capable of providing a virtual scene is installed and run in the terminal device. The application program may be an application program that needs to be downloaded and installed (also called a host program), or an embedded program that depends on a host program to run, such as a mini program, which is not limited in the embodiments of the present application. An embedded program is an application program developed based on a programming language that depends on a host program to run. An embedded program does not need to be downloaded and installed; it only needs to be dynamically loaded in the host program to run. A user can find the embedded program he or she needs by searching, scanning a code, etc., and open it in the host program to use it; after use, the embedded program is closed, so it does not occupy the terminal's memory, which is very convenient.
Exemplarily, based on an operation instruction for the application program, the application program is opened and a virtual scene is displayed, with a first virtual object and at least one candidate virtual object displayed in the virtual scene; that is, the virtual scene includes the first virtual object and at least one candidate virtual object. The user corresponding to a candidate virtual object may or may not be a friend user of the user corresponding to the first virtual object. The operation instruction for the application program may be a click operation on the icon of the application program, or another operation, neither of which is limited in the embodiments of the present application.
FIG. 3 is a schematic display diagram of a virtual scene provided by an embodiment of the present application, in which a first virtual object 301, candidate virtual object one 302, candidate virtual object two 303, candidate virtual object three 304 and candidate virtual object four 305 are displayed. The virtual scene may further include a scene identifier, such as "Status Square" shown in FIG. 3; the scene identifier is used to indicate that the user is currently in the virtual scene.
It should be noted that the user can also zoom in or zoom out on the virtual scene. When the virtual scene is zoomed in, the area of the virtual scene displayed in the display page of the terminal device is smaller, and fewer virtual objects are displayed in the virtual scene; when the virtual scene is zoomed out, the area of the virtual scene displayed in the display page of the terminal device is larger, and more virtual objects are displayed in the virtual scene.
In step 202, in response to a first operation on a second virtual object among the at least one candidate virtual object, the second virtual object is set to a draggable state.
The terminal device can detect the first operation. When the first operation is detected and the first operation targets one or more of the at least one candidate virtual object, the one or more candidate virtual objects targeted by the first operation are taken as the second virtual object, so that in response to the first operation on the second virtual object among the at least one candidate virtual object, the subsequent process of setting the second virtual object to the draggable state can be executed.
Exemplarily, the first operation on the second virtual object may refer to a long-press operation on the second virtual object. When the second virtual object is selected and the selection duration exceeds a target duration, it is determined that a long-press operation on the second virtual object has been received. Optionally, the target duration is set based on experience or adjusted according to the implementation environment, which is not limited in the embodiments of the present application; exemplarily, the target duration is 1 second. Selecting the second virtual object may refer to a click operation (single-click, double-click or another click manner) on the second virtual object, or a voice operation of selecting the second virtual object (for example, issuing a voice message "select X", where X is the name of the second virtual object); the embodiments of the present application do not limit the manner of selecting the second virtual object.
Optionally, based on receiving a selection operation on the second virtual object, a first time at which the selection operation on the second virtual object is received is determined, and a second time is determined according to the target duration and the first time (for example, the sum of the target duration and the first time is taken as the second time). When the second virtual object is still in the selected state at the second time, it indicates that the first operation on the second virtual object among the at least one candidate virtual object has been detected, and the second virtual object is set to the draggable state.
Exemplarily, the selection operation on the second virtual object is received at 11:21:25 (i.e., the first time) and the target duration is 1 second, so the second time is 11:21:26. When the second virtual object is still in the selected state at 11:21:26, the second virtual object is set to the draggable state.
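Purely to illustrate the timing logic above, the following minimal sketch (in Python, with hypothetical callbacks is_selected and set_draggable that are not named in the source text) shows how a terminal device might detect such a long press:

    import time

    TARGET_DURATION = 1.0  # target duration in seconds; set based on experience per the embodiment

    def detect_long_press(obj, is_selected, set_draggable):
        """Set obj draggable if it stays selected for TARGET_DURATION seconds."""
        first_time = time.monotonic()               # the first time: selection operation received
        second_time = first_time + TARGET_DURATION  # the second time: first time + target duration
        while time.monotonic() < second_time:
            if not is_selected(obj):                # selection released early: not a long press
                return False
            time.sleep(0.01)                        # poll the selection state
        set_draggable(obj)                          # still selected at the second time
        return True

In this sketch the polling interval and the callback interface are assumptions; the embodiment only requires that the object remain selected from the first time until the second time.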
In a possible implementation, action identifiers of the actions currently performed by the respective candidate virtual objects are also displayed in (also included in) the virtual scene. For example, bubbles corresponding to the respective candidate virtual objects are displayed in the virtual scene, and the action identifier of the action currently performed by each virtual object is displayed in its bubble. The action identifier may be an image of the action, the name of the action, or another identifier that can uniquely represent the action, which is not limited in the embodiments of the present application. In FIG. 3, 307 is the bubble corresponding to candidate virtual object one, 308 is the bubble corresponding to candidate virtual object two, 309 is the bubble corresponding to candidate virtual object three, and 310 is the bubble corresponding to candidate virtual object four.
The method provided by the embodiments of the present application further includes: in response to the first operation on the second virtual object among the at least one candidate virtual object, canceling the display of the action identifier of the action currently performed by the second virtual object. FIG. 4 is a schematic display diagram of another virtual scene provided by an embodiment of the present application, in which candidate virtual object three is the second virtual object; based on the first operation on candidate virtual object three, the display of the bubble corresponding to candidate virtual object three is canceled, that is, the display of the action identifier of the action currently performed by candidate virtual object three is canceled.
Alternatively, action identifiers of the actions currently performed by the respective virtual objects are also displayed in (also included in) the virtual scene. The method provided by the embodiments of the present application further includes: in response to the first operation on the second virtual object among the at least one candidate virtual object, canceling the display of the action identifiers of the actions currently performed by the respective virtual objects. FIG. 5 is a schematic display diagram of another virtual scene provided by an embodiment of the present application, in which candidate virtual object three is the second virtual object; based on the first operation on candidate virtual object three, the display of the action identifiers of the actions currently performed by the respective virtual objects is canceled.
Exemplarily, step 202 is an optional step. In some implementations, after the virtual scene is displayed in step 201, the at least one candidate virtual object included in the virtual scene is in a non-draggable state by default; in that case, step 202 needs to be executed to set the second virtual object to the draggable state according to the above first operation, facilitating the user's subsequent drag operation on the second virtual object. Alternatively, in other implementations, after the virtual scene is displayed in step 201, the at least one candidate virtual object included in the virtual scene is in the draggable state by default; in that case, step 202 does not need to be executed, and the user can subsequently perform a drag operation on the second virtual object directly.
In step 203, based on a drag operation on the second virtual object, an interaction action selection page is displayed, with a plurality of candidate interaction actions displayed in the interaction action selection page.
The terminal device can detect the drag operation. When the drag operation is detected and the drag operation targets one or more of the at least one candidate virtual object, the one or more candidate virtual objects targeted by the drag operation are taken as the second virtual object, so that based on the drag operation on the second virtual object among the at least one candidate virtual object, the subsequent process of displaying the interaction action selection page can be executed, the interaction action selection page including a plurality of candidate interaction actions.
It should be understood that when step 202 needs to be executed before step 203, the first operation in step 202 and the drag operation in step 203 target the same second virtual object. In some implementations, the first operation and the drag operation are continuous operations, that is, the end moment of the first operation is the same as the start moment of the drag operation; in that case, the user can perform the drag operation continuously without releasing his or her finger after performing the first operation. In other implementations, the first operation and the drag operation are discontinuous operations, that is, the end moment of the first operation is earlier than the start moment of the drag operation; in that case, the user can release his or her finger after performing the first operation and then perform the drag operation.
In a possible implementation, the process of displaying the interaction action selection page based on the drag operation on the second virtual object includes: determining, based on the drag operation on the second virtual object, a bounding box of the second virtual object after being dragged; and displaying the interaction action selection page based on the bounding box of the second virtual object after being dragged intersecting the bounding box of the first virtual object. The bounding box of the second virtual object after being dragged is used to indicate the area covering the second virtual object, and the bounding box of the first virtual object is used to indicate the area covering the first virtual object.
The embodiments of the present application do not limit the process of determining, based on the drag operation on the second virtual object, the bounding box of the second virtual object after being dragged. Exemplarily, based on the drag operation on the second virtual object, the center position of the second virtual object after being dragged is determined; a reference area is determined with the center position of the second virtual object after being dragged as the center; and the reference area is taken as the bounding box of the second virtual object after being dragged.
Optionally, a rectangle is determined with the center position of the second virtual object after being dragged as the center, a first length as the width and a second length as the height, and the area corresponding to the rectangle is taken as the reference area. The first length and the second length are set based on experience or adjusted according to the implementation environment, which is not limited in the embodiments of the present application. FIG. 6 is a schematic diagram of a bounding box of a second virtual object after being dragged, provided by an embodiment of the present application.
Optionally, a circle is determined with the center position of the second virtual object after being dragged as the center and a third length as the radius, and the area corresponding to the circle is taken as the reference area. The third length is set based on experience or adjusted according to the implementation environment, which is not limited in the embodiments of the present application. FIG. 7 is a schematic diagram of another bounding box of a second virtual object after being dragged, provided by an embodiment of the present application.
The above takes a rectangular or circular reference area as an example; the embodiments of the present application do not limit the shape of the reference area, which may also be a triangle or another possible shape. In addition, the bounding box may have various forms: besides the transparent form (invisible to the user) shown in FIG. 6 and FIG. 7, the bounding box may also take a non-transparent form (visible to the user), such as a form filled with shading. It should be noted that the process of determining the bounding box of the first virtual object is similar to the process of determining the bounding box of the second virtual object after being dragged, and will not be repeated here.
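As a concrete reading of the rectangular and circular reference areas above, the following sketch (illustrative only; the axis-aligned treatment and all names are assumptions rather than requirements of the embodiment) builds a reference area around the dragged center position and tests it against the first virtual object's bounding box:

    from dataclasses import dataclass

    @dataclass
    class Rect:
        cx: float  # center x
        cy: float  # center y
        w: float   # width  (the "first length")
        h: float   # height (the "second length")

    def rect_reference_area(center, first_length, second_length):
        """Rectangular reference area centered on the dragged center position."""
        return Rect(center[0], center[1], first_length, second_length)

    def rects_intersect(a, b):
        """Axis-aligned overlap test between two rectangular bounding boxes."""
        return (abs(a.cx - b.cx) * 2 <= a.w + b.w and
                abs(a.cy - b.cy) * 2 <= a.h + b.h)

    def circle_intersects_rect(center, third_length, r):
        """Circular reference area (radius = the "third length") vs. a rectangle:
        the circle overlaps iff the rectangle's closest point lies within the radius."""
        closest_x = min(max(center[0], r.cx - r.w / 2), r.cx + r.w / 2)
        closest_y = min(max(center[1], r.cy - r.h / 2), r.cy + r.h / 2)
        dx, dy = center[0] - closest_x, center[1] - closest_y
        return dx * dx + dy * dy <= third_length ** 2

If the dragged second virtual object's bounding box intersects the first virtual object's bounding box under either test, the terminal device would proceed to display the interaction action selection page.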
In a possible implementation, the action identifier of the action currently performed by the first virtual object is also displayed in (also included in) the virtual scene, and the method provided by the embodiments of the present application further includes: canceling the display of the action identifier of the action currently performed by the first virtual object based on the bounding box of the second virtual object after being dragged intersecting the bounding box of the first virtual object. For the manner of canceling this display, reference may be made to the manner of canceling the display of the action identifier of the action currently performed by the second virtual object corresponding to FIG. 4 above, which will not be repeated here.
In a possible implementation, based on the bounding box of the second virtual object after being dragged intersecting the bounding box of the first virtual object, there are the following two implementations of displaying the interaction action selection page.
Implementation 1: based on the bounding box of the second virtual object after being dragged intersecting the bounding box of the first virtual object, prompt information is displayed, the prompt information being used to instruct canceling the dragging of the second virtual object; and in response to the dragging of the second virtual object being canceled, the interaction action selection page is displayed. Canceling the dragging means stopping the dragging: in response to the second virtual object being stopped from being dragged (i.e., the terminal device detecting that the drag operation has stopped), the interaction action selection page is displayed. The position of the second virtual object in the virtual scene before being dragged is different from its position in the virtual scene after the dragging is stopped.
Optionally, the prompt information may be any content, which is not limited in the embodiments of the present application. Exemplarily, the prompt information is "Release to select a two-person action". FIG. 8 is a schematic display diagram of prompt information provided by an embodiment of the present application, in which the bounding box of the second virtual object after being dragged intersects the bounding box of the first virtual object; therefore, the display of the action identifier of the action currently performed by the first virtual object is canceled, and the prompt information "Release to select a two-person action" is displayed.
Implementation 2: based on the bounding box of the second virtual object after being dragged intersecting the bounding box of the first virtual object, a target object is displayed at a target position of the first virtual object, the target object being used to instruct canceling the dragging of the second virtual object; and in response to the dragging of the second virtual object being canceled, the interaction action selection page is displayed. As mentioned above, canceling the dragging means stopping the dragging: in response to the second virtual object being stopped from being dragged (i.e., the terminal device detecting that the drag operation has stopped), the interaction action selection page is displayed.
Optionally, the target position is any position and the target object is any object, neither of which is limited in the embodiments of the present application. Exemplarily, the target position is at the feet of the first virtual object and the target object is a circle, that is, a circle is displayed at the feet of the first virtual object. FIG. 9 is a schematic display diagram of displaying a target object at a target position of a first virtual object, provided by an embodiment of the present application. The form of the target object shown in FIG. 9 is only an example and is not intended to limit the form of the target object, which can be set according to actual needs; for example, the target object may also take a form filled with shading.
In a possible implementation, when the dragging of the second virtual object is canceled (in other words, stopped), the interaction action selection page is displayed, with at least one candidate interaction action displayed in the interaction action selection page. FIG. 10 is a schematic display diagram of an interaction action selection page provided by an embodiment of the present application, in which 1001 is the interaction action selection page and 1002 is a plurality of candidate interaction actions. The interaction action selection page may further include a page identifier, which is used to indicate that the user is currently in the interaction action selection page; for example, the page identifier is "Select a two-person action" shown in FIG. 10.
In step 204, based on a second operation on a target interaction action among the plurality of candidate interaction actions, a target page is displayed, in which the first virtual object and the second virtual object interact in the virtual scene according to the target interaction action.
The terminal device can detect the second operation. When the second operation is detected and the second operation targets one or more of the plurality of candidate interaction actions, the one or more candidate interaction actions targeted by the second operation are taken as the target interaction action, so that based on the second operation on the target interaction action among the plurality of candidate interaction actions, the subsequent process of displaying the target page can be executed.
In a possible implementation, the target interaction action is any one of the plurality of candidate interaction actions. The second operation on the target interaction action refers to a selection operation on the target interaction action; for the selection operation, reference may be made to the description in step 202 above, which will not be repeated here.
Optionally, the process of displaying the target page based on the second operation on the target interaction action among the plurality of candidate interaction actions includes: generating an action data acquisition request based on the second operation on the target interaction action, the action data acquisition request including the action identifier of the target interaction action, the object identifier of the second virtual object and the object identifier of the first virtual object; sending the action data acquisition request to the server, the action data acquisition request being used to acquire the action data of the first virtual object and the second virtual object interacting according to the target interaction action; receiving the action data returned by the server based on the action data acquisition request; running the action data; and in response to the action data finishing running, displaying the target page, in which the first virtual object and the second virtual object interact in the virtual scene according to the target interaction action. Moreover, exemplarily, the second virtual object as it was before being dragged, and the action identifier of the action performed by the second virtual object before being dragged, are still displayed in the target page.
The action identifier of the target interaction action may be the action name of the target interaction action, or another identifier that can uniquely represent the target interaction action, which is not limited in the embodiments of the present application. The object identifier of a virtual object may be the user name of the user corresponding to the virtual object, the account of that user in the application program, or another identifier that can uniquely represent the virtual object, which is also not limited in the embodiments of the present application.
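To make the request flow above concrete, here is a minimal sketch assuming a hypothetical JSON-over-HTTP interface; the endpoint path and field names are illustrative, as the embodiment only specifies which identifiers the request carries:

    import json
    import urllib.request

    def fetch_action_data(server_url, action_id, first_object_id, second_object_id, text=None):
        """Build an action data acquisition request and return the server's action data."""
        payload = {
            "action_id": action_id,              # action identifier of the target interaction action
            "first_object_id": first_object_id,  # object identifier of the first virtual object
            "second_object_id": second_object_id # object identifier of the second virtual object
        }
        if text is not None:
            payload["text"] = text               # optional text content from the text input control
        req = urllib.request.Request(
            server_url + "/action-data",         # hypothetical endpoint
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)               # action data to run before displaying the target page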
FIG. 11 is a schematic display diagram of a target page provided by an embodiment of the present application, in which the second virtual object is candidate virtual object three, the target interaction action is drinking coffee, and the first virtual object and the second virtual object are drinking coffee. The target page also displays the second virtual object as it was before being dragged, and the action identifier of the action performed by the second virtual object before being dragged.
Optionally, a text input control is also displayed in (also included in) the interaction action selection page, the text input control being used to acquire text content; 1003 in FIG. 10 is the text input control. The process of displaying the target page based on the second operation on the target interaction action among the plurality of candidate interaction actions includes: displaying the target page based on the second operation on the target interaction action among the plurality of candidate interaction actions and the text content entered in the text input control, wherein in the target page the first virtual object and the second virtual object interact in the virtual scene according to the target interaction action, and the text content is displayed in (included in) the target page.
FIG. 12 is a schematic display diagram of another target page provided by an embodiment of the present application, in which the second virtual object is candidate virtual object three, the target interaction action is drinking coffee, the first virtual object and the second virtual object are drinking coffee in the virtual scene, and the text content "Let's drink coffee together" is displayed.
Optionally, a confirmation control is also displayed in (also included in) the interaction action selection page; 1004 in FIG. 10 is the confirmation control. The process of displaying the target page based on the second operation on the target interaction action among the plurality of candidate interaction actions then includes: displaying the target page based on the second operation on the target interaction action among the plurality of candidate interaction actions and a third operation on the confirmation control. The third operation on the confirmation control may be a selection operation on the confirmation control, and the timing of the third operation on the confirmation control is later than the timing of the second operation on the target interaction action among the plurality of candidate interaction actions.
Optionally, in a case where the interaction action selection page includes both the text input control and the confirmation control, the process of displaying the target page based on the second operation on the target interaction action among the plurality of candidate interaction actions may include: displaying the target page based on the second operation on the target interaction action among the plurality of candidate interaction actions, the text content entered in the text input control, and the third operation on the confirmation control. The timing of the third operation on the confirmation control is later than the timing of the second operation on the target interaction action and later than the timing of entering the text content in the text input control; the timing of the second operation on the target interaction action may be earlier or later than the timing of entering the text content in the text input control, which is not limited in the embodiments of the present application.
Optionally, the process of displaying the target page based on the second operation on the target interaction action among the plurality of candidate interaction actions, the text content entered in the text input control and the third operation on the confirmation control includes: generating an action data acquisition request based on the second operation on the target interaction action among the plurality of candidate interaction actions, the text content entered in the text input control and the third operation on the confirmation control, the action data acquisition request including the action identifier of the target interaction action, the object identifier of the second virtual object, the object identifier of the first virtual object and the text content; sending the action data acquisition request to the server; receiving the action data returned by the server based on the action data acquisition request; running the action data; and in response to the action data finishing running, displaying the target page, in which the first virtual object and the second virtual object interact in the virtual scene according to the target interaction action, and the text content is displayed in the target page.
In a possible implementation, the process of displaying the target page based on the second operation on the target interaction action among the plurality of candidate interaction actions includes: sending, based on the second operation on the target interaction action among the plurality of candidate interaction actions, an interaction message to the terminal device used by the user corresponding to the second virtual object, the interaction message including the action identifier of the target interaction action and being used to instruct the first virtual object and the second virtual object to interact according to the target interaction action; and displaying the target page based on receiving a confirmation message sent by the terminal device used by the user corresponding to the second virtual object, wherein in the target page the first virtual object and the second virtual object interact in the virtual scene according to the target interaction action. Moreover, exemplarily, the display of the second virtual object as it was before being dragged may be canceled in the target page.
Optionally, the process of sending, based on the second operation on the target interaction action among the plurality of candidate interaction actions, the interaction message to the terminal device used by the user corresponding to the second virtual object includes: acquiring, based on the second operation on the target interaction action among the plurality of candidate interaction actions, the friend list of the user corresponding to the first virtual object; and sending the interaction message to the terminal device used by the user corresponding to the second virtual object based on the user corresponding to the second virtual object existing in the friend list of the user corresponding to the first virtual object.
The process of determining whether the user corresponding to the second virtual object exists in the friend list of the user corresponding to the first virtual object includes: determining the user identifier of the user corresponding to the second virtual object; determining the user identifiers of the users included in the friend list of the user corresponding to the first virtual object; and determining, based on the user identifier of the user corresponding to the second virtual object existing among the user identifiers of the users included in the friend list of the user corresponding to the first virtual object, that the user corresponding to the second virtual object exists in the friend list of the user corresponding to the first virtual object. Based on the user identifier of the user corresponding to the second virtual object not existing among the user identifiers of the users included in the friend list of the user corresponding to the first virtual object, it is determined that the user corresponding to the second virtual object does not exist in the friend list of the user corresponding to the first virtual object.
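The membership check just described reduces to a lookup over user identifiers; a minimal sketch (the function name and the friend-list entry schema are assumptions for illustration):

    def is_friend(second_user_id, friend_list):
        """Return True if the user identifier of the user corresponding to the
        second virtual object appears among the user identifiers of the users
        in the first virtual object's user's friend list."""
        friend_ids = {entry["user_id"] for entry in friend_list}
        return second_user_id in friend_ids

Only if this check succeeds would the terminal device send the interaction message to the terminal device used by the user corresponding to the second virtual object.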
FIG. 13 is a schematic display diagram of a target page provided by an embodiment of the present application, in which the second virtual object is candidate virtual object three, the target interaction action is drinking coffee, and the text content "Let's drink coffee together" is also displayed in the target page. The display of candidate virtual object three as it was before being dragged, and of the action identifier of the action performed by candidate virtual object three before being dragged, is canceled.
The above method displays the interaction action selection page by dragging the second virtual object, and then selects the target interaction action in the interaction action selection page, so that the first virtual object and the second virtual object interact according to the target interaction action. This method fully considers the positions of the first virtual object and the second virtual object in the virtual scene, making the interaction process between virtual objects more concise, improving the interaction efficiency and flexibility of virtual objects, and thereby improving the user's sense of immersion in virtual social interaction. Moreover, precisely because the interaction process is more concise, the number of user operations is reduced, which reduces the number of times the terminal device has to respond to user operations and thus saves terminal device overhead.
FIG. 14 is a flowchart of a virtual object interaction method provided by an embodiment of the present application, involving three executing entities: the user, the terminal device and the server.
The user selects the second virtual object and keeps it selected for the target duration. The target duration is set based on experience or adjusted according to the implementation environment, which is not limited in the embodiments of the present application; exemplarily, the target duration is 1 second.
The terminal device sets the second virtual object to drag mode, so that the second virtual object can be moved to any position.
The user drags the second virtual object so that the bounding box of the second virtual object after being dragged intersects the bounding box of the first virtual object.
The terminal device displays the target object at the target position of the first virtual object. The target position may be any position and the target object may be any object, which is not limited in the embodiments of the present application; exemplarily, the target position is at the feet of the first virtual object and the target object is a circle.
The user cancels the dragging of the second virtual object and deselects the second virtual object.
The terminal device displays the interaction action selection page, with at least one candidate interaction action, a text input control and a confirmation control displayed in the interaction action selection page. The text input control is used for the user to enter text content; the text content may be any content, which is not limited in the embodiments of the present application.
The user selects the target interaction action among the at least one candidate interaction action, enters text content in the text input control and selects the confirmation control. The timing of selecting the target interaction action and the timing of entering text content in the text input control are both before the timing of selecting the confirmation control; the timing of selecting the target interaction action may be before or after the timing of entering text content in the text input control, which is not limited in the embodiments of the present application.
The terminal device passes the object identifier of the second virtual object, the action identifier of the target interaction action, the object identifier of the first virtual object and the text content to the server, so that the server acquires action data according to the object identifier of the second virtual object, the action identifier of the target interaction action and the object identifier of the first virtual object, the action data being the action data of the first virtual object and the second virtual object interacting according to the target interaction action.
The server returns the action data.
The terminal device runs the action data and, based on the action data finishing running, displays the target page, in which the first virtual object and the second virtual object interact in the virtual scene according to the target interaction action, and the text content is displayed in the target page.
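Tying the steps of FIG. 14 together on the terminal device side, the following condensed sketch reuses the hypothetical helpers from the earlier sketches; the ui object, its methods, and the chosen lengths are all assumptions for illustration:

    def two_person_interaction_flow(ui, server_url, first_obj, second_obj):
        """Condensed, illustrative client-side reading of the FIG. 14 flow."""
        # Long press puts the second virtual object into drag mode
        if not detect_long_press(second_obj, ui.is_selected, ui.set_draggable):
            return
        # User drags; show the target object once the bounding boxes intersect
        center = ui.drag(second_obj)                        # dragged center position
        dragged_box = rect_reference_area(center, 80, 120)  # assumed first/second lengths
        if not rects_intersect(dragged_box, ui.bounding_box(first_obj)):
            return
        ui.show_marker_at_feet(first_obj)                   # e.g., a circle at the feet
        # User releases, chooses the target interaction action and enters text
        action_id, text = ui.show_action_selection_page()
        # Fetch the action data, run it, then display the target page
        data = fetch_action_data(server_url, action_id, first_obj.id, second_obj.id, text)
        ui.run_action_data(data)
        ui.show_target_page(text)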
FIG. 15 is a schematic structural diagram of a virtual object interaction apparatus provided by an embodiment of the present application. As shown in FIG. 15, the apparatus includes:
a display module 1501, configured to display a virtual scene, with a first virtual object and at least one candidate virtual object displayed in (included in) the virtual scene;
a control module 1502, configured to set, in response to a first operation on a second virtual object among the at least one candidate virtual object, the second virtual object to a draggable state;
the display module 1501 being further configured to display, based on a drag operation on the second virtual object among the at least one candidate virtual object, an interaction action selection page, with a plurality of candidate interaction actions displayed in (included in) the interaction action selection page;
the display module 1501 being further configured to display, based on a second operation on a target interaction action among the plurality of candidate interaction actions, a target page, in which the first virtual object and the second virtual object interact in the virtual scene according to the target interaction action.
Exemplarily, the control module 1502 is an optional module. That is, the apparatus provided by the embodiments of the present application may include only the above display module 1501; in some implementations, it may further include the control module 1502.
In a possible implementation, the apparatus further includes: a determination module, configured to determine, based on the drag operation on the second virtual object, a bounding box of the second virtual object after being dragged, the bounding box being used to indicate the area covering the second virtual object. The steps performed by the determination module may also be completed by the display module 1501; that is, the display module 1501 is configured to determine, based on the drag operation on the second virtual object, the bounding box of the second virtual object after being dragged, the bounding box being used to indicate the area covering the second virtual object;
the display module 1501 being configured to display the interaction action selection page based on the bounding box of the second virtual object after being dragged intersecting the bounding box of the first virtual object.
In a possible implementation, the display module 1501 is configured to display, based on the bounding box of the second virtual object after being dragged intersecting the bounding box of the first virtual object, prompt information, the prompt information being used to instruct canceling the dragging of (i.e., stopping dragging) the second virtual object; and display the interaction action selection page in response to the dragging of the second virtual object being canceled (i.e., the second virtual object being stopped from being dragged).
In a possible implementation, the display module 1501 is configured to display, based on the bounding box of the second virtual object after being dragged intersecting the bounding box of the first virtual object, a target object at a target position of the first virtual object, the target object being used to instruct canceling the dragging of (i.e., stopping dragging) the second virtual object; and display the interaction action selection page in response to the dragging of the second virtual object being canceled (i.e., the second virtual object being stopped from being dragged).
In a possible implementation, the determination module is configured to determine, based on the drag operation on the second virtual object, the center position of the second virtual object after being dragged; determine a reference area with the center position of the second virtual object after being dragged as the center; and take the reference area as the bounding box of the second virtual object after being dragged.
The steps performed by the determination module may also be completed by the display module 1501; that is, the display module 1501 is configured to determine, based on the drag operation on the second virtual object, the center position of the second virtual object after being dragged; determine a reference area with the center position of the second virtual object after being dragged as the center; and take the reference area as the bounding box of the second virtual object after being dragged.
In a possible implementation, the action identifier of the action currently performed by the first virtual object is also displayed in (also included in) the virtual scene; the control module 1502 is further configured to cancel the display of the action identifier of the action currently performed by the first virtual object based on the bounding box of the second virtual object after being dragged intersecting the bounding box of the first virtual object.
In a possible implementation, action identifiers of the actions currently performed by the respective candidate virtual objects are also displayed in (also included in) the virtual scene; the control module 1502 is further configured to cancel, in response to the first operation on the second virtual object among the at least one candidate virtual object, the display of the action identifier of the action currently performed by the second virtual object; or cancel, in response to the first operation on the second virtual object among the at least one candidate virtual object, the display of the action identifiers of the actions currently performed by the respective candidate virtual objects.
In a possible implementation, a text input control is also displayed in (also included in) the interaction action selection page, the text input control being used to acquire text content; the display module 1501 is configured to display the target page based on the second operation on the target interaction action among the plurality of candidate interaction actions and the text content entered in the text input control, wherein in the target page the first virtual object and the second virtual object interact in the virtual scene according to the target interaction action, and the text content is displayed in (included in) the target page.
In a possible implementation, the apparatus further includes:
a generation module, configured to generate, based on the second operation on the target interaction action among the plurality of candidate interaction actions, an action data acquisition request, the action data acquisition request including the action identifier of the target interaction action, the object identifier of the second virtual object and the object identifier of the first virtual object;
a sending module, configured to send the action data acquisition request to the server, the action data acquisition request being used to acquire the action data of the first virtual object and the second virtual object interacting according to the target interaction action;
a receiving module, configured to receive the action data returned by the server based on the action data acquisition request;
a running module, configured to run the action data;
the display module 1501 being configured to display the target page in response to the action data finishing running.
Moreover, the steps performed by the above generation module, sending module, receiving module and running module may also be completed by the display module 1501. That is, the display module 1501 is configured to generate, based on the second operation on the target interaction action among the plurality of candidate interaction actions, an action data acquisition request, the action data acquisition request including the action identifier of the target interaction action, the object identifier of the second virtual object and the object identifier of the first virtual object; send the action data acquisition request to the server, the action data acquisition request being used to acquire the action data of the first virtual object and the second virtual object interacting according to the target interaction action; receive the action data returned by the server based on the action data acquisition request; run the action data; and display the target page in response to the action data finishing running.
In a possible implementation, the sending module is configured to send, based on the second operation on the target interaction action among the plurality of candidate interaction actions, an interaction message to the terminal device used by the user corresponding to the second virtual object, the interaction message including the action identifier of the target interaction action and being used to instruct the first virtual object and the second virtual object to interact according to the target interaction action. The steps performed by the sending module may be completed by the display module 1501; that is, the display module 1501 is configured to send, based on the second operation on the target interaction action among the plurality of candidate interaction actions, an interaction message to the terminal device used by the user corresponding to the second virtual object, the interaction message including the action identifier of the target interaction action and being used to instruct the first virtual object and the second virtual object to interact according to the target interaction action;
the display module 1501 being configured to display the target page based on receiving a confirmation message sent by the terminal device used by the user corresponding to the second virtual object.
In a possible implementation, the sending module is configured to acquire, based on the second operation on the target interaction action among the plurality of candidate interaction actions, the friend list of the user corresponding to the first virtual object; and send the interaction message to the terminal device used by the user corresponding to the second virtual object based on the user corresponding to the second virtual object existing in the friend list of the user corresponding to the first virtual object.
The steps performed by the sending module may be completed by the display module 1501; that is, the display module 1501 is configured to acquire, based on the second operation on the target interaction action among the plurality of candidate interaction actions, the friend list of the user corresponding to the first virtual object; and send the interaction message to the terminal device used by the user corresponding to the second virtual object based on the user corresponding to the second virtual object existing in the friend list of the user corresponding to the first virtual object.
It should be understood that when the apparatus provided above implements its functions, the division into the above functional modules is merely used as an example for illustration. In practical applications, the above functions may be assigned to different functional modules as needed; that is, the internal structure of the device is divided into different functional modules to complete all or part of the functions described above. In addition, the apparatus provided by the above embodiment and the method embodiments belong to the same concept; for its specific implementation process and the technical effects produced, see the method embodiments, which will not be repeated here.
FIG. 16 shows a structural block diagram of a terminal device 1600 provided by an exemplary embodiment of the present application. The terminal device 1600 may be a portable mobile terminal, such as a smartphone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer or a desktop computer. The terminal device 1600 may also be called user equipment, a portable terminal, a laptop terminal, a desktop terminal or other names.
Generally, the terminal device 1600 includes a processor 1601 and a memory 1602.
The processor 1601 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 1601 may be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array) and PLA (Programmable Logic Array). The processor 1601 may also include a main processor and a coprocessor. The main processor is a processor for processing data in the awake state, also called a CPU (Central Processing Unit); the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor 1601 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 1601 may also include an AI (Artificial Intelligence) processor, which is used to process computing operations related to machine learning.
The memory 1602 may include one or more computer-readable storage media, which may be non-transitory (also called non-temporary). The memory 1602 may also include a high-speed random access memory and a non-volatile memory, such as one or more disk storage devices or flash memory storage devices. In some embodiments, the non-transitory computer-readable storage medium in the memory 1602 is used to store at least one instruction, which is used to be executed by the processor 1601 to implement the virtual object interaction method provided by the method embodiments of the present application.
In some embodiments, the terminal device 1600 may optionally further include: a peripheral device interface 1603 and at least one peripheral device. The processor 1601, the memory 1602 and the peripheral device interface 1603 may be connected via a bus or a signal line. Each peripheral device may be connected to the peripheral device interface 1603 via a bus, a signal line or a circuit board. Specifically, the peripheral device includes: at least one of a radio frequency circuit 1604, a display screen 1605, a camera assembly 1606, an audio circuit 1607 and a power supply 1609.
The peripheral device interface 1603 may be used to connect at least one peripheral device related to I/O (Input/Output) to the processor 1601 and the memory 1602. In some embodiments, the processor 1601, the memory 1602 and the peripheral device interface 1603 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1601, the memory 1602 and the peripheral device interface 1603 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 1604 is used to receive and transmit RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 1604 communicates with a communication network and other communication devices through electromagnetic signals. The radio frequency circuit 1604 converts electrical signals into electromagnetic signals for transmission, or converts received electromagnetic signals into electrical signals. Optionally, the radio frequency circuit 1604 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so on. The radio frequency circuit 1604 can communicate with other terminal devices through at least one wireless communication protocol. The wireless communication protocol includes but is not limited to: the World Wide Web, metropolitan area networks, intranets, various generations of mobile communication networks (2G, 3G, 4G and 5G), wireless local area networks and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1604 may also include NFC (Near Field Communication) related circuits, which are not limited in the present application.
The display screen 1605 is used to display a UI (User Interface). The UI may include graphics, text, icons, videos and any combination thereof. When the display screen 1605 is a touch display screen, the display screen 1605 also has the ability to collect touch signals on or above its surface. The touch signal may be input to the processor 1601 as a control signal for processing. In this case, the display screen 1605 may also be used to provide virtual buttons and/or a virtual keyboard, also called soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 1605, disposed on the front panel of the terminal device 1600; in other embodiments, there may be at least two display screens 1605, respectively disposed on different surfaces of the terminal device 1600 or in a folded design; in still other embodiments, the display screen 1605 may be a flexible display screen disposed on a curved or folded surface of the terminal device 1600. The display screen 1605 may even be configured as a non-rectangular irregular figure, i.e., a special-shaped screen. The display screen 1605 may be made of materials such as LCD (Liquid Crystal Display) and OLED (Organic Light-Emitting Diode).
The camera assembly 1606 is used to capture images or videos. Optionally, the camera assembly 1606 includes a front camera and a rear camera. Generally, the front camera is set on the front panel of the terminal device 1600 and the rear camera is set on the back of the terminal device 1600. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so as to fuse the main camera and the depth-of-field camera to realize a background blur function, fuse the main camera and the wide-angle camera to realize panoramic shooting and VR (Virtual Reality) shooting, or realize other fused shooting functions. In some embodiments, the camera assembly 1606 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash refers to a combination of a warm-light flash and a cold-light flash, and can be used for light compensation at different color temperatures.
The audio circuit 1607 may include a microphone and a speaker. The microphone is used to collect sound waves from the user and the environment, convert the sound waves into electrical signals and input them to the processor 1601 for processing, or input them to the radio frequency circuit 1604 to implement voice communication. For stereo collection or noise reduction purposes, there may be multiple microphones, respectively set at different parts of the terminal device 1600. The microphone may also be an array microphone or an omnidirectional collection microphone. The speaker is used to convert electrical signals from the processor 1601 or the radio frequency circuit 1604 into sound waves. The speaker may be a traditional thin-film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, it can not only convert electrical signals into sound waves audible to humans, but also convert electrical signals into sound waves inaudible to humans for purposes such as ranging. In some embodiments, the audio circuit 1607 may also include a headphone jack.
The power supply 1609 is used to supply power to the various components in the terminal device 1600. The power supply 1609 may be alternating current, direct current, a disposable battery or a rechargeable battery. When the power supply 1609 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. A wired rechargeable battery is a battery charged through a wired line, and a wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charging technology.
In some embodiments, the terminal device 1600 further includes one or more sensors 1610, including but not limited to: an acceleration sensor 1611, a gyroscope sensor 1612, a pressure sensor 1613, an optical sensor 1615 and a proximity sensor 1616.
The acceleration sensor 1611 can detect the magnitude of acceleration on the three coordinate axes of the coordinate system established with the terminal device 1600. For example, the acceleration sensor 1611 can be used to detect the components of gravitational acceleration on the three coordinate axes. The processor 1601 can control the display screen 1605 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1611. The acceleration sensor 1611 can also be used to collect game or user motion data.
The gyroscope sensor 1612 can detect the body direction and rotation angle of the terminal device 1600, and can cooperate with the acceleration sensor 1611 to collect the user's 3D actions on the terminal device 1600. Based on the data collected by the gyroscope sensor 1612, the processor 1601 can implement the following functions: motion sensing (such as changing the UI according to the user's tilt operation), image stabilization during shooting, game control and inertial navigation.
The pressure sensor 1613 may be set on the side frame of the terminal device 1600 and/or the lower layer of the display screen 1605. When the pressure sensor 1613 is set on the side frame of the terminal device 1600, it can detect the user's holding signal on the terminal device 1600, and the processor 1601 performs left/right-hand recognition or shortcut operations according to the holding signal collected by the pressure sensor 1613. When the pressure sensor 1613 is set on the lower layer of the display screen 1605, the processor 1601 controls the operability controls on the UI according to the user's pressure operation on the display screen 1605. The operability controls include at least one of a button control, a scroll bar control, an icon control and a menu control.
The optical sensor 1615 is used to collect the ambient light intensity. In one embodiment, the processor 1601 can control the display brightness of the display screen 1605 according to the ambient light intensity collected by the optical sensor 1615: specifically, when the ambient light intensity is high, the display brightness of the display screen 1605 is increased; when the ambient light intensity is low, the display brightness of the display screen 1605 is reduced. In another embodiment, the processor 1601 can also dynamically adjust the shooting parameters of the camera assembly 1606 according to the ambient light intensity collected by the optical sensor 1615.
The proximity sensor 1616, also called a distance sensor, is usually set on the front panel of the terminal device 1600. The proximity sensor 1616 is used to collect the distance between the user and the front of the terminal device 1600. In one embodiment, when the proximity sensor 1616 detects that the distance between the user and the front of the terminal device 1600 is gradually decreasing, the processor 1601 controls the display screen 1605 to switch from the screen-on state to the screen-off state; when the proximity sensor 1616 detects that the distance between the user and the front of the terminal device 1600 is gradually increasing, the processor 1601 controls the display screen 1605 to switch from the screen-off state to the screen-on state.
Those skilled in the art can understand that the structure shown in FIG. 16 does not limit the terminal device 1600, which may include more or fewer components than shown in the figure, combine certain components, or adopt a different component arrangement.
FIG. 17 is a schematic structural diagram of the server provided by an embodiment of the present application. The server 1700 may vary considerably depending on configuration or performance, and may include one or more processors (Central Processing Units, CPU) 1701 and one or more memories 1702, wherein the one or more memories 1702 store at least one piece of program code, and the at least one piece of program code is loaded and executed by the one or more processors 1701 to implement the virtual object interaction method provided by the above method embodiments. Of course, the server 1700 may also have components such as a wired or wireless network interface, a keyboard and an input/output interface for input and output, and may also include other components for implementing device functions, which will not be described in detail here.
In an exemplary embodiment, a non-transitory computer-readable storage medium is also provided, in which at least one piece of program code is stored, the at least one piece of program code being loaded and executed by a processor to cause a computer to implement any of the above virtual object interaction methods.
Optionally, the above non-transitory computer-readable storage medium may be a read-only memory (ROM), a random access memory (RAM), a compact disc read-only memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, etc.
In an exemplary embodiment, a computer program or computer program product is also provided, in which at least one computer instruction is stored, the at least one computer instruction being loaded and executed by a processor to cause a computer to implement any of the above virtual object interaction methods.
It should be noted that the information (including but not limited to user device information, user personal information, etc.), data (including but not limited to data used for analysis, stored data, displayed data, etc.) and signals involved in the present application are all authorized by the user or fully authorized by all parties, and the collection, use and processing of relevant data must comply with the relevant laws, regulations and standards of relevant countries and regions. For example, the virtual scenes involved in the present application are all obtained with full authorization.
It should be understood that "a plurality of" mentioned herein means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may indicate that A exists alone, A and B exist simultaneously, or B exists alone. The character "/" generally indicates an "or" relationship between the objects before and after it.
The serial numbers of the above embodiments of the present application are for description only and do not represent the superiority or inferiority of the embodiments.
The above are only exemplary embodiments of the present application and are not intended to limit the present application. Any modification, equivalent replacement, improvement, etc. made within the principles of the present application shall be included within the protection scope of the present application.

Claims (16)

  1. A virtual object interaction method, wherein the method is executed by a terminal device, and the method comprises:
    displaying a virtual scene, the virtual scene comprising a first virtual object and at least one candidate virtual object;
    displaying an interaction action selection page based on a drag operation on a second virtual object among the at least one candidate virtual object, the interaction action selection page comprising a plurality of candidate interaction actions;
    displaying a target page based on a second operation on a target interaction action among the plurality of candidate interaction actions, wherein in the target page the first virtual object and the second virtual object interact in the virtual scene according to the target interaction action.
  2. The method according to claim 1, wherein the displaying an interaction action selection page based on a drag operation on a second virtual object among the at least one candidate virtual object comprises:
    determining, based on the drag operation on the second virtual object, a bounding box of the second virtual object after being dragged, the bounding box being used to indicate an area covering the second virtual object;
    displaying the interaction action selection page based on the bounding box of the second virtual object after being dragged intersecting a bounding box of the first virtual object.
  3. The method according to claim 2, wherein the displaying the interaction action selection page based on the bounding box of the second virtual object after being dragged intersecting the bounding box of the first virtual object comprises:
    displaying prompt information based on the bounding box of the second virtual object after being dragged intersecting the bounding box of the first virtual object, the prompt information being used to instruct stopping dragging the second virtual object;
    displaying the interaction action selection page in response to the second virtual object being stopped from being dragged.
  4. The method according to claim 2, wherein the displaying the interaction action selection page based on the bounding box of the second virtual object after being dragged intersecting the bounding box of the first virtual object comprises:
    displaying a target object at a target position of the first virtual object based on the bounding box of the second virtual object after being dragged intersecting the bounding box of the first virtual object, the target object being used to instruct stopping dragging the second virtual object; and displaying the interaction action selection page in response to the second virtual object being stopped from being dragged.
  5. The method according to any one of claims 2 to 4, wherein the determining, based on the drag operation on the second virtual object, a bounding box of the second virtual object after being dragged comprises:
    determining, based on the drag operation on the second virtual object, a center position of the second virtual object after being dragged;
    determining a reference area with the center position of the second virtual object after being dragged as a center;
    taking the reference area as the bounding box of the second virtual object after being dragged.
  6. The method according to any one of claims 2 to 5, wherein the virtual scene further comprises an action identifier of an action currently performed by the first virtual object;
    the method further comprises: canceling display of the action identifier of the action currently performed by the first virtual object based on the bounding box of the second virtual object after being dragged intersecting the bounding box of the first virtual object.
  7. The method according to any one of claims 1 to 6, wherein the virtual scene further comprises action identifiers of actions currently performed by respective candidate virtual objects;
    the method further comprises: canceling, in response to a first operation on the second virtual object among the at least one candidate virtual object, display of the action identifier of the action currently performed by the second virtual object; or canceling, in response to a first operation on the second virtual object among the at least one candidate virtual object, display of the action identifiers of the actions currently performed by the respective candidate virtual objects.
  8. The method according to any one of claims 1 to 7, wherein the interaction action selection page further comprises a text input control, the text input control being used to acquire text content;
    the displaying a target page based on a second operation on a target interaction action among the plurality of candidate interaction actions comprises:
    displaying the target page based on the second operation on the target interaction action among the plurality of candidate interaction actions and the text content entered in the text input control, wherein in the target page the first virtual object and the second virtual object interact in the virtual scene according to the target interaction action, and the target page comprises the text content.
  9. The method according to any one of claims 1 to 7, wherein the displaying a target page based on a second operation on a target interaction action among the plurality of candidate interaction actions comprises:
    generating an action data acquisition request based on the second operation on the target interaction action among the plurality of candidate interaction actions, the action data acquisition request comprising an action identifier of the target interaction action, an object identifier of the second virtual object and an object identifier of the first virtual object;
    sending the action data acquisition request to a server, the action data acquisition request being used to acquire action data of the first virtual object and the second virtual object interacting according to the target interaction action;
    receiving the action data returned by the server based on the action data acquisition request;
    running the action data;
    displaying the target page in response to the action data finishing running.
  10. The method according to any one of claims 1 to 7, wherein the displaying a target page based on a second operation on a target interaction action among the plurality of candidate interaction actions comprises:
    sending, based on the second operation on the target interaction action among the plurality of candidate interaction actions, an interaction message to a terminal device used by a user corresponding to the second virtual object, the interaction message comprising an action identifier of the target interaction action and being used to instruct the first virtual object and the second virtual object to interact according to the target interaction action;
    displaying the target page based on receiving a confirmation message sent by the terminal device used by the user corresponding to the second virtual object.
  11. The method according to claim 10, wherein the sending, based on the second operation on the target interaction action among the plurality of candidate interaction actions, an interaction message to a terminal device used by a user corresponding to the second virtual object comprises:
    acquiring, based on the second operation on the target interaction action among the plurality of candidate interaction actions, a friend list of a user corresponding to the first virtual object;
    sending the interaction message to the terminal device used by the user corresponding to the second virtual object based on the user corresponding to the second virtual object existing in the friend list of the user corresponding to the first virtual object.
  12. The method according to any one of claims 1 to 11, wherein before the displaying an interaction action selection page based on a drag operation on a second virtual object among the at least one candidate virtual object, the method further comprises:
    setting, in response to a first operation on the second virtual object, the second virtual object to a draggable state.
  13. A virtual object interaction apparatus, wherein the apparatus comprises:
    a display module, configured to display a virtual scene, the virtual scene comprising a first virtual object and at least one candidate virtual object;
    the display module being further configured to display an interaction action selection page based on a drag operation on a second virtual object among the at least one candidate virtual object, the interaction action selection page comprising a plurality of candidate interaction actions;
    the display module being further configured to display a target page based on a second operation on a target interaction action among the plurality of candidate interaction actions, wherein in the target page the first virtual object and the second virtual object interact in the virtual scene according to the target interaction action.
  14. A computer device, wherein the computer device comprises a processor and a memory, the memory storing at least one piece of program code, the at least one piece of program code being loaded and executed by the processor to cause the computer device to implement the virtual object interaction method according to any one of claims 1 to 12.
  15. A non-transitory computer-readable storage medium, wherein at least one piece of program code is stored in the non-transitory computer-readable storage medium, the at least one piece of program code being loaded and executed by a processor to cause a computer to implement the virtual object interaction method according to any one of claims 1 to 12.
  16. A computer program product, wherein at least one computer instruction is stored in the computer program product, the at least one computer instruction being loaded and executed by a processor to cause a computer to implement the virtual object interaction method according to any one of claims 1 to 12.
PCT/CN2023/118735 2022-10-18 2023-09-14 Virtual object interaction method, apparatus, device and computer-readable storage medium WO2024082883A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211275400.4 2022-10-18
CN202211275400.4A CN117942570A (zh) 2022-10-18 2022-10-18 Virtual object interaction method, apparatus, device and computer-readable storage medium

Publications (1)

Publication Number Publication Date
WO2024082883A1 true WO2024082883A1 (zh) 2024-04-25

Family

ID=90736848

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/118735 WO2024082883A1 (zh) 2022-10-18 2023-09-14 虚拟对象的交互方法、装置、设备及计算机可读存储介质

Country Status (2)

Country Link
CN (1) CN117942570A (zh)
WO (1) WO2024082883A1 (zh)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200054947A1 (en) * 2017-11-15 2020-02-20 Tencent Technology (Shenzhen) Company Ltd Object selection method, terminal and storage medium
CN111913624A (zh) * 2020-08-18 2020-11-10 腾讯科技(深圳)有限公司 虚拟场景中对象的交互方法及装置
CN112755516A (zh) * 2021-01-26 2021-05-07 网易(杭州)网络有限公司 交互控制的方法及装置、电子设备、存储介质
CN113342233A (zh) * 2021-06-30 2021-09-03 北京字跳网络技术有限公司 一种交互方法、装置、计算机设备以及存储介质
CN114011064A (zh) * 2021-11-16 2022-02-08 网易(杭州)网络有限公司 交互控制的方法、装置和电子设备
CN114296597A (zh) * 2021-12-01 2022-04-08 腾讯科技(深圳)有限公司 虚拟场景中的对象交互方法、装置、设备及存储介质
US20220152505A1 (en) * 2020-11-13 2022-05-19 Tencent Technology (Shenzhen) Company Limited Virtual object control method and apparatus, storage medium, and electronic device

Also Published As

Publication number Publication date
CN117942570A (zh) 2024-04-30

Similar Documents

Publication Publication Date Title
  • US11782595B2 User terminal device and control method thereof
  • CN112162671B Live streaming data processing method and apparatus, electronic device and storage medium
  • TWI672629B Expression display method, apparatus and computer-readable storage medium
  • WO2020253655A1 Method, apparatus and device for controlling multiple virtual characters, and storage medium
  • JP7230055B2 Application program display adaptation method and apparatus, terminal, storage medium and computer program
  • CN109920065A Information display method, apparatus, device and storage medium
  • WO2020125340A1 Control information processing method and apparatus, electronic device and storage medium
  • CN111324250A Three-dimensional image adjustment method, apparatus, device and readable storage medium
  • WO2022095465A1 Information display method and apparatus
  • WO2023050722A1 Information display method and electronic device
  • CN111459363A Information display method, apparatus, device and storage medium
  • CN109525704A Control method and mobile terminal
  • TWI817208B Method and apparatus for determining a selected target, computer device, non-transitory computer-readable storage medium and computer program product
  • CN109117037A Image processing method and terminal device
  • CN112860046B Method and apparatus for selecting an operating mode, electronic device and medium
  • CN112004134B Multimedia data display method, apparatus, device and storage medium
  • EP4125274A1 Method and apparatus for playing videos
  • WO2024082883A1 Virtual object interaction method, apparatus, device and computer-readable storage medium
  • WO2022062788A1 Interactive special effect display method and terminal
  • CN114546188B Interaction method, apparatus and device based on an interactive interface, and readable storage medium
  • CN113194329A Live streaming interaction method, apparatus, terminal and storage medium
  • CN111064658A Display control method and electronic device
  • CN114115660B Media resource processing method, apparatus, terminal and storage medium
  • CN113507647B Multimedia data playback control method, apparatus, terminal and readable storage medium
  • CN115379274B Picture-based interaction method, apparatus, electronic device and storage medium