CN114296597A - Object interaction method, device, equipment and storage medium in virtual scene - Google Patents

Object interaction method, device, equipment and storage medium in virtual scene

Info

Publication number
CN114296597A
Authority
CN
China
Prior art keywords
interaction
virtual
interactive
control
target
Prior art date
Legal status
Pending
Application number
CN202111666192.6A
Other languages
Chinese (zh)
Inventor
鲍慧翡 (Bao Huifei)
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Publication of CN114296597A

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The application provides an object interaction method, apparatus, and device for a virtual scene, together with a computer program product and a computer-readable storage medium. The method includes: presenting an interface of a virtual scene that includes a virtual object, and presenting, in the interface, a virtual joystick for controlling displacement of the virtual object; presenting, in a sensing area of the virtual joystick, at least one interaction control including a target interaction control, where each interaction control is associated with an interactive object and is used to trigger the virtual object to interact with that interactive object; and, in response to a trigger operation on the target interaction control, controlling the virtual object to perform an interactive operation with the target interactive object associated with the target interaction control. The method and apparatus can improve control efficiency in virtual scenes.

Description

Object interaction method, device, equipment and storage medium in virtual scene
Description of priority
This application claims priority to Chinese patent application No. 202111453833.X, filed on December 1, 2021 and entitled "Object interaction method, device, equipment and storage medium in virtual scene".
Technical Field
The present application relates to the field of virtualization and human-computer interaction technologies, and in particular, to a method and an apparatus for object interaction in a virtual scene, an electronic device, a computer program product, and a computer-readable storage medium.
Background
With the development of computer technology, electronic devices can present increasingly rich and vivid virtual scenes. A virtual scene is a digital scene constructed by a computer using digital communication technology; in it, a user can obtain a fully virtualized experience (for example, virtual reality) or a partially virtualized experience (for example, augmented reality) in vision, hearing, and other senses, and can interact with various interactive objects in the scene, or control interactions among objects in the scene, and receive feedback.
In the related art, interaction controls in a virtual scene are usually either scattered across the interface, so that they occupy a large proportion of the screen, or gathered behind a unified entry, which adds an extra step to triggering any interaction control. Either way, control efficiency in the virtual scene is low.
Disclosure of Invention
The embodiment of the application provides an object interaction method and device in a virtual scene, electronic equipment, a computer program product and a computer readable storage medium, and the control efficiency of the virtual scene can be improved.
The technical scheme of the embodiment of the application is realized as follows:
the embodiment of the application provides an object interaction method in a virtual scene, which comprises the following steps:
presenting an interface of a virtual scene including a virtual object, and presenting, in the interface, a virtual joystick for controlling displacement of the virtual object;
presenting at least one interaction control, including a target interaction control, in a sensing area of the virtual joystick;
wherein each interaction control is associated with an interactive object and is used to trigger the virtual object to interact with the interactive object;
and, in response to a trigger operation on the target interaction control, controlling the virtual object to perform an interactive operation with the target interactive object associated with the target interaction control.
An embodiment of the present application provides an object interaction apparatus in a virtual scene, including:
a presentation module, configured to present an interface of a virtual scene including a virtual object, and to present, in the interface, a virtual joystick for controlling displacement of the virtual object;
the presentation module being further configured to present, in a sensing area of the virtual joystick, at least one interaction control including a target interaction control, where each interaction control is associated with an interactive object and is used to trigger the virtual object to interact with the interactive object;
and an interaction module, configured to control, in response to a trigger operation on the target interaction control, the virtual object to perform an interactive operation with the target interactive object associated with the target interaction control.
In the above scheme, the presentation module is further configured to present a status icon in a central area of the sensing area of the virtual joystick in a first style corresponding to an activated state, where the status icon represents the state of the virtual joystick;
and, in response to a first trigger operation on the status icon, to control the virtual joystick to switch from the activated state to an interactive state and present the status icon in a second style corresponding to the interactive state.
In the above scheme, the interaction module is further configured to control the virtual object to execute the interactive operation with the interactive object in response to a trigger operation for the target interaction control when the virtual joystick is in the interaction state.
In the above scheme, before the virtual object is controlled to perform the interactive operation with the interactive object, the presentation module is further configured to receive, while the first trigger operation is being performed, a release instruction for the first trigger operation;
and, in response to the release instruction, to switch the virtual joystick from the interactive state back to the activated state and present the status icon in the first style.
In the above scheme, the interaction module is further configured to, when the first trigger operation is a pressing operation, receive a sliding operation while the pressing operation is being performed and obtain the sliding position corresponding to the sliding operation;
and, when the sliding position falls within the display area of the target interaction control, to treat this as receiving a trigger operation on the target interaction control.
In the above scheme, the presentation module is further configured to present the virtual joystick in a dormant state, in which the peripheral area surrounding the central area of the sensing area is hidden;
and to control the virtual joystick to enter the activated state in response to a trigger operation on the peripheral area.
In the above scheme, the presentation module is further configured to, when the sensing area is a circular area and there are at least two interaction controls, present the at least two interaction controls, including the target interaction control, uniformly distributed along the edge of the circular area.
In the above scheme, the presentation module is further configured to present a control wheel on the outer edge of the circular area, where the control wheel includes at least two control display positions;
and to present at least two interaction controls, including the target interaction control, in the at least two control display positions of the control wheel.
In the above scheme, the presentation module is further configured to present, in the sensing area of the virtual joystick, at least one interaction control including a target interaction control in a candidate state;
when a selection operation aiming at the target interaction control in the candidate state is received, the selection operation is used as a trigger operation aiming at the target interaction control.
In the above scheme, the presentation module is further configured to detect a position of the virtual object in the virtual scene;
when the virtual object is detected to be within the interaction range of at least one interaction object, presenting at least one interaction control comprising a target interaction control;
wherein the at least one interaction object comprises the target interaction object.
In the above scheme, the presentation module is further configured to, when the virtual object is detected to be within the interaction range of only one interactive object, take that interactive object as the target interactive object and present the target interaction control associated with it in the central area of the sensing area of the virtual joystick;
when the virtual object is detected to be within the interaction range of at least two interactive objects, take the at least two interactive objects as candidate interactive objects and obtain the distance between the virtual object and each candidate interactive object;
and determine the candidate interactive object closest to the virtual object as the target interactive object, presenting the target interaction control associated with it in the central area.
In the above scheme, the presentation module is further configured to, when at least two candidate interactive objects are equally closest to the virtual object, take those candidate interactive objects as interactive objects to be screened;
for each interactive object to be screened, obtain the angle whose vertex is the position of the virtual object, whose one side is a ray along the orientation of the virtual object, and whose other side is the line connecting the virtual object and the interactive object to be screened;
and determine the interactive object to be screened corresponding to the smallest such angle as the target interactive object.
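To make the selection rule above concrete, the following TypeScript sketch picks the nearest candidate interactive object and breaks distance ties by the smallest angle to the virtual object's orientation. It is only an illustrative reading of the rule as described; all names, the distance tolerance, and the helper functions are assumptions, not part of the patent.
```typescript
// Illustrative target selection: nearest candidate first, smallest
// facing angle as the tie-breaker. All names are assumptions.
interface Vec2 { x: number; y: number; }
interface Candidate { id: string; position: Vec2; }

// Angle (radians) between the object's facing direction and the
// direction from the object to a candidate.
function angleTo(facing: Vec2, from: Vec2, to: Vec2): number {
  const dx = to.x - from.x;
  const dy = to.y - from.y;
  const dot = facing.x * dx + facing.y * dy;
  const norms = Math.hypot(facing.x, facing.y) * Math.hypot(dx, dy);
  return Math.acos(Math.min(1, Math.max(-1, dot / norms)));
}

// Assumes a non-empty candidate list (the virtual object is within at
// least one interaction range when this is called).
function pickTarget(pos: Vec2, facing: Vec2, candidates: Candidate[]): Candidate {
  const dist = (c: Candidate) =>
    Math.hypot(c.position.x - pos.x, c.position.y - pos.y);
  const minDist = Math.min(...candidates.map(dist));
  // Candidates tied for the smallest distance (epsilon is an assumption).
  const nearest = candidates.filter(c => dist(c) - minDist < 1e-6);
  // Among the tied candidates, pick the one most directly ahead.
  return nearest.reduce((best, c) =>
    angleTo(facing, pos, c.position) < angleTo(facing, pos, best.position)
      ? c : best);
}
```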
In the above scheme, the presentation module is further configured to receive a switching operation for the target interaction control;
in response to the switching operation, to switch the presented target interaction control to another interaction control associated with another interactive object;
wherein the other interactive object is an interactive object, other than the target interactive object, within the interaction range of the virtual object.
In the above scheme, the interaction module is further configured to control the virtual object to perform an interactive operation with the other interactive object in response to a trigger operation on the other interaction control.
In the above scheme, the presentation module is further configured to control the virtual joystick to be in a passive interaction mode when it is detected that the virtual object is within an interaction range of at least one interaction object, and present at least one interaction control including a target interaction control in a first region of the sensing region;
in the above scheme, the presentation module is further configured to respond to a start instruction for an active interaction mode of the virtual joystick, control the mode of the virtual joystick to be switched from the passive interaction mode to the active interaction mode, and present at least one interaction control including a target interaction control in a second area of the sensing area.
In the above scheme, the interaction module is further configured to respond to a trigger operation for the target interaction control, and present the target interaction control in a third style;
displaying a target interaction object associated with the target interaction control in a target area of the virtual scene;
and controlling the virtual object to execute the interactive operation with the target interactive object.
In the above scheme, the interaction module is further configured to, in response to a stop instruction for the interactive operation, control the virtual object to stop the interactive operation with the interactive object and cancel display of the at least one interaction control.
In the above scheme, the presentation module is further configured to obtain interaction data of the virtual object and scene data of the virtual scene;
based on the interaction data and the scene data, invoke a neural network model to predict the likelihood that the virtual object will interact with an interactive object, obtaining a prediction result;
and, when the prediction result indicates that this likelihood reaches a likelihood threshold, present at least one interaction control including a target interaction control in the sensing area of the virtual joystick.
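As a hedged sketch of this AI-assisted display logic: a model scores the likelihood of interaction from the interaction data and scene data, and the controls are presented only when the score reaches the threshold. The interface, feature encoding, and the 0.5 threshold below are all assumptions; the patent specifies neither a model architecture nor an API.
```typescript
// Hedged sketch: the model interface and threshold are assumptions.
interface InteractionModel {
  // Returns the predicted likelihood of interaction, in [0, 1].
  predict(interactionData: number[], sceneData: number[]): number;
}

function maybeShowControls(
  model: InteractionModel,
  interactionData: number[],
  sceneData: number[],
  showControls: () => void,
  threshold = 0.5, // assumed value
): void {
  if (model.predict(interactionData, sceneData) >= threshold) {
    showControls(); // present the controls in the joystick's sensing area
  }
}
```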
An embodiment of the present application provides an electronic device, including:
a memory for storing executable instructions;
and the processor is used for realizing the object interaction method in the virtual scene provided by the embodiment of the application when the executable instructions stored in the memory are executed.
The embodiments of the present application provide a computer-readable storage medium storing executable instructions that, when executed by a processor, implement the object interaction method in a virtual scene provided by the embodiments of the present application.
The embodiments of the present application provide a computer program product including a computer program or instructions that, when executed by a processor, implement the object interaction method in a virtual scene provided by the embodiments of the present application.
The embodiment of the application has the following beneficial effects:
by applying the embodiment of the application, the virtual object and the virtual rocker are presented in the interface of the virtual scene, and the at least one interactive control comprising the target interactive control is presented in the sensing area of the virtual rocker, so that the virtual rocker and the interactive control can be displayed as a whole, and the screen occupation ratio of the interactive control in the virtual interface is effectively reduced; in addition, the virtual object can be triggered to execute the interactive operation with the target interactive object in the process of controlling the virtual rocker, so that the control efficiency in the virtual scene can be improved.
Drawings
Fig. 1 is a schematic architecture diagram of an object interaction system 100 in a virtual scene provided in an embodiment of the present application;
fig. 2 is a schematic structural diagram of an electronic device 500 implementing an object interaction method in a virtual scene according to an embodiment of the present application;
fig. 3 is a flowchart illustrating an object interaction method in a virtual scene according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram illustrating a pattern of a virtual joystick in an activated state according to an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of a virtual joystick provided in a sleep state according to an embodiment of the present disclosure;
fig. 6 is a schematic diagram illustrating a state switching event triggering process according to an embodiment of the present application;
FIGS. 7A-7B are schematic diagrams of interaction controls provided by embodiments of the present application;
FIG. 8 is a schematic view of a virtual joystick incorporating a control wheel according to an embodiment of the present disclosure;
FIG. 9 is a schematic diagram illustrating a display of an interaction control when passively triggered according to an embodiment of the present application;
FIG. 10 is a schematic diagram illustrating a display manner of a trigger interaction control provided in an embodiment of the present application;
FIG. 11 is a schematic diagram of a candidate interactive object provided by an embodiment of the present application;
FIG. 12 is a flowchart of a target interactive object determination method provided in an embodiment of the present application;
FIG. 13 is a schematic diagram of a target interaction object provided by an embodiment of the present application;
FIG. 14 is a flowchart of an interaction control switching operation provided in an embodiment of the present application;
FIG. 15 is a flowchart of a method for displaying an interactive control based on artificial intelligence according to an embodiment of the present application;
FIG. 16 is a flowchart of an object interaction method in a virtual scene according to an embodiment of the present disclosure;
FIGS. 17A-17B are schematic diagrams of display interfaces for an object interaction method in a virtual scene provided by the related art;
FIG. 18 is a schematic diagram illustrating a display mode of a virtual joystick provided by an embodiment of the present application in different states;
FIG. 19 is a diagram illustrating a state of a virtual joystick according to an embodiment of the present disclosure;
fig. 20 is a schematic diagram illustrating a determination process of a joystick and an interactive key according to an embodiment of the present application.
Detailed Description
To make the purpose, technical solutions, and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings. The described embodiments should not be construed as limiting the present application, and all other embodiments obtained by a person skilled in the art without creative effort fall within the protection scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
In the following description, the terms "first", "second", and "third" are used merely to distinguish similar objects and do not denote a particular ordering of objects. It should be understood that "first", "second", and "third" may be interchanged in a particular order or sequence where permissible, so that the embodiments of the application described herein can be implemented in an order other than that illustrated or described here.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application.
Before the embodiments of the present application are described in further detail, the terms and expressions involved in them are explained below; these interpretations apply throughout.
1) Client: an application program running on a terminal to provide various services, such as an instant messaging client or a video playback client.
2) "In response to": indicates the condition or state on which a performed operation depends. When the condition or state is satisfied, one or more of the operations performed may occur in real time or with a set delay; unless otherwise specified, there is no restriction on the order in which they are performed.
3) The virtual scene is a virtual scene displayed (or provided) when an application program runs on the terminal. The virtual scene may be a simulation environment of a real world, a semi-simulation semi-fictional virtual environment, or a pure fictional virtual environment. The virtual scene may be any one of a two-dimensional virtual scene, a 2.5-dimensional virtual scene, or a three-dimensional virtual scene, and the dimension of the virtual scene is not limited in the embodiment of the present application. For example, a virtual scene may include sky, land, ocean, etc., the land may include environmental elements such as deserts, cities, etc., and a user may control virtual objects to perform activities within the virtual scene including, but not limited to: adjusting at least one of body posture, crawling, walking, running, riding, jumping, driving, picking, shooting, attacking, throwing. The virtual scene may be displayed at a first-person perspective (e.g., to play a virtual object in a game at the player's own perspective); or displaying the virtual scene at a third person perspective (e.g., a player follows a virtual object in the game to play the game); the virtual scene can also be displayed at a bird's-eye view angle; the above-mentioned viewing angles can be switched arbitrarily.
Taking display at the first-person perspective as an example, the virtual scene displayed in the human-computer interaction interface may be determined as follows: based on the viewing position and viewing angle of the virtual object in the complete virtual scene, the field-of-view area of the virtual object is determined, and the part of the complete virtual scene within that field-of-view area is presented; that is, the displayed virtual scene may be a partial virtual scene relative to the panoramic virtual scene. Because the first-person perspective is the viewing angle with the strongest impact on the user, it can create an immersive sense of being personally on the scene during operation. Taking display at a bird's-eye view as an example, the interface presented in the human-computer interaction interface may be determined as follows: in response to a zoom operation on the panoramic virtual scene, the partial virtual scene corresponding to the zoom operation is presented; again, the displayed virtual scene may be partial relative to the panoramic one. This improves the user's operability and the efficiency of human-computer interaction.
4) Virtual objects, the appearance of various people and objects that can interact in a virtual scene, or movable objects in a virtual scene. The movable object can be a virtual character, a virtual animal, an animation character, etc., such as: characters, animals, plants, oil drums, walls, stones, etc. displayed in the virtual scene. The virtual object may be an avatar in the virtual scene that is virtual to represent the user. The virtual scene may include a plurality of virtual objects, each virtual object having its own shape and volume in the virtual scene and occupying a portion of the space in the virtual scene.
Alternatively, the virtual object may be a user character controlled through operations on the client, an Artificial Intelligence (AI) configured through training for battles in the virtual scene, or a Non-Player Character (NPC) configured for interaction in the virtual scene. Alternatively, the virtual object may be a virtual character engaging in adversarial interaction in the virtual scene. Optionally, the number of virtual objects participating in the interaction in the virtual scene may be preset, or may be dynamically determined according to the number of clients participating in the interaction.
Taking a shooting game as an example, the user may control the virtual object to fall freely, glide, open a parachute and descend, run, jump, crawl, or bend forward in the sky of the virtual scene, or control the virtual object to swim, float, or dive in the sea. The user may also control the virtual object to ride a vehicle-type virtual item, such as a virtual car, a virtual aircraft, or a virtual yacht, to move in the virtual scene, or to engage in adversarial interaction with other virtual objects through attack-type virtual items, such as a virtual machine gun, a virtual tank, or a virtual fighter jet. The above scenarios are merely examples, and the embodiments of the present application are not limited thereto.
5) Scene data: represents the various features of objects in the virtual scene during interaction, and may include, for example, the positions of the objects in the virtual scene. Different types of features may be included depending on the type of virtual scene; for example, in the virtual scene of a game, scene data may include the waiting time required for various functions configured in the scene (depending on how many times the same function can be used within a given time) and attribute values representing the various states of a game character, such as a health value (also called the red bar), a mana value (also called the blue bar), a status value, and blood volume.
6) Virtual joystick: a joystick rendered on the screen of a full-touch-screen mobile phone; the user plays the game by manipulating the virtual joystick directly on the touch screen.
7) Complex interactions in a game: operations that are not used frequently but must be performed in particular scenes, such as riding a horse, locking on, stunning an opponent, or switching weapons. Which of them are required varies with the game type; each is performed infrequently, but there are many kinds of them.
Based on the above explanations of the terms and expressions involved in the embodiments of the present application, the object interaction system in a virtual scene provided by the embodiments is described below. Referring to fig. 1, an architectural diagram of the object interaction system 100 in a virtual scene provided in this embodiment: to support an exemplary application, terminals (terminals 400-1 and 400-2 are shown) are connected to the server 200 through a network 300, where the network 300 may be a wide area network, a local area network, or a combination of the two, and data transmission is implemented over wireless or wired links.
The terminal (such as the terminal 400-1 and the terminal 400-2) is used for sending an acquisition request of scene data of the virtual scene to the server 200 based on the view interface receiving the triggering operation of entering the virtual scene;
the server 200 is configured to receive an acquisition request of scene data, and return the scene data of a virtual scene to the terminal in response to the acquisition request;
the terminals (such as the terminal 400-1 and the terminal 400-2) are configured to receive the scene data of the virtual scene, render a picture of the virtual scene based on the received scene data, and present the picture on a graphical interface (graphical interface 410-1 and graphical interface 410-2 are shown); the content presented by the picture is rendered based on the returned scene data of the virtual scene.
In practical application, the server 200 may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a Network service, cloud communication, a middleware service, a domain name service, a security service, a Content Delivery Network (CDN), a big data and artificial intelligence platform, and the like. The terminals (e.g., terminal 400-1 and terminal 400-2) may be, but are not limited to, a smart phone, a tablet computer, a laptop computer, a desktop computer, a smart speaker, a smart television, a smart watch, and the like. The terminals (e.g., the terminal 400-1 and the terminal 400-2) and the server 200 may be directly or indirectly connected through wired or wireless communication, and the application is not limited thereto.
In practical applications, the terminals (including the terminal 400-1 and the terminal 400-2) install and run applications that support virtual scenes. The application program may be any one of a First-Person Shooting game (FPS), a third-person shooting game, a Multiplayer Online Battle Arena (MOBA) game, a Two-dimensional (2D) game application, a Three-dimensional (3D) game application, a virtual reality application, a three-dimensional map program, a simulation program, or a multiplayer gunfight survival game. The application may also be a stand-alone application, such as a stand-alone 3D game program.
Taking an electronic game scene as an example: a user may operate on the terminal in advance, and after detecting the user's operation, the terminal may download a game configuration file of the electronic game, which may include the game's application program, interface display data, virtual scene data, and the like, so that the file can be invoked when the user logs into the game on the terminal to render and display the game interface. The user may perform a touch operation on the terminal; upon detecting it, the terminal determines the game data corresponding to the touch operation and renders and displays that data, which may include virtual scene data, behavior data of virtual objects in the scene, and the like.
In practical application, a terminal (including the terminal 400-1 and the terminal 400-2) receives, through the view interface, a trigger operation for entering the virtual scene and sends a request for the scene data of the virtual scene to the server 200; the server 200 receives the request and returns the scene data in response; the terminal receives the scene data, renders a picture of the virtual scene based on it, presents the picture including the virtual object, and presents in the interface a virtual joystick for controlling the displacement of the virtual object;
further, when the display condition of the interaction controls is satisfied, the terminal presents at least one interaction control, including the target interaction control, in the sensing area of the virtual joystick; each interaction control is associated with an interactive object and is used to trigger the virtual object (that is, the virtual character corresponding to the user logged into the electronic game) to interact with that interactive object; and, in response to a trigger operation on the target interaction control, the terminal controls the virtual object to perform the interactive operation with the target interactive object associated with the target interaction control. In this way, the interaction controls are presented in the sensing area of the virtual joystick only when the virtual object needs them; this on-demand display effectively reduces the proportion of the virtual scene interface that the controls occupy, and interactive operations can be performed with the same hand that operates the virtual joystick.
The embodiments of the present application can also be implemented by means of cloud technology, a hosting technology that unifies resources such as hardware, software, and networks in a wide area network or a local area network to realize computation, storage, processing, and sharing of data.
Cloud technology is a general term for the network technology, information technology, integration technology, management platform technology, application technology, and the like applied on the basis of the cloud computing business model. These technologies can form a resource pool that is used on demand and is flexible and convenient. Cloud computing technology is becoming an important support, since the background services of networked systems require large amounts of computing and storage resources.
Referring to fig. 2, fig. 2 is a schematic structural diagram of an electronic device 500 implementing an object interaction method in a virtual scene according to an embodiment of the present application. In practical application, the electronic device 500 may be a server or a terminal shown in fig. 1, and taking the electronic device 500 as the terminal shown in fig. 1 as an example, an electronic device implementing the object interaction method in a virtual scene in the embodiment of the present application is described, where the electronic device 500 provided in the embodiment of the present application includes: at least one processor 510, memory 550, at least one network interface 520, and a user interface 530. The various components in the electronic device 500 are coupled together by a bus system 540. It is understood that the bus system 540 is used to enable communications among the components. The bus system 540 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are labeled as bus system 540 in fig. 2.
The Processor 510 may be an integrated circuit chip having Signal processing capabilities, such as a general purpose Processor, a Digital Signal Processor (DSP), or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, or the like, wherein the general purpose Processor may be a microprocessor or any conventional Processor, or the like.
The user interface 530 includes one or more output devices 531 enabling presentation of media content, including one or more speakers and/or one or more visual display screens. The user interface 530 also includes one or more input devices 532, including user interface components to facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, other input buttons and controls.
The memory 550 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid state memory, hard disk drives, optical disk drives, and the like. Memory 550 optionally includes one or more storage devices physically located remote from processor 510.
The memory 550 may comprise volatile memory or nonvolatile memory, and may also comprise both volatile and nonvolatile memory. The nonvolatile Memory may be a Read Only Memory (ROM), and the volatile Memory may be a Random Access Memory (RAM). The memory 550 described in embodiments herein is intended to comprise any suitable type of memory.
In some embodiments, memory 550 can store data to support various operations, examples of which include programs, modules, and data structures, or subsets or supersets thereof, as exemplified below.
An operating system 551 including system programs for processing various basic system services and performing hardware-related tasks, such as a framework layer, a core library layer, a driver layer, etc., for implementing various basic services and processing hardware-based tasks;
a network communication module 552 for reaching other computing devices via one or more (wired or wireless) network interfaces 520; exemplary network interfaces 520 include Bluetooth, Wireless Fidelity (WiFi), Universal Serial Bus (USB), and the like;
a presentation module 553 for enabling presentation of information (e.g., a user interface for operating peripherals and displaying content and information) via one or more output devices 531 (e.g., a display screen, speakers, etc.) associated with the user interface 530;
an input processing module 554 to detect one or more user inputs or interactions from one of the one or more input devices 532 and to translate the detected inputs or interactions.
In some embodiments, the object interaction device in the virtual scene provided by the embodiments of the present application may be implemented in software, and fig. 2 illustrates an object interaction device 555 in the virtual scene stored in a memory 550, which may be software in the form of programs and plug-ins, and includes the following software modules: a presentation module 5551 and an interaction module 5552, which are logical and thus can be arbitrarily combined or further split according to the implemented functions, the functions of the respective modules being described below.
In other embodiments, the object interaction Device in the virtual scene provided in the embodiments of the present Application may be implemented by combining software and hardware, and as an example, the object interaction Device in the virtual scene provided in the embodiments of the present Application may be a processor in the form of a hardware decoding processor, which is programmed to execute the object interaction method in the virtual scene provided in the embodiments of the present Application, for example, the processor in the form of the hardware decoding processor may employ one or more Application Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), Field Programmable Gate Arrays (FPGAs), or other electronic elements.
Based on the above description of the object interaction system and the electronic device in a virtual scene, the object interaction method in a virtual scene provided by the embodiments of the present application is described below. In some embodiments, the method may be implemented by a server or a terminal alone, or by the server and the terminal in cooperation. In some embodiments, a terminal or a server may implement the method by running a computer program. For example, the computer program may be a native program or a software module in an operating system; a native application (APP), that is, a program that must be installed in the operating system to run, such as a game APP or another client supporting virtual scenes; an applet, that is, a program that only needs to be downloaded into a browser environment to run; or an applet that can be embedded into any APP. In general, the computer program may be any form of application, module, or plug-in.
The following describes an object interaction method in a virtual scene provided in the embodiment of the present application by taking a terminal embodiment as an example. Referring to fig. 3, fig. 3 is a schematic flowchart of an object interaction method in a virtual scene provided in an embodiment of the present application, where the object interaction method in the virtual scene provided in the embodiment of the present application includes:
in step 101, the terminal presents an interface of a virtual scene including a virtual object, and presents a virtual joystick for controlling a displacement of the virtual object in the interface of the virtual scene.
In actual implementation, an application client supporting a virtual scene may be installed on the terminal, and when a user opens the application client on the terminal and the terminal runs the application client, the terminal presents an interface of the virtual scene (such as a swordsman game scene) including a virtual object, and the user may control displacement of the virtual object in the virtual scene through the virtual joystick. In practical applications, the virtual object may be an avatar in a virtual scene corresponding to a user account currently logged in the application client, for example, the virtual object may be a virtual object controlled by a user entering a game or a simulated virtual scene, and of course, the virtual scene may further include other virtual objects, which may be controlled by other users or controlled by a robot program.
The virtual joystick is explained here: through it, a user can control the displacement (direction and speed) of a virtual object in the virtual scene. The sensing area of the virtual joystick includes a central area and a peripheral area, and the sensing area may be a circular area. A virtual joystick ordinarily presents at least two states in a virtual scene, a dormant state and an activated state; to implement the object interaction method provided by the embodiments of the present application by means of the virtual joystick, an interactive state is additionally defined, so that the virtual joystick provided by the embodiments of the present application can be in any of three states in the virtual scene: the dormant state, the activated state, and the interactive state.
In practical implementation, referring to fig. 4, a schematic diagram of a virtual joystick in the activated state according to an embodiment of the present application: the sensing area of the virtual joystick (denoted by reference numeral 1) is a circular area, and the virtual joystick may include a peripheral area (numeral 1-1), a central area (numeral 1-2), direction indication icons (numeral 1-3), and a status control (numeral 1-4) located in the central area. In the dormant (default) state, only the status control shown by numeral 1-4 is displayed, and the peripheral area, the central area, and the direction indication icons are hidden: the user has triggered neither the joystick's basic movement (direction and speed) control nor any interactive state. In the activated state, the user has triggered the basic movement control but not any interactive state, and the joystick's central button is displayed in the default style. In the interactive state, at least one interaction control, including the target interaction control, can be presented in a manner that integrates the interaction controls with the virtual joystick; the joystick can also present different styles depending on how the state switching event was triggered, as described in detail below. In addition, in practical applications the state of the virtual joystick can be indicated through different display styles of the status icon; for example, when the joystick is activated, the icon of the status control can be highlighted and made to blink.
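The three states and the transitions between them can be pictured as a small state machine. The following TypeScript sketch is an illustrative reading of the description above, not an implementation from the patent; all names are assumptions.
```typescript
// Illustrative reading of the three joystick states; names are assumed.
enum JoystickState {
  Dormant,     // only the status control is visible
  Activated,   // basic movement (direction/speed) control is active
  Interactive, // interaction controls are shown with the joystick
}

class VirtualJoystick {
  state = JoystickState.Dormant;

  // A touch on the peripheral area wakes the joystick (cf. fig. 5).
  onPeripheralTouch(): void {
    if (this.state === JoystickState.Dormant) {
      this.state = JoystickState.Activated;
    }
  }

  // A qualifying trigger on the central status icon (e.g. a long press)
  // reveals the interaction controls.
  onStatusIconTrigger(): void {
    if (this.state === JoystickState.Activated) {
      this.state = JoystickState.Interactive;
    }
  }

  // Releasing the trigger falls back to the activated state.
  onStatusIconRelease(): void {
    if (this.state === JoystickState.Interactive) {
      this.state = JoystickState.Activated;
    }
  }
}
```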
In step 102, at least one interactive control including a target interactive control is presented in a sensing area of the virtual joystick, wherein the interactive control is associated with an interactive object for triggering the virtual object to interact with the interactive object.
In practical implementation, presentation of the at least one interaction control in the sensing area of the virtual joystick indicates that the terminal has controlled the virtual joystick to enter the interactive state in response to the display condition of the at least one interaction control being satisfied. The terminal can trigger a state switching operation of the virtual joystick in response to a state switching event for the joystick, thereby switching the joystick among the three states. A state switching event can be triggered either actively or passively. Active triggering means that the terminal receives a trigger operation actively performed by the user on the virtual joystick, which triggers the state switching event. Passive triggering means that the terminal automatically detects that the environment of the virtual object in the virtual scene satisfies the condition for triggering a state switching event of the virtual joystick, and triggers the event accordingly.
In some embodiments, before the terminal presents the status icon in the first style corresponding to the activated state, the terminal may first trigger a state switching event that switches the virtual joystick from the dormant state to the activated state. This can be done as follows: the terminal presents the virtual joystick in the dormant state in the interface of the virtual scene, the peripheral area of the joystick being hidden; and the terminal controls the virtual joystick to enter the activated state in response to a trigger operation on the peripheral area.
For example, referring to fig. 5, a schematic diagram of a virtual joystick in the dormant state according to an embodiment of the present disclosure: the central area (numeral 1-2) and the status icon (numeral 1-4) are displayed, while the peripheral area (numeral 1-1) and the direction indication icons (numeral 1-3) are hidden. Upon receiving a trigger operation on the peripheral area, the terminal triggers the state switching event so that the virtual joystick switches from the dormant state to the activated state; once activated, the highlighted style of the status icon can remind the user that the joystick is currently in the activated state.
In some embodiments, the terminal controls the virtual joystick to switch from another state to the interactive state in response to an actively triggered state switching event. Referring to fig. 6, a schematic diagram of a state switching event triggering process provided in the embodiment of the present application: based on fig. 3, before at least one interaction control including the target interaction control is presented in the sensing area of the virtual joystick, steps 201 to 202 may be executed to trigger the state switching event that switches the current state of the virtual joystick to the interactive state.
In step 201, the terminal presents, in the central area of the sensing area of the virtual joystick, a status icon in the first style corresponding to the activated state, where the status icon represents the state of the virtual joystick.
In practical implementation, referring to fig. 4, in a central area of the sensing area of the virtual joystick, a status icon representing a status of the virtual joystick is presented. In this way, the state change of the virtual joystick can be shown by changing the display style of the state icon.
In step 202, in response to the first trigger operation on the status icon, the terminal controls the virtual joystick to switch from the activated state to the interactive state and presents the status icon in a second style corresponding to the interactive state.
In practical implementation, the terminal may trigger the state switching event in response to a trigger operation on the status icon, controlling the virtual joystick to switch from the activated state to the interactive state; accordingly, the display style of the status icon in the central area is set to the second style corresponding to the interactive state (different from the first style corresponding to the activated state). The terminal obtains the operation parameters of the trigger operation on the status icon and triggers the state switching event when it determines, based on those parameters, that the trigger condition of the event is satisfied.
For example, referring to fig. 4, take the trigger operation on the status icon to be a pressing operation: in response to the press, the terminal obtains its duration; when the duration reaches a duration threshold, the state switching event is triggered and the virtual joystick is controlled to enter the interactive state.
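A minimal sketch of this long-press trigger, assuming a browser-style timer API; the 500 ms threshold is an assumed value, since the patent only requires that the press duration reach some threshold:
```typescript
// Minimal long-press detection; the 500 ms threshold is assumed.
const LONG_PRESS_MS = 500;
let pressTimer: ReturnType<typeof setTimeout> | null = null;

function onStatusIconPressStart(enterInteractive: () => void): void {
  // Switch to the interactive state once the press lasts long enough.
  pressTimer = setTimeout(enterInteractive, LONG_PRESS_MS);
}

function onStatusIconPressEnd(backToActivated: () => void): void {
  // Releasing before the threshold cancels the pending switch;
  // releasing afterwards falls back to the activated state.
  if (pressTimer !== null) clearTimeout(pressTimer);
  pressTimer = null;
  backToActivated();
}
```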
In some embodiments, when the terminal determines that the virtual joystick is in the interactive state (that is, the display condition of the interaction controls is satisfied), at least one interaction control including the target interaction control may be presented in the sensing area of the virtual joystick, for example as follows: when the sensing area is a circular area and there are at least two interaction controls, the at least two interaction controls, including the target interaction control, are presented uniformly distributed along the edge of the circular area.
FIGS. 7A-7B are schematic diagrams of interaction controls provided in an embodiment of the present application. Referring to FIG. 7A, the sensing area of the virtual joystick is a circular area, and the terminal controls the joystick to enter the interactive state in response to a long-press operation on the status icon in the central area. In this example the current virtual scene has 4 interactive objects, that is, 4 interaction controls, and the 4 controls are uniformly distributed along the edge of the circular area.
It should be noted that the interaction controls presented at the edge of the virtual joystick may be only those adapted to the current environment of the virtual object, or all relevant interaction controls in the entire virtual scene application (game); this can be set according to the actual situation.
For example, referring to FIG. 7B, 4 interaction controls are adapted to the current environment of the virtual object, while the entire virtual scene application has 6. The controls may be displayed in the style shown at a or b in the figure, that is, only the controls adapted to the current environment (4 or fewer) are displayed at the edge of the virtual joystick; or in the style shown at c, that is, all 6 controls are displayed at the edge. In practice, availability can be conveyed through the display state of each control: in style c, all 6 controls are displayed, and any control not adapted to the current environment of the virtual object is set to an unavailable state.
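For illustration, evenly distributing n controls along the circular edge amounts to placing them at equal angular steps of 2π/n around the center. A short sketch under that assumption (the names and the top-of-circle starting angle are illustrative, not specified by the patent):
```typescript
// Even placement of n interaction controls on the circular edge.
interface Point { x: number; y: number; }

function layoutControls(center: Point, radius: number, n: number): Point[] {
  const positions: Point[] = [];
  for (let i = 0; i < n; i++) {
    // Equal angular steps of 2*pi/n, starting from the top of the circle.
    const angle = -Math.PI / 2 + (2 * Math.PI * i) / n;
    positions.push({
      x: center.x + radius * Math.cos(angle),
      y: center.y + radius * Math.sin(angle),
    });
  }
  return positions;
}
```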
In some embodiments, the terminal may also display the at least one interaction control as follows: a control wheel is presented on the outer edge of the circular area, the control wheel including at least two control display positions; and at least two interaction controls, including the target interaction control, are presented in those display positions.
In practical implementation, the terminal can initialize the control wheel in advance. The wheel contains at least two control display positions on which interaction controls are shown, and it is displayed directly in the sensing area of the virtual joystick (that is, the virtual joystick and the control wheel are integrated). The controls shown in the display positions may be those suited to the current virtual scene, or all controls that need to be displayed anywhere in the virtual scene (with controls not relevant to the current scene set to an unavailable state). In addition, controls that are temporarily hidden can be added to empty display positions, so that controls are displayed on demand.
For example, referring to fig. 8, a schematic diagram of a virtual joystick including a control wheel provided in an embodiment of the present application: numeral 1 indicates an ordinary virtual joystick (that is, a joystick in the activated state), and numeral 2 indicates a control wheel (which may also be called an interactive keyboard) with 4 control display positions, each usable to show one interaction control. Three of the positions already hold interaction controls; numeral 2-1 indicates an empty display position, and when the terminal receives a trigger operation on the empty position, other interaction controls can be displayed there.
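A rough sketch of such a control wheel as a fixed set of display positions ("slots"), each optionally holding an interaction control; adding a control fills the first empty slot, matching the on-demand display described above. All names and the default slot count of 4 are assumptions:
```typescript
// Control wheel as a fixed set of display positions ("slots").
interface InteractionControl { id: string; available: boolean; }

class ControlWheel {
  private slots: (InteractionControl | null)[];

  constructor(slotCount = 4) { // 4 slots, as in fig. 8 (assumed default)
    this.slots = new Array(slotCount).fill(null);
  }

  // Fill the first empty display position, for on-demand display.
  addControl(control: InteractionControl): boolean {
    const empty = this.slots.indexOf(null);
    if (empty === -1) return false; // wheel is full
    this.slots[empty] = control;
    return true;
  }
}
```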
In some embodiments, the terminal may further display the interactive controls by presenting at least one interactive control including the target interactive control in a candidate state in a sensing region of the virtual joystick; correspondingly, when a selection operation for the target interaction control in the candidate state is received, the selection operation is used as a trigger operation for the target interaction control.
In practical implementation, when the virtual joystick is in the interactive state, the interaction controls in the candidate state are presented in the sensing area of the virtual joystick; to display the candidate state visually, the icon corresponding to an interaction control in the candidate state can be controlled to blink between bright and dim.

Illustratively, referring to fig. 7A, a virtual joystick including 4 interaction controls is displayed, where the interaction control corresponding to "riding a horse" is determined to be the interaction control to be executed. At this time, the icon of the "riding a horse" interaction control may blink between bright and dim to remind the user that the "riding a horse" interaction control is currently in the candidate state, and that the virtual object in the virtual scene is about to execute the "riding a horse" interaction operation.
In some embodiments, when the virtual joystick is in the interactive state, the terminal may receive a trigger operation for the target interaction control in the following manner: when the first trigger operation is a pressing operation, a sliding operation is received while the pressing operation is being executed, and the sliding position corresponding to the sliding operation is acquired; when the sliding position falls within the display area of the target interaction control, a trigger operation for the target interaction control is received.
In actual implementation, when the virtual joystick is in the interactive state and the trigger operation for the state icon is a pressing operation, at least one interaction control including the target interaction control is presented in the sensing area of the virtual joystick. While the press is held, the user may continue with a sliding operation according to the actual situation and slide to the target interaction control or its display area; this is regarded as the user executing a trigger operation for the target interaction control.
Illustratively, in a martial-arts game scene, a "riding a horse" interaction control is presented in the sensing area of the virtual joystick. When the user, while pressing the state icon of the virtual joystick, continues the sliding operation and slides to the "riding a horse" interaction control, an interaction instruction of "riding a horse" is triggered.
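The press-and-slide trigger described above amounts to a hit test of the sliding position against each control's display area. A minimal sketch, assuming rectangular display areas and hypothetical names:

interface Rect { x: number; y: number; width: number; height: number; }
interface Point { x: number; y: number; }

// Returns the id of the control whose display area contains the slide
// position, or null; a non-null result is treated as a trigger operation
// for that control.
function hitControl(slidePos: Point, displayAreas: Map<string, Rect>): string | null {
  for (const [id, r] of displayAreas) {
    if (slidePos.x >= r.x && slidePos.x <= r.x + r.width &&
        slidePos.y >= r.y && slidePos.y <= r.y + r.height) {
      return id;
    }
  }
  return null;
}

For instance, if the slide lands in the "riding a horse" area, hitControl returns that control's id and the "riding a horse" interaction instruction is triggered.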
And step 203, controlling the virtual object to execute the interactive operation of the target interactive object associated with the target interactive control in response to the triggering operation aiming at the target interactive control when the virtual rocker is in the interactive state.
In actual implementation, the terminal controls the virtual joystick to switch from the activated state to the interactive state by executing steps 201 to 202, and displays at least one interaction control including the target interaction control in the sensing area of the virtual joystick. Then, based on fig. 3, when the terminal executes step 103 it is actually executing step 203; that is, when the virtual joystick is in the interactive state, the terminal controls the virtual object to execute the interaction operation of the target interactive object associated with the target interaction control.
In some embodiments, the terminal may also release (or cancel) the trigger operation for the status icon in the following manner. In the process of executing the first trigger operation (the trigger operation aiming at the state icon), the terminal receives a release instruction aiming at the first trigger operation; and responding to a release instruction, switching the state of the virtual rocker from the interactive state to the activated state, and presenting the state icon in a first mode.
Illustratively, taking the trigger operation for the state icon as a pressing operation as an example: when the virtual joystick is in the interactive state and the user lets go, or the finger leaves the sensing area of the virtual joystick, during the pressing operation, a release instruction for the pressing operation is triggered, and the state of the virtual joystick is controlled to switch from the interactive state back to the activated state.
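The press and release transitions described in the last few paragraphs can be summarized as a small state machine. The following sketch is illustrative; the state names and transition rules are assumptions drawn from the description above:

enum JoystickState { Default = "default", Activated = "activated", Interactive = "interactive" }

class VirtualJoystickStates {
  state = JoystickState.Default;

  // A first trigger operation (e.g. a press) on the state icon switches
  // the joystick from the activated state to the interactive state.
  onStateIconPressed(): void {
    if (this.state === JoystickState.Activated) {
      this.state = JoystickState.Interactive;
    }
  }

  // A release instruction (letting go, or leaving the sensing area)
  // switches back; the state icon is presented in the first style again.
  onRelease(): void {
    if (this.state === JoystickState.Interactive) {
      this.state = JoystickState.Activated;
    }
  }
}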
In some embodiments, at least one interaction control including a target interaction control may also be presented in the sensing region of the virtual joystick by: the terminal detects the position of the virtual object in the virtual scene; when the virtual object is detected to be within the interaction range of at least one interaction object, presenting at least one interaction control comprising a target interaction control; wherein the at least one interactive object comprises a target interactive object.
In practical implementation, the terminal responds to a state switching event of the virtual joystick triggered in a passive manner, controls the virtual joystick to switch from another state to the interactive state, and presents an interaction control in the sensing area of the virtual joystick. The terminal detects, through the application, the environment of the user (virtual object) in the virtual scene; when the virtual object is within the interaction range of at least one interactive object in the virtual scene, the virtual object can be controlled to perform an interaction operation with the corresponding interactive object.
In some embodiments, when the trigger mode of the state switching event corresponding to the virtual rocker is passive trigger, the target interaction control may be presented in the following manner: when the virtual object is detected to be only in the interaction range of one interaction object, the interaction object in the interaction range of the virtual object is used as a target interaction object, and a target interaction control related to the target interaction object is presented in the central area of the sensing area of the virtual rocker.
Illustratively, referring to fig. 9, fig. 9 is a schematic display diagram of an interaction control when passively triggered according to an embodiment of the present application. When the terminal detects that the virtual object is within the interaction range of an NPC (non-player character) capable of dialogue, it determines that the virtual object can trigger a dialogue operation, and replaces the state icon (or the icon corresponding to another interaction control) in the central area of the virtual joystick with the icon of the "dialogue" interaction control (shown by number 1); when the terminal detects that the user is within an interaction range in which an object can be picked up, it determines that the virtual object can trigger a pick-up operation, and replaces the state icon (or the icon corresponding to another interaction control) in the central area of the virtual joystick with the icon of the "pick up" interaction control (shown by number 2); when the terminal detects that the virtual object is currently within the interaction range of a horse (an interactive object), it determines that the virtual object can perform a horse-riding operation, and replaces the state icon (or the icon corresponding to another interaction control) in the central area of the virtual joystick with the icon of the "riding a horse" interaction control (shown by number 3).
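A minimal sketch of this passive trigger follows: when the virtual object lies within the interaction range of exactly one interactive object, that object's icon replaces the one in the central area. The circular-range model and all names are assumptions:

interface Vec2 { x: number; y: number; }
interface InteractiveObject { id: string; icon: string; position: Vec2; range: number; }

function dist(a: Vec2, b: Vec2): number {
  return Math.hypot(a.x - b.x, a.y - b.y);
}

// Returns the icon to show in the joystick's central area, or null when
// no object is in range; with several in range, the distance and angle
// rules described below apply instead.
function passiveTriggerIcon(playerPos: Vec2, objects: InteractiveObject[]): string | null {
  const inRange = objects.filter(o => dist(playerPos, o.position) <= o.range);
  return inRange.length === 1 ? inRange[0].icon : null;
}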
In some embodiments, when the virtual object is detected to be within the interaction range of at least two interaction objects, the terminal may further implement the display of the interaction control through the steps shown in fig. 10. Referring to fig. 10, fig. 10 is a schematic diagram of a display manner of a trigger interaction control provided in an embodiment of the present application, and is described in conjunction with the steps shown in fig. 10.
Step 301, when the terminal detects that the virtual object is within the interaction range of at least two interaction objects, taking the at least two interaction objects as candidate interaction objects, and obtaining the distance between the virtual object and each candidate interaction object.
In practical implementation, in the passive interaction mode, when the virtual object is within the interaction range of multiple interactive objects, those interactive objects may be taken as candidate interactive objects. Because the virtual object generally performs an interaction operation with only one target interactive object at a time, the terminal needs to determine the target interactive object from the multiple candidates, and may do so according to the distance relationship or the angle relationship between each candidate interactive object and the virtual object. First, the distance between each candidate interactive object and the virtual object is determined, and the candidate interactive object with the minimum distance to the virtual object is determined as the target interactive object. If more than one candidate interactive object has the minimum distance to the virtual object, the target interactive object is further determined according to the angle relationship between those candidates and the virtual object.
Illustratively, referring to fig. 11, fig. 11 is a schematic diagram of candidate interactive objects provided in an embodiment of the present application. The terminal detects that the current virtual object is located within the interaction ranges of the interactive objects corresponding to "horse", "collectible object", and "dialogue-capable NPC", and obtains the distance D1 between the virtual object and the "horse", the distance D2 between the virtual object and the "collectible object", and the distance D3 between the virtual object and the "dialogue-capable NPC". It should be noted that each distance is determined from the position (coordinate information) of the virtual object and the position (coordinate information) of the interactive object, that is, as the distance between two points in the same coordinate system.
Step 302, determining the candidate interactive object with the minimum distance to the virtual object as a target interactive object, and presenting a target interactive control associated with the target interactive object in the central area.
Continuing the above example, referring to fig. 11, the terminal obtains the distances D1, D2, and D3 between the virtual object and each candidate object and determines the minimum value. If D1 is the minimum value, the candidate interactive object at distance D1 from the virtual object may be determined as the target interactive object; that is, the "horse" is determined as the target interactive object of the virtual object in the current virtual scene, and the virtual object may perform the interaction operation "riding a horse" corresponding to the "horse".
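The minimum-distance rule can be sketched as follows, assuming 2D coordinates and hypothetical names. The function deliberately returns every candidate at the minimum distance, since a tie requires the angle-based step described next:

interface Vec2 { x: number; y: number; }
interface Candidate { id: string; position: Vec2; }

// Among candidates whose interaction range contains the virtual object,
// return those at the minimum distance from it (assumes a non-empty list).
function nearestCandidates(player: Vec2, candidates: Candidate[]): Candidate[] {
  const d = (c: Candidate) =>
    Math.hypot(c.position.x - player.x, c.position.y - player.y);
  const minD = Math.min(...candidates.map(d));
  return candidates.filter(c => Math.abs(d(c) - minD) < 1e-6);
}

In the fig. 11 example, nearestCandidates would return only the "horse" candidate, which is then the target interactive object.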
In some embodiments, when the number of candidate interactive objects having the smallest distance to the virtual object is multiple, the method may be further implemented in the following manner, referring to fig. 12, where fig. 12 is a flowchart of a target interactive object determination method provided in an embodiment of the present application, and is described with reference to the steps shown in fig. 12.
Step 401, when the number of the candidate interactive objects with the minimum distance to the virtual object is at least two, the terminal takes the candidate interactive object with the minimum distance to the virtual object as the interactive object to be screened.
In actual implementation, when the number of candidate interactive objects with the minimum distance to the virtual object, which is determined by the terminal, is multiple, the target interactive object needs to be further determined according to the angular relationship between the candidate interactive objects and the virtual object.
Exemplarily, referring to fig. 13, fig. 13 is a schematic diagram of a target interactive object provided in an embodiment of the present application. The terminal detects that the current virtual object is located within the interaction ranges of the interactive objects corresponding to the "horse", the "collectible object", and the "dialogue-capable NPC", and obtains the distance D4 between the virtual object and the "horse", the distance D5 between the virtual object and the "collectible object", and the distance D6 between the virtual object and the "dialogue-capable NPC", where D4, D5, and D6 are equal in size.
Step 402, the terminal respectively obtains the angle formed by each interactive object to be screened and the virtual object by taking the position of the virtual object as the vertex position of the angle, taking the straight line along the orientation of the virtual object as one side of the angle, and taking the connecting line between the virtual object and the interactive object to be screened as the other side of the angle.
In the passive trigger mode, when multiple candidate interactive objects are detected at the same minimum distance from the virtual object, the target interactive object may be further determined according to the angle relationship between the virtual object and each candidate. The terminal determines the angle relationship in the following manner: for each interactive object to be screened, the angle formed with the virtual object is determined by taking the position of the virtual object as the vertex of the angle, a straight line along the orientation of the virtual object as one side of the angle, and the connecting line between the virtual object and the interactive object to be screened as the other side of the angle.
Continuing the above example, referring to fig. 13, the terminal obtains the angle A between the virtual object and the "horse" (shown by number 1 in the figure), the angle B between the virtual object and the "collectible object" (shown by number 2 in the figure), and the angle C between the virtual object and the "dialogue-capable NPC" (shown by number 3 in the figure).
And 403, determining the interactive object to be screened corresponding to the angle with the minimum angle as a target interactive object by the terminal.
Continuing the above example, referring to fig. 13, the terminal compares the sizes of angles A, B, and C and determines that angle A is the minimum value. The interactive object to be screened whose angle with the virtual object is angle A may then be determined as the target interactive object; that is, the "horse" is determined as the target interactive object of the virtual object in the current virtual scene, and the virtual object may perform the interaction operation "riding a horse" with the "horse".
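The angle described here is the angle between the virtual object's facing direction and the line to the candidate, which can be computed from a dot product. A sketch under the same assumptions as above:

interface Vec2 { x: number; y: number; }
interface Candidate { id: string; position: Vec2; }

// Angle with vertex at the player, one side along the facing direction,
// the other along the line to the candidate; result in [0, pi].
function angleTo(player: Vec2, facing: Vec2, c: Candidate): number {
  const to = { x: c.position.x - player.x, y: c.position.y - player.y };
  const dot = facing.x * to.x + facing.y * to.y;
  const mags = Math.hypot(facing.x, facing.y) * Math.hypot(to.x, to.y);
  return Math.acos(dot / mags);
}

// Tie-break: among equidistant candidates, pick the one with the
// smallest angle (assumes a non-empty list).
function pickByAngle(player: Vec2, facing: Vec2, tied: Candidate[]): Candidate {
  return tied.reduce((best, c) =>
    angleTo(player, facing, c) < angleTo(player, facing, best) ? c : best);
}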
In some embodiments, when the virtual object is within the interaction range of at least two interaction objects, after the target interaction control is presented, the terminal may further execute a switching operation of the interaction control by triggering a switching instruction for the interaction control. Referring to fig. 14, fig. 14 is a flowchart of an interaction control switching operation provided in an embodiment of the present application, and is described in conjunction with the steps shown in fig. 14.
Step 501, the terminal receives a switching operation for a target interactive control.
Step 502, in response to the switching operation, switching the presented target interaction control to another interaction control associated with another interactive object, where the other interactive objects are the interactive objects, other than the target interactive object, within whose interaction range the virtual object is located.
After steps 501 to 502 are carried out and another interaction control is presented in the sensing area of the virtual joystick, then, based on fig. 3, when the terminal executes step 103 it is actually executing step 503.
And step 503, in response to the triggering operation for the other interactive control, controlling the virtual object to execute the interactive operation with the other interactive object.
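One simple way to realize such switching, sketched here as an assumption rather than the disclosed design, is to cycle the presented control through the interactive objects currently in range:

// Advance from the currently presented control to the next in-range one,
// wrapping around; assumes inRangeIds is non-empty.
function nextControl(inRangeIds: string[], currentId: string): string {
  const i = inRangeIds.indexOf(currentId);
  return inRangeIds[(i + 1) % inRangeIds.length];
}

// Usage: with ["ride", "pickup", "dialogue"] in range and "ride" shown,
// a switching operation presents "pickup" next.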
In some embodiments, interaction mode switching for the virtual joystick may be realized by setting an interaction mode. With the interaction mode switched, the interaction controls are presented as follows: when the terminal detects that the virtual object is within the interaction range of at least one interactive object, the virtual joystick is controlled to be in a passive interaction mode, and at least one interaction control including the target interaction control is presented in a first area within the sensing area.
In practical implementation, the terminal responds to a trigger operation for a state switching event of the virtual joystick to switch the joystick state, where the trigger operation includes active triggering (by the user) and passive triggering. When passive triggering causes the virtual joystick to switch from another state to the interactive state so as to control the virtual object to perform an interaction operation with the target interactive object, this may be referred to as the passive interaction mode of the virtual joystick. In practical applications, the active interaction mode and the passive interaction mode may be switched by turning the interaction mode function item on and off.
In some embodiments, when the virtual joystick is in the passive interaction mode, the active interaction mode of the virtual joystick is turned back on, and the following operations may be performed: the terminal responds to a starting instruction of an active interaction mode aiming at the virtual rocker, the mode of the virtual rocker is controlled to be switched from a passive interaction mode to the active interaction mode, and at least one interaction control comprising a target interaction control is presented in a second area in the induction area.
In actual implementation, when the virtual joystick is in the passive interaction mode (see the display style of the virtual joystick shown in fig. 9), the terminal, in response to an opening instruction for the active interaction mode of the virtual joystick, switches the interaction mode of the joystick from the passive interaction mode to the active interaction mode, and then displays at least one interaction control including the target interaction control at the edge of the sensing area of the virtual joystick, with the display style of the virtual joystick as shown in fig. 7A.
For example, the opening instruction for the active interaction mode of the virtual joystick may be triggered by the user manually toggling the interaction mode function item, or by the user performing a long-press operation in the central area of the virtual joystick whose duration reaches a duration threshold.
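The long-press path of this opening instruction can be sketched as follows; the 400 ms threshold matches the long-press time t ≥ 0.4 s mentioned later in this description, and the names are illustrative:

enum InteractionMode { Active = "active", Passive = "passive" }

const LONG_PRESS_THRESHOLD_MS = 400; // corresponds to t >= 0.4 s

// A long press in the central area whose duration reaches the threshold
// switches the joystick from the passive to the active interaction mode
// (a manual toggle of the interaction mode function item would do the same).
function resolveMode(pressDurationMs: number, current: InteractionMode): InteractionMode {
  if (current === InteractionMode.Passive &&
      pressDurationMs >= LONG_PRESS_THRESHOLD_MS) {
    return InteractionMode.Active; // controls then appear in the second (edge) area
  }
  return current;
}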
In some embodiments, the terminal may further implement displaying of the interaction control in a manner shown in fig. 15, referring to fig. 15, where fig. 15 is a flowchart of a display method of the interaction control based on artificial intelligence provided in this embodiment of the present application. The description will be made with reference to the steps shown in fig. 15.
Step 1021, the terminal obtains the interactive data of the virtual object and the scene data of the virtual scene.
And 1022, calling a neural network model to predict the possibility of interaction between the virtual object and the interactive object based on the interactive data and the scene data, so as to obtain a prediction result.
And step 1023, when the prediction result indicates that the likelihood of the interaction between the virtual object and the interactive object reaches a likelihood threshold, presenting at least one interaction control including the target interaction control in the sensing area of the virtual joystick.
In actual implementation, the terminal controls the virtual joystick to be in the interactive state and displays the interaction controls by executing steps 1021 to 1023. The terminal may collect sample interaction data of each sample virtual object in each sample virtual scene, collect sample scene data of each sample virtual scene, and construct training samples from the collected sample interaction data and sample scene data. The training samples are used as input to the neural network model to be trained, values corresponding to the likelihood that the virtual object interacts with the interactive object are used as annotation data, and the neural network model is trained to obtain the trained neural network model.
When the terminal presents at least one interaction control including the target interaction control, it first obtains the interaction data of the virtual object and the scene data of the virtual scene corresponding to the current position of the virtual object. It then calls the neural network model to predict, based on the interaction data and the scene data, the likelihood that the virtual object interacts with the interactive object, and obtains a prediction result. When the prediction result indicates that this likelihood reaches the likelihood threshold, the terminal presents the virtual joystick in the interactive state and presents at least one interaction control including the target interaction control in its sensing area. In this way, the virtual joystick in the interactive state and the corresponding interaction controls are displayed only when the user is likely to trigger an interaction operation, which reduces the screen proportion occupied by the interaction controls and improves screen display utilization.
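A sketch of this likelihood gate follows. The model interface is a stand-in for the trained neural network rather than a real library API, and the 0.5 threshold is an assumed value:

// Stand-in for the trained neural network model; predict returns a
// likelihood score in [0, 1].
interface PredictionModel {
  predict(interactionData: number[], sceneData: number[]): number;
}

// Show the joystick in the interactive state, with its interaction
// controls, only when the predicted likelihood reaches the threshold.
function shouldShowControls(
  model: PredictionModel,
  interactionData: number[],
  sceneData: number[],
  threshold = 0.5,
): boolean {
  return model.predict(interactionData, sceneData) >= threshold;
}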
In step 103, in response to the triggering operation for the target interaction control, the virtual object is controlled to perform the interaction operation of the target interaction object associated with the target interaction control.
In some embodiments, referring to fig. 16, fig. 16 is a flowchart of an object interaction method in a virtual scene provided in an embodiment of the present application, and based on fig. 3, step 103 may be implemented by steps 601 to 603.
Step 601, the terminal responds to the trigger operation aiming at the target interaction control and presents the target interaction control by adopting a third style.
In practical implementation, the terminal receives a trigger operation for the target interactive control, and may display an icon of the target interactive control in a manner of highlighting and enlarging the icon to prompt the user that the operation for the target interactive control has been triggered.
Step 602, displaying a target interaction object associated with the target interaction control in a target area of the virtual scene.
In actual implementation, after the operation corresponding to the target interaction control is triggered, a corresponding target interaction object is presented in a target area of the virtual scene.
Step 603, controlling the virtual object to execute the interactive operation with the target interactive object.
In some embodiments, the following operations may also be performed to cancel the display interaction control: and the terminal responds to the stop instruction aiming at the interactive operation, controls the virtual object to stop the interactive operation with the interactive object, and cancels the displayed at least one interactive control.
In actual implementation, in response to a stop instruction for the interaction operation, the virtual object stops the interaction operation with the interactive object, the virtual joystick is controlled to switch from the interactive state to another state (the activated state or the default state), and the virtual joystick is displayed in the virtual scene with the display style corresponding to that state.
Illustratively, in the virtual scene, the virtual object is performing the interactive operation of "riding a horse", at this time, a stop interactive instruction of "getting off the horse" is received, and the terminal controls the virtual object to stop "riding the horse". Meanwhile, the virtual joystick is controlled to be switched from the interactive state to the activated state, and the display style of the virtual joystick can be switched from the display style shown in fig. 7A to the display style shown in fig. 5.
In this way, when the virtual object needs to perform an interaction operation, the interaction controls are presented in the sensing area of the virtual joystick. By displaying the interaction controls in a manner that integrates the virtual joystick and the interaction controls, the interaction operation with an interactive object can be performed through one-handed operation of the virtual joystick, and the player can directly trigger a selection operation for an interaction control while controlling the joystick. At the same time, the screen proportion occupied by a large number of interaction controls in the virtual scene interface is reduced, leaving more space for other functions.
An exemplary application of the embodiments of the present application in a practical application scenario will be described below. Taking the virtual scene as a game scene as an example, a terminal runs a client (for example, a standalone game application) and, during operation, outputs a virtual scene including role play. The virtual scene is an environment for game character interaction, such as a plain, a street, or a valley in which game characters battle. The virtual scene includes a virtual object, a virtual joystick, interaction controls, and skill controls, where the virtual object may be a game character controlled by a user (that is, a player-controlled virtual character). The virtual object can move in the virtual scene in response to the operation of a real user on a controller (including a touch screen, a voice-operated switch, a keyboard, a mouse, a joystick, and the like); for example, when the real user moves the virtual joystick to the left, the virtual object moves to the left in the virtual scene. In response to a trigger operation on a skill control, the virtual object is controlled to perform the corresponding skill action in the virtual scene; and in response to a trigger operation on an interaction control, the virtual object is controlled to execute the interaction action with the target interactive object in the virtual scene.
Complex interaction in a game scene is explained first. Complex interaction refers to operations that are not commonly used but need to be performed in special scenarios, such as horse riding, target locking, stunning, and weapon switching; these operations differ across game types, and they are many in kind but low in frequency. In order to reasonably present the interaction controls corresponding to these complex interaction operations in the virtual scene interface, the related art may adopt the following approaches:
Illustratively, FIGS. 17A-17B are display interface diagrams of an object interaction method in a virtual scene in the related art. Referring to fig. 17A, for complex interactions in a game scene, an entry collecting various complex interaction buttons is added at the lower left corner of the virtual joystick in the virtual scene interface. The complex interaction buttons are collapsed by default; clicking expands them so that all complex interaction buttons can be viewed, and each can be operated by clicking. With this design, the virtual joystick and the complex interactions cannot be operated at the same time, and the complex interaction buttons can only be used when the joystick is not in use. Accordingly, the interaction buttons are independent and laid out at various positions of the main interface according to operational convenience, so as to realize interaction operations between the virtual object and interactive objects. This technical solution has advantages such as convenient click operation and simple operation steps; on the other hand, it also has disadvantages: the buttons are scattered at various positions of the interface and must be memorized individually; adding buttons on a mobile phone increases the screen occupation ratio, squeezing the operation areas of other functions; and the interactions are mutually exclusive with the virtual joystick operation.
Illustratively, referring to fig. 17B, in the interface of the virtual scene, all complex interaction operations are displayed on the interface as independent buttons, and interaction is performed by clicking the buttons, so the screen carries particularly many interaction buttons. Here too, the virtual joystick and the operation buttons cannot be used simultaneously, and only one operation can be triggered at a time. Alternatively, the interaction buttons are merged into one entry added on the main interface, and an interaction button is selected by clicking the entry. On the one hand, integrating the interaction buttons reduces the screen occupation ratio; on the other hand, the operation steps for each interaction button increase (the entry must be opened before the button can be operated), and the interaction actions remain mutually exclusive with the virtual joystick operation.
Based on this, an embodiment of the present application provides an object interaction method in a virtual scene which, by integrating the virtual joystick and the interaction buttons, allows the player to enter the selection of an interaction button directly while operating the virtual joystick, and at the same time reduces the screen space occupied by a large number of interaction buttons on the terminal, providing more space for other functions.
In practical implementation, the virtual joystick in the virtual scene may be a fixed joystick or a following joystick, and taking the fixed joystick as an example, the virtual joystick is displayed at a fixed position (e.g., a lower left corner) in the virtual scene interface and does not move along with the position of the finger of the user. Referring to fig. 17B, the sensing area of the virtual joystick in the figure can be divided into two areas: a central region and a peripheral region. In the embodiment of the application, a corresponding control icon, such as a state icon for representing the state of the virtual joystick, can be presented in the central area of the virtual joystick, and the peripheral area can be used for receiving touch operations of a user, such as clicking, sliding, dragging and the like.
In some embodiments, the virtual joystick may display different states according to different triggering conditions, and the state in which the virtual joystick is located may include a default state, a triggering state, and an interaction state, where the interaction state is set to implement the object interaction method provided in the embodiments of the present application.
In practical implementation, the terminal may perform state switching of the virtual joystick in response to a trigger operation for a state switching event of the virtual joystick, so as to switch the joystick among the three states. When the virtual joystick is in the default state (also called the dormant state), the user has triggered neither the direction and speed control of the joystick's basic movement nor any interactive state, and the virtual joystick is displayed only as a button. When the virtual joystick is in the trigger state (also called the activated state), the user has triggered the direction and speed control of the joystick's basic movement but not any interactive state, and the joystick's center button is displayed as in the default state. When the virtual joystick is in the interactive state, the icon of an interaction button can be selected according to the interactive object; in this case it is necessary to know whether the joystick was previously in the default state or the trigger state, but in either case the button in the central area of the joystick is replaced by the interaction button according to the interaction situation. In addition, the passive-trigger function of the virtual joystick may be responded to through a long press in the central area, which can be directly determined.
In practical implementation, the different states of the virtual joystick can be displayed visually through the control style of the joystick's central area; that is, a change in the joystick's state can be shown by changing the display style of the control arranged in its central area. It should be noted that many factors can change the state of the virtual joystick (that is, trigger its state switching event), including changes caused by the user's active actions (active triggering) and passive changes caused by the environment of the virtual object in the virtual scene (passive triggering). In the embodiments of the present application, a passive change caused by the environment of the virtual object switches the joystick from another state to the interactive state. For convenience of description, the interaction mode of the virtual joystick in the interactive state is divided into an active interaction mode and a passive interaction mode.
Taking as an example a user who needs to perform the complex interactions of horse riding, dialogue, and pick-up in a game scene, referring to fig. 18, fig. 18 is a schematic view of the display styles of the virtual joystick provided in an embodiment of the present application in different states. The figure shows the display styles corresponding to the virtual joystick in different states, that is, the display styles of the control in the joystick's central area, such as the default state, the joystick trigger state, the passive-interaction dialogue state, the passive-interaction pick-up state, the passive-interaction horse-riding state, and the long-press interaction-button selection state.
Illustratively, referring to fig. 19, fig. 19 is a state display diagram of a virtual joystick provided in an embodiment of the present application. When the virtual joystick is in the default state (shown by number 1 in the figure), the outer ring and direction indicators of the joystick are hidden, indicating that the user has triggered neither the joystick movement operation nor any interaction operation, and that the virtual object is not in any environment suitable for triggering an interaction operation.
Continuing the above example, referring to fig. 19, when the terminal receives a click or sliding operation by the user within the peripheral area of the virtual joystick (the area indicated by number 2-1 in the figure), it may determine that the joystick movement operation has been triggered, that is, the virtual joystick is controlled to be in the trigger state; at this time, the game character (virtual object) responds to the direction and movement (speed) control of the joystick in the corresponding direction.
Continuing the above example, referring to fig. 19, when the terminal receives a long-press operation by the user within the central area of the virtual joystick (the area indicated by number 3-1 in the figure), it obtains the operation duration of the long press; when this duration reaches the time threshold (for example, the long-press time t ≥ 0.4 s), it is determined that the user has actively triggered interaction button selection, and the virtual joystick is controlled to be in the interactive state (the interaction mode corresponding to the virtual object is the active interaction mode). An interactive keyboard wheel (also called a control wheel) is then presented on the outer edge of the virtual joystick (shown by number 4 in fig. 19). The wheel can display at least one interaction button required by the current game; in response to a sliding operation in which the user's finger slides to a particular interaction button, the currently selected button is taken as the target interaction button, and the virtual object executes the corresponding interaction operation (for example, the horse-riding operation). In response to a release instruction triggered by the user for the target interaction button (such as the user letting go), the virtual joystick can be controlled to switch back to the default state; direction control of the joystick and button selection cannot be performed simultaneously.
In practical implementation, the display of the interaction buttons may also be triggered passively, with the virtual joystick placed in the interactive state through passive triggering (in which case the interaction mode corresponding to the virtual object is the passive interaction mode). The terminal detects the environment of the user (virtual object) in the virtual scene, and when the virtual object is within the interaction range of an interactive object, the virtual object can be controlled to perform an interaction operation with that object. For example, when the terminal detects that the virtual object (user) is currently within the interaction range of a horse (an interactive object), it determines that the virtual object can perform the horse-riding operation; when the terminal detects that the user is within the interaction range of an NPC (non-player character) capable of dialogue, it determines that the virtual object can trigger the dialogue operation.
In some embodiments, the determination of interactable objects may consider two elements, distance and angle. In the passive interaction mode, the rule for selecting an interactable object gives distance priority over angle (distance > angle). Specifically, the interactable objects are first screened by their distance from the virtual object to obtain candidate interactive objects. If multiple candidates are obtained, one of them is determined as the target interactive object according to the angle relationship between the candidates and the virtual object, and the current control in the central area of the virtual joystick is replaced by the interaction control corresponding to the target interactive object. The player can then trigger the corresponding interaction operation by clicking the interaction control (button) of the target interactive object, which reduces the player's operation cost between the joystick and the interaction buttons.
In actual implementation, when the distances between the reference point of the virtual object and the reference points of the multiple interactable objects differ, the interactable object at the smaller distance is preferentially selected as the target interactive object (provided the virtual object is within that object's interactable range).
Illustratively, referring to fig. 11, the position of the current user (the position of the virtual object) is detected and found to be within the interaction ranges of the interactive objects corresponding to the "horse", "collectible object", and "dialogue-capable NPC", and the distance d1 between the user and the "horse", the distance d2 between the user and the "collectible object", and the distance d3 between the user and the "dialogue-capable NPC" are obtained. Comparing d1, d2, and d3, d1 is determined to be the minimum value, so the "horse" is determined as the user's target interactive object in the current virtual scene; that is, the user can perform the action "riding a horse" corresponding to the "horse". The virtual joystick shown by number a in fig. 11 is presented in the interface, and at this time the control in the central area of the virtual joystick is the "riding a horse" control.
Further, in actual implementation, when the passively triggered interactable objects are at the same distance, the facing (orientation) of the user (virtual object) in the virtual scene is taken as the reference, the angle between each interactable object and the user is determined, and the interactable object with the smaller angle is selected (provided the virtual object is within that object's interactable range).
Illustratively, referring to fig. 13, the distance d4 between the user and the "horse", the distance d5 between the user and the "collectible object", and the distance d6 between the user and the "dialogue-capable NPC" are obtained. Comparison shows that d4, d5, and d6 are equal in size, so it is necessary to further obtain the angle A between the user and the "horse" (denoted by reference numeral 1 in the figure), the angle B between the user and the "collectible object" (denoted by reference numeral 2 in the figure), and the angle C between the user and the "dialogue-capable NPC" (denoted by reference numeral 3 in the figure). Comparing the sizes of angle A, angle B, and angle C, angle A is determined to be the minimum, so the "horse" is determined as the user's target interactive object in the current virtual scene; that is, the user can perform the action "riding a horse" corresponding to the "horse". The virtual joystick shown by number a in fig. 13 is presented in the interface, and at this time the control in the central area of the virtual joystick is the "riding a horse" control.
It should be noted that, within the range where a passively triggered button exists, the player is still supported in manually switching the interaction button; in this case, long-pressing the button can trigger interaction button selection.
In some embodiments, when active switching and passive switching coexist for an interaction control, the mode to trigger is determined according to the time order of the triggers: when the user performs active switching first, the passive interaction switching function is disabled so as to avoid state confusion; when the user encounters passive interaction first, a long press triggers the response of the passive interaction.
In some embodiments, referring to fig. 20, fig. 20 is a schematic view of the decision flow for the joystick and the interaction buttons provided in an embodiment of the present application. The terminal first performs step 701: judging whether the user has executed a click or sliding operation in the peripheral area of the virtual joystick. If yes, step 702 is performed: triggering the joystick's control response for character movement. If not, step 703 is performed: judging whether the user has executed a long-press operation in the central area of the virtual joystick. If yes, step 704 is performed: triggering the display event of the interaction button selection wheel, that is, displaying the interaction button selection wheel on the outer edge of the virtual joystick; then step 705 is performed: judging whether the user drags to select an interaction button. If yes, step 706 is performed: triggering the response of the corresponding interaction button, that is, responding to the trigger operation for the selected interaction button (the target interaction button) and controlling the virtual object to execute the corresponding interaction operation. If not, step 707 is performed: judging whether the user has let go or left the central area of the virtual joystick; if yes, step 708 is performed: closing the response of the interaction button selection wheel; if not, step 709 is performed: maintaining the response of the interaction button selection wheel. Based on step 703, if it is determined that the user has not performed a long press in the central area of the virtual joystick, step 710 is performed: judging whether the user is within the passive trigger range of an interaction button. If yes, step 711 is performed: triggering the interaction button replacement event in the central area of the virtual joystick, and responding to a click on the (replaced) interaction button. If not, step 712 is performed: controlling the virtual joystick to be in the default state. Through these steps, operation decisions for the virtual joystick and the interaction buttons can be made according to the user's current situation, which facilitates one-handed operation of the interaction buttons by the user and reduces the screen area occupied by complex interaction buttons on a limited interface.
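The decision flow of fig. 20 can be condensed into a single dispatch function. The following sketch is an illustration only; the event names and returned action labels are hypothetical:

type Region = "peripheral" | "central" | "outside";
interface TouchInfo { region: Region; kind: "tap" | "slide" | "longPress"; }

function handleJoystickInput(e: TouchInfo, inPassiveTriggerRange: boolean): string {
  if (e.region === "peripheral" && (e.kind === "tap" || e.kind === "slide")) {
    return "move-character";         // steps 701-702: movement control
  }
  if (e.region === "central" && e.kind === "longPress") {
    return "open-interaction-wheel"; // steps 703-705: show the selection wheel
  }
  if (inPassiveTriggerRange) {
    return "swap-center-control";    // steps 710-711: passive replacement
  }
  return "default-state";            // step 712
}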
By applying the embodiments of the present application, complex interaction operations in a game can be integrated into the game's virtual joystick through a trigger-based double-layer joystick, reducing the screen occupation ratio of the battle interface and allowing complex interaction operations to be performed through one-handed operation of the joystick.
It is understood that, in the embodiments of the present application, when the embodiments are applied to specific products or technologies, data related to user information and the like require the user's permission or consent, and the collection, use, and processing of such data must comply with the relevant laws, regulations, and standards of the relevant countries and regions.
Continuing with the exemplary structure of the object interaction device 555 in the virtual scene provided by the embodiments of the present application implemented as a software module, in some embodiments, as shown in fig. 2, the software module in the object interaction device 555 in the virtual scene stored in the memory 550 may include:
a presenting module 5551, configured to present an interface of a virtual scene including a virtual object, and in the interface, present a virtual joystick for controlling a displacement of the virtual object;
the presentation module is further used for presenting at least one interaction control comprising a target interaction control in a sensing area of the virtual rocker, wherein the interaction control is associated with an interaction object and is used for triggering the virtual object to interact with the interaction object;
an interaction module 5552, configured to, in response to a trigger operation for the target interaction control, control the virtual object to perform an interaction operation of a target interaction object associated with the target interaction control.
In some embodiments, the presentation module is further configured to present, in a central area of the sensing area of the virtual joystick, a status icon in a first pattern corresponding to an activated status, where the status icon is used to represent a status of the virtual joystick; and responding to a first trigger operation aiming at the state icon, controlling the virtual rocker to be switched from the activation state to the interaction state, and presenting the state icon by adopting a second style corresponding to the interaction state.
In some embodiments, the interaction module is further configured to control the virtual object to perform an interaction operation with the interaction object in response to a trigger operation for the target interaction control while the virtual joystick is in the interaction state.
In some embodiments, before the controlling the virtual object to perform the interactive operation with the interactive object, the presentation module is further configured to receive a release instruction for the first trigger operation during the execution of the first trigger operation; and responding to the release instruction, switching the state of the virtual rocker from the interactive state to an activated state, and presenting the state icon by adopting the first pattern.
In some embodiments, the interaction module is further configured to, when the first trigger operation is a pressing operation, receive a sliding operation in a process of executing the pressing operation, and acquire a sliding position corresponding to the sliding operation; and when the sliding position is in the display area of the target interaction control, receiving a trigger operation aiming at the target interaction control.
In some embodiments, the presenting module is further configured to present the virtual joystick in a sleep state, and when the virtual joystick is in the sleep state, a peripheral area of the central area in the sensing area is in a hidden display state; controlling the virtual rocker to be in the activated state in response to a triggering operation for the peripheral region.
In some embodiments, the presenting module is further configured to, when the sensing area is a circular area and the number of the interaction controls is at least two, present at least two interaction controls including the target interaction control, uniformly distributed, at an edge of the circular area.
In some embodiments, the rendering module is further configured to render a control wheel at an outer edge of the circular region, the control wheel including at least two control display bits therein; presenting at least two of the interaction controls including the target interaction control in at least two control display positions included in the control carousel.
In some embodiments, the presenting module is further configured to present, in the sensing area of the virtual joystick, at least one interaction control including a target interaction control in a candidate state; when a selection operation aiming at the target interaction control in the candidate state is received, the selection operation is used as a trigger operation aiming at the target interaction control.
In some embodiments, the presentation module is further configured to detect a location of the virtual object in the virtual scene; when the virtual object is detected to be within the interaction range of at least one interaction object, presenting at least one interaction control comprising a target interaction control; wherein the at least one interaction object comprises the target interaction object.
In some embodiments, the presenting module is further configured to, when it is detected that the virtual object is only within an interaction range of one interaction object, take the interaction object within the interaction range of the virtual object as the target interaction object, and present a target interaction control associated with the target interaction object in a central area of the sensing area of the virtual joystick; when the virtual object is detected to be within the interaction range of at least two interaction objects, taking the at least two interaction objects as candidate interaction objects, and acquiring the distance between the virtual object and each candidate interaction object; and determining the candidate interactive object with the minimum distance to the virtual object as the target interactive object, and presenting a target interactive control associated with the target interactive object in the central area.
In some embodiments, the presenting module is further configured to, when the number of candidate interaction objects having the smallest distance to the virtual object is at least two, take the candidate interaction object having the smallest distance to the virtual object as the interaction object to be filtered; respectively acquiring the angle formed by each interactive object to be screened and the virtual object by taking the position of the virtual object as the vertex position of the angle, taking a straight line along the orientation of the virtual object as one side of the angle and taking a connecting line between the virtual object and the interactive object to be screened as the other side of the angle; and determining the interactive object to be screened corresponding to the angle with the minimum angle as the target interactive object.
In some embodiments, the presentation module is further configured to receive a switching operation for the target interaction control; responding to the switching operation, and switching the presented target interaction control into other interaction controls associated with other interaction objects; wherein, the other interactive objects are the interactive objects except the target interactive object in the interaction range of the virtual object;
in some embodiments, the interaction module is further configured to control the virtual object to perform an interaction operation with the other interaction object in response to a trigger operation for the other interaction control.
In some embodiments, the presentation module is further configured to control the virtual joystick to be in a passive interaction mode when the virtual object is detected to be within an interaction range of at least one interaction object, and present at least one interaction control including a target interaction control in a first area of the sensing area;
in some embodiments, the presentation module is further configured to, in response to an open instruction for an active interaction mode of the virtual joystick, control the mode of the virtual joystick to be switched from the passive interaction mode to the active interaction mode, and present at least one interaction control including a target interaction control in a second area of the sensing area.
In some embodiments, the interaction module is further configured to, in response to a trigger operation for the target interaction control, render the target interaction control in a third style; displaying a target interaction object associated with the target interaction control in a target area of the virtual scene; and controlling the virtual object to execute the interactive operation with the target interactive object.
In some embodiments, the interaction module is further configured to, in response to a stop instruction for the interaction operation, control the virtual object to stop the interaction operation with the interaction object and cancel the displayed at least one interaction control.
In some embodiments, the presentation module is further configured to obtain interaction data of the virtual object and scene data of the virtual scene; based on the interaction data and the scene data, call the neural network model to predict the likelihood of interaction between the virtual object and the interactive object, obtaining a prediction result; and when the prediction result indicates that the likelihood of the interaction between the virtual object and the interactive object reaches a likelihood threshold, present at least one interaction control including the target interaction control in the sensing area of the virtual joystick.
Embodiments of the present application provide a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and executes the computer instructions, so that the computer device executes the object interaction method in the virtual scene described in the embodiment of the present application.
An embodiment of the present application provides a computer-readable storage medium storing executable instructions that, when executed by a processor, cause the processor to perform the object interaction method in a virtual scene provided by the embodiments of the present application, for example, the object interaction method in a virtual scene shown in fig. 3.
In some embodiments, the computer-readable storage medium may be a memory such as an FRAM, a ROM, a PROM, an EPROM, an EEPROM, a flash memory, a magnetic surface memory, an optical disc, or a CD-ROM, or may be any device comprising one of or any combination of the above memories.
In some embodiments, executable instructions may be written in any form of programming language (including compiled or interpreted languages), in the form of programs, software modules, scripts or code, and may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
By way of example, executable instructions may, but need not, correspond to files in a file system, and may be stored in a portion of a file that holds other programs or data, for example, in one or more scripts in a HyperText Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, subprograms, or portions of code).
By way of example, executable instructions may be deployed to be executed on one computing device or on multiple computing devices located at one site or distributed across multiple sites and interconnected by a communication network.
In summary, in the embodiments of the present application, when the virtual object needs to perform an interaction operation, the interaction controls are presented within the sensing area of the virtual joystick. By integrating the interaction controls with the virtual joystick in this way, the interaction operation with an interaction object can be triggered while the virtual joystick is operated with one hand: a player can select an interaction control directly in the course of operating the virtual joystick. At the same time, the screen space occupied by a large number of interaction controls in the virtual scene interface is reduced, freeing space for controls serving other functions.
The above description is only an example of the present application, and is not intended to limit the scope of the present application. Any modification, equivalent replacement, and improvement made within the spirit and scope of the present application are included in the protection scope of the present application.

Claims (20)

1. A method of object interaction in a virtual scene, the method comprising:
presenting an interface of a virtual scene comprising a virtual object, and presenting, in the interface, a virtual joystick for controlling displacement of the virtual object;
presenting, in a sensing area of the virtual joystick, at least one interaction control comprising a target interaction control,
wherein each interaction control is associated with an interaction object and is used for triggering the virtual object to interact with the associated interaction object; and
in response to a trigger operation for the target interaction control, controlling the virtual object to perform an interaction operation with a target interaction object associated with the target interaction control.
2. The method of claim 1, wherein, prior to the presenting at least one interaction control comprising a target interaction control in the sensing area of the virtual joystick, the method further comprises:
presenting, in a central area of the sensing area of the virtual joystick, a state icon in a first style corresponding to an activated state, the state icon being used for indicating a state of the virtual joystick; and
in response to a first trigger operation for the state icon, controlling the virtual joystick to switch from the activated state to an interaction state, and presenting the state icon in a second style corresponding to the interaction state;
wherein the controlling the virtual object to perform the interaction operation with the target interaction object associated with the target interaction control in response to the trigger operation for the target interaction control comprises:
in response to the trigger operation for the target interaction control while the virtual joystick is in the interaction state, controlling the virtual object to perform the interaction operation with the target interaction object associated with the target interaction control.
3. The method of claim 2, wherein, prior to the controlling the virtual object to perform the interaction operation with the interaction object, the method further comprises:
receiving, during execution of the first trigger operation, a release instruction for the first trigger operation; and
in response to the release instruction, switching the virtual joystick from the interaction state back to the activated state, and presenting the state icon in the first style.
4. The method of claim 2, further comprising:
when the first trigger operation is a press operation, receiving a slide operation during execution of the press operation, and acquiring a slide position corresponding to the slide operation; and
when the slide position falls within a display area of the target interaction control, receiving the trigger operation for the target interaction control.
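Claim 4 amounts to a hit test of the slide position against the control's display area. A minimal sketch follows, assuming a rectangular display area purely for illustration; the Rect type and function name are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Rect:
    x: float  # left edge
    y: float  # top edge
    w: float  # width
    h: float  # height

def slide_hits_control(slide_x: float, slide_y: float, area: Rect) -> bool:
    """True when the slide position falls inside the control's display
    area, i.e. the press-and-slide counts as a trigger operation."""
    return (area.x <= slide_x <= area.x + area.w and
            area.y <= slide_y <= area.y + area.h)

control_area = Rect(x=100, y=200, w=64, h=64)
print(slide_hits_control(120, 230, control_area))  # True: trigger received
print(slide_hits_control(10, 10, control_area))    # False: no trigger
```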
5. The method of claim 2, wherein, prior to the presenting the state icon in the first style corresponding to the activated state, the method further comprises:
presenting the virtual joystick in a dormant state, wherein, when the virtual joystick is in the dormant state, a peripheral area surrounding the central area within the sensing area is hidden; and
in response to a trigger operation for the peripheral area, controlling the virtual joystick to enter the activated state.
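Claims 2, 3 and 5 together describe a small state machine for the joystick (dormant, activated, interaction). The sketch below is one possible reading of those transitions; the event names are hypothetical.

```python
from enum import Enum

class JoystickState(Enum):
    DORMANT = "dormant"          # peripheral area hidden (claim 5)
    ACTIVATED = "activated"      # state icon shown in the first style
    INTERACTION = "interaction"  # state icon shown in the second style

# (state, event) -> next state; event names are placeholders.
TRANSITIONS = {
    (JoystickState.DORMANT, "tap_peripheral"): JoystickState.ACTIVATED,
    (JoystickState.ACTIVATED, "press_state_icon"): JoystickState.INTERACTION,
    (JoystickState.INTERACTION, "release"): JoystickState.ACTIVATED,
}

def step(state: JoystickState, event: str) -> JoystickState:
    """Apply an event; unknown events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

s = JoystickState.DORMANT
for event in ("tap_peripheral", "press_state_icon", "release"):
    s = step(s, event)
    print(event, "->", s)
```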
6. The method of claim 1, wherein the presenting at least one interaction control comprising a target interaction control in the sensing area of the virtual joystick comprises:
when the sensing area is a circular area and the number of the interaction controls is at least two, presenting the at least two interaction controls comprising the target interaction control uniformly distributed along an outer edge of the circular area.
7. The method of claim 6, wherein the presenting the at least two interaction controls comprising the target interaction control uniformly distributed along the outer edge of the circular area comprises:
presenting a control wheel at the outer edge of the circular area, the control wheel comprising at least two control display positions; and
presenting the at least two interaction controls comprising the target interaction control at the at least two control display positions of the control wheel.
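The uniform distribution in claims 6 and 7 reduces to evenly spacing positions along a circle. The following sketch shows one straightforward way to compute such control display positions; the function name and the top-of-wheel starting angle are assumptions, not details from the application.

```python
import math

def wheel_positions(center_x: float, center_y: float, radius: float,
                    count: int, start_deg: float = -90.0):
    """Evenly space `count` control display positions along the outer
    edge of a circular sensing area, starting at the top of the wheel."""
    positions = []
    for i in range(count):
        theta = math.radians(start_deg + i * 360.0 / count)
        positions.append((center_x + radius * math.cos(theta),
                          center_y + radius * math.sin(theta)))
    return positions

# Four controls around a joystick centred at (0, 0) with radius 80:
# top, right, bottom, left.
for x, y in wheel_positions(0, 0, 80, 4):
    print(f"({x:.1f}, {y:.1f})")
```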
8. The method of claim 1, wherein the presenting at least one interaction control comprising a target interaction control in the sensing area of the virtual joystick comprises:
presenting, in the sensing area of the virtual joystick, at least one interaction control in a candidate state, comprising the target interaction control;
wherein the method further comprises:
when a selection operation for the target interaction control in the candidate state is received, treating the selection operation as the trigger operation for the target interaction control.
9. The method of claim 1, wherein the presenting at least one interaction control comprising a target interaction control comprises:
detecting the position of the virtual object in the virtual scene; and
when it is detected that the virtual object is within the interaction range of at least one interaction object, presenting at least one interaction control comprising a target interaction control,
wherein the at least one interaction object comprises the target interaction object.
10. The method of claim 9, wherein the presenting at least one interaction control comprising a target interaction control when it is detected that the virtual object is within the interaction range of at least one interaction object comprises:
when it is detected that the virtual object is within the interaction range of only one interaction object, taking that interaction object as the target interaction object, and presenting the target interaction control associated with the target interaction object in a central area of the sensing area of the virtual joystick; and
when it is detected that the virtual object is within the interaction range of at least two interaction objects, taking the at least two interaction objects as candidate interaction objects, acquiring the distance between the virtual object and each candidate interaction object, and
determining the candidate interaction object at the smallest distance from the virtual object as the target interaction object, and presenting the target interaction control associated with the target interaction object in the central area.
11. The method of claim 10, wherein the determining the candidate interaction object at the smallest distance from the virtual object as the target interaction object comprises:
when at least two candidate interaction objects are at the smallest distance from the virtual object, treating each such candidate interaction object as an interaction object to be screened;
for each interaction object to be screened, obtaining the angle whose vertex is the position of the virtual object, whose one side is the ray along the orientation of the virtual object, and whose other side is the line connecting the virtual object and the interaction object to be screened; and
determining the interaction object to be screened corresponding to the smallest of these angles as the target interaction object.
12. The method of claim 10 or 11, wherein, after the presenting the target interaction control when the virtual object is within the interaction range of at least two interaction objects, the method further comprises:
receiving a switching operation for the target interaction control; and
in response to the switching operation, switching the presented target interaction control to another interaction control associated with another interaction object,
wherein the other interaction object is an interaction object, other than the target interaction object, within the interaction range of the virtual object; and
wherein the controlling the virtual object to perform the interaction operation with the target interaction object associated with the target interaction control in response to the trigger operation for the target interaction control comprises:
in response to a trigger operation for the other interaction control, controlling the virtual object to perform an interaction operation with the other interaction object.
13. The method of claim 9, wherein the presenting at least one interaction control comprising a target interaction control when it is detected that the virtual object is within the interaction range of at least one interaction object comprises:
when it is detected that the virtual object is within the interaction range of at least one interaction object, controlling the virtual joystick to be in a passive interaction mode, and presenting at least one interaction control comprising a target interaction control in a first area of the sensing area;
wherein the method further comprises:
in response to an open instruction for an active interaction mode of the virtual joystick, switching the virtual joystick from the passive interaction mode to the active interaction mode, and presenting at least one interaction control comprising a target interaction control in a second area of the sensing area.
14. The method of claim 1, wherein the controlling the virtual object to perform the interaction operation with the target interaction object associated with the target interaction control in response to the trigger operation for the target interaction control comprises:
presenting the target interaction control in a third style in response to the trigger operation for the target interaction control;
displaying the target interaction object associated with the target interaction control in a target area of the virtual scene; and
controlling the virtual object to perform the interaction operation with the target interaction object.
15. The method of claim 1, further comprising:
in response to a stop instruction for the interaction operation, controlling the virtual object to stop the interaction operation with the interaction object, and
canceling display of the at least one interaction control.
16. The method of claim 1, wherein the presenting at least one interaction control comprising a target interaction control in the sensing area of the virtual joystick comprises:
acquiring interaction data of the virtual object and scene data of the virtual scene;
calling a neural network model, based on the interaction data and the scene data, to predict the likelihood that the virtual object will interact with an interaction object, to obtain a prediction result; and
when the prediction result indicates that the likelihood of interaction between the virtual object and the interaction object reaches a likelihood threshold, presenting at least one interaction control comprising a target interaction control in the sensing area of the virtual joystick.
17. An apparatus for object interaction in a virtual scene, the apparatus comprising:
the system comprises a presentation module, a control module and a display module, wherein the presentation module is used for presenting an interface of a virtual scene comprising a virtual object and presenting a virtual rocker for controlling the displacement of the virtual object in the interface;
the presentation module is further used for presenting at least one interaction control comprising a target interaction control in a sensing area of the virtual rocker, wherein the interaction control is associated with an interaction object and is used for triggering the virtual object to interact with the interaction object;
and the interaction module is used for responding to the triggering operation aiming at the target interaction control and controlling the virtual object to execute the interaction operation of the target interaction object associated with the target interaction control.
18. An electronic device, characterized in that the electronic device comprises:
a memory for storing executable instructions;
a processor for implementing the method of object interaction in a virtual scene of any one of claims 1 to 16 when executing executable instructions stored in the memory.
19. A computer-readable storage medium storing executable instructions, wherein the executable instructions, when executed by a processor, implement the method of object interaction in a virtual scene of any of claims 1 to 16.
20. A computer program product comprising a computer program or instructions, characterized in that the computer program or instructions, when executed by a processor, implement the method of object interaction in a virtual scene of any of claims 1 to 16.
CN202111666192.6A 2021-12-01 2021-12-31 Object interaction method, device, equipment and storage medium in virtual scene Pending CN114296597A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111453833X 2021-12-01
CN202111453833 2021-12-01

Publications (1)

Publication Number Publication Date
CN114296597A (en) 2022-04-08

Family

ID=80972610

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111666192.6A Pending CN114296597A (en) 2021-12-01 2021-12-31 Object interaction method, device, equipment and storage medium in virtual scene

Country Status (1)

Country Link
CN (1) CN114296597A (en)

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100013653A1 (en) * 2008-07-15 2010-01-21 Immersion Corporation Systems And Methods For Mapping Message Contents To Virtual Physical Properties For Vibrotactile Messaging
CN107126698A (en) * 2017-04-24 2017-09-05 网易(杭州)网络有限公司 Control method, device, electronic equipment and the computer-readable recording medium of game virtual object
CN108404408A (en) * 2018-02-01 2018-08-17 网易(杭州)网络有限公司 Information processing method, device, storage medium and electronic equipment
CN108434732A (en) * 2018-03-23 2018-08-24 网易(杭州)网络有限公司 Virtual object control method and device, storage medium, electronic equipment
CN109224442A (en) * 2018-09-03 2019-01-18 腾讯科技(深圳)有限公司 Data processing method, device and the storage medium of virtual scene
CN109445662A (en) * 2018-11-08 2019-03-08 腾讯科技(深圳)有限公司 Method of controlling operation thereof, device, electronic equipment and the storage medium of virtual objects
CN110215691A (en) * 2019-07-17 2019-09-10 网易(杭州)网络有限公司 The control method for movement and device of virtual role in a kind of game
CN110270086A (en) * 2019-07-17 2019-09-24 网易(杭州)网络有限公司 The control method for movement and device of virtual role in a kind of game
CN110559658A (en) * 2019-09-04 2019-12-13 腾讯科技(深圳)有限公司 Information interaction method, device, terminal and storage medium
CN111330272A (en) * 2020-02-14 2020-06-26 腾讯科技(深圳)有限公司 Virtual object control method, device, terminal and storage medium
CN112121431A (en) * 2020-09-29 2020-12-25 腾讯科技(深圳)有限公司 Interactive processing method and device of virtual prop, electronic equipment and storage medium
CN112604305A (en) * 2020-12-17 2021-04-06 腾讯科技(深圳)有限公司 Virtual object control method, device, terminal and storage medium
CN112755516A (en) * 2021-01-26 2021-05-07 网易(杭州)网络有限公司 Interaction control method and device, electronic equipment and storage medium
CN113398572A (en) * 2021-05-26 2021-09-17 腾讯科技(深圳)有限公司 Virtual item switching method, skill switching method and virtual object switching method
CN113332721A (en) * 2021-06-02 2021-09-03 网易(杭州)网络有限公司 Game control method and device, computer equipment and storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024027292A1 (en) * 2022-08-01 2024-02-08 腾讯科技(深圳)有限公司 Interaction method and apparatus in virtual scene, electronic device, computer-readable storage medium, and computer program product
WO2024082883A1 (en) * 2022-10-18 2024-04-25 腾讯科技(深圳)有限公司 Virtual object interaction method and apparatus, device, and computer-readable storage medium

Similar Documents

Publication Publication Date Title
CN112691377B (en) Control method and device of virtual role, electronic equipment and storage medium
WO2022057529A1 (en) Information prompting method and apparatus in virtual scene, electronic device, and storage medium
US20230347243A1 (en) Task guidance method and apparatus in virtual scene, electronic device, storage medium, and program product
CN112416196B (en) Virtual object control method, device, equipment and computer readable storage medium
CN112076473B (en) Control method and device of virtual prop, electronic equipment and storage medium
CN112402960B (en) State switching method, device, equipment and storage medium in virtual scene
US20230241501A1 (en) Display method and apparatus for virtual prop, electronic device and storage medium
CN112402959A (en) Virtual object control method, device, equipment and computer readable storage medium
CN112402963B (en) Information sending method, device, equipment and storage medium in virtual scene
CN114296597A (en) Object interaction method, device, equipment and storage medium in virtual scene
CN112138385B (en) Virtual shooting prop aiming method and device, electronic equipment and storage medium
CN113633964A (en) Virtual skill control method, device, equipment and computer readable storage medium
CN113559510A (en) Virtual skill control method, device, equipment and computer readable storage medium
CN114404969A (en) Virtual article processing method and device, electronic equipment and storage medium
CN114217708B (en) Control method, device, equipment and storage medium for opening operation in virtual scene
CN114344906A (en) Method, device, equipment and storage medium for controlling partner object in virtual scene
CN113018862A (en) Virtual object control method and device, electronic equipment and storage medium
CN117635891A (en) Model display method, device, equipment and storage medium in virtual scene
CN114210051A (en) Carrier control method, device, equipment and storage medium in virtual scene
CN116688502A (en) Position marking method, device, equipment and storage medium in virtual scene
CN114425159A (en) Motion processing method, device and equipment in virtual scene and storage medium
CN117771649A (en) Method, device, equipment and storage medium for controlling virtual character
WO2023221716A1 (en) Mark processing method and apparatus in virtual scenario, and device, medium and product
WO2024060924A1 (en) Interaction processing method and apparatus for virtual scene, and electronic device and storage medium
CN114210057B (en) Method, device, equipment, medium and program product for picking up and processing virtual prop

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination