CN116764207A - Interactive processing method, device, equipment and storage medium in virtual scene - Google Patents

Interactive processing method, device, equipment and storage medium in virtual scene

Info

Publication number
CN116764207A
Authority
CN
China
Prior art keywords
virtual object
virtual
picture
displaying
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210764776.5A
Other languages
Chinese (zh)
Inventor
侯杰
陆人吉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Chengdu Co Ltd
Original Assignee
Tencent Technology Chengdu Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Chengdu Co Ltd filed Critical Tencent Technology Chengdu Co Ltd
Priority to CN202210764776.5A
Publication of CN116764207A
Legal status: Pending

Classifications

    • A HUMAN NECESSITIES
      • A63 SPORTS; GAMES; AMUSEMENTS
        • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
          • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
            • A63F13/40 Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
              • A63F13/42 Processing input control signals by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
            • A63F13/50 Controlling the output signals based on the game progress
              • A63F13/52 Controlling the output signals involving aspects of the displayed game scene
                • A63F13/525 Changing parameters of virtual cameras
              • A63F13/53 Controlling the output signals involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
            • A63F13/80 Special adaptations for executing a specific game genre or game mode
              • A63F13/837 Shooting of targets
            • A63F13/85 Providing additional services to players
              • A63F13/87 Communicating with other players during game play, e.g. by e-mail or chat
          • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
            • A63F2300/80 Features of games specially adapted for executing a specific type of game
              • A63F2300/8076 Shooting

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application provides an interactive processing method, device and equipment in a virtual scene, and a computer readable storage medium. The method comprises the following steps: displaying a first picture of the virtual scene corresponding to the view angle of a first virtual object; in response to a view angle switching instruction corresponding to a target camera over which the first virtual object has control, switching from displaying the first picture to displaying a second picture of the virtual scene corresponding to the view angle of the target camera; and, in response to a marking operation for a third virtual object in the second picture, displaying position information of the third virtual object in a third picture of the virtual scene corresponding to the view angle of a second virtual object. The first virtual object and the second virtual object are in a cooperative relationship, and the first virtual object and the third virtual object are in a hostile relationship. The application can improve man-machine interaction efficiency.

Description

Interactive processing method, device, equipment and storage medium in virtual scene
Technical Field
The present application relates to the field of man-machine interaction, and in particular, to a method, an apparatus, a device, a computer readable storage medium and a computer program product for processing interaction in a virtual scene.
Background
Man-machine interaction technology for virtual scenes, based on graphics processing hardware, can realize diversified interactions between virtual objects controlled by users or by artificial intelligence according to actual application requirements, and has wide practical value. For example, virtual scene applications such as games can simulate a real battle process between virtual objects. Taking a shooting game as an example, virtual objects belonging to different groups fight each other in the game, such as a first virtual object and a third virtual object that belong to different groups and are in a hostile relationship with each other.
In the related art, when the first virtual object detects a third virtual object (an enemy) with which it is in a hostile relationship, the position information of the third virtual object needs to be shared with a second virtual object (a teammate) with which it is in a cooperative relationship. This is mostly done by sending voice or text, but that approach requires recording the voice or editing the text before sending, which is cumbersome and takes time; by the time the second virtual object receives the position information, the third virtual object may already have left its original position. The enemy position information shared in this way therefore lacks real-time validity, so that the group to which the first virtual object belongs has to perform many interaction operations to achieve a given interaction purpose (such as defeating the group to which the third virtual object belongs), resulting in low man-machine interaction efficiency.
Disclosure of Invention
The embodiment of the application provides an interaction processing method, device and equipment in a virtual scene, as well as a computer readable storage medium and a computer program product, which can improve man-machine interaction efficiency.
The technical scheme of the embodiment of the application is realized as follows:
the embodiment of the application provides an interaction processing method in a virtual scene, which comprises the following steps:
displaying a first picture of the virtual scene corresponding to a first virtual object view angle;
in response to a view angle switching instruction corresponding to a target camera over which the first virtual object has control, switching from displaying the first picture to displaying a second picture of the virtual scene corresponding to the view angle of the target camera;
in response to a marking operation for a third virtual object in the second picture, displaying position information of the third virtual object in a third picture of the virtual scene corresponding to a second virtual object view angle;
wherein the first virtual object and the second virtual object are in a cooperative relationship, and the first virtual object and the third virtual object are in a hostile relationship.
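Purely as an aid to understanding, the three steps above can be read as a minimal client-side flow. The Python sketch below is illustrative only; all names (VirtualObject, Client, on_mark_operation, etc.) are hypothetical and do not appear in the application.

```python
from dataclasses import dataclass

@dataclass
class VirtualObject:
    name: str
    group: str                      # objects sharing a group cooperate
    position: tuple = (0.0, 0.0)

class Client:
    """Terminal of the first virtual object (all names hypothetical)."""
    def __init__(self, first_object: VirtualObject):
        self.first_object = first_object
        self.current_view = "first picture"   # player's own view angle

    def on_view_switch_instruction(self):
        # Step 2: switch from the first picture to the second picture,
        # i.e. the view angle of the controlled target camera.
        self.current_view = "second picture (target camera)"

    def on_mark_operation(self, enemy: VirtualObject, teammates: list):
        # Step 3: marking an enemy in the second picture makes its position
        # visible in each teammate's third picture in real time.
        for mate in teammates:
            print(f"{mate.name}'s third picture: {enemy.name} at {enemy.position}")

first = VirtualObject("A", "group-1")
mate = VirtualObject("B", "group-1")
enemy = VirtualObject("X", "group-2", position=(12.0, 7.5))

client = Client(first)
client.on_view_switch_instruction()
client.on_mark_operation(enemy, [mate])
```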
The embodiment of the application provides an interaction processing device in a virtual scene, which comprises:
a first display module, configured to display a first picture of the virtual scene corresponding to the view angle of the first virtual object;
a picture switching module, configured to switch, in response to a view angle switching instruction corresponding to a target camera over which the first virtual object has control, from displaying the first picture to displaying a second picture of the virtual scene corresponding to the view angle of the target camera;
a second display module, configured to display, in response to a marking operation for a third virtual object in the second picture, position information of the third virtual object in a third picture of the virtual scene corresponding to the view angle of a second virtual object;
wherein the first virtual object and the second virtual object are in a cooperative relationship, and the first virtual object and the third virtual object are in a hostile relationship.
In the above scheme, the device further includes a first instruction receiving module, configured, before the switch from displaying the first picture to displaying the second picture corresponding to the view angle of the target camera, to display, at a target position in the first picture, the target camera placed by the first virtual object or the second virtual object; and to receive, in response to a triggering operation of a first switching key for switching to the camera view angle, a view angle switching instruction corresponding to the target camera.
In the above scheme, the first instruction receiving module is further configured to set, in the first picture, the interaction mode of the virtual scene to a camera placement mode in response to a mode setting instruction for the interaction mode of the virtual scene; and, in the camera placement mode, in response to a selection operation for a target position in the first picture, to control the first virtual object to take the target camera out of a virtual backpack and place it at the target position, so that the target camera is displayed at the target position.
In the above scheme, the device further includes a position prediction module, configured to acquire scene data of the first virtual object in the virtual scene, the scene data comprising at least one of the following: environment data of the location where the first virtual object is situated, distances between the first virtual object and other virtual objects, and interaction data between the first virtual object and the other virtual objects; and, according to the scene data, to call a machine learning model to predict the position at which the target camera is to be placed, obtain the target position, and highlight the target position.
In the above scheme, the device further includes a second instruction receiving module, configured, before the switch from displaying the first picture to displaying the second picture corresponding to the view angle of the target camera, to: when the number of cameras over which the first virtual object has control is at least two and each camera corresponds to one second switching key, receive, in response to a triggering operation of the second switching key corresponding to a first camera of the at least two cameras, a view angle switching instruction corresponding to the first camera; in response to the view angle switching instruction corresponding to the first camera, switch from displaying the first picture to displaying a fourth picture of the virtual scene corresponding to the view angle of the first camera; and, based on the fourth picture, receive, in response to a triggering operation of the second switching key corresponding to the target camera of the at least two cameras, a view angle switching instruction corresponding to the target camera. Correspondingly, the picture switching module is further configured to switch, in response to the view angle switching instruction corresponding to the target camera, from displaying the fourth picture to displaying the second picture of the virtual scene corresponding to the view angle of the target camera.
In the above scheme, the device further includes a third instruction receiving module, configured, before the switch from displaying the first picture to displaying the second picture corresponding to the view angle of the target camera, to display a camera selection interface when the number of cameras over which the first virtual object has control is at least two, and to display the at least two cameras in the camera selection interface; and to receive, in response to a selection operation for a target camera among the at least two cameras, a view angle switching instruction corresponding to the target camera.
In the above scheme, the device further includes a fourth instruction receiving module, configured, before the switch from displaying the first picture to displaying the second picture corresponding to the view angle of the target camera, to: when the number of cameras over which the first virtual object has control is at least two, display the target camera in a first display style in the map corresponding to the virtual scene, and display the cameras other than the target camera in a second display style; wherein the first display style is different from the second display style, and the first display style indicates that the selection priority of the target camera is higher than that of the other cameras; and to receive, in response to a selection operation for the target camera, a view angle switching instruction corresponding to the target camera.
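As an illustration only, the style selection on the map described above reduces to marking the recommended target camera with the first display style and all other controlled cameras with the second. A minimal sketch, with hypothetical names and style labels:

```python
def map_camera_styles(cameras, target_camera):
    """First display style marks the higher-priority target camera on the
    map; the remaining controlled cameras get the second display style."""
    return {cam: ("first style" if cam == target_camera else "second style")
            for cam in cameras}

print(map_camera_styles(["camera-1", "camera-2", "camera-3"], "camera-2"))
# {'camera-1': 'second style', 'camera-2': 'first style', 'camera-3': 'second style'}
```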
In the above scheme, the device further includes a camera display module, configured, after the switch from displaying the first picture to displaying the second picture of the virtual scene corresponding to the view angle of the target camera, to display the target camera in a third display style in the map of the virtual scene when the number of cameras over which the first virtual object has control is at least two; the third display style indicates that the currently displayed second picture is the picture of the virtual scene corresponding to the view angle of the target camera.
In the above scheme, the device further includes an object marking module, configured, before the position information of the third virtual object is displayed in the third picture of the virtual scene corresponding to the view angle of the second virtual object, to receive, when the third virtual object exists in the second picture, a marking instruction for marking the third virtual object; and, in response to the marking instruction, to control the first virtual object to mark the third virtual object.
In the above scheme, the object marking module is further configured to receive a marking instruction for the third virtual object in response to a triggering operation on the third virtual object; or to display, in the second picture, a marking control for marking the third virtual object, and receive a marking instruction for the third virtual object in response to a triggering operation on the marking control.
In the above scheme, the device further includes a prompt sending module, configured, when the number of second virtual objects is at least two, to display the object identifier of a target virtual object among the at least two second virtual objects in a fourth display style in the map of the virtual scene; the fourth display style indicates that the object identifier of the target virtual object is in an operable state, the distance between the target virtual object and the third virtual object being below a distance threshold; and, upon receiving a triggering operation on the object identifier, to send position prompt information of the third virtual object to the terminal corresponding to the target virtual object.
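As an illustration only, the distance-threshold test behind the operable (fourth display style) identifier can be sketched as follows; the threshold value and all names are assumptions, since the application does not fix them:

```python
import math

DISTANCE_THRESHOLD = 50.0   # assumed value; the application does not fix one

def operable_identifiers(second_objects, enemy_position):
    """Return the names of teammates whose map identifier should use the
    fourth display style (operable), i.e. those close to the marked enemy."""
    return [obj["name"] for obj in second_objects
            if math.dist(obj["position"], enemy_position) < DISTANCE_THRESHOLD]

teammates = [{"name": "B", "position": (10.0, 10.0)},
             {"name": "C", "position": (400.0, 0.0)}]
print(operable_identifiers(teammates, enemy_position=(20.0, 15.0)))  # ['B']
```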
In the above scheme, the second display module is further configured to display, in the third picture of the virtual scene corresponding to the view angle of the second virtual object, contour information of the marked third virtual object in a fifth display style, so that the position information of the third virtual object can be determined when an obstacle blocking the field of view exists between the second virtual object and the third virtual object; or to display the position information of the marked third virtual object in the map of the virtual scene.
In the above scheme, when the third virtual object is blocked by an obstacle, the second display module is further configured to display, in perspective, the blocked third virtual object in the third picture of the virtual scene corresponding to the view angle of the second virtual object, so that the position information of the third virtual object is in a visible state; the second virtual object and the third virtual object are located on the two sides of the obstacle.
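As an illustration only, the choice between normal and perspective (see-through) rendering of a marked enemy amounts to an occlusion test along the line of sight. A minimal sketch, with a deliberately simplified stand-in for a real ray-obstacle intersection:

```python
def enemy_display_style(viewer_pos, enemy_pos, obstacles):
    """Pick how a marked enemy is drawn in the teammate's third picture:
    'perspective' (see-through) when an obstacle blocks the view,
    'normal' otherwise."""
    if any(blocks_line(viewer_pos, enemy_pos, box) for box in obstacles):
        return "perspective"
    return "normal"

def blocks_line(a, b, box):
    # Simplified stand-in for a ray-obstacle intersection test: checks
    # whether the segment midpoint lies inside an axis-aligned box.
    mid = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
    (x0, y0), (x1, y1) = box
    return x0 <= mid[0] <= x1 and y0 <= mid[1] <= y1

wall = ((4.0, -1.0), (6.0, 1.0))
print(enemy_display_style((0.0, 0.0), (10.0, 0.0), [wall]))  # perspective
```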
In the above scheme, the device further includes a first attack module, configured, when the obstacle is a penetrable obstacle, to control, in response to an aiming instruction for the third virtual object during the interaction between the first virtual object and the third virtual object, the first virtual object to project a first virtual prop along the aiming direction indicated by the aiming instruction; and, when the first virtual prop hits the obstacle, to control the first virtual prop to penetrate the obstacle and act on the third virtual object.
In the above scheme, the device further includes a second attack module, configured, when the obstacle is a destructible obstacle, to control, in response to a damage instruction for the obstacle triggered based on the position of the third virtual object during the interaction between the first virtual object and the third virtual object, the first virtual object to destroy a partial area of the obstacle through a second virtual prop; to control, in response to an aiming instruction for the third virtual object, the first virtual object to project a third virtual prop along the aiming direction indicated by the aiming instruction; and, when the third virtual prop reaches the partial area, to control the third virtual prop to act on the third virtual object through the partial area.
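As an illustration only, the two obstacle behaviors above can be summarized in one hypothetical damage-resolution routine: a penetrable obstacle lets the prop pass (here with an assumed attenuation), while a destructible obstacle must first have a partial area destroyed. A sketch, not the claimed implementation:

```python
def resolve_shot(prop_damage, obstacle, enemy_hp, attenuation=0.7):
    """Hypothetical resolution of a shot fired at a marked enemy behind an
    obstacle. The obstacle kinds and the attenuation factor are assumptions."""
    if obstacle["kind"] == "penetrable":
        # First attack module: the prop penetrates and still takes effect,
        # here with an assumed damage attenuation.
        return enemy_hp - prop_damage * attenuation
    if obstacle["kind"] == "destructible":
        if obstacle["breached"]:
            # A partial area was already destroyed: the prop passes through it.
            return enemy_hp - prop_damage
        # Second attack module, first phase: this shot breaches the obstacle.
        obstacle["breached"] = True
        return enemy_hp
    return enemy_hp   # impenetrable: no effect

wall = {"kind": "destructible", "breached": False}
hp = resolve_shot(30.0, wall, 100.0)   # breaches the wall, hp stays 100.0
print(resolve_shot(30.0, wall, hp))    # 70.0: prop passes through the breach
```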
In the above scheme, the device further includes an updating module, configured, after the position information of the third virtual object is displayed in the third picture of the virtual scene corresponding to the view angle of the second virtual object, to update and display the position information of the third virtual object in the third picture as the third virtual object moves in the virtual scene.
The embodiment of the application provides a terminal device, which comprises:
a memory for storing executable instructions;
and the processor is used for realizing the interactive processing method in the virtual scene when executing the executable instructions stored in the memory.
The embodiment of the application provides a computer readable storage medium which stores executable instructions for realizing the interactive processing method in the virtual scene provided by the embodiment of the application when being executed by a processor.
The embodiment of the application provides a computer program product, which comprises a computer program or instructions, wherein the computer program or instructions realize the interactive processing method in the virtual scene provided by the embodiment of the application when being executed by a processor.
The embodiment of the application has the following beneficial effects:
When the terminal receives a view angle switching instruction for a target camera over which the first virtual object has control, the first picture displayed on the terminal of the first virtual object (showing the view angle of the first virtual object) can be switched to a second picture showing the view angle of the camera. When the first virtual object marks a third virtual object in the second picture, the position information of the third virtual object can be displayed in real time in the third picture of other virtual objects (such as the second virtual object) that are in a cooperative relationship with the first virtual object. The operation is simple, and the position information of the third virtual object displayed in the third picture is available in real time, which facilitates interaction between the virtual objects in the group of the first virtual object and the third virtual object, reduces the number of interaction operations performed to achieve a given interaction purpose (such as defeating the group of the third virtual object), and improves man-machine interaction efficiency.
Drawings
Fig. 1A is an application mode schematic diagram of an interaction processing method in a virtual scene according to an embodiment of the present application;
fig. 1B is an application mode schematic diagram of an interaction processing method in a virtual scene according to an embodiment of the present application;
Fig. 2 is a schematic structural diagram of a terminal device 400 according to an embodiment of the present application;
fig. 3 is a flow chart of an interaction processing method in a virtual scene according to an embodiment of the present application;
fig. 4 is a schematic placement diagram of a camera according to an embodiment of the present application;
FIG. 5 is a schematic diagram of marking a virtual object according to an embodiment of the present application;
FIG. 6 is a schematic diagram of displaying object identifiers according to an embodiment of the present application;
FIG. 7 is a schematic diagram of displaying a virtual object according to an embodiment of the present application;
FIG. 8 is an attack schematic diagram of a virtual object according to an embodiment of the present application;
fig. 9 is a flowchart illustrating an interaction processing method in a virtual scene according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more apparent, the present application will be described in further detail below with reference to the accompanying drawings. The described embodiments should not be construed as limiting the present application, and all other embodiments obtained by those skilled in the art without making any inventive effort fall within the scope of protection of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is to be understood that "some embodiments" can be the same subset or different subsets of all possible embodiments and can be combined with one another without conflict.
In the following description, the terms "first", "second", and the like are merely used to distinguish similar objects and do not denote a particular ordering of objects. It is understood that, where permitted, "first", "second", and the like may be interchanged in a particular order or sequence, so that the embodiments of the present application described herein can be implemented in an order other than that illustrated or described herein.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the application only and is not intended to be limiting of the application.
Before describing the embodiments of the present application in further detail, the terms involved in the embodiments of the present application are explained as follows.
1) Client: an application program running in the terminal for providing various services, such as a video playing client or a game client.
2) "In response to": used to indicate the condition or state upon which a performed operation depends; when the condition or state is satisfied, the one or more performed operations may be executed in real time or with a set delay. Unless otherwise specified, there is no limitation on the order in which multiple operations are performed.
3) Virtual scene: the scene displayed (or provided) when an application program runs on the terminal. The virtual scene may be a simulation of the real world, a semi-simulated and semi-fictional virtual environment, or a purely fictional virtual environment. It may be any one of a two-dimensional virtual scene, a 2.5-dimensional virtual scene or a three-dimensional virtual scene; the dimension of the virtual scene is not limited in the embodiments of the present application. For example, the virtual scene may include sky, land, sea, etc.; the land may include environmental elements such as deserts and cities; and the user may control a virtual object to move in the virtual scene.
4) Virtual objects: the images of various people and objects that can interact in a virtual scene, or movable objects in a virtual scene. A movable object may be a virtual character, a virtual animal, a cartoon character, etc., such as a character or an animal displayed in the virtual scene. A virtual object may be an avatar representing a user in the virtual scene. A virtual scene may include multiple virtual objects, each having its own shape and volume and occupying a part of the space in the virtual scene. The virtual object may be a game character controlled by a user (or player): it is controlled by a real user and moves in the virtual scene in response to the real user's operation of a controller (including a touch screen, voice-operated switch, keyboard, mouse, joystick, etc.); for example, when the real user moves the joystick to the left, the virtual object moves to the left in the virtual scene. The virtual object may also remain stationary in place, jump, and use various functions (such as skills and props).
The embodiment of the application provides an interaction processing method, an interaction processing device, terminal equipment, a computer readable storage medium and a computer program product in a virtual scene, which can improve the man-machine interaction efficiency. In order to facilitate easier understanding of the method for processing interactions in a virtual scene provided by the embodiment of the present application, first, an exemplary implementation scenario of the method for processing interactions in a virtual scene provided by the embodiment of the present application is described. In some embodiments, the virtual scene may be an environment for interaction of game characters, for example, the game characters may fight in the virtual scene, and both parties may interact in the virtual scene by controlling actions of the game characters, so that a user can relax life pressure in the game process.
In an implementation scenario, referring to fig. 1A, fig. 1A is an application mode schematic diagram of an interaction processing method in a virtual scene provided in an embodiment of the present application. It is suitable for application modes in which the computation of data related to the virtual scene 100 can be completed entirely by the graphics processing hardware of the terminal device 400, for example a game in stand-alone/offline mode, where the output of the virtual scene is completed through various types of terminal devices 400 such as smart phones, tablet computers, and virtual reality/augmented reality devices. By way of example, the types of graphics processing hardware include the Central Processing Unit (CPU) and the Graphics Processing Unit (GPU).
When forming the visual perception of the virtual scene 100, the terminal device 400 calculates the data required for display through the graphics computing hardware, completes the loading, parsing and rendering of the display data, and outputs, on the graphics output hardware, video frames capable of forming visual perception of the virtual scene; for example, two-dimensional video frames are presented on the display screen of a smart phone, or video frames realizing a three-dimensional display effect are projected on the lenses of augmented reality/virtual reality glasses. In addition, to enrich the perceived effect, the terminal device 400 may also form one or more of auditory perception, tactile perception, motion perception and gustatory perception by means of different hardware.
As an example, a client 410 (e.g., a stand-alone game application) runs on the terminal device 400, and the virtual scene 100, which includes role playing, is output during the running of the client 410. The virtual scene 100 may be an environment in which game characters interact, for example a plain, a street or a valley where game characters fight. The virtual scene 100 may include some or all of the virtual objects of a first group (such as the first virtual object and the second virtual object) and some or all of the virtual objects of a second group (such as the third virtual object) that is in hostile opposition to the first group; that is, the first virtual object and the second virtual object are each in a hostile relationship with the third virtual object. The virtual scene 100 may further include a target camera 110, placed in the virtual scene by a virtual object of the first group (such as the first virtual object or the second virtual object) for detecting and marking enemies; all virtual objects in the first group share control of the target camera. Here, the terminal device 400 is taken as the terminal corresponding to the first virtual object.
As an example, the terminal device may display a first picture of the virtual scene corresponding to the view angle of the first virtual object; in response to a view angle switching instruction corresponding to a target camera over which the first virtual object has control, switch from displaying the first picture to displaying a second picture of the virtual scene corresponding to the view angle of the target camera; and, in response to a marking operation for a third virtual object in the second picture, display position information of the third virtual object in a third picture of the virtual scene corresponding to the view angle of the second virtual object; wherein the first virtual object and the second virtual object are in a cooperative relationship, and the first virtual object and the third virtual object are in a hostile relationship. In this way, the position information of the third virtual object can be displayed in real time in the third picture via the second picture; the operation is simple, the position information is available in real time, the number of interaction operations required to achieve a given interaction purpose (such as defeating the group of the third virtual object) can be reduced, and man-machine interaction efficiency is improved.
In another implementation scenario, referring to fig. 1B, fig. 1B is an application mode schematic diagram of an interaction processing method in a virtual scene provided in an embodiment of the present application, applied to a terminal device 400 and a server 200, and adapted to an application mode in which the virtual scene computation is completed depending on the computing capability of the server 200 and the virtual scene is output at the terminal device 400. Taking the formation of the visual perception of the virtual scene 100 as an example, the server 200 calculates the display data related to the virtual scene (such as scene data) and sends it to the terminal device 400 through the network 300; the terminal device 400 relies on the graphics computing hardware to complete the loading, parsing and rendering of the calculated display data, and relies on the graphics output hardware to output the virtual scene to form visual perception; for example, two-dimensional video frames may be presented on the display screen of a smart phone, or video frames realizing a three-dimensional display effect may be projected on the lenses of augmented reality/virtual reality glasses. As for perception in other forms, it is understood that auditory perception may be formed by means of the corresponding hardware output of the terminal device 400, for example using a speaker, and tactile perception may be formed using a vibrator, etc.
As an example, a client 410 (e.g., a network-version game application) runs on the terminal device 400, and the virtual scene 100, which includes role playing, is output during the running of the client 410. The virtual scene 100 may be an environment in which game characters interact, for example a plain, a street or a valley where game characters fight. The virtual scene 100 may include some or all of the virtual objects of a first group (such as the first virtual object and the second virtual object) and some or all of the virtual objects of a second group (such as the third virtual object) that is in hostile opposition to the first group; that is, the first virtual object and the second virtual object are each in a hostile relationship with the third virtual object. The virtual scene 100 may further include a target camera 110, placed in the virtual scene by a virtual object of the first group (such as the first virtual object or the second virtual object) for detecting and marking enemies; all virtual objects in the first group share control of the target camera. Here, the terminal device 400 is taken as the terminal corresponding to the first virtual object.
As an example, the terminal device may display a first picture of the virtual scene corresponding to the view angle of the first virtual object; in response to a view angle switching instruction corresponding to a target camera over which the first virtual object has control, switch from displaying the first picture to displaying a second picture of the virtual scene corresponding to the view angle of the target camera; and, in response to a marking operation for a third virtual object in the second picture, display position information of the third virtual object in a third picture of the virtual scene corresponding to the view angle of the second virtual object; wherein the first virtual object and the second virtual object are in a cooperative relationship, and the first virtual object and the third virtual object are in a hostile relationship. In this way, the position information of the third virtual object can be displayed in real time in the third picture via the second picture; the operation is simple, the position information is available in real time, interaction between the virtual objects in the group of the first virtual object and the third virtual object is facilitated, the number of interaction operations required to achieve a given interaction purpose (such as defeating the group of the third virtual object) can be reduced, and man-machine interaction efficiency is improved.
In some embodiments, the terminal device 400 may implement the interaction processing method in the virtual scene provided by the embodiments of the present application by running a computer program. For example, the computer program may be a native program or a software module in an operating system; a native application (APP), i.e. a program that must be installed in the operating system to run, such as a shooting game APP (i.e. the client 410 described above); an applet, i.e. a program that only needs to be downloaded into a browser environment to run; or a game applet that can be embedded in any APP. In general, the computer program described above may be an application, a module or a plug-in in any form.
Taking the computer program as an application program as an example, in actual implementation the terminal device 400 installs and runs an application supporting virtual scenes. The application may be any one of a first-person shooting game (FPS), a third-person shooting game, a virtual reality application, a three-dimensional map program, a military exercise simulation program, or a multi-player gunfight survival game. The user uses the terminal device 400 to operate a virtual object located in the virtual scene to perform activities including, but not limited to, at least one of: adjusting body posture, crawling, walking, running, riding, jumping, driving, picking up, shooting, attacking, throwing, and building virtual buildings. Illustratively, the virtual object may be a virtual character, such as a simulated character or a cartoon character.
In other embodiments, the embodiments of the present application may also be implemented by means of cloud technology, which refers to a hosting technology that unifies a series of resources such as hardware, software and networks in a wide area network or a local area network to realize the calculation, storage, processing and sharing of data.
Cloud technology is a general term for network technology, information technology, integration technology, management platform technology, application technology and the like applied on the basis of the cloud computing business model; it can form a resource pool that is used on demand in a flexible and convenient manner. Cloud computing technology will become an important support, because the background services of technical network systems require a large amount of computing and storage resources.
For example, the server 200 in fig. 1B may be an independent physical server, or may be a server cluster or a distributed system formed by a plurality of physical servers, or may be a cloud server that provides cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs, and basic cloud computing services such as big data and artificial intelligence platforms. The terminal device 400 may be, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, etc. The terminal device 400 and the server 200 may be directly or indirectly connected through wired or wireless communication, which is not limited in the embodiment of the present application.
The structure of the terminal device 400 shown in fig. 1A is explained below. Referring to fig. 2, fig. 2 is a schematic structural diagram of a terminal device 400 according to an embodiment of the present application. The terminal device 400 shown in fig. 2 includes: at least one processor 420, a memory 460, at least one network interface 430, and a user interface 440. The various components in terminal device 400 are coupled together by a bus system 450. It is understood that the bus system 450 is used to implement connection and communication between these components. In addition to a data bus, the bus system 450 includes a power bus, a control bus, and a status signal bus. For clarity of illustration, however, the various buses are all labeled as bus system 450 in fig. 2.
The processor 420 may be an integrated circuit chip with signal processing capabilities, such as a general purpose processor (for example a microprocessor or any conventional processor), a Digital Signal Processor (DSP), another programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
The user interface 440 includes one or more output devices 441 that enable presentation of media content, including one or more speakers and/or one or more visual displays. The user interface 440 also includes one or more input devices 442, including user interface components that facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, other input buttons and controls.
Memory 460 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid state memory, hard drives, optical drives, and the like. Memory 460 optionally includes one or more storage devices physically remote from processor 420.
Memory 460 includes volatile memory or nonvolatile memory, and may also include both volatile and nonvolatile memory. The nonvolatile memory may be a Read-Only Memory (ROM), and the volatile memory may be a Random Access Memory (RAM). The memory 460 described in the embodiments of the present application is intended to comprise any suitable type of memory.
In some embodiments, memory 460 is capable of storing data to support various operations, examples of which include programs, modules and data structures, or subsets or supersets thereof, as exemplified below.
An operating system 461, including system programs for handling various basic system services and performing hardware-related tasks, such as a framework layer, a core library layer and a driver layer, for implementing various basic services and processing hardware-based tasks;
a network communication module 462, for reaching other computing devices via one or more (wired or wireless) network interfaces 430; exemplary network interfaces 430 include Bluetooth, Wireless Fidelity (Wi-Fi), Universal Serial Bus (USB), etc.;
a presentation module 463, for enabling presentation of information (e.g., a user interface for operating peripheral devices and displaying content and information) via one or more output devices 441 (e.g., a display screen, speakers, etc.) associated with the user interface 440;
an input processing module 464, for detecting one or more user inputs or interactions from the one or more input devices 442 and translating the detected inputs or interactions.
In some embodiments, the interaction processing device in the virtual scene provided by the embodiments of the present application may be implemented in software. Fig. 2 shows the interaction processing device 465 in the virtual scene stored in the memory 460, which may be software in the form of a program, a plug-in, or the like, and includes the following software modules: a first display module 4651, a picture switching module 4652 and a second display module 4653. These modules are logical, and may therefore be arbitrarily combined or further split according to the functions implemented. The functions of the respective modules will be described below.
In other embodiments, the interaction processing device in the virtual scene provided by the embodiments of the present application may be implemented in hardware. By way of example, it may be a processor in the form of a hardware decoding processor programmed to perform the interaction processing method in the virtual scene provided by the embodiments of the present application; for example, the processor in the form of a hardware decoding processor may employ one or more Application Specific Integrated Circuits (ASIC), DSPs, Programmable Logic Devices (PLD), Complex Programmable Logic Devices (CPLD), Field-Programmable Gate Arrays (FPGA), or other electronic components.
The following will specifically describe an interaction processing method in a virtual scene provided by the embodiment of the present application with reference to the accompanying drawings. The method for processing interaction in the virtual scene provided by the embodiment of the application can be independently executed by the terminal device 400 in fig. 1A, or can be cooperatively executed by the terminal device 400 and the server 200 in fig. 1B. Next, an example will be described in which the terminal device 400 in fig. 1A alone executes the interactive processing method in the virtual scene provided by the embodiment of the present application. Referring to fig. 3, fig. 3 is a flowchart of an interaction processing method in a virtual scene according to an embodiment of the present application, and will be described with reference to the steps shown in fig. 3.
It should be noted that the method shown in fig. 3 may be executed by various computer programs running on the terminal device 400, and is not limited to the above-mentioned client 410, but may also be the operating system 461, software modules and scripts described above, and therefore the client should not be considered as limiting the embodiments of the present application.
Step 101: the terminal device displays a first picture of the virtual scene corresponding to the view angle of the first virtual object.
The first virtual object is the virtual object corresponding to the currently logged-in account in the virtual scene, and the terminal device is the terminal corresponding to the currently logged-in account, i.e. to the first virtual object. A client supporting the virtual scene is installed on the terminal device (for example, when the virtual scene is a game, the corresponding client may be a shooting game APP). When the user opens the client installed on the terminal device and the terminal device runs the client, a first picture of the virtual scene viewed from the view angle of the first virtual object may be displayed in the client, and virtual scene information may be displayed in the first picture.
Step 102: in response to a view angle switching instruction corresponding to the target camera over which the first virtual object has control, switch from displaying the first picture to displaying a second picture of the virtual scene corresponding to the view angle of the target camera.
The target camera is used by the first virtual object to scout other virtual objects in the virtual scene. In practical applications, a virtual object may retain exclusive control of a camera it places; in that case the target camera is placed by the first virtual object, and only the first virtual object has control of it. Alternatively, different virtual objects in the same group may share control of the cameras placed by the virtual objects in that group; in that case the target camera may be placed by the first virtual object or by a second virtual object in the group of the first virtual object, and the first and second virtual objects share control of the target camera. That is, when any one or more virtual objects in the group of the first virtual object place cameras in the virtual scene for scouting other virtual objects, all virtual objects in the group have control of the placed cameras (such as the target camera). A first virtual object with control of the target camera may enter the view angle of the target camera to observe the virtual scene from that view angle, for example to scout other virtual objects (which may include a second virtual object belonging to the same group as the first virtual object, or a third virtual object belonging to a different group), mark the observed virtual objects, and share the marks within the group of the first virtual object.
It should be noted that, in the embodiments of the present application, the first virtual object and the second virtual object are in a cooperative relationship. "Second virtual object" is a generic term for the virtual objects in the group of the first virtual object other than the first virtual object itself (i.e. objects of login accounts different from the current login account), not a specific virtual object in the virtual scene. For example, suppose the group of the first virtual object includes virtual object A, virtual object B, virtual object C, virtual object D and virtual object E; when virtual object A corresponds to the current login account, virtual objects B, C, D and E are all called second virtual objects. Likewise, the first virtual object and the third virtual object are in a hostile relationship, that is, the third virtual object belongs to a group different from the group of the first virtual object; "third virtual object" is a generic term for the virtual objects in groups other than the group of the first virtual object.
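As an illustration only, these relationship terms reduce to a group comparison. A minimal sketch, assuming every virtual object carries a group identifier:

```python
def relationship(obj_a, obj_b):
    """Cooperative when two virtual objects share a group, hostile otherwise;
    'second' and 'third virtual object' are roles, not specific objects."""
    return "cooperative" if obj_a["group"] == obj_b["group"] else "hostile"

first = {"name": "A", "group": "group-1"}   # current login account's object
second = {"name": "B", "group": "group-1"}  # any teammate
third = {"name": "X", "group": "group-2"}   # any member of an opposing group
assert relationship(first, second) == "cooperative"
assert relationship(first, third) == "hostile"
```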
In some embodiments, before the terminal device switches from displaying the first picture to displaying the second picture of the virtual scene corresponding to the view angle of the target camera, it may receive the view angle switching instruction corresponding to the target camera over which the first virtual object has control in the following way: displaying, at a target position in the first picture, the target camera placed by the first virtual object or the second virtual object; and receiving, in response to a triggering operation of a first switching key for switching to the camera view angle, the view angle switching instruction corresponding to the target camera.
In practical applications, in view of view angle limitations, a camera placed in the virtual scene by the first virtual object or the second virtual object may or may not be displayed in real time at the target position (the placement position of the camera) in the currently displayed first picture. However, to make it convenient for the first virtual object or the second virtual object to learn the situation of the placed cameras, all placed cameras may be displayed in real time in the map of the virtual scene.
Taking the target camera as an example, when the target camera is displayed at the target position in the first picture or in the map of the virtual scene, the player can trigger a view angle switching instruction for the target camera through a first switching key for switching to the camera view angle. The first switching key may be a physical key preset on a real keyboard that can trigger the view angle switching instruction (such as the Z key), or a virtual key or virtual control displayed in the first picture that can trigger the view angle switching instruction. When the terminal device receives the view angle switching instruction for the target camera, it can perform the picture switching operation in response to the instruction.
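As an illustration only, the key dispatch described above can be sketched as follows; the Z key is the example given in the text, and the class and method names are hypothetical:

```python
FIRST_SWITCHING_KEY = "z"   # the Z key is the example given in the text

class PictureSwitcher:
    def __init__(self):
        self.picture = "first picture"

    def on_key(self, key):
        # A press of the first switching key is interpreted as the view
        # angle switching instruction for the target camera.
        if key.lower() == FIRST_SWITCHING_KEY:
            self.picture = "second picture (target camera view angle)"
        return self.picture

print(PictureSwitcher().on_key("Z"))
```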
In some embodiments, the terminal device may display the target camera placed by the first virtual object at the target position in the first picture in the following way: in the first picture, setting the interaction mode of the virtual scene to a camera placement mode in response to a mode setting instruction for the interaction mode of the virtual scene; and, in the camera placement mode, in response to a selection operation for a target position in the first picture, controlling the first virtual object to take the target camera out of the virtual backpack and place it at the target position, so that the target camera is displayed at the target position.
Taking placement by the first virtual object as an example, when the virtual backpack of the first virtual object (used for carrying various types of virtual props) is loaded with the target camera, after the interaction mode of the virtual scene is set to the camera placement mode, the first virtual object can be controlled to place the target camera at any position within reachable range. For example, referring to fig. 4, fig. 4 is a placement schematic diagram of the camera provided in the embodiment of the present application: after the target position 401 is selected, the first virtual object is controlled to take the target camera 402 out of the virtual backpack and place it at the target position 401 for display. In addition, after the target camera 402 is placed, an icon 403 of the placed target camera is updated and displayed in real time in the map of the virtual scene for all virtual objects in the group of the first virtual object to view, so as to prompt players of the placement position of the target camera.
When the virtual backpack of the first virtual object is not loaded with the target camera, the first virtual object can be controlled to acquire the target camera (for example by purchasing or picking it up) and put it into the virtual backpack. For example, before or during a game, at least one carryable prop is displayed in the first picture, the at least one carryable prop including at least the target camera; in response to a selection operation for the target camera among the at least one carryable prop, the target camera is equipped into the virtual backpack of the first virtual object, so that when the target camera later needs to be placed in the virtual scene, it is taken out of the virtual backpack for placement. In practical applications, the selected target camera can also be placed directly at the target position for display (without first equipping it into the virtual backpack and then taking it out), thereby reducing operational complexity and improving man-machine interaction efficiency.
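As an illustration only, the placement flow around fig. 4 can be summarized as: enter the camera placement mode, select a reachable position, take the camera out of the virtual backpack (acquiring one first if absent), place it, and update the map icon. A minimal sketch; the reach limit and all names are assumptions:

```python
import math

def place_camera(player, target_position, scene_map, reach=5.0):
    """Hypothetical placement flow; the reach limit and all names are assumed."""
    if math.dist(player["position"], target_position) > reach:
        return None                                # outside touchable range
    if "target camera" not in player["backpack"]:
        player["backpack"].append("target camera") # acquire: purchase/pick up
    player["backpack"].remove("target camera")     # take out of the backpack
    scene_map["camera_icons"].append(target_position)  # icon 403 on the map
    return {"owner": player["name"], "position": target_position}

player = {"name": "A", "position": (0.0, 0.0), "backpack": ["target camera"]}
scene_map = {"camera_icons": []}
print(place_camera(player, (3.0, 4.0), scene_map))
```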
In some embodiments, the terminal device may predict the target position for placement in the following way: acquiring scene data of the first virtual object in the virtual scene; and, according to the scene data, calling a machine learning model to predict the position at which the target camera is to be placed, obtaining the target position, and highlighting the target position.
The scene data includes at least one of the following: environment data of the location where the first virtual object is situated (such as position information of the location, concealment conditions, etc.), distances between the first virtual object and other virtual objects (such as the distance between the first and second virtual objects, the distance between the first and third virtual objects, or the distance between the second and third virtual objects), and interaction data between the first virtual object and other virtual objects (such as data reflecting the interaction between the first and third virtual objects, e.g. interaction roles, interaction preferences and interaction equipment). When the target camera needs to be placed, the terminal device automatically acquires the relevant scene data and predicts, through a machine learning model of an artificial intelligence algorithm, the target position in the virtual scene at which the target camera should be placed; this makes the prediction result more accurate and better suited to the current scene. After the target position is predicted, it is highlighted to guide the first virtual object to select it for placing the target camera. The highlighting includes at least one of the following display modes: display in a target color, overlay display, and highlighted outline display.
It should be noted that the machine learning model may be a neural network model (such as a convolutional neural network, a deep convolutional neural network, or a fully connected neural network), a decision tree model, a gradient boosting tree, a multi-layer perceptron, a support vector machine, and the like; the type of the machine learning model is not limited in the embodiment of the present application.
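As a rough sketch of this prediction step, the snippet below flattens the scene data listed above into a feature vector and scores candidate positions with an interchangeable model; the feature names and the heuristic fallback are assumptions made for illustration, not a disclosed model.

```python
def build_placement_features(scene: dict) -> list:
    """Flatten the scene data (environment, distances, interaction data)
    into a numeric feature vector for the placement model."""
    return [
        scene["concealment"],            # how hidden the candidate spot is
        scene["dist_to_teammate"],
        scene["dist_to_enemy"],
        scene["interaction_intensity"],  # recent-engagement statistic
    ]

def predict_target_position(candidates: list, scene_by_position: dict,
                            model=None):
    """Score every candidate position and return the one to highlight.

    `model` may be any trained estimator exposing `predict`; when absent,
    a hand-written heuristic stands in for it.
    """
    def score(pos):
        feats = build_placement_features(scene_by_position[pos])
        if model is not None:
            return model.predict([feats])[0]
        # heuristic fallback: prefer concealed spots near enemy activity
        return feats[0] - 0.1 * feats[2]
    return max(candidates, key=score)
```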
In some embodiments, before switching from displaying the first picture to displaying the second picture of the virtual scene corresponding to the target camera view angle, the terminal device may receive the view angle switching instruction corresponding to the target camera over which the first virtual object has control right in the following manner: when the number of cameras over which the first virtual object has control right is at least two and each camera corresponds to one second switching key, receiving, in response to a triggering operation of the second switching key corresponding to a first camera among the at least two cameras, a view angle switching instruction corresponding to the first camera; in response to the view angle switching instruction corresponding to the first camera, switching from displaying the first picture to displaying a fourth picture of the virtual scene corresponding to the first camera view angle; and, based on the fourth picture, receiving, in response to a triggering operation of the second switching key corresponding to the target camera among the at least two cameras, the view angle switching instruction corresponding to the target camera. Correspondingly, the terminal device may, in response to the view angle switching instruction corresponding to the target camera, switch from displaying the first picture to displaying the second picture of the virtual scene corresponding to the target camera view angle in the following manner: in response to the view angle switching instruction corresponding to the target camera, switching from displaying the fourth picture to displaying the second picture of the virtual scene corresponding to the target camera view angle.
Here, the cameras over which the first virtual object has control right are any one or more cameras placed by virtual objects in the group where the first virtual object is located. When there are at least two cameras over which the first virtual object has control right, the at least two cameras may have been placed separately by the first virtual object or the second virtual object, or placed jointly by the first virtual object and the second virtual object. Each camera may correspond to one second switching key, and the view angle switching instruction for the corresponding camera is triggered through the second switching key, so as to switch the currently displayed picture to the picture of the virtual scene under the view angle of the corresponding camera. The second switching key may be a physical key on a physical keyboard preset to trigger the view angle switching instruction (one key per camera, for example, key positions A, B, and C respectively trigger the view angle switching instructions of camera 1, camera 2, and camera 3); it may also be a key position set on the mouse wheel capable of triggering the view angle switching instruction, where sliding the mouse wheel switches to the picture of the previous or next camera until the picture desired by the player is reached; or it may be a virtual key or virtual control displayed in the first picture and capable of triggering the view angle switching instruction, for example, clicking the virtual key "next" switches to displaying the picture of the virtual scene under the view angle of the next camera.
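A minimal sketch of this key- and wheel-driven switching logic follows; the `CameraSwitcher` class, the key bindings, and the "first-person" sentinel value are hypothetical illustrations rather than the actual client code.

```python
class CameraSwitcher:
    """Cycles the displayed picture through the cameras the player controls."""

    def __init__(self, cameras):
        self.cameras = cameras   # cameras the first virtual object controls
        self.index = None        # None => the first-person picture is shown

    def on_hotkey(self, key, bindings):
        """Direct switch: each camera has its own second switching key,
        e.g. bindings = {'A': 0, 'B': 1, 'C': 2}."""
        if key in bindings:
            self.index = bindings[key]
        return self.current()

    def on_wheel(self, delta):
        """Wheel switch: step to the previous (-1) or next (+1) camera."""
        if self.index is None:
            self.index = 0
        else:
            self.index = (self.index + delta) % len(self.cameras)
        return self.current()

    def current(self):
        return self.cameras[self.index] if self.index is not None else "first-person"
```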
In some embodiments, before switching from displaying the first picture to displaying the second picture of the virtual scene corresponding to the target camera view angle, the terminal device may also receive the view angle switching instruction corresponding to the target camera over which the first virtual object has control right in the following manner: when the number of cameras over which the first virtual object has control right is at least two, displaying a camera selection interface, and displaying the at least two cameras in the camera selection interface; and receiving the view angle switching instruction corresponding to the target camera in response to a selection operation on the target camera among the at least two cameras. In this way, the user can manually select which camera view angle to enter, and the picture of the virtual scene under the corresponding camera view angle is displayed, further improving the user's game experience.
For example, the terminal device displays, in the first picture, a switching key for switching pictures. In response to a triggering operation on the switching key, a camera selection interface is displayed in the first picture, and several cameras, such as camera A, camera B, and camera C, are displayed in the camera selection interface for the user to select from. In response to a selection operation on a target camera (such as camera A), a view angle switching instruction corresponding to camera A is received, and in response to this view angle switching instruction, the currently displayed first picture is switched to the picture of the virtual scene corresponding to the view angle of camera A, so that the first virtual object can observe the virtual scene in the picture corresponding to the view angle of camera A.
In some embodiments, before switching from displaying the first picture to displaying the second picture of the virtual scene corresponding to the target camera view angle, the terminal device may also receive the view angle switching instruction corresponding to the target camera over which the first virtual object has control right in the following manner: when the number of cameras over which the first virtual object has control right is at least two, displaying the target camera in a first display style in the map corresponding to the virtual scene, and displaying the other cameras among the at least two cameras in a second display style, where the first display style is different from the second display style and the first display style characterizes that the selection priority of the target camera is higher than the selection priority of the other cameras; and receiving the view angle switching instruction corresponding to the target camera in response to a selection operation on the target camera.
Here, when the number of cameras over which the first virtual object has control right is plural, the camera identifier of each camera may be displayed in the map of the virtual scene, and triggering a camera identifier triggers the view angle switching instruction for the corresponding camera. For example, if the first virtual object has control right over three cameras, namely camera 1, camera 2, and camera 3, the identifiers of camera 1, camera 2, and camera 3 are respectively displayed in the map; when the user triggers the identifier of camera 3, the terminal device receives the view angle switching instruction for camera 3 in response to the triggering operation, so as to switch the currently displayed first picture to the second picture of the virtual scene corresponding to the view angle of camera 3.
In practical application, when the number of cameras over which the first virtual object has control right is plural, each camera may be displayed in the map of the virtual scene in a different display style, for example, in a style adapted to its selection priority. The terminal device may determine the selection priority of each camera in the following manner: acquiring the reference prediction features corresponding to each camera, and calling a machine learning model according to the reference prediction features to predict the selection priority of each camera. The reference prediction features include at least one of the following: the view angle range of the camera, the distance between the camera and the first virtual object, the distance between the camera and the second virtual object, and the distance between the camera and the third virtual object. Before the cameras are displayed in the map of the virtual scene, the terminal device automatically acquires the reference prediction features of each camera, predicts the selection priority of each camera through a machine learning model of an artificial intelligence algorithm, and displays each camera in a display style matched with its selection priority, that is, displays the cameras distinctively according to the differences in their selection priorities. For example, cameras with different selection priorities are displayed in the map of the virtual scene in different display styles (such as different colors, different brightness, etc.); or, the higher the selection priority, the more prominent the display style of the corresponding camera in the map of the virtual scene; or, only the cameras whose selection priority is higher than a target priority are selectively displayed in the map of the virtual scene. Displaying cameras of different selection priorities in different display styles in this way gives the player a clear prompt and guides the player to select a suitable camera for picture switching, improving selection efficiency and thereby man-machine interaction efficiency.
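The priority-driven map display might be organized as in the sketch below; the feature list mirrors the reference prediction features named above, while the style thresholds and the `priority_model` callable are invented for illustration.

```python
import math

def camera_priority_features(cam, player_pos, teammate_positions, enemy_positions):
    """Reference prediction features: the camera's view angle range and its
    distances to the first, second, and third virtual objects."""
    nearest = lambda points: min(math.dist(cam["pos"], p) for p in points)
    return [cam["fov_degrees"], math.dist(cam["pos"], player_pos),
            nearest(teammate_positions), nearest(enemy_positions)]

def render_camera_icons(cameras, priority_model, target_priority=0.5):
    """Map each camera's predicted selection priority to a display style;
    cameras below the target priority are simply not drawn."""
    icons = []
    for cam in cameras:
        p = priority_model(cam)      # assumed to return a score in [0, 1]
        if p < target_priority:
            continue                 # selectively hide low-priority cameras
        style = "highlight" if p > 0.8 else "normal"
        icons.append((cam["camera_id"], style))
    return icons
```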
It will be appreciated that the relevant data relating to the scene data, the reference prediction features, etc. in the embodiments of the present application are essentially user-related data, and when the embodiments of the present application are applied to specific products or technologies, user permissions or consents need to be obtained, and the collection, use and processing of the relevant data need to comply with relevant laws and regulations and standards of relevant countries and regions.
After receiving the view angle switching instruction corresponding to the target camera, the terminal device, in response to the view angle switching instruction, switches the currently displayed first picture to the second picture of the virtual scene under the view angle of the target camera for display, so that the virtual scene can be observed through the second picture under the view angle of the target camera to detect whether other virtual objects exist in the virtual scene.
In some embodiments, after switching from displaying the first picture to displaying the second picture of the virtual scene corresponding to the target camera view angle, when the number of cameras over which the first virtual object has control right is at least two, the terminal device may display the target camera in a third display style in the map of the virtual scene; the third display style characterizes that the currently displayed second picture is the picture of the virtual scene under the view angle of the target camera.
Here, after the first virtual object is controlled to enter the view angle of the target camera to observe the virtual scene, if the number of cameras over which the first virtual object has control right is plural, then, as shown in fig. 6, the target camera 601 is displayed in the map of the virtual scene in a third display style (e.g., a different color, different brightness, etc.) distinct from the display styles of the other cameras, for example by highlighting the target camera while displaying the other cameras in gray scale, so as to prompt the player as to which camera's view angle has been entered.
Step 103: and in response to the marking operation for the third virtual object in the second picture, displaying the position information of the third virtual object in the third picture of the virtual scene corresponding to the second virtual object.
After the terminal device switches the currently displayed picture to the second picture, it can control the first virtual object to adjust the view angle range of the target camera (that is, adjust the display content of the second picture), and can detect display targets in the second picture in real time to determine whether an enemy target (that is, a third virtual object) appears in the second picture. When a third virtual object exists in the second picture, the third virtual object can be marked automatically, and the position information of the marked third virtual object is shared with the second virtual objects in the same group for viewing, so that the first virtual object and the second virtual object can formulate and adopt an interaction strategy suited to the current interaction situation when executing interaction operations with the third virtual object. This helps improve the interaction capability of the group where the first virtual object is located, thereby improving man-machine interaction efficiency.
In some embodiments, before the terminal device displays the position information of the third virtual object in the third picture of the virtual scene corresponding to the second virtual object view angle, the third virtual object may be marked in the following manner: receiving, in response to the third virtual object existing in the second picture, a marking instruction for marking the third virtual object; and, in response to the marking instruction, controlling the first virtual object to mark the third virtual object. In practical application, when the third virtual object exists in the second picture, the third virtual object may be marked automatically, or a corresponding marking instruction may be triggered to mark it. After the third virtual object is marked, the position information of the marked third virtual object is shared with the second virtual objects in the same group for viewing, so that the first virtual object and the second virtual object can formulate an interaction strategy suited to the current interaction situation and execute interaction operations with the third virtual object accordingly, which improves the interaction capability of the group where the first virtual object is located and thereby the man-machine interaction efficiency.
In some embodiments, the terminal device may receive a marking instruction for marking the third virtual object by: responding to a triggering operation for the third virtual object, and receiving a marking instruction for the third virtual object; or displaying a marking control for marking the third virtual object in the second screen, and receiving a marking instruction for the third virtual object in response to a triggering operation for the marking control.
Here, when the terminal device detects that the third virtual object exists in the second picture, the player may trigger the marking instruction by clicking the third virtual object (for example, aiming at the third virtual object with the mouse and then clicking the left mouse button to select it); of course, in practical application, the marking instruction for the third virtual object may also be triggered through a marking control displayed in the second picture.
In some embodiments, the terminal device may display the position information of the third virtual object in the third screen of the virtual scene corresponding to the second virtual object perspective in the following manner: displaying contour information of the marked third virtual object in a third picture of the virtual scene corresponding to the visual angle of the second virtual object by adopting a fifth display mode so as to determine the position information of the third virtual object when an obstacle which blocks the visual field exists between the second virtual object and the third virtual object; alternatively, the position information of the marked third virtual object is displayed in the map of the virtual scene.
Here, after receiving the marking instruction, the terminal device sends a marking request for the third virtual object to the server. The server determines display parameters of the third virtual object (such as display contour data, display position information, equipment information, etc.) based on the marking request, and returns the display parameters to the terminal devices of all virtual objects in the group where the first virtual object is located. These terminal devices display the relevant information of the third virtual object according to the display parameters, for example, highlighting the contour of the third virtual object in the second picture of the first virtual object and in the third picture of the second virtual object, or displaying the position information, equipment information, etc. of the third virtual object in the virtual scene in the map of the virtual scene.
Referring to fig. 5, fig. 5 is a schematic diagram of marking a virtual object provided in an embodiment of the present application. When a third virtual object 501 exists in the second picture, after the third virtual object 501 is marked, the terminal device highlights the contour information of the third virtual object 501 in the second picture and displays the position information 502 of the third virtual object 501 in the virtual scene in the map of the virtual scene. It can be understood that, because all virtual objects in the group where the first virtual object is located share the control right of the target camera, after the first virtual object is controlled to mark the third virtual object, the relevant information of the marked third virtual object can also be displayed on the terminal devices of the other virtual objects in that group. In this way, all virtual objects in the group where the first virtual object is located can view the marked third virtual object, which helps the first virtual object and its teammates (that is, the second virtual objects) interact with the third virtual object, can reduce the number of interaction operations required to achieve a given interaction purpose (such as defeating the group where the third virtual object is located), and improves man-machine interaction efficiency.
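The mark-and-share round trip (client to server, then server to every squad client) could be organized as in the sketch below; the message shapes, field names, and the outbox abstraction are assumptions for illustration, not the protocol actually used.

```python
from dataclasses import dataclass
from typing import Dict, List, Set, Tuple

@dataclass
class MarkedObject:
    contour: list                  # display contour data
    position: Tuple[float, float]
    equipment: list

def handle_mark_request(objects: Dict[str, MarkedObject],
                        group_members: Set[str],
                        outbox: List[tuple],
                        enemy_id: str) -> None:
    """Server side: resolve the display parameters of the marked enemy and
    fan them out to every member of the marking player's group."""
    enemy = objects[enemy_id]
    params = {"outline": enemy.contour,
              "position": enemy.position,
              "equipment": enemy.equipment}
    for member_id in group_members:
        # each squad client renders the outline in its own picture and
        # the position in its own map
        outbox.append((member_id, {"type": "enemy_marked",
                                   "enemy_id": enemy_id,
                                   "params": params}))
```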
In some embodiments, after the terminal device marks the third virtual object, when the number of the second virtual objects is at least two, displaying, in the map of the virtual scene, the object identifier of the target virtual object in the at least two second virtual objects by using a fourth display style; the fourth display style characterizes that the object identification of the target virtual object is in an operable state, and the distance between the target virtual object and the third virtual object is lower than a distance threshold; and when receiving triggering operation aiming at the object identification, sending position prompt information of the third virtual object to a terminal corresponding to the target virtual object.
Here, after the terminal device controls the first virtual object to mark the third virtual object, if there are multiple second virtual objects belonging to the same group as the first virtual object, the terminal device may calculate the distance between each second virtual object and the third virtual object, and select, from the multiple second virtual objects, a target number of second virtual objects whose distance is below the distance threshold as target virtual objects. The object identifier of each target virtual object (such as an object account, nickname, object icon, or avatar) is then displayed in the map of the virtual scene in a fourth display style, so as to prompt the player that the object identifier of the target virtual object is in an operable state (such as a clickable or interactable state). When the terminal device receives a triggering operation on the object identifier of a target virtual object, it sends position prompt information for prompting the position of the third virtual object in the virtual scene to the terminal of that target virtual object. After receiving the position prompt information, the terminal of the target virtual object can formulate an interaction strategy with the third virtual object based on the position information of the third virtual object and execute corresponding interaction operations according to that strategy.
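A minimal sketch of this teammate-selection step, assuming positions are 2D tuples and that `target_count` teammates are wanted; all names are illustrative.

```python
import math

def pick_target_virtual_objects(teammates, enemy_pos, distance_threshold,
                                target_count=1):
    """Select the teammates closest to the marked enemy (below the distance
    threshold); their map identifiers are then shown in the operable style."""
    in_range = [(math.dist(t["pos"], enemy_pos), t) for t in teammates
                if math.dist(t["pos"], enemy_pos) < distance_threshold]
    in_range.sort(key=lambda pair: pair[0])
    return [t for _, t in in_range[:target_count]]
```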
Referring to fig. 6, fig. 6 is a schematic display diagram of an object identifier provided in an embodiment of the present application. After the player marks the third virtual object (the enemy), the target virtual object closest to the enemy is selected from all the friendly second virtual objects, and the object icon 602 of that target virtual object is highlighted in the map of the virtual scene, where the object icon 602 is in a clickable state. When the player clicks the object icon 602 in the clickable state, position prompt information about the third virtual object can be sent to the target virtual object to prompt it to execute the interaction operation most suitable for the third virtual object, such as pursuing the third virtual object. In this way, by sending prompt information to the target virtual object best placed to execute the most effective interaction strategy against the third virtual object, the gameplay is enriched and the interaction capability of the target virtual object is improved, which in turn improves the overall interaction capability of the group where the first virtual object is located and thereby the man-machine interaction efficiency.
In some embodiments, the terminal device may further display the position information of the third virtual object in the second screen and the third screen of the virtual scene corresponding to the second virtual object perspective in the following manner: when the third virtual object is blocked by the obstacle, the third virtual object blocked by the obstacle is displayed in a perspective mode in the second picture and a third picture of the virtual scene corresponding to the view angle of the second virtual object, so that the position information of the third virtual object is in a visible state; the first virtual object or the second virtual object and the third virtual object are positioned on two sides of the obstacle.
Here, when the third virtual object is blocked by an obstacle (e.g., a wall, a door, or the like) under the view angle of the target camera or under the view angle of the second virtual object, the third virtual object may be displayed in perspective so that the first virtual object and the second virtual object can view the blocked third virtual object through the obstacle.
Referring to fig. 7, fig. 7 is a schematic diagram of displaying a virtual object provided in an embodiment of the present application. After the first virtual object marks the third virtual object 701, if the third virtual object 701 is blocked by an obstacle 702 under the second virtual object view angle (that is, in the third picture), the third virtual object 701 blocked by the obstacle 702 is displayed in a perspective manner, and the contour of the third virtual object 701 is highlighted in the picture to make it stand out. In this way, when the third virtual object 701 and the second virtual object 703 are located on two sides of the obstacle 702, the third virtual object 701 is visible to the second virtual object 703 while the second virtual object remains invisible to the third virtual object 701, which widens the gap in interaction capability between the third virtual object and the second virtual object, further improving the overall interaction capability of the group where the first virtual object is located and thereby the man-machine interaction efficiency.
It should be noted that, when the third virtual object displayed in perspective moves out of the area where the obstacle in the field of view of the second virtual object is located (i.e., the third virtual object is not blocked by the obstacle), the perspective display effect may be canceled.
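A sketch of the per-frame decision just described; `line_of_sight(a, b)` stands in for the engine's own occlusion query, which is assumed here rather than specified.

```python
def enemy_display_mode(enemy_id, marked_ids, viewer_pos, enemy_pos,
                       line_of_sight):
    """Decide how a squad client draws an enemy this frame: a marked enemy
    behind an obstacle is drawn as a see-through outline, and the effect
    is cancelled once it is no longer blocked."""
    if enemy_id not in marked_ids:
        return "normal"       # unmarked enemies get no special treatment
    if line_of_sight(viewer_pos, enemy_pos):
        return "normal"       # not occluded: cancel the perspective effect
    return "xray_outline"     # occluded: perspective display through the obstacle
```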
In some embodiments, when the obstacle is a penetrable obstacle, during interaction of the first virtual object with the third virtual object, in response to an aiming instruction for the third virtual object, controlling the first virtual object to project the first virtual prop in an aiming direction indicated by the aiming instruction; when the first virtual prop hits the obstacle, the first virtual prop is controlled to penetrate the obstacle to act on the third virtual object.
Here, when the first virtual object and the third virtual object are located on two sides of the obstacle and the third virtual object blocked by the obstacle is displayed in a perspective display manner, if the obstacle is a penetrable obstacle, the terminal device may control the first virtual object to project the first virtual prop toward the position of the perspectively displayed third virtual object relative to the obstacle, so that the first virtual prop penetrates the corresponding position of the obstacle to attack the third virtual object. Correspondingly, when the second virtual object and the third virtual object are located on two sides of the obstacle and the third virtual object is blocked by the obstacle, the terminal device on the second virtual object side can likewise control the second virtual object to attack the third virtual object through the obstacle with the corresponding virtual prop. In this way, the virtual objects of the group where the first virtual object is located can attack enemies outside the user's field of view, improving man-machine interaction efficiency.
Referring to fig. 8, fig. 8 is an attack schematic diagram of a virtual object provided in an embodiment of the present application. After the first virtual object marks the third virtual object, if the marked third virtual object 801 is blocked by a penetrable obstacle 802 under the second virtual object view angle (that is, in the third picture), the terminal device on the second virtual object side may control the second virtual object to shoot a virtual prop toward the position of the perspectively displayed third virtual object, so that the virtual prop penetrates the obstacle to attack the third virtual object blocked by it.
In some embodiments, when the obstacle is a penetrable obstacle, in the process of interaction between the first virtual object and the third virtual object, responding to a breaking instruction for the obstacle triggered based on the position of the third virtual object, and controlling the first virtual object to break a part of the area in the obstacle through the second virtual prop; in response to an aiming instruction for the third virtual object, controlling the first virtual object to project the third virtual prop along an aiming direction indicated by the aiming instruction; and when the third virtual prop passes through the partial area, controlling the third virtual prop to act on the third virtual object through the partial area.
Here, when the first virtual object and the third virtual object are located on two sides of the obstacle and the third virtual object blocked by the obstacle is displayed in a perspective display manner, if the obstacle is a destructible obstacle (one that cannot be directly penetrated), the terminal device can control the first virtual object to use the second virtual prop to destroy a partial region of the obstacle corresponding to the position of the third virtual object, so that the destroyed partial region becomes penetrable; the first virtual object is then controlled to use the third virtual prop to pass through the destroyed partial region and act on the third virtual object located behind it, thereby attacking the third virtual object. Similarly, after the first virtual object marks the third virtual object, under the second virtual object view angle (that is, in the third picture), the terminal device on the second virtual object side can control the second virtual object to use the second virtual prop to destroy a partial region of the obstacle corresponding to the position of the third virtual object, so that the destroyed partial region becomes penetrable, and then control the second virtual object to use the third virtual prop to pass through the destroyed partial region and act on the third virtual object located behind it, attacking the third virtual object. In this way, the virtual objects of the group where the first virtual object is located can attack enemies outside the user's field of view, improving man-machine interaction efficiency.
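The two obstacle behaviors can be contrasted in a small sketch; the `Obstacle` class, the `kind` strings, and the hit-point arithmetic are hypothetical simplifications, not the disclosed mechanics.

```python
from dataclasses import dataclass

@dataclass
class Obstacle:
    kind: str                  # "penetrable" or "destructible"
    region_broken: bool = False

def resolve_shot(obstacle: Obstacle, enemy_hp: int, damage: int) -> int:
    """Apply a projected virtual prop against an enemy behind an obstacle."""
    if obstacle.kind == "penetrable":
        return enemy_hp - damage           # prop passes straight through
    if obstacle.kind == "destructible":
        if not obstacle.region_broken:
            obstacle.region_broken = True  # this prop only opens a partial region
            return enemy_hp
        return enemy_hp - damage           # a later prop acts through the region
    return enemy_hp                        # otherwise the shot is blocked
```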
It should be noted that the first virtual prop, the second virtual prop and the third virtual prop may be the same virtual prop or may be different virtual props, and the embodiment of the present application does not limit the types of the virtual props.
By means of the mode, the camera provided by the embodiment of the application can detect and mark enemies, can also cooperate with scene elements (such as obstacles) in a virtual scene to generate additional effects (such as the ability of attacking the enemies shielded by the obstacles), further improves interaction ability, and can improve the enthusiasm of a player for using the camera.
In some embodiments, after the terminal device displays the position information of the third virtual object in the third picture of the virtual scene corresponding to the second virtual object view angle, when the third virtual object moves in the virtual scene, the displayed position information of the marked third virtual object is updated in the third picture as the third virtual object moves.
Here, when the marked third virtual object moves in the virtual scene, the terminal device can control the first virtual object to keep marking the third virtual object during its movement, and update the displayed position information of the marked third virtual object in the second picture and the third picture, for example by updating the highlighted contour information of the third virtual object, or by updating the displayed position information, equipment information, etc. of the third virtual object in the map of the virtual scene. In this way, all virtual objects in the group where the first virtual object is located can view, in real time, how the marked third virtual object moves through the virtual scene, which helps the first virtual object and its teammates (that is, the second virtual objects) interact with the third virtual object, can reduce the number of interaction operations required to achieve a given interaction purpose (such as defeating the group where the third virtual object is located), and improves man-machine interaction efficiency.
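A per-frame sketch of this continuous update, using a plain dictionary of last-published positions and an outbox list as stand-ins for the real networking layer; these containers are assumptions made for illustration.

```python
def tick_marked_enemies(marked_positions: dict, last_published: dict,
                        outbox: list) -> None:
    """While a marked enemy moves, keep re-publishing its position so every
    squad member's picture and map stay current."""
    for enemy_id, pos in marked_positions.items():
        if last_published.get(enemy_id) != pos:
            outbox.append({"type": "marker_update",
                           "enemy_id": enemy_id,
                           "position": pos})
            last_published[enemy_id] = pos
```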
An exemplary application of the embodiment of the present application in a practical application scenario is described below. Taking a shooting game as the virtual scene, referring to fig. 9, fig. 9 is a flow chart of the interaction processing method in a virtual scene provided by the embodiment of the present application. Taking as an example the case where the control right of the placed target camera is shared by the first virtual object and the second virtual objects in the group where the first virtual object is located, the method includes:
step 201: the terminal device displays a first picture of the shooting game corresponding to the first virtual object visual angle.
Here, when the user opens a shooting game client installed on the terminal device and the terminal device runs the shooting game client, a first picture in which the shooting game is observed from the view angle of the first virtual object can be displayed in the shooting game client, and game-related information can be displayed in the first picture.
Step 202: and judging whether the virtual knapsack is loaded with the target camera or not.
The target camera is used for detecting virtual objects in the shooting game, and in practical application, the target camera can detect other players in the shooting game. When the first virtual object wants to place the target camera in the shooting game, it needs to determine whether the target camera is loaded in the virtual knapsack of the first virtual object, and when the target camera is not loaded in the virtual knapsack, step 203 is executed; otherwise, step 204 is performed.
Step 203: and controlling the first virtual object to purchase the target camera and loading the target camera into the virtual knapsack.
Here, when the virtual knapsack of the first virtual object is not loaded with the target camera, the first virtual object can be controlled to acquire the target camera and load it into the virtual knapsack. For example, at the start of each game round, the player has 45 seconds to select the objects (virtual props) to carry in that round, and during this time a virtual prop carrying the camera may be selected.
For example, the terminal device displays at least one portable prop in the first picture, the at least one portable prop including at least the target camera. In response to a selection operation on the target camera among the at least one portable prop, a loading request for the target camera is sent to the server, and the server, based on the loading request, loads the requested target camera into the virtual knapsack of the first virtual object, that is, adds a preset number of target cameras to the virtual knapsack of the first virtual object, so that when a target camera later needs to be placed in the shooting game, it can be taken out of the virtual knapsack for placement.
Step 204: and in response to the selection operation for the target position in the first picture, controlling the first virtual object to take out the target camera from the virtual knapsack, and placing the target camera at the target position.
Here, in the first picture, in response to a mode setting instruction for the interaction mode of the shooting game, the interaction mode of the shooting game is set to a camera placement mode, in which the first virtual object can be controlled to place the target camera at any position within its reachable range (e.g., the target position). For example, in response to a selection operation on the target position, the terminal device sends a camera placement request to the server; the server parses the camera placement request to obtain the target position carried by it, removes one camera (that is, the target camera) from the virtual knapsack, and places the target camera at the target position, as shown in fig. 4.
After the target camera is placed at the target position, all virtual objects in the group where the first virtual object is located share the control right of the target camera. That is, when any one or more virtual objects in the group where the first virtual object is located place cameras for detecting other virtual objects in the virtual scene, all virtual objects in the group have control right over the placed cameras (such as the target camera). A virtual object with control right can enter the view angle of a placed camera to view the virtual scene under that view angle and detect other virtual objects in it, such as a second virtual object belonging to the same group as the first virtual object (that is, a friend) or a third virtual object belonging to a different group (that is, an enemy); the detection of an enemy is described below.
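The shared-control-right rule can be expressed in a couple of lines; the group and camera record shapes here are assumptions made only for illustration.

```python
def controllable_cameras(player_id: str, groups: dict, placed_cameras: list) -> list:
    """A player may control every camera placed by any member of their own
    group, regardless of which member placed it."""
    members = groups[player_id]   # set of member ids in the player's group
    return [cam for cam in placed_cameras if cam["placed_by"] in members]
```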
Step 205: and responding to the triggering operation of the first switching key, and receiving a visual angle switching instruction corresponding to the target camera.
Here, the player may trigger the view angle switching instruction for the target camera through a first switching key for switching the camera view angle. The first switching key may be a physical key on a physical keyboard preset to trigger the view angle switching instruction (e.g., the Z key), or a virtual key or virtual control displayed in the first picture and capable of triggering the view angle switching instruction. When the terminal device receives the view angle switching instruction for the target camera, it can execute the switching of the view angle picture in response to that instruction.
As described above, the cameras over which the first virtual object has control right are any one or more cameras placed by virtual objects in the group where the first virtual object is located. When the first virtual object has control right over at least two cameras (that is, cameras other than the target camera as well), each camera may correspond to one second switching key, and the view angle switching instruction for the corresponding camera is triggered through the second switching key. The second switching key may be a physical key on a physical keyboard preset to trigger the view angle switching instruction (one key per camera, for example, key positions A, B, and C respectively trigger the view angle switching instructions of camera 1, camera 2, and camera 3); it may also be a key position set on the mouse wheel capable of triggering the view angle switching instruction, where sliding the mouse wheel switches to the view angle picture of the previous or next camera until the view angle picture desired by the player is reached; or it may be a virtual key or virtual control displayed in the first picture and capable of triggering the view angle switching instruction, for example, clicking the virtual key "next" triggers the view angle switching instruction for the next camera, until the view angle switching instruction for the target camera is triggered.
Step 206: and responding to the visual angle switching instruction corresponding to the target camera, switching from displaying the first picture to displaying the second picture of the shooting game corresponding to the visual angle of the target camera.
In actual implementation, when receiving the view angle switching instruction corresponding to the target camera, the terminal device, in response to the view angle switching instruction, switches the currently displayed first picture to the second picture of the shooting game under the view angle of the target camera for display, so that the shooting game can be observed through the second picture under the view angle of the target camera, for example to detect whether other virtual objects exist in the shooting game.
When the first virtual object has control right over at least two cameras (that is, cameras other than the target camera as well), the terminal device, upon receiving the view angle switching instruction corresponding to a camera, can send a view angle switching request corresponding to that camera to the server; the server, in response to the view angle switching request, acquires the view angle picture corresponding to that camera and returns it to the terminal device for display, so as to switch the currently displayed view angle picture to the one corresponding to that camera. For example, suppose the cameras over which the first virtual object has control right include camera A, camera B, and camera C. When the view angle switching instruction corresponding to camera A is received, the currently displayed view angle picture is switched to the view angle picture corresponding to camera A; when the view angle switching instruction corresponding to camera B is triggered by sliding the mouse wheel, the currently displayed view angle picture corresponding to camera A is switched to the view angle picture corresponding to camera B; and so on. After receiving the user's view angle switching instructions for switching between different cameras, the terminal device obtains the game pictures under the fields of view of the several cameras from the server and switches between them in the terminal device according to the user's mouse wheel sliding operations.
Step 207: when the third virtual object exists in the second picture, marking the third virtual object, and displaying object information of the marked third virtual object in the second picture and a third picture of the shooting game corresponding to the second virtual object.
Here, the terminal device detects whether a third virtual object exists in the second picture. When it detects that a third virtual object exists in the second picture, the player may trigger a marking instruction by clicking the third virtual object (for example, aiming at the third virtual object with the mouse and then clicking the left mouse button to select it). In response to the marking instruction, a marking request for the third virtual object is sent to the server; the server determines display parameters of the third virtual object (such as display contour data, display position information, equipment information, etc.) based on the marking request and returns the display parameters to the terminal devices of all virtual objects in the group where the first virtual object is located. These terminal devices display the object information of the third virtual object according to the display parameters, for example, highlighting the contour information of the third virtual object in the second picture of the first virtual object and the third picture of the second virtual object, or displaying the position information, equipment information, etc. of the third virtual object in the virtual scene in the map of the virtual scene, as shown in fig. 5.
In this way, after the first virtual object marks the third virtual object, the marked third virtual object is shared with the second virtual objects in the same group for viewing, and based on the marked third virtual object, the first virtual object and the second virtual object can formulate an interaction strategy suited to the current interaction situation and execute interaction operations with the third virtual object accordingly, improving the interaction capability of the group where the first virtual object is located and thereby the man-machine interaction efficiency.
In some embodiments, after the terminal device controls the first virtual object to mark the third virtual object (the enemy), if there are multiple second virtual objects (friends) belonging to the same group as the first virtual object, the terminal device may screen out the target virtual object closest to the enemy from all the friendly second virtual objects and highlight the object icon of that target virtual object in the map of the virtual scene, the object icon being in a clickable state. When the player clicks the object identifier in the clickable state, position prompt information about the third virtual object is sent to the target virtual object to prompt it to execute the interaction operation most suitable for the third virtual object, for example, to pursue the third virtual object, as shown in fig. 6. In this way, by sending prompt information to the target virtual object best placed to execute the most effective interaction strategy against the third virtual object, the gameplay is enriched and the interaction capability of the target virtual object is improved, which in turn improves the overall interaction capability of the group where the first virtual object is located and thereby the man-machine interaction efficiency.
Step 208: and when the third virtual object is blocked by the obstacle, the third virtual object blocked by the obstacle is displayed in perspective in the second picture and the third picture.
Here, when the third virtual object is blocked by an obstacle (e.g., a wall, a door, or the like) under the view angle of the target camera or under the view angle of the second virtual object, the third virtual object is displayed in perspective so that the first virtual object and the second virtual object can see the blocked third virtual object through the obstacle, as shown in fig. 7. Thus, when the third virtual object and the second virtual object (or the first virtual object) are located on two sides of the obstacle, the third virtual object is visible to the second virtual object (or the first virtual object) while the second virtual object (or the first virtual object) remains invisible to the third virtual object, widening the gap in interaction capability between the third virtual object and the second virtual object (or the first virtual object), further improving the overall interaction capability of the group where the first virtual object is located and thereby the man-machine interaction efficiency.
Step 209: when the obstacle is a penetrable obstacle or a destructible obstacle, the first virtual object is controlled to penetrate the obstacle through the virtual prop to act on the third virtual object in the process of interaction between the first virtual object and the third virtual object.
Here, the obstacle includes penetrable obstacles and destructible obstacles. When the obstacle is a penetrable obstacle, the terminal device may control the first virtual object to project the first virtual prop toward the position of the perspectively displayed third virtual object relative to the obstacle, so that the first virtual prop penetrates the corresponding position of the obstacle to attack the third virtual object. When the obstacle is a destructible obstacle (one that cannot be directly penetrated), the terminal device can control the first virtual object to use the second virtual prop to destroy a partial region of the obstacle corresponding to the position of the third virtual object, so that the destroyed partial region becomes penetrable, and then control the first virtual object to use the third virtual prop to pass through the destroyed partial region and act on the third virtual object located behind it, thereby attacking the third virtual object. It should be noted that the first virtual prop, the second virtual prop, and the third virtual prop may be the same virtual prop or different virtual props; the embodiment of the present application does not limit the types of the virtual props.
Similarly, other virtual objects belonging to the same group as the first virtual object except the first virtual object can attack the third virtual object blocked by the barrier in the mode, so that the attack of the virtual object of the group where the first virtual object is located on the enemy outside the field of view of the user is realized, and the man-machine interaction efficiency is improved.
By means of the mode, the camera provided by the embodiment of the application can detect and mark enemies, can also cooperate with scene elements (such as obstacles) in a virtual scene to generate additional effects (such as being capable of attacking enemies shielded by the obstacles), further improves interaction capability, can improve enthusiasm of a player for using the camera, and further improves the utilization rate of the camera.
Continuing below with the description of an exemplary architecture, implemented as software modules, of the interaction processing apparatus 465 in a virtual scene provided by the embodiment of the present application. In some embodiments, the software modules of the interaction processing apparatus 465 in the virtual scene stored in the memory 460 in fig. 2 may include: a first display module 4651, configured to display a first picture of a virtual scene corresponding to a first virtual object; a picture switching module 4652, configured to switch, in response to a view angle switching instruction corresponding to a target camera over which the first virtual object has control right, from displaying the first picture to displaying a second picture of the virtual scene corresponding to the target camera view angle; and a second display module 4653, configured to display, in response to a marking operation for a third virtual object in the second picture, position information of the third virtual object in a third picture of the virtual scene corresponding to a second virtual object view angle; the first virtual object and the second virtual object are in a cooperative relationship, and the first virtual object and the third virtual object are in a hostile relationship.
In some embodiments, the apparatus further includes a first instruction receiving module configured, before the switch from displaying the first picture to displaying the second picture corresponding to the target camera view angle, to display, at a target position in the first picture, the target camera placed by the first virtual object or the second virtual object; and to receive, in response to a triggering operation of a first switching key for switching the camera view angle, the view angle switching instruction corresponding to the target camera.
In some embodiments, the instruction receiving module is further configured to set, in the first screen, an interaction mode of the virtual scene to a camera placement mode in response to a mode setting instruction for the interaction mode of the virtual scene; in the camera placing mode, in response to a selection operation for a target position in a first picture, the first virtual object is controlled to take out the target camera from a virtual back pack, and the target camera is placed at the target position so as to display the target camera at the target position.
In some embodiments, the apparatus further comprises: the position prediction module is used for acquiring scene data of the first virtual object in the virtual scene, wherein the scene data comprises at least one of the following: the method comprises the steps of setting environment data of a first virtual object, distances between the first virtual object and other virtual objects and interaction data between the first virtual object and the other virtual objects; and according to the scene data, calling a machine learning model to predict the position to be placed of the target camera to obtain the target position to be placed, and highlighting the target position.
In some embodiments, the apparatus further includes a second instruction receiving module configured, before the switch from displaying the first picture to displaying the second picture corresponding to the target camera view angle, to: when the number of cameras over which the first virtual object has control right is at least two and each camera corresponds to one second switching key, receive, in response to a triggering operation of the second switching key corresponding to a first camera among the at least two cameras, a view angle switching instruction corresponding to the first camera; in response to the view angle switching instruction corresponding to the first camera, switch from displaying the first picture to displaying a fourth picture of the virtual scene corresponding to the first camera view angle; and, based on the fourth picture, receive, in response to a triggering operation of the second switching key corresponding to the target camera among the at least two cameras, the view angle switching instruction corresponding to the target camera. Correspondingly, the picture switching module is further configured to switch, in response to the view angle switching instruction corresponding to the target camera, from displaying the fourth picture to displaying the second picture of the virtual scene corresponding to the target camera view angle.
In some embodiments, before the first screen is switched from displaying to displaying the second screen corresponding to the view angle of the target camera, the apparatus further includes a third instruction receiving module, configured to display a camera selection interface when the number of cameras of the first virtual object having control rights is at least two, and display the at least two cameras in the camera selection interface; and receiving a view angle switching instruction corresponding to a target camera in response to a selection operation of the target camera in the at least two cameras.
In some embodiments, before the first screen is switched to display the second screen corresponding to the view angle of the target camera, the apparatus further includes a fourth instruction receiving module, configured to display, when the number of cameras of the first virtual object having control rights is at least two, the target camera in a map corresponding to the virtual scene in a first display style, and display other cameras except for the target camera in the at least two cameras in a second display style; the first display pattern is different from the second display pattern, and the first display pattern represents the selection priority of the target camera and is higher than the selection priority of the other cameras; and responding to the selection operation of the target camera, and receiving a visual angle switching instruction corresponding to the target camera.
In some embodiments, after the switching from displaying the first picture to displaying a second picture of the virtual scene corresponding to the target camera perspective, the apparatus further comprises: the camera display module is used for displaying the target cameras in a third display mode in the map of the virtual scene when the number of cameras with control rights of the first virtual object is at least two; and the third display mode represents that the second picture currently displayed is a picture of the virtual scene corresponding to the target camera view angle.
In some embodiments, before the position information of the third virtual object is displayed in the third screen corresponding to the virtual scene from the second virtual object perspective, the apparatus further includes an object marking module, configured to receive a marking instruction for marking the third virtual object; and responding to the marking instruction, and controlling the first virtual object to mark the third virtual object.
In some embodiments, the object tagging module is further configured to receive a tagging instruction for the third virtual object in response to a trigger operation for the third virtual object; or displaying a marking control for marking the third virtual object in the second picture, and receiving a marking instruction for the third virtual object in response to a triggering operation for the marking control.
In some embodiments, the apparatus further comprises: the prompt sending module is used for displaying object identifiers of target virtual objects in the at least two second virtual objects by adopting a fourth display mode in the map of the virtual scene when the number of the second virtual objects is at least two; wherein the fourth display style characterizes the object identification of the target virtual object as being in an operable state, the distance between the target virtual object and the third virtual object being below a distance threshold; and when receiving triggering operation aiming at the object identification, sending the position prompt information of the third virtual object to the terminal corresponding to the target virtual object.
In some embodiments, the second display module is further configured to display, in a third screen of the virtual scene corresponding to a second virtual object perspective, contour information of the marked third virtual object in a fifth display style, so as to determine, when an obstacle that obstructs a field of view exists between the second virtual object and the third virtual object, position information of the third virtual object; or displaying the marked position information of the third virtual object in the map of the virtual scene.
In some embodiments, the second display module is further configured to, when the third virtual object is blocked by an obstacle, display the blocked third virtual object in a see-through manner in the third picture of the virtual scene corresponding to the second virtual object view angle, so that the position information of the third virtual object remains visible; here, the second virtual object and the third virtual object are located on opposite sides of the obstacle.
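As an example, an engine would typically choose between normal rendering and the contour or see-through display of the two preceding embodiments with a line-of-sight test; the following sketch replaces the engine's ray cast with a toy one-dimensional interval check, so every identifier in it is hypothetical:

```python
def line_of_sight_blocked(viewer_pos, target_pos, obstacles):
    """Crude stand-in for an engine ray cast: an obstacle blocks the view
    if its x-interval lies strictly between viewer and target."""
    lo, hi = sorted((viewer_pos[0], target_pos[0]))
    return any(lo < ob_lo and ob_hi < hi for ob_lo, ob_hi in obstacles)

def render_marked_target(viewer_pos, target_pos, obstacles):
    if line_of_sight_blocked(viewer_pos, target_pos, obstacles):
        # Fifth display style / see-through display: only the contour of
        # the marked third virtual object is drawn through the obstacle,
        # keeping its position information visible.
        return "draw_contour_through_obstacle"
    return "draw_normally"

print(render_marked_target((0, 0), (10, 0), obstacles=[(4, 6)]))  # contour
print(render_marked_target((0, 0), (3, 0), obstacles=[(4, 6)]))   # normal
```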
In some embodiments, the apparatus further includes a first attack module, configured to: when the obstacle is a penetrable obstacle, during the interaction between the first virtual object and the third virtual object, control, in response to an aiming instruction for the third virtual object, the first virtual object to project a first virtual prop along the aiming direction indicated by the aiming instruction; and, when the first virtual prop hits the obstacle, control the first virtual prop to penetrate the obstacle and act on the third virtual object.
In some embodiments, the apparatus further includes a second attack module, configured to: when the obstacle is a destructible obstacle, during the interaction between the first virtual object and the third virtual object, control, in response to a destruction instruction for the obstacle triggered based on the position of the third virtual object, the first virtual object to destroy a partial area of the obstacle with a second virtual prop; control, in response to an aiming instruction for the third virtual object, the first virtual object to project a third virtual prop along the aiming direction indicated by the aiming instruction; and, when the third virtual prop reaches the partial area, control the third virtual prop to pass through the partial area and act on the third virtual object.
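As an example, the distinction the first and second attack modules draw between penetrable and destructible obstacles could be resolved as below; Obstacle, resolve_hit, and the prop-kind strings are illustrative assumptions:

```python
class Obstacle:
    def __init__(self, penetrable=False, destructible=False):
        self.penetrable = penetrable
        self.destructible = destructible
        self.broken_regions = set()  # partial areas already destroyed

def resolve_hit(obstacle, region, prop_kind):
    """Decide whether a projected virtual prop reaches the third virtual
    object behind the obstacle."""
    if obstacle.penetrable:
        # First virtual prop: penetrates the obstacle and acts on the target.
        return "hit_target_through_obstacle"
    if obstacle.destructible:
        if prop_kind == "second":
            # Second virtual prop: destroys a partial area of the obstacle.
            obstacle.broken_regions.add(region)
            return "region_destroyed"
        if prop_kind == "third" and region in obstacle.broken_regions:
            # Third virtual prop: passes through the destroyed partial
            # area and acts on the third virtual object.
            return "hit_target_through_gap"
    return "blocked"

wall = Obstacle(destructible=True)
print(resolve_hit(wall, "window", "third"))   # blocked
print(resolve_hit(wall, "window", "second"))  # region_destroyed
print(resolve_hit(wall, "window", "third"))   # hit_target_through_gap
```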
In some embodiments, after the position information of the third virtual object is displayed in the third picture of the virtual scene corresponding to the second virtual object view angle, the apparatus further includes an updating module, configured to, when the third virtual object moves in the virtual scene, update the displayed position information of the third virtual object in the third picture as the third virtual object moves.
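As an example, the updating module reduces to re-drawing the mark whenever the engine reports that the target moved; a minimal sketch, assuming a dictionary of on-screen marks keyed by object identifier:

```python
def update_mark(third_picture_marks, object_id, new_pos):
    # Re-draw the mark at the target's latest position; a real engine
    # would call this once per frame or on each movement event.
    third_picture_marks[object_id] = new_pos
    print(f"mark for object {object_id} moved to {new_pos}")

marks = {}
for pos in [(0, 0), (1, 0), (2, 1)]:  # simulated movement of the third virtual object
    update_mark(marks, 42, pos)
```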
Embodiments of the present application provide a computer program product or computer program including computer instructions stored in a computer readable storage medium. A processor of a computer device reads the computer instructions from the computer readable storage medium and executes them, causing the computer device to perform the interaction processing method in a virtual scene according to the embodiments of the present application.
Embodiments of the present application provide a computer readable storage medium storing executable instructions that, when executed by a processor, cause the processor to perform the interaction processing method in a virtual scene provided by the embodiments of the present application, for example, the method shown in Fig. 3.
In some embodiments, the computer readable storage medium may be a memory such as an FRAM, a ROM, a PROM, an EPROM, an EEPROM, a flash memory, a magnetic surface memory, an optical disc, or a CD-ROM, or may be any device including one of the above memories or any combination thereof.
In some embodiments, the executable instructions may be in the form of programs, software modules, scripts, or code, written in any form of programming language (including compiled or interpreted languages, or declarative or procedural languages), and they may be deployed in any form, including as stand-alone programs or as modules, components, subroutines, or other units suitable for use in a computing environment.
As an example, the executable instructions may, but need not, correspond to files in a file system, and may be stored as part of a file that holds other programs or data, for example, in one or more scripts in a Hypertext Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files storing one or more modules, subprograms, or portions of code).
As an example, the executable instructions may be deployed to be executed on one computing device, on multiple computing devices located at one site, or on multiple computing devices distributed across multiple sites and interconnected by a communication network.
The foregoing describes merely exemplary embodiments of the present application and is not intended to limit the protection scope of the present application. Any modification, equivalent replacement, or improvement made within the spirit and scope of the present application shall fall within the protection scope of the present application.

Claims (20)

1. An interaction processing method in a virtual scene, the method comprising:
displaying a first picture of the virtual scene corresponding to a first virtual object view angle;
in response to a view angle switching instruction corresponding to a target camera over which the first virtual object has control, switching from displaying the first picture to displaying a second picture of the virtual scene corresponding to the target camera view angle;
in response to a marking operation for a third virtual object in the second picture, displaying position information of the third virtual object in a third picture of the virtual scene corresponding to a second virtual object view angle;
wherein the first virtual object and the second virtual object are in a cooperative relationship, and the first virtual object and the third virtual object are in a hostile relationship.
2. The method of claim 1, wherein before the switching from displaying the first picture to displaying the second picture corresponding to the target camera view angle, the method further comprises:
displaying, at a target position in the first picture, a target camera placed by the first virtual object or the second virtual object;
and receiving, in response to a trigger operation on a first switching key for switching the camera view angle, a view angle switching instruction corresponding to the target camera.
3. The method of claim 2, wherein the displaying, at the target position in the first picture, the target camera placed by the first virtual object comprises:
in the first picture, setting an interaction mode of the virtual scene to a camera placement mode in response to a mode setting instruction for the interaction mode of the virtual scene;
and in the camera placement mode, in response to a selection operation on the target position in the first picture, controlling the first virtual object to take the target camera out of a virtual backpack and place it at the target position, so that the target camera is displayed at the target position.
4. The method of claim 3, wherein the method further comprises:
acquiring scene data of the first virtual object in the virtual scene, the scene data comprising at least one of the following: environment data of the environment in which the first virtual object is located, distances between the first virtual object and other virtual objects, and interaction data between the first virtual object and the other virtual objects;
and invoking, according to the scene data, a machine learning model to predict a position at which the target camera is to be placed, obtaining the target position, and highlighting the target position.
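As an illustrative, non-limiting example (not part of the claims): the camera placement mode of claim 3 and the position prediction of claim 4 could be sketched as follows, with a toy scoring heuristic standing in for the claimed machine learning model; every identifier here is hypothetical:

```python
def predict_target_position(scene_data, candidates):
    """Stand-in for the machine learning model of claim 4: scores each
    candidate placement from the scene data and returns the best one.
    The score below is a toy heuristic, not the claimed model."""
    return max(candidates, key=lambda pos: scene_data["enemy_distance"][pos])

class PlacementMode:
    def __init__(self, backpack):
        self.backpack = backpack  # virtual backpack holding cameras

    def place(self, target_position):
        # In camera placement mode, a selection operation at the target
        # position takes a camera out of the backpack and places it there.
        camera = self.backpack.pop()
        return camera, target_position

scene = {"enemy_distance": {"roof": 40.0, "door": 5.0}}  # illustrative scene data
best = predict_target_position(scene, ["roof", "door"])  # -> "roof", then highlighted
print(PlacementMode(backpack=["cam_1"]).place(best))     # ('cam_1', 'roof')
```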
5. The method of claim 1, wherein before the switching from displaying the first picture to displaying the second picture corresponding to the target camera view angle, the method further comprises:
when the first virtual object has control over at least two cameras and each camera corresponds to one second switching key, receiving, in response to a trigger operation on the second switching key corresponding to a first camera among the at least two cameras, a view angle switching instruction corresponding to the first camera;
in response to the view angle switching instruction corresponding to the first camera, switching from displaying the first picture to displaying a fourth picture of the virtual scene corresponding to the first camera view angle;
and receiving, based on the fourth picture and in response to a trigger operation on the second switching key corresponding to the target camera among the at least two cameras, a view angle switching instruction corresponding to the target camera;
wherein the switching, in response to the view angle switching instruction corresponding to the target camera, from displaying the first picture to displaying the second picture of the virtual scene corresponding to the target camera view angle comprises:
in response to the view angle switching instruction corresponding to the target camera, switching from displaying the fourth picture to displaying the second picture of the virtual scene corresponding to the target camera view angle.
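As an illustrative, non-limiting example of the per-camera second switching keys of claim 5: the following sketch binds one key per controllable camera and swaps the displayed picture on each key press; the key names and the ViewSwitcher class are hypothetical:

```python
class ViewSwitcher:
    """Each controllable camera binds its own second switching key;
    pressing that key switches the display to the camera's view angle."""

    def __init__(self, key_to_camera):
        self.key_to_camera = key_to_camera
        self.current_view = "first_picture"  # first virtual object's view angle

    def on_key(self, key):
        camera = self.key_to_camera.get(key)
        if camera is not None:
            # The view angle switching instruction for this camera replaces
            # whatever picture is currently displayed with its picture.
            self.current_view = f"picture_of_{camera}"
        return self.current_view

switcher = ViewSwitcher({"F1": "camera_1", "F2": "target_camera"})
print(switcher.on_key("F1"))  # fourth picture: the first camera's view angle
print(switcher.on_key("F2"))  # second picture: the target camera's view angle
```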
6. The method of claim 1, wherein before the switching from displaying the first picture to displaying the second picture corresponding to the target camera view angle, the method further comprises:
when the first virtual object has control over at least two cameras, displaying a camera selection interface, and displaying the at least two cameras in the camera selection interface;
and receiving, in response to a selection operation on a target camera among the at least two cameras, a view angle switching instruction corresponding to the target camera.
7. The method of claim 1, wherein before the switching from displaying the first picture to displaying the second picture corresponding to the target camera view angle, the method further comprises:
when the first virtual object has control over at least two cameras, displaying the target camera in a first display style in a map corresponding to the virtual scene, and displaying the other cameras among the at least two cameras in a second display style;
wherein the first display style is different from the second display style, and the first display style indicates that the selection priority of the target camera is higher than that of the other cameras;
and receiving, in response to a selection operation on the target camera, a view angle switching instruction corresponding to the target camera.
8. The method of claim 1, wherein after the switching from displaying the first picture to displaying the second picture of the virtual scene corresponding to the target camera view angle, the method further comprises:
when the first virtual object has control over at least two cameras, displaying the target camera in a third display style in the map of the virtual scene;
wherein the third display style indicates that the currently displayed second picture is the picture of the virtual scene corresponding to the target camera view angle.
9. The method of claim 1, wherein before the displaying the position information of the third virtual object in the third picture of the virtual scene corresponding to the second virtual object view angle, the method further comprises:
receiving, when a third virtual object exists in the second picture, a marking instruction for marking the third virtual object;
and controlling, in response to the marking instruction, the first virtual object to mark the third virtual object.
10. The method of claim 9, wherein the receiving the marking instruction for marking the third virtual object comprises:
receiving the marking instruction for the third virtual object in response to a trigger operation on the third virtual object; or
displaying, in the second picture, a marking control for marking the third virtual object, and receiving the marking instruction for the third virtual object in response to a trigger operation on the marking control.
11. The method of claim 1, wherein the method further comprises:
when there are at least two second virtual objects, displaying, in a fourth display style in the map of the virtual scene, an object identifier of a target virtual object among the at least two second virtual objects;
wherein the fourth display style indicates that the object identifier of the target virtual object is in an operable state, the distance between the target virtual object and the third virtual object being below a distance threshold;
and sending, upon receiving a trigger operation on the object identifier, position prompt information of the third virtual object to a terminal corresponding to the target virtual object.
12. The method of claim 1, wherein the displaying the position information of the third virtual object in the third picture of the virtual scene corresponding to the second virtual object view angle comprises:
displaying, in a fifth display style in the third picture of the virtual scene corresponding to the second virtual object view angle, contour information of the marked third virtual object, so that the position information of the third virtual object can be determined when an obstacle blocking the field of view exists between the second virtual object and the third virtual object;
or displaying the position information of the third virtual object in the map of the virtual scene.
13. The method of claim 1, wherein the displaying the position information of the third virtual object in the third picture of the virtual scene corresponding to the second virtual object view angle comprises:
when the third virtual object is blocked by an obstacle, displaying the blocked third virtual object in a see-through manner in the third picture of the virtual scene corresponding to the second virtual object view angle, so that the position information of the third virtual object remains visible;
wherein the second virtual object and the third virtual object are located on opposite sides of the obstacle.
14. The method of claim 13, wherein the method further comprises:
when the obstacle is a penetrable obstacle, during the interaction between the first virtual object and the third virtual object, controlling, in response to an aiming instruction for the third virtual object, the first virtual object to project a first virtual prop along an aiming direction indicated by the aiming instruction;
and when the first virtual prop hits the obstacle, controlling the first virtual prop to penetrate the obstacle and act on the third virtual object.
15. The method of claim 13, wherein the method further comprises:
when the obstacle is a destructible obstacle, during the interaction between the first virtual object and the third virtual object, controlling, in response to a destruction instruction for the obstacle triggered based on the position of the third virtual object, the first virtual object to destroy a partial area of the obstacle with a second virtual prop;
controlling, in response to an aiming instruction for the third virtual object, the first virtual object to project a third virtual prop along an aiming direction indicated by the aiming instruction;
and when the third virtual prop reaches the partial area, controlling the third virtual prop to pass through the partial area and act on the third virtual object.
16. The method of claim 1, wherein after the displaying the position information of the third virtual object in the third picture of the virtual scene corresponding to the second virtual object view angle, the method further comprises:
when the third virtual object moves in the virtual scene, updating the displayed position information of the third virtual object in the third picture as the third virtual object moves.
17. An interaction processing apparatus in a virtual scene, the apparatus comprising:
a first display module, configured to display a first picture of a virtual scene corresponding to a first virtual object view angle;
a picture switching module, configured to switch, in response to a view angle switching instruction corresponding to a target camera over which the first virtual object has control, from displaying the first picture to displaying a second picture of the virtual scene corresponding to the target camera view angle;
a second display module, configured to display, in response to a marking operation for a third virtual object in the second picture, position information of the third virtual object in a third picture of the virtual scene corresponding to a second virtual object view angle;
wherein the first virtual object and the second virtual object are in a cooperative relationship, and the first virtual object and the third virtual object are in a hostile relationship.
18. A terminal device, comprising:
a memory for storing executable instructions;
a processor configured to implement the interaction processing method in a virtual scene according to any one of claims 1 to 16 when executing the executable instructions stored in the memory.
19. A computer readable storage medium storing executable instructions which, when executed by a processor, implement the interaction processing method in a virtual scene according to any one of claims 1 to 16.
20. A computer program product comprising a computer program or instructions which, when executed by a processor, implement the interaction processing method in a virtual scene according to any one of claims 1 to 16.
CN202210764776.5A 2022-06-29 2022-06-29 Interactive processing method, device, equipment and storage medium in virtual scene Pending CN116764207A (en)

Priority Applications (1)

Application Number: CN202210764776.5A; Priority/Filing Date: 2022-06-29; Title: Interactive processing method, device, equipment and storage medium in virtual scene

Publications (1)

Publication Number: CN116764207A; Publication Date: 2023-09-19

Family ID: 88010389

Family Applications (1)

Application Number: CN202210764776.5A (Pending); Priority/Filing Date: 2022-06-29; Title: Interactive processing method, device, equipment and storage medium in virtual scene

Country Status (1): CN, published as CN116764207A

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination