CN117504279A - Interactive processing method and device in virtual scene, electronic equipment and storage medium

Info

Publication number: CN117504279A
Application number: CN202210901781.6A
Authority: CN (China)
Legal status: Pending
Prior art keywords: virtual, scene, virtual object, canvas, displaying
Other languages: Chinese (zh)
Inventor: 朱盈婷
Assignee (current and original): Tencent Technology Chengdu Co Ltd


Abstract

The application provides an interactive processing method and device in a virtual scene, an electronic device, and a storage medium. The method comprises the following steps: displaying a first virtual scene and a virtual prop located in the first virtual scene; in response to an interactive operation between a first virtual object and the virtual prop, controlling the first virtual object and a second virtual object to switch to a second virtual scene, and displaying a canvas control in the second virtual scene, where the first virtual object and the second virtual object have a combination relationship; in response to a drawing operation in the canvas control, displaying, in the canvas control, first scene material drawn by the first virtual object and second scene material drawn by the second virtual object; and in response to an operation confirming the end of drawing in the canvas control, displaying a virtual image blended into the second virtual scene, the virtual image comprising the first scene material and the second scene material. Through the present application, interaction across virtual scenes can be realized efficiently.

Description

Interactive processing method and device in virtual scene, electronic equipment and storage medium
Technical Field
The present disclosure relates to computer technology, and in particular, to an interactive processing method and apparatus in a virtual scene, an electronic device, and a storage medium.
Background
Display technology based on graphics processing hardware has expanded the channels for perceiving the environment and acquiring information. In particular, virtual scene display technology can realize diversified movement of virtual objects controlled by users or by artificial intelligence according to actual application requirements, and has many typical applications: in virtual scenes such as games, for example, the interaction process between virtual objects can be simulated.
In the related art, when a plurality of virtual objects interacting in one virtual scene are transferred to another virtual scene, the isolation between different virtual scenes separates the virtual objects that were originally interacting, making it difficult to maintain the continuity of their interaction in the other virtual scene.
Disclosure of Invention
The embodiments of the present application provide an interaction processing method and apparatus in a virtual scene, an electronic device, a computer-readable storage medium, and a computer program product, which can efficiently realize interaction across virtual scenes.
The technical scheme of the embodiment of the application is realized as follows:
the embodiment of the application provides an interactive processing method in a virtual scene, which comprises the following steps:
displaying a first virtual scene and displaying virtual props positioned in the first virtual scene;
controlling, in response to an interactive operation between a first virtual object and the virtual prop, the first virtual object and a second virtual object to switch to a second virtual scene, and displaying a canvas control in the second virtual scene; wherein the first virtual object and the second virtual object have a combination relationship;
in response to a drawing operation in the canvas control, displaying at least one first scene material drawn by the first virtual object and at least one second scene material drawn by the second virtual object in the canvas control;
and displaying, in response to an operation confirming the end of drawing in the canvas control, a virtual image blended into the second virtual scene, wherein the virtual image comprises the at least one first scene material and the at least one second scene material.
The embodiment of the application provides an interactive processing device in a virtual scene, which comprises:
The display module is used for displaying a first virtual scene and displaying virtual props positioned in the first virtual scene;
the switching module is used for responding to the interactive operation between the first virtual object and the virtual prop, controlling the first virtual object and the second virtual object to switch to a second virtual scene, and displaying a canvas control in the second virtual scene; wherein the first virtual object and the second virtual object have a combination relationship;
a drawing module, configured to display, in response to a drawing operation in the canvas control, at least one first scene material drawn by the first virtual object and at least one second scene material drawn by the second virtual object in the canvas control;
the display module is further configured to display, in response to an operation confirming the end of drawing in the canvas control, a virtual image blended into the second virtual scene, where the virtual image includes the at least one first scene material and the at least one second scene material.
An embodiment of the present application provides an electronic device, including:
a memory for storing computer executable instructions;
and the processor is used for realizing the interactive processing method in the virtual scene when executing the computer executable instructions stored in the memory.
The embodiment of the application provides a computer readable storage medium, which stores computer executable instructions for implementing an interactive processing method in a virtual scene provided by the embodiment of the application when being executed by a processor.
The embodiment of the application provides a computer program product, which comprises a computer program or instructions, wherein the computer program or instructions realize the interactive processing method in the virtual scene provided by the embodiment of the application when being executed by a processor.
The embodiment of the application has the following beneficial effects:
on the one hand, because virtual objects having a combination relationship are used as the switching unit, the efficiency of virtual scene switching is improved compared with switching virtual scenes one virtual object at a time; on the other hand, interaction spanning the first virtual scene and the second virtual scene is realized: the combination relationship forms a seamless connection between the scenes, eliminating the sense of unfamiliarity after the switch and improving the user experience.
Drawings
Fig. 1A is an application mode schematic diagram of an interaction processing method in a virtual scene according to an embodiment of the present application;
fig. 1B is an application mode schematic diagram of an interaction processing method in a virtual scene according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of a terminal device 400 provided in an embodiment of the present application;
fig. 3A to 3J are schematic flow diagrams of an interaction processing method in a virtual scene according to an embodiment of the present application;
fig. 4A is a schematic diagram of interaction processing in a first virtual scene provided in an embodiment of the present application;
FIG. 4B is a schematic diagram of an interaction range of virtual props provided by embodiments of the present application;
FIG. 4C is a schematic diagram of interaction processing in a first virtual scenario provided by an embodiment of the present application;
FIG. 4D is a schematic diagram of a secondary confirmation dialog provided by an embodiment of the present application;
FIGS. 4E-4I are schematic diagrams of canvas controls provided by embodiments of the present application;
FIGS. 5A-5B are schematic diagrams of a sharing interface provided by embodiments of the present application;
fig. 5C to 5D are schematic diagrams of a second virtual scene provided in an embodiment of the present application;
fig. 6 is a schematic flowchart of an alternative method for processing interactions in a virtual scenario according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the present application will be described in further detail with reference to the accompanying drawings, and the described embodiments should not be construed as limiting the present application, and all other embodiments obtained by those skilled in the art without making any inventive effort are within the scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is to be understood that "some embodiments" can be the same subset or different subsets of all possible embodiments and can be combined with one another without conflict.
In the following description, the terms "first", "second", "third" and the like are merely used to distinguish similar objects and do not imply a particular ordering; it is understood that, where permitted, "first", "second" and "third" may be interchanged in a particular order or sequence, so that the embodiments of the application described herein can be practiced in orders other than those illustrated or described. In the following, "first virtual object" and "second virtual object" are used only to distinguish different virtual objects and are interchangeable without referring to any specific object; the same applies to the first and second scene materials and to the first and second canvas areas.
It should be noted that, in the embodiments of the present application, related data such as user information, user feedback data, etc., when the embodiments of the present application are applied to specific products or technologies, user permission or consent needs to be obtained, and the collection, use, and processing of related data needs to comply with related laws and regulations and standards of related countries and regions.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the present application.
Before the embodiments of the present application are described in further detail, the terms and expressions referred to in the embodiments are first explained; the following explanations apply throughout.
1) Virtual scene: a scene, output by a device, that differs from the real world. Visual perception of the virtual scene can be formed with the naked eye or with the assistance of a device, for example a two-dimensional image output by a display screen, or a three-dimensional image output by three-dimensional display technologies such as three-dimensional projection, virtual reality and augmented reality; in addition, various perceptions simulating the real world, such as auditory, tactile, olfactory and motion perception, can be formed through various possible hardware.
2) "In response to": used to indicate the condition or state on which a performed operation depends; when the condition or state is satisfied, the operation or operations may be performed in real time or with a set delay. Unless otherwise specified, there is no limitation on the execution order of multiple operations so performed.
3) Virtual object: an object that interacts in the virtual scene; under the control of a user or of a robot program (e.g., an artificial-intelligence-based robot program), it can stand still, move, and perform various actions in the virtual scene, such as the various characters in a game.
4) Field of view: the spatial range that can be seen in the virtual scene, which may be presented from a first-person or third-person perspective. The "eyes" of a virtual object are implemented by a virtual camera in the virtual scene engine. For example, a virtual camera is mounted at a preset position (e.g., the head) of a virtual object; the field of view of the virtual object is the viewing range of its corresponding virtual camera in the virtual environment, and likewise the field of view of a virtual sensor is the viewing range of its corresponding virtual camera (virtual lens).
5) Billboard technology (billboards): a technique that renders a tile (a bounded virtual plane in the virtual scene) and keeps the tile always oriented toward the virtual camera.
6) Team formation: a set of at least two virtual objects formed in the game with the aim of cooperatively completing a task.
7) Combination relationship: an exclusive one-to-one social relationship in the virtual scene. Each virtual object can be in only one combination relationship at a time; that is, at a given moment, virtual object A can be combined with only one of virtual object B and virtual object C. If virtual object A is already combined with virtual object B and is to be combined with virtual object C instead, the combination relationship with virtual object B must first be dissolved; the combination relationship is therefore an exclusive, intimate social relationship. For example, in a story-driven game, the combination relationship may be a spouse relationship or a lover relationship (also referred to as a concentric relationship); in a combat game, it may be a partner relationship.
8) Tile (face sheet): a basic element of a polygon mesh. In computer graphics, a polygon mesh (Polygon mesh) is a collection of vertices and polygons that represents various shapes, also called an unstructured mesh. Polygon meshes are typically composed of triangles, quadrilaterals, or other simple convex polygons, which simplifies the rendering process.
The embodiment of the application provides an interaction processing method in a virtual scene, an interaction processing device in the virtual scene, electronic equipment, a computer readable storage medium and a computer program product, which can efficiently realize interaction crossing the virtual scene.
The following describes exemplary applications of the electronic device provided in the embodiments of the present application. The electronic device may be implemented as a notebook computer, tablet computer, desktop computer, set-top box, mobile device (for example, a mobile phone, portable music player, personal digital assistant, dedicated messaging device, or portable game device), vehicle-mounted terminal, or other type of user terminal, and may also be implemented as a server. An exemplary application in which the electronic device is implemented as a terminal device is described below.
Before describing fig. 1A, the game mode realized cooperatively among terminal devices is first described. In this scheme, each terminal device is installed with a complete client of the game. Operation instructions input by a player on a terminal device are processed entirely by the game logic run by that terminal device (such as the interaction logic between virtual objects), and the game scene data is rendered into audio and video output by utilizing the computing power of the terminal device's graphics processing hardware. The terminal devices may communicate with one another via Bluetooth, an intranet, the Internet, or the like.
In an application scenario, referring to fig. 1A, fig. 1A is an application mode schematic diagram of an interaction processing method in a virtual scenario provided in the embodiment of the present application, which is suitable for some application modes that can complete relevant data computation of the virtual scenario completely depending on the computing capability of graphics processing hardware of a terminal device, for example, a game in a stand-alone mode or an offline mode, and output of the virtual scenario is completed through various different types of terminal devices such as a smart phone, a tablet computer, and a virtual reality or augmented reality device.
Terminal device 400-1 runs client 101-1, terminal device 400-2 runs client 101-2, and terminal device 400-1 and terminal device 400-2 communicate through the network 300 or via Bluetooth.
For example, the first virtual object and the second virtual object may be virtual objects controlled by two real players (first user, second user), respectively, and the client 101-1 and the client 101-2 are clients of the same game. The following is a description in connection with the above examples.
The client 101-1 running in the terminal device 400-1 displays the first virtual scene and the virtual prop in the field of view of the first virtual object. The client 101-2 operated by the terminal device 400-2 displays the first virtual scene in the field of view of the second virtual object. The first user controls the first virtual object to interact with the virtual prop through the terminal device 400-1, and controls the first virtual object and the second virtual object to be switched into the second virtual scene. The canvas control 102-1 is displayed in the second virtual scene and the first user draws a virtual image through the canvas control 102-1 in the identity of the first virtual object. Meanwhile, the second user controls the second virtual object through the terminal device 400-2, the canvas control 102-2 is displayed in the second virtual scene of the terminal device 400-2, and the second user draws the same virtual image with the first user through the canvas control 102-2 in the identity of the second virtual object. After the drawing is completed, the terminal device 400-1 and the terminal device 400-2 synchronously display the virtual image blended into the second virtual scene.
Before describing fig. 1B, a description will be first given of a game mode related to an implementation in which a terminal device and a server are cooperatively implemented. Aiming at the scheme of collaborative implementation of terminal equipment and a server, two game modes, namely a local game mode and a cloud game mode, are mainly involved, wherein the local game mode refers to that the terminal equipment and the server cooperatively run game processing logic, an operation instruction input by a player in the terminal equipment is partially processed by the game logic run by the terminal equipment, the other part is processed by the game logic run by the server, and the game logic process run by the server is more complex and consumes more calculation power; the cloud game mode is that a server runs game logic processing, and a cloud server renders game scene data into audio and video streams and transmits the audio and video streams to a terminal device for display. The terminal device only needs to have the basic streaming media playing capability and the capability of acquiring the operation instruction of the player and sending the operation instruction to the server.
In another application scenario, referring to fig. 1B, fig. 1B is a schematic application mode diagram of an interaction processing method in a virtual scenario provided in an embodiment of the present application, applied to a terminal device and a server 200, and adapted to complete virtual scenario calculation depending on a computing capability of the server 200, and output an application mode of the virtual scenario at the terminal device. The server 200 communicates with each terminal device via a network 300.
For example, the server 200 is a server of the game platform; the first virtual object and the second virtual object may be virtual objects controlled by two real players (a first user and a second user), respectively, and the client 101-1 and the client 101-2 are clients of the same game. The following description continues with this example.
The client 101-1 running in the terminal device 400-1 displays the first virtual scene in the field of view of the first virtual object, and the virtual prop. The client 101-2 operated by the terminal device 400-2 displays the first virtual scene in the field of view of the second virtual object. The first user controls the first virtual object to interact with the virtual prop through the terminal device 400-1, and the first virtual object and the second virtual object are switched into the second virtual scene. The canvas control 102-1 is displayed in the second virtual scene and the first user draws a virtual image through the canvas control 102-1 in the identity of the first virtual object. The server 200 receives the drawing operation for the canvas control 102-1 by the first user sent by the terminal device 400-1 and synchronizes the drawing operation to the terminal device 400-2.
Meanwhile, the second user controls the second virtual object through the terminal device 400-2; the canvas control 102-2 is displayed in the second virtual scene of the terminal device 400-2, and the second user, under the identity of the second virtual object, draws the same virtual image as the first user through the canvas control 102-2. The server 200 receives the drawing operation on the canvas control 102-2 sent by the terminal device 400-2 and synchronizes it to the terminal device 400-1. In this way, interactive processing between the virtual objects in the virtual scene is realized. After the drawing is completed, the server 200 merges the completed virtual image into the second virtual scene and transmits the corresponding data to the terminal device 400-1 and the terminal device 400-2, so that they synchronously display the virtual image blended into the second virtual scene.
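To make the relay flow above concrete, the following TypeScript sketch shows one possible server-side synchronization of drawing operations between the two terminal devices. The patent does not define a wire format or API, so DrawOp, DrawSyncServer and every field name here are illustrative assumptions.

    // Hypothetical message shape for a single drawing operation; the patent
    // does not specify these fields, they are assumptions for illustration.
    interface DrawOp {
      objectId: string;      // identity that drew (first or second virtual object)
      canvasAreaId: string;  // canvas area the operation belongs to
      materialId: string;    // e.g. "cloud", "crescent", "note"
      kind: "place" | "move" | "flip" | "scale" | "rotate" | "delete";
      x: number;
      y: number;
    }

    // Server-side relay: forward each client's drawing operation to its
    // partner so both canvas controls stay in sync (mirrors Fig. 1B).
    class DrawSyncServer {
      private partners = new Map<string, string>(); // objectId -> partner objectId

      constructor(private send: (objectId: string, op: DrawOp) => void) {}

      pair(a: string, b: string): void {
        this.partners.set(a, b);
        this.partners.set(b, a);
      }

      onDrawOp(op: DrawOp): void {
        const partner = this.partners.get(op.objectId);
        if (partner !== undefined) {
          this.send(partner, op); // synchronize to the other terminal device
        }
      }
    }

A production implementation would also buffer operations so that a briefly disconnected client can replay its partner's strokes, but that is beyond what this passage describes.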
Taking a computer program as an application program as an example, the terminal device installs and runs the application program supporting the virtual scene. The application may be any one of a First person shooter game (FPS), a third person shooter game, a virtual reality application, a three-dimensional map program, or a multiplayer game. A user uses a terminal device to operate a virtual object located in a virtual scene to perform activities including, but not limited to: at least one of body posture adjustment, crawling, walking, running, riding, jumping, driving, picking up, shooting, attacking, throwing, building a virtual building. Illustratively, the virtual object may be a virtual character, such as an emulated persona or a cartoon persona, or the like.
In some embodiments, the terminal device may implement the interactive processing method in the virtual scene provided in the embodiments of the present application by running a computer program. For example, the computer program may be a native program or a software module in an operating system; may be a Native Application (APP), i.e. a program that needs to be installed in an operating system to run, such as a game APP; the method can also be an applet, namely a program which can be run only by being downloaded into a browser environment; but also an applet that can be embedded in any APP. In general, the computer programs described above may be any form of application, module or plug-in.
In other embodiments, the embodiments of the present application may also be implemented by means of Cloud Technology, which refers to a hosting technology that unifies a series of resources such as hardware, software, and networks in a wide area network or a local area network, so as to realize the computation, storage, processing, and sharing of data.
Cloud technology is a general term for network technology, information technology, integration technology, management platform technology, application technology and the like based on the cloud computing business model; it can form a pool of resources that is used on demand, flexibly and conveniently. Cloud computing technology will become an important support, since the background services of technical network systems require a large amount of computing and storage resources. Cloud gaming (Cloud gaming), which may also be referred to as gaming on demand, is an online gaming technology based on cloud computing. Cloud gaming enables thin clients with relatively limited graphics processing and data computing capabilities to run high-quality games. In a cloud gaming scenario, the game does not run on the player's game terminal but on a cloud server, which renders the game scene into video and audio streams and transmits them to the player's game terminal over the network. The player's game terminal does not need strong graphics and data processing capabilities; it only needs basic streaming media playback capability and the ability to acquire the player's input instructions and send them to the cloud server.
For example, the server 200 in fig. 1B may be an independent physical server, or may be a server cluster or a distributed system formed by a plurality of physical servers, or may be a cloud server that provides cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs, and basic cloud computing services such as big data and artificial intelligence platforms. The terminal device may be, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, etc. The terminal device and the server 200 may be directly or indirectly connected through wired or wireless communication, which is not limited in the embodiment of the present application.
Referring to fig. 2, fig. 2 is a schematic structural diagram of a terminal device 400 (for example, may be the terminal device 400-1 and the terminal device 400-2 above) provided in an embodiment of the present application, and the electronic device may be the terminal device, and the terminal device 400 shown in fig. 2 includes: at least one processor 410, a memory 450, at least one network interface 420, and a user interface 430. The various components in terminal device 400 are coupled together by bus system 440. It is understood that the bus system 440 is used to enable connected communication between these components. The bus system 440 includes a power bus, a control bus, and a status signal bus in addition to the data bus. But for clarity of illustration the various buses are labeled in fig. 2 as bus system 440.
The processor 410 may be an integrated circuit chip having signal processing capabilities, such as a general-purpose processor (e.g., a microprocessor or any conventional processor), a digital signal processor (DSP, Digital Signal Processor), another programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
The user interface 430 includes one or more output devices 431, including one or more speakers and/or one or more visual displays, that enable presentation of the media content. The user interface 430 also includes one or more input devices 432, including user interface components that facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, other input buttons and controls.
Memory 450 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid state memory, hard drives, optical drives, and the like. Memory 450 optionally includes one or more storage devices physically remote from processor 410.
Memory 450 includes volatile memory or nonvolatile memory, and may also include both volatile and nonvolatile memory. The nonvolatile memory may be a read-only memory (ROM, Read Only Memory), and the volatile memory may be a random access memory (RAM, Random Access Memory). The memory 450 described in the embodiments herein is intended to comprise any suitable type of memory.
In some embodiments, memory 450 is capable of storing data to support various operations, examples of which include programs, modules and data structures, or subsets or supersets thereof, as exemplified below.
An operating system 451 including system programs, e.g., framework layer, core library layer, driver layer, etc., for handling various basic system services and performing hardware-related tasks, for implementing various basic services and handling hardware-based tasks;
a network communication module 452 for accessing other electronic devices via one or more (wired or wireless) network interfaces 420, the exemplary network interface 420 comprising: Bluetooth, Wireless Fidelity (WiFi), Universal Serial Bus (USB), etc.;
a presentation module 453 for enabling presentation of information (e.g., a user interface for operating peripheral devices and displaying content and information) via one or more output devices 431 (e.g., a display screen, speakers, etc.) associated with the user interface 430;
an input processing module 454 for detecting one or more user inputs or interactions from one of the one or more input devices 432 and translating the detected inputs or interactions.
In some embodiments, the interactive processing device in the virtual scene provided in the embodiments of the present application may be implemented in a software manner, and fig. 2 shows the interactive processing device 455 in the virtual scene stored in the memory 450, which may be software in the form of a program, a plug-in, and the like, including the following software modules: the display module 4551, the switching module 4552 and the drawing module 4553 are logical, and thus may be arbitrarily combined or further split according to the functions to be implemented. The functions of the respective modules will be described hereinafter.
The method for processing interaction in the virtual scene provided by the embodiment of the application will be described with reference to an exemplary application and implementation of the terminal device provided by the embodiment of the application.
Referring to fig. 3A, fig. 3A is a flowchart of an interactive processing method in a virtual scene according to an embodiment of the present application, and will be described with reference to the steps shown in fig. 3A.
In step 301, a first virtual scene is displayed and virtual props located in the first virtual scene are displayed.
In some embodiments, described with reference to fig. 1B, the first virtual scene and the virtual prop in it are displayed in a first man-machine interaction interface through which a first user (player) controls the first virtual object at the terminal device 400-1, while a second user (player) controls the second virtual object through a second man-machine interaction interface at the terminal device 400-2.
For example, in the first human-computer interaction interface, the first virtual scene may be displayed based on a perspective (first or third perspective) of the first virtual object.
By way of example, the virtual prop may be a Non-Player Character (NPC), and the virtual prop may be in the form of a box, flower, stone, or the like. The virtual prop is arranged at a specific position of the first virtual scene. Alternatively, the location of the virtual prop can be moved in the first virtual scene.
For example: a plurality of different preset positions (for example, 5) are set in the first virtual scene, and every preset time period (for example, one day) the virtual prop is switched from its current preset position to another preset position, the target preset position being chosen in any one of the following ways: 1. random switching; 2. switching according to descending order of the distance between the preset positions and the virtual object, for example: the switching period of the virtual prop's position is T; before each switch, the distance between the virtual object and each preset position is sampled, the preset position ranked first in descending order of distance is selected as the target preset position, and the virtual prop is switched from its current position to the target preset position; 3. cyclic switching according to the number of the preset position, for example: the preset positions include preset position 1, preset position 2 and preset position 3, and the virtual prop switches in sequence from preset position 1 to preset position 2, from preset position 2 to preset position 3, and from preset position 3 back to preset position 1. A sketch of these three modes is given below.
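The three switching modes can be illustrated with the following TypeScript sketch; the mode names, the Vec3 type and the function signature are assumptions made for the example (at least two preset positions are assumed), not part of the patent.

    interface Vec3 { x: number; y: number; z: number; }

    function distance(a: Vec3, b: Vec3): number {
      return Math.hypot(a.x - b.x, a.y - b.y, a.z - b.z);
    }

    // Returns the index of the next preset position for the virtual prop.
    function nextPropPosition(
      mode: "random" | "farthest" | "cyclic",
      presets: Vec3[],
      currentIndex: number,
      virtualObject: Vec3
    ): number {
      if (mode === "random") {
        // mode 1: switch at random to one of the *other* preset positions
        const others = presets.map((_, i) => i).filter(i => i !== currentIndex);
        return others[Math.floor(Math.random() * others.length)];
      }
      if (mode === "farthest") {
        // mode 2: the position ranked first in descending order of distance
        let best = 0;
        for (let i = 1; i < presets.length; i++) {
          if (distance(presets[i], virtualObject) >
              distance(presets[best], virtualObject)) {
            best = i;
          }
        }
        return best;
      }
      // mode 3: cycle through the positions by number, 1 -> 2 -> 3 -> 1 ...
      return (currentIndex + 1) % presets.length;
    }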
In some embodiments, the default display attribute of the virtual prop is a hidden state; displaying the virtual prop located in the first virtual scene may be achieved by: and controlling the virtual prop to switch from the hidden state to the visible state in response to the distance between the setting position of the virtual prop and the first virtual object being smaller than the distance threshold, so that the virtual prop appears at the setting position.
For example, the first virtual object and the second virtual object are both in the first virtual scene, and the positions of the first virtual object and the second virtual object may be the same or different. The distance between the setting position of the virtual prop and the first virtual object is smaller than a distance threshold (for example, 5 meters in the virtual scene), and the virtual prop is controlled to be switched from a hidden state to a visible state so that the virtual prop appears at the setting position.
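A minimal sketch of this proximity rule follows, assuming a Euclidean distance and the 5-metre threshold from the example; every identifier is hypothetical.

    const DISTANCE_THRESHOLD = 5; // metres in virtual-scene units (from the example)

    interface Vec3 { x: number; y: number; z: number; }

    interface PropState {
      position: Vec3;   // the set position of the virtual prop
      visible: boolean; // default display attribute: hidden (false)
    }

    function updatePropVisibility(prop: PropState, firstObject: Vec3): void {
      const d = Math.hypot(
        prop.position.x - firstObject.x,
        prop.position.y - firstObject.y,
        prop.position.z - firstObject.z
      );
      if (d < DISTANCE_THRESHOLD && !prop.visible) {
        prop.visible = true; // switch from hidden to visible at the set position
      }
    }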
In this embodiment of the application, the virtual prop is switched to the visible state only when a virtual object is close to it; that is, while the virtual prop is not needed it stays hidden and its model is not rendered, which saves the memory required to run the virtual scene and saves graphics computing resources.
For example, the first virtual object and the second virtual object are both in the first virtual scene, and the positions of the first virtual object and the second virtual object may be the same or different. When the setting position of the virtual prop is in the visual field of the first virtual object, the virtual prop is controlled to be switched from the hidden state to the visible state, so that the virtual prop appears at the setting position.
In the embodiment of the application, by judging whether the virtual prop is in the field of view of the first virtual object, the virtual prop is correspondingly displayed or hidden, the authenticity of the virtual scene is improved, the user experience of interaction between the virtual object and the virtual prop is improved, the memory required for running the virtual scene is saved, and the graphic computing resource is saved.
In some embodiments, the default display attribute of the virtual prop is a visible state. Displaying the virtual prop located in the first virtual scene may be achieved as follows: in response to the first virtual object and the second virtual object travelling as a team to the set position of the virtual prop, and the set position being within the field of view of the first virtual object, the virtual prop is displayed at the set position.
In step 302, in response to an interaction between the first virtual object and the virtual prop, the first virtual object and the second virtual object are controlled to switch to the second virtual scene, and a canvas control is displayed in the second virtual scene.
Here, the first virtual object has a combination relationship with the second virtual object.
By way of example, the interactive operation may be implemented by clicking, long pressing, approaching a virtual object, etc. on the virtual prop.
Referring to fig. 3B, fig. 3B is a flow chart of an interactive processing method in a virtual scene according to an embodiment of the present application; step 302 may be implemented by steps 3021 to 3022, which are described in detail below.
In step 3021, a first conversation item output by a virtual prop is displayed in a conversation control corresponding to the virtual prop.
Here, the first dialog item characterizes a drawing interaction function that triggers the virtual prop.
For example, the first conversation item may be displayed in a window maximized to fill the first man-machine interaction interface, or in a widget. Referring to fig. 4A, fig. 4A is a schematic diagram of interaction processing in a first virtual scene provided in an embodiment of the present application. The first dialogue item is displayed in the first virtual scene as a floating layer that fills the maximized window of the first man-machine interaction interface; the first virtual object in the first virtual scene is within the interaction range of the virtual prop 401A (for example, a box), and the first dialogue item 402A is displayed together with the prompt message "a box that looks utterly unremarkable" (see the prompt message area 403A). The first man-machine interaction interface further comprises an exit control 404A; when the exit control 404A is triggered, the first dialogue item 402A can be closed and its corresponding interface exited.
In some embodiments, the display of the conversation control may be automatically triggered, or the user controls the first virtual object to interact with the virtual prop, triggering the display of the conversation control in response to the interaction behavior of the first virtual object with the virtual prop.
In some embodiments, for automatically displaying dialog controls, step 3021 may be implemented by: and displaying the dialogue control of the virtual prop in response to the first virtual object being in the interaction range of the virtual prop, and displaying the first dialogue item output by the virtual prop in the dialogue control corresponding to the virtual prop.
Here, the interaction range is a radiation area centered on the set position of the virtual prop. For example, the radiation area may be geometrically regular or irregular, whether regular depends on whether the set position is obscured by an obstacle. Referring to fig. 4B, fig. 4B is a schematic diagram of an interaction range of virtual props provided in an embodiment of the present application; in the first virtual scene 405B, a spherical radiation area (interaction range 402B) is formed centering on the setting position of the virtual prop 401A, and the first virtual object 401B is within the interaction range. A portion of the interaction range 402B is occluded by the ground 403B of the first virtual scene 405B. The location of the second virtual object 404B in the first virtual scene 405B may be different from the first virtual object 401B.
In this embodiment of the application, the dialogue item is displayed automatically, which spares the user the operations of controlling the virtual object to trigger it; saving those operations in turn saves computing resources of the server or the client.
In step 3022, in response to the interactive operation selecting the first conversation item and meeting an on condition of the drawing interactive function, the first virtual object and the second virtual object are controlled to switch to the second virtual scene, and the canvas control is displayed.
In some embodiments, the on condition may be the following condition 1, or the following condition 1 and condition 2.
Condition 1: the level of the combination relationship is equal to or higher than a level threshold (e.g., level 2). Condition 2: the first virtual object and the second virtual object are in a team state. The second virtual scene is a virtual scene, different from the first virtual scene, in which the drawing interaction function can run.
For condition 1, the level of the combination relationship may be positively correlated with the value of either of the following parameters: the interaction frequency of the first virtual object and the second virtual object after the combination relationship is formed, and the length of time for which the first virtual object and the second virtual object have maintained the combination relationship.
For example, each level of the combination relationship has a corresponding maximum score value; when the current score value at the current level reaches that maximum, 1 is added to the current level to obtain a new level, and the score value is counted from 0 at the new level. The score value may be a weighted sum of the interaction frequency and the duration for which the combination relationship has been maintained, for example: interaction frequency × 2 + relationship duration × 3 = score value.
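The levelling rule can be written out as a short sketch; the ×2 and ×3 weights come from the example above, while the per-level maximum scores and all identifiers are invented for illustration.

    const MAX_SCORE_PER_LEVEL = [100, 200, 400]; // hypothetical per-level caps

    interface CombinationRelation {
      level: number; // current level, starting at 1
      score: number; // current score value within this level
    }

    function addScore(rel: CombinationRelation,
                      interactionFrequency: number,
                      holdDuration: number): void {
      // interaction frequency x 2 + relationship duration x 3 = score value
      rel.score += interactionFrequency * 2 + holdDuration * 3;
      const cap = MAX_SCORE_PER_LEVEL[rel.level - 1];
      if (cap !== undefined && rel.score >= cap) {
        rel.level += 1; // current level + 1 gives the new level
        rel.score = 0;  // the score value is counted from 0 at the new level
      }
    }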
For example, assuming that the open condition includes only condition 1, when the level of the combination relationship of the first virtual object and the second virtual object is higher than level 2 (level threshold), the first virtual object satisfies condition 1 due to symmetry of interaction of objects in the combination relationship, the second virtual object also satisfies condition 1, the first virtual object and the second virtual object are controlled to switch to the second virtual scene, and the canvas control is displayed.
For condition 2, the first virtual object can be in a team state with any one other virtual object, and the switching between the first virtual scene and the second virtual scene takes the virtual object set in the team state as the minimum switching unit.
By way of example, assume that the on condition includes condition 1 and condition 2. For example: the level of the combination relationship between the first virtual object and the second virtual object is greater than the level threshold (condition 1 is satisfied), but the first virtual object and the second virtual object have not formed a team (condition 2 is not satisfied), so the on condition is not met. For another example: the first virtual object teams up with a third virtual object (any virtual object other than the second virtual object), so condition 2 is not satisfied even though the level of the combination relationship between the first virtual object and the second virtual object is greater than the level threshold (condition 1 is satisfied), and the on condition is not met.
In some embodiments, whether the second virtual object teamed with the first virtual object is the virtual object having a combination relationship with the first virtual object may be determined as follows: acquire the identifier (ID) of the first virtual object, look it up in a database storing the IDs of virtual objects having combination relationships to obtain the ID of the virtual object combined with the first virtual object, and compare that ID with the ID of the first virtual object's teammate. If the two IDs differ, the teammate is not a virtual object having a combination relationship with the first virtual object; if they are the same, the teammate is.
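A sketch of this ID comparison, modelling the database as a simple in-memory map from object ID to partner ID (an assumption about how the combination relationships are stored):

    // objectId -> ID of the virtual object it is combined with (assumed storage)
    const combinationDb = new Map<string, string>();

    function isTeammateCombined(firstObjectId: string, teammateId: string): boolean {
      // look up the ID of the object combined with the first virtual object,
      // then compare it with the teammate's ID
      const partnerId = combinationDb.get(firstObjectId);
      return partnerId !== undefined && partnerId === teammateId;
    }

    // Usage: combinationDb.set("A", "B"); isTeammateCombined("A", "B") // true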
In some embodiments, controlling the switching of the first virtual object and the second virtual object to the second virtual scene may be achieved by: the current location (which may be the same or different) of the first virtual object and the second virtual object, respectively, in the first virtual scene is updated to the same location in the second virtual scene.
The first virtual object and the second virtual object are located at the same position in the second virtual scene, with the same viewing direction (for example, facing the sky) and the same viewing angle (for example, a 30-degree elevation) at that position, so that they share the same field of view. This shared field of view serves as the image merging area of the second virtual scene for displaying the virtual image subsequently drawn cooperatively by the first virtual object and the second virtual object, which makes it convenient for the user controlling the first virtual object and the user controlling the second virtual object to view the image through the perspective of their respective virtual objects (for example, a first-person or third-person perspective).
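A minimal sketch of this switch follows, assuming a shared transform per object; the 30-degree elevation is from the example, and the remaining names are assumptions.

    interface Vec3 { x: number; y: number; z: number; }

    interface Transform {
      sceneId: string;      // which virtual scene the object is in
      position: Vec3;       // same location for both objects
      yawDegrees: number;   // same viewing direction (e.g. facing the sky)
      pitchDegrees: number; // same viewing angle, e.g. a 30-degree elevation
    }

    function switchToSecondScene(first: { transform: Transform },
                                 second: { transform: Transform },
                                 target: Transform): void {
      // update each object's current location in the first scene to the
      // identical location and view in the second scene, so both share the
      // same field of view (the image merging area for the virtual image)
      first.transform = { ...target };
      second.transform = { ...target };
    }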
In some embodiments, in response to the open condition not being met, a second dialog item output by the virtual prop is displayed in the dialog control corresponding to the virtual prop, where the second dialog item states the part of the open condition of the drawing interaction function that is not yet satisfied. Referring to fig. 4C, fig. 4C is a schematic diagram of interaction processing in a first virtual scene provided in an embodiment of the present application; in the first man-machine interaction interface, a second dialog item 401C is displayed. The second dialog item 401C includes the part of the open condition not satisfied by the first virtual object, for example: "open after forming a team", indicating that the first virtual object has not yet teamed up with the virtual object having the combination relationship with it.
In some embodiments, a transitional animation is played before the canvas control is displayed, wherein the transitional animation characterizes the first virtual object and the second virtual object switching from the first virtual scene to the same location in the second virtual scene.
By way of example, the same location may be a location (a location where the field of view is open) that facilitates viewing of a virtual image that blends into the second virtual scene. For example: lake sides, seasides, mountain tops, roofs, plains, and the like.
In some embodiments, prior to playing the transitional animation, the transitional animation may be obtained by: acquiring a first model of a first virtual object and a second model of a second virtual object; and fusing the skeleton animation and the character model based on the first model, the second model and the transition animation template (comprising action data and second virtual scene data) to obtain the transition animation.
By way of example, the data corresponding to the model of a virtual object includes: body type data (the virtual object's build, etc.), five-sense-organ data (the virtual object's facial appearance, hairstyle, etc.), and component data (clothing worn by the virtual object).
Illustratively, the transition animation template includes a preset skeletal animation (comprising action data and a bone model) for each body type of virtual object, together with the data of the second virtual scene. Skeletal animation is a kind of model animation in which the model of a virtual object has interconnected "bones", and the animation is generated by changing the orientation and position of the bones to drive the model's motion.
As an example of fusing the skeletal animation with the character model, assume that the virtual object is a human virtual object and the body types include adult male, adult female, boy, and girl. The body type (e.g., adult male or adult female), five-sense-organ data, and component data of the first virtual object and the second virtual object are read and bound to the skeletal animation of the matching body type to obtain the action animations of the first and second virtual objects; these action animations are then superimposed as the foreground on the scene data of the second virtual scene as the background, yielding a transition animation in which the first virtual object and the second virtual object are the protagonists.
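The fusion step can be sketched as follows; the body-type names come from the example above, and everything else (types, field names, return shape) is an assumption for illustration.

    type BodyType = "adultMale" | "adultFemale" | "boy" | "girl";

    interface CharacterModel {
      bodyType: BodyType;
      faceData: unknown;      // five-sense-organ data: facial appearance, hairstyle
      componentData: unknown; // clothing worn by the virtual object
    }

    interface TransitionTemplate {
      skeletalAnimations: Record<BodyType, unknown>; // action data + bone model
      secondSceneData: unknown;                      // second virtual scene data
    }

    function buildTransitionAnimation(first: CharacterModel,
                                      second: CharacterModel,
                                      template: TransitionTemplate) {
      // bind each model's data to the skeletal animation of its body type
      const firstClip = {
        skeleton: template.skeletalAnimations[first.bodyType],
        skin: { face: first.faceData, components: first.componentData },
      };
      const secondClip = {
        skeleton: template.skeletalAnimations[second.bodyType],
        skin: { face: second.faceData, components: second.componentData },
      };
      // action animations as foreground, second virtual scene as background
      return { foreground: [firstClip, secondClip],
               background: template.secondSceneData };
    }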
With continued reference to fig. 3A, in step 303, at least one first scene material drawn based on the identity of the first virtual object and at least one second scene material drawn based on the identity of the second virtual object are displayed in the canvas control in response to the drawing operation in the canvas control.
In some embodiments, the drawing operation includes at least one of: drawing brand new materials based on a drawing tool, and editing operations based on candidate materials, wherein the types of the editing operations comprise: placing, moving, turning, zooming, rotating, deleting and changing the color; the types of the first scene material and the second scene material include: images, characters.
In some embodiments, the canvas control includes at least one first canvas area and at least one second canvas area, the first canvas area being an area that is drawn based on the identity of the first virtual object and the second canvas area being an area that is drawn based on the identity of the second virtual object.
For example, in the first man-machine interaction interface, the first canvas area may be distinguished in any of the following ways, so as to tell apart the canvas areas corresponding to different virtual object identities: displaying the boundary of the second canvas area normally while highlighting the boundary of the first canvas area; displaying the boundary of the second canvas area normally while marking the boundary of the first canvas area with an animated special effect; or labelling the name or avatar of the first virtual object above the first canvas area and the name or avatar of the second virtual object above the second canvas area.
This embodiment of the present application is described from the perspective of the first man-machine interaction interface as an example; the above ways of displaying the first canvas area also apply to other man-machine interaction interfaces, so as to distinguish the canvas areas corresponding to different virtual object identities.
For example: the canvas control is divided into a nine-square (comprising 9 sections), with 5 sections being a first canvas area for drawing based on the identity of the first virtual object and 4 sections being a second canvas area for drawing based on the identity of the second virtual object. The number of first canvas areas may be different from the number of second canvas areas.
For another example: referring to FIG. 4I, FIG. 4I is a schematic diagram of a canvas control provided by an embodiment of the present application; the canvas control is divided into a plurality of sections, wherein the first canvas area 401I is characterized by a blank area. The second canvas area 402I is characterized by a striped shadow region.
By way of example, the first canvas area and the second canvas area may be divided in any manner: by straight lines or curves, into equal or unequal areas. First canvas areas may be adjacent to one another or interleaved with second canvas areas, and the arrangement is not limited. The division may also be based on the positional relationship of the virtual objects in the second virtual scene, for example: in the map of the second virtual scene, if the first virtual object is located to the left of the second virtual object, then the canvas area on the left side of the canvas control is used by the first virtual object. Usage rights for canvas areas may also be granted based on the captain or member identity of the virtual objects in the team, for example: the first virtual object is the captain and the second virtual object is a member; the identity of the first virtual object carries the right to allocate canvas areas, and when the user selects the canvas area on the left side, the left canvas area of the canvas control is used by the first virtual object. A sketch of one such partition is given below.
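For the nine-square example, one possible partition and permission check looks like this; the 5-versus-4 split and all identifiers are assumptions.

    interface CanvasArea {
      index: number;         // 0..8 in a nine-square (3x3) grid
      ownerObjectId: string; // identity allowed to draw in this section
    }

    // Divide the grid so that 5 sections belong to the first virtual object
    // and 4 to the second, matching the example above; the concrete split
    // (first five indices) is arbitrary.
    function buildNineSquare(firstId: string, secondId: string): CanvasArea[] {
      return Array.from({ length: 9 }, (_, i) => ({
        index: i,
        ownerObjectId: i < 5 ? firstId : secondId,
      }));
    }

    // Permission check before applying a drawing operation to a section.
    function mayDraw(area: CanvasArea, objectId: string): boolean {
      return area.ownerObjectId === objectId;
    }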
Referring to fig. 3C, fig. 3C is a flowchart of an interactive processing method in a virtual scene according to an embodiment of the present application, and step 303 may be implemented by the following steps 3031C to 3033C, which are described in detail below.
In step 3031C, at least one first scene material drawn based on the identity of the first virtual object is displayed in the first canvas area in response to the drawing operation in the canvas control.
By way of example, this embodiment is described for the first human-computer interaction interface corresponding to the first virtual object; in step 3031C, the drawing operation in the canvas control is performed based on the identity of the first virtual object.
In some embodiments, the at least one second scene material is drawn in a second human-machine interaction interface based on an identity of the second virtual object and is drawn in a second canvas area in a canvas control displayed by the second human-machine interaction interface, the second human-machine interaction interface being for displaying the second virtual scene based on a perspective (first or third perspective) of the second virtual object, and the canvas control being displayed in the second virtual scene.
The canvas control comprises: an image editing tool (e.g., an image editing tool that functions to paste, cut, rotate, select, erase, fill in colors, paint, etc.), a first canvas area, a second canvas area, and a list of materials.
Referring to FIG. 4E, FIG. 4E is a schematic diagram of a canvas control provided by an embodiment of the present application. The canvas control comprises a drawing interface including a first canvas area 401E, a second canvas area 402E and a material list 403E. The first canvas area 401E on the left belongs to the first virtual object (the prompt "please draw the left half of the image" above the first canvas area 401E indicates that scene materials in it are drawn under the identity of the first virtual object), and the second canvas area 402E on the right belongs to the second virtual object (the prompt "D2 is drawing the other half of the image" above the second canvas area 402E indicates that scene materials in it are drawn under the identity of the second virtual object). Each user may draw in the corresponding canvas area, under the identity of the corresponding virtual object, using the scene materials provided in the material list 403E. The material list 403E includes a large number of scene materials, for example: cloud, lucky grass, heart, crown, meteor, crescent, notes, etc.
Before step 3032C or step 3033C, referring to fig. 1B, the terminal device 400-1 corresponding to the first man-machine interaction interface receives at least one second scene material sent by the server 200, where the at least one second scene material is sent to the server 200 by the terminal device 400-2 running the second man-machine interaction interface.
In step 3032C, at least one second scene material drawn based on the identity of the second virtual object is displayed in real time in the second canvas area.
Referring to FIG. 4F, FIG. 4F is a schematic diagram of a canvas control provided by an embodiment of the present application. At least one second scene material drawn based on the identity of the second virtual object is synchronously displayed in the second canvas area 402E. In the case where the canvas control includes a first canvas area and a second canvas area, the first scene material and the second scene material may be distinguished by the canvas area in which the scene material is located.
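By way of example, the real-time synchronization path can be sketched as follows, assuming a WebSocket-style push channel from the server; the message shape and renderer interface are illustrative assumptions rather than this application's protocol.

```typescript
interface SceneMaterial {
  materialId: string; // e.g. "cloud", "crescent"
  ownerId: string;    // id of the virtual object whose identity drew it
  x: number;
  y: number;
}

interface CanvasRenderer {
  draw(area: "first" | "second", material: SceneMaterial): void;
}

// On terminal 400-1, every material drawn on terminal 400-2 arrives through
// the server and is drawn immediately into the second canvas area.
function attachRealtimeSync(
  socket: WebSocket,
  renderer: CanvasRenderer,
  secondObjectId: string
): void {
  socket.addEventListener("message", (event: MessageEvent) => {
    const material = JSON.parse(event.data as string) as SceneMaterial;
    if (material.ownerId === secondObjectId) {
      renderer.draw("second", material); // displayed in real time
    }
  });
}
```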
In the embodiment of the present application, displaying in real time the at least one second scene material drawn based on the identity of the second virtual object improves the synchronicity of interaction processing in the virtual scene, facilitates communication between users, and improves the sense of reality of the virtual scene.
Alternatively, in step 3033C, prompt information indicating that the second virtual object is drawing is displayed in the second canvas area, and after drawing based on the identity of the second virtual object is completed, the at least one second scene material drawn based on the identity of the second virtual object is displayed.
When drawing based on the identity of the second virtual object is completed, the at least one second scene material sent by the server is synchronized into the second canvas area of the canvas control in the first human-computer interaction interface. With continued reference to FIG. 4E, while drawing based on the identity of the second virtual object is not complete, a "drawing" prompt is displayed in the second canvas area 402E.
For example, steps 3032C and 3033C are alternatives, and neither has a fixed execution order relative to step 3031C.
In the embodiment of the present application, displaying the at least one second scene material only after drawing based on the identity of the second virtual object is completed saves computing resources and the memory required to run the virtual scene, improves the fluency with which users draw images in the virtual scene, and improves the users' interactive experience.
In some embodiments, in the case where the canvas control includes the first canvas area and the second canvas area, referring to fig. 3D, fig. 3D is a schematic flow chart of an interaction processing method in the virtual scene provided in the embodiments of the present application, and the rendering completion or the clearing of the rendered scene material may be further confirmed through the following steps 3034D to 3036D, which is described in detail below.
In step 3034D, a first complete drawing button and a first empty button are displayed in the canvas control.
For example, step 3034D may be performed prior to step 3031C or in synchronization with step 3031C. With continued reference to FIG. 4E, a first finish drawing button 404E and a first empty button 405E are displayed on the right side of the canvas control. In particular implementations, the first draw completion button 404E and the first empty button 405E may be disposed at other locations.
In step 3035D, the first scene material drawn in the first canvas area is deleted in response to the trigger operation for the first empty button.
For example: referring to fig. 4E, there are 5 first scene materials in the first canvas area 401E, and in response to a click (trigger) operation of the first clear button, these 5 first scene materials are deleted.
In step 3036D, at least one first scene material drawn based on the identity of the first virtual object is synchronized into the second human-computer interaction interface to display the at least one first scene material in a first canvas area in a canvas control displayed in the second human-computer interaction interface in response to a trigger operation for the first completion draw button or a draw countdown end.
Here, the second human-computer interaction interface is configured to display the second virtual scene based on a perspective (first or third perspective) of the second virtual object.
For example, the second virtual scene is displayed based on the perspective (first-person or third-person) of the second virtual object; that is, the first scene material and the second scene material are correspondingly displayed, from the perspective of the second virtual object, in the second human-computer interaction interface of the terminal device 400-2 used by the second user.
For example, the draw countdown may be 5 minutes, and when the draw countdown is over, each first scene material included in the first canvas area is synchronized to a first canvas area in the canvas controls of the second human-machine interactive interface display.
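By way of example, the "finish drawing or countdown ends, then synchronize" behavior can be sketched as follows. The REST endpoint (/api/canvas/sync) and the promise that resolves when the finish button is pressed are illustrative assumptions, not part of this application.

```typescript
interface DrawSession {
  firstCanvasMaterials: object[]; // materials drawn with the first object's identity
  countdownMs: number;            // drawing countdown, e.g. 5 minutes = 300_000 ms
}

async function syncOnFinish(
  session: DrawSession,
  finishPressed: Promise<void> // resolves when the finish drawing button is triggered
): Promise<void> {
  const countdownOver = new Promise<void>((resolve) =>
    setTimeout(resolve, session.countdownMs)
  );
  // Whichever happens first ends the drawing phase for this client.
  await Promise.race([finishPressed, countdownOver]);
  // Upload every first scene material so the server can forward it to the
  // canvas control displayed by the second human-computer interaction interface.
  await fetch("/api/canvas/sync", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(session.firstCanvasMaterials),
  });
}
```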
In some embodiments, the user may be prevented from drawing too much scene material to occupy additional running memory by setting an upper limit on the number of scene materials drawn. For example: referring to fig. 4E, the upper limit of the scene material of the character type is 8, the scene material of 4 characters is currently drawn, the upper limit of the scene material of the pattern type is 10, and the scene material of 3 patterns is currently drawn.
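By way of example, the per-type upper limit can be sketched as follows; the concrete limits follow the example above, while the function and field names are illustrative.

```typescript
// Upper limit per scene-material type, mirroring the example above.
const MATERIAL_LIMITS: Record<string, number> = { character: 8, pattern: 10 };

function canPlaceMaterial(
  placedCounts: Record<string, number>,
  type: "character" | "pattern"
): boolean {
  // Refuse placement once the type's upper limit is reached, preventing a
  // user from drawing enough scene material to occupy extra running memory.
  return (placedCounts[type] ?? 0) < MATERIAL_LIMITS[type];
}
```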
By way of example, step 3035D is not causally related to the execution of step 3036D, and step 3036D may be executed without executing step 3035D. Alternatively, after execution of step 3035D, step 3036D is executed.
In some embodiments, referring to fig. 3E, fig. 3E is a flowchart illustrating an interactive processing method in a virtual scene provided in the embodiments of the present application, after step 3036D, canvas controls may be closed by the following steps 3037D to 3039D, which are described in detail below.
In step 3037D, a first confirm end draw button and a first share button are displayed in the canvas control.
For example, when the first drawing completion button in the first human-computer interaction interface and that in the second human-computer interaction interface have both been triggered, that is, after the terminal device 400-1 and the terminal device 400-2 have each executed step 3036D, execution proceeds to step 3037D: the material list in the canvas control is hidden, and the first confirm end drawing button and the first sharing button are displayed.
Referring to fig. 5A, fig. 5A is a schematic diagram of a sharing interface provided in an embodiment of the present application. The sharing interface includes a confirmation end drawing button 501A (first confirmation end drawing button), a sharing button 502A (first sharing button), and a virtual image 503A.
In step 3038D, in response to a trigger operation for the first confirm end drawing button, or the end of the confirmation countdown, the process proceeds to displaying the virtual image blended into the second virtual scene.
With continued reference to FIG. 5A, taking the first virtual object side as an example, the user may click the confirm end drawing button 501A (or wait for the confirmation countdown, for example 5 seconds, to end); the canvas control is then closed, and the display switches to the second virtual scene in the field of view of the first virtual object. The drawn virtual image, comprising the first scene material and the second scene material, is blended into the second virtual scene.
In step 3039D, in response to a trigger operation for the first sharing button, a first snapshot image of the second virtual scene is sent to the social network, the first snapshot image including the second virtual scene blended with the virtual image.
With continued reference to FIG. 5A, when the sharing button 502A is triggered, a first snapshot image with the second virtual scene as background and the virtual image as foreground is captured and sent to the social network, for example a social platform associated with the game. The embodiment of the present application thus realizes one-click sharing of snapshot images.
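By way of example, the capture of a snapshot image with the second virtual scene as background and the virtual image as foreground can be sketched with the browser canvas API; the sharing endpoint (/api/social/share) is a placeholder assumption, not a real platform API.

```typescript
function captureSnapshot(
  scene: HTMLCanvasElement,      // rendered second virtual scene (background)
  virtualImage: HTMLImageElement // drawn virtual image (foreground)
): string {
  const snap = document.createElement("canvas");
  snap.width = scene.width;
  snap.height = scene.height;
  const ctx = snap.getContext("2d")!;
  ctx.drawImage(scene, 0, 0);        // background layer
  ctx.drawImage(virtualImage, 0, 0); // foreground overlay
  return snap.toDataURL("image/png"); // the snapshot image
}

async function shareToSocialNetwork(dataUrl: string): Promise<void> {
  // Placeholder endpoint standing in for whatever platform is selected.
  await fetch("/api/social/share", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ image: dataUrl }),
  });
}
```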
In some embodiments, when the sharing button 502A is triggered, the first snapshot image is saved to the terminal device 400-1 and shared according to the social platform selected by the user.
According to the embodiment of the application, the sharing entrance is provided, so that the game snapshot images are shared quickly, the operation flow is saved, and the interactive efficiency is improved.
In some embodiments, the canvas control comprises an integral canvas area, any location in the canvas area being capable of rendering based on the identity of the first virtual object or based on the identity of the second virtual object; referring to fig. 3F, fig. 3F is a flowchart of an interactive processing method in a virtual scene according to an embodiment of the present application, and step 303 may be implemented by the following steps 3031F to 3033F, which are described in detail below.
In step 3031F, at least one first scene material drawn based on the identity of the first virtual object is displayed in the canvas area in response to the drawing operation in the canvas control.
By way of example, the embodiment of the present application illustrates a first human-computer interaction interface corresponding to a first virtual object, where step 3031F is in response to a drawing operation in a canvas control based on an identity of the first virtual object.
In some embodiments, the at least one second scene material is drawn in a second human-machine interaction interface based on an identity of the second virtual object and is drawn in a canvas area in a canvas control displayed by the second human-machine interaction interface, the second human-machine interaction interface being for displaying the second virtual scene based on a perspective (first or third perspective) of the second virtual object, and the canvas control being displayed in the second virtual scene.
Referring to FIG. 4G, FIG. 4G is a schematic diagram of a canvas control provided by embodiments of the present application. The canvas control, i.e., the drawing interface, includes an integral canvas area 401G and a material list 403E, and two users can draw in the integral canvas area 401G using scene materials provided in the material list 403E based on the identities of the respective virtual objects.
Before step 3032F or step 3033F, referring to fig. 1B, at least one second scene material sent by the server 200 is received at the terminal device 400-1 corresponding to the first man-machine interaction interface, where the at least one second scene material is sent to the server 200 by the terminal device 400-2 running the second man-machine interaction interface.
In step 3032F, at least one second scene material drawn based on the identity of the second virtual object is displayed in real time in the canvas area.
For example, during drawing, scene materials drawn based on the identities of different virtual objects may be displayed in a differentiated manner so that they can be distinguished. Taking the perspective of the first human-computer interaction interface as an example: the first scene material is displayed in its original state and the second scene material is displayed in dashed form; or the first scene material is displayed in its original state and the second scene material is displayed semi-transparently; or the first scene material is displayed in its original state and the second scene material is displayed with a flashing animation effect (for example, periodically, with a period length of 1 second). Each scene material may also be annotated with the name or avatar of the virtual object that drew it.
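By way of example, the differentiated display of remotely drawn materials can be sketched with DOM styling; the style names mirror the options above, and the interfaces are illustrative assumptions.

```typescript
type RemoteStyle = "dashed" | "translucent" | "flashing";

interface MaterialView {
  ownerId: string;
  element: HTMLElement; // DOM node rendering the scene material
}

function styleByOwner(
  view: MaterialView,
  localObjectId: string,
  style: RemoteStyle
): void {
  if (view.ownerId === localObjectId) return; // own material: original state
  switch (style) {
    case "dashed":
      view.element.style.outline = "2px dashed currentColor";
      break;
    case "translucent":
      view.element.style.opacity = "0.5";
      break;
    case "flashing":
      // Periodic flashing with a period length of 1 second.
      view.element.animate([{ opacity: 1 }, { opacity: 0.3 }, { opacity: 1 }], {
        duration: 1000,
        iterations: Infinity,
      });
      break;
  }
  // Annotate with the id of the virtual object that drew the material.
  view.element.title = view.ownerId;
}
```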
In the embodiment of the present application, the view angle of the first man-machine interaction interface is taken as an example for explanation, and the above manner of displaying the first scene material is also applicable to other man-machine interaction interfaces, so as to distinguish the scene materials drawn based on the identities of different virtual objects.
Referring to FIG. 4H, FIG. 4H is a schematic diagram of a canvas control provided by an embodiment of the present application. In the first human-computer interaction interface, at least one second scene material drawn based on the identity of the second virtual object is synchronously displayed in the canvas area 401G. The editing control 401H is the editing control being used by the user performing a drawing operation with the identity of the first virtual object; the editing control 402H is the editing control being used by the user performing a drawing operation with the identity of the second virtual object. The editing control 402H is displayed in dashed form to indicate that it corresponds to the second virtual object, and the at least one second scene material drawn based on the identity of the second virtual object is likewise displayed in dashed form to distinguish it from the first scene material.
In the embodiment of the application, at least one second scene material drawn based on the identity of the second virtual object is displayed in real time, so that the synchronism of interaction processing in the virtual scene is improved, the situation that the scene materials are overlapped can be avoided, communication between users is facilitated, and the sense of reality of the virtual scene is improved.
Alternatively, in step 3033F, prompt information indicating that the second virtual object is drawing is displayed in the canvas area, and after drawing based on the identity of the second virtual object is completed, the at least one second scene material drawn based on the identity of the second virtual object is displayed.
When drawing based on the identity of the second virtual object is completed, the at least one second scene material sent by the server is superimposed onto the integral canvas area of the canvas control in the first human-computer interaction interface. With continued reference to FIG. 4G, only scene material drawn with the identity of the first virtual object is shown in FIG. 4G; D2 is the name of the second virtual object, and the prompt "drawing an image" next to the name indicates that drawing based on the identity of the second virtual object is not yet complete.
For example, steps 3032F and 3033F are alternatives, and neither has a fixed execution order relative to step 3031F.
In the embodiment of the present application, displaying the at least one second scene material only after drawing based on the identity of the second virtual object is completed saves computing resources and the memory required to run the virtual scene, improves the fluency with which users draw images in the virtual scene, and improves the users' interactive experience.
In some embodiments, in the case where the canvas control includes a complete canvas area, referring to fig. 3G, fig. 3G is a schematic flow chart of an interaction processing method in the virtual scene provided in the embodiments of the present application, and the rendering completion or the clearing of the rendered scene material may be further confirmed through the following steps 3034G to 3036G, which is described in detail below.
In step 3034G, a second complete drawing button and a second empty button are displayed in the canvas control.
For example, step 3034G may be performed before step 3031F or in synchronization with step 3031F. With continued reference to FIG. 4G, a second finish drawing button 403G and a second empty button 402G are displayed on the right side of the canvas control. In a specific implementation, the second finish drawing button 403G and the second empty button 402G may be disposed at other locations.
In step 3035G, the first scene material drawn in the canvas area based on the identity of the first virtual object is deleted in response to the trigger operation for the second empty button.
For example: referring to fig. 4G, in the case where the second scene material drawn in the canvas area based on the identity of the second virtual object is not displayed in real time, there are 5 first scene materials in the canvas area 401G; in response to a click (trigger) operation on the second empty button, these 5 first scene materials are deleted.
As another example: in the case where the second scene material drawn in the canvas area based on the identity of the second virtual object is displayed in real time, the first scene material drawn in the canvas area based on the identity of the first virtual object is deleted, the deletion operation is synchronized into the second human-computer interaction interface, and the first scene material is accordingly hidden in the second human-computer interaction interface.
In step 3036G, at least one first scene material drawn based on the identity of the first virtual object is synchronized into the second human-computer interaction interface to display the at least one first scene material in a canvas area in a canvas control displayed by the second human-computer interaction interface in response to a trigger operation for the second completion draw button or a draw countdown end.
Here, the second human-computer interaction interface is configured to display the second virtual scene based on a perspective (first or third perspective) of the second virtual object.
For example, step 3035G has no logical dependency on step 3036G: step 3036G may be performed without performing step 3035G, or step 3036G may be performed after step 3035G is performed.
For example, the draw countdown may be 5 minutes, and when the draw countdown is over, each first scene material included in the canvas area is synchronized into a position corresponding to the canvas area in the canvas control displayed by the second human-machine interaction interface. In the case of drawing using the same canvas area, there may be a case where there is an overlap between the first scene material and the second scene material.
In some embodiments, referring to fig. 3H, fig. 3H is a flow chart of an interactive processing method in a virtual scene provided in the embodiments of the present application, after step 3036G, canvas controls may be closed by the following steps 3037G to 3039G, which are described in detail below.
In step 3037G, a second confirm end draw button and a second share button are displayed in the canvas control.
For example, when the second drawing completion button in the first human-computer interaction interface and that in the second human-computer interaction interface have both been triggered, that is, after the terminal device 400-1 and the terminal device 400-2 have each executed step 3036G, execution proceeds to step 3037G: the material list in the canvas control is hidden, and the second confirm end drawing button and the second sharing button are displayed.
Referring to fig. 5A, fig. 5A is a schematic diagram of a sharing interface provided in an embodiment of the present application. The sharing interface includes a confirmation end drawing button 501A (second confirmation end drawing button), a sharing button 502A (second sharing button), and a virtual image 503A.
In step 3038G, in response to a trigger operation for the second confirm end drawing button, or the end of the confirmation countdown, the process proceeds to displaying the virtual image blended into the second virtual scene.
By way of example, the virtual image corresponding to step 3038G is drawn in a canvas control that includes only one integral canvas area. Although the same sharing-interface schematic as in step 3038D is reused for explanation, the composition of the virtual image actually shared differs, and steps 3038G and 3038D are distinct. With continued reference to FIG. 5A, taking the first virtual object side as an example, the user may click the confirm end drawing button 501A (or wait for the confirmation countdown, for example 5 seconds, to end); the canvas control is then closed, and the display switches to the second virtual scene in the field of view of the first virtual object. The drawn virtual image, comprising the first scene material and the second scene material, is blended into the second virtual scene.
In step 3039G, in response to a trigger operation for the second sharing button, a second snapshot image of the second virtual scene is sent to the social network, the second snapshot image including the second virtual scene blended with the virtual image.
With continued reference to FIG. 5A, when the sharing button 502A is triggered, a second snapshot image with the second virtual scene as background and the virtual image as foreground is captured and sent to the social network, for example a social platform associated with the game. The embodiment of the present application thus realizes one-click sharing of snapshot images.
In some embodiments, when the share button 502A is triggered, the second snapshot image is saved to the terminal device, and the second snapshot image is shared according to the social platform selected by the user.
In some embodiments, referring to fig. 3I, fig. 3I is a flow chart of an interaction processing method in a virtual scene provided in the embodiment of the present application, after step 303, a virtual image may be shared through the following steps 3041 to 3043, which are described in detail below. The execution of steps 3041 to 3043 may be before or after step 304.
In step 3041, a third share button and an open scene information button are displayed in the virtual image.
In some embodiments, when the first sharing button or the second sharing button is triggered, a share details page is formed based on the virtual image, and a third sharing button and an open scene information button are displayed in the virtual image. Alternatively, after step 303, the third sharing button and the open scene information button are displayed in the virtual image. The open scene information button controls whether the shared snapshot image includes information of the virtual scene. For example, when the open scene information button is in the on state, the snapshot image shared by triggering the third sharing button includes the related information of the first virtual object and the second virtual object in the first virtual scene; when the open scene information button is in the off state, the snapshot image shared by triggering the third sharing button does not include the related information of the first virtual object and the second virtual object in the first virtual scene.
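By way of example, the effect of the open scene information button on the shared snapshot can be sketched as follows; the request shape and field names are illustrative assumptions rather than this application's data model.

```typescript
interface SnapshotRequest {
  image: string; // data URL of the snapshot image
  sceneInfo?: {
    firstObjectName: string;
    secondObjectName: string;
    gameInfo: string; // displayed in watermark form
  };
}

function buildSnapshotRequest(
  image: string,
  openSceneInfo: boolean,
  info: { firstObjectName: string; secondObjectName: string; gameInfo: string }
): SnapshotRequest {
  // Off state: the shared image omits virtual-object and game information,
  // protecting user privacy while still allowing the snapshot to be shared.
  return openSceneInfo ? { image, sceneInfo: info } : { image };
}
```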
Referring to fig. 5B, fig. 5B is a schematic diagram of a sharing interface provided in an embodiment of the present application, which shows a sharing detail page. The share details page includes an open scene information button 502B, a share button 501B (third share button). The share button 501B includes icons of a variety of different social platforms, as well as save to local icons. The user can select the corresponding icon, and share the virtual image to the social platform or local corresponding to the icon.
In step 3042, in response to the triggering operation of the third sharing button, if the open scene information button is in an open state, a third snapshot image of the second virtual scene is sent to the social network.
The third snapshot image comprises the second virtual scene blended with the virtual image, together with the related information of the first virtual object and the second virtual object in the first virtual scene.
With continued reference to fig. 5B, when the open scene information button 502B is in the on state, the shared virtual image includes the information of the first virtual object, the information of the second virtual object, and the game information (displayed in watermark form).
In step 3043, if the open scene information button is in the closed state, a fourth snapshot image of the second virtual scene is sent to the social network.
The fourth snapshot image comprises a second virtual scene fused with the virtual image and the first virtual object.
With continued reference to fig. 5B, when the state corresponding to the open scene information button 502B is the closed state, the shared virtual image does not include the information of the first virtual object, the information of the second virtual object, and the game information (shown in the form of a watermark).
In the embodiment of the application, by setting the open scene information button, a user can conveniently select whether to share the information of the virtual object in the virtual scene and the information corresponding to the virtual scene. When the state corresponding to the open scene information button is in the closed state, the user privacy information can be protected under the condition that the requirement of sharing the snapshot image by the user is met, and the game experience of the user is improved.
In some embodiments, when the canvas control is displayed in the first human-computer interaction interface, the terminal device 400-1 synchronously records the screen to capture the process in which the virtual image is drawn cooperatively with the identity of the first virtual object and the identity of the second virtual object. While the first user and the second user draw, an animation is displayed of the images drawn by the two users, each with the identity of the corresponding virtual object, being spliced into the virtual image. In response to the confirmation ending the drawing operation in the canvas control, the recorded animation of the drawing process is played. This display mode makes the switch between the canvas control and the second virtual scene smoother, enriches the display effects of interaction processing in the virtual scene, improves the user's sense of immersion in the game, and improves the interactive experience.
In some embodiments, in response to a confirmation in the canvas control ending the drawing operation, the drawn virtual image is set as an avatar of a virtual object drawing the virtual image, or the drawn virtual image is set as wallpaper (e.g., desktop wallpaper, lock screen wallpaper, etc.) of a terminal device of the corresponding virtual object.
With continued reference to FIG. 3A, in step 304, a virtual image blended into the second virtual scene is displayed in response to the confirmation ending the drawing operation in the canvas control.
Here, the virtual image includes at least one first scene material and at least one second scene material.
By way of example, ending the drawing operation in response to a confirmation in the canvas control may be accomplished in at least one of the following ways: in response to a trigger operation for a confirm end drawing button (the first or second confirm end drawing button above), stopping displaying the canvas control in the second virtual scene; in response to a trigger operation for a sharing button (the first or second sharing button above) displayed together with the confirm end drawing button, stopping displaying the canvas control in the second virtual scene; or in response to the end of the confirmation countdown, stopping displaying the canvas control in the second virtual scene.
By way of example, stopping the display of the canvas control in the second virtual scene may be accomplished by hiding the material list and the editing controls included in the canvas control while retaining the virtual image.
In some embodiments, the first scene material and the second scene material are in two-dimensional form; the first human-machine interaction interface is presented by a virtual camera following the first virtual object.
Referring to fig. 3J, fig. 3J is a flowchart of an interactive processing method in a virtual scene provided in the embodiment of the present application, and step 304 may be implemented by the following steps 3044 to 3045, which are specifically described below.
In step 3044, the second virtual scene is used as a background, and the virtual image is drawn as a foreground onto a patch in the second virtual scene that is located in the field of view of the first virtual object.
For example, the first virtual object and the second virtual object have the same positions in the second virtual scene, so that the fields of view are consistent, wherein the virtual camera is used for following the first virtual object in the virtual scene and displaying a shot picture in the first man-machine interaction interface.
For example, referring to fig. 5C, fig. 5C is a schematic diagram of a second virtual scene provided in an embodiment of the present application; fig. 5C illustrates the human-computer interaction interface of the first virtual object, comprising: a virtual image 503A blended into the second virtual scene 504C, a first virtual object 502C, a second virtual object 503C, a control area 506C, and a control area 505C (the control areas include a plurality of controls for controlling the virtual object to move).
The virtual image 503A is at a position in the field of view of the first virtual object that is convenient to view (a position not occluded by any control area or by objects in the virtual scene).
In step 3045, the patch is controlled to always face the shooting direction of the virtual camera, and the size of the patch is controlled to vary in negative correlation with its distance from the first virtual object.
By way of example, controlling the patch to always face the shooting direction of the virtual camera may be accomplished as follows: a straight line corresponding to the orientation of the virtual camera is acquired, and the plane of the patch is kept perpendicular to that line.
By way of example, varying the size of the patch with the distance between the patch and the first virtual object may be accomplished as follows: when the distance between the patch and the first virtual object increases, the size of the patch is reduced; when the distance decreases, the size of the patch is increased. That is, the size of the patch and its distance from the virtual object follow a "nearer-larger, farther-smaller" relationship, so that, based on the size change of the patch, the 2D virtual image can produce a 3D stereoscopic display effect in the second virtual scene.
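By way of example, a minimal engine-agnostic sketch of this billboard behavior follows; the vector types and the reference-distance scaling rule are illustrative assumptions rather than the application's implementation.

```typescript
interface Vec3 { x: number; y: number; z: number; }

const sub = (a: Vec3, b: Vec3): Vec3 => ({ x: a.x - b.x, y: a.y - b.y, z: a.z - b.z });
const len = (v: Vec3): number => Math.hypot(v.x, v.y, v.z);

interface Patch {
  position: Vec3;
  normal: Vec3;  // the patch plane is kept perpendicular to this direction
  scale: number;
}

function updateBillboard(
  patch: Patch,
  cameraPosition: Vec3,
  firstObjectPosition: Vec3,
  baseScale: number,
  referenceDistance: number
): void {
  // Face the camera: align the patch normal with the line toward the camera
  // so the patch plane stays perpendicular to the viewing direction.
  const toCamera = sub(cameraPosition, patch.position);
  const d = Math.max(len(toCamera), 1e-6);
  patch.normal = { x: toCamera.x / d, y: toCamera.y / d, z: toCamera.z / d };

  // Negative correlation with distance: the farther the patch is from the
  // first virtual object, the smaller it is drawn (nearer-larger, farther-smaller).
  const distance = Math.max(len(sub(firstObjectPosition, patch.position)), 1e-6);
  patch.scale = baseScale * (referenceDistance / distance);
}
```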
In the embodiment of the present application, by controlling the 2D scene material to follow the shooting direction of the virtual camera and to vary in negative correlation with the distance to the first virtual object, a 3D perspective depth effect is realized with 2D material. This saves the memory required to run the virtual scene and the resources required to realize the perspective effect, improves the sense of reality of the virtual image, and improves the user's gaming experience.
In some embodiments, in response to both the first virtual object and the second virtual object leaving the second virtual scene, the virtual image in the second virtual scene is cleared, and the virtual image is synchronously deleted on the server, saving the server's storage resources.
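By way of example, this cleanup can be sketched as follows; the session structure and deletion hook are illustrative assumptions.

```typescript
interface SceneSession {
  presentObjectIds: Set<string>; // virtual objects currently in the second virtual scene
  virtualImageId: string | null; // id of the drawn virtual image, if any
}

function onObjectLeft(session: SceneSession, objectId: string): void {
  session.presentObjectIds.delete(objectId);
  // Once both virtual objects have left, clear the image and free storage.
  if (session.presentObjectIds.size === 0 && session.virtualImageId) {
    deleteVirtualImage(session.virtualImageId); // synchronized server-side delete
    session.virtualImageId = null;
  }
}

function deleteVirtualImage(imageId: string): void {
  // Placeholder for the server's storage deletion.
  console.log(`virtual image ${imageId} deleted`);
}
```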
In the embodiment of the application, the first virtual object and the second virtual object with the combination relation in the first virtual scene are switched to the second virtual scene and the interaction of image drawing is performed, so that on one hand, the virtual object with the combination relation is used as a switching unit, and compared with the virtual scene switching unit which uses the individual of the virtual object as a unit, the efficiency of virtual scene switching is improved; on the other hand, interaction between the first virtual scene and the second virtual scene is realized, seamless connection is formed through the combination relation, strange feeling after virtual scene switching is eliminated, and user experience is improved. Scene materials respectively drawn by the identities of the first virtual object and the second virtual object are displayed through canvas controls, so that interaction forms among the virtual objects in the virtual scene are enriched. The virtual images are blended into the virtual scene, so that the display effect of the virtual images is enriched, the virtual images are convenient for a user to observe, and the sense of reality of the virtual scene is improved.
Next, an exemplary application of the interactive processing method in the virtual scenario in the embodiment of the present application in an actual application scenario will be described.
The first virtual object and the second virtual object may be virtual objects controlled by different real users, and the first virtual scene and the second virtual scene are different virtual scenes, respectively, which are described in connection with the above examples. According to the interaction processing method in the virtual scene, after the first virtual object interacts with the virtual prop, the first virtual object and the second virtual object are transmitted to the second virtual scene from the first virtual scene, clients corresponding to the first virtual object and the second virtual object are respectively displayed on canvas controls in the second virtual scene, a user can draw by using scene materials on the canvas controls, scene materials drawn by different users are synthesized into the same pair of virtual images, and virtual images blended into the virtual scene are displayed in the second virtual scene. According to the interactive processing method in the virtual scene, interaction modes among virtual objects in the virtual scene are enriched, social relations among users are facilitated to be promoted, the canvas control is displayed through the specific virtual scene, memory required by the function of operating the canvas control is reduced, computing resources are saved, virtual images are blended into the virtual scene to be displayed, the display effect of the virtual images is enriched, and the sense of reality of the virtual scene is improved.
The method for processing interaction in the virtual scene provided by the embodiment of the application will be described with reference to an exemplary application and implementation of the terminal device provided by the embodiment of the application. Referring to fig. 6, fig. 6 is a schematic flowchart of an alternative method for processing interaction in a virtual scene according to an embodiment of the present application, and will be described with reference to the steps shown in fig. 6.
In step 601, a first dialog item is displayed in response to a first virtual object in a first virtual scene being in an interaction range corresponding to a virtual prop.
By way of example, the virtual prop may be a non-player character (NPC); its form may be a treasure box, a concentric knot, or the like. The interaction range may be a radiation area centered on the virtual prop, and the shape of the radiation area may be regular or irregular, for example a spherical region centered on the virtual prop. When the first virtual object is within the interaction range, the NPC in the hidden state is switched to the visible state, and the first dialogue item is displayed. The first dialogue item is an interactive button corresponding to the virtual prop.
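By way of example, the spherical interaction range check can be sketched as follows; the names are illustrative.

```typescript
interface Position { x: number; y: number; z: number; }

// Spherical radiation area centered on the virtual prop: when true, a hidden
// prop would be switched to the visible state and the first dialogue item shown.
function withinInteractionRange(
  propPosition: Position,
  objectPosition: Position,
  radius: number
): boolean {
  const dx = propPosition.x - objectPosition.x;
  const dy = propPosition.y - objectPosition.y;
  const dz = propPosition.z - objectPosition.z;
  return dx * dx + dy * dy + dz * dz <= radius * radius;
}
```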
Referring to fig. 4A, fig. 4A is a schematic diagram of interaction processing in a first virtual scene provided in an embodiment of the present application. Fig. 4A illustrates that, in the first virtual scene in the first human-computer interaction interface, the first virtual object is within the interaction range of a virtual prop 401A (e.g., a box), and a first dialogue item 402A is displayed along with the prompt "a plain and unremarkable box" (see prompt area 403A). The first human-computer interaction interface further includes an exit control 404A; when the exit control 404A is triggered, the first dialogue item 402A can be closed and the interface corresponding to the first dialogue item 402A exited.
In step 602, in response to a triggering operation for the first dialogue item, it is determined whether the first virtual object satisfies the open condition. When the determination result of step 602 is yes, the process proceeds to step 604. When the determination result of step 602 is no, the process proceeds to step 603.
For example, when the first dialogue item is triggered, the server determines whether the first virtual object satisfies the open condition. The open condition includes: the level of the combination relationship corresponding to the virtual object reaches a preset level (or stage, for example a second level or second stage), and the first virtual object has formed a team with a second virtual object, where the second virtual object is a virtual object having a combination relationship with the first virtual object. The combination relationship is a one-to-one relationship established between virtual objects through the players' social relationships, for example a lover relationship or a spouse relationship.
In step 602, determining whether the first virtual object satisfies the open condition may be implemented as follows (a sketch follows the cases below): a search is performed, based on the player's ID, in a combination relation library (the library includes the IDs of virtual objects, the ID of the virtual object having a combination relationship with each virtual object, the correspondence between the IDs of virtual objects having a combination relationship, and the level of the combination relationship corresponding to each virtual object).
If the ID of the first virtual object is not retrieved in the combination relation library, the first virtual object has not established a combination relationship, and the process proceeds to step 603.
If the ID of the first virtual object is retrieved in the combination relation library, the ID of the virtual object having a combination relationship with the first virtual object is determined according to the correspondence between IDs, and that ID is compared with the teammate's virtual object ID. If the IDs differ, the second virtual object has no combination relationship with the first virtual object, and the process proceeds to step 603.
If the ID of the first virtual object is retrieved, but the level of the combination relationship corresponding to the first virtual object is lower than the preset level, step 603 is performed.
If the first virtual object is in a non-two-person teaming state (e.g., not in a team, or in a team with multiple teammates), the process proceeds to step 603.
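By way of example, the open-condition check over the combination relation library can be sketched as follows; the record shape and parameters are illustrative assumptions rather than this application's data model.

```typescript
interface CombinationRecord {
  objectId: string;
  partnerId: string; // id of the object in the combination relationship
  level: number;     // level of the combination relationship
}

function satisfiesOpenCondition(
  library: Map<string, CombinationRecord>,
  firstObjectId: string,
  teammateIds: string[],
  levelThreshold: number
): boolean {
  // Non-two-person teaming state: fail (step 603).
  if (teammateIds.length !== 1) return false;
  const record = library.get(firstObjectId);
  // No combination relationship established: fail (step 603).
  if (!record) return false;
  // The teammate is not the partner in the combination relationship: fail.
  if (record.partnerId !== teammateIds[0]) return false;
  // Combination relationship below the preset level: fail; otherwise step 604.
  return record.level >= levelThreshold;
}
```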
In step 603, a second dialog item is displayed that characterizes the open condition not met by the first virtual object.
The second dialog item is, for example, a reminder item or an interaction option.
Referring to fig. 4C, fig. 4C is a schematic diagram of interaction processing in a first virtual scene provided in an embodiment of the present application; a second dialogue item 401C is displayed in the first human-computer interaction interface. The second dialogue item 401C includes the content of the open condition that the first virtual object does not satisfy, for example: "opens after teaming up", characterizing that the first virtual object has not formed a team with the virtual object having the combination relationship.
For example, if the first virtual object is not in a two-person team, the server issues an error code, and on receiving it the terminal device displays "the player needs to meet the two-person teaming condition"; if the first virtual object is in a two-person team but has no combination relationship with the teammate, or the combination relationship is below the preset level, the server issues an error code, and on receiving it the terminal device displays "the player's combination relationship is below the preset level" or "the player has no combination relationship".
In step 604, the first virtual object and the second virtual object are transferred from the current location to the same location in the second virtual scene, and a drawing interface is displayed in the second virtual scene.
In some embodiments, before the virtual objects are transferred to the second virtual scene, a secondary confirmation box may be displayed in the human-computer interaction interfaces corresponding to the first virtual object and the second virtual object, respectively. Referring to fig. 4D, fig. 4D is a schematic diagram of a secondary confirmation dialog box provided in an embodiment of the present application; the secondary confirmation dialog box 404D is displayed as a floating layer overlaid on the whole human-computer interaction interface and comprises a secondary confirmation option 403D, a first virtual object avatar 401D, and a second virtual object avatar 402D. D1 is the name of the first virtual object and D2 is the name of the second virtual object; the secondary confirmation option 403D includes "reject" and "agree". Selecting "agree" indicates confirmation; selecting "reject" indicates no confirmation.
In response to a confirmation operation for the secondary confirmation box performed with the identity of each virtual object (i.e., the "agree" option is selected), the first virtual object and the second virtual object are both switched from their current positions to the same position in the second virtual scene.
In some embodiments, when the server determines that the first virtual object and the second virtual object are both in the second virtual scene, the server invokes the transitional animation ID stored in the database, and the clients in the first terminal device and the second terminal device each play the transitional animation generated from the models of the first virtual object and the second virtual object to render the atmosphere.
In step 605, in response to a drawing operation for the drawing interface, a plurality of 2D maps drawn with the identity of the first virtual object (the first scene material above) and a plurality of 2D maps drawn with the identity of the second virtual object (the second scene material above) are displayed.
For example, after the transitional animation finishes playing, the client displays the corresponding drawing interface according to the captain or member identity of the virtual object. This may be implemented as follows: the canvas area on the left or the canvas area on the right is displayed together with the list of optional 2D maps (scene materials) for drawing; players may freely place, move, flip, scale, rotate, recolor, and delete the 2D maps.
Referring to FIG. 4E, FIG. 4E is a schematic diagram of a canvas control provided by an embodiment of the present application. The canvas control, i.e. the drawing interface, comprises a first canvas area 401E, a second canvas area 402E and a material list 403E, wherein the first canvas area 401E on the left side belongs to a first virtual object, and the second canvas area 402E on the right side belongs to a second virtual object. Players may draw with the 2D map provided in the material list 403E in the corresponding canvas area with the identity of the corresponding virtual object, respectively.
During drawing, the user may select the type of 2D map by triggering the "character" or "pattern" tab corresponding to the material list 403E. In this embodiment, taking the field of view of the first virtual object as an example, the drawing interface displays, in the area corresponding to the second canvas area 402E, a prompt indicating that the teammate (with the identity of the second virtual object) is drawing. When the teammate's drawing is completed, the teammate's drawing result is synchronized to the corresponding second canvas area 402E in the drawing interface. This synchronization mode reduces the memory occupied by the client, improves fluency during drawing, and saves the client's computing resources.
In the drawing interface (canvas control), further comprising: drawing countdown, material list (2D map list), finish drawing button (first finish drawing button 404E), and one-touch clear button (first clear button 405E). And a chat panel can be opened in the drawing interface, and a team chat channel is selected by default in the chat panel based on the team formation relation between the first virtual object and the second virtual object, so that quick communication between players is facilitated.
By way of example, the 2D maps may be cute neon-effect maps, such as: cloud, lucky grass, heart, crown, meteor, crescent, and notes. The second virtual scene may be a night sky scene, and blending a virtual image comprising the 2D maps into the night sky scene can create a warm, romantic atmosphere.
When the one-key clear button is triggered, all characters and patterns in the canvas area corresponding to the virtual object are cleared. When the finish drawing button is triggered, the material list 403E is hidden, and the drawn image is synchronized to the human-computer interaction interface corresponding to the other virtual object. If the other party has not finished, the finish drawing button's position is replaced with a status display: waiting for the other party to finish drawing.
For example: the drawing countdown is 5 minutes. When the countdown ends, or the users controlling the virtual objects click the finish drawing buttons, the pattern drawn by the captain (with the identity of the first virtual object) forms the left half of the virtual image as foreground, the pattern drawn by the team member (with the identity of the second virtual object) forms the right half as foreground with the second virtual scene as background, and the left and right halves together form the complete virtual image.
Referring to fig. 5A, fig. 5A is a schematic diagram of a sharing interface provided in an embodiment of the present application. The sharing interface includes a confirm end drawing button 501A, a sharing button 502A, and a virtual image 503A. Taking the first virtual object side as an example, the user may click the confirm end drawing button 501A (or wait for the confirmation countdown, which may be 5 seconds, to end), close the drawing interface, and switch to the second virtual scene displayed in the field of view of the first virtual object, into which the drawn virtual image is blended. When the sharing button 502A is triggered, the display switches to the share details page.
Referring to fig. 5B, fig. 5B is a schematic diagram of a sharing interface provided in an embodiment of the present application, showing the share details page. The share details page includes an open scene information button 502B and a share button 501B. The share button 501B includes icons of a variety of different social platforms, as well as a save-to-local icon. The user can select the corresponding icon to share the virtual image to the corresponding social platform or save it locally.
When the state corresponding to the open scene information button 502B is the open state, the shared virtual image includes information of the first virtual object, information of the second virtual object, and game information (displayed in watermark form) in the game. When the state corresponding to the open scene information button 502B is the closed state, the shared virtual image does not include the information of the first virtual object, the information of the second virtual object, and the game information (displayed in the form of a watermark).
For example, after the client detects that one side has confirmed completion of the 2D drawing (virtual image), the drawing is uploaded to the server and synchronized to the other side's interface. After the client detects that both sides have finished drawing, the pictures drawn by both are displayed together, a firework animation effect is played, and the sharing function button is provided. If the user does not trigger the confirm end drawing button, the client automatically closes the drawing interface after 5 seconds.
In step 606, a virtual image comprising a plurality of 2D maps is merged into a second virtual scene.
The client displays the 2D drawing on a patch at a preset position in the sky of the virtual scene, and, through the billboard technique, the patch always faces the camera while the camera rotates, achieving a naked-eye 3D effect.
For example, after detecting that the drawing interface has been closed, the client invokes a preset lens configuration to move the camera to the optimal angle for viewing the starry sky, so that the players can appreciate the effect of the starry sky they drew together.
Referring to fig. 5C, fig. 5C is a schematic diagram of a second virtual scene provided in an embodiment of the present application; continuing with the description of the field of view corresponding to the first virtual object, fig. 5C shows a man-machine interaction interface of the first virtual object, including: a virtual image 503A blended into the second virtual scene 504C, a first virtual object 502C, a second virtual object 503C, a control region 506C, and a control region 505C (the control region includes a plurality of controls for controlling the virtual object to move).
In some embodiments, after drawing is completed the user may take a commemorative screenshot within the second virtual scene. Referring to fig. 5D, fig. 5D is a schematic diagram of a second virtual scene provided in an embodiment of the present application; in fig. 5D, compared with fig. 5C, in response to a triggering operation for the screenshot mode, the controls in the control areas 506C and 505C are hidden, reducing the occlusion of the virtual scene by the controls and making it convenient for the user to take a screenshot. When the first virtual object and the second virtual object exit the second virtual scene, the virtual image disappears and the server deletes the virtual image record, saving storage resources. The user can control the virtual object to interact with the virtual prop again and re-enter the second virtual scene to draw a new virtual image.
In the embodiment of the present application, the two virtual objects are transferred to the second virtual scene, where the drawing interface (canvas control) is displayed. Through operations such as flipping, scaling, and moving in the drawing interface, the users can freely draw a 2D image (virtual image) together, each with the identity of their virtual object. The 2D image is finally presented on a preset patch in the virtual scene, and the patch always faces the camera while the camera rotates, achieving a naked-eye 3D effect. This improves the sense of reality of the virtual scene and enriches the interaction modes within it.
The following continues the description of an exemplary structure of the interaction processing device 455 in a virtual scene provided in an embodiment of the present application, implemented as software modules. In some embodiments, as shown in fig. 2, the software modules of the interaction processing device 455 in the virtual scene stored in the memory 450 may include: a display module 4551 configured to display a first virtual scene and a virtual prop located in the first virtual scene; a switching module 4552 configured to control the first virtual object and the second virtual object to switch to the second virtual scene in response to an interaction operation between the first virtual object and the virtual prop, and to display a canvas control in the second virtual scene, the first virtual object and the second virtual object having a combination relationship; a drawing module 4553 configured to display, in the canvas control, at least one first scene material drawn based on the identity of the first virtual object and at least one second scene material drawn based on the identity of the second virtual object in response to a drawing operation in the canvas control; the display module 4551 is further configured to display a virtual image blended into the second virtual scene in response to the confirmation ending the drawing operation in the canvas control, wherein the virtual image comprises the at least one first scene material and the at least one second scene material.
In some embodiments, the default display attribute of the virtual prop is a hidden state; and the display module 4551 is configured to control the virtual prop to switch from the hidden state to the visible state in response to the distance between the set position of the virtual prop and the first virtual object being less than the distance threshold, so that the virtual prop appears at the set position.
In some embodiments, the default display attribute of the virtual prop is a visible state; the display module 4551 is configured to display the virtual prop at its set position in response to the first virtual object and the second virtual object having formed a team and the set position being within the field of view of the first virtual object.
In some embodiments, the switching module 4552 is configured to display, in a dialogue control corresponding to the virtual prop, a first dialogue item output by the virtual prop, the first dialogue item characterizing triggering of the drawing interaction function of the virtual prop; and, in response to an interactive operation selecting the first dialogue item and the open condition of the drawing interaction function being satisfied, to control the first virtual object and the second virtual object to switch to the second virtual scene and display the canvas control, wherein the open condition includes the level of the combination relationship being equal to or higher than a level threshold.
In some embodiments, the switching module 4552 is configured to update a current position of the first virtual object and a current position of the second virtual object in the first virtual scene to a same position in the second virtual scene, where at least a portion of a field of view corresponding to the virtual object is used as the blending area of the virtual image when the virtual object is at the same position in the second virtual scene.
In some embodiments, the first virtual object can be in a team state with any other virtual object, and switching between the first virtual scene and the second virtual scene takes the set of virtual objects in the team state as the minimum switching unit; the open condition further includes: the first virtual object and the second virtual object have formed a team.
In some embodiments, the level of the combination relationship is positively correlated with the value of one of the following parameters: the interaction frequency of the first virtual object and the second virtual object after the combination relationship is formed, and the duration for which the first virtual object and the second virtual object have maintained the combination relationship.
In some embodiments, the switching module 4552 is configured to display a dialog control of the virtual prop in response to the first virtual object being within the interaction range of the virtual prop, and to display, in the dialog control corresponding to the virtual prop, the first dialog item output by the virtual prop, where the interaction range is a radiation area centered on the set position of the virtual prop.
In some embodiments, the switching module 4552 is configured to obtain a first model of the first virtual object and a second model of the second virtual object; generate a transition animation based on the first model, the second model, and a transition animation template; and play the transition animation before the canvas control is displayed, where the transition animation depicts the first virtual object and the second virtual object switching from the first virtual scene to the same position in the second virtual scene.
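Sketched below, under assumed names and data types, is this switching step: both members of the combination are moved to the same position in the second virtual scene, and a simple stand-in takes the place of the template-generated transition animation:

```python
from dataclasses import dataclass


@dataclass
class VirtualObject:
    model: str                            # identifier of the character model
    position: tuple = (0.0, 0.0, 0.0)


def switch_to_second_scene(first, second, spawn_point):
    # Update both current positions to the same position in the second
    # virtual scene; the team is the minimum switching unit.
    first.position = spawn_point
    second.position = spawn_point
    # A stand-in for binding the two models to a transition animation
    # template; returning frame labels takes the place of playback.
    return [f"frame({first.model},{second.model},{t})" for t in range(3)]


a = VirtualObject("first_model")
b = VirtualObject("second_model")
transition = switch_to_second_scene(a, b, (12.0, 0.0, -4.0))
```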
In some embodiments, the canvas control includes at least one first canvas area and at least one second canvas area, the first canvas area being an area drawn based on the identity of the first virtual object and the second canvas area being an area drawn based on the identity of the second virtual object. The drawing module 4553 is configured to display, in the first canvas area in response to a drawing operation in the canvas control, at least one first scene material drawn based on the identity of the first virtual object; and either display, in real time in the second canvas area, at least one second scene material drawn based on the identity of the second virtual object, or display, in the second canvas area, prompt information that the second virtual object is drawing and, after the drawing based on the identity of the second virtual object is completed, display the at least one second scene material.
In some embodiments, the drawing module 4553 is configured to display a first finish-drawing button and a first clear button in the canvas control; delete, in response to a trigger operation for the first clear button, the first scene material drawn in the first canvas area; and synchronize, in response to a trigger operation for the first finish-drawing button or the end of the drawing countdown, the at least one first scene material drawn based on the identity of the first virtual object into a second man-machine interaction interface, so that the at least one first scene material is displayed in the first canvas area of the canvas control displayed by the second man-machine interaction interface, where the second man-machine interaction interface is used for displaying the second virtual scene based on the view angle of the second virtual object.
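One plausible shape for this synchronization message, sent when the finish-drawing button is triggered or the countdown ends; the field names and the `canvas_sync` message type are assumptions of this sketch, not the embodiment's wire format:

```python
import json


def build_sync_message(sender_id, canvas_area, materials):
    # The serialized materials are relayed (for example, via the server)
    # to the terminal device running the second man-machine interaction
    # interface, which renders them into the matching canvas area.
    return json.dumps({
        "type": "canvas_sync",
        "sender": sender_id,
        "area": canvas_area,     # e.g. "first_canvas_area"
        "materials": materials,  # list of drawn scene materials
    })


message = build_sync_message(
    "first_virtual_object", "first_canvas_area",
    [{"kind": "image", "asset": "flower", "x": 10, "y": 24}],
)
```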
In some embodiments, the drawing module 4553 is configured to display a first confirm-end-drawing button and a first sharing button in the canvas control after synchronizing the at least one first scene material drawn based on the identity of the first virtual object into the second man-machine interaction interface; proceed, in response to a trigger operation for the first confirm-end-drawing button or the end of the confirmation countdown, to the process of displaying the virtual image blended into the second virtual scene; and send, in response to a trigger operation for the first sharing button, a first snapshot image of the second virtual scene to a social network, where the first snapshot image includes the second virtual scene blended with the virtual image.
In some embodiments, the at least one second scene material is drawn in a second man-machine interaction interface based on the identity of the second virtual object and is drawn in the second canvas area of the canvas control displayed by the second man-machine interaction interface, the second man-machine interaction interface being used for displaying the second virtual scene based on the view angle of the second virtual object; and the drawing module 4553 is configured to receive, before displaying the at least one second scene material drawn based on the identity of the second virtual object, the at least one second scene material sent by the server, where the at least one second scene material is sent to the server by the terminal device running the second man-machine interaction interface.
In some embodiments, the canvas control includes a single canvas area, and any position in the canvas area can be drawn on based on the identity of the first virtual object or based on the identity of the second virtual object. The drawing module 4553 is configured to display, in the canvas area in response to a drawing operation in the canvas control, at least one first scene material drawn based on the identity of the first virtual object; and either display, in real time in the canvas area, at least one second scene material drawn based on the identity of the second virtual object, or display, in the canvas area, prompt information that the second virtual object is drawing and, after the drawing based on the identity of the second virtual object is completed, display the at least one second scene material.
In some embodiments, the drawing module 4553 is configured to display a second finish-drawing button and a second clear button in the canvas control; delete, in response to a trigger operation for the second clear button, the first scene material drawn in the canvas area based on the identity of the first virtual object; and synchronize, in response to a trigger operation for the second finish-drawing button or the end of the drawing countdown, the at least one first scene material drawn based on the identity of the first virtual object into a second man-machine interaction interface, so that the at least one first scene material is displayed in the canvas area of the canvas control displayed by the second man-machine interaction interface, where the second man-machine interaction interface is used for displaying the second virtual scene based on the view angle of the second virtual object.
In some embodiments, the drawing module 4553 is configured to display a second confirm-end-drawing button and a second sharing button in the canvas control after synchronizing the at least one first scene material drawn based on the identity of the first virtual object into the second man-machine interaction interface; proceed, in response to a trigger operation for the second confirm-end-drawing button or the end of the confirmation countdown, to the process of displaying the virtual image blended into the second virtual scene; and send, in response to a trigger operation for the second sharing button, a second snapshot image of the second virtual scene to a social network, where the second snapshot image includes the second virtual scene blended with the virtual image.
In some embodiments, the at least one second scene material is drawn in a second man-machine interaction interface based on the identity of the second virtual object and is drawn in the canvas area of the canvas control displayed by the second man-machine interaction interface, the second man-machine interaction interface being used for displaying the second virtual scene based on the view angle of the second virtual object; and the drawing module 4553 is further configured to receive, before displaying the at least one second scene material drawn based on the identity of the second virtual object, the at least one second scene material sent by the server, where the at least one second scene material is sent to the server by the terminal device running the second man-machine interaction interface.
In some embodiments, the drawing operation includes at least one of: drawing brand-new material with a drawing tool, and editing operations on candidate materials, where the types of editing operations include placing, moving, flipping, and scaling; the types of the first scene material and the second scene material include images and text.
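A small sketch of the four editing operations applied to a candidate material; the dictionary representation of the material's state is an assumption made for illustration:

```python
def apply_edit(material, op):
    # Editing operations on candidate materials: placing, moving,
    # flipping, and scaling.
    kind = op["op"]
    if kind == "place":
        material["x"], material["y"] = op["x"], op["y"]
    elif kind == "move":
        material["x"] += op["dx"]
        material["y"] += op["dy"]
    elif kind == "flip":
        material["flipped"] = not material.get("flipped", False)
    elif kind == "scale":
        material["scale"] = material.get("scale", 1.0) * op["factor"]
    return material


m = {"kind": "image", "asset": "flower", "x": 0, "y": 0}
apply_edit(m, {"op": "move", "dx": 5, "dy": -3})  # m["x"] == 5, m["y"] == -3
apply_edit(m, {"op": "scale", "factor": 2.0})     # m["scale"] == 2.0
```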
In some embodiments, the first scene material and the second scene material are in two-dimensional form, and the first man-machine interaction interface is presented through a virtual camera that follows the first virtual object. The display module 4551 is configured to draw the virtual image, as a foreground against the second virtual scene as the background, onto a patch located in the field of view of the first virtual object in the second virtual scene, where the virtual camera follows the first virtual object in the virtual scene and the captured picture is displayed in the first man-machine interaction interface; the patch is controlled to always face the shooting direction of the virtual camera, and the size of the patch is controlled to vary in negative correlation with its distance from the first virtual object.
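The patch thus behaves like a billboard. A minimal sketch of the two controls, facing the camera and shrinking with distance, with all names and the yaw-only rotation assumed for illustration:

```python
import math


def update_billboard(patch_pos, camera_pos, base_size, reference_distance=5.0):
    # Yaw that turns the patch toward the camera's shooting direction
    # (rotation about the vertical axis only, a common billboard choice).
    dx = camera_pos[0] - patch_pos[0]
    dz = camera_pos[2] - patch_pos[2]
    yaw = math.atan2(dx, dz)
    # Size in negative correlation with the distance to the follow target
    # (approximated here by the camera position): farther means smaller.
    dist = math.dist(patch_pos, camera_pos)
    size = base_size * reference_distance / max(dist, 1e-6)
    return yaw, size


yaw, size = update_billboard((0.0, 1.0, 0.0), (3.0, 1.0, 4.0), base_size=2.0)
```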
In some embodiments, the display module 4551 is configured to display a third sharing button and an open-scene-information button in the virtual image, and, in response to a trigger operation for the third sharing button: if the open-scene-information button is in an open state, send a third snapshot image of the second virtual scene to a social network, where the third snapshot image includes the second virtual scene blended with the virtual image and related information of the first virtual object and the second virtual object in the first virtual scene; and if the open-scene-information button is in a closed state, send a fourth snapshot image of the second virtual scene to the social network, where the fourth snapshot image includes the second virtual scene blended with the virtual image and the first virtual object.
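A sketch of the snapshot payload under the two states of the open-scene-information button; the field names are assumptions of this example:

```python
def build_snapshot(scene_render, virtual_image, scene_info_open, scene_info=None):
    # The snapshot always contains the second virtual scene blended with
    # the virtual image; related information about both virtual objects in
    # the first virtual scene is attached only when the button is open.
    snapshot = {"scene": scene_render, "virtual_image": virtual_image}
    if scene_info_open and scene_info is not None:
        snapshot["scene_info"] = scene_info
    return snapshot


third = build_snapshot("scene.png", "drawing.png", True,
                       {"first_virtual_object": "info", "second_virtual_object": "info"})
fourth = build_snapshot("scene.png", "drawing.png", False)
```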
Embodiments of the present application provide a computer program product or a computer program, which includes computer-executable instructions stored in a computer-readable storage medium. A processor of an electronic device reads the computer-executable instructions from the computer-readable storage medium and executes them, causing the electronic device to perform the interactive processing method in a virtual scene according to the embodiments of the present application.
Embodiments of the present application provide a computer-readable storage medium storing computer-executable instructions which, when executed by a processor, cause the processor to perform the interactive processing method in a virtual scene provided by the embodiments of the present application, for example, the interactive processing method in a virtual scene shown in fig. 3A.
In some embodiments, the computer-readable storage medium may be FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, an optical disc, or CD-ROM, or may be any of various devices including one of, or any combination of, the above memories.
In some embodiments, computer-executable instructions may be written in any form of programming language, including compiled or interpreted languages, or declarative or procedural languages, in the form of programs, software modules, scripts, or code, and they may be deployed in any form, including as stand-alone programs or as modules, components, subroutines, or other units suitable for use in a computing environment.
As an example, computer-executable instructions may, but need not, correspond to files in a file system, and may be stored as part of a file that holds other programs or data, for example, in one or more scripts in a hypertext markup language (HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, subprograms, or portions of code).
As an example, computer-executable instructions may be deployed to be executed on one electronic device or on multiple electronic devices located at one site or, alternatively, on multiple electronic devices distributed across multiple sites and interconnected by a communication network.
In summary, in the embodiments of the present application, a first virtual object and a second virtual object that have a combination relationship are switched from a first virtual scene to a second virtual scene, where they interact by drawing an image. On the one hand, because the virtual objects with the combination relationship serve as the switching unit, scene switching is more efficient than switching with individual virtual objects as the unit. On the other hand, interaction spanning the first virtual scene and the second virtual scene is realized: the combination relationship forms a seamless connection, eliminating the sense of unfamiliarity after scene switching and improving user experience. Displaying, through the canvas control, the scene materials drawn respectively in the identities of the first virtual object and the second virtual object enriches the forms of interaction between virtual objects in the virtual scene. Blending the virtual image into the virtual scene enriches its display effect, makes it convenient for the user to observe, and improves the sense of realism of the virtual scene.
The foregoing is merely an exemplary embodiment of the present application and is not intended to limit the scope of protection of the present application. Any modification, equivalent substitution, or improvement made within the spirit and scope of the present application shall fall within the scope of protection of the present application.

Claims (24)

1. An interactive processing method in a virtual scene, the method comprising:
displaying a first virtual scene, and displaying a virtual prop located in the first virtual scene;
in response to an interactive operation between a first virtual object and the virtual prop, controlling the first virtual object and a second virtual object to switch to a second virtual scene, and displaying a canvas control in the second virtual scene; wherein the first virtual object and the second virtual object have a combination relationship;
in response to a drawing operation in the canvas control, displaying at least one first scene material drawn by the first virtual object and at least one second scene material drawn by the second virtual object in the canvas control;
and in response to a confirm-end-drawing operation in the canvas control, displaying a virtual image blended into the second virtual scene, wherein the virtual image comprises the at least one first scene material and the at least one second scene material.
2. The method of claim 1, wherein:
the default display attribute of the virtual prop is a hidden state; and
the displaying the virtual prop located in the first virtual scene comprises:
controlling the virtual prop to switch from the hidden state to a visible state in response to the distance between the set position of the virtual prop and the first virtual object being less than a distance threshold, so that the virtual prop appears at the set position.
3. The method of claim 1, wherein:
the default display attribute of the virtual prop is a visible state; and
the displaying the virtual prop located in the first virtual scene comprises:
displaying the virtual prop at the set position in response to the first virtual object and the second virtual object traveling as a team to the set position of the virtual prop, the set position being within the field of view of the first virtual object.
4. The method of claim 1, wherein the controlling the first virtual object and the second virtual object to switch to a second virtual scene and displaying a canvas control in the second virtual scene in response to the interactive operation between the first virtual object and the virtual prop comprises:
displaying, in a dialog control corresponding to the virtual prop, a first dialog item output by the virtual prop, wherein the first dialog item represents triggering a drawing interaction function of the virtual prop;
in response to an interactive operation of selecting the first dialog item when an opening condition of the drawing interaction function is met, controlling the first virtual object and the second virtual object to switch to the second virtual scene, and displaying the canvas control; wherein the opening condition comprises a level of the combination relationship being equal to or higher than a level threshold.
5. The method of claim 4, wherein the controlling the first virtual object and the second virtual object to switch to the second virtual scene comprises:
updating the current positions of the first virtual object and the second virtual object in the first virtual scene to the same position in the second virtual scene, wherein, when a virtual object is at the same position in the second virtual scene, at least part of the field of view corresponding to the virtual object serves as the blending area of the virtual image.
6. The method of claim 5, wherein prior to displaying the canvas control, the method further comprises:
Acquiring a first model of the first virtual object and a second model of the second virtual object;
generating a transition animation based on the first model, the second model and a transition animation template;
and playing the transition animation, wherein the transition animation depicts the first virtual object and the second virtual object switching from the first virtual scene to the same position in the second virtual scene.
7. The method of claim 4, wherein:
the first virtual object can form a team state with any other virtual object, and switching between the first virtual scene and the second virtual scene takes the set of virtual objects forming the team state as the minimum switching unit; and
the opening condition further comprises: the first virtual object and the second virtual object form the team state.
8. The method according to any one of claims 4 to 7, wherein
the level of the combination relationship is positively correlated with the value of one of the following parameters: the interaction frequency of the first virtual object and the second virtual object after the combination relationship is formed, and the duration for which the first virtual object and the second virtual object maintain the combination relationship.
9. The method according to any one of claims 4 to 7, wherein displaying, in the dialog control corresponding to the virtual prop, the first dialog item output by the virtual prop includes:
displaying the dialog control of the virtual prop in response to the first virtual object being within the interaction range of the virtual prop; and
displaying, in the dialog control corresponding to the virtual prop, the first dialog item output by the virtual prop, wherein the interaction range is a radiation area centered on the set position of the virtual prop.
10. The method according to any one of claims 1 to 7, wherein,
the canvas control comprises at least one first canvas area and at least one second canvas area, wherein the first canvas area is an area for drawing based on the identity of the first virtual object, and the second canvas area is an area for drawing based on the identity of the second virtual object;
the displaying, in the canvas control, at least one first scene material drawn by the first virtual object and at least one second scene material drawn by the second virtual object in response to a drawing operation in the canvas control, comprising:
in response to a drawing operation in the canvas control, displaying at least one first scene material drawn by the first virtual object in the first canvas area; and
displaying, in real time in the second canvas area, at least one second scene material drawn by the second virtual object, or
displaying, in the second canvas area, prompt information that the second virtual object is drawing, and displaying the at least one second scene material drawn by the second virtual object after the drawing based on the identity of the second virtual object is completed.
11. The method according to claim 10, wherein the method further comprises:
displaying a first finish-drawing button and a first clear button in the canvas control;
deleting the first scene material drawn in the first canvas area in response to a trigger operation for the first clear button;
and in response to a trigger operation for the first finish-drawing button or the end of the drawing countdown, synchronizing the at least one first scene material drawn based on the identity of the first virtual object into a second man-machine interaction interface, to display the at least one first scene material in the first canvas area in the canvas control displayed by the second man-machine interaction interface, wherein the second man-machine interaction interface is used for displaying the second virtual scene based on the view angle of the second virtual object.
12. The method of claim 11, wherein after synchronizing the at least one first scene material drawn based on the identity of the first virtual object into the second man-machine interaction interface, the method further comprises:
displaying a first confirm-end-drawing button and a first sharing button in the canvas control;
in response to a trigger operation for the first confirm-end-drawing button or the end of the confirmation countdown, proceeding to the process of displaying the virtual image blended into the second virtual scene;
and in response to a trigger operation for the first sharing button, sending a first snapshot image of the second virtual scene to a social network, wherein the first snapshot image comprises the second virtual scene blended with the virtual image.
13. The method of claim 10, wherein:
the at least one second scene material is drawn in a second man-machine interaction interface based on the identity of the second virtual object and is drawn in the second canvas area in the canvas control displayed by the second man-machine interaction interface, and the second man-machine interaction interface is used for displaying the second virtual scene based on the view angle of the second virtual object;
Before said displaying the at least one second scene material drawn by the second virtual object, the method further comprises:
and receiving the at least one second scene material sent by a server, wherein the at least one second scene material is sent to the server by a terminal device running the second man-machine interaction interface.
14. The method according to any one of claims 1 to 7, wherein,
the canvas control comprises a canvas area, and any position in the canvas area can be drawn on based on the identity of the first virtual object or based on the identity of the second virtual object;
the displaying, in the canvas control, at least one first scene material drawn by the first virtual object and at least one second scene material drawn by the second virtual object in response to a drawing operation in the canvas control, comprising:
in response to a drawing operation in the canvas control, displaying at least one first scene material drawn by the first virtual object in the canvas area; and
displaying, in real time in the canvas area, at least one second scene material drawn by the second virtual object, or
displaying, in the canvas area, prompt information that the second virtual object is drawing, and displaying the at least one second scene material drawn by the second virtual object after the drawing based on the identity of the second virtual object is completed.
15. The method of claim 14, wherein the method further comprises:
displaying a second finish-drawing button and a second clear button in the canvas control;
deleting the first scene material drawn in the canvas area based on the identity of the first virtual object in response to a trigger operation for the second clear button;
and in response to a trigger operation for the second finish-drawing button or the end of the drawing countdown, synchronizing the at least one first scene material drawn based on the identity of the first virtual object into a second man-machine interaction interface, to display the at least one first scene material in the canvas area in the canvas control displayed by the second man-machine interaction interface, wherein the second man-machine interaction interface is used for displaying the second virtual scene based on the view angle of the second virtual object.
16. The method of claim 15, wherein after synchronizing the at least one first scene material drawn based on the identity of the first virtual object into the second man-machine interaction interface, the method further comprises:
displaying a second confirm-end-drawing button and a second sharing button in the canvas control;
in response to a trigger operation for the second confirm-end-drawing button or the end of the confirmation countdown, proceeding to the process of displaying the virtual image blended into the second virtual scene;
and in response to a trigger operation for the second sharing button, sending a second snapshot image of the second virtual scene to a social network, wherein the second snapshot image comprises the second virtual scene blended with the virtual image.
17. The method of claim 14, wherein:
the at least one second scene material is drawn in a second man-machine interaction interface based on the identity of the second virtual object and is drawn in the canvas area in the canvas control displayed by the second man-machine interaction interface, and the second man-machine interaction interface is used for displaying the second virtual scene based on the view angle of the second virtual object;
before said displaying the at least one second scene material drawn by the second virtual object, the method further comprises:
and receiving the at least one second scene material sent by a server, wherein the at least one second scene material is sent to the server by a terminal device running the second man-machine interaction interface.
18. The method according to any one of claims 1 to 7, wherein,
the drawing operation comprises at least one of: drawing brand-new material with a drawing tool, and editing operations on candidate materials, wherein the types of the editing operations comprise: placing, moving, flipping, and scaling;
the types of the first scene material and the second scene material comprise: images and text.
19. The method according to any one of claims 1 to 7, wherein,
the first scene material and the second scene material are in a two-dimensional form;
the first man-machine interaction interface is presented through a virtual camera following the first virtual object;
the displaying of the virtual image blended into the second virtual scene includes:
taking the second virtual scene as a background, drawing the virtual image as a foreground onto a patch located in the field of view of the first virtual object in the second virtual scene, wherein the virtual camera is used for following the first virtual object in the virtual scene and displaying the captured picture in the first man-machine interaction interface;
and controlling the patch to always face the shooting direction of the virtual camera, and controlling the size of the patch to vary in negative correlation with its distance from the first virtual object.
20. The method according to any one of claims 1 to 7, further comprising:
displaying a third sharing button and an open scene information button in the virtual image;
in response to a trigger operation for the third sharing button: if the open scene information button is in an open state, sending a third snapshot image of the second virtual scene to a social network, wherein the third snapshot image comprises the second virtual scene blended with the virtual image, and related information of the first virtual object and the second virtual object in the first virtual scene;
and if the open scene information button is in a closed state, sending a fourth snapshot image of the second virtual scene to a social network, wherein the fourth snapshot image comprises the second virtual scene blended with the virtual image and the first virtual object.
21. An interactive processing apparatus in a virtual scene, the apparatus comprising:
the display module is used for displaying a first virtual scene and displaying virtual props positioned in the first virtual scene;
the switching module is used for responding to the interactive operation between the first virtual object and the virtual prop, controlling the first virtual object and the second virtual object to switch to a second virtual scene, and displaying a canvas control in the second virtual scene; wherein the first virtual object and the second virtual object have a combination relationship;
A drawing module, configured to respond to a drawing operation in the canvas control, and display at least one first scene material drawn by the first virtual object and at least one second scene material drawn by the second virtual object in the canvas control;
the display module is further configured to display a virtual image blended into the second virtual scene in response to a confirm-end-drawing operation in the canvas control, where the virtual image includes the at least one first scene material and the at least one second scene material.
22. An electronic device, the electronic device comprising:
a memory for storing computer executable instructions;
a processor for implementing the interactive processing method in a virtual scene according to any one of claims 1 to 20 when executing computer executable instructions stored in said memory.
23. A computer readable storage medium storing computer executable instructions which when executed by a processor implement the method of interactive processing in a virtual scene according to any one of claims 1 to 20.
24. A computer program product comprising a computer program or instructions which, when executed by a processor, implements the interactive processing method in a virtual scene according to any one of claims 1 to 20.