WO2023231553A1 - Prop interaction method and apparatus for virtual scene, electronic device, computer-readable storage medium, and computer program product - Google Patents

Prop interaction method and apparatus for virtual scene, electronic device, computer-readable storage medium, and computer program product

Info

Publication number
WO2023231553A1
Authority
WO
WIPO (PCT)
Prior art keywords
virtual
prop
props
area
scene
Prior art date
Application number
PCT/CN2023/085343
Other languages
English (en)
French (fr)
Inventor
邓昱
Original Assignee
Tencent Technology (Shenzhen) Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology (Shenzhen) Co., Ltd.
Publication of WO2023231553A1


Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F13/53 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
    • A63F13/533 Controlling the output signals based on the game progress involving additional visual information provided to the game scene for prompting the player, e.g. by displaying a game menu
    • A63F13/537 Controlling the output signals based on the game progress involving additional visual information provided to the game scene using indicators, e.g. showing the condition of a game character on screen
    • A63F13/5372 Controlling the output signals based on the game progress involving additional visual information provided to the game scene using indicators for tagging characters, objects or locations in the game scene, e.g. displaying a circle under the character controlled by the player
    • A63F13/60 Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02 Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Definitions

  • The present application relates to human-computer interaction technology, and in particular to a prop interaction method and apparatus for a virtual scene, an electronic device, a computer-readable storage medium, and a computer program product.
  • Display technology based on graphics processing hardware has expanded the channels for perceiving the environment and obtaining information, especially the multimedia technology of virtual scenes. According to actual application needs, diverse interactions between virtual objects controlled by users or by artificial intelligence can be realized, with various typical application scenarios; for example, in virtual scenes such as games, the battle process between virtual objects can be simulated.
  • Virtual scenes are often configured with multiple virtual props that have interactive functions, so that virtual objects can be controlled to interact with the virtual props. For example, a virtual object can perform action interactions such as sitting down on a chair, or functional interactions such as unpacking a supply box to display multiple supplies for the virtual object to select from.
  • Embodiments of the present application provide a method, device, electronic device, computer-readable storage medium, and computer program product for prop interaction in a virtual scene, which can recommend to the user the virtual props to be interacted with, thereby improving the efficiency of human-computer interaction.
  • An embodiment of the present application provides a prop interaction method for a virtual scene, executed by an electronic device. The method includes: displaying at least part of the area of the virtual scene on a human-computer interaction interface, where the at least part of the area includes a virtual object; and, in response to the appearance of at least two virtual props with interactive functions in the at least part of the area, displaying that a first virtual prop among the at least two virtual props is in a selected state and displaying at least one interactive control corresponding to the first virtual prop.
  • An embodiment of the present application provides a prop interaction device for a virtual scene, including:
  • a first display module configured to display at least part of the area of the virtual scene on a human-computer interaction interface, where the at least part of the area includes a virtual object;
  • the first display module is further configured to, in response to the appearance of at least two virtual props with interactive functions in the at least part of the area, display that a first virtual prop among the at least two virtual props is in a selected state, and display at least one interactive control corresponding to the first virtual prop; where the interactive control is configured to be triggered to execute the interactive function corresponding to the interactive control, and the interactive function is used for the virtual object to interact with the first virtual prop.
  • An embodiment of the present application provides another prop interaction method for a virtual scene, executed by an electronic device. The method includes: displaying at least part of the area of the virtual scene on a human-computer interaction interface, where the at least part of the area includes a virtual object; in response to the appearance of at least two virtual props in the at least part of the area, displaying, based on a selected state, a first virtual prop with an interactive function among the at least two virtual props and at least one interactive control corresponding to the first virtual prop; and, for at least one second virtual prop that is not in the selected state, displaying a switching control corresponding to the at least one second virtual prop, where the switching control is configured to be triggered to display an interactive control corresponding to the at least one second virtual prop.
  • An embodiment of the present application provides another prop interaction device for a virtual scene, including:
  • a second display module configured to display at least part of the area of the virtual scene on the human-computer interaction interface, where the at least part of the area includes a virtual object;
  • the second display module is further configured to, in response to the appearance of at least two virtual props in the at least part of the area, display, based on the selected state, a first virtual prop with an interactive function among the at least two virtual props and at least one interactive control corresponding to the first virtual prop; where the interactive control is configured to be triggered to execute the interactive function corresponding to the interactive control, and the interactive function is used for the virtual object to interact with the first virtual prop;
  • the second display module is further configured to, for at least one second virtual prop that is not in the selected state, display a switching control corresponding to the at least one second virtual prop; where the switching control is configured to be triggered to display an interactive control corresponding to at least one of the second virtual props.
  • An embodiment of the present application provides an electronic device, including:
  • a memory for storing computer-executable instructions;
  • a processor configured to implement the prop interaction method of the virtual scene provided by the embodiments of the present application when executing the computer-executable instructions stored in the memory.
  • An embodiment of the present application provides a computer-readable storage medium that stores computer-executable instructions for implementing the prop interaction method of the virtual scene provided by the embodiments of the present application when executed by a processor.
  • An embodiment of the present application provides a computer program product, which includes computer-executable instructions; when the computer-executable instructions are executed by a processor, the prop interaction method of the virtual scene provided by the embodiments of the present application is implemented.
  • Figures 1A-1B are schematic interface diagrams of a prop interaction method in a virtual scene provided in the related art;
  • Figure 2 is a schematic structural diagram of a prop interaction system for a virtual scene provided by an embodiment of the present application;
  • Figure 3 is a schematic structural diagram of an electronic device provided by an embodiment of the present application;
  • Figures 4A-4C are schematic flow diagrams of a prop interaction method in a virtual scene provided by an embodiment of the present application;
  • Figures 5A-5E are schematic interface diagrams of a prop interaction method in a virtual scene provided by embodiments of the present application;
  • Figure 6 is a schematic flowchart of a prop interaction method in a virtual scene provided by an embodiment of the present application;
  • Figures 7A-7B are schematic diagrams of overlapping areas in a prop interaction method for a virtual scene provided by an embodiment of the present application;
  • Figures 8A-8E are schematic diagrams of overlapping area calculations in a prop interaction method for a virtual scene provided by embodiments of the present application.
  • The terms "first", "second", and "third" are only used to distinguish similar objects and do not represent a specific ordering of objects. It is understandable that, where permitted, the specific order or sequence of "first", "second", and "third" may be interchanged, so that the embodiments of the application described herein can be practiced in an order other than that illustrated or described herein.
  • Virtual scene: a scene output by a device that differs from the real world. A visual perception of the virtual scene can be formed through the naked eye or with the assistance of a device, for example, two-dimensional images output through a display screen, or three-dimensional images output through stereoscopic display technologies such as stereoscopic projection, virtual reality, and augmented reality; in addition, various real-world-like perceptions, such as auditory perception, tactile perception, olfactory perception, and motion perception, can be formed through various possible hardware.
  • Response: represents the condition or state on which a performed operation depends. When the dependent condition or state is met, the one or more operations may be performed in real time or with a set delay; unless otherwise specified, there is no restriction on the execution order of multiple performed operations.
  • Interactive props: virtual props with interactive functions; players control virtual objects to interact with interactive props. For example, a virtual object can ride a "vehicle", which is an interactive prop; if a virtual object cannot interact with a table in a building, the "table" is not an interactive prop but only an ordinary virtual item.
  • As shown in FIG. 1A, multiple virtual props 302A are displayed in the human-computer interaction interface 301A. The virtual props 302A are identified through range recognition or crosshair recognition, and the picking operation for the corresponding virtual prop 302A is then completed.
  • As shown in FIG. 1B, only the synthesis station 302B and the non-player character 303B are displayed in the human-computer interaction interface 301B. The synthesis station 302B and the non-player character 303B are identified through range recognition, and in response to a trigger operation on the interactive control 304B for the synthesis station 302B, the synthesis station is triggered to perform synthesis processing.
  • In the related art, players can place interactive virtual props in virtual scenes, so there are more possibilities for the planar and spatial positions of virtual props, and the rules for accurate interaction between virtual objects and virtual props become more complex, making accurate interaction difficult to achieve. Moreover, there may be different interactive actions between a virtual object and multiple interactive virtual props, and players cannot determine which virtual prop the virtual object will interact with.
  • Embodiments of the present application provide a method, device, electronic device, computer-readable storage medium, and computer program product for prop interaction in a virtual scene, which can recommend to the user the virtual props to be interacted with, thereby improving the efficiency of human-computer interaction.
  • Exemplary applications of the electronic devices provided by the embodiments of the present application are described below. The electronic devices provided by the embodiments of the present application can be implemented as notebook computers, tablet computers, desktop computers, set-top boxes, mobile devices (for example, mobile phones, portable music players, personal digital assistants, dedicated messaging devices, portable gaming devices, and virtual reality hardware devices), and other types of user terminals.
  • The prop interaction method of the virtual scene provided by the embodiment of the present application can be applied to a virtual reality hardware device, and the virtual scene can be output based entirely on the virtual reality hardware device, or based on collaboration between the terminal and the server.
  • For example, the server calculates the scene display data of the virtual scene, where the scene display data includes the prop display data of the first virtual prop in the selected state, and sends the scene display data to the virtual reality hardware device. At least a partial area of the virtual scene including the virtual object is displayed in the virtual reality hardware device; at least two virtual props with interactive functions are displayed in the at least partial area; the first virtual prop in the selected state among the at least two virtual props is displayed; and at least one interactive control corresponding to the first virtual prop is displayed. After receiving the account's trigger operation on the interactive control, the virtual reality hardware device sends the operation data of the trigger operation to the server through the network; the server calculates, based on the operation data, the response data corresponding to the interactive function of the interactive control and sends the response data to the virtual reality hardware device through the network; based on the response data, the interaction process between the virtual object and the virtual prop is displayed in the virtual reality hardware device.
  • In some embodiments, the virtual scene can be output based entirely on the terminal, or based on collaboration between the terminal and the server. The virtual scene may be an environment for game characters to interact in; for example, game characters may compete in the virtual scene.
  • Figure 2 is a schematic diagram of an application mode of the prop interaction method for a virtual scene provided by the embodiment of the present application, applied to the terminal 400 and the server 200. This mode is generally suitable for applications that rely on the computing power of the server 200 to calculate the virtual scene and output the virtual scene on the terminal 400.
  • For example, the user logs in to a client running on the terminal 400 (such as an online version of a game application) through an account, and the server 200 calculates the scene display data of the virtual scene, where the scene display data includes the prop display data of the first virtual prop in the selected state, and sends the scene display data to the terminal 400. At least a partial area of the virtual scene including the virtual object is displayed in the human-computer interaction interface of the client; at least two virtual props with interactive functions are displayed in the at least partial area; the first virtual prop in the selected state among the at least two virtual props is displayed; and at least one interactive control corresponding to the first virtual prop is displayed. The terminal 400 receives the account's trigger operation on the interactive control and sends the operation data of the trigger operation to the server 200 through the network 300; the server 200 calculates, based on the operation data, the response data corresponding to the interactive function of the interactive control and sends the response data to the terminal 400 through the network 300; based on the response data, the interaction process between the virtual object and the virtual prop is displayed in the human-computer interaction interface of the terminal 400.
  • As another example, the user logs in to a client (such as an online version of a game application) running on the terminal 400 through an account, and the client calculates the scene display data of the virtual scene, where the scene display data includes the prop display data of the first virtual prop in the selected state. At least a partial area of the virtual scene including the virtual object is displayed in the client's human-computer interaction interface; at least two virtual props with interactive functions are displayed in the at least partial area; and the first virtual prop in the selected state among the at least two virtual props is displayed. The terminal 400 receives the account's trigger operation on the interactive control, the client calculates, based on the operation data, the response data corresponding to the interactive function of the interactive control, and the interaction process between the virtual object and the virtual prop is displayed in the human-computer interaction interface of the terminal 400 based on the response data.
  • In some embodiments, the terminal 400 can implement the prop interaction method of the virtual scene provided by the embodiments of the present application by running a computer program. For example, the computer program can be a native program or software module in the operating system; a native application (APP), that is, a program that needs to be installed in the operating system to run, such as a game APP (the above-mentioned client) or a live-streaming APP; or a mini program that only needs to be downloaded into the browser environment to run. In general, the computer program described above can be any form of application, module, or plug-in.
  • Cloud technology refers to a hosting technology that unifies a series of resources such as hardware, software, and networks within a wide area network or a local area network to realize data calculation, storage, processing, and sharing.
  • Cloud technology is a general term for network technology, information technology, integration technology, management platform technology, and application technology based on the cloud computing business model. It can form a resource pool that is used on demand, flexibly and conveniently. Cloud computing technology will become an important support: the background services of technical network systems require a large amount of computing and storage resources.
  • In some embodiments, the server 200 can be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communications, middleware services, domain name services, security services, CDN, and big data and artificial intelligence platforms.
  • the terminal 400 can be a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, etc., but is not limited thereto.
  • the terminal 400 and the server 200 can be connected directly or indirectly through wired or wireless communication methods, which are not limited in the embodiments of this application.
  • FIG. 3 is a schematic structural diagram of an electronic device applying the prop interaction method of a virtual scene provided by an embodiment of the present application. Here, the electronic device is described using a terminal as an example.
  • The terminal 400 shown in Figure 3 includes: at least one processor 410, a memory 450, at least one network interface 420, and a user interface 430. The various components in the terminal 400 are coupled together by a bus system 440, which is used to implement connection and communication between these components. In addition to a data bus, the bus system 440 also includes a power bus, a control bus, and a status signal bus. For the sake of clarity, however, the various buses are all labeled as the bus system 440 in FIG. 3.
  • The processor 410 may be an integrated circuit chip with signal processing capabilities, such as a general-purpose processor, a digital signal processor (DSP, Digital Signal Processor), another programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, where the general-purpose processor can be a microprocessor or any conventional processor.
  • User interface 430 includes one or more output devices 431 that enable the presentation of media content, including one or more speakers and/or one or more visual displays.
  • User interface 430 also includes one or more input devices 432, including user interface components that facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, and other input buttons and controls.
  • Memory 450 may be removable, non-removable, or a combination thereof.
  • Exemplary hardware devices include solid state memory, hard disk drives, optical disk drives, etc.
  • Memory 450 may include one or more storage devices physically located remotely from processor 410 .
  • Memory 450 includes volatile memory or non-volatile memory, and may include both volatile and non-volatile memory. The non-volatile memory can be a read-only memory (ROM, Read Only Memory), and the volatile memory can be a random access memory (RAM, Random Access Memory). The memory 450 described in the embodiments of this application is intended to include any suitable type of memory.
  • the memory 450 is capable of storing data to support various operations, examples of which include programs, modules, and data structures, or subsets or supersets thereof, as exemplarily described below.
  • the operating system 451 includes system programs used to process various basic system services and perform hardware-related tasks, such as the framework layer, core library layer, driver layer, etc., which are used to implement various basic services and process hardware-based tasks;
  • Network communication module 452 for reaching other electronic devices via one or more (wired or wireless) network interfaces 420; exemplary network interfaces 420 include Bluetooth, Wireless Fidelity (WiFi), Universal Serial Bus (USB, Universal Serial Bus), and the like;
  • Presentation module 453 for enabling the presentation of information (e.g., a user interface for operating peripheral devices and displaying content and information) via one or more output devices 431 (e.g., display screens, speakers, etc.) associated with the user interface 430;
  • An input processing module 454 for detecting one or more user inputs or interactions from one or more input devices 432 and translating the detected inputs or interactions.
  • In some embodiments, the prop interaction device of the virtual scene provided by the embodiment of the present application can be implemented in software. Figure 3 shows the prop interaction device 455-1 of the virtual scene stored in the memory 450, which can be software in the form of a program, a plug-in, or the like, including a first display module 4551. Figure 3 also shows a prop interaction device 455-2 of the virtual scene stored in the memory 450, which can be software in the form of a program, a plug-in, or the like, including a second display module 4552. These modules are logical, so they can be arbitrarily combined or further split according to the functions implemented. The functions of each module are explained below.
  • In the following, the electronic device that implements the prop interaction method of a virtual scene according to the embodiment of the present application is described using a terminal device as an example; the execution subject of each step will therefore not be repeatedly described below.
  • FIG. 4A is a schematic flowchart of a prop interaction method in a virtual scene provided by an embodiment of the present application, which will be described in conjunction with steps 101 to 102 shown in FIG. 4A .
  • In step 101, at least part of the area of the virtual scene is displayed on the human-computer interaction interface, where the at least part of the area includes a virtual object.
  • Here, the at least part of the area can be the entire area or a partial area of the virtual scene, and the virtual object can be fully displayed in the human-computer interaction interface (that is, the whole body of the virtual object is displayed) or partially displayed (for example, only the upper body of the virtual object is displayed).
  • In step 102, in response to the appearance of at least two virtual props with interactive functions in the at least part of the area, it is displayed that a first virtual prop among the at least two virtual props is in a selected state, and at least one interactive control corresponding to the first virtual prop is displayed.
  • For example, the human-computer interaction interface 501B displays a chair 502B and a supply box 503B; the chair 502B and the supply box 503B are both virtual props. The chair 502B is the first virtual prop: a check mark 504B is displayed on the chair 502B (representing the selected state), and the interactive control 505B of the chair 502B is also displayed in the human-computer interaction interface. In response to a trigger operation on the interactive control 505B, the virtual object 506B sits down on the chair 502B. Here, the number of first virtual props is one; the first virtual prop is displayed in the selected state together with its corresponding interactive control, thus making clear to the user which virtual prop should be prioritized for interaction.
  • Here, the interactive control is configured to be triggered to perform the interactive function corresponding to the interactive control, and the interactive function is used for the virtual object to interact with the first virtual prop. The first virtual prop may correspond to one interactive control or to multiple interactive controls. For example, when the first virtual prop is a chair, it corresponds to a "sit down" interactive control; when the "sit down" interactive control is triggered, the virtual object is controlled to sit down. When the first virtual prop is a vehicle, there are corresponding "drive" and "sit in a car" interactive controls; when the "drive" interactive control is triggered, the virtual object is controlled to enter the driver's seat of the vehicle.
  • In some embodiments, the at least two virtual props with interactive functions can be located at any position in the partial area, or within the interaction range of the virtual object, where the interaction range is a circle centered on the virtual object with a specified distance as its radius. The interaction range of the virtual object may be the at least partial area displayed in the human-computer interaction interface, or may lie within that area; in the latter case, the interaction range of the virtual object is a sub-area of the at least partial area displayed in the human-computer interaction interface.
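  • As a minimal sketch of the circular interaction range described above (the function and parameter names are illustrative assumptions, not from the patent), membership of a prop in the interaction range reduces to a distance check on the ground plane:

        import math

        def in_interaction_range(object_pos, prop_pos, radius):
            # The interaction range is a circle centered on the virtual
            # object with the specified distance as its radius.
            dx = prop_pos[0] - object_pos[0]
            dy = prop_pos[1] - object_pos[1]
            return math.hypot(dx, dy) <= radius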
  • In some embodiments, in response to a change in the position of the virtual object, a third virtual prop among the at least two virtual props enters the selected state and the first virtual prop enters the unselected state. For example, the position of the virtual object changes, moving away from the first virtual prop and approaching the third virtual prop; when the distance between the virtual object and the first virtual prop is greater than a first distance threshold and the distance between the virtual object and the third virtual prop is less than a second distance threshold, the virtual prop most suitable for interaction with the virtual object changes from the first virtual prop to the third virtual prop, so the third virtual prop automatically enters the selected state while the first virtual prop enters the unselected state.
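  • A hedged sketch of the threshold-based switch just described (the helper name and the two threshold parameters are hypothetical placeholders):

        def update_selected(selected, candidate, dist_to_selected,
                            dist_to_candidate, first_threshold, second_threshold):
            # Switch the selection to the candidate prop when the object has
            # moved farther than the first threshold from the currently
            # selected prop and closer than the second threshold to the candidate.
            if dist_to_selected > first_threshold and dist_to_candidate < second_threshold:
                return candidate
            return selected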
  • In some embodiments, in response to the appearance of at least two virtual props with interactive functions in the at least part of the area, a first display mode is applied to the at least two virtual props in the at least part of the area, where the prominence of the first display mode is positively related to the characteristic values of the at least two virtual props. The characteristic values include at least one of the following: the frequency of use of the virtual prop, the distance between the virtual prop and the virtual object, and the orientation angle between the virtual prop and the virtual object. By applying first display modes with different degrees of prominence to the virtual props, the differences between virtual props can be reflected and richer information can be conveyed to the user, such as comparative information on frequency of use or on distance, so that players can obtain richer information through the human-computer interaction interface, which helps improve user experience and information display efficiency.
  • Here, the orientation angle between a virtual prop and the virtual object can be obtained in the following way: obtain the first connecting line between the virtual prop and the virtual object, obtain the first ray with the virtual object as the endpoint pointing in the crosshair direction (the orientation of the virtual object), and take the angle between the first connecting line and the first ray as the orientation angle between the virtual prop and the virtual object. The orientation angle represents the degree to which the virtual prop deviates from the virtual object's facing direction.
  • In some embodiments, the virtual props in the human-computer interaction interface can be displayed in the first display mode; for example, the first display mode applies a stroke special effect to the virtual props, where the stroke special effect deepens the outline lines of the virtual props. For an example of the special effect, see Figure 5D: the human-computer interaction interface 501D displays a chair 502D and a supply box 503D, both of which are virtual props, and the chair is displayed more prominently than the supply box. For example, the frequency of use of the chair is higher than that of the supply box, so the prominence of the chair is higher than that of the supply box; or the distance between the chair and the virtual object is greater than the distance between the supply box and the virtual object, so the prominence of the chair is higher than that of the supply box.
  • In some embodiments, in response to the appearance of at least two virtual props with interactive functions in the at least part of the area, the first display mode is applied to the at least two virtual props in the human-computer interaction interface, and a second display mode is applied to the other virtual props in the human-computer interaction interface, where the second display mode is different from the first display mode and the other virtual props do not have interactive functions.
  • For example, the human-computer interaction interface 501E displays a chair 502E and a supply box 503E; the human-computer interaction interface also displays a stone 504E. The chair 502E and the supply box 503E are virtual props with interactive functions (the chair 502E and the supply box 503E are first virtual props), and the stone 504E is another virtual prop without an interactive function. Therefore, the first display mode, based on the stroke special effect, is applied to the chair 502E and the supply box 503E, and a second display mode without the stroke special effect is applied to the stone 504E. The first display mode is not limited to the stroke special effect, as long as the first display mode differs from the second display mode.
  • In some embodiments, a first sector area is determined with the virtual object as the center, a set distance as the radius, and a first angle as the central angle, where the orientation of the virtual object coincides with the angular bisector of the central angle of the first sector area. At least one first candidate virtual prop that overlaps the first sector area is determined, where the projection area of the first candidate virtual prop on the ground of the virtual scene overlaps the first sector area, and one of the at least one first candidate virtual prop is used as the first virtual prop.
  • As shown in FIG. 8A, the ground is the plane formed by the X and Y coordinate axes; the area occupied by the sector is the first sector area, and the ray in the virtual object's facing direction is the angular bisector of the first sector. Figure 8A shows the projection 801A of the virtual prop A1 on the ground and the projection 802A of the virtual prop A2 on the ground in the virtual scene; the two first candidate virtual props are both within the center field of view. One of the two first candidate virtual props is used as the first virtual prop, for example, the virtual prop A1 corresponding to the projection 801A shown in FIG. 8A.
  • In some embodiments, the above use of one first candidate virtual prop among the at least one first candidate virtual prop as the first virtual prop can be achieved through the following technical solution, as sketched below: when the first sector area includes one first candidate virtual prop, that first candidate virtual prop is determined to be the first virtual prop; when the first sector area includes two first candidate virtual props, the first candidate virtual prop with the larger first overlapping area is used as the first virtual prop, where the first overlapping area is the overlapping area of the first candidate virtual prop and the first sector area; when the first sector area includes at least three first candidate virtual props, the following processing is performed: determine a second sector area with the virtual object as the center, the set distance as the radius, and a second angle as the central angle, where the orientation of the virtual object coincides with the angular bisector of the central angle of the second sector area and the second angle is smaller than the first angle; determine at least one second candidate virtual prop that overlaps the second sector area, where the projection area of the second candidate virtual prop on the ground overlaps the second sector area; and use the second candidate virtual prop with the largest second overlapping area as the first virtual prop, where the second overlapping area is the overlapping area of the second candidate virtual prop and the second sector area.
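  • The narrowing-field-of-view selection above can be sketched with an off-the-shelf geometry library such as shapely (the polygonal sector approximation, the function names, and the fallback to the wide sector when no prop overlaps the narrow one are assumptions of this sketch, not requirements of the patent):

        import math
        from shapely.geometry import Polygon

        def sector_polygon(center, facing_deg, radius, angle_deg, steps=32):
            # Approximate the sector whose angular bisector is the facing ray.
            cx, cy = center
            start = math.radians(facing_deg - angle_deg / 2)
            end = math.radians(facing_deg + angle_deg / 2)
            arc = [(cx + radius * math.cos(start + (end - start) * i / steps),
                    cy + radius * math.sin(start + (end - start) * i / steps))
                   for i in range(steps + 1)]
            return Polygon([(cx, cy)] + arc)

        def pick_first_prop(center, facing_deg, radius, projections,
                            first_angle, second_angle):
            # projections: dict mapping prop name -> shapely Polygon of the
            # prop's projection on the ground of the virtual scene.
            wide = sector_polygon(center, facing_deg, radius, first_angle)
            cands = {n: p.intersection(wide).area
                     for n, p in projections.items() if p.intersects(wide)}
            if not cands:
                return None
            if len(cands) <= 2:
                # One candidate: take it. Two: take the larger first overlap.
                return max(cands, key=cands.get)
            # Three or more candidates: narrow the field of view and re-rank
            # by the second overlapping area.
            narrow = sector_polygon(center, facing_deg, radius, second_angle)
            second = {n: projections[n].intersection(narrow).area
                      for n in cands if projections[n].intersects(narrow)}
            pool = second if second else cands
            return max(pool, key=pool.get)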
  • Referring again to FIG. 8A, the ground is the plane formed by the X and Y coordinate axes; the area occupied by the sector is the first sector area, and the ray in the virtual object's facing direction is the angular bisector of the first sector. Figure 8A shows the projection 801A of the virtual prop A1 on the ground and the projection 802A of the virtual prop A2 on the ground in the virtual scene; the two first candidate virtual props are both within the center field of view. The virtual object preferentially interacts with the first candidate virtual prop whose projection occupies a larger proportion of the center field of view. Therefore, the first candidate virtual prop with the larger overlapping area between its projection and the first sector area can be directly used as the first virtual prop for priority interaction; that is, the virtual prop A1 corresponding to the projection 801A shown in FIG. 8A is used as the first virtual prop for priority interaction.
  • As shown in FIG. 8B, the ground is the plane formed by the X and Y coordinate axes. The first sector 805B (the first center field of view) is obtained with the position of the virtual object 804B as the center and the set distance r as the radius; the area occupied by the first sector is the first sector area, and the ray in the virtual object's facing direction (that is, the orientation of the virtual object) is the angular bisector of the first sector 805B. Figure 8B shows the projections of the virtual props B1, B2, and B3, all of which overlap the first sector 805B, indicating that there are at least three first candidate virtual props in the first center field of view. Therefore, the second center field of view can be obtained to determine the first candidate virtual prop for priority interaction: the second sector 806B (the second center field of view) is obtained with the position of the virtual object 804B as the center and the set distance r as the radius; the area occupied by the second sector is the second sector area, and the ray in the virtual object's facing direction (that is, the orientation of the virtual object) is the angular bisector of the second sector 806B. The second sector is smaller than the first sector (the central angle of the second sector is smaller than the central angle of the first sector). The virtual props B1, B2, and B3 are all within the first center field of view; the virtual props B1 and B2 are both within the second center field of view, while the virtual prop B3 is not. Therefore, the virtual props B1 and B2 are second candidate virtual props, and the second candidate virtual prop with the largest overlapping area between its projection and the second sector area is directly used as the first virtual prop for priority interaction, that is, the virtual prop B2 corresponding to the projection 802B shown in FIG. 8B.
  • In some embodiments, before displaying that the first virtual prop among the at least two virtual props is in the selected state, any one of the following processes is performed: sort the at least two virtual props according to frequency of use and use the virtual prop ranked first as the first virtual prop, where the frequency of use is the usage frequency of the current virtual object or the usage frequency of all virtual objects; sort the at least two virtual props according to scene distance and use the virtual prop ranked first as the first virtual prop, where the scene distance is the distance between the virtual prop and the virtual object in the virtual scene; or sort the at least two virtual props according to the most recent use time and use the virtual prop ranked first as the first virtual prop, where the most recent use time is the most recent moment at which the virtual object used the virtual prop.
  • In this way, the first virtual prop that needs to be displayed in the selected state is clearly determined, so the user does not need to manually select a virtual prop, thereby improving the efficiency of human-computer interaction; and displaying only the interactive controls of the top-ranked virtual prop can improve display resource utilization.
  • The above sorting can be an ascending sort or a descending sort. For example, a descending sort can be adopted when sorting by frequency of use and by most recent use time, and an ascending sort when sorting by scene distance; that is, the first virtual prop is the most frequently used virtual prop among the at least two virtual props, or the virtual prop closest to the virtual object among the at least two virtual props, or the virtual prop most recently used by the virtual object among the at least two virtual props.
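  • A minimal sketch of the three single-criterion rankings (the data model is a hypothetical assumption; only the sort directions follow the text above):

        from dataclasses import dataclass

        @dataclass
        class Prop:
            name: str
            use_count: int     # frequency of use
            distance: float    # scene distance to the virtual object
            last_used: float   # moment of most recent use

        def first_prop(props, criterion="distance"):
            if criterion == "frequency":
                return max(props, key=lambda p: p.use_count)  # descending
            if criterion == "recency":
                return max(props, key=lambda p: p.last_used)  # descending
            return min(props, key=lambda p: p.distance)       # ascending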
  • In some embodiments, before displaying that the first virtual prop among the at least two virtual props is in the selected state, historical interaction data and prop parameters are obtained for the at least two virtual props in the virtual scene, where the historical interaction data of each virtual prop includes the scene parameters for each use of the virtual prop. The following processing is then performed through a first neural network model: scene features are extracted from the scene parameters and prop features are extracted from the prop parameters; the scene features and prop features are fused to obtain a first fusion feature; the first fusion feature is mapped to a first probability of each virtual prop being adapted to the virtual scene; the at least two virtual props are sorted in descending order of the first probability, and the top-ranked virtual prop is used as the first virtual prop.
  • Using a neural network model can improve the intelligence and accuracy of displaying the first virtual prop's interactive controls, effectively improving the efficiency of the user's interaction with virtual props; and displaying only the interactive controls of the top-ranked virtual prop can effectively improve the utilization efficiency of display resources.
  • Here, the scene parameters include battle data, environmental data, status data of virtual objects, and the like; the prop parameters are parameters of the virtual prop itself, such as the purpose of the prop.
  • The following describes how to train the above neural network model: collect sample scene parameters and sample prop parameters of each sample virtual prop in a sample virtual scene; construct a training sample based on the collected sample scene parameters and sample prop parameters, and use the training sample as the input of the neural network model to be trained; use whether the sample virtual prop is the preferred virtual prop in the sample virtual scene as the label data (for example, when the sample virtual prop is the preferred virtual prop, its label is 1); and train the neural network model to be trained based on the training samples and the label data, so that whether a given virtual prop should be the first virtual prop recommended with priority can subsequently be determined directly through the first neural network model.
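  • A minimal sketch of the extract-fuse-score model in PyTorch (the layer sizes, the concatenation-based fusion, and the binary cross-entropy objective are assumptions of this sketch; the patent only specifies extracting scene and prop features, fusing them, and mapping the fusion to a first probability):

        import torch
        import torch.nn as nn

        class PropScorer(nn.Module):
            def __init__(self, scene_dim, prop_dim, hidden=64):
                super().__init__()
                self.scene_enc = nn.Sequential(nn.Linear(scene_dim, hidden), nn.ReLU())
                self.prop_enc = nn.Sequential(nn.Linear(prop_dim, hidden), nn.ReLU())
                # Fuse by concatenation, then map to one probability per prop.
                self.head = nn.Sequential(nn.Linear(hidden * 2, hidden), nn.ReLU(),
                                          nn.Linear(hidden, 1), nn.Sigmoid())

            def forward(self, scene_params, prop_params):
                fused = torch.cat([self.scene_enc(scene_params),
                                   self.prop_enc(prop_params)], dim=-1)
                return self.head(fused).squeeze(-1)  # first probability

        # Training with binary labels (1 = preferred prop in the sample scene):
        # loss = nn.BCELoss()(model(scene_batch, prop_batch), label_batch)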
  • FIG. 4B is a schematic flowchart of the prop interaction method in a virtual scene provided by an embodiment of the present application; based on FIG. 4A, step 103 in FIG. 4B can also be performed.
  • In step 103, for at least one second virtual prop that is not in the selected state, a switching control corresponding to the at least one second virtual prop is displayed.
  • Here, the second virtual prop has an interactive function, and the switching control is configured to be triggered to display the interactive control corresponding to the at least one second virtual prop.
  • In some embodiments, in response to a trigger operation on the switching control, at least one interactive control of the second virtual prop is displayed (the second virtual prop is displayed in the selected state and the first virtual prop in the unselected state), and the at least one interactive control of the first virtual prop is hidden. The second virtual prop may correspond to one interactive control or to multiple interactive controls. For example, when the second virtual prop is a chair, it corresponds to a "sit down" interactive control; when the "sit down" interactive control is triggered, the virtual object is controlled to sit down on the chair. When the second virtual prop is a vehicle, there are corresponding "drive" and "sit in a car" interactive controls; when the "drive" interactive control is triggered, the virtual object is controlled to enter the driver's seat of the vehicle.
  • For example, the human-computer interaction interface 501B displays a chair 502B and a supply box 503B; a check mark 504B is displayed on the chair 502B (indicating the selected state). The chair 502B is the first virtual prop, and the supply box 503B is not in the selected state. The interactive control 505B of the chair 502B is also displayed in the human-computer interaction interface. In response to a trigger operation on the switching control, the interactive control 505B of the chair 502B is hidden in the human-computer interaction interface, the interactive control 508B of the supply box 503B is displayed, and the check mark 504B is displayed on the supply box 503B.
  • In some embodiments, prop identifications corresponding to multiple second virtual props are displayed. In response to a trigger operation on a prop identification, at least one interactive control of the second virtual prop corresponding to the triggered prop identification is displayed (the second virtual prop is displayed in the selected state and the first virtual prop in the unselected state), and the at least one interactive control of the first virtual prop is hidden.
  • For example, the human-computer interaction interface 501C displays a first chair 502C, a supply box 503C, and a second chair 504C. A check mark 505C is displayed on the first chair 502C; the first chair 502C is the first virtual prop, and the supply box 503C and the second chair 504C are second virtual props. The interactive control 506C of the first chair 502C is also displayed in the human-computer interaction interface. In response to a trigger operation on the switching control 511C, the interactive control 506C of the first chair 502C is hidden in the human-computer interaction interface, and the identification control 508C of the supply box 503C and the identification control 509C of the second chair are displayed. In response to a trigger operation on the identification control 508C, a check mark 505C is displayed on the supply box 503C, the identification control 508C of the supply box 503C and the identification control 509C of the second chair are hidden in the human-computer interaction interface, and the interactive control 510C of the supply box 503C is displayed.
  • In some embodiments, the above display of prop identifications corresponding to multiple second virtual props can be achieved through the following technical solution: displaying prop identifications (for example, identification controls) corresponding one-to-one with the multiple second virtual props in a set order, where the number of the multiple second virtual props is any one of the following: a set number (the set number here can be an average of the numbers of second virtual props in multiple historical virtual scenes); a number positively correlated with the size of the human-computer interaction interface; a number positively correlated with the area of the free area of the virtual scene; or a number positively correlated with the total number of second virtual props. In this way, a recommendation function can be provided to the user and display resource utilization can be effectively improved.
  • Taking the number of second virtual props in the unselected state as 5 as an example (that is, there are 5 second virtual props in the unselected state in the virtual scene), see Figure 5C: the number of the multiple second virtual props is 2, which means that for the 5 unselected second virtual props, only the prop identifications of 2 second virtual props are displayed. The number of second virtual props showing prop identifications can be a set number. The number can also be positively correlated with the size of the human-computer interaction interface: the larger the human-computer interaction interface, the more second virtual props display prop identifications; for example, when the human-computer interaction interface is large enough, the prop identifications of 4 of the 5 second virtual props can be displayed. The number can also be positively correlated with the area of the free area of the virtual scene: the larger the free area of the virtual scene, the more second virtual props displaying prop identifications appear in the human-computer interaction interface; for example, when the free area of the virtual scene is larger, the prop identifications of 4 of the 5 second virtual props can be displayed. The number can also be positively correlated with the total number of second virtual props: the greater the number of second virtual props, the more of them display prop identifications; for example, the prop identifications of 3 second virtual props can be displayed when there are 5 second virtual props, while the prop identifications of 2 second virtual props can be displayed when there are 4 second virtual props.
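  • Purely as an illustration of these positive correlations (the formula, reference values, and names are invented for this sketch; the patent only requires monotonicity):

        def num_prop_ids_to_show(n_second_props, ui_size, free_area,
                                 base=2, ui_ref=1.0, area_ref=1.0):
            # Larger interface, larger free area, and more second virtual
            # props each allow more prop identifications to be displayed.
            bonus = int(ui_size > ui_ref) + int(free_area > area_ref)
            return min(n_second_props, base + bonus + n_second_props // 3)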
  • In some embodiments, the set order is any one of the following: the order of the second virtual props' frequency of use, from high to low or from low to high; the order of the second virtual props' scene distance, from small to large or from large to small, where the scene distance is the distance between the virtual prop and the virtual object in the virtual scene; the order of the second virtual props' most recent use time, from most recent to least recent or from least recent to most recent, where the most recent use time is the most recent moment at which the virtual object used the second virtual prop; or the order of the interaction efficiency between the second virtual prop and the virtual object, from small to large or from large to small.
  • Here, the above sorting may be an ascending sort or a descending sort. For example, a descending sort can be used when sorting by frequency of use, most recent use time, and interaction efficiency; the second virtual props displaying prop identifications are then the virtual props used more frequently among the second virtual props in the unselected state, or the second virtual props closer to the virtual object, or the second virtual props used by the virtual object in a recent period of time, or the second virtual props with higher interaction efficiency. Alternatively, when sorting by frequency of use, most recent use time, and interaction efficiency, an ascending sort can be used; the second virtual props displaying prop identifications are then the virtual props used less frequently among the second virtual props in the unselected state, or the second virtual props farther from the virtual object, or the second virtual props used less recently.
  • In some embodiments, the interaction efficiency can be determined as follows: a second sector area is determined with the virtual object as the center, the set distance as the radius, and the second angle as the central angle, where the orientation of the virtual object coincides with the angular bisector of the central angle of the second sector area and the second angle is smaller than the first angle; the third overlapping area of each second virtual prop and the second sector area is obtained, where the third overlapping area is the overlap between the projection area of the second virtual prop on the ground of the virtual scene and the second sector area; and an interaction efficiency positively related to the area of the third overlapping area is obtained.
  • As shown in FIG. 8E, which shows the projection 801E of the virtual prop E1 on the ground in the virtual scene, the projection 802E of the virtual prop E2, the projection 803E of the virtual prop E3, and the projection 807E of the virtual prop E4, the virtual prop E2 and the virtual prop E4 are in a stacked state on the Z axis. The second sector 806E (the second center field of view) is obtained with the position of the virtual object 804E as the center and the set distance r as the radius; the area occupied by the second sector 806E is the second sector area, and the ray in the virtual object's facing direction is the angular bisector of the second sector 806E. The virtual props E1, E2, and E4 are all within the second center field of view. The virtual prop with the largest overlapping area between its projection and the second sector can directly be used as the first virtual prop for priority interaction, that is, the virtual prop E2 corresponding to the projection 802E shown in Figure 8E; the virtual prop E1 corresponding to the projection 801E and the virtual prop E4 corresponding to the projection 807E are then second virtual props. Since the area of the third overlapping area corresponding to the projection 801E is larger than the area of the third overlapping area corresponding to the projection 807E, the interaction efficiency of the virtual prop E1 is higher than the interaction efficiency of the virtual prop E4.
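  • Reusing the sector_polygon helper from the earlier sketch, the interaction-efficiency ranking can be illustrated as follows (taking the third overlapping area itself as the efficiency score is an assumption; the patent only requires a positive correlation):

        def interaction_efficiency(prop_projection, second_sector):
            # Third overlapping area: overlap between the prop's ground
            # projection and the second sector area (shapely Polygons).
            return prop_projection.intersection(second_sector).area

        def rank_second_props(projections, second_sector):
            # Order the unselected second virtual props by descending efficiency.
            return sorted(projections,
                          key=lambda n: interaction_efficiency(projections[n],
                                                               second_sector),
                          reverse=True)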
  • FIG. 4C is a schematic flowchart of a prop interaction method in a virtual scene provided by an embodiment of the present application, which will be described in conjunction with steps 201 to 203 shown in FIG. 4C .
  • step 201 at least part of the area in the virtual scene is displayed on the human-computer interaction interface.
  • At least part of the area includes virtual objects.
  • step 202 in response to the appearance of at least two virtual props in at least part of the area, a first virtual prop with an interactive function among the at least two virtual props and at least one interactive control corresponding to the first virtual prop are displayed based on the selected state.
  • the interactive control is used to be triggered to execute an interactive function corresponding to the interactive control, and the interactive function is used for the virtual object to interact with the first virtual prop.
In step 203, for at least one second virtual prop that is not in the selected state, a switching control corresponding to the at least one second virtual prop is displayed. The switching control is configured to be triggered to display the interactive control corresponding to at least one second virtual prop.
For the implementation of steps 201 to 203, reference may be made to the implementation of steps 101 to 103.
In response to at least two virtual props with interactive functions appearing in at least part of the area, the first virtual prop among the at least two virtual props is displayed in the selected state, together with at least one interactive control corresponding to the first virtual prop. The automatically selected first virtual prop and its corresponding interactive control are thus shown to the player directly, eliminating the need for the player to manually select the virtual prop to interact with, which effectively improves the efficiency of human-computer interaction. In addition, the switching control enables switching the display between the interactive controls of multiple virtual props, so that interaction with multiple virtual props is possible with a limited number of controls, effectively improving display resource utilization.
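As an example of how this selected-state and control-switching behavior could be organized on the client, the following minimal Python sketch is offered; the class and method names (`PropInteractionUI`, `select`, `switch_to`) are hypothetical and the embodiment does not prescribe this structure:

```python
class PropInteractionUI:
    """Minimal sketch: one prop is selected; its controls are shown,
    and every other interactable prop is reachable via the switch control."""

    def __init__(self, props):
        # props: list of (prop name, [control labels]) for interactable props
        self.props = dict(props)
        names = list(self.props)
        self.selected = names[0]        # automatically selected first virtual prop
        self.unselected = names[1:]     # second virtual props

    def visible_controls(self):
        controls = [f"{self.selected}:{c}" for c in self.props[self.selected]]
        if self.unselected:             # show a switch control only when needed
            controls.append("switch")
        return controls

    def switch_to(self, name):
        # triggered via the switch control: hide the old prop's controls
        # and show the newly selected prop's controls
        assert name in self.unselected
        self.unselected.remove(name)
        self.unselected.append(self.selected)
        self.selected = name


ui = PropInteractionUI([("chair", ["sit"]), ("supply box", ["open"])])
print(ui.visible_controls())   # ['chair:sit', 'switch']
ui.switch_to("supply box")
print(ui.visible_controls())   # ['supply box:open', 'switch']
```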
As an example, an account logs in to a client running on a terminal (such as an online game application), and the server computes the scene display data of the virtual scene, which includes the prop display data of the first virtual prop in the selected state. The server sends the scene display data to the terminal; the client's human-computer interaction interface then displays at least part of the virtual scene including the virtual object, displays at least two virtual props with interactive functions in that area, displays the first virtual prop among them in the selected state, and displays at least one interactive control corresponding to the first virtual prop. The terminal receives the account's trigger operation on the interactive control and sends the operation data of the trigger operation to the server over the network; the server computes, based on the operation data, the response data of the interactive function corresponding to the interactive control, and sends the response data back to the terminal over the network. Based on the response data, the interaction process between the virtual object and the virtual prop is displayed in the terminal's human-computer interaction interface.
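The terminal/server loop above could be summarized with the following minimal sketch, assuming simple message types; the dataclass fields and the handler mapping are illustrative, not the embodiment's actual protocol:

```python
from dataclasses import dataclass

@dataclass
class TriggerOperation:        # terminal -> server
    account_id: str
    prop_id: str
    control: str               # e.g. "sit"

@dataclass
class ResponseData:            # server -> terminal
    prop_id: str
    animation: str             # what the client should display

def handle_trigger(op: TriggerOperation) -> ResponseData:
    # server side: map the triggered control to the interaction to display
    animations = {"sit": "object_sits_on_prop", "open": "prop_opens"}
    return ResponseData(op.prop_id, animations.get(op.control, "idle"))

resp = handle_trigger(TriggerOperation("player-1", "chair-502B", "sit"))
print(resp)   # ResponseData(prop_id='chair-502B', animation='object_sits_on_prop')
```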
In the game scene, the player controls the virtual object to interact with the interactable virtual props in the virtual scene. Once the virtual object has selected a virtual prop, it can perform a variety of interactive actions with it: when the virtual prop is a chair, the virtual object can sit on the chair; when the virtual prop is a bathtub, the virtual object can fill or drain the bathtub. However, when multiple virtual props are within the interaction range of the virtual object at the same time, the user's selection operation for the chair must be received first, and only then the sit-down operation, that is, the user has to perform two operations to complete one interaction. The prop interaction method provided by the embodiments of the present application makes the currently interacting virtual prop explicit and switches efficiently and accurately among more interactable virtual props, which can effectively improve the efficiency of human-computer interaction: for example, only the user's sit-down operation needs to be received for the virtual object to sit on the chair, completing the interaction with the virtual prop.
Referring to FIG. 5A, a virtual prop 502A is displayed in the human-computer interaction interface 501A with a check mark 503A shown on it, and an interactive control 504A is also displayed in the interface. In response to a triggering operation on the interactive control 504A, the virtual object 505A sits on the virtual prop 502A. When there is only one virtual prop with an interactive function within the recognition range of the virtual object, that virtual prop and its corresponding interactive control are displayed.
Referring to FIG. 5B, the human-computer interaction interface 501B displays a chair 502B and a supply box 503B; the chair 502B displays a check mark 504B, and the interface also displays an interactive control 505B for the chair 502B. In response to a triggering operation on the interactive control 505B, the virtual object 506B sits on the chair 502B. In response to a triggering operation on the switching control 507B, the interactive control 505B of the chair 502B is hidden in the interface, the interactive control 508B of the supply box 503B is displayed, and a check mark 504B is displayed on the supply box 503B.
FIG. 5C is an interface schematic diagram of the prop interaction method in a virtual scene provided by an embodiment of the present application. The human-computer interaction interface 501C displays a first chair 502C, a supply box 503C and a second chair 504C; a check mark 505C is displayed on the first chair 502C, and the interactive control 506C of the first chair 502C is also displayed in the interface. In response to a triggering operation on the interactive control 506C, the virtual object 507C sits on the first chair 502C. In response to a triggering operation on the switching control 511C, the interactive control 506C of the first chair 502C is hidden, and the identification control 508C of the supply box 503C and the identification control 509C of the second chair are displayed. In response to a triggering operation on the identification control 508C of the supply box 503C, a check mark 505C is displayed on the supply box 503C, the identification controls 508C and 509C are hidden, and the interactive control 510C of the supply box 503C is displayed.
When there are at least three virtual props with interactive functions within the recognition range of the virtual object, the interactive control of the virtual props is switched through a judgment mechanism and the currently recommended virtual prop is displayed. When multiple virtual props with interactive functions are close to the virtual object in the virtual scene, the player does not need to move the crosshair to select among them: the virtual prop currently recommended for interaction is displayed accurately. Quantity-based recognition processing is performed on the multiple virtual props with interactive functions within the current recognition range, and a different recognition flow is executed for each quantity, thereby achieving accurate switching among multiple virtual props.
FIG. 6 is a schematic flowchart of a prop interaction method in a virtual scene provided by an embodiment of the present application. The player can control the virtual object to interact with virtual props that can be freely placed in the virtual scene; by identifying and switching virtual props, the method helps the player complete interactions with multiple virtual props accurately and conveniently. In step 601, the virtual scene is subjected to recognition processing to obtain the virtual prop to be interacted with; in step 602, a trigger operation for the virtual prop is received; in step 603, the interaction between the virtual object and the virtual prop is executed. Step 601 can be implemented through step 6011: when one virtual prop is recognized, that virtual prop is taken as the virtual prop to be interacted with. Step 601 can also be implemented through steps 6012 and 6013: in step 6012, when two virtual props are recognized, the virtual prop with the larger overlap with the recognition range is taken as the virtual prop to be interacted with; in step 6013, in response to a switching operation, the display is switched between the interactive controls of the two virtual props. Step 601 can also be implemented through steps 6014 and 6015: in step 6014, when at least three virtual props are recognized, a secondary range recognition based on the crosshair is performed, and the virtual prop with the largest overlap with the recognition range is taken as the virtual prop to be interacted with; in step 6015, in response to a switching operation, the display is switched among the interactive controls of the at least three virtual props.
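As an illustrative sketch of the count-based branching of step 601, assuming the overlap of each recognized prop with the first (wide) and second (narrow) crosshair sectors has already been computed (the function name and values are hypothetical):

```python
def prop_to_interact(first_overlaps, second_overlaps):
    """first_overlaps / second_overlaps: dicts mapping each prop to its
    overlap area with the first and second crosshair sector; props absent
    from a sector simply have overlap 0."""
    recognized = {p: a for p, a in first_overlaps.items() if a > 0}
    if len(recognized) == 1:                  # step 6011: one prop -> take it
        return next(iter(recognized))
    if len(recognized) == 2:                  # step 6012: larger overlap wins
        return max(recognized, key=recognized.get)
    # step 6014: at least three props -> secondary recognition on the
    # narrower sector, largest overlap wins
    narrowed = {p: a for p, a in second_overlaps.items() if a > 0}
    return max(narrowed, key=narrowed.get)

print(prop_to_interact({"B1": 2.0, "B2": 3.0, "B3": 1.0},
                       {"B1": 1.0, "B2": 2.5, "B3": 0.0}))   # B2
```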
In the game scene, the player controls the virtual object to interact with different objects (virtual props), and different types of operations such as viewing, dialogue, and picking up can occur. When multiple objects exist in the virtual scene at the same time, referring to FIGS. 7A-7B, the virtual object interacts with object A, object B and object C; the objects may also be stacked, for example, object B and object D are in a stacked state.
Referring to FIG. 8A, which shows the projection 801A of virtual prop A1 on the ground of the virtual scene and the projection 802A of virtual prop A2, where the ground is the plane formed by the X and Y coordinate axes: a sector 804A (the crosshair field of view) is obtained with the virtual object 803A as the center and the set distance r as the radius, and the ray from the virtual object toward the crosshair direction is the angular bisector of the sector. Since only two virtual prop projections overlap the sector, the two virtual props are within the same crosshair field of view, and the virtual prop whose projection occupies the larger proportion of the crosshair field of view is given priority for interaction. The virtual prop whose projection has the larger overlap area with the sector can therefore be taken directly as the prop for priority interaction: in FIG. 8A, virtual prop A1, corresponding to the projection 801A, is the virtual prop for priority interaction. A triggering operation on the switching control then completes the switching of the display between the interactive controls of the two virtual props; the player does not need to control the virtual object to move the crosshair in order to change a prop's projection proportion within the crosshair field of view.
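One way such a projection/sector overlap area could be estimated is sketched below, modelling each prop's ground projection as a circle and sampling points uniformly inside it; this is a minimal illustration under those assumptions, not the geometry routine of the embodiment:

```python
import math, random

def overlap_area(prop, obj_xy, facing_rad, r, half_angle_rad, samples=20000):
    """Estimate the overlap between a prop's circular ground projection
    (cx, cy, radius) and the sector of radius r centered on obj_xy whose
    angular bisector is the facing direction."""
    cx, cy, pr = prop
    hits = 0
    for _ in range(samples):
        # sample a point uniformly inside the prop's projection circle
        ang = random.uniform(0.0, 2.0 * math.pi)
        rad = pr * math.sqrt(random.random())
        x, y = cx + rad * math.cos(ang), cy + rad * math.sin(ang)
        # point-in-sector test: within radius r and within +/- half_angle
        dx, dy = x - obj_xy[0], y - obj_xy[1]
        if math.hypot(dx, dy) <= r:
            delta = (math.atan2(dy, dx) - facing_rad + math.pi) % (2 * math.pi) - math.pi
            if abs(delta) <= half_angle_rad:
                hits += 1
    return math.pi * pr * pr * hits / samples   # circle area * hit fraction

a1 = (1.5, 0.5, 0.6)    # hypothetical prop projections
a2 = (2.5, -1.2, 0.6)
for name, prop in [("A1", a1), ("A2", a2)]:
    print(name, round(overlap_area(prop, (0, 0), 0.0, 3.0, math.radians(45)), 3))
```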
Referring to FIG. 8B, which shows the projection 801B of virtual prop B1 on the ground of the virtual scene, the projection 802B of virtual prop B2, and the projection 803B of virtual prop B3: a first sector 805B (the first crosshair field of view) is obtained with the virtual object 804B as the center and the set distance r as the radius, and the ray from the virtual object toward the crosshair direction is the angular bisector of the first sector 805B. Since three virtual prop projections overlap the first sector 805B, at least three virtual props are in the crosshair field of view, so a secondary crosshair field of view is obtained to determine the virtual prop for priority interaction: a second sector 806B (the second crosshair field of view) is obtained with the virtual object 804B as the center and the set distance r as the radius, where the ray from the virtual object toward the crosshair direction is the angular bisector of the second sector 806B and the second sector is smaller than the first sector. Virtual props B1, B2 and B3 are all within the first crosshair field of view, while only virtual props B1 and B2 are within the second crosshair field of view, so the virtual prop whose projection has the largest overlap area with the second sector can be taken directly as the virtual prop for priority interaction, that is, virtual prop B2 corresponding to the projection 802B shown in FIG. 8B. Switching the display among the interactive controls corresponding to virtual props B1, B2 and B3 can then be achieved through the operating controls.
FIG. 8C is a schematic diagram of the overlap-area calculation of the prop interaction method in a virtual scene provided by an embodiment of the present application. FIG. 8C shows the projection 801C of virtual prop C1 on the ground of the virtual scene, the projection 802C of virtual prop C2, and the projection 803C of virtual prop C3; a sector 805C is obtained with the virtual object 804C as the center and the set distance r as the radius, and the ray from the virtual object toward the crosshair direction is the angular bisector of the sector 805C. Since the projections of virtual props C1 and C2 do not overlap the sector 805C and only the projection of virtual prop C3 does, virtual prop C3 is taken as the virtual prop for priority interaction, and no interaction with virtual prop C1 or virtual prop C2 will be triggered.
FIG. 8D is a schematic diagram of the overlap-area calculation of the prop interaction method in a virtual scene provided by an embodiment of the present application. FIG. 8D shows the projection 801D of virtual prop D1 on the ground of the virtual scene, the projection 802D of virtual prop D2, and the projection 803D of virtual prop D3; a first sector 805D (the first crosshair field of view) is obtained with the virtual object 804D as the center and the set distance r as the radius, and the ray from the virtual object toward the crosshair direction is the angular bisector of the first sector 805D. Since three virtual prop projections overlap the first sector 805D, at least three virtual props are in the crosshair field of view, so a secondary crosshair field of view is obtained to determine the virtual prop for priority interaction: a second sector 806D (the second crosshair field of view) is obtained with the virtual object 804D as the center and the set distance r as the radius, where the ray from the virtual object toward the crosshair direction is the angular bisector of the second sector 806D and the second sector is smaller than the first sector. Virtual props D1, D2 and D3 are all within the first crosshair field of view, while virtual props D1 and D2 are within the second crosshair field of view, so the virtual prop whose projection has the largest overlap area with the second sector can be taken directly as the virtual prop for priority interaction, that is, virtual prop D1 corresponding to the projection 801D shown in FIG. 8D. Switching the display among the interactive controls corresponding to virtual props D1, D2 and D3 can then be achieved through the operating controls.
FIG. 8E shows the projection 801E of virtual prop E1 on the ground of the virtual scene, the projection 802E of virtual prop E2, the projection 803E of virtual prop E3, and the projection 807E of virtual prop E4, where virtual prop E2 and virtual prop E4 are stacked on the Z axis. A first sector 805E (the first crosshair field of view) is obtained with the virtual object 804E as the center and the set distance r as the radius, and the ray from the virtual object toward the crosshair direction is the angular bisector of the first sector 805E. Since at least three virtual prop projections overlap the first sector 805E, at least three virtual props are in the crosshair field of view, so a secondary crosshair field of view is obtained based on the crosshair orientation to determine the virtual prop for priority interaction: a second sector 806E (the second crosshair field of view) is obtained with the virtual object 804E as the center and the set distance r as the radius, where the ray from the virtual object toward the crosshair direction is the angular bisector of the second sector 806E and the second sector is smaller than the first sector. Virtual props E1, E2, E3 and E4 are all within the first crosshair field of view, while virtual props E1, E2 and E4 are within the second crosshair field of view, so the virtual prop whose projection has the largest overlap area with the second sector can be taken directly as the virtual prop for priority interaction, that is, virtual prop E2 corresponding to the projection 802E shown in FIG. 8E. Switching the display among the interactive controls corresponding to virtual props E1, E2 and E4 can then be achieved through the operating controls.
In the related art, when multiple virtual props are densely placed in the virtual scene, it is difficult to make clear which virtual prop has priority for interaction; in particular, when multiple virtual props are stacked along the Z axis, the player may need to move the screen frequently to control the crosshair and adjust the virtual prop to be interacted with. The embodiments of the present application help players identify the virtual prop with priority for interaction and realize interactive switching among multiple virtual props with a limited number of controls.
The software modules in the virtual scene prop interaction device 455-1 stored in the memory 450 may include: a first display module 4551 configured to display at least part of the area in the virtual scene on the human-computer interaction interface, where the at least part of the area includes a virtual object; the first display module 4551 is also configured to, in response to at least two virtual props with interactive functions appearing in the at least part of the area, display a first virtual prop among the at least two virtual props in the selected state, and display at least one interactive control corresponding to the first virtual prop, where the interactive control is used to be triggered to execute the interactive function corresponding to the interactive control, and the interactive function is used for the virtual object to interact with the first virtual prop.
In some embodiments, the first display module 4551 is further configured to: in response to at least two virtual props with interactive functions appearing in the at least part of the area, apply a first display mode to the at least two virtual props in the at least part of the area, where the prominence of the first display mode is positively correlated with the characteristic values of the at least two virtual props, and the characteristic values include at least one of the following: the usage frequency of the virtual prop, the distance between the virtual prop and the virtual object, and the orientation angle between the virtual prop and the virtual object.
In some embodiments, the first display module 4551 is further configured to: in response to at least two virtual props with interactive functions appearing in the at least part of the area, apply the first display mode to the at least two virtual props in the human-computer interaction interface, and apply a second display mode to other virtual props in the interface, where the second display mode is different from the first display mode and the other virtual props do not have interactive functions.
In some embodiments, the first display module 4551 is further configured to: before displaying the first virtual prop among the at least two virtual props in the selected state, determine a first sector-shaped area with the virtual object as the center, the set distance as the radius, and a first angle as the central angle, where the orientation of the virtual object coincides with the angular bisector of the central angle of the first sector-shaped area; determine at least one first candidate virtual prop that overlaps the first sector-shaped area, where the projection area of the first candidate virtual prop on the ground of the virtual scene overlaps the first sector-shaped area; and take one of the at least one first candidate virtual prop as the first virtual prop.
In some embodiments, the first display module 4551 is further configured to: when the first sector-shaped area includes one first candidate virtual prop, determine that first candidate virtual prop as the first virtual prop; when the first sector-shaped area includes two first candidate virtual props, take the first candidate virtual prop with the larger first overlapping area as the first virtual prop, the first overlapping area being the overlap of the first candidate virtual prop with the first sector-shaped area; and when the first sector-shaped area includes at least three first candidate virtual props, perform the following processing: determine a second sector-shaped area with the virtual object as the center, the set distance as the radius, and a second angle as the central angle, where the orientation of the virtual object coincides with the angular bisector of the central angle of the second sector-shaped area and the second angle is smaller than the first angle; determine at least one second candidate virtual prop that overlaps the second sector-shaped area, where the projection area of the second candidate virtual prop on the ground of the virtual scene overlaps the second sector-shaped area; and take the second candidate virtual prop with the largest second overlapping area as the first virtual prop, the second overlapping area being the overlap of the second candidate virtual prop with the second sector-shaped area.
In some embodiments, the first display module 4551 is also configured to: before displaying the first virtual prop among the at least two virtual props in the selected state, perform any one of the following processing: sort the at least two virtual props by usage frequency and take the virtual prop ranked first as the first virtual prop; sort the at least two virtual props by scene distance and take the virtual prop ranked first as the first virtual prop, the scene distance being the distance between the virtual prop and the virtual object in the virtual scene; or sort the at least two virtual props by most recent use time and take the virtual prop ranked first as the first virtual prop, the most recent use time being the time when the virtual object last used the virtual prop.
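A minimal sketch of these three sorting strategies follows, assuming each prop record carries the three attributes named below (the field names are illustrative):

```python
from dataclasses import dataclass

@dataclass
class Prop:
    name: str
    use_frequency: float     # uses per session
    scene_distance: float    # distance to the virtual object
    last_used_at: float      # timestamp of the most recent use

def first_prop(props, strategy):
    key = {
        "frequency": lambda p: -p.use_frequency,   # highest frequency first
        "distance":  lambda p: p.scene_distance,   # nearest first
        "recency":   lambda p: -p.last_used_at,    # most recently used first
    }[strategy]
    return min(props, key=key).name

props = [Prop("chair", 9.0, 2.0, 100.0), Prop("box", 4.0, 1.0, 300.0)]
print(first_prop(props, "frequency"))   # chair
print(first_prop(props, "distance"))    # box
print(first_prop(props, "recency"))     # box
```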
In some embodiments, the first display module 4551 is also configured to: before displaying the first virtual prop among the at least two virtual props in the selected state, obtain historical interaction data and prop parameters for the at least two virtual props in the virtual scene, where the historical interaction data of each virtual prop includes the scene parameters of each use of the virtual prop; perform the following processing through a first neural network model: extract scene features from the scene parameters and prop features from the prop parameters; fuse the scene features and the prop features to obtain a first fusion feature; and map the first fusion feature to a first probability that each virtual prop is adapted to the virtual scene; then sort the at least two virtual props in descending order of the first probability and take the virtual prop ranked first as the first virtual prop.
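A minimal numpy sketch of such a feature-extraction, fusion and scoring pipeline is given below; the layer sizes, the concatenation-based fusion, and the random weights are illustrative assumptions rather than the trained first neural network model:

```python
import numpy as np

rng = np.random.default_rng(0)
W_scene = rng.normal(size=(8, 4))    # scene-parameter -> scene-feature
W_prop  = rng.normal(size=(6, 4))    # prop-parameter  -> prop-feature
W_out   = rng.normal(size=(8, 1))    # fused feature   -> score

def first_probability(scene_params, prop_params):
    scene_feat = np.tanh(scene_params @ W_scene)      # extract scene features
    prop_feat  = np.tanh(prop_params @ W_prop)        # extract prop features
    fused = np.concatenate([scene_feat, prop_feat])   # first fusion feature
    score = fused @ W_out                             # map to a scalar
    return 1.0 / (1.0 + np.exp(-score.item()))        # first probability

scene = rng.normal(size=8)                            # one scene's parameters
props = {name: rng.normal(size=6) for name in ["chair", "box", "bathtub"]}
ranked = sorted(props, key=lambda n: first_probability(scene, props[n]),
                reverse=True)
print(ranked[0])   # the virtual prop ranked first -> the first virtual prop
```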
In some embodiments, the first display module 4551 is also configured to: for at least one second virtual prop that is not in the selected state, display a switching control corresponding to the at least one second virtual prop, where the second virtual prop has an interactive function and the switching control is used to be triggered to display the interactive control corresponding to at least one second virtual prop.
In some embodiments, the first display module 4551 is further configured to: when the number of second virtual props not in the selected state is one, in response to a triggering operation on the switching control, display at least one interactive control of the second virtual prop and hide the at least one interactive control of the first virtual prop.
In some embodiments, the first display module 4551 is also configured to: when the number of second virtual props not in the selected state is more than one, in response to a triggering operation on the switching control, display prop identifiers in one-to-one correspondence with the multiple second virtual props; and in response to a triggering operation on any prop identifier, display at least one interactive control of the second virtual prop corresponding to the triggered prop identifier and hide the at least one interactive control of the first virtual prop.
In some embodiments, the first display module 4551 is also configured to: display the prop identifiers corresponding one-to-one to the multiple second virtual props in a set order, where the number of the multiple second virtual props is any one of the following: a set number, a number positively correlated with the size of the human-computer interaction interface, a number positively correlated with the area of the free region of the virtual scene, or a number positively correlated with the total number of second virtual props.
In some embodiments, the set order is any one of the following: the usage frequency of the second virtual props from high to low or from low to high; the scene distance of the second virtual props from small to large or from large to small, the scene distance being the distance between the virtual prop and the virtual object in the virtual scene; the most recent use time of the second virtual props from most recent to least recent or the reverse, the most recent use time being the time when the virtual object last used the second virtual prop; or the interaction efficiency between the second virtual prop and the virtual object from small to large or from large to small.
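How the number of displayed prop identifiers and their set order might be computed is sketched below; the thresholds and divisors are illustrative assumptions, not values from the embodiment:

```python
def identifier_count(rule, *, set_number=2, ui_height_px=800,
                     free_area=20.0, prop_count=5):
    """Choose how many prop identifiers to display under one of the four rules;
    the divisors below are hypothetical tuning constants."""
    if rule == "set":
        return set_number
    if rule == "ui_size":            # larger interface -> more identifiers
        return max(1, ui_height_px // 400)
    if rule == "free_area":          # more free scene area -> more identifiers
        return max(1, int(free_area // 10.0))
    return max(1, prop_count // 2)   # more second props -> more identifiers

def ordered_identifiers(attr_by_prop, reverse=False):
    # attr_by_prop: {prop: attribute used by the chosen set order,
    # e.g. scene distance}; ascending by default
    return sorted(attr_by_prop, key=attr_by_prop.get, reverse=reverse)

distances = {"box": 1.0, "second chair": 3.5, "bathtub": 2.0}
n = identifier_count("prop_count", prop_count=5)   # -> 2
print(ordered_identifiers(distances)[:n])          # ['box', 'bathtub']
```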
In some embodiments, the first display module 4551 is further configured to: determine a second sector-shaped area with the virtual object as the center, the set distance as the radius, and the second angle as the central angle, where the orientation of the virtual object coincides with the angular bisector of the central angle of the second sector-shaped area and the second angle is smaller than the first angle; and obtain the third overlapping area of each second virtual prop with the second sector-shaped area, the third overlapping area being the overlap between the projection area of the second virtual prop on the ground of the virtual scene and the second sector-shaped area, and obtain an interaction efficiency positively correlated with the area of the third overlapping area.
The software modules in the virtual scene prop interaction device 455-2 stored in the memory 450 may include: a second display module 4552 configured to display at least part of the area in the virtual scene on the human-computer interaction interface, where the at least part of the area includes a virtual object; the second display module 4552 is also configured to, in response to at least two virtual props appearing in the at least part of the area, display, based on the selected state, a first virtual prop with an interactive function among the at least two virtual props and at least one interactive control corresponding to the first virtual prop, where the interactive control is used to be triggered to execute the interactive function corresponding to the interactive control, and the interactive function is used for the virtual object to interact with the first virtual prop; the second display module 4552 is also configured to display, for at least one second virtual prop not in the selected state, a switching control corresponding to the at least one second virtual prop, where the switching control is used to be triggered to display the interactive control corresponding to at least one second virtual prop.
Embodiments of the present application provide a computer program product. The computer program product includes computer-executable instructions stored in a computer-readable storage medium. The processor of an electronic device reads the computer-executable instructions from the computer-readable storage medium and executes them, causing the electronic device to execute the prop interaction method for a virtual scene described above in the embodiments of the present application.
Embodiments of the present application provide a computer-readable storage medium storing computer-executable instructions that, when executed by a processor, cause the processor to execute the prop interaction method for a virtual scene provided by the embodiments of the present application, for example, the prop interaction method for a virtual scene shown in FIGS. 4A-4C.
In some embodiments, the computer-readable storage medium may be a memory such as FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, optical disc, or CD-ROM; it may also be any of various devices including one of the above memories or any combination thereof.
In some embodiments, the computer-executable instructions may take the form of a program, software, software module, script, or code, written in any form of programming language (including compiled or interpreted languages, or declarative or procedural languages), and may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. As an example, the computer-executable instructions may, but need not, correspond to files in a file system, and may be stored as part of a file holding other programs or data, for example in one or more scripts in a Hyper Text Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple collaborative files (for example, files storing one or more modules, subroutines, or portions of code). As an example, the computer-executable instructions may be deployed to be executed on one electronic device, on multiple electronic devices located at one site, or on multiple electronic devices distributed across multiple sites and interconnected by a communications network.
In summary, through the embodiments of the present application, in response to at least two virtual props with interactive functions appearing in at least part of the area, the first virtual prop among the at least two virtual props is displayed in the selected state together with at least one interactive control corresponding to the first virtual prop, so that the automatically selected first virtual prop and its corresponding interactive control are shown to the player directly. This eliminates the need for the player to manually select the virtual prop to interact with and can effectively improve the efficiency of human-computer interaction.


Abstract

This application provides a prop interaction method and apparatus for a virtual scene, an electronic device, a computer-readable storage medium, and a computer program product. The method includes: displaying at least part of the area of the virtual scene on a human-computer interaction interface, where the at least part of the area includes a virtual object; in response to at least two virtual props with interactive functions appearing in the at least part of the area, displaying a first virtual prop among the at least two virtual props in a selected state, and displaying at least one interactive control corresponding to the first virtual prop; where the interactive control is used to be triggered to execute the interactive function corresponding to the interactive control, and the interactive function is used for the virtual object to interact with the first virtual prop.

Description

虚拟场景的道具交互方法、装置、电子设备、计算机可读存储介质及计算机程序产品
相关申请的交叉引用
本申请基于申请号为202210625554.5、申请日为2022年06月02日的中国专利申请提出,并要求中国专利申请的优先权,中国专利申请的全部内容在此引入本申请作为参考。
技术领域
本申请涉及人机交互技术,尤其涉及一种虚拟场景的道具交互方法、装置、电子设备、计算机可读存储介质及计算机程序产品。
背景技术
基于图形处理硬件的显示技术,扩展了感知环境以及获取信息的渠道,尤其是虚拟场景的多媒体技术,借助于人机交互引擎技术,能够根据实际应用需求实现受控于用户或人工智能的虚拟对象之间的多样化的交互,具有各种典型的应用场景,例如在游戏等虚拟场景中,能够模拟虚拟对象之间的对战过程。
在虚拟场景中往往配置有多个具有交互功能的虚拟道具,从而可以控制虚拟对象与虚拟道具发生交互,例如,虚拟对象在椅子上坐下等动作性交互,例如,对物资箱进行开箱从而显示多个物资供虚拟对象选择等功能性交互。相关技术中难以明确玩家当前交互的虚拟道具,用户需要通过不断的试错才能找到合适的虚拟道具,增加人机交互复杂度,而且不断试错浪费计算资源和通信资源,甚至影响虚拟场景运行的流畅性。
发明内容
本申请实施例提供一种虚拟场景的道具交互方法、装置、电子设备、计算机可读存储介质及计算机程序产品,能够向用户推荐出将要交互的虚拟道具以提高人机交互效率。
本申请实施例的技术方案是这样实现的:
本申请实施例提供一种虚拟场景的道具交互方法,所述方法是通过电子设备执行的,所述方法包括:
在人机交互界面显示所述虚拟场景中的至少部分区域,其中,所述至少部分区域包括虚拟对象;
响应于在所述至少部分区域中出现具有交互功能的至少两个虚拟道具,显示所述至少两个虚拟道具中的第一虚拟道具处于选中状态,以及
显示对应所述第一虚拟道具的至少一个交互控件;其中,所述交互控件用于被触发执行所述交互控件对应的所述交互功能,所述交互功能用于所述虚拟对象与所述第一虚拟道具进行交互。
本申请实施例提供一种虚拟场景的道具交互装置,包括:
第一显示模块,配置为在人机交互界面显示所述虚拟场景中的至少部分区域,其中,所述至少部分区域包括虚拟对象;
所述第一显示模块,还配置为响应于在所述至少部分区域中出现具有交互功能的至少两个虚拟道具,显示所述至少两个虚拟道具中的第一虚拟道具处于选中状态,以及显示对应所述第一虚拟道具的至少一个交互控件;其中,所述交互控件用于被触发执行所述交互控件对应的所述交互功能,所述交互功能用于所述虚拟对象与所述第一虚拟道具进行交互。
本申请实施例提供一种虚拟场景的道具交互方法,所述方法是通过电子设备执行的,所述方法包括:
在人机交互界面显示所述虚拟场景中的至少部分区域,其中,所述至少部分区域包括虚拟对象;
响应于在所述至少部分区域中出现至少两个虚拟道具,基于选中状态显示所述至少两个虚拟道具中具有交互功能的第一虚拟道具、以及对应所述第一虚拟道具的至少一个交互控件,其中,所述交互控件用于被触发执行所述交互控件对应的所述交互功能,所述交互功能用于所述虚拟对象与所述第一虚拟道具进行交互;
针对未处于所述选中状态的至少一个第二虚拟道具,显示对应所述至少一个第二虚拟道具的切换控件;其中,所述切换控件用于被触发显示对应至少一个所述第二虚拟道具的交互控件。
本申请实施例提供一种虚拟场景的道具交互装置,包括:
第二显示模块,配置为在人机交互界面显示所述虚拟场景中的至少部分区域,其中,所述至少部分 区域包括虚拟对象;
所述第二显示模块,还配置为响应于在所述至少部分区域中出现至少两个虚拟道具,基于选中状态显示所述至少两个虚拟道具中具有交互功能的第一虚拟道具、以及对应所述第一虚拟道具的至少一个交互控件,其中,所述交互控件用于被触发执行所述交互控件对应的所述交互功能,所述交互功能用于所述虚拟对象与所述第一虚拟道具进行交互;
所述第二显示模块,还配置为针对未处于所述选中状态的至少一个第二虚拟道具,显示对应所述至少一个第二虚拟道具的切换控件;其中,所述切换控件用于被触发显示对应至少一个所述第二虚拟道具的交互控件。
本申请实施例提供一种电子设备,包括:
存储器,用于存储计算机可执行指令;
处理器,用于执行所述存储器中存储的计算机可执行指令时,实现本申请实施例提供的虚拟场景的道具交互方法。
本申请实施例提供一种计算机可读存储介质,存储有计算机可执行指令,用于被处理器执行时,实现本申请实施例提供的虚拟场景的道具交互方法。
本申请实施例提供一种计算机程序产品,包括计算机可执行指令,所述计算机可执行指令被处理器执行时实现本申请实施例提供的虚拟场景的道具交互方法。
本申请实施例具有以下有益效果:
响应于在至少部分区域中出现具有交互功能的至少两个虚拟道具,显示至少两个虚拟道具中的第一虚拟道具处于选中状态,以及显示对应第一虚拟道具的至少一个交互控件,从而直接向玩家显示出自动选中的第一虚拟道具以及对应的交互控件,省去了玩家手动选择当前需要交互的虚拟道具的过程,可以有效提高人机交互效率。
附图说明
图1A-1B是相关技术中提供的虚拟场景的道具交互方法的界面示意图;
图2是本申请实施例提供的虚拟场景的道具交互系统的结构示意图;
图3是本申请实施例提供的电子设备的结构示意图;
图4A-4C是本申请实施例提供的虚拟场景的道具交互方法的流程示意图;
图5A-5E是本申请实施例提供的虚拟场景的道具交互方法的界面示意图;
图6是本申请实施例提供的虚拟场景的道具交互方法的流程示意图;
图7A-7B是本申请实施例提供的虚拟场景的道具交互方法的重叠区域示意图;
图8A-8E是本申请实施例提供的虚拟场景的道具交互方法的重叠区域计算示意图。
具体实施方式
为了使本申请的目的、技术方案和优点更加清楚,下面将结合附图对本申请作进一步地详细描述,所描述的实施例不应视为对本申请的限制,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其它实施例,都属于本申请保护的范围。
在以下的描述中,涉及到“一些实施例”,其描述了所有可能实施例的子集,但是可以理解,“一些实施例”可以是所有可能实施例的相同子集或不同子集,并且可以在不冲突的情况下相互结合。
在以下的描述中,所涉及的术语“第一\第二\第三”仅仅是是区别类似的对象,不代表针对对象的特定排序,可以理解地,“第一\第二\第三”在允许的情况下可以互换特定的顺序或先后次序,以使这里描述的本申请实施例能够以除了在这里图示或描述的以外的顺序实施。
除非另有定义,本文所使用的所有的技术和科学术语与属于本申请的技术领域的技术人员通常理解的含义相同。本文中所使用的术语只是为了描述本申请实施例的目的,不是旨在限制本申请。
对本申请实施例进行进一步详细说明之前,对本申请实施例中涉及的名词和术语进行说明,本申请实施例中涉及的名词和术语适用于如下的解释。
1)虚拟场景,利用设备输出的区别于现实世界的场景,通过裸眼或设备的辅助能够形成对虚拟场景的视觉感知,例如通过显示屏幕输出的二维影像,通过立体投影、虚拟现实和增强现实技术等立体显示技术来输出的三维影像;此外,还可以通过各种可能的硬件形成听觉感知、触觉感知、嗅觉感知和运动感知等各种模拟现实世界的感知。
2)响应于,用于表示所执行的操作所依赖的条件或者状态,当满足所依赖的条件或状态时,所执行的一个或多个操作可以是实时的,也可以具有设定的延迟;在没有特别说明的情况下,所执行的多个操 作不存在执行先后顺序的限制。
3)客户端,终端中运行的用于提供各种服务的应用程序,例如游戏客户端等。
4)交互道具,指具有交互功能的虚拟道具,玩家控制虚拟对象与交互道具发生交互,例如,虚拟对象可以搭乘“载具”,“载具”属于交互道具,虚拟对象与建筑物中的桌子无法发生交互,则“桌子”不属于交互道具,仅属于普通虚拟物品。
参见图1A,人机交互界面301A中显示虚拟对象的多个虚拟道具302A,虚拟道具302A是通过范围识别或准心识别得到的,响应于针对每个虚拟道具302A的交互控件303A的触发操作,完成对应虚拟道具302A的拾取操作。参见图1B,人机交互界面301B中仅显示合成台302B以及非玩家控制角色303B,合成台302B以及非玩家控制角色303B是通过范围识别得到的,响应于针对合成台302B的交互控件304B的触发操作,触发合成台执行合成处理。
相关技术中的可交互的虚拟道具之间不存在交互优先级,虚拟场景中没有明确优先交互的虚拟道具的提示,也没有设置虚拟对象针对多个可交互的虚拟道具之间的交互机制,
相关技术中玩家可以在虚拟场景中摆放可交互的虚拟道具,因此虚拟道具的平面位置和空间位置会出现较多可能性,虚拟对象与虚拟道具实现准确交互的规则变得更为复杂,从而难以实现准确交互;并且虚拟对象与多个可交互的虚拟道具之间可能存在不同的交互动作,玩家无法判断虚拟对象具体会和哪个虚拟道具发生交互。相关技术中的可交互的虚拟道具之间不存在交互优先级,虚拟场景中没有明确优先交互的虚拟道具的提示,也没有设置虚拟对象针对多个可交互的虚拟道具之间的交互机制,从而难以实现存在多个虚拟道具的情况下的准确交互。
本申请实施例提供一种虚拟场景的道具交互方法、装置、电子设备、计算机可读存储介质及计算机程序产品,能够向用户推荐出将要交互的虚拟道具以提高人机交互效率。下面说明本申请实施例提供的电子设备的示例性应用,本申请实施例提供的电子设备可以实施为笔记本电脑、平板电脑、台式计算机、机顶盒、移动设备(例如、移动电话、便携式音乐播放器、个人数字助理、专用消息设备、便携式游戏设备、虚拟现实硬件设备)等各种类型的用户终端。
本申请实施例提供的虚拟场景的道具交互方法可以应用于虚拟现实硬件设备,虚拟场景可以完全基于虚拟现实硬件设备输出,或者基于终端和服务器的协同来输出,服务器计算出虚拟场景的场景显示数据,场景显示数据包括处于选中状态的第一虚拟道具的道具显示数据,并将场景显示数据发送至虚拟现实硬件设备,在虚拟现实硬件设备中显示虚拟场景中包括虚拟对象的至少部分区域,在至少部分区域中显示具有交互功能的至少两个虚拟道具,显示至少两个虚拟道具中处于选中状态的第一虚拟道具,以及显示对应第一虚拟道具的至少一个交互控件;虚拟现实硬件设备的交互工具接收账号针对交互控件的触发操作,虚拟现实硬件设备将触发操作的操作数据通过网络发送至服务器,服务器基于操作数据计算交互控件对应交互功能的响应数据,服务器将响应数据通过网络发送至虚拟现实硬件设备,基于响应数据在虚拟现实硬件设备中显示虚拟对象与虚拟道具之间的交互过程。
为便于更容易理解本申请实施例提供的虚拟场景的道具交互方法,首先说明本申请实施例提供的虚拟场景的道具交互方法的示例性实施场景,虚拟场景可以完全基于终端输出,或者基于终端和服务器的协同来输出。
在一些实施例中,虚拟场景可以是供游戏角色交互的环境,例如可以是供游戏角色在虚拟场景中进行对战。
在一个实施场景中,参见图2,图2是本申请实施例提供的虚拟场景的道具交互方法的应用模式示意图,应用于终端400和服务器200,一般地,适用于依赖服务器200的计算能力完成虚拟场景计算、并在终端400输出虚拟场景的应用模式。
作为示例,用户通过账号登录终端400运行的客户端(例如网络版的游戏应用),服务器200计算出虚拟场景的场景显示数据,场景显示数据包括处于选中状态的第一虚拟道具的道具显示数据,并将场景显示数据发送至终端400,在客户端的人机交互界面中显示虚拟场景中包括虚拟对象的至少部分区域,在至少部分区域中显示具有交互功能的至少两个虚拟道具,显示至少两个虚拟道具中处于选中状态的第一虚拟道具,以及显示对应第一虚拟道具的至少一个交互控件;终端400接收账号针对交互控件的触发操作,终端400将触发操作的操作数据通过网络300发送至服务器200,服务器200基于操作数据计算交互控件对应交互功能的响应数据,服务器200将响应数据通过网络300发送至终端400,基于响应数据在终端400的人机交互界面中显示虚拟对象与虚拟道具之间的交互过程。
作为示例,用户通过账号登录终端400运行的客户端(例如网络版的游戏应用),客户端计算出虚拟场景的场景显示数据,场景显示数据包括处于选中状态的第一虚拟道具的道具显示数据,在客户端的人机交互界面中显示虚拟场景中包括虚拟对象的至少部分区域,在至少部分区域中显示具有交互功能的至少两个虚拟道具,显示至少两个虚拟道具中处于选中状态的第一虚拟道具,以及显示对应第一虚拟道具的至少一个交互控件;终端400接收账号针对交互控件的触发操作,客户端基于操作数据计算交互控件 对应交互功能的响应数据,基于响应数据在终端400的人机交互界面中显示虚拟对象与虚拟道具之间的交互过程。
在一些实施例中,终端400可以通过运行计算机程序来实现本申请实施例提供的虚拟场景的道具交互方法,例如,计算机程序可以是操作系统中的原生程序或软件模块;可以是本地(Native)应用程序(APP,Application),即需要在操作系统中安装才能运行的程序,例如游戏APP(即上述的客户端)、直播APP;也可以是小程序,即只需要下载到浏览器环境中就可以运行的程序;还可以是能够嵌入至任意APP中的游戏小程序。总而言之,上述计算机程序可以是任意形式的应用程序、模块或插件。
本申请实施例可以借助于云技术(Cloud Technology)实现,云技术是指在广域网或局域网内将硬件、软件、网络等系列资源统一起来,实现数据的计算、储存、处理和共享的一种托管技术。
云技术是基于云计算商业模式应用的网络技术、信息技术、整合技术、管理平台技术、以及应用技术等的总称,可以组成资源池,按需所用,灵活便利。云计算技术将变成重要支撑。技术网络系统的后台服务需要大量的计算、存储资源。
作为示例,服务器200可以是独立的物理服务器,也可以是多个物理服务器构成的服务器集群或者分布式系统,还可以是提供云服务、云数据库、云计算、云函数、云存储、网络服务、云通信、中间件服务、域名服务、安全服务、CDN、以及大数据和人工智能平台等基础云计算服务的云服务器。终端400可以是智能手机、平板电脑、笔记本电脑、台式计算机、智能音箱、以及智能手表等,但并不局限于此。终端400以及服务器200可以通过有线或无线通信方式进行直接或间接地连接,本申请实施例中不做限制。
参见图3,图3是本申请实施例提供的应用虚拟场景的道具交互方法的电子设备的结构示意图,以电子设备为终端为例进行说明,图3所示的终端400包括:至少一个处理器410、存储器450、至少一个网络接口420和用户接口430。终端400中的各个组件通过总线系统440耦合在一起。可理解,总线系统440用于实现这些组件之间的连接通信。总线系统440除包括数据总线之外,还包括电源总线、控制总线和状态信号总线。但是为了清楚说明起见,在图3中将各种总线都标为总线系统440。
处理器410可以是一种集成电路芯片,具有信号的处理能力,例如通用处理器、数字信号处理器(DSP,Digital Signal Processor),或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等,其中,通用处理器可以是微处理器或者任何常规的处理器等。
用户接口430包括使得能够呈现媒体内容的一个或多个输出装置431,包括一个或多个扬声器和/或一个或多个视觉显示屏。用户接口430还包括一个或多个输入装置432,包括有助于用户输入的用户接口部件,比如键盘、鼠标、麦克风、触屏显示屏、摄像头、其他输入按钮和控件。
存储器450可以是可移除的,不可移除的或其组合。示例性的硬件设备包括固态存储器,硬盘驱动器,光盘驱动器等。存储器450可以包括在物理位置上远离处理器410的一个或多个存储设备。
存储器450包括易失性存储器或非易失性存储器,也可包括易失性和非易失性存储器两者。非易失性存储器可以是只读存储器(ROM,Read Only Memory),易失性存储器可以是随机存取存储器(RAM,Random Access Memory)。本申请实施例描述的存储器450旨在包括任意适合类型的存储器。
在一些实施例中,存储器450能够存储数据以支持各种操作,这些数据的示例包括程序、模块和数据结构或者其子集或超集,下面示例性说明。
操作系统451,包括用于处理各种基本系统服务和执行硬件相关任务的系统程序,例如框架层、核心库层、驱动层等,用于实现各种基础业务以及处理基于硬件的任务;
网络通信模块452,用于经由一个或多个(有线或无线)网络接口420到达其他电子设备,示例性的网络接口420包括:蓝牙、无线相容性认证(WiFi)、和通用串行总线(USB,Universal Serial Bus)等;
呈现模块453,用于经由一个或多个与用户接口430相关联的输出装置431(例如,显示屏、扬声器等)使得能够呈现信息(例如,用于操作外围设备和显示内容和信息的用户接口);
输入处理模块454,用于对一个或多个来自一个或多个输入装置432之一的一个或多个用户输入或互动进行检测以及翻译所检测的输入或互动。
在一些实施例中,本申请实施例提供的虚拟场景的道具交互装置可以采用软件方式实现,图3示出了存储在存储器450中的虚拟场景的道具交互装置455-1,其可以是程序和插件等形式的软件,包括:第一显示模块4551,图3还示出了存储在存储器450中的虚拟场景的道具交互装置455-2,其可以是程序和插件等形式的软件,包括:第二显示模块4552,这些模块是逻辑上的,因此根据所实现的功能可以进行任意的组合或进一步拆分,将在下文中说明各个模块的功能。
将结合本申请实施例提供的终端的示例性应用和实施,说明本申请实施例提供的虚拟场景的道具交互方法。
下面,说明本申请实施例提供的虚拟场景的道具交互方法,如前,实现本申请实施例的虚拟场景的道具交互方法的电子设备可以是终端设备。因此下文中不再重复说明各个步骤的执行主体。
参见图4A,图4A是本申请实施例提供的虚拟场景的道具交互方法的流程示意图,将结合图4A示出的步骤101至步骤102进行说明。
在步骤101中,在人机交互界面显示虚拟场景中的至少部分区域。
作为示例,至少部分区域包括虚拟对象,可以在人机交互界面中显示虚拟场景中包括虚拟对象的至少部分区域,这里的至少部分区域可以为全部区域,或者是部分区域,虚拟对象可以全部显示在人机交互界面中(即显示出虚拟对象的全身),或者部分显示在人机交互界面中(例如,显示出虚拟对象的上半身)。
在步骤102中,响应于在至少部分区域中出现具有交互功能的至少两个虚拟道具,显示至少两个虚拟道具中的第一虚拟道具处于选中状态,以及显示对应第一虚拟道具的至少一个交互控件。
作为示例,参见图5B,人机交互界面501B中显示椅子502B以及物资箱503B,椅子502B以及物资箱503B均是虚拟道具,椅子502B是第一虚拟道具,椅子502B中显示有选中标记504B(表征处于选中状态),在人机交互界面中还显示椅子502B的交互控件505B,响应于针对交互控件505B的触发操作,虚拟对象506B坐在椅子502B上,第一虚拟道具的数目为一个,第一虚拟道具以选中状态进行显示,并且显示出对应的交互控件,从而可以起到向用户明确出优先交互的虚拟道具的作用。
作为示例,交互控件用于被触发以执行交互控件对应的交互功能,交互功能用于虚拟对象与第一虚拟道具进行交互。第一虚拟道具可以对应有一个交互控件或者多个交互控件,例如,当第一虚拟道具为椅子时,对应有“坐下”交互控件,当“坐下”交互控件被触发,控制虚拟对象坐在椅子上;当第一虚拟道具为载具时,对应有“开车”交互控件以及“坐车”交互控件,当“开车”交互控件被触发,控制虚拟对象进入载具的驾驶席。
作为示例,至少两个具有交互功能的虚拟道具可以处于部分区域中的任意位置,或者至少两个具有交互功能的虚拟道具可以处于虚拟对象的交互范围内,交互范围是以虚拟对象为中心指定距离为半径的圆。
作为示例,虚拟对象的交互范围即为人机交互界面中所显示的至少部分区域,或者虚拟对象的交互范围可以处于人机交互界面中所显示的至少部分区域内部,即虚拟对象的交互范围是人机交互界面中所显示的至少部分区域的子区域。
响应于在至少部分区域中出现具有交互功能的至少两个虚拟道具,显示至少两个虚拟道具中的第一虚拟道具处于选中状态,以及显示对应第一虚拟道具的至少一个交互控件,从而直接向玩家显示出自动选中的第一虚拟道具以及对应的交互控件,省去了玩家手动选择当前需要交互的虚拟道具的过程,可以有效提高人机交互效率。
作为示例,响应于虚拟对象在虚拟场景中的朝向发生变化或者虚拟对象在虚拟场景中移动,显示至少两个虚拟道具中的第三虚拟道具处于选中状态,且第一虚拟道具处于非选中状态,例如,虚拟对象的位置发生变化,远离第一虚拟道具而靠近第三虚拟道具,虚拟对象与第一虚拟道具之前的距离大于第一距离阈值,且虚拟对象与第三虚拟道具之间的距离小于第二距离阈值,则最适合与虚拟对象发生交互的虚拟道具由第一虚拟道具变化为第三虚拟道具,因此第三虚拟道具会自动处于选中状态,而第一虚拟道具处于非选中状态。通过本申请实施例可以根据虚拟对象与虚拟道具之间的位置方向关系,适应性调整自动选中的虚拟道具,提高人机交互效率以及虚拟场景的智能化程度。
在一些实施例中,响应于在至少部分区域中出现具有交互功能的至少两个虚拟道具,在至少部分区域中对至少两个虚拟道具应用第一显示方式;其中,第一显示方式的显著程度与至少两个虚拟道具的特征值正相关,特征值包括以下至少之一:虚拟道具的使用频率、虚拟道具与虚拟对象的距离、虚拟道具与虚拟对象的朝向角。通过对虚拟道具应用不同显著程度的第一显示方式,可以体现出虚拟道具之间的区别,且可以向用户提示更丰富的信息,例如,使用频率的比较信息、距离的比较信息等等,使得玩家通过人机交互界面得到的信息更丰富,有利于提升用户体验以及信息显示效率。
作为示例,虚拟对象的朝向角可以通过以下方式获取:获取虚拟道具与虚拟对象的第一连线,获取以虚拟对象为端点,朝向准星方向(虚拟对象的朝向)的第一射线,将第一连线与第一射线的夹角作为虚拟道具与虚拟对象的朝向角,朝向角表征虚拟道具相对于虚拟对象的偏离程度。
作为示例,人机交互界面中的虚拟道具可以按照第一显示方式进行显示,例如,第一显示方式可以是对虚拟道具应用描边特效,其中,描边特效是对虚拟道具的线条进行加深的特效,参见图5D,人机交互界面501D中显示有椅子502D以及物资箱503D,椅子与物资箱均为虚拟道具,椅子的显著程度高于物资箱的显著程度,例如,椅子的使用频率高于物资箱的使用频率,从而椅子的显著程度高于物资箱的显著程度,或者椅子与虚拟对象的距离大于物资箱与虚拟对象的距离,从而椅子的显著程度高于物资箱的显著程度。
在一些实施例中,响应于在至少部分区域中出现具有交互功能的至少两个虚拟道具,在人机交互界面中对至少两个虚拟道具应用第一显示方式,并在人机交互界面中对其他虚拟道具应用第二显示方式; 其中,第二显示方式区别于第一显示方式,其他虚拟道具不具有交互功能。通过对具有交互功能以及不具有交互功能的虚拟道具应用不同的显示方式可以提醒玩家关注于具有交互功能的虚拟道具,使得玩家通过人机交互界面得到的信息更丰富,有利于提升用户体验以及信息显示效率。
作为示例,参见图5E,人机交互界面501E中显示有椅子502E以及物资箱503E,人机交互界面中还显示有石头504E,其中,椅子502E以及物资箱503E是具有交互功能的虚拟道具(椅子502E以及物资箱503E是第一虚拟道具),石头504E是不具有交互功能的其他虚拟道具(石头504E是其他虚拟道具),从而对椅子502E以及物资箱503E应用基于描边特效的第一显示方式,对石头504E应用非描边特效的第二显示方式,第一显示方式不限定于描边特效,第一显示方式与第二显示方式存在区别即可。
在一些实施例中,在显示至少两个虚拟道具中的第一虚拟道具处于选中状态之前,确定以虚拟对象为圆心,以设定距离为半径,以第一角度为圆心角的第一扇形区域,其中,虚拟对象的朝向与第一扇形区域的圆心角的角平分线重合;确定与第一扇形区域重叠的至少一个第一候选虚拟道具,其中,第一候选虚拟道具在虚拟场景的地面上的投影区域与第一扇形区域重叠;将至少一个第一候选虚拟道具中的一个第一候选虚拟道具,作为第一虚拟道具。
作为示例,参见图8A,地面即为XY坐标轴构成的平面,获取以虚拟对象803A所处的位置为圆心,以设定距离r为半径的第一扇形804A(准心视野),这里的第一扇形所占据的区域即为第一扇形区域,虚拟对象面朝准心方向的射线(即为虚拟对象的朝向)是第一扇形的角平分线,图8A示出了虚拟道具A1在虚拟场景中地面上的投影801A以及虚拟道具A2在虚拟场景中地面上的投影802A,由于仅有两个虚拟道具的投影与第一扇形804A重叠,表征两个第一候选虚拟道具同在准心视野内,将两个第一候选虚拟道具中的一个第一候选虚拟道具作为第一虚拟道具,例如,将图8A示出的投影801A对应的虚拟道具A1作为第一虚拟道具。
在一些实施例中,上述将至少一个第一候选虚拟道具中的一个第一候选虚拟道具,作为第一虚拟道具,可以通过以下技术方案实现:当第一扇形区域包括一个第一候选虚拟道具时,将第一候选虚拟道具确定为第一虚拟道具;当第一扇形区域包括两个第一候选虚拟道具时,将第一重叠区域的面积较大的第一候选虚拟道具作为第一虚拟道具,第一重叠区域是第一候选虚拟道具与第一扇形区域的重叠区域;当第一扇形区域包括至少三个第一候选虚拟道具时,执行以下处理:确定以虚拟对象为圆心,以设定距离为半径,以第二角度为圆心角的第二扇形区域,其中,虚拟对象的朝向与第二扇形区域的圆心角的角平分线重合,第二角度小于第一角度;确定与第二扇形区域重叠的至少一个第二候选虚拟道具,其中,第二候选虚拟道具在虚拟场景的地面上的投影区域与第二扇形区域重叠;将第二重叠区域的面积最大的第二候选虚拟道具,作为第一虚拟道具,第二重叠区域是第二候选虚拟道具与第二扇形区域的重叠区域。通过本申请实施例可以在各种场景环境下明确出优先推荐的第一虚拟道具,并且第一虚拟道具是玩家操作最方便的虚拟道具,省去了玩家试错不方便交互的虚拟道具的过程,从而有效提高用户体验以及人机交互效率。
作为示例,参见图8A,地面即为XY坐标轴构成的平面,获取以虚拟对象803A所处的位置为圆心,以设定距离r为半径的第一扇形804A(准心视野),这里的第一扇形所占据的区域即为第一扇形区域,虚拟对象面朝准心方向的射线(即为虚拟对象的朝向)是第一扇形的角平分线,图8A示出了虚拟道具A1在虚拟场景中地面上的投影801A以及虚拟道具A2在虚拟场景中地面上的投影802A,由于仅有两个虚拟道具的投影与第一扇形804A重叠,表征两个第一候选虚拟道具同在准心视野内,将两个第一候选虚拟道具中的一个第一候选虚拟道具作为第一虚拟道具。虚拟对象优先与准心视野中投影占比更大的第一候选虚拟道具交互,因此可以直接将投影与第一扇形区域的重叠面积较大的第一候选虚拟道具作为优先交互的第一虚拟道具,例如,将图8A示出的投影801A对应的虚拟道具A1作为优先交互的第一虚拟道具。
作为示例,参见图8B,地面即为XY坐标轴构成的平面,获取以虚拟对象804B所处的位置为圆心,以设定距离r为半径的第一扇形805B(第一准心视野),这里的第一扇形所占据的区域即为第一扇形区域,虚拟对象面朝准心方向的射线(即为虚拟对象的朝向)是第一扇形805B的角平分线,图8B示出了虚拟道具B1在虚拟场景中地面上的投影801B、虚拟道具B2在虚拟场景中地面上的投影802B以及虚拟道具B3在虚拟场景中地面上的投影803B,由于存在三个第一候选虚拟道具的投影与第一扇形805B重叠,表征第一准心视野内存在至少三个第一候选虚拟道具,因此可以获取第二准心视野以判定优先交互的第一候选虚拟道具,获取以虚拟对象804B所在位置为圆心,以设定距离r为半径的第二扇形806B(第二准心视野),第二扇形所占的区域即为第二扇形区域,虚拟对象面朝准心方向的射线(即为虚拟对象的朝向)是第二扇形806B的角平分线,第二扇形小于第一扇形(第二扇形的圆心角小于第一扇形的圆心角),虚拟道具B1、虚拟道具B2以及虚拟道具B3均在第一准心视野内,虚拟道具B1以及虚拟道具B2均在第二准心视野内,虚拟道具B3不在第二准心视野内,因此虚拟道具B1以及虚拟道具B2是第二候选虚拟道具,可以直接将投影与第二扇形区域的重叠面积最大的第二候选虚拟道具作为优先交互的第一虚拟道具,即将图8B示出的投影802B对应的第二候选虚拟道具B2作为优先交互的第一虚拟道具。
在一些实施例中,显示至少两个虚拟道具中的第一虚拟道具处于选中状态之前,执行以下任意一种处理:按照使用频率对至少两个虚拟道具进行排序处理,将排序在首位的虚拟道具作为第一虚拟道具,使用频率是当前虚拟对象的使用频率或者是所有虚拟对象的使用频率;按照场景距离对至少两个虚拟道具进行排序处理,将排序在首位的虚拟道具作为第一虚拟道具,场景距离是虚拟场景中虚拟道具与虚拟对象的距离;按照最近使用时间对至少两个虚拟道具进行排序处理,将排序在首位的虚拟道具作为第一虚拟道具,最近使用时间是虚拟对象最近一次使用虚拟道具的时刻。通过上述排序的方式可以清晰明确地限定出需要以选中状态显示的第一虚拟道具,用户不需要手动选择虚拟道具,从而提升人机交互效率,并且仅显示排序在首位的虚拟道具的交互控件,可以提高显示资源利用率。
作为示例,上述排序处理可以是升序排序处理或者降序排序处理,为了向用户推荐最优的第一虚拟道具,在基于使用频率以及最近使用时间进行排序时,可以采取降序排序处理,在基于场景距离进行排序时,可以采取升序排序的方式,即第一虚拟道具是至少两个虚拟道具中使用频率最高的虚拟道具、或者是至少两个虚拟道具中与虚拟对象最近的虚拟道具,或者是至少两个虚拟道具中最近被虚拟对象使用的虚拟道具。
在一些实施例中,显示至少两个虚拟道具中的第一虚拟道具处于选中状态之前,获取虚拟场景中针对至少两个虚拟道具的历史交互数据、以及道具参数,每个虚拟道具的历史交互数据包括每次使用虚拟道具的场景参数;通过第一神经网络模型执行以下处理:从场景参数中提取场景特征,并从道具参数中提取道具特征;对场景特征以及道具特征进行融合处理,得到第一融合特征;将第一融合特征映射为每个虚拟道具与虚拟场景适配的第一概率;按照第一概率从高到低的顺序,对至少两个虚拟道具进行排序处理,将排序在首位的虚拟道具作为第一虚拟道具。通过神经网络模型可以提高显示第一虚拟道具的交互控件的智能化程度以及准确度,有效提高用户与虚拟道具交互的效率,并且通过仅显示排序在首位的虚拟道具的交互控件的方式,可以有效提高显示资源的利用效率。
作为示例,每个虚拟道具的历史交互数据包括每次使用虚拟道具的场景参数,场景参数包括对战数据、环境数据、以及虚拟对象的状态数据等等,道具参数是虚拟道具本身的参数,例如虚拟道具的用途等等。
下面介绍如何对上述神经网络模型进行训练。在样本虚拟场景中采集各个样本虚拟道具的样本场景参数以及样本道具参数,根据所采集的样本场景参数以及样本道具参数构建训练样本,以训练样本为待训练的神经网络模型的输入,并以样本虚拟道具在样本虚拟场景中是否为优先使用的虚拟道具作为标注数据,当同时显示有多个虚拟道具,且样本虚拟道具被优先使用时,样本虚拟道具的标注是1,当同时显示有多个虚拟道具,且样本虚拟道具未被优先使用时,则样本虚拟道具的标注是0,基于训练样本以及标注数据对待训练的神经网络模型进行训练,从而后续可以直接通过第一神经网络模型确定出某个虚拟道具是否作为优先推荐的第一虚拟道具。
在一些实施例中,参见图4B,图4B是本申请实施例提供的虚拟场景的道具交互方法的流程示意图,基于图4A,还可以执行图4B中步骤103。
在步骤103中,针对未处于选中状态的至少一个第二虚拟道具,显示对应至少一个第二虚拟道具的切换控件。
作为示例,第二虚拟道具具有交互功能,切换控件用于被触发显示对应至少一个第二虚拟道具的交互控件。
在一些实施例中,当未处于选中状态的第二虚拟道具的数目为一时,响应于针对切换控件的触发操作,显示第二虚拟道具的至少一个交互控件(显示第二虚拟道具处于选中状态,第一虚拟道具处于未选中状态),并隐藏第一虚拟道具的至少一个交互控件。通过本申请实施例实现了两个虚拟道具的交互控件之间的切换显示,从而通过一个交互控件实现与多个虚拟道具的交互。
作为示例,第二虚拟道具可以对应有一个交互控件或者多个交互控件,例如,当第二虚拟道具为椅子时,对应有“坐下”交互控件,当“坐下”交互控件被触发,控制虚拟对象坐在椅子上;当第二虚拟道具为载具时,对应有“开车”交互控件以及“坐车”交互控件,当“开车”交互控件被触发,控制虚拟对象进入载具的驾驶席。
作为示例,参见图5B,人机交互界面501B中显示椅子502B以及物资箱503B,椅子502B中显示有选中标记504B(表征处于选中状态),椅子502B是第一虚拟道具,物资箱503B中未显示有选中标记504B(表征未处于选中状态),物资箱503B是第二虚拟道具。在人机交互界面中还显示椅子502B的交互控件505B,响应于针对切换控件507B的触发操作,在人机交互界面中隐藏椅子502B的交互控件505B并显示物资箱503B的交互控件508B,并在物资箱503B中显示有选中标记504B。
在一些实施例中,当未处于选中状态的第二虚拟道具的数目为多个时,响应于针对切换控件的触发操作,显示与多个第二虚拟道具一一对应的道具标识(例如标识控件);响应于针对任意一个道具标识的触发操作,显示与触发的道具标识对应的第二虚拟道具的至少一个交互控件(显示第二虚拟道具处于选 中状态,第一虚拟道具处于未选中状态),并隐藏第一虚拟道具的至少一个交互控件。通过本申请实施例实现了至少三个虚拟道具的交互控件之间的切换显示,从而通过有限数量的控件实现与多个虚拟道具的交互。
作为示例,参见图5C,人机交互界面501C中显示第一椅子502C、物资箱503C以及第二椅子504C,第一椅子502C中显示有选中标记505C,第一椅子502C是第一虚拟道具,物资箱503C以及第二椅子504C是第二虚拟道具,在人机交互界面中还显示第一椅子502C的交互控件506C,响应于针对切换控件511C的触发操作,在人机交互界面中隐藏第一椅子502C的交互控件506C并显示物资箱503C的标识控件508C以及第二椅子的标识控件509C,响应于针对物资箱503C的标识控件508C的触发操作,在物资箱503C中显示有选中标记505C,在人机交互界面中隐藏物资箱503C的标识控件508C以及第二椅子的标识控件509C,并显示物资箱503C的交互控件510C。
在一些实施例中,上述显示与多个第二虚拟道具一一对应的道具标识,可以通过以下技术方案实现:按照设定顺序显示与多个第二虚拟道具一一对应的道具标识(例如,标识控件);其中,多个第二虚拟道具的数目是以下任意一种:设定数目(这里的设定数目可以为多个历史虚拟场景中第二虚拟道具的数目的平均值)、与人机交互界面的尺寸正相关的数目、与虚拟场景的空闲区域的面积正相关的数目、与第二虚拟道具的道具数目正相关的数目。通过显示第二虚拟道具的道具标识,可以向用户形成推荐作用,并且有效提高显示资源利用率。
作为示例,以处于未选中状态的第二虚拟道具的数目是5为例进行说明,即虚拟场景中有5个处于未选中状态的第二虚拟道具,参见图5C,响应于针对切换控件511C的触发操作,在人机交互界面中隐藏第一椅子502C的交互控件506C并显示物资箱503C的标识控件508C以及第二椅子的标识控件509C,多个第二虚拟道具的数目为2,表征针对5个处于未选中状态的第二虚拟道具仅显示2个第二虚拟道具的道具标识,显示有道具标识的第二虚拟道具的数目可以是设定数目;数目还可以与人机交互界面的尺寸正相关,人机交互界面的尺寸越大则显示道具标识的第二虚拟道具越多,例如,当人机交互界面的尺寸大于当前人机交互界面的尺寸时,针对5个第二虚拟道具可以显示4个第二虚拟道具的道具标识;数目还可以与虚拟场景的空闲区域的面积正相关,虚拟场景的空闲区域的面积越大,会在人机交互界面中显示出道具标识的第二虚拟道具的数量越多,例如,当虚拟场景的空闲区域的面积大于当前虚拟场景的空闲区域的面积时,针对5个第二虚拟道具可以显示4个第二虚拟道具的道具标识;数目还可以与第二虚拟道具的道具数目正相关,第二虚拟道具的道具数目越大则显示道具标识的第二虚拟道具越多,例如,针对5个第二虚拟道具可以显示3个第二虚拟道具的道具标识,针对4个第二虚拟道具可以显示2个第二虚拟道具的道具标识。
在一些实施例中,设定顺序是以下任意一种:第二虚拟道具的使用频率从高到低的顺序或者从低到高的顺序;第二虚拟道具的场景距离从小到大的顺序或者从大到小的顺序,场景距离是虚拟场景中虚拟道具与虚拟对象的距离;第二虚拟道具的最近使用时间从近到远的顺序或者从远到近的顺序,最近使用时间是虚拟对象最近一次使用第二虚拟道具的时刻;第二虚拟道具与虚拟对象的交互效率从小到大的顺序或者从大到小的顺序。通过上述排序的方式可以清晰明确地限定出需要显示道具标识以参与切换的第二虚拟道具,并且按照顺序进行显示可以向玩家提示更加丰富的道具信息,提升人机交互效率。
作为示例,上述排序处理可以是升序排序处理或者降序排序处理。为了向用户推荐符合用户兴趣的虚拟道具,在基于使用频率、最近使用时间以及交互效率进行排序时,可以采取降序排序处理,在基于场景距离进行排序时,可以采取升序排序的方式,即显示道具标识的第二虚拟道具是处于未选中状态的第二虚拟道具中使用频率较高的虚拟道具、或者是处于未选中状态的第二虚拟道具中与虚拟对象较近的第二虚拟道具,或者是处于未选中状态的第二虚拟道具中最近一段时间被虚拟对象使用的第二虚拟道具,或者是处于未选中状态的第二虚拟道具中交互效率较高的第二虚拟道具。为了提高虚拟道具的使用多样性,在基于使用频率、最近使用时间以及交互效率进行排序时,可以采取升序排序处理,在基于场景距离进行排序时,可以采取降序排序的方式,即显示道具标识的第二虚拟道具是处于未选中状态的第二虚拟道具中使用频率较低的虚拟道具、或者是处于未选中状态的第二虚拟道具中与虚拟对象较远的第二虚拟道具,或者是处于未选中状态的第二虚拟道具中最近一段时间未被虚拟对象使用的第二虚拟道具,或者是处于未选中状态的第二虚拟道具中交互效率较低的第二虚拟道具。
在一些实施例中,确定以虚拟对象为圆心,以设定距离为半径,以第二角度为圆心角的第二扇形区域,其中,虚拟对象的朝向与第二扇形区域的圆心角的角平分线重合,第二角度小于第一角度;获取每个第二虚拟道具与第二扇形区域的第三重叠区域,第三重叠区域是第二虚拟道具在虚拟场景的地面上的投影区域与第二扇形区域的重叠区域,并获取与第三重叠区域的面积正相关的交互效率。
作为示例,参见图8E,图8E示出了虚拟道具E1在虚拟场景中地面上的投影801E、虚拟道具E2在虚拟场景中地面上的投影802E、虚拟道具E3在虚拟场景中地面上的投影803E以及虚拟道具E4在虚拟场景中地面上的投影807E,其中,虚拟道具E2和虚拟道具E4在Z轴上处于堆叠状态,获取以虚拟对象 804E为圆心,以设定距离r为半径的第二扇形806E(第二准心视野),第二扇形806E所占的区域是第二扇形区域,虚拟对象面朝准心方向的射线(虚拟对象的朝向)是第二扇形806E的角平分线,虚拟道具E1、虚拟道具E2以及虚拟道具E4均在第二准心视野内,可以直接将投影与第二扇形的重叠面积最大的虚拟道具作为优先交互的第一虚拟道具,即将图8E示出的投影802E对应的虚拟道具E2作为优先交互的第一虚拟道具,将投影801E对应的虚拟道具E1以及投影804E对应的虚拟道具E4作为第二虚拟道具,由于投影801E对应的第三重叠区域的面积大于投影804E对应的第三重叠区域的面积,表征虚拟道具E1的交互效率高于虚拟道具E4的交互效率。
在一些实施例中,参见图4C,图4C是本申请实施例提供的虚拟场景的道具交互方法的流程示意图,将结合图4C示出的步骤201-步骤203进行说明。
在步骤201中,在人机交互界面显示虚拟场景中的至少部分区域。
作为示例,至少部分区域包括虚拟对象。
在步骤202中,响应于在至少部分区域中出现至少两个虚拟道具,基于选中状态显示至少两个虚拟道具中具有交互功能的第一虚拟道具、以及对应第一虚拟道具的至少一个交互控件。
作为示例,交互控件用于被触发执行交互控件对应的交互功能,交互功能用于虚拟对象与第一虚拟道具进行交互。
在步骤203中,针对未处于选中状态的至少一个第二虚拟道具,显示对应至少一个第二虚拟道具的切换控件。
作为示例,切换控件用于被触发以显示对应至少一个第二虚拟道具的交互控件。
步骤201至步骤203的实施方式可以参考步骤101至步骤103的实施方式。
通过本申请实施例响应于在至少部分区域中出现具有交互功能的至少两个虚拟道具,显示至少两个虚拟道具中的第一虚拟道具处于选中状态,以及显示对应第一虚拟道具的至少一个交互控件,从而直接向玩家显示出自动选中的第一虚拟道具以及对应的交互控件,省去了玩家手动选择当前需要交互的虚拟道具的过程,可以有效提高人机交互效率,并且通过切换控件实现了多个虚拟道具的交互控件之间的切换显示,从而在有限控件数目的情况下实现与多个虚拟道具的交互,从而可以有效提升显示资源利用率。
下面,将说明本申请实施例在一个实际的应用场景中的示例性应用。
在一些实施例中,账号登录终端运行的客户端(例如网络版的游戏应用),服务器计算出虚拟场景的场景显示数据,场景显示数据包括处于选中状态的第一虚拟道具的道具显示数据,并将场景显示数据发送至终端,在客户端的人机交互界面中显示虚拟场景中包括虚拟对象的至少部分区域,在至少部分区域中显示具有交互功能的至少两个虚拟道具,显示至少两个虚拟道具中处于选中状态的第一虚拟道具,以及显示对应第一虚拟道具的至少一个交互控件;终端接收账号针对交互控件的触发操作,终端将触发操作的操作数据通过网络发送至服务器,服务器基于操作数据计算交互控件对应交互功能的响应数据,服务器将响应数据通过网络发送至终端,基于响应数据在终端的人机交互界面中显示虚拟对象与虚拟道具之间的交互过程。
在游戏场景中,玩家控制虚拟对象与虚拟场景中的可交互的虚拟道具进行交互,当虚拟对象选定虚拟道具之后虚拟对象可以与虚拟道具发生多样化的交互动作,例如,当虚拟道具是椅子时,虚拟对象可以在椅子上坐下,当虚拟道具是浴缸时,虚拟对象可以对浴缸进行注水或者放水,但是当多个虚拟道具同时处于虚拟对象的交互范围内时,首先需要接收用户针对椅子的选中操作,再接收用户坐下椅子的坐下操作,即用户需要执行两次操作才能实现交互,通过本申请实施例提供的虚拟场景的道具交互方法可以明确当前交互的虚拟道具,并且高效准确地切换更多的可交互的虚拟道具,可以有效提高人机交互效率,例如,仅需要接收用户坐下椅子的坐下操作,即可以实现针对虚拟道具的交互过程。
在一些实施例中,参见图5A,人机交互界面501A中显示虚拟道具502A,虚拟道具502A中显示有选中标记503A,在人机交互界面中还显示交互控件504A,响应于针对交互控件504A的触发操作,虚拟对象505A坐在虚拟道具502A上。当虚拟对象的识别范围内只有一个具有交互功能的虚拟道具时,显示具有交互功能的虚拟道具以及对应虚拟道具的交互控件。
在一些实施例中,参见图5B,人机交互界面501B中显示椅子502B以及物资箱503B,椅子502B中显示有选中标记504B,在人机交互界面中还显示椅子502B的交互控件505B,响应于针对交互控件505B的触发操作,虚拟对象506B坐在椅子502B上,响应于针对切换控件507B的触发操作,在人机交互界面中隐藏椅子502B的交互控件505B并显示物资箱503B的交互控件508B,并在物资箱503B中显示有选中标记504B。当虚拟对象的识别范围内只有两个具有交互功能的虚拟道具时,显示当前推荐的具有交互功能的虚拟道具、对应虚拟道具的交互控件,以及快速切换其他虚拟道具的切换控件。
在一些实施例中,参见图5C,图5C是本申请实施例提供的虚拟场景的道具交互方法的界面示意图,人机交互界面501C中显示第一椅子502C、物资箱503C以及第二椅子504C,第一椅子502C中显示有选中标记505C,在人机交互界面中还显示第一椅子502C的交互控件506C,响应于针对第一椅子502C的 交互控件506C的触发操作,虚拟对象507C坐在第一椅子502C上,响应于针对切换控件511C的触发操作,在人机交互界面中隐藏第一椅子502C的交互控件506C并显示物资箱503C的标识控件508C以及第二椅子的标识控件509C,响应于针对物资箱503C的标识控件508C的触发操作,在物资箱503C中显示有选中标记505C,在人机交互界面中隐藏物资箱503C的标识控件508C以及第二椅子的标识控件509C,并显示物资箱503C的交互控件510C。当虚拟对象的识别范围内有至少三个具有交互功能的虚拟道具时,通过判断机制进行虚拟道具的交互控件切换,并显示当前推荐的虚拟道具。
在一些实施例中,当虚拟场景中有多个与虚拟对象的距离较近的具有交互功能的虚拟道具时,不需要玩家通过移动准心去选择具有交互功能的虚拟道具,就能够准确显示当前推荐交互的虚拟道具,对当前识别范围内具有交互功能的多个虚拟道具,进行基于数量识别处理,针对不同数量的情况执行不同的识别流程,从而实现多个虚拟道具之间的准确切换。
在一些实施例中,参见图6,图6是本申请实施例提供的虚拟场景的道具交互方法的流程示意图,玩家可以控制虚拟对象来与虚拟场景中可自由摆放的虚拟道具进行交互;通过对虚拟道具进行识别及切换,帮助玩家准确和便捷的完成与多个虚拟道具的交互行为。在步骤601中,对虚拟场景进行识别处理,得到待交互的虚拟道具,在步骤602中,接收针对虚拟道具的触发操作,在步骤603中,执行虚拟对象与虚拟道具的交互。步骤601可以通过步骤6011实现,在步骤6011中,当识别到1个虚拟道具时,将该虚拟道具作为待交互的虚拟道具。步骤601可以通过步骤6012以及6013实现,在步骤6012中,当识别到2个虚拟道具时,将识别范围较大的虚拟道具作为待交互的虚拟道具,在步骤6013中,响应于切换操作,切换显示2个虚拟道具的交互控件。步骤601可以通过步骤6014以及6015实现,在步骤6014中,当识别到至少三个虚拟道具时,进行基于准心的二次范围识别,将识别范围最大的虚拟道具作为待交互的虚拟道具,在步骤6015中,响应于切换操作,切换显示至少三个虚拟道具的交互控件。
在一些实施例中,玩家控制虚拟对象与游戏场景中的不同物件(虚拟道具)进行交互,可以发生查看、对话、拾取等不同类型的操作,玩家控制虚拟对象与虚拟场景中的物件发生交互,当虚拟场景中同时存在多个物件时,参见图7A-图7B,虚拟对象与物件A、物件B以及物件C发生交互,物件之间可以是堆叠状态,例如物件B与物件D之间处于堆叠状态。
在一些实施例中,参见图8A,图8A示出了虚拟道具A1在虚拟场景中地面上的投影801A以及虚拟道具A2在虚拟场景中地面上的投影802A,地面即为XY坐标轴构成的平面,获取以虚拟对象803A为圆心,以设定距离r为半径的扇形804A(准心视野),虚拟对象面朝准心方向的射线是扇形的角平分线,由于仅有两个虚拟道具的投影与扇形重叠,表征两个虚拟道具同在准心视野内,优先与准心视野中投影占比更大的虚拟道具交互,因此可以直接将投影与扇形的重叠面积较大的虚拟道具作为优先交互的虚拟道具,例如,将图8A示出的投影801A对应的虚拟道具A1作为优先交互的虚拟道具。通过对切换控件的触发操作可以完成两个虚拟道具的交互控件之间的切换显示,不需要玩家控制虚拟对象移动准心来改变虚拟道具在准心视野中投影占比。
在一些实施例中,参见图8B,图8B示出了虚拟道具B1在虚拟场景中地面上的投影801B、虚拟道具B2在虚拟场景中地面上的投影802B以及虚拟道具B3在虚拟场景中地面上的投影803B,获取以虚拟对象804B为圆心,以设定距离r为半径的第一扇形805B(第一准心视野),虚拟对象面朝准心方向的射线是第一扇形805B的角平分线,由于存在三个虚拟道具的投影与第一扇形805B重叠,表征准心视野内存在至少三个虚拟道具,基于准心方位获取二次准心视野以判定优先交互的虚拟道具,获取以虚拟对象804B为圆心,以设定距离r为半径的第二扇形806B(第二准心视野),虚拟对象面朝准心方向的射线是第二扇形806B的角平分线,第二扇形小于第一扇形,虚拟道具B1、虚拟道具B2以及虚拟道具B3均在第一准心视野内,虚拟道具B1以及虚拟道具B2均在第二准心视野内,可以直接将投影与第二扇形的重叠面积最大的虚拟道具作为优先交互的虚拟道具,即将图8B示出的投影802B对应的虚拟道具B2作为优先交互的虚拟道具,并且可通过操作控件实现虚拟道具B1、虚拟道具B2以及虚拟道具B3对应的交互控件之间的显示切换。
在一些实施例中,参见图8C,图8C是本申请实施例提供的虚拟场景的道具交互方法的重叠区域计算示意图,图8C示出了虚拟道具C1在虚拟场景中地面上的投影801C、虚拟道具C2在虚拟场景中地面上的投影802C以及虚拟道具C3在虚拟场景中地面上的投影803C,获取以虚拟对象804C为圆心,以设定距离r为半径的扇形805C,虚拟对象面朝准心方向的射线是扇形805C的角平分线,由于虚拟道具C1的投影以及虚拟道具C2的投影与扇形805C没有重叠,仅有虚拟道具C3的投影与扇形805C,将虚拟道具C3作为优先交互的虚拟道具,不会触发与虚拟道具C1以及虚拟道具C2发生交互。
在一些实施例中,参见图8D,图8D是本申请实施例提供的虚拟场景的道具交互方法的重叠区域计算示意图,图8D示出了虚拟道具D1在虚拟场景中地面上的投影801D、虚拟道具D2在虚拟场景中地面上的投影802D以及虚拟道具D3在虚拟场景中地面上的投影803D,获取以虚拟对象804D为圆心,以设定距离r为半径的第一扇形805D(第一准心视野),虚拟对象面朝准心方向的射线是第一扇形805D的角 平分线,由于存在三个虚拟道具的投影与第一扇形805D重叠,表征准心视野内存在至少三个虚拟道具,基于准心方位获取二次准心视野以判定优先交互的虚拟道具,获取以虚拟对象804D为圆心,以设定距离r为半径的第二扇形806D(第二准心视野),虚拟对象面朝准心方向的射线是第二扇形806D的角平分线,第二扇形小于第一扇形,虚拟道具D1、虚拟道具D2以及虚拟道具D3均在第一准心视野内,虚拟道具D1以及虚拟道具D2均在第二准心视野内,可以直接将投影与第二扇形的重叠面积最大的虚拟道具作为优先交互的虚拟道具,即将图8D示出的投影801D对应的虚拟道具D1作为优先交互的虚拟道具,并且可通过操作控件实现虚拟道具D1、虚拟道具D2以及虚拟道具D3对应的交互控件之间的显示切换。
在一些实施例中,参见图8E,图8E示出了虚拟道具E1在虚拟场景中地面上的投影801E、虚拟道具E2在虚拟场景中地面上的投影802E、虚拟道具E3在虚拟场景中地面上的投影803E以及虚拟道具E4在虚拟场景中地面上的投影807E,其中,虚拟道具E2和虚拟道具E4在Z轴上处于堆叠状态,获取以虚拟对象804E为圆心,以设定距离r为半径的第一扇形805E(第一准心视野),虚拟对象面朝准心方向的射线是第一扇形805E的角平分线,由于存在三个虚拟道具的投影与第一扇形805E重叠,表征准心视野内存在至少三个虚拟道具,基于准心方位获取二次准心视野以判定优先交互的虚拟道具,获取以虚拟对象804E为圆心,以设定距离r为半径的第二扇形806E(第二准心视野),虚拟对象面朝准心方向的射线是第二扇形806E的角平分线,第二扇形小于第一扇形,虚拟道具E1、虚拟道具E2、虚拟道具E3以及虚拟道具E4均在第一准心视野内,虚拟道具E1、虚拟道具E2以及虚拟道具E4均在第二准心视野内,可以直接将投影与第二扇形的重叠面积最大的虚拟道具作为优先交互的虚拟道具,即将图8E示出的投影802E对应的虚拟道具E2作为优先交互的虚拟道具,并且可通过操作控件实现虚拟道具E1、虚拟道具E2以及虚拟道具E4对应的交互控件之间的显示切换。
相关技术中当虚拟场景中密集摆放多个虚拟道具时,难以明确优先交互的虚拟道具,尤其当Z轴方向堆叠多个虚拟道具时,可能需要频繁的移动屏幕以控制准心来调整待交互的虚拟道具。通过本申请实施例可以帮助玩家明确出优先进行交互的虚拟道具,并在有限控件的情况下实现多个虚拟道具之间的交互切换。
可以理解的是,在本申请实施例中,涉及到用户信息等相关的数据,当本申请实施例运用到具体产品或技术中时,需要获得用户许可或者同意,且相关数据的收集、使用和处理需要遵守相关国家和地区的相关法律法规和标准。
下面继续说明本申请实施例提供的虚拟场景的道具交互装置455-1的实施为软件模块的示例性结构,在一些实施例中,如图3所示,存储在存储器450的虚拟场景的道具交互装置455-1中的软件模块可以包括:第一显示模块4551,配置为在人机交互界面显示虚拟场景中的至少部分区域,其中,至少部分区域包括虚拟对象;第一显示模块4551,还配置为响应于在至少部分区域中出现具有交互功能的至少两个虚拟道具,显示至少两个虚拟道具中的第一虚拟道具处于选中状态,以及显示对应第一虚拟道具的至少一个交互控件;其中,交互控件用于被触发执行交互控件对应的交互功能,交互功能用于虚拟对象与第一虚拟道具进行交互。
在一些实施例中,第一显示模块4551,还配置为:响应于在至少部分区域中出现具有交互功能的至少两个虚拟道具,在至少部分区域中对至少两个虚拟道具应用第一显示方式;其中,第一显示方式的显著程度与至少两个虚拟道具的特征值正相关,特征值包括以下至少之一:虚拟道具的使用频率、虚拟道具与虚拟对象的距离、虚拟道具与虚拟对象的朝向角。
在一些实施例中,第一显示模块4551,还配置为:响应于在至少部分区域中出现具有交互功能的至少两个虚拟道具,在人机交互界面中对至少两个虚拟道具应用第一显示方式,并在人机交互界面中对其他虚拟道具应用第二显示方式;其中,第二显示方式区别于第一显示方式,其他虚拟道具不具有交互功能。
在一些实施例中,第一显示模块4551,还配置为:显示至少两个虚拟道具中的第一虚拟道具处于选中状态之前,确定以虚拟对象为圆心,以设定距离为半径,以第一角度为圆心角的第一扇形区域,其中,虚拟对象的朝向与第一扇形区域的圆心角的角平分线重合;确定与第一扇形区域重叠的至少一个第一候选虚拟道具,其中,第一候选虚拟道具在虚拟场景的地面上的投影区域与第一扇形区域重叠;将至少一个第一候选虚拟道具中的一个第一候选虚拟道具,作为第一虚拟道具。
在一些实施例中,第一显示模块4551,还配置为:当第一扇形区域包括一个第一候选虚拟道具时,将第一候选虚拟道具确定为第一虚拟道具;当第一扇形区域包括两个第一候选虚拟道具时,将第一重叠区域的面积较大的第一候选虚拟道具作为第一虚拟道具,第一重叠区域是第一候选虚拟道具与第一扇形区域的重叠区域;当第一扇形区域包括至少三个第一候选虚拟道具时,执行以下处理:确定以虚拟对象为圆心,以设定距离为半径,以第二角度为圆心角的第二扇形区域,其中,虚拟对象的朝向与第二扇形区域的圆心角的角平分线重合,第二角度小于第一角度;确定与第二扇形区域重叠的至少一个第二候选虚拟道具,其中,第二候选虚拟道具在虚拟场景的地面上的投影区域与第二扇形区域重叠;将第二重叠 区域的面积最大的第二候选虚拟道具,作为第一虚拟道具,第二重叠区域是第二候选虚拟道具与第二扇形区域的重叠区域。
在一些实施例中,第一显示模块4551,还配置为:显示至少两个虚拟道具中的第一虚拟道具处于选中状态之前,执行以下任意一种处理:按照使用频率对至少两个虚拟道具进行排序处理,将排序在首位的虚拟道具作为第一虚拟道具;按照场景距离对至少两个虚拟道具进行排序处理,将排序在首位的虚拟道具作为第一虚拟道具,场景距离是虚拟场景中虚拟道具与虚拟对象的距离;按照最近使用时间对至少两个虚拟道具进行排序处理,将排序在首位的虚拟道具作为第一虚拟道具,最近使用时间是虚拟对象最近一次使用虚拟道具的时刻。
在一些实施例中,第一显示模块4551,还配置为:显示至少两个虚拟道具中的第一虚拟道具处于选中状态之前,获取虚拟场景中针对至少两个虚拟道具的历史交互数据、以及道具参数,每个虚拟道具的历史交互数据包括每次使用虚拟道具的场景参数;通过第一神经网络模型执行以下处理:从场景参数中提取场景特征,并从道具参数中提取道具特征;对场景特征以及道具特征进行融合处理,得到第一融合特征;将第一融合特征映射为每个虚拟道具与虚拟场景适配的第一概率;按照第一概率从高到低的顺序,对至少两个虚拟道具进行排序处理,将排序在首位的虚拟道具作为第一虚拟道具。
在一些实施例中,第一显示模块4551,还配置为:针对未处于选中状态的至少一个第二虚拟道具,显示对应至少一个第二虚拟道具的切换控件;其中,第二虚拟道具具有交互功能,切换控件用于被触发显示对应至少一个第二虚拟道具的交互控件。
在一些实施例中,第一显示模块4551,还配置为:当未处于选中状态的第二虚拟道具的数目为一时,响应于针对切换控件的触发操作,显示第二虚拟道具的至少一个交互控件,并隐藏第一虚拟道具的至少一个交互控件。
在一些实施例中,第一显示模块4551,还配置为:当未处于选中状态的第二虚拟道具的数目为多个时,响应于针对切换控件的触发操作,显示与多个第二虚拟道具一一对应的道具标识;响应于针对任意一个道具标识的触发操作,显示与触发的道具标识对应的第二虚拟道具的至少一个交互控件,并隐藏第一虚拟道具的至少一个交互控件。
在一些实施例中,第一显示模块4551,还配置为:按照设定顺序显示与多个第二虚拟道具一一对应的道具标识;其中,多个第二虚拟道具的数目是以下任意一种:设定数目、与人机交互界面的尺寸正相关的数目、与虚拟场景的空闲区域的面积正相关的数目、与第二虚拟道具的道具数目正相关的数目。
在一些实施例中,设定顺序是以下任意一种:第二虚拟道具的使用频率从高到低的顺序或者从低到高的顺序;第二虚拟道具的场景距离从小到大的顺序或者从大到小的顺序,场景距离是虚拟场景中虚拟道具与虚拟对象的距离;第二虚拟道具的最近使用时间从近到远的顺序或者从远到近的顺序,最近使用时间是虚拟对象最近一次使用第二虚拟道具的时刻;第二虚拟道具与虚拟对象的交互效率从小到大的顺序或者从大到小的顺序。
在一些实施例中,第一显示模块4551,还配置为:确定以虚拟对象为圆心,以设定距离为半径,以第二角度为圆心角的第二扇形区域,其中,虚拟对象的朝向与第二扇形区域的圆心角的角平分线重合,第二角度小于第一角度;获取每个第二虚拟道具与第二扇形区域的第三重叠区域,第三重叠区域是第二虚拟道具在虚拟场景的地面上的投影区域与第二扇形区域的重叠区域,并获取与第三重叠区域的面积正相关的交互效率。
下面继续说明本申请实施例提供的虚拟场景的道具交互装置455-2的实施为软件模块的示例性结构,在一些实施例中,如图3所示,存储在存储器450的虚拟场景的道具交互装置455-2中的软件模块可以包括:第二显示模块4552,配置为在人机交互界面显示虚拟场景中的至少部分区域,其中,至少部分区域包括虚拟对象;第二显示模块4552,还配置为响应于在至少部分区域中出现至少两个虚拟道具,基于选中状态显示至少两个虚拟道具中具有交互功能的第一虚拟道具、以及对应第一虚拟道具的至少一个交互控件,其中,交互控件用于被触发执行交互控件对应的交互功能,交互功能用于与虚拟对象进行交互;第二显示模块4552,还配置为针对未处于选中状态的至少一个第二虚拟道具,显示对应至少一个第二虚拟道具的切换控件;其中,切换控件用于被触发显示对应至少一个第二虚拟道具的交互控件。
本申请实施例提供了一种计算机程序产品,该计算机程序产品包括计算机可执行指令,该计算机可执行指令存储在计算机可读存储介质中。电子设备的处理器从计算机可读存储介质读取该计算机可执行指令,处理器执行该计算机可执行指令,使得该电子设备执行本申请实施例上述的虚拟场景的道具交互方法。
本申请实施例提供一种存储有计算机可执行指令的计算机可读存储介质,其中存储有计算机可执行指令,当计算机可执行指令被处理器执行时,将引起处理器执行本申请实施例提供的虚拟场景的道具交互方法,例如,如图4A-4C示出的虚拟场景的道具交互方法。
在一些实施例中,计算机可读存储介质可以是FRAM、ROM、PROM、EPROM、EEPROM、闪存、 磁表面存储器、光盘、或CD-ROM等存储器;也可以是包括上述存储器之一或任意组合的各种设备。
在一些实施例中,计算机可执行指令可以采用程序、软件、软件模块、脚本或代码的形式,按任意形式的编程语言(包括编译或解释语言,或者声明性或过程性语言)来编写,并且其可按任意形式部署,包括被部署为独立的程序或者被部署为模块、组件、子例程或者适合在计算环境中使用的其它单元。
作为示例,计算机可执行指令可以但不一定对应于文件系统中的文件,可以可被存储在保存其它程序或数据的文件的一部分,例如,存储在超文本标记语言(HTML,Hyper Text Markup Language)文档中的一个或多个脚本中,存储在专用于所讨论的程序的单个文件中,或者,存储在多个协同文件(例如,存储一个或多个模块、子程序或代码部分的文件)中。
作为示例,计算机可执行指令可被部署为在一个电子设备上执行,或者在位于一个地点的多个电子设备上执行,又或者,在分布在多个地点且通过通信网络互连的多个电子设备上执行。
综上所述,通过本申请实施例响应于在至少部分区域中出现具有交互功能的至少两个虚拟道具,显示至少两个虚拟道具中的第一虚拟道具处于选中状态,以及显示对应第一虚拟道具的至少一个交互控件,从而直接向玩家显示出自动选中的第一虚拟道具以及对应的交互控件,省去了玩家手动选择当前需要交互的虚拟道具的过程,可以有效提高人机交互效率。
以上所述,仅为本申请的实施例而已,并非用于限定本申请的保护范围。凡在本申请的精神和范围之内所作的任何修改、等同替换和改进等,均包含在本申请的保护范围之内。

Claims (19)

  1. A prop interaction method for a virtual scene, the method being executed by an electronic device and comprising:
    displaying at least part of the area of the virtual scene on a human-computer interaction interface, wherein the at least part of the area comprises a virtual object;
    in response to at least two virtual props with interactive functions appearing in the at least part of the area, displaying a first virtual prop among the at least two virtual props in a selected state; and
    displaying at least one interactive control corresponding to the first virtual prop; wherein the interactive control is used to be triggered to execute the interactive function corresponding to the interactive control, and the interactive function is used for the virtual object to interact with the first virtual prop.
  2. The method according to claim 1, wherein the method further comprises:
    in response to at least two virtual props with interactive functions appearing in the at least part of the area, applying a first display mode to the at least two virtual props in the at least part of the area;
    wherein the prominence of the first display mode is positively correlated with characteristic values of the at least two virtual props, the characteristic values comprising at least one of the following: a usage frequency of the virtual prop, a distance between the virtual prop and the virtual object, and an orientation angle between the virtual prop and the virtual object.
  3. The method according to claim 1, wherein the method further comprises:
    in response to at least two virtual props with interactive functions appearing in the at least part of the area, applying a first display mode to the at least two virtual props in the human-computer interaction interface, and applying a second display mode to other virtual props in the human-computer interaction interface;
    wherein the second display mode is different from the first display mode, and the other virtual props do not have the interactive function.
  4. The method according to claim 1, wherein before displaying the first virtual prop among the at least two virtual props in the selected state, the method further comprises:
    determining a first sector-shaped area with the virtual object as the center, a set distance as the radius, and a first angle as the central angle, wherein the orientation of the virtual object coincides with the angular bisector of the central angle of the first sector-shaped area;
    determining at least one first candidate virtual prop overlapping the first sector-shaped area, wherein a projection area of the first candidate virtual prop on the ground of the virtual scene overlaps the first sector-shaped area; and
    taking one first candidate virtual prop of the at least one first candidate virtual prop as the first virtual prop.
  5. 根据权利要求4所述的方法,其中,所述将所述至少一个第一候选虚拟道具中的一个第一候选虚拟道具,作为所述第一虚拟道具,包括:
    当所述第一扇形区域包括一个所述第一候选虚拟道具时,将所述第一候选虚拟道具确定为所述第一虚拟道具;
    当所述第一扇形区域包括两个所述第一候选虚拟道具时,将第一重叠区域的面积较大的第一候选虚拟道具作为所述第一虚拟道具,所述第一重叠区域是所述第一候选虚拟道具与所述第一扇形区域的重叠区域;
    当所述第一扇形区域包括至少三个所述第一候选虚拟道具时,执行以下处理:
    确定以所述虚拟对象为圆心,以所述设定距离为半径,以第二角度为圆心角的第二扇形区域,其中,所述虚拟对象的朝向与所述第二扇形区域的圆心角的角平分线重合,所述第二角度小于所述第一角度;
    确定与所述第二扇形区域重叠的至少一个第二候选虚拟道具,其中,所述第二候选虚拟道具在所述虚拟场景的地面上的投影区域与所述第二扇形区域重叠;
    将第二重叠区域的面积最大的第二候选虚拟道具,作为所述第一虚拟道具,所述第二重叠区域是所述第二候选虚拟道具与所述第二扇形区域的重叠区域。
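Claims 4 and 5 describe a cascaded geometric selection: gather candidates overlapping a wide first sector, resolve a tie of two by the larger first overlap region, and only when three or more candidates remain narrow the search to a smaller second sector. The sketch below illustrates that cascade under the same assumptions as the earlier sector example (ground projections modelled as shapely polygons via a hypothetical `footprint` field); the fallback for an empty narrowed set is also an assumption, since the claims leave that case open.

```python
# Illustrative cascade from claims 4-5; reuses sector() from the
# earlier geometry sketch. Each prop dict carries a hypothetical
# "footprint" polygon for its ground projection.
def select_first_prop(props, obj_pos, heading, radius,
                      first_angle, second_angle):
    wide = sector(obj_pos, heading, radius, first_angle)
    candidates = [p for p in props if p["footprint"].intersects(wide)]
    if len(candidates) == 1:
        return candidates[0]                 # lone candidate wins outright
    if len(candidates) == 2:
        # Two candidates: the larger first overlap region wins.
        return max(candidates,
                   key=lambda p: p["footprint"].intersection(wide).area)
    # Three or more: narrow to the second, smaller sector.
    narrow = sector(obj_pos, heading, radius, second_angle)
    inner = [p for p in candidates
             if p["footprint"].intersects(narrow)] or candidates
    return max(inner,
               key=lambda p: p["footprint"].intersection(narrow).area)
```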
  6. The method according to claim 1, wherein before displaying the first virtual prop among the at least two virtual props in the selected state, the method further comprises:
    performing any one of the following:
    sorting the at least two virtual props by usage frequency, and taking the top-ranked virtual prop as the first virtual prop;
    sorting the at least two virtual props by scene distance, and taking the top-ranked virtual prop as the first virtual prop, the scene distance being the distance between the virtual prop and the virtual object in the virtual scene;
    sorting the at least two virtual props by most recent use time, and taking the top-ranked virtual prop as the first virtual prop, the most recent use time being the moment the virtual object last used the virtual prop.
  7. The method according to claim 1, wherein before displaying the first virtual prop among the at least two virtual props in the selected state, the method further comprises:
    obtaining historical interaction data and prop parameters for the at least two virtual props in the virtual scene, the historical interaction data of each virtual prop including scene parameters of each use of the virtual prop;
    performing the following processing through a first neural network model: extracting scene features from the scene parameters, and extracting prop features from the prop parameters; fusing the scene features and the prop features to obtain a first fused feature; mapping the first fused feature to a first probability that each virtual prop fits the virtual scene;
    sorting the at least two virtual props in descending order of the first probability, and taking the top-ranked virtual prop as the first virtual prop.
  8. The method according to claim 1, wherein the method further comprises:
    displaying, for at least one second virtual prop not in the selected state, a switch control corresponding to the at least one second virtual prop;
    wherein the second virtual prop has the interaction function, and the switch control is configured to be triggered to display the interaction control corresponding to the at least one second virtual prop.
  9. The method according to claim 8, wherein the method further comprises:
    when the number of second virtual props not in the selected state is one, in response to a trigger operation on the switch control, displaying at least one interaction control of the second virtual prop, and hiding the at least one interaction control of the first virtual prop.
  10. The method according to claim 8, wherein the method further comprises:
    when the number of second virtual props not in the selected state is more than one, in response to a trigger operation on the switch control, displaying prop identifiers in one-to-one correspondence with the multiple second virtual props;
    in response to a trigger operation on any one of the prop identifiers, displaying at least one interaction control of the second virtual prop corresponding to the triggered prop identifier, and hiding the at least one interaction control of the first virtual prop.
  11. The method according to claim 10, wherein displaying the prop identifiers in one-to-one correspondence with the multiple second virtual props comprises:
    displaying the prop identifiers in one-to-one correspondence with the multiple second virtual props in a set order;
    wherein the number of the multiple second virtual props is any one of: a set number, a number positively correlated with the size of the human-computer interaction interface, a number positively correlated with the area of the free region of the virtual scene, and a number positively correlated with the prop count of the second virtual props.
  12. The method according to claim 11, wherein
    the set order is any one of:
    descending or ascending order of the usage frequency of the second virtual props;
    ascending or descending order of the scene distance of the second virtual props, the scene distance being the distance between the virtual prop and the virtual object in the virtual scene;
    most-recent-first or least-recent-first order of the most recent use time of the second virtual props, the most recent use time being the moment the virtual object last used the second virtual prop;
    ascending or descending order of the interaction efficiency between the second virtual props and the virtual object.
  13. The method according to claim 12, wherein the method further comprises:
    determining a second sector region centered on the virtual object, with the set distance as radius and a second angle as central angle, wherein the orientation of the virtual object coincides with the bisector of the central angle of the second sector region;
    obtaining a third overlap region between each second virtual prop and the second sector region, the third overlap region being the overlap region between the projection region of the second virtual prop on the ground of the virtual scene and the second sector region, and obtaining an interaction efficiency positively correlated with the area of the third overlap region.
  14. A prop interaction method for a virtual scene, the method being executed by an electronic device and comprising:
    displaying at least a partial region of the virtual scene on a human-computer interaction interface, wherein the at least partial region includes a virtual object;
    in response to at least two virtual props appearing in the at least partial region, displaying, on the basis of a selected state, a first virtual prop with an interaction function among the at least two virtual props, as well as at least one interaction control corresponding to the first virtual prop, wherein the interaction control is configured to be triggered to perform the interaction function corresponding to the interaction control, and the interaction function is used for the virtual object to interact with the first virtual prop;
    displaying, for at least one second virtual prop not in the selected state, a switch control corresponding to the at least one second virtual prop; wherein the switch control is configured to be triggered to display the interaction control corresponding to the at least one second virtual prop.
  15. A prop interaction apparatus for a virtual scene, the apparatus comprising:
    a first display module, configured to display at least a partial region of the virtual scene on a human-computer interaction interface, wherein the at least partial region includes a virtual object;
    the first display module being further configured to, in response to at least two virtual props with an interaction function appearing in the at least partial region, display a first virtual prop among the at least two virtual props in a selected state, and display at least one interaction control corresponding to the first virtual prop; wherein the interaction control is configured to be triggered to perform the interaction function corresponding to the interaction control, and the interaction function is used for the virtual object to interact with the first virtual prop.
  16. A prop interaction apparatus for a virtual scene, the apparatus comprising:
    a second display module, configured to display at least a partial region of the virtual scene on a human-computer interaction interface, wherein the at least partial region includes a virtual object;
    the second display module being further configured to, in response to at least two virtual props appearing in the at least partial region, display, on the basis of a selected state, a first virtual prop with an interaction function among the at least two virtual props, as well as at least one interaction control corresponding to the first virtual prop, wherein the interaction control is configured to be triggered to perform the interaction function corresponding to the interaction control, and the interaction function is used for the virtual object to interact with the first virtual prop;
    the second display module being further configured to display, for at least one second virtual prop not in the selected state, a switch control corresponding to the at least one second virtual prop; wherein the switch control is configured to be triggered to display the interaction control corresponding to the at least one second virtual prop.
  17. An electronic device, comprising:
    a memory, configured to store computer-executable instructions;
    a processor, configured to implement, when executing the computer-executable instructions stored in the memory, the prop interaction method for a virtual scene according to any one of claims 1 to 13, or claim 14.
  18. A computer-readable storage medium storing computer-executable instructions, wherein the computer-executable instructions, when executed by a processor, implement the prop interaction method for a virtual scene according to any one of claims 1 to 13, or claim 14.
  19. A computer program product, comprising computer-executable instructions, wherein the computer-executable instructions, when executed by a processor, implement the prop interaction method for a virtual scene according to any one of claims 1 to 13, or claim 14.
PCT/CN2023/085343 2022-06-02 2023-03-31 Prop interaction method and apparatus for virtual scene, electronic device, computer-readable storage medium, and computer program product WO2023231553A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210625554.5A 2022-06-02 2022-06-02 Prop interaction method, apparatus, device, storage medium, and program product
CN202210625554.5 2022-06-02

Publications (1)

Publication Number Publication Date
WO2023231553A1 (zh)

Family

ID=89026856

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/085343 WO2023231553A1 (zh) 2022-06-02 2023-03-31 Prop interaction method and apparatus for virtual scene, electronic device, computer-readable storage medium, and computer program product

Country Status (2)

Country Link
CN (1) CN117205560A (zh)
WO (1) WO2023231553A1 (zh)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108245892A (zh) * 2017-12-19 2018-07-06 网易(杭州)网络有限公司 Information processing method and apparatus, electronic device, and storage medium
CN112007360A (zh) * 2020-08-28 2020-12-01 腾讯科技(深圳)有限公司 Processing method and apparatus for monitoring-function prop, and electronic device
CN112121431A (zh) * 2020-09-29 2020-12-25 腾讯科技(深圳)有限公司 Interaction processing method and apparatus for virtual props, electronic device, and storage medium
WO2022068452A1 (zh) * 2020-09-29 2022-04-07 腾讯科技(深圳)有限公司 Interaction processing method and apparatus for virtual props, electronic device, and readable storage medium
CN112295230A (zh) * 2020-10-30 2021-02-02 腾讯科技(深圳)有限公司 Method, apparatus, device, and storage medium for activating virtual props in a virtual scene
CN112691366A (zh) * 2021-01-13 2021-04-23 腾讯科技(深圳)有限公司 Display method, apparatus, device, and medium for virtual props
CN113041611A (zh) * 2021-04-06 2021-06-29 腾讯科技(深圳)有限公司 Virtual prop display method and apparatus, electronic device, and readable storage medium
CN113398572A (zh) * 2021-05-26 2021-09-17 腾讯科技(深圳)有限公司 Virtual prop switching method, skill switching method, and virtual object switching method
CN114146414A (zh) * 2021-11-24 2022-03-08 腾讯科技(深圳)有限公司 Control method, apparatus, device, storage medium, and program product for virtual skill

Also Published As

Publication number Publication date
CN117205560A (zh) 2023-12-12


Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 23814749

Country of ref document: EP

Kind code of ref document: A1