WO2022068452A1 - Interactive processing method and apparatus for virtual props, electronic device, and readable storage medium - Google Patents

Interactive processing method and apparatus for virtual props, electronic device, and readable storage medium

Info

Publication number
WO2022068452A1
Authority
WO
WIPO (PCT)
Prior art keywords
virtual
idle
prop
props
virtual prop
Prior art date
Application number
PCT/CN2021/113264
Other languages
English (en)
French (fr)
Inventor
刘智洪
Original Assignee
腾讯科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司
Priority to KR1020227038479A (published as KR20220163452A)
Priority to JP2022555126A (published as JP7447296B2)
Publication of WO2022068452A1
Priority to US17/971,943 (published as US20230040737A1)

Classifications

    • G06T 19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06V 40/18 - Eye characteristics, e.g. of the iris
    • A63F 13/55 - Controlling game characters or game objects based on the game progress
    • A63F 13/837 - Shooting of targets
    • G06T 17/00 - Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 19/003 - Navigation within 3D models or images
    • G06T 7/70 - Determining position or orientation of objects or cameras
    • G06V 10/26 - Segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; detection of occlusion
    • G06V 10/761 - Proximity, similarity or dissimilarity measures
    • G06V 20/20 - Scene-specific elements in augmented reality scenes
    • G06T 2219/024 - Multi-user, collaborative environment
    • G06T 2219/2004 - Aligning objects, relative positioning of parts

Definitions

  • The present application relates to human-computer interaction technology, and in particular, to a method, apparatus, electronic device, and computer-readable storage medium for interactive processing of virtual props.
  • Display technology based on graphics processing hardware expands the channels for perceiving the environment and obtaining information. In particular, display technology for virtual scenes can realize diversified interactions between virtual objects controlled by users or artificial intelligence according to actual application requirements, and has various typical application scenarios; for example, in military exercise simulations and virtual game scenarios, it can simulate the real battle process between virtual objects.
  • the related art provides a function of picking up virtual props, and a virtual object controlled by a player can pick up idle virtual props in a virtual scene.
  • Embodiments of the present application provide an interactive processing method, device, electronic device, and computer-readable storage medium for virtual props, which can realize accurate picking in accordance with physical laws in a virtual scene.
  • An embodiment of the present application provides an interaction processing method for virtual props, including:
  • in response to a movement operation for controlling the first virtual object, controlling the first virtual object to move in the virtual scene;
  • in response to a pickup operation for controlling the first virtual object, controlling the first virtual object to pick up the idle virtual prop.
  • An embodiment of the present application provides an interactive processing device for virtual props, including:
  • a presentation module configured to present at least one idle virtual prop in the virtual scene;
  • a response module configured to control the first virtual object to move in the virtual scene in response to a movement operation for controlling the first virtual object;
  • a processing module configured to present a pickup prompt of the idle virtual prop when the idle virtual prop is located in the direction of the used virtual prop of the first virtual object and there is no obstacle between the idle virtual prop and the used virtual prop;
  • the response module is further configured to control the first virtual object to pick up the idle virtual prop in response to a pickup operation for controlling the first virtual object.
  • An embodiment of the present application provides an electronic device for interactive processing of virtual props, the electronic device including:
  • a memory configured to store executable instructions; and
  • a processor configured to implement the interactive processing method for virtual props provided by the embodiments of the present application when executing the executable instructions stored in the memory.
  • the embodiments of the present application provide a computer-readable storage medium storing executable instructions for causing a processor to execute the interactive processing method for virtual props provided by the embodiments of the present application.
  • Through the embodiments of the present application, obstacle detection between idle virtual props and virtual objects can be realized, so that an idle virtual prop is picked up only when there is no obstacle between the idle virtual prop and the virtual object.
  • In this way, the pickup function of virtual props realizes accurate pickup in line with physical laws, improves the accuracy of human-computer interaction in virtual scenes, and further improves the actual utilization of the computing resources consumed in virtual scenes.
  • FIGS. 1A-1B are schematic diagrams of application modes of a method for interactive processing of a virtual scene provided by an embodiment of the present application
  • FIG. 2A is a schematic structural diagram of an electronic device for interactive processing of a virtual scene provided by an embodiment of the present application
  • FIG. 2B is a schematic diagram of the principle of a human-computer interaction engine installed in an interactive processing device for virtual props provided by an embodiment of the present application;
  • FIGS. 3A-3C are schematic flowcharts of a method for interactive processing of virtual props provided by an embodiment of the present application;
  • FIG. 4 is a schematic interface diagram of a method for interactive processing of virtual props provided by an embodiment of the present application
  • FIGS. 5A-5B are schematic interface diagrams of virtual reality provided by an embodiment of the present application;
  • FIG. 6 is a schematic interface diagram of a plurality of idle virtual props provided by an embodiment of the present application.
  • FIG. 7 is a schematic interface diagram of a dropped weapon provided by an embodiment of the present application.
  • FIG. 8 is a schematic interface diagram of a dropped weapon provided by an embodiment of the present application.
  • FIG. 9 is a schematic interface diagram of a dropped weapon provided by an embodiment of the present application.
  • FIG. 10 is a schematic interface diagram of a dropped weapon provided by an embodiment of the present application.
  • FIG. 11 is a schematic flowchart of interactive processing of virtual props provided by an embodiment of the present application.
  • FIG. 13 is a schematic diagram of an interface for obstacle detection provided by an embodiment of the present application.
  • In the following description, the term "first/second" is only used to distinguish similar objects and does not represent a specific ordering of objects. It can be understood that, where permitted, the specific order or sequence of "first/second" may be interchanged, so that the embodiments of the application described herein can be practiced in sequences other than those illustrated or described herein.
  • Virtual scene: a scene, output by a device, that is different from the real world. Visual perception of the virtual scene can be formed with the naked eye or with the assistance of a device, for example, through two-dimensional images output by a display screen, or three-dimensional images output by stereoscopic display technologies such as stereoscopic projection, virtual reality, and augmented reality; in addition, various real-world-like perceptions, such as auditory perception, tactile perception, olfactory perception, and motion perception, can also be formed through various possible hardware.
  • One or more of the operations performed may be executed in real time or with a set delay; unless otherwise specified, there is no restriction on the order in which multiple operations are executed.
  • Client: an application running in the terminal for providing various services, such as a game client or a military exercise simulation client.
  • Virtual objects: the images of various people and objects that can interact in the virtual scene, or movable objects in the virtual scene.
  • the movable objects may be virtual characters, virtual animals, cartoon characters, etc., for example, characters, animals, plants, oil barrels, walls, stones, etc. displayed in the virtual scene.
  • the virtual object may be a virtual avatar representing the user in the virtual scene.
  • the virtual scene may include multiple virtual objects, and each virtual object has its own shape and volume in the virtual scene and occupies a part of the space in the virtual scene.
  • The virtual object may be a user character controlled by operations on the client, an artificial intelligence (AI, Artificial Intelligence) set up in the virtual scene battle through training, or a non-player character (NPC, Non-Player Character) set up in the virtual scene interaction.
  • the virtual object may be a virtual character performing adversarial interactions in a virtual scene.
  • the number of virtual objects participating in the interaction in the virtual scene may be preset or dynamically determined according to the number of clients participating in the interaction.
  • For example, users can control virtual objects to fall freely, glide, or open a parachute to fall in the sky of the virtual scene; to run, jump, crawl, or bend forward on land; or to swim, float, or dive in the ocean.
  • users can also control virtual objects to move in the virtual scene on a virtual vehicle.
  • The virtual vehicle may be a virtual car, a virtual aircraft, a virtual yacht, etc.; the above scenarios are only used as examples for illustration, and are not specifically limited in the embodiments of the present application.
  • Users can also control virtual objects to interact with other virtual objects confrontationally through virtual props.
  • The virtual props may be throwing-type virtual props such as grenades, cluster mines, and sticky grenades, or shooting-type virtual props such as machine guns, pistols, and rifles; this application does not specifically limit the types of virtual props.
  • Scene data: represents various characteristics of the objects in the virtual scene during the interaction process; for example, it may include the positions of the objects in the virtual scene.
  • Depending on the type of virtual scene, scene data may include the waiting time for various functions configured in the virtual scene (depending on the number of times the same function can be used within a specific time), and may also represent the attribute values of various states of a game character, such as a health value (also called the red amount) and a magic value (also called the blue amount).
  • For example, in a mobile shooting game, the user can control a virtual object to enter the game carrying two weapons before the game starts, and then kill enemies during the game; a killed virtual object will drop its currently used weapon.
  • The controlled virtual object can pick up dropped weapons or other equipment.
  • However, when picking up weapons and equipment, the problem of picking up items (virtual props) through walls, that is, picking up items across obstacles, may occur.
  • The above-mentioned problem arises mainly because the pickup of items is triggered through a collision box: since the collision box is bound to the dropped item, the drop position may cause the collision box to pass through obstacles such as walls, allowing players to pick up items through walls.
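The through-wall problem described above can be sketched as follows. This is an illustrative reconstruction, not code from the patent: the pickup trigger fires on collision-box overlap alone, so a wall between the player and the dropped item is never consulted. The `Box` type and the numeric values are assumptions chosen for the example.

```python
from dataclasses import dataclass

@dataclass
class Box:
    """Axis-aligned 2D collision box: (x, y) center plus half-extents."""
    x: float
    y: float
    half_w: float
    half_h: float

    def overlaps(self, other: "Box") -> bool:
        return (abs(self.x - other.x) <= self.half_w + other.half_w and
                abs(self.y - other.y) <= self.half_h + other.half_h)

def naive_can_pick_up(player: Box, item: Box) -> bool:
    # Flawed logic: only the overlap of collision boxes is tested,
    # so an item whose box pokes through a wall is still "pickable".
    return player.overlaps(item)

# A dropped item lands against a wall at x = 1.0; its large collision box
# extends through the wall into the player's side.
player = Box(x=0.0, y=0.0, half_w=0.5, half_h=0.5)
item = Box(x=1.2, y=0.0, half_w=1.0, half_h=1.0)
print(naive_can_pick_up(player, item))  # True, despite the wall between them
```

The overlap test returns True even though the item's center is on the far side of the wall, which is exactly the pickup-through-walls behavior the embodiments aim to eliminate.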
  • the embodiments of the present application provide an interactive processing method, device, electronic device, and computer-readable storage medium for virtual props, which can realize accurate picking in accordance with physical laws in a virtual scene.
  • the following describes the methods provided by the embodiments of the present application.
  • Exemplary applications of electronic devices: the electronic devices provided by the embodiments of the present application may be implemented as various types of user terminals, such as notebook computers, tablet computers, desktop computers, set-top boxes, mobile devices (e.g., mobile phones, portable music players, personal digital assistants, dedicated messaging devices, and portable game equipment), vehicle-mounted terminals, and smart TVs, and may also be implemented as servers.
  • In the following, exemplary applications when the device is implemented as a terminal will be described.
  • The virtual scene may be output based entirely on the terminal, or based on the collaboration of the terminal and the server.
  • In some usage scenarios, the virtual scene may be a picture presented in a military exercise simulation; the user can simulate battle situations, strategies, or tactics through virtual objects belonging to different teams, which has a great guiding role for the command of military operations.
  • In other usage scenarios, the virtual scene may be an environment for game characters to interact in, for example, for game characters to play against each other in the virtual scene; by interacting in the virtual scene, the user can relieve the stress of life during the game.
  • FIG. 1A is a schematic diagram of an application mode of the interactive processing method for virtual props provided by the embodiment of the present application, which is suitable for application modes in which the calculation of the data of the virtual scene 100 can be completed entirely relying on the computing power of the terminal 400, for example, a game in stand-alone/offline mode, where the output of the virtual scene is completed through a terminal 400 such as a smartphone, a tablet computer, or a virtual reality/augmented reality device.
  • When forming the visual perception of the virtual scene, the terminal 400 calculates the data required for display through graphics computing hardware, completes the loading, parsing, and rendering of the display data, and outputs video frames capable of forming visual perception of the virtual scene on graphics output hardware;
  • for example, two-dimensional video frames can be presented on the display screen of a smartphone, or three-dimensional video frames can be projected on the lenses of augmented reality/virtual reality glasses; in addition, to enrich the perception effect, the device can also use different hardware to form one or more of auditory perception, tactile perception, motion perception, and taste perception.
  • the terminal 400 runs the client 410 (eg, a stand-alone game application), and outputs a virtual scene including role-playing during the running of the client 410.
  • The virtual scene is an environment for game characters to interact in, such as plains, streets, valleys, etc.
  • The virtual scene includes a first virtual object 110 and a virtual prop 120. The first virtual object 110 may be a game character controlled by a user (or player); that is, the first virtual object 110 is controlled by a real user and moves in the virtual scene in response to the real user's operation of a controller (including a touch screen, a voice-activated switch, a keyboard, a mouse, a joystick, etc.). For example, when the real user moves the joystick to the left, the first virtual object moves to the left in the virtual scene; the first virtual object can also remain stationary, jump, and use various functions (such as skills and props). The virtual prop 120 may be a battle tool used by the first virtual object 110 in the virtual scene; for example, the first virtual object 110 can pick up the virtual prop 120 in the virtual scene, so as to use the function of the virtual prop 120 in game battles.
  • For example, the user controls the first virtual object 110 through the client 410 to move toward the virtual prop 120 (i.e., the idle virtual prop) in the virtual scene; when the virtual prop 120 is located in the direction of the used virtual prop of the first virtual object 110 and there is no obstacle between the virtual prop 120 and the used virtual prop, the user controls the first virtual object 110 through the client 410 to pick up the virtual prop 120 in the virtual scene.
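The pickup condition just described, that the idle prop lies in the facing direction of the used prop and that no obstacle blocks the line between them, can be sketched in 2D as below. The angle threshold, the segment-based wall model, and all function names are assumptions made for illustration; the patent does not specify these details.

```python
import math

def in_facing_direction(player_pos, facing, item_pos, max_angle_deg=45.0):
    """True if the item lies within max_angle_deg of the facing vector."""
    dx, dy = item_pos[0] - player_pos[0], item_pos[1] - player_pos[1]
    dist = math.hypot(dx, dy)
    if dist == 0:
        return True  # standing on the item counts as "in direction"
    cos_a = (dx * facing[0] + dy * facing[1]) / (dist * math.hypot(*facing))
    return cos_a >= math.cos(math.radians(max_angle_deg))

def segments_intersect(p1, p2, q1, q2):
    """Proper 2D segment crossing via orientation tests (touching ignored)."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    d1, d2 = cross(q1, q2, p1), cross(q1, q2, p2)
    d3, d4 = cross(p1, p2, q1), cross(p1, p2, q2)
    return (d1 * d2 < 0) and (d3 * d4 < 0)

def can_pick_up(player_pos, facing, item_pos, walls):
    """Pickup prompt condition: item in facing direction and line of sight clear."""
    if not in_facing_direction(player_pos, facing, item_pos):
        return False
    return not any(segments_intersect(player_pos, item_pos, a, b) for a, b in walls)

wall = ((2.0, -1.0), (2.0, 1.0))                     # vertical wall at x = 2
print(can_pick_up((0, 0), (1, 0), (1, 0), [wall]))   # True: clear line of sight
print(can_pick_up((0, 0), (1, 0), (3, 0), [wall]))   # False: wall blocks the line
```

Unlike the collision-box approach criticized earlier, this condition tests the straight line between object and prop, so an item behind a wall never produces a pickup prompt.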
  • FIG. 1B is a schematic diagram of an application mode of the interactive processing method for virtual props provided by the embodiment of the present application, which is applied to the terminal 400 and the server 200 and is suitable for application modes that rely on the computing power of the server 200 to complete the calculation of the virtual scene and output the virtual scene at the terminal 400.
  • Taking the formation of the visual perception of the virtual scene as an example, the server 200 calculates the display data of the virtual scene and sends it to the terminal 400; the terminal 400 relies on graphics computing hardware to complete the loading, parsing, and rendering of the display data, and relies on graphics output hardware to output the virtual scene to form visual perception.
  • For example, two-dimensional video frames can be presented on the display screen of a smartphone, or video frames can be projected on the lenses of augmented reality/virtual reality glasses to achieve a three-dimensional display effect; for other forms of perception of the virtual scene, the corresponding hardware output of the terminal can be used, for example, auditory perception can be formed using speaker output, tactile perception can be formed using vibrator output, and so on.
  • In another embodiment, the terminal 400 runs the client 410 (e.g., an online-version game application), interacts with other users by connecting to the game server (i.e., the server 200), and outputs the virtual scene 100 of the client 410, which includes the first virtual object 110 and the virtual prop 120. The first virtual object 110 may be a game character controlled by the user; that is, the first virtual object 110 is controlled by a real user and moves in the virtual scene in response to the real user's operation of a controller (including a touch screen, a voice-activated switch, a keyboard, a mouse, a joystick, etc.). For example, when the real user moves the joystick to the left, the first virtual object moves to the left in the virtual scene; it can also remain stationary, jump in place, and use various functions (such as skills and props).
  • The virtual prop 120 may be a battle tool used by the first virtual object 110 in the virtual scene; for example, the first virtual object 110 may pick up the virtual prop 120 in the virtual scene, thereby performing game battles using the function of the virtual prop 120.
  • For example, when the user controls the first virtual object 110 through the client 410 to move toward the virtual prop 120 (i.e., the idle virtual prop) in the virtual scene, the client 410 sends the position information of the first virtual object 110 to the server 200; the server 200 performs obstacle detection between the virtual prop 120 and the first virtual object 110 according to the pickup logic, and when there is no obstacle, sends the pickup prompt of the virtual prop 120 to the client 410. After receiving the pickup prompt of the virtual prop 120, the client 410 presents the pickup prompt, and based on the pickup prompt, the user controls the first virtual object 110 to pick up the virtual prop 120 in the virtual scene.
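The server-authoritative flow of FIG. 1B can be sketched as below. This is a hypothetical reconstruction: the message shapes (`PickupQuery`, the `show_prompt` reply) and the simplified obstacle test are assumptions; a real server would run full raycast-based obstacle detection against scene geometry.

```python
from dataclasses import dataclass

@dataclass
class PickupQuery:
    """Client-to-server message: player position and idle prop position."""
    player_pos: tuple
    prop_pos: tuple

class GameServer:
    def __init__(self, obstacles):
        # Simplified scene: obstacles are vertical wall planes at given x values.
        self.obstacles = obstacles

    def _blocked(self, a, b) -> bool:
        # Placeholder obstacle test: blocked if any wall plane lies strictly
        # between the two x coordinates (a stand-in for a real raycast).
        return any(min(a[0], b[0]) < x < max(a[0], b[0]) for x in self.obstacles)

    def handle(self, query: PickupQuery) -> dict:
        # Pickup prompt is sent only when no obstacle separates object and prop.
        if self._blocked(query.player_pos, query.prop_pos):
            return {"show_prompt": False}
        return {"show_prompt": True}

server = GameServer(obstacles=[2.0])               # one wall plane at x = 2
print(server.handle(PickupQuery((0, 0), (1, 0))))  # {'show_prompt': True}
print(server.handle(PickupQuery((0, 0), (3, 0))))  # {'show_prompt': False}
```

Keeping the obstacle check on the server means a modified client cannot fake a through-wall pickup, which is one motivation for the FIG. 1B mode.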
  • The terminal 400 may implement the interactive processing method for virtual props provided by the embodiments of the present application by running a computer program.
  • For example, the computer program may be a native program or software module in an operating system; it may be a native application program (APP, Application), that is, a program that needs to be installed in the operating system to run, such as a game APP (i.e., the above-mentioned client 410); it may also be an applet, that is, a program that only needs to be downloaded into a browser environment to run; and it may also be a game applet that can be embedded into any APP.
  • the above-mentioned computer programs may be any form of application, module or plug-in.
  • Cloud technology refers to a hosting technology that unifies a series of resources, such as hardware, software, and networks, in a wide area network or a local area network to realize data computing, storage, processing, and sharing.
  • Cloud technology is a general term for network technology, information technology, integration technology, management platform technology, and application technology applied based on the cloud computing business model; cloud computing technology will become an important support, since the background services of technical network systems require a large amount of computing and storage resources.
  • The server 200 may be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN, and big data and artificial intelligence platforms.
  • the terminal 400 and the server 200 may be directly or indirectly connected through wired or wireless communication, which is not limited in this embodiment of the present application.
  • FIG. 2A is a schematic structural diagram of an electronic device for interactive processing of virtual props provided by an embodiment of the present application.
  • the electronic device shown in FIG. 2A includes: at least one processor 410 , memory 450 , at least one network interface 420 and user interface 430 .
  • the various components in electronic device 400 are coupled together by bus system 440 .
  • the bus system 440 is used to implement the connection communication between these components.
  • the bus system 440 also includes a power bus, a control bus, and a status signal bus.
  • the various buses are labeled as bus system 440 in FIG. 2A.
  • the processor 410 may be an integrated circuit chip with signal processing capabilities, such as a general-purpose processor, a digital signal processor (DSP, Digital Signal Processor), or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, etc., where a general-purpose processor may be a microprocessor or any conventional processor or the like.
  • User interface 430 includes one or more output devices 431 that enable presentation of media content, including one or more speakers and/or one or more visual display screens.
  • User interface 430 also includes one or more input devices 432, including user interface components that facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, and other input buttons and controls.
  • Memory 450 may be removable, non-removable, or a combination thereof.
  • Exemplary hardware devices include solid state memory, hard drives, optical drives, and the like.
  • Memory 450 includes, for example, one or more storage devices that are physically remote from processor 410 .
  • Memory 450 includes volatile memory or non-volatile memory, and may also include both volatile and non-volatile memory.
  • the non-volatile memory may be a read-only memory (ROM, Read Only Memory), and the volatile memory may be a random access memory (RAM, Random Access Memory).
  • the memory 450 described in the embodiments of the present application is intended to include any suitable type of memory.
  • memory 450 is capable of storing data to support various operations, examples of which include programs, modules, and data structures, or subsets or supersets thereof, as exemplified below.
  • the operating system 451 includes system programs for processing various basic system services and performing hardware-related tasks, such as framework layer, core library layer, driver layer, etc., for implementing various basic services and processing hardware-based tasks;
  • a presentation module 453 for enabling presentation of information (e.g., a user interface for operating peripherals and displaying content and information) via one or more output devices 431 (e.g., a display screen, speakers, etc.) associated with the user interface 430;
  • An input processing module 454 for detecting one or more user inputs or interactions from one of the one or more input devices 432 and translating the detected inputs or interactions.
  • the interactive processing apparatus for virtual props provided by the embodiments of the present application may be implemented in software.
  • FIG. 2A shows the interactive processing apparatus 455 for virtual props stored in the memory 450, which may be software in the form of programs, plug-ins, etc., including the following software modules: a presentation module 4551, a response module 4552, a processing module 4553, a detection module 4554, and a timing module 4555. These modules are logical, so they can be arbitrarily combined or further split according to the functions implemented. The function of each module will be explained below.
  • FIG. 2B is a schematic diagram of the principle of a human-computer interaction engine installed in an interaction processing device for virtual props provided by an embodiment of the present application.
  • A game engine refers to the core components of some written, editable computer game systems or some interactive real-time graphics applications. These systems provide game designers with the various tools needed to write games, with the purpose of allowing game designers to easily and quickly make game programs without starting from scratch.
  • Game engines include: a rendering engine (i.e., a "renderer", including a 2D graphics engine and a 3D graphics engine), a physics engine, an obstacle detection system, sound effects, a scripting engine, computer animation, artificial intelligence, a network engine, and scene management.
  • In general, a game engine is a set of machine-recognizable codes (instructions) designed for a machine running a certain type of game; like an engine, it controls the operation of the game.
  • A game program can be divided into two parts: the game engine and the game resources.
  • the interaction processing method for virtual props provided by the embodiment of the present application is implemented by each module in the interactive processing device for virtual props shown in FIG. 2A by calling the relevant components of the human-computer interaction engine shown in FIG. 2B .
  • The presentation module 4551 is used to present at least one idle virtual prop in the virtual scene. The presentation module 4551 invokes the user interface part of the game engine shown in FIG. 2B to realize the interaction between the user and the game, and makes a two-dimensional or three-dimensional model by calling the model part of the game engine; after the model is made, material maps are assigned to the model according to its different faces through the skeletal animation part, which is equivalent to covering the bones with skin; finally, through the rendering part, all effects of the model, animation, light and shadow, and special effects are calculated in real time and displayed on the human-computer interaction interface.
  • The response module 4552 is configured to present the movement process of the first virtual object in the virtual scene in response to a movement operation for controlling the first virtual object. The response module 4552 invokes the rendering module of FIG. 2B to perform real-time image calculation based on the calculated movement trajectory and display it on the human-computer interaction interface.
  • The processing module 4553 presents a pickup prompt for the idle virtual prop when the idle virtual prop is located in the direction of the used virtual prop of the first virtual object and there is no obstacle between the idle virtual prop and the used virtual prop. The processing module 4553 calls the rendering module in the game engine shown in FIG. 2B; when there is no obstacle between the idle virtual prop and the used virtual prop, the pickup prompt of the idle virtual prop is rendered through the rendering module and displayed on the human-computer interaction interface.
• The detection module 4554 is used to detect the distance between the first virtual object and the idle virtual prop during movement; when the distance is less than the distance threshold, obstacle detection is performed between the virtual prop used by the first virtual object and the idle virtual prop.
• The detection module 4554 invokes the camera part of the game engine shown in FIG. 2B to implement obstacle detection; specifically, obstacle detection is performed through a detection ray emitted by a camera bound to the virtual prop used by the first virtual object, where the camera bound to the used virtual prop is configured through the camera part.
• The timing module 4555 is used to bind a timer to an idle virtual prop.
• The timing module 4555 calls the rendering module in the game engine shown in FIG. 2B and, through the rendering module, cancels the rendering of the idle virtual prop in the virtual scene.
  • FIG. 3A is a schematic flowchart of a method for interactive processing of virtual props provided by an embodiment of the present application, which is described in conjunction with the steps shown in FIG. 3A .
• Idle virtual props refer to idle, unused virtual props, including virtual props initialized in the virtual scene (that is, virtual props that have not been used by any virtual object) and dropped virtual props (for example, if a virtual object is killed, the virtual props it holds drop at the position where it was killed, for other virtual objects to pick up).
• In step 101, at least one idle virtual prop is presented in the virtual scene.
• When the virtual scene is initialized, virtual props can be placed randomly at various positions in the virtual scene. The initialized positions of the virtual props can be set to be the same in every game, or to be different in each game (to increase the difficulty of the game, so that the user cannot predict the positions of the virtual props).
  • the initialized virtual props (idle virtual props) can be presented in the virtual scene, and the virtual objects controlled by the user can pick up the initialized virtual props to attack or defend through the picked-up virtual props.
• In some embodiments, presenting at least one idle virtual prop in the virtual scene includes: when the second virtual object is attacked in the virtual scene and loses the ability to hold virtual props, treating the held virtual props as idle virtual props; and presenting, at the position where the second virtual object is attacked, at least one virtual prop dropped by the second virtual object.
  • the idle virtual props include virtual props dropped by virtual objects.
• Here, the second virtual object is a virtual object other than the first virtual object, such as an enemy or teammate of the first virtual object, and the third virtual object is a virtual object other than the second virtual object. If the third virtual object kills the second virtual object, or hits the second virtual object hard enough to injure it, the virtual props held by the second virtual object drop at the position where the second virtual object is attacked.
• At the position where the second virtual object is attacked, at least one virtual prop dropped by the second virtual object is presented for pickup by virtual objects other than the second virtual object, especially the third virtual object that attacked it; the dropped virtual prop serves as a reward for the third virtual object.
• In some embodiments, presenting at least one idle virtual prop in the virtual scene includes: when the second virtual object actively discards at least one held virtual prop in the virtual scene, treating the held virtual prop as an idle virtual prop; and presenting, at the position where the second virtual object discarded the held virtual prop, the at least one actively discarded virtual prop.
  • the idle virtual props include the virtual props discarded by the virtual object.
• The virtual object can choose to actively discard its held virtual props.
• Here, the second virtual object is a virtual object other than the first virtual object, such as an enemy or teammate of the first virtual object.
• If the second virtual object chooses to actively discard at least one held virtual prop, the discarded virtual prop drops at the discard position and is treated as an idle virtual prop. Therefore, at the position where the second virtual object discarded the held virtual prop, the at least one actively discarded virtual prop is presented.
• In some embodiments, presenting at least one idle virtual prop in the virtual scene includes: when a teammate of the first virtual object places at least one held virtual prop at a placement position in the virtual scene, treating the held virtual prop as an idle virtual prop, where the idle virtual prop is available for the first virtual object to pick up; and presenting, at the placement position on the map of the virtual scene, the at least one idle virtual prop placed by the teammate.
  • the idle virtual props include virtual props placed by virtual objects.
• In order to achieve teamwork, when a virtual object wants to give a held virtual prop to a teammate, it can choose to place the virtual prop somewhere in the virtual scene for the teammate to pick up.
• For example, the fourth virtual object (a teammate of the first virtual object) places at least one held virtual prop at the placement position 401 of the virtual scene; the held virtual prop is treated as an idle virtual prop available for pickup by the first virtual object 402 and any virtual object on the same team. The idle virtual prop is presented at the placement position 401 on the map 403 of the virtual scene (a guide map formed from the entire virtual scene of the game), so that the first virtual object and any virtual object on the same team can check through the map 403 where the fourth virtual object placed the virtual prop and quickly reach the placement position to pick it up.
• In step 102, in response to a movement operation controlling the first virtual object, the first virtual object is controlled to move in the virtual scene.
  • the first virtual object may be an object controlled by a user in a game or military simulation.
  • the virtual scene may also include other virtual objects, which may be controlled by other users or by a robot program.
• The virtual objects can be divided into multiple teams; the teams can be in an adversarial relationship or a cooperative relationship, and the teams in the virtual scene can include either or both of the above relationships.
• The user performs a movement operation on the first virtual object to make it move, flip, jump, etc. in the virtual scene; the movement operation on the first virtual object is received through the human-computer interaction interface, thereby controlling the first virtual object to move in the virtual scene.
  • the content presented in the human-computer interaction interface changes with the movement of the first virtual object.
• The viewing position and field of view of the viewing object in the complete virtual scene are determined in order to establish the field-of-view area of the viewing object, and the partial virtual scene located in the field-of-view area is presented; that is, the displayed virtual scene may be a partial virtual scene relative to the panoramic virtual scene.
• FIG. 5A is a schematic diagram of a virtual reality interface provided by an embodiment of the present application; a user (i.e., a real user) can perceive the virtual scene through lenses in the virtual reality device.
  • the virtual reality device is provided with a sensor (such as a nine-axis sensor) for posture detection, which is used to detect the posture change of the virtual reality device in real time.
• The real-time posture of the head is transmitted to the processor to calculate the gaze point of the user's line of sight in the virtual scene; based on the gaze point, the image within the user's field of view (that is, the field-of-view image) is computed and displayed on the display screen, creating an immersive experience as if viewing a real environment.
• For other virtual reality equipment, such as PC virtual reality (PCVR) equipment and mobile virtual reality equipment, the principle of realizing visual perception is similar to the above; the difference is that PCVR equipment, mobile virtual reality equipment, etc. do not have integrated processors for the related computing and do not have independent virtual reality input and output capabilities.
  • FIG. 5B is a schematic diagram of the virtual reality interface provided by the embodiment of the present application.
• The viewing position and field of view of the first virtual object 503 in the complete virtual scene 501 determine the presented partial scene; the user controls the first virtual object 503 to perform movement operations, such as running and squatting, and the moving process of the first virtual object 503 is presented in the virtual scene.
• In step 103, when the idle virtual prop is located in the facing direction of the virtual prop used by the first virtual object, and there is no obstacle between the idle virtual prop and the used virtual prop, a pick-up prompt for the idle virtual prop is presented.
• In the implementation of the embodiments of the present application, it was found that, due to a loophole in the pick-up logic, a virtual object could automatically pick up, across obstacles, virtual props that should not be picked up. For example, when an idle virtual prop is separated from the first virtual object by a wall, the first virtual object could still automatically pick up the idle virtual prop.
• When there is an obstacle between the idle virtual prop and the used virtual prop, a non-pickable prompt for the idle virtual prop is displayed in the virtual scene, reminding the user that the obstacle must be bypassed before the idle virtual prop can be picked up.
  • a pick-up prompt of the idle virtual prop is presented in the virtual scene to prompt that the idle virtual prop can be picked up.
• FIG. 3B is an optional schematic flowchart of a method for interactive processing of virtual props provided by an embodiment of the present application.
• FIG. 3B shows that FIG. 3A further includes steps 105 to 106. In step 105, the distance between the first virtual object and the idle virtual prop during movement is detected; in step 106, when the distance is less than the distance threshold, obstacle detection is performed between the virtual prop used by the first virtual object and the idle virtual prop.
• The distance between the first virtual object and the idle virtual prop during movement can be detected first: for example, the first coordinates of the first virtual object in the virtual scene and the second coordinates of the idle virtual prop in the virtual scene are determined, and the distance between the first virtual object and the idle virtual prop during movement is determined from the distance between the first coordinates and the second coordinates.
• When the distance is less than the distance threshold, it means that the first virtual object is relatively close to the idle virtual prop and is capable of picking it up.
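The distance check described above can be sketched as follows; the function name, coordinate tuples, and radius values are illustrative assumptions, not part of the described system:

```python
import math

def within_pickup_range(obj_pos, prop_pos, radius):
    """Return True when the distance between the first virtual object
    (first coordinates) and the idle virtual prop (second coordinates)
    is less than the distance threshold (pick-up radius)."""
    dx, dy, dz = (a - b for a, b in zip(obj_pos, prop_pos))
    return math.sqrt(dx * dx + dy * dy + dz * dz) < radius

# The object at (0, 0, 0) is 5 units away from a prop at (3, 4, 0).
print(within_pickup_range((0, 0, 0), (3, 4, 0), 6.0))  # True: 5 < 6
print(within_pickup_range((0, 0, 0), (3, 4, 0), 4.0))  # False: 5 > 4
```

Only when this check passes does the more expensive obstacle detection need to run.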
• Then, obstacle detection is performed between the virtual prop used by the first virtual object and the idle virtual prop; when an obstacle is detected between them, it means the first virtual object is facing an obstacle and cannot pick up the idle virtual prop across the obstacle, even if it is close enough to the idle virtual prop.
• In some embodiments, before the pick-up prompt for the idle virtual prop is presented, obstacle detection is performed between the virtual prop used by the first virtual object and the idle virtual prop based on each real-time position of the first virtual object during movement; that is, obstacle detection can be performed at each real-time position of the first virtual object as it moves.
• When an obstacle is detected, it means that there is an obstacle in front of the first virtual object, and the idle virtual prop cannot be picked up across the obstacle.
• In some embodiments, the obstacle detection between the virtual prop used by the first virtual object and the idle virtual prop includes: emitting a detection ray from the position of the used virtual prop through a camera component bound to the used virtual prop, where the direction of the detection ray is consistent with the facing direction of the used virtual prop; and determining, based on the detection ray, whether there is an obstacle between the used virtual prop and the idle virtual prop.
• That is, a detection ray consistent with the facing direction of the used virtual prop is emitted from the position of the used virtual prop, and the detection ray determines whether there is an obstacle between the used virtual prop and the idle virtual prop, i.e., whether there is an obstacle between the first virtual object and the idle virtual prop; when the detection ray hits an object bound with a collider component (such as a wall or an oil barrel), there is an obstacle between them.
• In some embodiments, a detection ray is emitted from the position of the used virtual prop, with the position of the idle virtual prop as its end point, and whether there is an obstacle between the used virtual prop and the idle virtual prop is determined based on the detection ray. For example, if the detection ray passes through an obstacle, there is an obstacle between the first virtual object and the idle virtual prop; if the detection ray does not pass through an obstacle, there is no obstacle between them.
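A minimal sketch of this ray-based obstacle test, assuming obstacles are modeled as axis-aligned boxes (stand-ins for collider components); the slab-test helper and all names are illustrative, not the engine's actual API:

```python
def ray_hits_box(start, end, box_min, box_max):
    """Slab test: does the segment from `start` to `end` intersect the
    axis-aligned box [box_min, box_max]?"""
    t_near, t_far = 0.0, 1.0
    for s, e, lo, hi in zip(start, end, box_min, box_max):
        d = e - s
        if abs(d) < 1e-9:               # segment parallel to this axis
            if s < lo or s > hi:
                return False
            continue
        t0, t1 = (lo - s) / d, (hi - s) / d
        if t0 > t1:
            t0, t1 = t1, t0
        t_near, t_far = max(t_near, t0), min(t_far, t1)
        if t_near > t_far:
            return False
    return True

def obstacle_between(weapon_pos, prop_pos, obstacles):
    """True if any collider-bound obstacle blocks the detection ray
    emitted from the used virtual prop toward the idle virtual prop."""
    return any(ray_hits_box(weapon_pos, prop_pos, lo, hi)
               for lo, hi in obstacles)

# A wall spanning x in [4, 5] blocks the ray from (0,0,0) to (10,0,0).
wall = ((4.0, -5.0, -1.0), (5.0, 5.0, 3.0))
print(obstacle_between((0, 0, 0), (10, 0, 0), [wall]))  # True: blocked
print(obstacle_between((0, 0, 0), (2, 0, 0), [wall]))   # False: clear
```

In an actual engine this would be a single raycast call against the physics scene; the point is that the pick-up prompt is gated on the ray reaching the idle prop without hitting a collider first.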
• In some embodiments, presenting a pick-up prompt for the idle virtual prop includes: when multiple idle virtual props are located in the facing direction of the virtual prop used by the first virtual object, and there is no obstacle between them and the used virtual prop, presenting a pick-up prompt for some of the multiple idle virtual props.
  • FIG. 6 is a schematic interface diagram of multiple idle virtual props provided by an embodiment of the present application.
• Idle virtual prop 601, idle virtual prop 602, and idle virtual prop 603 are all located in the facing direction of the virtual prop 604 used by the first virtual object, and there is no obstacle between them and the used virtual prop 604; a pick-up prompt for some of the idle virtual props is presented in the virtual scene, for example, a pick-up control for the idle virtual prop 601.
• FIG. 3C is an optional schematic flowchart of a method for interactive processing of virtual props provided by an embodiment of the present application.
• FIG. 3C shows that step 103 in FIG. 3A can be implemented through steps 1031 to 1032. In step 1031, the following processing is performed for each idle virtual prop among the multiple idle virtual props: obtaining the distance between the first virtual object and the idle virtual prop during movement. In step 1032, the distances between the multiple idle virtual props and the first virtual object are sorted, and the idle virtual prop corresponding to the smallest distance is selected for the pick-up prompt.
• For example, if the distance between the idle virtual prop 601 and the used virtual prop is the smallest, that is, the distance between the idle virtual prop 601 and the first virtual object is the smallest, then a pick-up prompt for the idle virtual prop 601, i.e., a pick-up control, is presented in the virtual scene, indicating that the idle virtual prop 601 can be picked up.
• In this way, the idle virtual prop closest to the first virtual object is selected from the multiple idle virtual props for the pick-up prompt, so that the first virtual object can pick up the idle virtual prop as quickly as possible and avoid its being picked up by other virtual objects.
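The nearest-prop selection of steps 1031 to 1032 can be sketched as follows; the dictionary shape and reference numerals used as IDs are illustrative assumptions:

```python
def nearest_pickable_prop(obj_pos, props):
    """Among multiple idle virtual props, select the one whose distance
    to the first virtual object is smallest, for the pick-up prompt.
    Squared distance suffices because only the ordering matters."""
    def sq_dist(prop):
        return sum((a - b) ** 2 for a, b in zip(obj_pos, prop["pos"]))
    return min(props, key=sq_dist)

props = [
    {"id": 601, "pos": (2.0, 0.0, 0.0)},
    {"id": 602, "pos": (5.0, 0.0, 0.0)},
    {"id": 603, "pos": (9.0, 0.0, 0.0)},
]
print(nearest_pickable_prop((0.0, 0.0, 0.0), props)["id"])  # 601
```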
• In some embodiments, presenting a pick-up prompt for some of the multiple idle virtual props includes: performing the following processing for each idle virtual prop: obtaining, based on the first virtual object's usage preference for virtual props, the matching degree between the idle virtual prop and the usage preference; sorting the matching degrees of the multiple idle virtual props, and selecting the idle virtual prop corresponding to the highest matching degree for the pick-up prompt.
• For example, a neural network model predicts the usage preference of the first virtual object from the virtual props it has used historically, that is, obtains the first virtual object's preference for virtual props. Based on this usage preference, the matching degree between the idle virtual prop 601 and the usage preference, the matching degree between the idle virtual prop 602 and the usage preference, and the matching degree between the idle virtual prop 603 and the usage preference are determined; the idle virtual prop with the highest matching degree is selected for the pick-up prompt, so that the idle virtual prop the first virtual object likes most is selected from the multiple idle virtual props.
• In some embodiments, presenting a pick-up prompt for some of the multiple idle virtual props includes: performing the following processing for each idle virtual prop: obtaining the frequency with which the idle virtual prop is used by other virtual objects; sorting the frequencies of the multiple idle virtual props, and selecting the idle virtual prop corresponding to the maximum frequency for the pick-up prompt.
• For example, if the idle virtual prop 601 is most frequently used by other virtual objects (virtual objects other than the first virtual object), that is, the idle virtual prop 601 is often used by other virtual objects and has high usage, then a pick-up prompt for the idle virtual prop 601, i.e., a pick-up control, is presented in the virtual scene, indicating that the idle virtual prop 601 can be picked up.
• In this way, the idle virtual prop with the highest usage is selected from the multiple idle virtual props for the pick-up prompt, indicating that it is easy to use and has a certain use value; having the first virtual object pick up a valuable idle virtual prop benefits it in battle.
• In some embodiments, presenting a pick-up prompt for some of the multiple idle virtual props includes: performing the following processing for each idle virtual prop: obtaining the performance parameters of the idle virtual prop in the virtual scene; sorting the performance parameters of the multiple idle virtual props, and selecting the idle virtual prop corresponding to the maximum performance parameter for the pick-up prompt.
• For example, if the idle virtual prop 601 has the highest performance parameters (for example, combat value, defense value, etc.) in the virtual scene, then a pickable prompt for the idle virtual prop 601, that is, a pick-up control, is presented in the virtual scene, indicating that the idle virtual prop 601 can be picked up.
• In this way, the idle virtual prop with the highest performance parameter is selected from the multiple idle virtual props for the pick-up prompt, indicating that it is easy to use and has a certain use value; having the first virtual object pick up a valuable idle virtual prop benefits it in battle.
• In some embodiments, the following processing can also be performed for each idle virtual prop among the multiple idle virtual props: obtaining the number of virtual coins the idle virtual prop is worth in the virtual scene; sorting the virtual coin values of the multiple idle virtual props, and selecting the idle virtual prop corresponding to the largest value for the pick-up prompt, so that the first virtual object picks up the idle virtual prop worth the most virtual coins and obtains the maximum benefit in the virtual scene.
• In some embodiments, presenting a pick-up prompt for some of the multiple idle virtual props includes: performing the following processing for each idle virtual prop: obtaining the type of virtual prop held by the first virtual object; and, when the type of the idle virtual prop is different from the type of the held virtual prop, presenting a pick-up prompt for the idle virtual prop.
• For example, the types of virtual props are various, including shooting, throwing, defense, and attack. The idle virtual prop 601 is a defensive virtual prop, the idle virtual prop 602 is a shooting virtual prop, the idle virtual prop 603 is a throwing virtual prop, and the types of virtual props held by the first virtual object include the shooting type; the type of the idle virtual prop 601 is different from the type of the held virtual prop, so a pick-up prompt for the idle virtual prop 601, i.e., a pick-up control, is presented in the virtual scene, indicating that the idle virtual prop 601 can be picked up.
• In this way, the virtual props missing from the first virtual object's holdings are screened out from the multiple idle virtual props, so that the first virtual object picks up the missing idle virtual props and possesses all types of virtual props for more versatile battles.
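The missing-type screening can be sketched as a simple filter; the type names and IDs are illustrative:

```python
def missing_type_props(held_types, idle_props):
    """Keep only idle props whose type the first virtual object does not
    already hold, so the pick-up prompt fills gaps in its loadout."""
    held = set(held_types)
    return [p for p in idle_props if p["type"] not in held]

idle = [{"id": 601, "type": "defense"},
        {"id": 602, "type": "shooting"},
        {"id": 603, "type": "throwing"}]
result = missing_type_props({"shooting"}, idle)
print([p["id"] for p in result])  # [601, 603]
```

The same filter, fed with the union of the whole team's held types, covers the team-level variant described below.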
• In some embodiments, presenting a pick-up prompt for some of the multiple idle virtual props includes: performing the following processing for each idle virtual prop: obtaining the types of virtual props held by the team of the first virtual object; and, when the type of the idle virtual prop is different from the types of the held virtual props, presenting a pick-up prompt for the idle virtual prop.
• For example, the types of virtual props are various, including shooting, throwing, defense, and attack. The idle virtual prop 601 is a defensive virtual prop, the idle virtual prop 602 is a shooting virtual prop, and the idle virtual prop 603 is a throwing virtual prop. If the types of virtual props held by the team of the first virtual object include shooting, throwing, and attack, then the type of the idle virtual prop 601 is different from the types of the held virtual props, so a pick-up prompt for the idle virtual prop 601, i.e., a pick-up control, is presented in the virtual scene, indicating that the idle virtual prop 601 can be picked up.
• In this way, the virtual props the team lacks are screened from the multiple idle virtual props, so that the first virtual object picks up the missing idle virtual props and the team possesses all types of virtual props, which is conducive to teamwork.
• In some embodiments, presenting a pick-up prompt for some of the multiple idle virtual props includes: performing the following processing for each idle virtual prop: obtaining the role assigned to the first virtual object in the team; and, when the type of the idle virtual prop matches the role, presenting a pick-up prompt for the idle virtual prop.
• For example, the types of virtual props are various, including shooting, throwing, defense, and attack. The idle virtual prop 601 is a defensive virtual prop, the idle virtual prop 602 is a shooting virtual prop, and the idle virtual prop 603 is a throwing virtual prop. If the role assigned to the first virtual object in the team is a shooter, the type of the idle virtual prop 602 matches the role of the first virtual object, so a pick-up prompt for the idle virtual prop 602, that is, a pick-up control, is presented in the virtual scene, indicating that the idle virtual prop 602 can be picked up.
• In this way, the virtual props matching the role of the first virtual object are selected from the multiple idle virtual props, so that the first virtual object picks up the matching idle virtual props and has suitable virtual props for battle.
• In some embodiments, presenting the pick-up prompt for the idle virtual prop includes: presenting the pick-up prompt through a target display style, where the target display style indicates that the idle virtual prop is in a pickable state.
• When there is no obstacle between the idle virtual prop and the used virtual prop, the pick-up prompt for the idle virtual prop can be displayed through a target display style, where the target display style includes different presentation styles such as highlighting, flashing, and different colors (the display color determined according to the function of the idle virtual prop), to highlight that the idle virtual prop is in a pickable state.
• In step 104, in response to a picking operation controlling the first virtual object, the first virtual object is controlled to pick up the idle virtual prop.
  • the user can control the first virtual object to perform a pickup operation, thereby presenting the process of picking up the idle virtual prop by the first virtual object in the virtual scene.
• For example, the user controls the first virtual object to approach the idle virtual prop and clicks the pick-up control; the first virtual object assumes a crouching posture to pick up the idle virtual prop and replaces the currently used virtual prop with the idle virtual prop.
• In some embodiments, a timer is bound to the idle virtual prop, where the timer starts timing when the idle virtual prop is presented in the virtual scene. After the at least one idle virtual prop is presented in the virtual scene, if the timer determines that the idle virtual prop has not been picked up within a set time period, the presentation of the idle virtual prop in the virtual scene is cancelled.
• For example, when an enemy is killed by the first virtual object, the enemy drops the virtual props it holds, and a timer is bound to the dropped virtual props. If the dropped virtual props are not picked up within the set time period, their presentation in the virtual scene is cancelled. For example, if a dropped bow and arrow is not picked up within an hour, the bow and arrow is removed from the virtual scene, and no virtual object in the virtual scene can pick it up.
• Different virtual props can have different set time periods. For example, the set time period is determined according to factors such as the type of the idle virtual prop, its performance parameters, and the virtual coins it is worth; that is, the more valuable the virtual prop, the longer its set time period. For example, if the combat value of idle virtual prop 1 is 2000 and its set time period is 2 hours, and idle virtual prop 1 is not picked up within 2 hours, the presentation of idle virtual prop 1 in the virtual scene is cancelled; if the combat value of idle virtual prop 2 is 1000 and its set time period is 1 hour, and idle virtual prop 2 is not picked up within 1 hour, the presentation of idle virtual prop 2 in the virtual scene is cancelled.
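The value-dependent expiry rule above can be sketched as follows; the combat-value-to-duration mapping mirrors the example in the text (2000 → 2 hours, 1000 → 1 hour) but is otherwise an illustrative assumption:

```python
def expired_props(props, now):
    """Return the idle props whose timer has run out. A more valuable
    prop (higher combat value) gets a longer set time period.
    Timestamps are in seconds."""
    def allowed_seconds(prop):
        # e.g. combat value 2000 -> 7200 s (2 h), 1000 -> 3600 s (1 h)
        return prop["combat_value"] / 1000 * 3600
    return [p for p in props
            if now - p["dropped_at"] > allowed_seconds(p)]

props = [
    {"id": 1, "combat_value": 2000, "dropped_at": 0},  # expires after 7200 s
    {"id": 2, "combat_value": 1000, "dropped_at": 0},  # expires after 3600 s
]
print([p["id"] for p in expired_props(props, now=4000)])  # [2]
```

At 4000 seconds, only prop 2 has exceeded its set time period, so only its rendering would be cancelled.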
• The embodiments of the present application optimize the item pick-up function so that a virtual object cannot pick up items through a wall; that is, when the player cannot see an item, its pick-up control is not displayed. Displaying pick-up controls for items that cannot actually be picked up is a very poor experience.
• The embodiments of the present application optimize the item pick-up function in the following ways: 1) pick-up controls are not displayed for items behind a wall; 2) the detection logic for judging whether an item can be picked up is optimized. The following describes how the item pick-up function is optimized.
  • FIG. 7 is a schematic diagram of an interface for dropping weapons provided by an embodiment of the present application.
• When the enemy 701 is killed, it drops its currently used weapon 702 at the position where it was killed.
  • FIG. 8 is a schematic diagram of the interface of the dropped weapon provided by the embodiment of the present application.
  • FIG. 9 is a schematic diagram of an interface for dropping weapons provided by an embodiment of the present application.
• When there is a wall between the user-controlled virtual object 902 and the dropped weapon 702, the pick-up control is not displayed and the weapon 702 cannot be picked up; the virtual object 902 must go around the left side of the wall to pick up the weapon 702.
• When a collision box is bound to the dropped weapon, the dropped weapon can only be detected when the virtual object enters or exits the collision box. If the dropped weapon falls exactly at the position of the virtual object, the virtual object is already inside the weapon's collision box, so it must first exit the collision box and then re-enter it to trigger the logic of displaying the pickable control.
  • Dropped weapons on the top floor may pass through the floor and be picked up by virtual objects on the bottom layer.
• In the embodiments of the present application, no collision box is bound to the dropped weapon, and the problems caused by the collision box are solved by detecting whether there is an obstacle between the virtual object and the dropped weapon.
  • FIG. 10 is a schematic diagram of an interface of a dropped weapon provided by an embodiment of the present application.
• When there are a weapon 1002 and a weapon 1003 at the same location 1001, if the user-controlled virtual object approaches the location 1001 and can pick up a weapon, the pick-up control of the weapon 1002, which is closer to the virtual object, is displayed first.
  • FIG. 11 is a schematic flowchart of the interaction processing of virtual props provided by the embodiment of the present application, and the description is combined with the steps shown in FIG. 11 .
• Step 1101: After starting the game, the user-controlled virtual object can find a target (enemy) to kill; during the game, a target must be killed before a weapon drops. The killed target is not necessarily an enemy; it can also be a teammate (if a teammate is killed, the interface shows that the dropped weapon was dropped by a teammate).
• Step 1102: When a target dies in the scene, the user-controlled virtual object can move to approach the dropped weapon. Since no collision box is mounted on the dropped weapon, the pick-up is not triggered by a collision response, but by mathematically calculating the distance between positions.
  • FIG. 12 is a schematic diagram of the interface for distance calculation provided by the embodiment of the present application.
  • Point 1201 represents the position of the dropped weapon, and R represents the radius within which it can be picked up. The distance D between point 1201 and virtual object 1202 is then calculated. If D is less than R, the virtual object 1202 is within the pick-up range, that is, close to the dropped weapon; if D is greater than R, the virtual object 1202 is outside the pick-up range, that is, away from the dropped weapon.
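A minimal sketch of this distance check (the function name and the tuple representation of 3D positions are illustrative assumptions, not part of the embodiment):

```python
import math

def in_pickup_range(weapon_pos, object_pos, radius):
    """Return True when the distance D between the dropped weapon
    (point 1201) and the virtual object (1202) is less than the
    pick-up radius R."""
    d = math.dist(weapon_pos, object_pos)  # Euclidean distance
    return d < radius
```

Because the check is a pure distance comparison, it works regardless of whether the object spawned inside the pick-up radius, avoiding the enter/exit problem of collision boxes.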
  • Step 1103: Determine whether there is an obstacle between the virtual object and the dropped weapon.
  • the detection method is to cast a detection ray from the muzzle position of the weapon used by the virtual object, with the dropped weapon as the end point, as shown in FIG. 13.
  • FIG. 13 is a schematic diagram of an interface for obstacle detection provided by an embodiment of the present application.
  • Line segment 1301 is the detection ray. When the ray intersects an obstacle, such as a wall 1302, the obstacle is detected, and the dropped weapon cannot be picked up even if the distance check is satisfied.
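The muzzle-to-weapon obstacle test can be sketched as a segment-versus-collision-box intersection using the slab method. The function names and the axis-aligned-box representation of obstacles are assumptions for illustration; an actual engine would use its own raycast API against collider components:

```python
def segment_hits_box(start, end, box_min, box_max):
    """Check whether the segment from the muzzle (start) to the dropped
    weapon (end) crosses an obstacle's axis-aligned collision box."""
    t_enter, t_exit = 0.0, 1.0
    for a in range(3):
        d = end[a] - start[a]
        if abs(d) < 1e-12:
            # Segment parallel to this slab: must already lie inside it.
            if start[a] < box_min[a] or start[a] > box_max[a]:
                return False
        else:
            t1 = (box_min[a] - start[a]) / d
            t2 = (box_max[a] - start[a]) / d
            if t1 > t2:
                t1, t2 = t2, t1
            t_enter = max(t_enter, t1)
            t_exit = min(t_exit, t2)
            if t_enter > t_exit:
                return False
    return True

def can_pick_up(muzzle, weapon, obstacles):
    """Pick-up is allowed only when no obstacle box blocks the ray."""
    return not any(segment_hits_box(muzzle, weapon, lo, hi)
                   for lo, hi in obstacles)
```

Any intersection between the ray segment and an obstacle box corresponds to the ray crossing wall 1302 in FIG. 13, in which case the pick-up control is suppressed.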
  • Step 1104: If multiple dropped weapons appear within the same range, the distances between the positions of all dropped weapons and the virtual object are calculated, and a pick-up control is displayed for the dropped weapon closest to the virtual object.
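A minimal sketch of selecting the closest dropped weapon for the pick-up control (the dictionary layout for weapons is an assumed illustration):

```python
def nearest_weapon(object_pos, weapons):
    """Among several dropped weapons in range, return the one whose
    position is closest to the virtual object; its pick-up control is
    the one displayed first."""
    def sq_dist(w):
        # Squared distance is enough for ordering; no sqrt needed.
        return sum((a - b) ** 2 for a, b in zip(w["pos"], object_pos))
    return min(weapons, key=sq_dist) if weapons else None
```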
  • Step 1105: After the user clicks the pick-up control, a pick-up request is sent to the server, and the server verifies whether the current weapon has already been picked up or has disappeared. If the weapon can be picked up, the server returns a success response, and the weapon used by the virtual object is replaced with the current weapon.
  • Step 1106: If the dropped weapon is not picked up within a certain period of time, it disappears from the game.
  • the embodiment of the present application optimizes the item pick-up function so that the virtual object cannot pick up items through a wall; that is, when the player cannot see an item, its pick-up control is not displayed, which improves the user experience.
  • the presentation module 4551 is configured to present at least one idle virtual prop in the virtual scene; the response module 4552 is configured to control the first virtual object to move in the virtual scene in response to a movement operation controlling the first virtual object; the processing module 4553 is configured to present a pick-up prompt for the idle virtual prop when the idle virtual prop is located in the direction of the used virtual prop of the first virtual object and there is no obstacle between the idle virtual prop and the used virtual prop; the response module 4552 is further configured to control the first virtual object to pick up the idle virtual prop in response to a pick-up operation controlling the first virtual object.
  • the interaction processing apparatus 455 for the virtual prop further includes: a detection module 4554, configured to detect the distance between the first virtual object and the idle virtual prop during the moving process; and, when the distance is less than a distance threshold, to perform obstacle detection between the used virtual prop of the first virtual object and the idle virtual prop.
  • the detection module 4554 is further configured to perform obstacle detection between the used virtual prop of the first virtual object and the idle virtual prop based on each real-time position of the first virtual object during the moving process.
  • the detection module 4554 is further configured to emit a detection ray at the position of the used virtual prop through a camera component bound to the used virtual prop, wherein the orientation of the detection ray is consistent with that of the used virtual prop; and to determine, based on the detection ray, whether there is an obstacle between the used virtual prop and the idle virtual prop.
  • the detection module 4554 is further configured to determine that the obstacle exists between the used virtual prop and the idle virtual prop when the detection ray intersects a collider component bound to the obstacle; and to determine that the obstacle does not exist between the used virtual prop and the idle virtual prop when the detection ray does not intersect the collider component bound to the obstacle.
  • the processing module 4553 is further configured to present an unpickable prompt for the idle virtual prop when the idle virtual prop is located in the direction of the used virtual prop of the first virtual object and there is an obstacle between the idle virtual prop and the used virtual prop.
  • the processing module 4553 is further configured to present a pick-up prompt for a part of a plurality of the idle virtual props when the plurality of idle virtual props are located in the direction of the used virtual prop of the first virtual object and there is no obstacle between them and the used virtual prop.
  • the processing module 4553 is further configured to perform the following processing for any idle virtual prop among the plurality of idle virtual props: acquiring the distance between the first virtual object and the idle virtual prop during the moving process; sorting the distances between the plurality of idle virtual props and the first virtual object; and selecting the idle virtual prop corresponding to the smallest distance for the pick-up prompt.
  • the processing module 4553 is further configured to perform the following processing for any idle virtual prop among the plurality of idle virtual props: obtaining, based on the usage preference of the first virtual object for virtual props, the degree of matching between the idle virtual prop and the usage preference; sorting the degrees of matching between the plurality of idle virtual props and the usage preference; and selecting the idle virtual prop corresponding to the highest degree of matching for the pick-up prompt.
  • the processing module 4553 is further configured to perform the following processing for any idle virtual prop among the plurality of idle virtual props: obtain the frequency of the idle virtual prop being used by other virtual objects; The frequencies of the idle virtual props being used by other virtual objects are sorted, and the idle virtual props corresponding to the maximum frequency are selected for pickup prompts.
  • the processing module 4553 is further configured to perform the following processing for any idle virtual prop among the plurality of idle virtual props: obtain performance parameters of the idle virtual prop in the virtual scene; The performance parameters of the plurality of idle virtual props in the virtual scene are sorted, and the idle virtual prop corresponding to the maximum performance parameter is selected for pickup prompt.
  • the processing module 4553 is further configured to perform the following processing for any idle virtual prop among the plurality of idle virtual props: obtain the type of the held virtual prop of the first virtual object; When the type of the idle virtual item is different from the type of the held virtual item, a pick-up prompt is given to the idle virtual item.
  • the processing module 4553 is further configured to perform the following processing for any idle virtual prop among the plurality of idle virtual props: obtain the assigned role of the first virtual object in the team; When the type of the idle virtual item matches the character, a pick-up prompt is given to the idle virtual item.
  • the processing module 4553 is further configured to present a pick-up prompt of the idle virtual prop through a target display style; wherein the target display style indicates that the idle virtual prop is in a pick-up state.
  • the interaction processing apparatus 455 of the virtual prop further includes: a timing module 4555, configured to bind a timer to the idle virtual prop, wherein the timer starts timing when the idle virtual prop is presented in the virtual scene; and, when the timer determines that the idle virtual prop has not been picked up within a set period of time, to stop presenting the idle virtual prop in the virtual scene.
  • the presentation module 4551 is further configured to, when a second virtual object is attacked in the virtual scene and loses the ability to hold virtual props, present, at the position where the second virtual object is attacked, at least one virtual prop dropped by the second virtual object.
  • the presentation module 4551 is further configured to, when the second virtual object actively discards at least one held virtual prop in the virtual scene, use the held virtual prop as an idle virtual prop, and present, at the position where the second virtual object discards the held virtual prop, the at least one actively discarded virtual prop.
  • the presentation module 4551 is further configured to, when a teammate of the first virtual object places at least one held virtual prop at a placement position in the virtual scene, use the held virtual prop as an idle virtual prop, wherein the idle virtual prop is available for the first virtual object to pick up; and to present, at the placement position in the map of the virtual scene, the at least one idle virtual prop placed by the teammate.
  • Embodiments of the present application provide a computer program product or computer program, where the computer program product or computer program includes computer instructions, and the computer instructions are stored in a computer-readable storage medium.
  • the processor of the electronic device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the electronic device executes the above-mentioned method for interactive processing of virtual items in the embodiments of the present application.
  • the embodiments of the present application provide a computer-readable storage medium storing executable instructions which, when executed by a processor, cause the processor to execute the interaction processing method for virtual props provided by the embodiments of the present application,
  • for example, the interaction processing method of virtual props shown in FIGS. 3A-3C.
  • the computer-readable storage medium may be a memory such as FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, optical disc, or CD-ROM; or may be various devices including one or any combination of the foregoing memories.
  • executable instructions may take the form of programs, software, software modules, scripts, or code, written in any form of programming language (including compiled or interpreted languages, or declarative or procedural languages), and may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
  • executable instructions may, but do not necessarily, correspond to files in a file system, and may be stored as part of a file that holds other programs or data, for example, in one or more scripts in a Hyper Text Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple cooperating files (e.g., files that store one or more modules, subroutines, or code sections).
  • executable instructions may be deployed to be executed on one electronic device, on multiple electronic devices located at one site, or on multiple electronic devices distributed across multiple sites and interconnected by a communication network.


Abstract

This application provides an interaction processing method and apparatus for virtual props, an electronic device, and a computer-readable storage medium. The method includes: presenting at least one idle virtual prop in a virtual scene; in response to a movement operation controlling a first virtual object, controlling the first virtual object to move in the virtual scene; when the idle virtual prop is located in the direction of a used virtual prop of the first virtual object and there is no obstacle between the idle virtual prop and the used virtual prop, presenting a pick-up prompt for the idle virtual prop; and in response to a pick-up operation controlling the first virtual object, controlling the first virtual object to pick up the idle virtual prop.

Description

Interaction processing method and apparatus for virtual props, electronic device, and readable storage medium

CROSS-REFERENCE TO RELATED APPLICATIONS

The embodiments of this application are based on, and claim priority to, Chinese patent application No. 202011057428.1 filed on September 29, 2020, the entire contents of which are incorporated herein by reference.
TECHNICAL FIELD

This application relates to computer human-computer interaction technology, and in particular to an interaction processing method and apparatus for virtual props, an electronic device, and a computer-readable storage medium.

BACKGROUND

Display technology based on graphics processing hardware has expanded the channels for perceiving the environment and obtaining information. In particular, display technology for virtual scenes can realize diversified interactions between virtual objects controlled by users or artificial intelligence according to actual application needs, and has various typical application scenarios; for example, in virtual scenes such as military exercise simulations and games, real battle processes between virtual objects can be simulated.

To make full use of idle virtual props in a virtual scene, such as dropped virtual props, the related art provides a pick-up function for virtual props, whereby a virtual object controlled by a player can pick up idle virtual props in the virtual scene.

However, the applicant found that because the pick-up operation in the related art is implemented by binding a collision box, pick-ups that violate physical laws can occur (such as picking up through a wall), which impairs the accuracy of human-computer interaction in the virtual scene and thus the user experience.
SUMMARY

The embodiments of this application provide an interaction processing method and apparatus for virtual props, an electronic device, and a computer-readable storage medium, which can realize accurate pick-up that conforms to physical laws in a virtual scene.

The technical solutions of the embodiments of this application are implemented as follows:

An embodiment of this application provides an interaction processing method for virtual props, including:

presenting at least one idle virtual prop in a virtual scene;

in response to a movement operation controlling a first virtual object, controlling the first virtual object to move in the virtual scene;

when the idle virtual prop is located in the direction of a used virtual prop of the first virtual object and there is no obstacle between the idle virtual prop and the used virtual prop, presenting a pick-up prompt for the idle virtual prop;

in response to a pick-up operation controlling the first virtual object, controlling the first virtual object to pick up the idle virtual prop.
An embodiment of this application provides an interaction processing apparatus for virtual props, including:

a presentation module, configured to present at least one idle virtual prop in a virtual scene;

a response module, configured to control a first virtual object to move in the virtual scene in response to a movement operation controlling the first virtual object;

a processing module, configured to present a pick-up prompt for the idle virtual prop when the idle virtual prop is located in the direction of a used virtual prop of the first virtual object and there is no obstacle between the idle virtual prop and the used virtual prop;

the response module being further configured to control the first virtual object to pick up the idle virtual prop in response to a pick-up operation controlling the first virtual object.

An embodiment of this application provides an electronic device for interaction processing of virtual props, the electronic device including:

a memory for storing executable instructions;

a processor configured to implement, when executing the executable instructions stored in the memory, the interaction processing method for virtual props provided by the embodiments of this application.

An embodiment of this application provides a computer-readable storage medium storing executable instructions for causing a processor to implement, when executed, the interaction processing method for virtual props provided by the embodiments of this application.

The embodiments of this application have the following beneficial effects:

By detecting whether there is an obstacle between the idle virtual prop and the used virtual prop, obstacle detection between the idle virtual prop and the virtual object is achieved, so that the pick-up function can be performed only when there is no obstacle between the idle virtual prop and the virtual object. This realizes accurate pick-up that conforms to physical laws, improves the accuracy of human-computer interaction in the virtual scene, and thereby improves the actual utilization of the computing resources consumed by the virtual scene.
BRIEF DESCRIPTION OF THE DRAWINGS

FIGS. 1A-1B are schematic diagrams of application modes of the interaction processing method for virtual scenes provided by embodiments of this application;

FIG. 2A is a schematic structural diagram of an electronic device for interaction processing of virtual scenes provided by an embodiment of this application;

FIG. 2B is a schematic diagram of the principle of the human-computer interaction engine installed in the interaction processing apparatus for virtual props provided by an embodiment of this application;

FIGS. 3A-3C are schematic flowcharts of the interaction processing method for virtual props provided by embodiments of this application;

FIG. 4 is a schematic interface diagram of the interaction processing method for virtual props provided by an embodiment of this application;

FIGS. 5A-5B are schematic interface diagrams of virtual reality provided by embodiments of this application;

FIG. 6 is a schematic interface diagram of multiple idle virtual props provided by an embodiment of this application;

FIG. 7 is a schematic interface diagram of a dropped weapon provided by an embodiment of this application;

FIG. 8 is a schematic interface diagram of a dropped weapon provided by an embodiment of this application;

FIG. 9 is a schematic interface diagram of a dropped weapon provided by an embodiment of this application;

FIG. 10 is a schematic interface diagram of a dropped weapon provided by an embodiment of this application;

FIG. 11 is a schematic flowchart of the interaction processing of virtual props provided by an embodiment of this application;

FIG. 12 is a schematic interface diagram of distance calculation provided by an embodiment of this application;

FIG. 13 is a schematic interface diagram of obstacle detection provided by an embodiment of this application.
具体实施方式
为了使本申请的目的、技术方案和优点更加清楚,下面将结合附图对本申请作进一步地详细描述,所描述的实施例不应视为对本申请的限制,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其它实施例,都属于本申请保护的范围。
在以下的描述中,所涉及的术语“第一\第二”仅仅是是区别类似的对象,不代表针对对象的特定排序,可以理解地,“第一\第二”在允许的情况下可以互换特定的顺序或先后次序,以使这里描述的本申请实施例能够以除了在这里图示或描述的以外的顺序实施。
除非另有定义,本文所使用的所有的技术和科学术语与属于本申请的技术领域的技术人员通常理解的含义相同。本文中所使用的术语只是为了描述本申请实施例的目的,不是旨在限制本申请。
对本申请实施例进行进一步详细说明之前,对本申请实施例中涉及的名词和术语进行说明,本申请实施例中涉及的名词和术语适用于如下的解释。
1) Virtual scene: a scene, output by a device, that is distinct from the real world. Visual perception of the virtual scene can be formed with the naked eye or with the assistance of a device, for example, two-dimensional images output through a display screen, or three-dimensional images output through stereoscopic display technologies such as stereoscopic projection, virtual reality, and augmented reality; in addition, various perceptions simulating the real world, such as auditory, tactile, olfactory, and motion perception, can be formed through various possible hardware.

2) In response to: used to indicate the condition or state on which a performed operation depends. When the dependent condition or state is satisfied, the one or more performed operations may be real-time or may have a set delay; unless otherwise specified, there is no restriction on the execution order of multiple performed operations.

3) Client: an application running in a terminal for providing various services, such as a game client or a military exercise simulation client.

4) Virtual object: the image of any person or thing that can interact in the virtual scene, or a movable object in the virtual scene. The movable object may be a virtual character, a virtual animal, an anime character, etc., for example, a character, animal, plant, oil drum, wall, or stone displayed in the virtual scene. The virtual object may be a virtual avatar representing the user in the virtual scene. The virtual scene may include multiple virtual objects, each of which has its own shape and volume in the virtual scene and occupies part of the space in the virtual scene.

For example, the virtual object may be a user character controlled through operations on the client, an artificial intelligence (AI) set up in a virtual-scene battle through training, or a non-player character (NPC) set up in virtual-scene interaction. For example, the virtual object may be a virtual character engaged in adversarial interaction in the virtual scene. For example, the number of virtual objects participating in the interaction in the virtual scene may be preset, or may be dynamically determined according to the number of clients joining the interaction.

Taking a shooting game as an example, the user may control the virtual object to free-fall, glide, or open a parachute to descend in the sky of the virtual scene; to run, jump, crawl, or advance while bending over on land; or to swim, float, or dive in the ocean. The user may also control the virtual object to move in the virtual scene by riding a virtual vehicle, for example, a virtual car, a virtual aircraft, or a virtual yacht; the above scenes are merely examples, and the embodiments of this application are not specifically limited thereto. The user may also control the virtual object to interact adversarially with other virtual objects through virtual props, for example, throwing-type virtual props such as grenades, cluster grenades, and sticky grenades, or shooting-type virtual props such as machine guns, pistols, and rifles; this application does not specifically limit the types of virtual props.

5) Scene data: represents various characteristics exhibited by objects in the virtual scene during interaction, for example, the positions of the objects in the virtual scene. Of course, different types of characteristics may be included depending on the type of the virtual scene; for example, in the virtual scene of a game, the scene data may include the waiting time required for various functions configured in the virtual scene (depending on the number of times the same function can be used within a specific time), and may also represent attribute values of various states of the game character, for example, hit points (also called health) and magic points (also called mana).
In the related art, in a mobile shooting game, the user can control a virtual object to enter equipped with two weapons before the match starts and then kill enemies during the game. A killed character drops the weapon currently in use, and the virtual object controlled by the user can pick up the dropped weapon or other equipment. However, there is a problem with picking up weapons and equipment, namely that items (virtual props) may be picked up through a wall, that is, picked up across an obstacle.

The embodiments of this application found that the above problem mainly arises because item pick-up is implemented by triggering a collision box. Since the collision box is bound to the dropped item, the drop position may cause the collision box to pass through obstacles such as walls, so that the player can pick up the item through the wall.

To solve the above problem, the embodiments of this application provide an interaction processing method and apparatus for virtual props, an electronic device, and a computer-readable storage medium, which can realize accurate pick-up that conforms to physical laws in a virtual scene. Exemplary applications of the electronic device provided by the embodiments of this application are described below. The electronic device provided by the embodiments of this application may be implemented as various types of user terminals, such as a notebook computer, a tablet computer, a desktop computer, a set-top box, or a mobile device (e.g., a mobile phone, a portable music player, a personal digital assistant, a dedicated messaging device, a portable game device, an in-vehicle terminal, or a smart TV), and may also be implemented as a server. An exemplary application in which the device is implemented as a terminal is described below.

To facilitate understanding of the interaction processing method for virtual props provided by the embodiments of this application, exemplary implementation scenarios are first described. The virtual scene may be output entirely based on the terminal, or output based on the cooperation of the terminal and the server.

In some embodiments, the virtual scene may be a picture presented in a military exercise simulation. In the virtual scene, the user can simulate battle situations, strategies, or tactics through virtual objects belonging to different teams, which has great guiding significance for the command of military operations.

In some embodiments, the virtual scene may be an environment for game characters to interact in, for example, for game characters to battle in the virtual scene. By controlling the actions of virtual objects, both sides can interact in the virtual scene, allowing the user to relieve the pressures of life during the game.
In one implementation scenario, referring to FIG. 1A, FIG. 1A is a schematic diagram of an application mode of the interaction processing method for virtual props provided by an embodiment of this application, applicable to application modes in which the data computation related to the virtual scene 100 can be completed entirely relying on the computing capability of the terminal 400, such as a standalone/offline-mode game, where the output of the virtual scene is completed through a terminal 400 such as a smartphone, a tablet computer, or a virtual reality/augmented reality device.

When forming the visual perception of the virtual scene 100, the terminal 400 computes the data required for display through graphics computing hardware, completes the loading, parsing, and rendering of the display data, and outputs, on graphics output hardware, video frames capable of forming visual perception of the virtual scene, for example, presenting two-dimensional video frames on the display screen of a smartphone, or projecting video frames that realize a three-dimensional display effect on the lenses of augmented reality/virtual reality glasses; in addition, to enrich the perception effect, the device may also form one or more of auditory perception, tactile perception, motion perception, and taste perception with the aid of different hardware.

As an example, the terminal 400 runs a client 410 (e.g., a standalone game application) and, during the running of the client 410, outputs a virtual scene including role-playing. The virtual scene is an environment for game characters to interact in, for example, a plain, street, or valley for game characters to battle in. The virtual scene includes a first virtual object 110 and a virtual prop 120. The first virtual object 110 may be a game character controlled by the user (or player); that is, the first virtual object 110 is controlled by a real user and moves in the virtual scene in response to the real user's operations on controllers (including a touch screen, voice-activated switch, keyboard, mouse, joystick, etc.). For example, when the real user moves the joystick to the left, the first virtual object moves to the left in the virtual scene; it can also stay still, jump, and use various functions (such as skills and props). The virtual prop 120 may be a battle tool used by the first virtual object 110 in the virtual scene; for example, the first virtual object 110 can pick up the virtual prop 120 in the virtual scene and use its functions for game battle.

For example, when the user controls the first virtual object 110 through the client 410 to move toward the virtual prop 120 (i.e., the idle virtual prop) in the virtual scene, and the virtual prop 120 is located in the direction of the used virtual prop of the first virtual object 110 and there is no obstacle between the virtual prop 120 and the used virtual prop, the user controls the first virtual object 110 through the client 410 to pick up the virtual prop 120 in the virtual scene.
In another implementation scenario, referring to FIG. 1B, FIG. 1B is a schematic diagram of an application mode of the interaction processing method for virtual props provided by an embodiment of this application, applied to a terminal 400 and a server 200, and applicable to application modes in which the virtual scene computation is completed relying on the computing capability of the server 200 and the virtual scene is output at the terminal 400.

Taking the formation of the visual perception of the virtual scene 100 as an example, the server 200 computes display data related to the virtual scene and sends it to the terminal 400. The terminal 400 relies on graphics computing hardware to complete the loading, parsing, and rendering of the computed display data, and relies on graphics output hardware to output the virtual scene to form visual perception, for example, presenting two-dimensional video frames on the display screen of a smartphone, or projecting video frames that realize a three-dimensional display effect on the lenses of augmented reality/virtual reality glasses. For the perception of the form of the virtual scene, it can be understood that output can be performed with the aid of the corresponding hardware of the terminal, for example, using microphone output to form auditory perception and using vibrator output to form tactile perception.

As an example, the terminal 400 runs a client 410 (e.g., an online game application) and interacts with other users in the game by connecting to a game server (i.e., the server 200). The terminal 400 outputs the virtual scene 100 of the client 410, which includes a first virtual object 110 and a virtual prop 120. The first virtual object 110 may be a game character controlled by the user; that is, the first virtual object 110 is controlled by a real user and moves in the virtual scene in response to the real user's operations on controllers (including a touch screen, voice-activated switch, keyboard, mouse, joystick, etc.). For example, when the real user moves the joystick to the left, the first virtual object moves to the left in the virtual scene; it can also stay still, jump, and use various functions (such as skills and props). The virtual prop 120 may be a battle tool used by the first virtual object 110 in the virtual scene; for example, the first virtual object 110 can pick up the virtual prop 120 in the virtual scene and use its functions for game battle.

For example, when the user controls the first virtual object 110 through the client 410 to move toward the virtual prop 120 (i.e., the idle virtual prop) in the virtual scene, the client 410 sends the position information of the first virtual object 110 to the server 200 through the network 300. The server 200 performs obstacle detection between the virtual prop 120 and the first virtual object 110 according to the pick-up logic. When the virtual prop 120 is located in the direction of the used virtual prop of the first virtual object 110 and there is no obstacle between the virtual prop 120 and the used virtual prop, the server sends a pick-up prompt for the virtual prop 120 to the client 410. After receiving the pick-up prompt for the virtual prop 120, the client 410 presents it, and based on the pick-up prompt the user controls the first virtual object 110 to pick up the virtual prop 120 in the virtual scene.
In some embodiments, the terminal 400 may implement the interaction processing method for virtual props provided by the embodiments of this application by running a computer program. For example, the computer program may be a native program or software module in an operating system; a native application (APP), i.e., a program that needs to be installed in the operating system to run, such as a game APP (i.e., the above client 410); a mini program, i.e., a program that can run simply by being downloaded into a browser environment; or a game mini program that can be embedded in any APP. In short, the above computer program may be any form of application, module, or plug-in.

The embodiments of this application may be implemented with the aid of cloud technology, which refers to a hosting technology that unifies a series of resources such as hardware, software, and networks within a wide area network or local area network to realize the computation, storage, processing, and sharing of data.

Cloud technology is a general term for network technology, information technology, integration technology, management platform technology, application technology, and the like applied based on the cloud computing business model. It can form a resource pool to be used on demand, flexibly and conveniently. Cloud computing technology will become an important support, as the backend services of technical network systems require a large amount of computing and storage resources.

As an example, the server 200 may be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN, and big data and artificial intelligence platforms. The terminal 400 and the server 200 may be directly or indirectly connected through wired or wireless communication, which is not limited in the embodiments of this application.
Referring to FIG. 2A, FIG. 2A is a schematic structural diagram of an electronic device for interaction processing of virtual props provided by an embodiment of this application, described taking the electronic device being a terminal as an example. The electronic device shown in FIG. 2A includes: at least one processor 410, a memory 450, at least one network interface 420, and a user interface 430. The components in the electronic device 400 are coupled together through a bus system 440. It can be understood that the bus system 440 is used to implement connection and communication between these components. In addition to a data bus, the bus system 440 includes a power bus, a control bus, and a status signal bus. However, for clarity of description, the various buses are all labeled as the bus system 440 in FIG. 2A.

The processor 410 may be an integrated circuit chip with signal processing capability, such as a general-purpose processor, a digital signal processor (DSP), or other programmable logic devices, discrete gate or transistor logic devices, or discrete hardware components, where the general-purpose processor may be a microprocessor or any conventional processor.

The user interface 430 includes one or more output apparatuses 431 enabling the presentation of media content, including one or more speakers and/or one or more visual display screens. The user interface 430 also includes one or more input apparatuses 432, including user interface components that facilitate user input, such as a keyboard, mouse, microphone, touch-screen display, camera, and other input buttons and controls.

The memory 450 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid-state memory, hard disk drives, optical disc drives, and the like. The memory 450 includes, for example, one or more storage devices physically located away from the processor 410.

The memory 450 includes volatile memory or non-volatile memory, and may also include both. The non-volatile memory may be a read-only memory (ROM), and the volatile memory may be a random access memory (RAM). The memory 450 described in the embodiments of this application is intended to include any suitable type of memory.

In some embodiments, the memory 450 can store data to support various operations; examples of the data include programs, modules, and data structures, or subsets or supersets thereof, as exemplified below.

Operating system 451, including system programs for handling various basic system services and performing hardware-related tasks, such as a framework layer, core library layer, and driver layer, for implementing various basic services and handling hardware-based tasks;

Network communication module 452, for reaching other computing devices via one or more (wired or wireless) network interfaces 420, exemplary network interfaces 420 including Bluetooth, Wireless Fidelity (WiFi), Universal Serial Bus (USB), and the like;

Presentation module 453, for enabling the presentation of information (e.g., a user interface for operating peripheral devices and displaying content and information) via one or more output apparatuses 431 (e.g., display screens, speakers) associated with the user interface 430;

Input processing module 454, for detecting one or more user inputs or interactions from one of the one or more input apparatuses 432 and translating the detected inputs or interactions.
In some embodiments, the interaction processing apparatus for virtual props provided by the embodiments of this application may be implemented in software. FIG. 2A shows the interaction processing apparatus 455 for virtual props stored in the memory 450, which may be software in the form of programs, plug-ins, and the like, including the following software modules: a presentation module 4551, a response module 4552, a processing module 4553, a detection module 4554, and a timing module 4555. These modules are logical and may therefore be arbitrarily combined or further split according to the functions implemented. The functions of each module are described below.

Referring to FIG. 2B, FIG. 2B is a schematic diagram of the principle of the human-computer interaction engine installed in the interaction processing apparatus for virtual props provided by an embodiment of this application. Taking its application to a game as an example, it may also be called a game engine. A game engine is the core component of pre-written editable computer game systems or interactive real-time graphics applications. These systems provide game designers with the various tools needed to write games, with the aim of allowing game designers to create game programs easily and quickly without starting from scratch. A game engine includes: a rendering engine (i.e., a "renderer", including a 2D image engine and a 3D image engine), a physics engine, an obstacle detection system, sound effects, a scripting engine, computer animation, artificial intelligence, a network engine, and scene management. A game engine is a collection of machine-recognizable code (instructions) designed for machines that run a certain class of games; like an engine, it controls the running of the game. A game program can be divided into two major parts: the game engine and the game resources. Game resources include images, sounds, animation, and so on; that is, game = engine (program code) + resources (images, sounds, animation, etc.), and the game engine calls these resources sequentially according to the requirements of the game design.

The interaction processing method for virtual props provided by the embodiments of this application is implemented by the modules in the interaction processing apparatus for virtual props shown in FIG. 2A invoking the relevant components of the human-computer interaction engine shown in FIG. 2B, as exemplified below.

For example, the presentation module 4551 is used to present at least one idle virtual prop in the virtual scene. The presentation module 4551 invokes the user interface part of the game engine shown in FIG. 2B to implement interaction between the user and the game, invokes the model part of the game engine to make 2D or 3D models, and, after the models are made, assigns material maps to the models according to different faces through the skeletal animation part, which is equivalent to covering the skeleton with skin; finally, all effects such as models, animation, light and shadow, and special effects are computed in real time through the rendering part and displayed on the human-computer interaction interface.

For example, the response module 4552 is used to present the movement process of the first virtual object in the virtual scene in response to a movement operation controlling the first virtual object. The response module 4552 invokes the rendering module of FIG. 2B to perform real-time image computation based on the computed movement trajectory and display it on the human-computer interaction interface.

For example, when the idle virtual prop is located in the direction of the used virtual prop of the first virtual object and there is no obstacle between the idle virtual prop and the used virtual prop, the processing module 4553 presents a pick-up prompt for the idle virtual prop. The processing module 4553 invokes the rendering module of the game engine shown in FIG. 2B; when there is no obstacle between the idle virtual prop and the used virtual prop, the idle virtual prop is rendered through the rendering module and displayed on the human-computer interaction interface.

For example, the detection module 4554 is used to detect the distance between the first virtual object and the idle virtual prop during the moving process; when the distance is less than the distance threshold, obstacle detection is performed between the used virtual prop of the first virtual object and the idle virtual prop. The detection module 4554 invokes the camera part of the game engine shown in FIG. 2B to implement obstacle detection, specifically through a detection ray emitted by a camera bound to the used virtual prop of the first virtual object, where the camera bound to the used virtual prop is configured through the camera part.

For example, the timing module 4555 is used to bind a timer to the idle virtual prop. When the idle virtual prop has not been picked up within the set period of time, the timing module 4555 invokes the rendering module of the game engine shown in FIG. 2B to render the idle virtual prop so as to stop presenting the idle virtual prop in the virtual scene.
As mentioned above, the interaction processing method for virtual props provided by the embodiments of this application can be implemented by various types of electronic devices, such as a terminal. Referring to FIG. 3A, FIG. 3A is a schematic flowchart of the interaction processing method for virtual props provided by an embodiment of this application, described with reference to the steps shown in FIG. 3A.

In the following steps, an idle virtual prop refers to a virtual prop that is idle and not in use, including virtual props initialized in the virtual scene (i.e., virtual props that have not been used by any virtual object) and dropped virtual props (for example, when a virtual object is killed, the virtual props held by that virtual object drop at the position where it was killed, for other virtual objects to pick up).

In step 101, at least one idle virtual prop is presented in the virtual scene.

For example, at the start of a game, the virtual props in the virtual scene are initialized. The virtual props may be randomly placed at various positions in the virtual scene; the initialized virtual props may be placed at the same positions in every match; or they may be placed at different positions in every match (to increase the difficulty of the game, so that the user cannot predict the positions of the virtual props). After initialization, the initialized virtual props (idle virtual props) can be presented in the virtual scene, and the virtual object controlled by the user can pick them up to attack or defend with the picked-up virtual props.
In some embodiments, presenting at least one idle virtual prop in the virtual scene includes: when a second virtual object is attacked in the virtual scene and loses the ability to hold virtual props, using the held virtual props as idle virtual props; and presenting, at the position where the second virtual object is attacked, at least one virtual prop dropped by the second virtual object.

Idle virtual props include virtual props dropped by virtual objects. For example, during a game, when a second virtual object (a virtual object other than the first virtual object, such as an enemy or teammate of the first virtual object) is attacked in the virtual scene and loses the ability to hold virtual props, for example, when a third virtual object (a virtual object other than the second virtual object) kills the second virtual object, or strikes the second virtual object heavily so that it is wounded, the virtual props held by the second virtual object drop at the position where the second virtual object was attacked. Therefore, at the position where the second virtual object was attacked, at least one virtual prop dropped by the second virtual object is presented for virtual objects other than the second virtual object to pick up, especially the third virtual object that attacked the second virtual object, for which the dropped virtual prop serves as a reward.

In some embodiments, presenting at least one idle virtual prop in the virtual scene includes: when the second virtual object actively discards at least one held virtual prop in the virtual scene, using the held virtual prop as an idle virtual prop; and presenting the at least one actively discarded virtual prop at the position where the second virtual object discarded the held virtual prop.

Idle virtual props include virtual props discarded by virtual objects. For example, during a game, when a virtual object can hold only a fixed number of virtual props, the virtual object may choose to actively discard held virtual props in order to hold enough useful ones. For example, when the second virtual object (a virtual object other than the first virtual object, such as an enemy or teammate of the first virtual object) does not want a held prop whose performance is too low, it may choose to actively discard at least one held virtual prop, which then drops at the discard position, and the discarded held virtual prop becomes an idle virtual prop. Therefore, at the position where the second virtual object discarded the held virtual prop, the at least one actively discarded virtual prop is presented, for virtual objects other than the second virtual object to pick up, while also enabling the second virtual object to pick up other virtual props.

In some embodiments, presenting at least one idle virtual prop in the virtual scene includes: when a teammate of the first virtual object places at least one held virtual prop at a placement position in the virtual scene, using the held virtual prop as an idle virtual prop, wherein the idle virtual prop is available for the first virtual object to pick up; and presenting the at least one idle virtual prop placed by the teammate at the placement position in the map of the virtual scene.

Idle virtual props include virtual props placed by virtual objects. For example, during a game, to realize teamwork, when a virtual object wants to give teammates a virtual prop it holds, the virtual object may choose to place the held virtual prop somewhere in the virtual scene for teammates to pick up.

For example, as shown in FIG. 4, FIG. 4 is a schematic interface diagram of the interaction processing method for virtual props provided by an embodiment of this application. A fourth virtual object (a teammate of the first virtual object) places at least one held virtual prop at a placement position 401 in the virtual scene. The held virtual prop serves as an idle virtual prop, available for the first virtual object 402 and any virtual object on the same team to pick up, and the idle virtual prop is presented at the placement position 401 in the map 403 of the virtual scene (a guide map formed by shrinking the virtual scene of the entire game). Thus, the first virtual object and any virtual object on the same team can check, through the map 403 of the virtual scene, where the fourth virtual object placed the virtual prop, and quickly reach the placement position to pick it up.
In step 102, in response to a movement operation controlling the first virtual object, the first virtual object is controlled to move in the virtual scene.

The first virtual object may be an object controlled by a user in a game or military simulation. Of course, the virtual scene may also include other virtual objects, which may be controlled by other users or by robot programs. Virtual objects may be divided into multiple teams, which may be in adversarial or cooperative relationships, and the teams in the virtual scene may include one or both of the above relationships.

The user's movement operation on the first virtual object can control the first virtual object to move, flip, jump, and so on in the virtual scene. The movement operation on the first virtual object is received through the human-computer interaction interface, thereby controlling the first virtual object to move in the virtual scene, and during the movement the content presented in the human-computer interaction interface changes with the movement of the first virtual object.

When the movement process of the first virtual object in the virtual scene is displayed in the human-computer interaction interface, the field-of-view region of the viewing object is determined according to the viewing position and field-of-view angle of the viewing user in the complete virtual scene, and the part of the virtual scene located in the field-of-view region is presented; that is, the displayed virtual scene may be a partial virtual scene relative to the panoramic virtual scene.

For example, taking a user wearing a virtual reality device as an example, referring to FIG. 5A, FIG. 5A is a schematic interface diagram of virtual reality provided by an embodiment of this application. The viewing user (i.e., the real user) can perceive, through the lenses in the virtual reality device, the part 502 of the virtual scene 501 located in the field-of-view region. The virtual reality device is provided with a posture detection sensor (e.g., a nine-axis sensor) for real-time detection of posture changes of the virtual reality device. If the user wears the virtual reality device, when the user's head posture changes, the real-time posture of the head is transmitted to the processor to calculate the gaze point of the user's line of sight in the virtual scene. According to the gaze point, the image in the three-dimensional model of the virtual scene that is within the user's gaze range (i.e., the field-of-view region) is calculated and displayed on the display screen, giving an immersive experience as if viewing in a real environment. For other types of virtual reality devices, such as mobile virtual reality devices (PCVR), the principle of realizing visual perception is similar to the above, except that PCVR and mobile virtual reality devices do not themselves integrate a processor for the relevant computation and do not have independent virtual reality input and output functions.

Taking the user controlling the first virtual object 503 in the virtual scene as an example, that is, the viewing user being the first virtual object 503, referring to FIG. 5B, FIG. 5B is a schematic interface diagram of virtual reality provided by an embodiment of this application. By controlling the viewing position and field-of-view angle of the first virtual object 503 in the complete virtual scene 501, the user controls the first virtual object 503 to perform movement operations, such as running and crouching, and the movement process of the first virtual object 503 in the virtual scene is presented.
In step 103, when the idle virtual prop is located in the direction of the used virtual prop of the first virtual object and there is no obstacle between the idle virtual prop and the used virtual prop, a pick-up prompt for the idle virtual prop is presented.

The embodiments of this application found that, due to a loophole in the pick-up logic, a virtual object could automatically pick up, across an obstacle, virtual props that should not be picked up; for example, when the idle virtual prop and the first virtual object are separated by a wall, the first virtual object could still automatically pick up the idle virtual prop.

To fix this loophole in the pick-up logic, it is possible to detect whether there is an obstacle between the idle virtual prop and the used virtual prop of the first virtual object. When the idle virtual prop is located in the direction of the used virtual prop of the first virtual object and there is an obstacle between them, the first virtual object is facing the obstacle and cannot pick up the prop, so an unpickable prompt for the idle virtual prop is presented in the virtual scene to indicate that the obstacle must be bypassed before the idle virtual prop can be picked up. When the idle virtual prop is located in the direction of the used virtual prop of the first virtual object and there is no obstacle between them, the first virtual object is facing the idle virtual prop and can pick it up normally, so a pick-up prompt for the idle virtual prop is presented in the virtual scene to indicate that the idle virtual prop can be picked up.
Referring to FIG. 3B, FIG. 3B is an optional schematic flowchart of the interaction processing method for virtual props provided by an embodiment of this application. FIG. 3B shows that FIG. 3A further includes steps 105-106: in step 105, the distance between the first virtual object and the idle virtual prop during the moving process is detected; in step 106, when the distance is less than a distance threshold, obstacle detection is performed between the used virtual prop of the first virtual object and the idle virtual prop.

For example, to fix the loophole in the pick-up logic, the distance between the first virtual object and the idle virtual prop during the moving process may first be detected, for example, by determining the first coordinates of the first virtual object in the virtual scene and the second coordinates of the idle virtual prop in the virtual scene, and then determining the distance between the first virtual object and the idle virtual prop from the distance between the first coordinates and the second coordinates. When the distance is less than the distance threshold, the first virtual object is close enough to the idle virtual prop to be able to pick it up. Then, obstacles between the used virtual prop of the first virtual object and the idle virtual prop are detected. When an obstacle is detected between the idle virtual prop and the used virtual prop, the first virtual object is facing an obstacle and, even if close enough to the idle virtual prop, cannot pick up the prop.

In some embodiments, before the pick-up prompt for the idle virtual prop is presented, obstacle detection is performed between the used virtual prop of the first virtual object and the idle virtual prop based on each real-time position of the first virtual object during the moving process.

For example, to fix the loophole in the pick-up logic, obstacle detection may be performed between the used virtual prop of the first virtual object and the idle virtual prop at each real-time position of the first virtual object during the moving process. When an obstacle is detected between the idle virtual prop and the used virtual prop, there is an obstacle in front of the first virtual object and the prop cannot be picked up.

In some embodiments, performing obstacle detection between the used virtual prop of the first virtual object and the idle virtual prop includes: emitting a detection ray at the position of the used virtual prop through a camera component bound to the used virtual prop, wherein the orientation of the detection ray is consistent with that of the used virtual prop; and determining, based on the detection ray, whether there is an obstacle between the used virtual prop and the idle virtual prop.

For example, through the camera component on the used virtual prop of the first virtual object, a detection ray whose orientation is consistent with that of the used virtual prop is emitted from the position of the used virtual prop, and the detection ray is used to determine whether there is an obstacle between the used virtual prop and the idle virtual prop, that is, between the first virtual object and the idle virtual prop. When the detection ray intersects a collider component (e.g., a collision box or collision sphere) bound to an obstacle (e.g., an object that obstructs the movement of virtual objects, such as a wall or an oil drum), there is an obstacle between the used virtual prop and the idle virtual prop, that is, between the first virtual object and the idle virtual prop; when the detection ray does not intersect the collider component bound to the obstacle, there is no obstacle between the used virtual prop and the idle virtual prop, that is, between the first virtual object and the idle virtual prop.

In addition, through the camera component bound to the used virtual prop, a detection ray may be emitted from the position of the used virtual prop with its end point at the position of the idle virtual prop, and whether there is an obstacle between the used virtual prop and the idle virtual prop is determined based on the detection ray; for example, if the detection ray passes through an obstacle, there is an obstacle between the first virtual object and the idle virtual prop; if it does not pass through an obstacle, there is no obstacle between the first virtual object and the idle virtual prop.
In some embodiments, presenting the pick-up prompt for the idle virtual prop when the idle virtual prop is located in the direction of the used virtual prop of the first virtual object and there is no obstacle between the idle virtual prop and the used virtual prop includes: when a plurality of idle virtual props are located in the direction of the used virtual prop of the first virtual object and there is no obstacle between them and the used virtual prop, presenting a pick-up prompt for a part of the plurality of idle virtual props.

For example, referring to FIG. 6, FIG. 6 is a schematic interface diagram of multiple idle virtual props provided by an embodiment of this application. In the virtual scene, idle virtual prop 601, idle virtual prop 602, and idle virtual prop 603 are all located in the direction of the used virtual prop 604 of the first virtual object, and there is no obstacle between them and the used virtual prop 604. A pick-up prompt for some of the idle virtual props is presented in the virtual scene, for example, a pick-up control for idle virtual prop 601.

Referring to FIG. 3C, FIG. 3C is an optional schematic flowchart of the interaction processing method for virtual props provided by an embodiment of this application. FIG. 3C shows that step 103 in FIG. 3A can be implemented through steps 1031-1032: in step 1031, the following processing is performed for any idle virtual prop among the plurality of idle virtual props: acquiring the distance between the first virtual object and the idle virtual prop during the moving process; in step 1032, the distances between the plurality of idle virtual props and the first virtual object are sorted, and the idle virtual prop corresponding to the smallest distance is selected for the pick-up prompt.

For example, referring to FIG. 6, among idle virtual props 601, 602, and 603 in the virtual scene, idle virtual prop 601 has the smallest distance to the used virtual prop, that is, the smallest distance to the first virtual object, so a pick-up prompt, i.e., a pick-up control, for idle virtual prop 601 is presented in the virtual scene, indicating that idle virtual prop 601 can be picked up. Through the distance between the first virtual object and the idle virtual props, the idle virtual prop closest to the first virtual object is selected from the plurality of idle virtual props for the pick-up prompt, so that the first virtual object can pick up the idle virtual prop as quickly as possible, preventing it from being picked up by other virtual objects.
In some embodiments, presenting the pick-up prompt for a part of the plurality of idle virtual props includes: performing the following processing for any idle virtual prop among the plurality of idle virtual props: obtaining, based on the usage preference of the first virtual object for virtual props, the degree of matching between the idle virtual prop and the usage preference; sorting the degrees of matching between the plurality of idle virtual props and the usage preference; and selecting the idle virtual prop corresponding to the highest degree of matching for the pick-up prompt.

For example, referring to FIG. 6, when there are multiple idle virtual props in the virtual scene, such as idle virtual props 601, 602, and 603, the usage preference of the first virtual object, that is, the first virtual object's taste in virtual props, is predicted through a neural network model combined with the virtual props historically used by the first virtual object. Based on the usage preference of the first virtual object, the degrees of matching between idle virtual props 601, 602, and 603 and the usage preference are respectively determined, and idle virtual prop 601, with the highest degree of matching among the multiple degrees of matching, is selected for the pick-up prompt, thereby selecting, from the plurality of idle virtual props, the one the first virtual object likes best.

In some embodiments, presenting the pick-up prompt for a part of the plurality of idle virtual props includes: performing the following processing for any idle virtual prop among the plurality of idle virtual props: obtaining the frequency with which the idle virtual prop is used by other virtual objects; sorting the frequencies with which the plurality of idle virtual props are used by other virtual objects; and selecting the idle virtual prop corresponding to the highest frequency for the pick-up prompt.

For example, referring to FIG. 6, among idle virtual props 601, 602, and 603 in the virtual scene, idle virtual prop 601 is used most frequently by other virtual objects (virtual objects other than the first virtual object), that is, it is often used by other virtual objects and has a high usage rate, so a pick-up prompt, i.e., a pick-up control, for idle virtual prop 601 is presented in the virtual scene, indicating that idle virtual prop 601 can be picked up. Through the usage rate of the idle virtual props, the most-used idle virtual prop is selected from the plurality of idle virtual props for the pick-up prompt, indicating that this idle virtual prop is relatively useful and has a certain use value, so that the first virtual object picks up a valuable idle virtual prop, which benefits the first virtual object in battle.

In some embodiments, presenting the pick-up prompt for a part of the plurality of idle virtual props includes: performing the following processing for any idle virtual prop among the plurality of idle virtual props: obtaining the performance parameters of the idle virtual prop in the virtual scene; sorting the performance parameters of the plurality of idle virtual props in the virtual scene; and selecting the idle virtual prop corresponding to the highest performance parameter for the pick-up prompt.

For example, referring to FIG. 6, among idle virtual props 601, 602, and 603 in the virtual scene, idle virtual prop 601 has the highest performance parameters (e.g., combat value, defense value) in the virtual scene, so a pick-up prompt, i.e., a pick-up control, for idle virtual prop 601 is presented in the virtual scene, indicating that idle virtual prop 601 can be picked up. Through the performance parameters of the idle virtual props, the one with the highest performance parameters is selected from the plurality of idle virtual props for the pick-up prompt, indicating that this idle virtual prop is relatively useful and has a certain use value, so that the first virtual object picks up a valuable idle virtual prop, which benefits the first virtual object in battle.

It should be noted that the following processing may also be performed for any idle virtual prop among the plurality of idle virtual props: obtaining the virtual currency occupied by the idle virtual prop in the virtual scene; sorting the virtual currency occupied by the plurality of idle virtual props in the virtual scene; and selecting the idle virtual prop corresponding to the largest amount of virtual currency for the pick-up prompt, so that the first virtual object picks up the idle virtual prop of the largest virtual currency to obtain the maximum benefit in the virtual scene.
In some embodiments, presenting the pick-up prompt for a part of the plurality of idle virtual props includes: performing the following processing for any idle virtual prop among the plurality of idle virtual props: obtaining the types of the held virtual props of the first virtual object; and, when the type of the idle virtual prop is different from the types of the held virtual props, presenting a pick-up prompt for the idle virtual prop.

For example, referring to FIG. 6, among idle virtual props 601, 602, and 603 in the virtual scene, the types of virtual props are diverse, for example, including shooting, throwing, defense, and attack types. Idle virtual prop 601 is a defense-type virtual prop, idle virtual prop 602 is a shooting-type virtual prop, and idle virtual prop 603 is a throwing-type virtual prop, and the types of the held virtual props of the first virtual object include shooting, throwing, and attack types; then the type of idle virtual prop 601 is different from the types of the held virtual props, so a pick-up prompt, i.e., a pick-up control, for idle virtual prop 601 is presented in the virtual scene, indicating that idle virtual prop 601 can be picked up. Through the types of the held virtual props, the virtual prop the first virtual object lacks is selected from the plurality of idle virtual props, so that the first virtual object picks up the missing idle virtual prop and possesses all types of virtual props, allowing it to battle in a more all-around way.

In some embodiments, presenting the pick-up prompt for a part of the plurality of idle virtual props includes: performing the following processing for any idle virtual prop among the plurality of idle virtual props: obtaining the types of the held virtual props of the team of the first virtual object; and, when the type of the idle virtual prop is different from the types of the held virtual props, presenting a pick-up prompt for the idle virtual prop.

For example, referring to FIG. 6, among idle virtual props 601, 602, and 603 in the virtual scene, the types of virtual props are diverse, for example, including shooting, throwing, defense, and attack types. Idle virtual prop 601 is a defense-type virtual prop, idle virtual prop 602 is a shooting-type virtual prop, and idle virtual prop 603 is a throwing-type virtual prop, and the types of the held virtual props of the team of the first virtual object include shooting, throwing, and attack types; then the type of idle virtual prop 601 is different from the types of the held virtual props, so a pick-up prompt, i.e., a pick-up control, for idle virtual prop 601 is presented in the virtual scene, indicating that idle virtual prop 601 can be picked up. Through the types of the held virtual props, the virtual prop the team lacks is selected from the plurality of idle virtual props, so that the first virtual object picks up the missing idle virtual prop and the team of the first virtual object possesses all types of virtual props, which benefits team combat.

In some embodiments, presenting the pick-up prompt for a part of the plurality of idle virtual props includes: performing the following processing for any idle virtual prop among the plurality of idle virtual props: obtaining the role assigned to the first virtual object in the team; and, when the type of the idle virtual prop matches the role, presenting a pick-up prompt for the idle virtual prop.

For example, referring to FIG. 6, among idle virtual props 601, 602, and 603 in the virtual scene, the types of virtual props are diverse, for example, including shooting, throwing, defense, and attack types. Idle virtual prop 601 is a defense-type virtual prop, idle virtual prop 602 is a shooting-type virtual prop, and idle virtual prop 603 is a throwing-type virtual prop, and the role assigned to the first virtual object in the team is a shooter; then the type of idle virtual prop 601 matches the role of the first virtual object, so a pick-up prompt, i.e., a pick-up control, for idle virtual prop 601 is presented in the virtual scene, indicating that idle virtual prop 601 can be picked up. Through the role assigned to the first virtual object in the team, the virtual prop matching the role of the first virtual object is selected from the plurality of idle virtual props, so that the first virtual object picks up the matching idle virtual prop and has sufficient virtual props for game battle.

In some embodiments, presenting the pick-up prompt for the idle virtual prop includes: presenting the pick-up prompt for the idle virtual prop through a target display style, wherein the target display style indicates that the idle virtual prop is in a pickable state.

For example, after it is determined that the idle virtual prop can be picked up, the pick-up prompt for the idle virtual prop may be displayed through a target display style, where the target display style includes distinguishing presentation styles such as highlighting, flashing, and different colors (with the corresponding display color determined according to the function of the idle virtual prop), to highlight that the idle virtual prop is in a pickable state.
In step 104, in response to a pick-up operation controlling the first virtual object, the first virtual object is controlled to pick up the idle virtual prop.

For example, after the pick-up prompt for the idle virtual prop is presented, the user can control the first virtual object to perform a pick-up operation, so that the process of the first virtual object picking up the idle virtual prop is presented in the virtual scene. For example, the user controls the first virtual object to approach the idle virtual prop and clicks the pick-up control; the first virtual object then assumes a crouching posture to pick up the idle virtual prop, and the currently used virtual prop is replaced with the idle virtual prop.

In some embodiments, a timer is bound to the idle virtual prop, where the timer starts timing when the idle virtual prop is presented in the virtual scene; then, after the at least one idle virtual prop in the virtual scene is presented, when it is determined through the timer that the idle virtual prop has not been picked up within a set period of time, the presentation of the idle virtual prop in the virtual scene is cancelled.

For example, when an enemy is killed by the first virtual object, the enemy drops the virtual props it holds, and a timer is bound to the dropped virtual prop. If the dropped virtual prop is not picked up within the set period of time, the presentation of the idle virtual prop in the virtual scene is cancelled. For example, if a dropped bow and arrow is not picked up within one hour, its presentation in the virtual scene is cancelled, and none of the virtual objects in the virtual scene can pick it up.

In addition, different virtual props may have different set periods of time. For example, the corresponding set period of time is determined according to factors such as the type, performance parameters, and occupied virtual currency of the idle virtual prop; that is, the more valuable the virtual prop, the longer its set period of time. For example, idle virtual prop 1 has a combat value of 2000 and a set period of two hours; if idle virtual prop 1 is not picked up within two hours, its presentation in the virtual scene is cancelled. Idle virtual prop 2 has a combat value of 1000 and a set period of one hour; if idle virtual prop 2 is not picked up within one hour, its presentation in the virtual scene is cancelled.
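The value-dependent despawn window above can be sketched as follows. The linear mapping from combat value to duration (3.6 seconds per point, so a combat value of 2000 yields two hours and 1000 yields one hour) is an assumed illustration consistent with the example values, not a mapping specified by the embodiment:

```python
def despawn_deadline(spawn_time, combat_value, seconds_per_point=3.6):
    """More valuable props stay longer: the timer bound to the idle
    prop starts at spawn_time, and the allowed window grows linearly
    with the prop's combat value (assumed mapping)."""
    return spawn_time + combat_value * seconds_per_point

def should_despawn(now, spawn_time, combat_value):
    """True when the set period has elapsed without a pick-up, at which
    point the prop stops being presented in the virtual scene."""
    return now >= despawn_deadline(spawn_time, combat_value)
```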
下面,将说明本申请实施例提供的虚拟道具的交互处理方法在游戏的应用场景中的示例性应用。
本申请实施例对物品拾取的功能做优化,使得虚拟对象无法穿墙拾取(即在玩家视线无法看到的时候)不会显示拾取控件,否则会出现明明看不到物品,却显示出了一个物品拾取的拾取控件,这种体验感也是非常差的。
本申请实施例通过以下方式对物品拾取的功能进行优化:1)穿墙时候不显示物品可拾取控件;2)判断可否拾取的检测逻辑的优化;3)按照先近后远的拾取顺序拾取。下面具体说明物品拾取的功能的优化方式:
1)穿墙时候不显示物品可拾取控件
如图7所示,图7是本申请实施例提供的掉落武器的界面示意图,在游戏中,每次击杀了一个敌方,则该敌方701就会在其击杀的位置上掉落其当前使用的武器702。如图8所示,图8是本申请实施例提供的掉落武器的界面示意图,当用户控制的虚拟对象靠近地面上掉落的武器702时,则在虚拟场景中显示该武器信息,并且点击可拾取控件801,就可以拾取该武器702。
However, as shown in FIG. 9, which is a schematic interface diagram of a dropped weapon provided by an embodiment of this application, a situation can arise during the game in which, after an enemy is killed, the enemy's weapon 702 happens to fall on the left side of a wall 901 while the user-controlled virtual object 902 is on the right side of the wall 901.
In this case, when the user-controlled virtual object 902 approaches the wall 901, it is determined whether there is an obstacle between the weapon 702 and the virtual object 902. When there is an obstacle between the weapon 702 and the virtual object 902, the pickup control is not displayed and the weapon 702 cannot be picked up; the user-controlled virtual object 902 must go around to the left side of the wall to pick up the weapon 702.
2) Optimization of the detection logic for determining whether an item can be picked up
In the related art, a collision box is bound to the dropped weapon, which causes the following problems:
a) the weapon can be picked up through a wall;
b) constrained by the collision box, the dropped weapon can only be detected upon entering or exiting the collision box. If the weapon happens to drop exactly at the virtual object's position, the virtual object is already inside the weapon's collision box and must first exit and then re-enter the collision box before the logic that displays the pickup control can be triggered;
c) a weapon dropped on an upper floor may be picked up through the floor by a virtual object on the floor below.
To optimize the pickup detection logic, the embodiments of this application may bind no collision box to the dropped weapon and instead detect whether there is an obstacle between the virtual object and the dropped weapon, thereby resolving the above problems caused by collision boxes.
3) Picking up in near-to-far order
As shown in FIG. 10, which is a schematic interface diagram of dropped weapons provided by an embodiment of this application, when multiple dropped weapons (weapon 1002 and weapon 1003) exist at the same position 1001, if the user-controlled virtual object approaches position 1001 and a weapon can be picked up, the weapon 1002 that is closer to the virtual object is displayed first.
To facilitate understanding of the embodiments of this application, FIG. 11 is a schematic flowchart of the interaction processing of virtual props provided by an embodiment of this application; the description proceeds with reference to the steps shown in FIG. 11.
Step 1101: After the game starts, the user-controlled virtual object may seek a target (an enemy) to kill. During the game, a target must be killed for a weapon to drop. The killed target is not necessarily an enemy; it may also be a teammate (if a teammate is killed, the dropped weapon is displayed as having been dropped by the teammate).
Step 1102: When a target dies on the scene, the user-controlled virtual object may move closer to the dropped weapon. Since no collision box is mounted on the dropped weapon, pickup here is not triggered by a collision response but is computed mathematically from positions and distances.
As shown in FIG. 12, which is a schematic interface diagram of distance calculation provided by an embodiment of this application, point 1201 denotes the position of the dropped weapon and R denotes the pickable radius. The distance D between point 1201 and the virtual object 1202 is then calculated. If the distance D is less than R, the virtual object 1202 is within pickable range, i.e., close to the dropped weapon; if the distance D is greater than R, the virtual object 1202 is outside pickable range, i.e., far from the dropped weapon.
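The distance test of FIG. 12 reduces to comparing the player-to-weapon distance D against the pickable radius R. A minimal sketch, using a 2D simplification and illustrative names:

```python
import math

def within_pickup_range(weapon_pos, player_pos, radius):
    """Return True when the distance D between weapon and player is below R."""
    d = math.dist(weapon_pos, player_pos)  # Euclidean distance (Python 3.8+)
    return d < radius

# Weapon at the origin, pickable radius R = 3: a player at (1, 2) is in
# range (D ≈ 2.24), while a player at (3, 4) is not (D = 5).
in_range = within_pickup_range((0.0, 0.0), (1.0, 2.0), 3.0)
```

Because this is a pure distance comparison rather than a trigger volume, it works even when the weapon drops exactly at the player's position.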
Step 1103: Next, it is determined whether there is an obstacle between the virtual object and the dropped weapon. The detection is performed by casting a detection ray from the muzzle position of the weapon the virtual object is using, with the dropped weapon as the end point. As shown in FIG. 13, which is a schematic interface diagram of obstacle detection provided by an embodiment of this application, segment 1301 is the detection ray; when the ray intersects an obstacle, such as wall 1302, the obstacle is detected. Therefore, even when the distance check is satisfied, the dropped weapon cannot be picked up if an obstacle is detected.
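A simplified 2D sketch of this obstacle check: cast a detection segment from the muzzle to the dropped weapon and report an obstacle when that segment crosses a wall segment. A real engine would raycast against collider components; this standalone stand-in uses the classic orientation test for segment intersection, and all names are illustrative.

```python
def _orient(a, b, c):
    """Sign of the cross product (b - a) x (c - a)."""
    v = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    return (v > 0) - (v < 0)

def segments_cross(p1, p2, q1, q2):
    """True when segment p1-p2 properly crosses segment q1-q2."""
    return (_orient(p1, p2, q1) != _orient(p1, p2, q2)
            and _orient(q1, q2, p1) != _orient(q1, q2, p2))

def obstacle_blocks_pickup(muzzle, weapon, wall):
    """Detection ray from muzzle to weapon; wall is a (start, end) pair."""
    return segments_cross(muzzle, weapon, wall[0], wall[1])
```

A wall between the muzzle and the weapon blocks pickup even when the distance check of step 1102 passes; a wall off to the side does not.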
Step 1104: If multiple dropped weapons appear within the same range, the distances between all dropped weapons and the virtual object are calculated, and the pickup control of the dropped weapon closest to the virtual object is displayed.
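The nearest-first selection of step 1104 can be sketched as follows; the weapon record layout is an assumption for illustration:

```python
import math

def nearest_weapon(weapons, player_pos):
    """Return the id of the dropped weapon closest to the player."""
    return min(weapons, key=lambda w: math.dist(w["pos"], player_pos))["id"]

# Weapons 1002 and 1003 at the same spot; 1002 is closer to the player.
weapons = [{"id": 1002, "pos": (1.0, 0.0)}, {"id": 1003, "pos": (3.0, 0.0)}]
nearest = nearest_weapon(weapons, (0.0, 0.0))
```

Only the weapon returned here would have its pickup control displayed.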
Step 1105: When the user taps the pickup control, a pickup request (pickup protocol message) is sent to the server. The server verifies whether the current weapon has already been picked up or has disappeared; if it determines that the current weapon can be picked up, it returns pickup success, and the weapon used by the virtual object is replaced with the current weapon.
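The server-side validation of step 1105 can be sketched with an in-memory model of the live dropped weapons. This dict-based model, and the field names, are assumptions for illustration; the point is that the authoritative check (already taken? already expired?) happens on the server before the swap.

```python
def handle_pickup(live_weapons, weapon_id, player):
    """Validate a pickup request; on success, equip and retire the weapon."""
    weapon = live_weapons.get(weapon_id)
    if weapon is None or weapon.get("taken"):
        return False  # already picked up, or expired/disappeared
    weapon["taken"] = True
    player["weapon"] = weapon["name"]  # replace the currently used weapon
    return True

live = {7: {"name": "bow", "taken": False}}
player = {"weapon": "pistol"}
```

A second request for the same weapon, or a request for an unknown id, is rejected, which keeps two players from claiming one drop.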
Step 1106: If a dropped weapon is not picked up within a certain time, the dropped weapon disappears from the game.
Thus, the embodiments of this application optimize the item pickup function so that a virtual object cannot pick up items through walls (i.e., the pickup control is not displayed when the item is outside the player's line of sight), improving the user experience.
The interaction processing method for virtual props provided by the embodiments of this application has now been described with reference to the exemplary application and implementation of the terminal provided by the embodiments of this application. The following continues to describe how the modules of the interaction processing apparatus 455 for virtual props provided by the embodiments of this application cooperate to implement the interaction processing of virtual props.
The presentation module 4551 is configured to present at least one idle virtual prop in a virtual scene. The response module 4552 is configured to, in response to a movement operation controlling a first virtual object, control the first virtual object to move in the virtual scene. The processing module 4553 is configured to, when the idle virtual prop is located in the direction of the used virtual prop of the first virtual object and no obstacle exists between the idle virtual prop and the used virtual prop, present a pickable prompt for the idle virtual prop. The response module 4552 is further configured to, in response to a pickup operation controlling the first virtual object, control the first virtual object to pick up the idle virtual prop.
In some embodiments, the interaction processing apparatus 455 for virtual props further includes a detection module 4554, configured to detect the distance between the first virtual object and the idle virtual prop during movement, and, when the distance is less than a distance threshold, perform obstacle detection between the used virtual prop of the first virtual object and the idle virtual prop.

In some embodiments, the detection module 4554 is further configured to perform obstacle detection between the used virtual prop of the first virtual object and the idle virtual prop based on each real-time position of the first virtual object during movement.

In some embodiments, the detection module 4554 is further configured to emit a detection ray from the position of the used virtual prop via a camera component bound to the used virtual prop, where the detection ray is consistent with the direction of the used virtual prop, and to determine, based on the detection ray, whether an obstacle exists between the used virtual prop and the idle virtual prop.

In some embodiments, the detection module 4554 is further configured to determine that an obstacle exists between the used virtual prop and the idle virtual prop when the detection ray intersects a collider component bound to an obstacle, and to determine that no obstacle exists between the used virtual prop and the idle virtual prop when the detection ray does not intersect a collider component bound to an obstacle.
In some embodiments, the processing module 4553 is further configured to present a non-pickable prompt for the idle virtual prop when the idle virtual prop is located in the direction of the used virtual prop of the first virtual object and an obstacle exists between the idle virtual prop and the used virtual prop.

In some embodiments, the processing module 4553 is further configured to present pickable prompts for some of multiple idle virtual props when the multiple idle virtual props are located in the direction of the used virtual prop of the first virtual object and none of them has an obstacle between itself and the used virtual prop.

In some embodiments, the processing module 4553 is further configured to perform the following processing for any one of the multiple idle virtual props: obtaining the distance between the first virtual object and the idle virtual prop during movement; and sorting the distances between the multiple idle virtual props and the first virtual object, and presenting a pickable prompt for the idle virtual prop corresponding to the smallest distance.

In some embodiments, the processing module 4553 is further configured to perform the following processing for any one of the multiple idle virtual props: obtaining, based on the first virtual object's usage preference for virtual props, the degree of match between the idle virtual prop and the usage preference; and sorting the degrees of match between the multiple idle virtual props and the usage preference, and presenting a pickable prompt for the idle virtual prop corresponding to the highest degree of match.

In some embodiments, the processing module 4553 is further configured to perform the following processing for any one of the multiple idle virtual props: obtaining the frequency with which the idle virtual prop is used by other virtual objects; and sorting the frequencies with which the multiple idle virtual props are used by other virtual objects, and presenting a pickable prompt for the idle virtual prop corresponding to the highest frequency.

In some embodiments, the processing module 4553 is further configured to perform the following processing for any one of the multiple idle virtual props: obtaining the performance parameter of the idle virtual prop in the virtual scene; and sorting the performance parameters of the multiple idle virtual props in the virtual scene, and presenting a pickable prompt for the idle virtual prop corresponding to the largest performance parameter.

In some embodiments, the processing module 4553 is further configured to perform the following processing for any one of the multiple idle virtual props: obtaining the types of virtual props held by the first virtual object; and when the type of the idle virtual prop differs from the types of the held virtual props, presenting a pickable prompt for the idle virtual prop.

In some embodiments, the processing module 4553 is further configured to perform the following processing for any one of the multiple idle virtual props: obtaining the role assigned to the first virtual object within the team; and when the type of the idle virtual prop matches the role, presenting a pickable prompt for the idle virtual prop.

In some embodiments, the processing module 4553 is further configured to present the pickable prompt for the idle virtual prop in a target display style, where the target display style indicates that the idle virtual prop is in a pickable state.
In some embodiments, the interaction processing apparatus 455 for virtual props further includes a timing module 4555, configured to bind a timer to the idle virtual prop, where the timer starts counting when the idle virtual prop is presented in the virtual scene, and to stop presenting the idle virtual prop in the virtual scene when the timer determines that the idle virtual prop has not been picked up within a set time period.

In some embodiments, the presentation module 4551 is further configured to, when a second virtual object is attacked in the virtual scene and loses the ability to hold virtual props, present, at the position where the second virtual object was attacked, at least one virtual prop dropped by the second virtual object.

In some embodiments, the presentation module 4551 is further configured to, when a second virtual object actively discards at least one held virtual prop in the virtual scene, treat the held virtual prop as an idle virtual prop, and present the at least one actively discarded virtual prop at the position where the second virtual object discarded the held virtual prop.

In some embodiments, the presentation module 4551 is further configured to, when a teammate of the first virtual object places at least one held virtual prop at a placement position in the virtual scene, treat the held virtual prop as an idle virtual prop, where the idle virtual prop is available for the first virtual object to pick up, and present the at least one idle virtual prop placed by the teammate at the placement position on the map of the virtual scene.
The embodiments of this application provide a computer program product or computer program including computer instructions stored in a computer-readable storage medium. A processor of an electronic device reads the computer instructions from the computer-readable storage medium and executes them, causing the electronic device to perform the interaction processing method for virtual props described above.
The embodiments of this application provide a computer-readable storage medium storing executable instructions which, when executed by a processor, cause the processor to perform the interaction processing method for virtual props provided by the embodiments of this application, for example, the interaction processing method shown in FIGS. 3A-3C.
In some embodiments, the computer-readable storage medium may be a memory such as FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, optical disc, or CD-ROM, or may be any device including one of or any combination of the above memories.
In some embodiments, the executable instructions may take the form of a program, software, software module, script, or code, written in any form of programming language (including compiled or interpreted languages, or declarative or procedural languages), and may be deployed in any form, including as a standalone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
As an example, the executable instructions may, but need not, correspond to a file in a file system; they may be stored as part of a file that holds other programs or data, for example, in one or more scripts in a HyperText Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files storing one or more modules, subprograms, or code sections).
As an example, the executable instructions may be deployed to be executed on one electronic device, on multiple electronic devices at one site, or on multiple electronic devices distributed across multiple sites and interconnected by a communication network.
The above descriptions are merely embodiments of this application and are not intended to limit its scope of protection. Any modification, equivalent replacement, or improvement made within the spirit and scope of this application shall fall within the scope of protection of this application.

Claims (21)

  1. An interaction processing method for virtual props, comprising:
    presenting at least one idle virtual prop in a virtual scene;
    in response to a movement operation controlling a first virtual object, controlling the first virtual object to move in the virtual scene;
    when the idle virtual prop is located in a direction of a used virtual prop of the first virtual object and no obstacle exists between the idle virtual prop and the used virtual prop, presenting a pickable prompt for the idle virtual prop; and
    in response to a pickup operation controlling the first virtual object, controlling the first virtual object to pick up the idle virtual prop.
  2. The method according to claim 1, wherein before the presenting a pickable prompt for the idle virtual prop, the method further comprises:
    detecting a distance between the first virtual object and the idle virtual prop during movement; and
    when the distance is less than a distance threshold, performing obstacle detection between the used virtual prop of the first virtual object and the idle virtual prop.
  3. The method according to claim 1, wherein before the presenting a pickable prompt for the idle virtual prop, the method further comprises:
    performing obstacle detection between the used virtual prop of the first virtual object and the idle virtual prop based on each real-time position of the first virtual object during movement.
  4. The method according to claim 2 or 3, wherein the performing obstacle detection between the used virtual prop of the first virtual object and the idle virtual prop comprises:
    emitting a detection ray from a position of the used virtual prop via a camera component bound to the used virtual prop, wherein the detection ray is consistent with the direction of the used virtual prop; and
    determining, based on the detection ray, whether an obstacle exists between the used virtual prop and the idle virtual prop.
  5. The method according to claim 4, wherein the determining, based on the detection ray, whether an obstacle exists between the used virtual prop and the idle virtual prop comprises:
    when the detection ray intersects a collider component bound to an obstacle, determining that the obstacle exists between the used virtual prop and the idle virtual prop; and
    when the detection ray does not intersect a collider component bound to an obstacle, determining that no obstacle exists between the used virtual prop and the idle virtual prop.
  6. The method according to claim 1, wherein after the controlling the first virtual object to move in the virtual scene, the method further comprises:
    when the idle virtual prop is located in the direction of the used virtual prop of the first virtual object and an obstacle exists between the idle virtual prop and the used virtual prop, presenting a non-pickable prompt for the idle virtual prop.
  7. The method according to claim 1, wherein the presenting a pickable prompt for the idle virtual prop when the idle virtual prop is located in the direction of the used virtual prop of the first virtual object and no obstacle exists between the idle virtual prop and the used virtual prop comprises:
    when a plurality of idle virtual props are located in the direction of the used virtual prop of the first virtual object and none of them has an obstacle between itself and the used virtual prop, presenting pickable prompts for some of the plurality of idle virtual props.
  8. The method according to claim 7, wherein the presenting pickable prompts for some of the plurality of idle virtual props comprises:
    performing the following processing for any idle virtual prop among the plurality of idle virtual props:
    obtaining a distance between the first virtual object and the idle virtual prop during movement; and
    sorting the distances between the plurality of idle virtual props and the first virtual object, and presenting a pickable prompt for the idle virtual prop corresponding to the smallest distance.
  9. The method according to claim 7, wherein the presenting pickable prompts for some of the plurality of idle virtual props comprises:
    performing the following processing for any idle virtual prop among the plurality of idle virtual props:
    obtaining, based on the first virtual object's usage preference for virtual props, a degree of match between the idle virtual prop and the usage preference; and
    sorting the degrees of match between the plurality of idle virtual props and the usage preference, and presenting a pickable prompt for the idle virtual prop corresponding to the highest degree of match.
  10. The method according to claim 7, wherein the presenting pickable prompts for some of the plurality of idle virtual props comprises:
    performing the following processing for any idle virtual prop among the plurality of idle virtual props:
    obtaining a frequency with which the idle virtual prop is used by other virtual objects; and
    sorting the frequencies with which the plurality of idle virtual props are used by other virtual objects, and presenting a pickable prompt for the idle virtual prop corresponding to the highest frequency.
  11. The method according to claim 7, wherein the presenting pickable prompts for some of the plurality of idle virtual props comprises:
    performing the following processing for any idle virtual prop among the plurality of idle virtual props:
    obtaining a performance parameter of the idle virtual prop in the virtual scene; and
    sorting the performance parameters of the plurality of idle virtual props in the virtual scene, and presenting a pickable prompt for the idle virtual prop corresponding to the largest performance parameter.
  12. The method according to claim 7, wherein the presenting pickable prompts for some of the plurality of idle virtual props comprises:
    performing the following processing for any idle virtual prop among the plurality of idle virtual props:
    obtaining types of virtual props held by the first virtual object; and
    when the type of the idle virtual prop differs from the types of the held virtual props, presenting a pickable prompt for the idle virtual prop.
  13. The method according to claim 7, wherein the presenting pickable prompts for some of the plurality of idle virtual props comprises:
    performing the following processing for any idle virtual prop among the plurality of idle virtual props:
    obtaining a role assigned to the first virtual object within a team; and
    when the type of the idle virtual prop matches the role, presenting a pickable prompt for the idle virtual prop.
  14. The method according to claim 1, wherein the presenting a pickable prompt for the idle virtual prop comprises:
    presenting the pickable prompt for the idle virtual prop in a target display style;
    wherein the target display style indicates that the idle virtual prop is in a pickable state.
  15. The method according to claim 1, wherein
    before the presenting at least one idle virtual prop in a virtual scene, the method further comprises:
    binding a timer to the idle virtual prop;
    wherein the timer starts counting when the idle virtual prop is presented in the virtual scene; and
    after the presenting at least one idle virtual prop in a virtual scene, the method further comprises:
    when the timer determines that the idle virtual prop has not been picked up within a set time period, stopping presenting the idle virtual prop in the virtual scene.
  16. The method according to claim 1, wherein the presenting at least one idle virtual prop in a virtual scene comprises:
    when a second virtual object is attacked in the virtual scene and loses the ability to hold virtual props, treating the held virtual props as idle virtual props; and
    presenting, at a position where the second virtual object was attacked, at least one virtual prop dropped by the second virtual object.
  17. The method according to claim 1, wherein the presenting at least one idle virtual prop in a virtual scene comprises:
    when a second virtual object actively discards at least one held virtual prop in the virtual scene, treating the held virtual prop as an idle virtual prop; and
    presenting the at least one actively discarded virtual prop at a position where the second virtual object discarded the held virtual prop.
  18. The method according to claim 1, wherein the presenting at least one idle virtual prop in a virtual scene comprises:
    when a teammate of the first virtual object places at least one held virtual prop at a placement position in the virtual scene, treating the held virtual prop as an idle virtual prop;
    wherein the idle virtual prop is available for the first virtual object to pick up; and
    presenting, at the placement position on a map of the virtual scene, the at least one idle virtual prop placed by the teammate.
  19. An interaction processing apparatus for virtual props, the apparatus comprising:
    a presentation module, configured to present at least one idle virtual prop in a virtual scene;
    a response module, configured to, in response to a movement operation controlling a first virtual object, control the first virtual object to move in the virtual scene; and
    a processing module, configured to, when the idle virtual prop is located in a direction of a used virtual prop of the first virtual object and no obstacle exists between the idle virtual prop and the used virtual prop, present a pickable prompt for the idle virtual prop;
    the response module being further configured to, in response to a pickup operation controlling the first virtual object, control the first virtual object to pick up the idle virtual prop.
  20. An electronic device, comprising:
    a memory, configured to store executable instructions; and
    a processor, configured to implement, when executing the executable instructions stored in the memory, the interaction processing method for virtual props according to any one of claims 1 to 18.
  21. A computer-readable storage medium storing executable instructions which, when executed by a processor, implement the interaction processing method for virtual props according to any one of claims 1 to 18.
PCT/CN2021/113264 2020-09-29 2021-08-18 Interaction processing method and apparatus for virtual props, electronic device, and readable storage medium WO2022068452A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
KR1020227038479A KR20220163452A (ko) 2020-09-29 2021-08-18 Interaction processing method and apparatus for virtual props, electronic device, and readable storage medium
JP2022555126A JP7447296B2 (ja) 2020-09-29 2021-08-18 Interactive processing method and apparatus for virtual props, electronic device, and computer program
US17/971,943 US20230040737A1 (en) 2020-09-29 2022-10-24 Method and apparatus for interaction processing of virtual item, electronic device, and readable storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011057428.1 2020-09-29
CN202011057428.1A CN112121431A (zh) 2020-09-29 2020-12-25 Interaction processing method and apparatus for virtual props, electronic device, and storage medium

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/971,943 Continuation US20230040737A1 (en) 2020-09-29 2022-10-24 Method and apparatus for interaction processing of virtual item, electronic device, and readable storage medium

Publications (1)

Publication Number Publication Date
WO2022068452A1 true WO2022068452A1 (zh) 2022-04-07

Family

ID=73843388

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/113264 WO2022068452A1 (zh) 2020-09-29 2021-08-18 虚拟道具的交互处理方法、装置、电子设备及可读存储介质

Country Status (5)

Country Link
US (1) US20230040737A1 (zh)
JP (1) JP7447296B2 (zh)
KR (1) KR20220163452A (zh)
CN (1) CN112121431A (zh)
WO (1) WO2022068452A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023231553A1 (zh) * 2022-06-02 2023-12-07 腾讯科技(深圳)有限公司 Prop interaction method and apparatus for a virtual scene, electronic device, computer-readable storage medium, and computer program product

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2589870A (en) * 2019-12-10 2021-06-16 Nokia Technologies Oy Placing a sound within content
CN112121431A (zh) * 2020-09-29 2020-12-25 腾讯科技(深圳)有限公司 Interaction processing method and apparatus for virtual props, electronic device, and storage medium
CN112717404B (zh) * 2021-01-25 2022-11-29 腾讯科技(深圳)有限公司 Movement processing method and apparatus for virtual object, electronic device, and storage medium
CN114296597A (zh) * 2021-12-01 2022-04-08 腾讯科技(深圳)有限公司 Object interaction method, apparatus, device, and storage medium in a virtual scene

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130059634A1 (en) * 2011-09-02 2013-03-07 Zynga Inc. Apparatus, method and computer readable storage medium for guiding game play via a show me button
CN109876438A (zh) * 2019-02-20 2019-06-14 腾讯科技(深圳)有限公司 User interface display method, apparatus, device, and storage medium
CN111282275A (zh) * 2020-03-06 2020-06-16 腾讯科技(深圳)有限公司 Method, apparatus, device, and storage medium for displaying collision traces in a virtual scene
CN111672123A (zh) * 2020-06-10 2020-09-18 腾讯科技(深圳)有限公司 Control method and apparatus for virtual operation object, storage medium, and electronic device
CN112121431A (zh) * 2020-09-29 2020-12-25 腾讯科技(深圳)有限公司 Interaction processing method and apparatus for virtual props, electronic device, and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7245605B2 (ja) 2018-02-13 2023-03-24 株式会社バンダイナムコエンターテインメント Game system, game providing method, and program
CN108815849B (zh) * 2018-04-17 2022-02-22 腾讯科技(深圳)有限公司 Item display method and apparatus in a virtual scene, and storage medium

Also Published As

Publication number Publication date
CN112121431A (zh) 2020-12-25
KR20220163452A (ko) 2022-12-09
US20230040737A1 (en) 2023-02-09
JP7447296B2 (ja) 2024-03-11
JP2023517115A (ja) 2023-04-21

Similar Documents

Publication Publication Date Title
WO2022068452A1 (zh) Interaction processing method and apparatus for virtual props, electronic device, and readable storage medium
CN112691377B (zh) Control method and apparatus for virtual character, electronic device, and storage medium
US9327195B2 Accommodating latency in a server-based application
CN112402960B (zh) State switching method, apparatus, device, and storage medium in a virtual scene
US20230013663A1 Information display method and apparatus in virtual scene, device, and computer-readable storage medium
CN112121414B (zh) Tracking method and apparatus in a virtual scene, electronic device, and storage medium
US20230072503A1 Display method and apparatus for virtual vehicle, device, and storage medium
CN112295230B (zh) Method, apparatus, device, and storage medium for activating virtual props in a virtual scene
CN111803944B (zh) Image processing method and apparatus, electronic device, and storage medium
CN112295228B (zh) Control method and apparatus for virtual object, electronic device, and storage medium
US20230033530A1 Method and apparatus for acquiring position in virtual scene, device, medium and program product
CN112057863A (zh) Control method, apparatus, and device for virtual props, and computer-readable storage medium
CN112057860B (zh) Method, apparatus, device, and storage medium for activating an operation control in a virtual scene
WO2023142617A1 (zh) Ray display method, apparatus, and device based on a virtual scene, and storage medium
US20230124014A1 Image display method and apparatus, device and storage medium
CN113633964A (zh) Control method, apparatus, and device for virtual skills, and computer-readable storage medium
CN112156472B (зh) Control method, apparatus, and device for virtual props, and computer-readable storage medium
CN113144617B (zh) Control method, apparatus, and device for virtual object, and computer-readable storage medium
CN112121433B (zh) Processing method, apparatus, and device for virtual props, and computer-readable storage medium
JP2023548922A (ja) Control method and apparatus for virtual object, electronic device, and computer program
CN113769392B (zh) State processing method and apparatus for a virtual scene, electronic device, and storage medium
CN112891930B (zh) Information display method, apparatus, device, and storage medium in a virtual scene
CN116764196A (zh) Processing method, apparatus, device, medium, and program product in a virtual scene

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 21874114; Country of ref document: EP; Kind code of ref document: A1)
ENP Entry into the national phase (Ref document number: 2022555126; Country of ref document: JP; Kind code of ref document: A)
ENP Entry into the national phase (Ref document number: 20227038479; Country of ref document: KR; Kind code of ref document: A)
NENP Non-entry into the national phase (Ref country code: DE)
32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established (Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM XXXX DATED 11.08.2023))
122 Ep: pct application non-entry in european phase (Ref document number: 21874114; Country of ref document: EP; Kind code of ref document: A1)