WO2023082927A1 - Task guidance method and apparatus in virtual scene, electronic device, storage medium and program product - Google Patents

Task guidance method and apparatus in virtual scene, electronic device, storage medium and program product Download PDF

Info

Publication number
WO2023082927A1
WO2023082927A1 (PCT/CN2022/125059; CN2022125059W)
Authority
WO
WIPO (PCT)
Prior art keywords
virtual
task
guidance
interaction
virtual object
Prior art date
Application number
PCT/CN2022/125059
Other languages
English (en)
French (fr)
Inventor
黄曼丽
张博宇
Original Assignee
腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Co., Ltd.)
Publication of WO2023082927A1
Priority to US18/218,387 (published as US20230347243A1)

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/53 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
    • A63F13/537 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/40 Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
    • A63F13/42 Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
    • A63F13/424 Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle involving acoustic input signals, e.g. by using the results of pitch or rhythm extraction or voice recognition
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/53 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
    • A63F13/533 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game for prompting the player, e.g. by displaying a game menu
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/53 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
    • A63F13/537 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen
    • A63F13/5378 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen for displaying an additional top view, e.g. radar screens or maps
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/90 Constructional details or arrangements of video game devices not provided for in groups A63F13/20 or A63F13/25, e.g. housing, wiring, connections or cabinets
    • A63F13/92 Video game devices specially adapted to be hand-held while playing
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/30 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by output arrangements for receiving control signals generated by the game device
    • A63F2300/303 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by output arrangements for receiving control signals generated by the game device for displaying additional data, e.g. simulating a Head Up Display
    • A63F2300/307 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by output arrangements for receiving control signals generated by the game device for displaying an additional window with a view from the top of the game field, e.g. radar screen
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/30 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by output arrangements for receiving control signals generated by the game device
    • A63F2300/308 Details of the user interface

Definitions

  • The present application relates to human-computer interaction technology, and in particular to a task guidance method, apparatus, electronic device, computer-readable storage medium, and computer program product in a virtual scene.
  • Embodiments of the present application provide a task guidance method, apparatus, electronic device, computer-readable storage medium, and computer program product in a virtual scene, which can provide targeted virtual task guidance prompts for virtual objects, thereby improving the efficiency of guiding virtual tasks.
  • An embodiment of the present application provides a task guidance method in a virtual scene, including:
  • receiving an interaction guidance instruction for a non-user character associated with a virtual object, where the interaction guidance instruction is used to instruct the non-user character to guide the virtual tasks of the virtual object in the virtual scene;
  • in response to the interaction guidance instruction, presenting task guidance information corresponding to the interaction progress of the virtual object, where the task guidance information is used to guide the virtual object to execute at least one virtual task;
  • based on the task guidance information, in response to a determination instruction for a target task in the at least one virtual task, presenting position guidance information of an interaction position corresponding to the target task.
  • An embodiment of the present application provides a task guidance device in a virtual scene, including:
  • a first receiving module, configured to receive an interaction guidance instruction for a non-user character associated with a virtual object;
  • where the interaction guidance instruction is used to instruct the non-user character to guide the virtual tasks of the virtual object in the virtual scene;
  • a first presentation module, configured to present task guidance information corresponding to the interaction progress of the virtual object in response to the interaction guidance instruction;
  • where the task guidance information is used to guide the virtual object to execute at least one virtual task;
  • the second presentation module is configured to present position guidance information of an interaction position corresponding to the target task in response to a determination instruction for the target task in the at least one virtual task based on the task guidance information.
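The three modules above can be sketched as follows. This is an illustrative sketch only; all class, method, field, and task names are assumptions, since the application specifies behavior rather than an implementation:

```python
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class VirtualTask:
    name: str
    required_progress: int                 # minimum interaction progress that unlocks the task
    interaction_position: Tuple[int, int]  # where the task is carried out in the scene


@dataclass
class TaskGuidanceDevice:
    all_tasks: List[VirtualTask]

    def receive_guidance_instruction(self, progress: int) -> List[VirtualTask]:
        # First receiving + first presentation modules: on an interaction
        # guidance instruction, present only the task guidance matching the
        # virtual object's current interaction progress.
        return [t for t in self.all_tasks if t.required_progress <= progress]

    def determine_target(self, task: VirtualTask) -> Tuple[int, int]:
        # Second presentation module: on a determination instruction for a
        # target task, present the position guidance for that task.
        return task.interaction_position


device = TaskGuidanceDevice(all_tasks=[
    VirtualTask("gather supplies", required_progress=0, interaction_position=(10, 4)),
    VirtualTask("defeat the boss", required_progress=5, interaction_position=(42, 17)),
])
guided = device.receive_guidance_instruction(progress=3)  # only "gather supplies" qualifies
target_position = device.determine_target(guided[0])      # (10, 4)
```

Filtering by `required_progress` is what makes the guidance targeted: a player at progress 3 is never shown a prompt for a task unlocked at progress 5, which is the shortened guidance path the application claims.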
  • An embodiment of the present application provides an electronic device, including:
  • a memory configured to store executable instructions;
  • a processor configured to implement the task guidance method in the virtual scene provided by the embodiments of the present application when executing the executable instructions stored in the memory.
  • An embodiment of the present application provides a computer-readable storage medium storing executable instructions which, when executed by a processor, cause the processor to execute the task guidance method in the virtual scene provided by the embodiments of the present application.
  • Applying the embodiments of the present application, virtual task guidance can be provided for virtual objects in the virtual scene.
  • The provided task guidance information corresponds to the interaction progress of the virtual object; that is, based on the player's current interaction progress, virtual task guidance matching that progress can be provided in a targeted manner, so that players can quickly and easily locate the task guidance prompts they need, greatly shortening the guidance path.
  • This improves the guidance efficiency of virtual tasks in the virtual scene and the processing efficiency of the device hardware, thereby improving user retention.
  • FIG. 1A is a schematic diagram of an application scenario of the task guidance method in a virtual scene provided by an embodiment of the present application;
  • FIG. 1B is a schematic diagram of an application scenario of the task guidance method in a virtual scene provided by an embodiment of the present application;
  • FIG. 2 is a schematic structural diagram of a terminal device 400 provided by an embodiment of the present application;
  • FIG. 3 is a schematic flowchart of the task guidance method in a virtual scene provided by an embodiment of the present application;
  • FIG. 4 is a schematic diagram of the triggering of an interaction guidance instruction provided by an embodiment of the present application;
  • FIG. 5 is a schematic diagram of the triggering of an interaction guidance instruction provided by an embodiment of the present application;
  • FIG. 6 is a schematic diagram of a display interface of task guidance information provided by an embodiment of the present application;
  • FIG. 7 is a schematic diagram of a display interface of position guidance information provided by an embodiment of the present application;
  • FIG. 8 is a schematic diagram of a guidance prompt interface provided by an embodiment of the present application;
  • FIG. 9 is a schematic diagram of the display of a guidance prompt provided by an embodiment of the present application;
  • FIG. 10 is a schematic flowchart of the task guidance method in a virtual scene provided by an embodiment of the present application;
  • FIG. 11 is a schematic diagram of the screening of guidance information provided by an embodiment of the present application.
  • In the following description, "first" and "second" are only used to distinguish similar objects and do not represent a specific ordering of objects. It can be understood that, where permitted, the specific order or sequence of "first" and "second" may be interchanged, so that the embodiments of the application described herein can be practiced in orders other than those illustrated or described herein.
  • Client: an application running on a terminal that provides various services, such as a video playback client or a game client.
  • In response to: used to represent the condition or state on which an executed operation depends. When the condition or state is satisfied, the one or more operations to be executed may be performed in real time or with a set delay; unless otherwise specified, there is no restriction on the order in which the operations are performed.
  • Virtual scene: the scene displayed (or provided) when the application program runs on the terminal.
  • the virtual scene can be a simulation environment of the real world, a semi-simulation and semi-fictional virtual environment, or a pure fiction virtual environment.
  • the virtual scene may be any one of a two-dimensional virtual scene, a 2.5-dimensional virtual scene, or a three-dimensional virtual scene, and the embodiment of the present application does not limit the dimensions of the virtual scene.
  • the virtual scene may include sky, land, ocean, etc.
  • the land may include environmental elements such as deserts and cities, and the user may control virtual objects to move in the virtual scene.
  • the movable object may be a virtual character, a virtual animal, an animation character, etc., such as a character, an animal, etc. displayed in a virtual scene.
  • the virtual object may be a virtual avatar representing the user in the virtual scene.
  • the virtual scene may include multiple virtual objects, and each virtual object has its own shape and volume in the virtual scene and occupies a part of the space in the virtual scene.
  • Scene data: represents characteristic data of the virtual scene. For example, it may include the position of the virtual object in the virtual scene and the waiting time for the various functions configured in the virtual scene (depending on how many times the same function can be used within a specific time), and it can also represent attribute values of various states of the game virtual object, such as a life value and a magic value.
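As a sketch, the scene data described above might be represented as follows. The field names and sample values are illustrative assumptions, not part of the application:

```python
from dataclasses import dataclass


@dataclass
class SceneData:
    position: tuple        # the virtual object's position in the virtual scene
    skill_cooldowns: dict  # remaining waiting time for each configured function
    health: int            # "life value" attribute
    mana: int              # "magic value" attribute

    def can_use(self, skill: str) -> bool:
        # A function is available once its waiting time has elapsed.
        return self.skill_cooldowns.get(skill, 0) <= 0


state = SceneData(position=(12.0, 3.5),
                  skill_cooldowns={"dash": 2, "heal": 0},
                  health=80, mana=45)
```

Here `can_use` illustrates how a waiting time translates into availability: "heal" is ready, while "dash" still has two units of waiting time left.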
  • Embodiments of the present application provide a task guidance method, apparatus, electronic device, computer-readable storage medium, and computer program product in a virtual scene, which can provide targeted virtual task guidance prompts for virtual objects, thereby improving the efficiency of guiding virtual tasks.
  • An exemplary implementation scenario is first described.
  • The virtual scene provided in the embodiments of the present application can be output based on a terminal device or a server alone, or output collaboratively by a terminal device and a server.
  • The virtual scene can also be an environment in which game characters interact, for example, an environment in which game characters battle in the virtual scene.
  • By controlling the actions of the game characters, both parties can interact in the virtual scene, so that users can relieve the stress of everyday life during the game.
  • FIG. 1A is a schematic diagram of the application scenario of the task guidance method in the virtual scene provided by the embodiment of the present application.
  • Taking a stand-alone/offline-mode game as an example, the computing power of the graphics processing hardware of the terminal device 400 can be used to complete the calculation of the data related to the virtual scene 100, and various types of terminal devices 400, such as smart phones, tablet computers and virtual reality/augmented reality devices, complete the output of the virtual scene.
  • the type of graphics processing hardware includes a central processing unit (CPU, Central Processing Unit) and a graphics processing unit (GPU, Graphics Processing Unit).
  • When forming the visual perception of the virtual scene, the terminal device 400 calculates the required display data through the graphics computing hardware, completes the loading, parsing and rendering of the display data, and outputs video frames capable of forming a visual perception of the virtual scene on the graphics output hardware, for example, displaying two-dimensional video frames on the display screen of a smart phone, or projecting video frames with a three-dimensional display effect on the lenses of augmented reality/virtual reality glasses. In addition, in order to enrich the perception effect, the terminal device 400 can also use different hardware to form one or more of auditory perception, tactile perception, motion perception and taste perception.
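The compute, load/parse/render, and output steps above can be sketched as a minimal pipeline. All function names and the frame representation are illustrative assumptions:

```python
# Hypothetical sketch of the output pipeline: the terminal computes display
# data, renders it, and outputs video frames to a display device.
def compute_display_data(scene):
    # Graphics computing hardware calculates the data required for display.
    return {"objects": scene["objects"], "camera": scene["camera"]}


def render_frame(display_data):
    # Loading, parsing and rendering are collapsed into a single step here.
    return f"frame({len(display_data['objects'])} objects @ {display_data['camera']})"


def output_frame(frame, device="smartphone_screen"):
    # A 2D frame on a phone screen, or a 3D-effect frame on AR/VR lenses.
    return (device, frame)


scene = {"objects": ["virtual_object_110", "non_user_character_120"],
         "camera": "third_person"}
device, frame = output_frame(render_frame(compute_display_data(scene)))
```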
  • a client 410 (such as a stand-alone game application) is run on the terminal device 400, and a virtual scene 100 including role-playing is output during the running of the client 410.
  • The virtual scene 100 may be an environment in which game characters interact, for example, a plain, a street, or a valley.
  • The virtual scene 100 includes a virtual object 110 and a non-user character 120.
  • The virtual object 110 can be a game character controlled by a user (or player); that is, the virtual object 110 is controlled by a real user and will move in the virtual scene in response to the real user's operation of a controller (including a touch screen, a voice-activated switch, a keyboard, a mouse, a joystick, etc.). For example, when the real user moves the joystick to the left, the virtual object 110 will move to the left in the virtual scene; it can also stay in place, jump, and use various functions (such as skills and props).
  • The non-user character 120 is an entity object associated with the virtual object 110 in the virtual scene. It can exist in the virtual scene at all times and move with the movement of the virtual object 110, or appear in the virtual scene when called by the virtual object 110. It is used to provide guidance for the virtual tasks of the virtual object 110 in the virtual scene, for example, guiding the virtual object 110 to execute the virtual task corresponding to the current interaction progress.
  • As an example, the terminal receives an interaction guidance instruction for the non-user character associated with the virtual object; in response to the interaction guidance instruction, task guidance information corresponding to the interaction progress of the virtual object is presented, where the task guidance information is used to guide the virtual object to execute at least one virtual task; based on the task guidance information, in response to a determination instruction for a target task in the at least one virtual task, position guidance information of an interaction position corresponding to the target task is presented. Since the presented task guidance information corresponds to the current interaction progress of the virtual object, the virtual tasks that are guided and executed are more targeted, and the guidance efficiency is improved.
  • FIG. 1B is a schematic diagram of the application scenario of the task guidance method in the virtual scenario provided by the embodiment of the present application.
  • In an implementation in which the computing power of the terminal device 400 and the server 200 is applied collaboratively, the data related to the virtual scene is calculated and the virtual scene is output on the terminal device 400.
  • Taking the formation of the visual perception of the virtual scene as an example, the server 200 calculates the display data related to the virtual scene (such as scene data) and sends it to the terminal device 400 through the network 300; the terminal device 400 relies on graphics computing hardware to complete the loading, parsing and rendering of the calculated display data, and relies on graphics output hardware to output the virtual scene to form a visual perception. For example, two-dimensional video frames can be presented on the display screen of a smartphone, or video frames can be projected on the lenses of augmented reality/virtual reality glasses to achieve a three-dimensional display effect.
  • To perceive the virtual scene in other forms, the corresponding hardware output of the terminal device 400 can be used, such as using a speaker to form auditory perception and a vibrator to form tactile perception, and so on.
  • the terminal device 400 runs a client 410 (such as a game application in the online version), and interacts with other users by connecting to the server 200 (such as a game server), and the terminal device 400 outputs the virtual scene 100 of the client 410.
  • The virtual scene 100 can be an environment in which game characters interact, for example, a plain, a street, or a valley; the virtual scene 100 includes a virtual object 110 and a non-user character 120.
  • The virtual object 110 can be a game character controlled by the user; that is, the virtual object 110 is controlled by a real user and will move in the virtual scene in response to the real user's operation of a controller (including a touch screen, a voice-activated switch, a keyboard, a mouse, a joystick, etc.). For example, when the real user moves the joystick to the left, the virtual object 110 will move to the left in the virtual scene; it can also stay in place, jump, and use various functions (such as skills and props).
  • The non-user character 120 is an entity object associated with the virtual object 110 in the virtual scene, which can exist in the virtual scene at all times and move with the movement of the virtual object 110, or appear in the virtual scene when called by the virtual object 110, and is used to provide guidance for the virtual tasks of the virtual object 110 in the virtual scene, for example, guiding the virtual object 110 to execute the virtual task corresponding to the current interaction progress.
  • As an example, the terminal receives an interaction guidance instruction for the non-user character associated with the virtual object; in response to the interaction guidance instruction, task guidance information corresponding to the interaction progress of the virtual object is presented, where the task guidance information is used to guide the virtual object to execute at least one virtual task; based on the task guidance information, in response to a determination instruction for a target task in the at least one virtual task, position guidance information of an interaction position corresponding to the target task is presented. Since the presented task guidance information corresponds to the current interaction progress of the virtual object, the guided execution of virtual tasks is more targeted, and the guidance efficiency of virtual tasks in the virtual scene is improved.
  • the terminal device 400 can implement the task guidance method in the virtual scene provided by the embodiment of the present application by running a computer program.
  • The computer program can be a native program or a software module in an operating system; it can be a native application (APP), that is, a program that needs to be installed in the operating system to run, such as a shooting game APP (that is, the above-mentioned client 410); it can also be a mini program, that is, a program that only needs to be downloaded into a browser environment to run; it can also be a mini game program that can be embedded in any APP.
  • the above-mentioned computer program can be any form of application program, module or plug-in.
  • the terminal device 400 installs and runs an application program supporting a virtual scene.
  • the application program may be any one of a first-person shooter game (FPS, First-Person Shooting game), a third-person shooter game, a virtual reality application program, a three-dimensional map program, or a multiplayer gun battle survival game.
  • The user uses the terminal device 400 to control a virtual object located in the virtual scene to carry out activities, including but not limited to at least one of: adjusting body posture, crawling, walking, running, riding, jumping, driving, picking up, shooting, attacking, throwing, and building virtual buildings.
  • the virtual object may be a virtual character, such as a simulated character or an anime character.
  • Cloud Technology: a hosting technology that unifies a series of resources such as hardware, software, and networks in a wide area network or a local area network to realize the calculation, storage, processing and sharing of data.
  • Cloud technology is a general term for network technology, information technology, integration technology, management platform technology, and application technology based on the cloud computing business model. These technologies can form a resource pool to be used on demand, which is flexible and convenient. Cloud computing technology will become an important support, since the background services of technical network systems require a large amount of computing and storage resources.
  • The server 200 in FIG. 1B can be an independent physical server, a server cluster or a distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communications, middleware services, domain name services, security services, content delivery networks (CDN, Content Delivery Network), and big data and artificial intelligence platforms.
  • the terminal device 400 may be a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, etc., but is not limited thereto.
  • the terminal device 400 and the server 200 may be connected directly or indirectly through wired or wireless communication, which is not limited in this embodiment of the present application.
  • The methods provided in the embodiments of the present application can be implemented by various electronic devices or computer devices, for example, by a terminal alone, by a server alone, or by a terminal and a server in cooperation.
  • the electronic device that implements the method provided by the embodiment of the present application will be described below.
  • the electronic device may be a terminal device or a server.
  • Taking the electronic device being a terminal device as an example, the structure of the terminal device 400 shown in FIG. 1A is described below.
  • FIG. 2 is a schematic structural diagram of a terminal device 400 provided by an embodiment of the present application.
  • Various components in the terminal device 400 are coupled together via the bus system 450 .
  • bus system 450 is used to realize connection and communication between these components.
  • the bus system 450 also includes a power bus, a control bus and a status signal bus.
  • However, for clarity of illustration, the various buses are all labeled as the bus system 450 in FIG. 2.
  • The processor 420 can be an integrated circuit chip with signal processing capability, such as a general-purpose processor, a digital signal processor (DSP, Digital Signal Processor), another programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, where the general-purpose processor can be a microprocessor or any conventional processor.
  • User interface 440 includes one or more output devices 441 that enable presentation of media content, including one or more speakers and/or one or more visual displays.
  • the user interface 440 also includes one or more input devices 442, including user interface components that facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, other input buttons and controls.
  • Memory 460 may be removable, non-removable, or a combination thereof.
  • Exemplary hardware devices include solid state memory, hard drives, optical drives, and the like.
  • Memory 460 optionally includes one or more storage devices located physically remote from processor 420 .
  • Memory 460 includes volatile memory or nonvolatile memory, and may include both volatile and nonvolatile memory.
  • the non-volatile memory can be a read-only memory (ROM, Read Only Memory), and the volatile memory can be a random access memory (RAM, Random Access Memory).
  • the memory 460 described in the embodiment of the present application is intended to include any suitable type of memory.
  • memory 460 is capable of storing data to support various operations, examples of which include programs, modules, and data structures, or subsets or supersets thereof, as exemplified below.
  • Operating system 461 including system programs for processing various basic system services and performing hardware-related tasks, such as framework layer, core library layer, driver layer, etc., for implementing various basic services and processing hardware-based tasks;
  • a network communication module 462 for reaching other computing devices via one or more (wired or wireless) network interfaces 430.
  • Exemplary network interfaces 430 include: Bluetooth, Wireless Fidelity (WiFi), Universal Serial Bus (USB, Universal Serial Bus), etc.;
  • Presentation module 463, for enabling the presentation of information via one or more output devices 441 (e.g., a display screen, speakers, etc.) associated with the user interface 440 (e.g., a user interface for operating peripherals and displaying content and information);
  • the input processing module 464 is configured to detect one or more user inputs or interactions from one or more of the input devices 442 and translate the detected inputs or interactions.
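The detect-and-translate role of the input processing module 464 (and the input mapping named in classification A63F13/42) can be illustrated by a minimal sketch; the mapping table, event names, and command names are assumptions for illustration only:

```python
from typing import Optional

# Hypothetical table mapping detected raw input events to game commands,
# e.g. a joystick displacement mapped to a movement command.
RAW_INPUT_TO_COMMAND = {
    "joystick_left":  "move_left",
    "joystick_right": "move_right",
    "button_jump":    "jump",
}


def translate_input(raw_event: str) -> Optional[str]:
    """Translate a detected input event into a game command, or None if unmapped."""
    return RAW_INPUT_TO_COMMAND.get(raw_event)


command = translate_input("joystick_left")  # "move_left"
```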
  • In some embodiments, the task guidance device in the virtual scene provided by the embodiments of the present application can be realized by software; in other embodiments, it can be realized by hardware, for example, by a processor in the form of a hardware decoding processor programmed to execute the task guidance method in the virtual scene provided by the embodiments of the present application.
  • For example, the processor in the form of a hardware decoding processor can adopt one or more Application Specific Integrated Circuits (ASIC, Application Specific Integrated Circuit), DSPs, Programmable Logic Devices (PLD, Programmable Logic Device), Complex Programmable Logic Devices (CPLD, Complex Programmable Logic Device), Field Programmable Gate Arrays (FPGA, Field Programmable Gate Array) or other electronic components.
  • the task guidance method in the virtual scene provided by the embodiment of the present application will be described below with reference to the accompanying drawings.
  • the task guidance method in the virtual scene provided by the embodiment of the present application may be executed solely by the terminal device 400 in FIG. 1A , or may be executed cooperatively by the terminal device 400 and the server 200 in FIG. 1B .
  • FIG. 3 is a schematic flowchart of a task guidance method in a virtual scene provided by an embodiment of the present application, which will be described in conjunction with the steps shown in FIG. 3 .
  • The method shown in FIG. 3 can be executed by various forms of computer programs running on the terminal device 400 and is not limited to the above-mentioned client 410; it may also be the above-mentioned operating system 461, a software module, or a script. Therefore, the client should not be regarded as limiting the embodiments of this application.
  • Step 101: The terminal receives an interactive guidance instruction for a non-user character associated with a virtual object.
  • the interactive guidance instruction is used to instruct the non-user character to guide the virtual task of the virtual object in the virtual scene.
  • A client is installed on the terminal. The client can be a client of the virtual scene, such as a game client, or another client with game functions, such as a video client with game functions. The following description takes the game client as an example.
  • When the terminal runs the client, it can display the interface of the virtual scene and, in that interface, display the virtual object of the currently logged-in account.
  • Through the virtual object, the user can interact with the virtual objects of other users in the virtual scene, perform virtual tasks in the virtual scene, and so on.
  • Each virtual object can be associated with at least one non-user character, and the non-user character can guide the virtual tasks of the virtual object in the virtual scene. The guidance can be triggered by an interactive guidance instruction received by the terminal through the interface of the virtual scene.
  • the terminal may receive the interactive guidance instruction for the non-user character associated with the virtual object in the following manner: presenting, in the virtual scene, the non-user character associated with the virtual object, wherein the non-user character moves following the movement of the virtual object; and receiving the interactive guidance instruction in response to a trigger operation on the non-user character.
  • Here, the non-user character can be a physical object associated with the virtual object in the virtual scene. The non-user character may always exist in the virtual scene together with the virtual object, or may appear only in specific scenarios, for example appearing to give guidance when the virtual object is in distress or confused, or appearing upon receiving a call instruction from the virtual object. The embodiment of the present application does not limit the timing and form of appearance of non-user characters.
  • FIG. 4 is a schematic diagram of the triggering of the interactive guidance instruction provided by the embodiment of the present application.
  • When the non-user character 401 appears in the virtual scene, it moves with the movement of the virtual object 402.
  • The user can trigger (click, double-click, long-press, etc.) the non-user character 401; the terminal receives the interactive guidance instruction in response to the trigger operation and presents the corresponding task guidance information to guide the interaction of the virtual object.
  • the terminal may receive the interactive guidance instruction for the non-user character associated with the virtual object in the following manner: presenting a guidance control, wherein the guidance control is used to trigger a guidance session between the virtual object and the non-user character, and the guidance session is used to guide the virtual task of the virtual object in the virtual scene; and receiving the interactive guidance instruction in response to a trigger operation on the guidance control.
  • the guidance control is the entrance for triggering the guidance session between the virtual object and the non-user character, and is also a tool for the user to interact with the non-user character (such as inputting or manipulating data); it can be presented in the interface of the virtual scene in the form of an icon or a button.
  • the user can trigger the guide control at any time to establish a conversational connection between the non-user character and the virtual object, and request the non-user character's interactive guidance through the established conversational connection.
  • FIG. 5 is a schematic diagram of the triggering of the interactive guidance instruction provided by the embodiment of the present application.
  • When the user triggers (click, double-click, long-press, etc.) the guidance control 501, the terminal, in response to the trigger operation, establishes a session connection between the non-user character and the virtual object and receives the interactive guidance instruction, so as to present the corresponding task guidance information for the interactive guidance of the virtual object.
  • the terminal may receive the interactive guidance instruction for the non-user character associated with the virtual object in the following manner: presenting a voice input control; in response to a trigger operation on the voice input control, presenting collection indication information, and when the collection indication information indicates that collection is complete, performing content recognition on the collected voice; and receiving the interactive guidance instruction when the voice content contains target content associated with the non-user character.
  • Here, the voice input control is the entrance for triggering the guidance session between the virtual object and the non-user character, and is also a tool with which the user calls out the non-user character by voice to establish the guidance session. The terminal collects the voice entered by the user and performs content recognition on it. When the recognized content contains the target content for calling out the non-user character, the non-user character is called out and the interactive guidance instruction is triggered, where the target content is preset content, associated with the non-user character, that can call out the non-user character.
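For illustration only, the voice-trigger check described above can be sketched as follows; the call-out phrases and function names are hypothetical assumptions, not part of the claimed method:

```python
# Hypothetical sketch: after speech-to-text recognition, the recognized
# text is matched against preset target content associated with the
# non-user character; a match triggers the interactive guidance
# instruction. The phrase set below is an invented example.

TARGET_CONTENT = {"little elf", "guide me", "come out"}

def should_trigger_guidance(recognized_text: str) -> bool:
    """True when the recognized voice contains target content that
    calls out the non-user character."""
    text = recognized_text.lower()
    return any(phrase in text for phrase in TARGET_CONTENT)
```

In practice, the speech-to-text step would be performed by the terminal's own recognition service; only the matching step is sketched here.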
  • the terminal may receive the interactive guidance instruction for the non-user character associated with the virtual object in the following manner: acquiring a timer for regularly guiding the virtual object to perform a virtual task; and when it is determined based on the timer that the target time has arrived and the virtual object has not executed the virtual task within the target time period before the target time, receiving the interactive guidance instruction triggered by the timer.
  • The above-mentioned target time and target time period can be set according to the actual situation, and the interactive guidance instruction can be triggered by the timer, that is, the non-user character is automatically triggered to interact with the virtual object. For example, during 5:00-7:00 p.m. every day, it is judged whether the virtual object has performed the virtual task of building a resting place; when it is determined that the virtual object has not yet performed this virtual task, the interactive guidance instruction is automatically triggered to guide the virtual object to build a resting place.
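The timer-driven trigger described above can be sketched as follows; the function name, parameters, and times are illustrative assumptions only:

```python
# Hypothetical sketch of the timer-driven trigger: the interactive
# guidance instruction fires when the target time has arrived and the
# virtual task was not executed within the target time period that
# precedes it.

from datetime import datetime, timedelta
from typing import Optional

def timer_should_trigger(now: datetime,
                         target_time: datetime,
                         last_executed: Optional[datetime],
                         target_period: timedelta) -> bool:
    if now < target_time:
        return False                     # target time not reached yet
    window_start = target_time - target_period
    executed_in_window = (last_executed is not None
                          and window_start <= last_executed <= target_time)
    return not executed_in_window        # trigger only if task not done
```

For the 5:00-7:00 p.m. example, `target_time` would be 7:00 p.m. and `target_period` two hours.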
  • non-user characters can also actively provide strong guidance or weak guidance to virtual objects based on the current interaction progress of virtual objects and the necessity of virtual tasks to be guided.
  • Among them, strong guidance is used to prompt virtual tasks that the virtual object must execute; for example, for a virtual task that has not been executed but must be executed within the target time period, the guidance prompt continues until the virtual object executes the virtual task. Weak guidance is used to prompt virtual tasks that the virtual object is recommended to perform; that is, under weak guidance, the user can choose through the virtual object whether or not to execute the virtual task. For example, a virtual task that has not been executed within the target time period but is not required to be executed is prompted only several times (for example, three times, periodically).
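The strong/weak guidance distinction can be sketched with a small decision function; the names and the limit of three prompts are assumptions for illustration:

```python
# Illustrative sketch of guidance strength: strong guidance keeps
# prompting until the mandatory task is executed; weak guidance stops
# after a preset number of prompts (three in this example).

def should_prompt(mandatory: bool, executed: bool, prompt_count: int,
                  max_weak_prompts: int = 3) -> bool:
    if executed:
        return False                         # task done: stop guiding
    if mandatory:
        return True                          # strong guidance: keep prompting
    return prompt_count < max_weak_prompts   # weak guidance: limited prompts
```
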
  • Step 102: In response to the interactive guidance instruction, present task guidance information corresponding to the interaction progress of the virtual object.
  • the task guiding information is used to guide the virtual object to execute at least one virtual task.
  • the virtual task may include at least one of the following: performing a target operation on a virtual item in the virtual scene, making a target prop based on virtual materials in the virtual scene, and performing an interactive operation with at least one target virtual object; for example, digging up buried equipment accessories, money, and other useful things, collecting resources such as wood, stone, and ore to manufacture tools and build houses, or defeating various powerful opponents.
  • In actual implementation, the virtual scene interface presents the task guidance information given by the non-user character for the virtual task that the virtual object should perform at the moment. For example, in FIG. 5, the virtual scene interface presents the response information 502 with which the non-user character responds to the trigger operation, such as "Creator, what can I do for you?", and presents selectable candidate options corresponding to the response information 502, such as "What to do next?".
  • the terminal may present the task guidance information corresponding to the interaction progress of the virtual object in the following manner: determining the interaction attributes of the virtual object, where the interaction attributes include at least one of the following: interaction preference, interaction level, virtual materials, and interaction environment; determining the corresponding interaction progress based on the interaction attributes, and presenting the task guidance information corresponding to the interaction progress.
  • the interaction preference is used to characterize the interaction tendency of the virtual object, such as whether the virtual object prefers the production of target props (such as building a house), or prefers to interact with the target virtual object (such as fighting with the target object), etc.
  • The guidance information most needed at different stages of interaction progress is often different. For example, a virtual object that prefers construction to combat often needs more guidance prompts about combat in the early stage of interaction, and more guidance prompts about building in the later or advanced stages.
  • The interaction level refers to the degree of advancement of the virtual object's interaction. Different interaction levels correspond to virtual tasks of different difficulties or types; for example, the higher the interaction level, the more challenging the corresponding virtual task.
  • The virtual materials refer to the materials that the virtual object has acquired during the interaction, and task guidance information corresponding to those materials is provided. For example, if the virtual object has obtained many building materials, a guidance prompt about construction is given to guide the virtual object to use the existing building materials to perform the virtual task of building a house; if the virtual object has not yet obtained any wood, a guidance prompt about cutting trees is given to guide the virtual object to perform the virtual task of cutting trees to obtain wood, in preparation for subsequently building houses, boats, and the like.
  • the interactive environment refers to the situation of the virtual object in the virtual scene, or the environment of the target area targeted by the virtual props held by the virtual object.
  • For example, if the strength of the virtual object is greater than that of the enemy and the probability of winning a battle with the enemy is high, a guidance prompt guiding the virtual object to engage the enemy in combat is presented; when the strength of the virtual object differs greatly from that of the enemy and the probability of the virtual object winning the battle is small or zero, a guidance prompt guiding the virtual object not to fight the enemy is presented.
  • In actual implementation, the interaction progress of the target virtual task currently being executed by the virtual object is determined according to the interaction attributes of the virtual object, and the task guidance information is then determined based on that interaction progress. For example, for a virtual object that prefers construction, when the virtual object requests interactive guidance and it is determined that the virtual object is under construction, construction task guidance information corresponding to the current construction progress is provided. In this way, the virtual object is given the guidance that is most urgently needed and best fits the current interaction situation, which yields high guidance efficiency, helps the virtual object experience the interactive fun of the virtual scene, and improves user retention.
  • the terminal may present the task guidance information corresponding to the interaction progress of the virtual object in the following manner: presenting a guidance interface corresponding to the non-user character; and, in the guidance interface, presenting the task guidance information corresponding to the interaction progress of the virtual object.
  • the guidance interface includes a first display area and a second display area, and the terminal can present the task guidance information corresponding to the interaction progress of the virtual object in the following manner: in the first display area, displaying the guidance question of the non-user character for the virtual object, and in the second display area, displaying the candidate virtual tasks corresponding to the guidance question, wherein the candidate virtual tasks correspond to the interaction progress of the virtual object; and determining the displayed guidance question and candidate virtual tasks as the task guidance information.
  • the task guidance information can be displayed in the form of a floating window or a pop-up window suspended in the interface of the virtual scene, or in the form of a guidance interface in an interface independent of the virtual scene.
  • the task guidance information can be displayed as text, or output in the form of voice.
  • the task guidance information may include the non-user character's guidance question for the virtual object and candidate virtual tasks for the guidance question.
  • For example, the guidance interface includes a first display area 601 and a second display area 602: the guidance question is displayed in the first display area 601, and the candidate virtual tasks for the guidance question are displayed in the second display area 602. These candidate virtual tasks all correspond to the interaction progress of the virtual object, and the user can select a target task to execute from among them.
  • the terminal can display the candidate virtual tasks corresponding to the guidance question in the second display area in the following manner: when the number of candidate virtual tasks is at least two, determining the priority of each candidate virtual task; and displaying, in the second display area, each candidate virtual task corresponding to the guidance question in a display style in which candidate virtual tasks with higher priority are arranged in front.
  • The priority can be determined according to the importance and urgency of each candidate virtual task and the beneficial effect it brings to the virtual object: the more important, the more urgent, or the greater the beneficial effect of a candidate virtual task, the higher its priority, and candidate virtual tasks with higher priority are arranged first.
  • For example, ranking the candidate virtual task "Building" before the candidate virtual task "Adventure" makes it easier for the user to choose and execute the candidate virtual task most conducive to the current interaction progress. Providing the user with priority options better suited to the user's actual situation makes it easier to select a virtual task, saves the time of choosing among many virtual tasks, improves selection efficiency when there are multiple candidate virtual tasks, and improves the user experience.
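The priority ordering described above can be sketched as follows; the equal weighting of importance, urgency, and benefit is an illustrative assumption:

```python
# Sketch of guidance priority: each candidate task is scored from its
# importance, urgency, and the beneficial effect it brings, and tasks
# are displayed in descending score order.

def priority(task: dict) -> int:
    return task["importance"] + task["urgency"] + task["benefit"]

def order_candidates(tasks: list) -> list:
    """Return candidate tasks with higher-priority ones in front."""
    return sorted(tasks, key=priority, reverse=True)
```
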
  • the second display area includes at least two sub-display areas, each corresponding to a task category; the terminal can display the candidate virtual tasks corresponding to the guidance question in the second display area in the following manner: when the number of candidate virtual tasks is at least two, determining the task category to which each candidate virtual task belongs; and displaying, according to the task category, the corresponding candidate virtual tasks in the corresponding sub-display areas of the second display area.
  • In actual implementation, the virtual tasks are divided into multiple task categories; for example, the task categories may include construction, adventure, combat, and so on. Usually, each task category includes multiple virtual tasks.
  • Construction-type virtual tasks may include, but are not limited to, finding material A, finding material B, building house 1, and building ship 2; adventure-type virtual tasks include, but are not limited to, adventure task 1, adventure task 2, and so on. When the task guidance information includes multiple candidate virtual tasks, the candidate virtual tasks belonging to different task categories are listed in different sub-display areas according to the task category to which each candidate virtual task belongs. This avoids displaying all the candidate virtual tasks together and makes it convenient for the user to select and execute the candidate virtual task that fits the current task category and is most conducive to the current interaction progress, thereby improving the user experience.
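The per-category sub-display areas can be sketched by grouping candidate tasks by category; the task and category names are invented examples:

```python
# Sketch of guidance distinction: candidate virtual tasks are grouped by
# their task category so that each sub-display area lists only one
# category.

from collections import defaultdict

def group_by_category(tasks):
    areas = defaultdict(list)
    for task in tasks:
        areas[task["category"]].append(task["name"])
    return dict(areas)
```
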
  • the terminal may present the task guidance information corresponding to the interaction progress of the virtual object in the following manner: presenting the non-user character, and presenting a conversation bubble corresponding to the non-user character in an associated area of the non-user character, wherein the conversation bubble includes task guidance information corresponding to the interaction progress of the virtual object.
  • In actual implementation, the conversation bubble is presented in the associated area of the non-user character, for example at a certain position around the non-user character. The number of conversation bubbles can be one or more, and each conversation bubble can include one or more pieces of task guidance information. Displaying the task guidance information in the style of conversation bubbles makes the prompt more lifelike, better matches a real guidance scene, and improves the user experience.
  • the terminal receives input session response information for the non-user character and presents the session response information in the form of a conversation bubble in the associated area of the virtual object; when the session response information indicates that a target task is selected, a determination instruction for the target task is received.
  • the virtual object can also use the style of a conversation bubble to feed back conversation response information for the task guidance information.
  • In actual implementation, the conversation bubbles of the non-user character and the conversation bubbles of the virtual object are displayed differently, for example in different colors or in different bubble sizes. In response to the determination instruction for the target task, the task guidance information about the target task is presented. In this way, conversation information is transmitted between the virtual object and the non-user character in the style of conversation bubbles, which makes the prompt more lifelike, better matches a real guidance scene, and improves the user experience.
  • the terminal may further perform the following processing: acquiring the interaction data of the virtual object, where the interaction data is used to indicate the interaction progress of the virtual object in the virtual scene; and calling a machine learning model based on the interaction data to perform prediction processing and obtain the candidate virtual tasks, where the machine learning model is trained based on the interaction data of training samples and labeled virtual tasks. By calling the machine learning model to predict the virtual tasks corresponding to the current interaction progress of the virtual object, the candidate virtual tasks recommended in each round of guidance better fit the current interaction progress and are more targeted; guidance prompts based on such targeted predictions improve guidance efficiency and user retention.
  • the above-mentioned machine learning model can be a neural network model (such as a convolutional neural network, a deep convolutional neural network, or a fully connected neural network), a decision tree model, a gradient boosting tree, a multi-layer perceptron, a support vector machine, and so on; the embodiment of the present application does not specifically limit the type of the machine learning model.
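As a minimal stand-in for the prediction step, the mapping from interaction data to a candidate virtual task can be sketched with a 1-nearest-neighbour lookup over labelled samples; a deployed system would use one of the trained models listed above, and the feature vectors and labels here are invented:

```python
# Hypothetical sketch: a 1-nearest-neighbour "model" over labelled
# training samples (interaction-data feature vector -> marked virtual
# task), standing in for the trained machine learning model.

def predict_candidate_task(features, training_samples):
    """training_samples: list of (feature_vector, task_label) pairs."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(training_samples, key=lambda s: sq_dist(s[0], features))
    return label
```
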
  • It should be noted that the embodiments of this application involve user-related data such as the user's login account and interaction data. When the embodiments of this application are applied to specific products or technologies, the user's permission or consent must be obtained, and the collection, use, and processing of the relevant data must comply with the relevant laws, regulations, and standards of the relevant countries and regions.
  • Step 103: Based on the task guidance information, in response to a determination instruction for at least one target task among the virtual tasks, present location guidance information of the interaction location corresponding to the target task.
  • the location guidance information is used to indicate the interactive location where the target task is performed, for example, when the target task is a tree felling task, the location guidance information is used to indicate the location of the tree felling (such as a forest).
  • the terminal may present the position guidance information of the interaction position corresponding to the target task in the following manner: present a map corresponding to the virtual scene; in the map, present the position guidance information of the interaction position corresponding to the target task.
  • In actual implementation, the terminal receives the determination instruction for the target task in response to the user's selection operation, presents a map of the virtual scene in the interface of the virtual scene, and presents on the map the location guidance information of the interaction location corresponding to the target task, for example highlighting the interaction location corresponding to the target task to distinguish it from other locations and using the highlighted interaction location as the location guidance information.
  • FIG. 7 is a schematic diagram of the display interface of the location guidance information provided by the embodiment of the present application.
  • When the user selects the target task 701 "I want to build", the terminal, in response to the selection operation, presents the guidance prompt information 702 of the non-user character for the target task 701 and simultaneously presents a map 703 of the virtual scene, in which a flashing special effect is displayed at the interaction position 704 corresponding to the target task to prompt the user to perform the target task at the interaction position 704.
  • When the virtual object moves in the virtual scene, the non-user character moves in the virtual scene following the virtual object, and the terminal can also display a picture of the non-user character moving in the virtual scene following the virtual object. Correspondingly, the terminal can present the location guidance information of the interaction location corresponding to the target task in the following manner: during the process of the non-user character moving with the virtual object, presenting location guidance information with which the non-user character guides the virtual object to move to the interaction location corresponding to the target task.
  • In actual implementation, the non-user character can move with the movement of the virtual object. The terminal, in response to the user's selection operation, receives the determination instruction for the target task and presents, in the form of text or voice, the non-user character's guidance for moving the virtual object to the interaction location corresponding to the target task, for example "Go straight ahead for 10 meters, then turn left for 5 meters to reach the forest in the northeast corner of the plain". The user can control the virtual object to move according to the location guidance information of the non-user character, which better matches a real guidance scene and improves the user experience. Of course, in practical applications, the location guidance information can also be presented in other ways, such as graphics, animation, and so on.
  • in response to a control instruction for the virtual object triggered based on the location guidance information, the terminal can also control the virtual object to move toward the interaction position; when the virtual object moves to the interaction position, the virtual object is controlled to perform the target task at the interaction position.
  • In actual implementation, the user can trigger a controller (including but not limited to a touch screen, a voice-operated switch, a keyboard, a mouse, a joystick, an operation control in the interface of the virtual scene, etc.) to control the virtual object to move toward the indicated interaction position in the virtual scene; when the virtual object moves to the interaction position, the controller controls the virtual object to perform the target task at that position.
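The move-then-execute control flow described above can be sketched as follows; coordinates, the step size, and the task callback are all invented for illustration:

```python
# Sketch of the control flow: the virtual object is stepped toward the
# indicated interaction position and, on arrival, the target task is
# performed there.

def step_toward(pos, target, step=1.0):
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    dist = (dx * dx + dy * dy) ** 0.5
    if dist <= step:
        return target                     # snap to the interaction position
    return (pos[0] + dx / dist * step, pos[1] + dy / dist * step)

def move_and_execute(pos, target, execute_task, max_steps=1000):
    for _ in range(max_steps):
        if pos == target:
            return execute_task(pos)      # arrived: perform the target task
        pos = step_toward(pos, target)
    return None                           # did not arrive within the budget
```
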
  • after the terminal presents the location guidance information of the interaction location corresponding to the target task, it can also output the virtual prop guidance information of the non-user character for the virtual object, where the virtual prop guidance information is used to guide the virtual object to use virtual props suitable for the target task.
  • In this way, the non-user character can also provide guidance prompt information about the virtual props needed to perform the target task, so that the virtual prop guidance prompts provided are likewise targeted.
  • The embodiment of the present application provides a task guidance method in a virtual scene that can provide the player, in a targeted manner, with the most urgent and appropriate guidance prompt information based on the player's current game progress (that is, the above-mentioned interaction progress). This enables players to quickly and easily locate the prompts they need, greatly shortens the guidance path, and improves guidance efficiency and user retention.
  • Fig. 9 is a schematic display diagram of the guidance prompt provided by the embodiment of the present application.
  • When the player needs a guidance prompt, the player can trigger a dialogue with the non-user character in the game. The non-user character presents a response message such as "Creator, what can I do for you?", together with options for the user to choose, such as "What to do next?" and "It's okay, let's go". When the player selects the option "What to do next?", it means that the player wants to receive guidance prompts.
  • In this case, the guidance interface will display a guidance message such as "I have lived in the plains for hundreds of years, and I know everything here".
  • guidance prompts mainly involve the following aspects: guidance conditions, guidance priority, guidance distinction and guidance strength.
  • the guidance condition refers to the condition that triggers the interactive guidance instruction.
  • Different guidance conditions correspond to different task guidance information.
  • The embodiment of this application strives to give the player the task guidance information most suitable for the current interaction progress. For example, the player is prompted to perform the virtual task of cutting down trees, and at the same time a flashing reminder is displayed on the game mini-map at a location where tree resources are abundant.
  • Guidance priority mainly lies in setting the display order of guidance prompts.
  • the guidance prompt order of each candidate virtual task is determined according to the guidance priority of each candidate virtual task. Arranging the guidance prompts of candidate virtual tasks with high guidance priority in the front makes it easier for the user to select and execute the candidate virtual task that is most beneficial to the progress of the current interaction.
  • the guidance distinction mainly lies in classifying the guidance prompts according to the different depths of interactions that players prefer.
  • In actual implementation, the candidate virtual tasks are classified according to the task category to which each candidate virtual task belongs, and candidate virtual tasks belonging to different task categories are displayed in different display areas. This avoids mixing all the candidate virtual tasks together for display and makes it easier for the user to select and execute the candidate virtual task that fits the current task category and is most conducive to the current interaction progress.
  • Guidance strength mainly lies in dividing guidance into strong guidance and weak guidance according to the necessity of the guidance. Strong guidance is used to remind players of virtual tasks that must be performed, and the prompt continues until the player executes the virtual task. Weak guidance is used to prompt virtual tasks that the player is suggested to perform; for example, for a virtual task that has not been executed within the target time period but is not required to be executed, the prompt is given several times, and when the number of prompts reaches the preset target number, the guidance prompt is no longer given. In this way, according to the player's current interaction progress and the necessity of the virtual task guidance, interactive guidance suitable for the current scene is given to the player in a timely and proactive manner, which improves guidance efficiency and the interactive experience.
  • Table 1 shows, for different types of virtual tasks, the prompt type (such as construction prompts, creation prompts, and combat prompts), the prompt priority number, the prompt conditions, the prompt strength (number of prompts), and the dialog content of the prompt.
  • Table 2 shows that when the task guidance information indicates the interaction position of the corresponding virtual task, each virtual task corresponds to a marker ID, which is displayed on the game mini-map in the form of that marker ID.
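As a hedged sketch of the Table 2 idea, a simple mapping from virtual tasks to marker IDs could drive the mini-map display; the task names and ID values below are invented for illustration and are not taken from the patent.

```python
# Hypothetical task -> marker ID table in the style of Table 2.
TASK_MARKERS = {
    "cut_trees": 101,    # flash near the forest
    "build_house": 102,  # flash at the build site
    "fight_boss": 103,   # flash at the boss location
}

def marker_for(task_id: str):
    """Return the mini-map marker ID for a task, or None if it has none."""
    return TASK_MARKERS.get(task_id)
```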
  • FIG. 10 is a schematic flowchart of the task guidance method in the virtual scene provided by the embodiment of the present application. Taking the coordination between the terminal and the server as an example, the task guidance method in the virtual scene provided by the embodiment of the present application is described; the method includes:
  • Step 201: In response to a trigger operation on the guidance control, the terminal receives an interactive guidance instruction and sends it to the server.
  • the guidance control is used to trigger a guidance session between the player and the non-user character
  • the guidance session is used to guide the player to perform game tasks in the game
  • the interactive guidance instruction is used to request guidance prompt information from the non-user character.
  • Step 202: The server determines the response dialog information fed back by the non-user character in response to the interactive guidance instruction.
  • Step 203: The server returns the response dialog information to the terminal.
  • after the server receives the interactive guidance instruction, it triggers the guidance dialogue behavior of the non-user character, such as controlling the non-user character to turn toward the player, and then feeds back the response dialogue information configured for the guidance dialogue behavior to the terminal for display.
  • Step 204: The terminal displays the response dialog information.
  • the response dialogue information includes response information and response options; for example, the response information is "Creator, what can I do for you?", and the response options include a first option and a second option, where the first option is "What should I do next?" and the second option is "It's all right, let's move on".
  • Step 205: In response to the trigger operation on the second option, the terminal sends an end-dialogue message.
  • the terminal sends an end-dialogue message to the server, and after receiving it the server ends the non-user character's guided dialogue behavior, so as to dismiss the conversation between the player and the non-user character displayed on the terminal.
  • Step 206: In response to the trigger operation on the first option, the terminal presents task guidance information corresponding to the interaction progress of the virtual object.
  • the task guidance information includes a guiding question and candidate virtual tasks corresponding to the guiding question; for example, showing the guiding question "I have lived in the plains for hundreds of years; I know everything here. What do you want to do?", and showing the candidate virtual tasks for the guiding question, such as "I want to build", "I want to take risks", and "I want to fight". These candidate virtual tasks correspond to the player's current interaction progress, and the user can select a target task from the multiple candidate virtual tasks to execute.
  • Step 207: Based on the task guidance information and in response to the determination instruction for the target task, the terminal sends a guidance request for the target task to the server.
  • the guidance request carries the task identifier of the target task.
  • Step 208: Based on the guidance request, the server determines guidance information corresponding to the target task.
  • the server screens out the guidance information corresponding to the target task.
  • the guidance information can be position guidance information corresponding to the interaction position of the target task.
  • for example, for a tree-cutting task, the guidance information is used to indicate the location of trees (e.g., a forest).
  • the guidance prompt table shown in Table 3 and the prompt type table shown in Table 4 can also be used to record the list of prompts corresponding to each prompt type.
  • Fig. 11 is a schematic diagram of screening guidance information provided by the embodiment of the present application. Corresponding to the three options "I want to take risks", "I want to build" and "I want to fight", the guidance prompts are divided into three categories, namely "Exploration Tips", "Creation Tips" and "Battle Tips"; each type of tip can contain multiple tip lists.
  • after the server obtains the option selected by the user, it first obtains the corresponding tip list according to the corresponding prompt type. The guidance prompts in the tip list are first grouped and sorted according to prompt priority.
  • Prompts with the same priority form one group, and the groups are sorted from high to low priority. Then, for the group of guidance prompts with the highest priority, steps 301-305 shown in Fig. 11 are executed: in step 301, judge whether the prompt condition is satisfied; if so, execute step 302, otherwise execute step 305. In step 302, judge whether the prompt belongs to strong guidance; if so, execute step 304, otherwise execute step 303. In step 303, judge whether the number of prompts for the guidance prompt has reached the preset target number of times (such as 5 times); if not, execute step 304; if so, execute step 305. In step 304, keep the guidance prompt; in step 305, filter out the guidance prompt.
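The screening flow of Fig. 11 (steps 301-305: check the prompt condition, check strong vs weak guidance, check the prompt count) can be sketched as follows. The field names and data shape are assumptions made for illustration; only the branching logic follows the text.

```python
from dataclasses import dataclass

@dataclass
class Prompt:
    text: str
    priority: int         # higher value = higher guidance priority
    condition_met: bool   # step 301: is the prompt condition satisfied?
    strong: bool          # step 302: strong guidance vs weak guidance
    times_shown: int = 0  # step 303: prompts already issued

TARGET_TIMES = 5  # preset target number of times from the text

def keep_prompt(p: Prompt) -> bool:
    if not p.condition_met:   # step 301 -> 305: filter out
        return False
    if p.strong:              # step 302 -> 304: keep
        return True
    # step 303: weak guidance is kept only until the target count is reached
    return p.times_shown < TARGET_TIMES

def screen(prompts: list) -> list:
    """Screen the highest-priority group, per the grouping order in the text."""
    if not prompts:
        return []
    top = max(p.priority for p in prompts)
    return [p for p in prompts if p.priority == top and keep_prompt(p)]
```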
  • Step 209: The server returns the guidance information corresponding to the target task to the terminal.
  • Step 210: The terminal displays the guidance information corresponding to the target task.
  • after receiving the guidance information returned by the server, the terminal displays it in a corresponding display manner according to the information type of the guidance information.
  • the game mini-map is displayed in the upper left corner, and flashing special effects are displayed at the interactive position in the mini-map to prompt the player to perform the target task at that position, providing the player with a game destination and experience goals.
  • when the player clicks the exit control for exiting the guidance in the game interface, or clicks anywhere in the game interface, the terminal responds to the user's trigger operation and sends an end-dialogue message to the server, and the server ends the guidance dialogue behavior after receiving the end-dialogue message.
  • the embodiment of the present application provides the most urgent next game goal according to the player's personal game preference and current game progress, helping the player focus on the next step so as to continue the game.
  • this allows players to quickly and easily locate the prompts they need, greatly shortening the guidance path and improving guidance efficiency; at the same time, when guidance prompts are provided through non-user characters, the prompt content can be wrapped in narrative copy, which improves the realism of the prompts, thereby preserving the immersion and sense of presence of open-world games, improving user experience, and in turn increasing user retention.
  • the following continues to illustrate an exemplary structure of the task guidance device 465 in the virtual scene provided by the embodiment of the present application implemented as software modules.
  • in some embodiments, the task guidance device 465 in the virtual scene is stored in the memory 460 in FIG. 2
  • the software modules can include:
  • the first receiving module 4651 is configured to receive an interactive guidance instruction for a non-user character associated with a virtual object
  • the interactive guidance instruction is used to instruct the non-user character to guide the virtual task of the virtual object in the virtual scene
  • the first presentation module 4652 is configured to present task guidance information corresponding to the interaction progress of the virtual object in response to the interaction guidance instruction;
  • the task guide information is used to guide the virtual object to execute at least one virtual task
  • the second presentation module 4653 is configured to present position guidance information of an interaction position corresponding to the target task in response to a determination instruction for the target task in the at least one virtual task based on the task guidance information.
  • the first receiving module is further configured to present a non-user character associated with the virtual object in the virtual scene, the non-user character moving following the movement of the virtual object; and, in response to a trigger operation on the non-user character, receive the interaction guidance instruction.
  • the first receiving module is further configured to present a guidance control
  • the guidance control is configured to trigger a guidance session between the virtual object and the non-user character, and the guidance session is used to guide the virtual object with respect to the virtual task in the virtual scene; in response to a trigger operation on the guidance control, the interaction guidance instruction is received.
  • the first receiving module is further configured to present a voice input control; in response to a trigger operation on the voice input control, present collection indication information for indicating that voice collection is in progress, and, when the collection indication information indicates that collection is complete, perform content recognition on the collected voice; when the content of the voice contains target content associated with the non-user character, the interaction guidance instruction is received.
  • the first receiving module is further configured to obtain a timer for periodically guiding the virtual object to execute a virtual task; when it is determined based on the timer that the target time has arrived and the virtual task has not been executed within the target time period before the target time, the interaction guidance instruction triggered by the timer is received.
  • the first presentation module is further configured to present a guidance interface corresponding to the non-user role; in the guidance interface, task guidance information corresponding to the interaction progress of the virtual object is presented.
  • the guide interface includes a first display area and a second display area
  • the second presentation module is further configured to display, in the first display area, the non-user character's guiding question for the virtual object, and display, in the second display area, the candidate virtual tasks corresponding to the guiding question, the candidate virtual tasks corresponding to the interaction progress of the virtual object; the guiding question and the candidate virtual tasks are determined as the task guidance information.
  • the first presentation module is further configured to determine the priority of each of the candidate virtual tasks when the number of candidate virtual tasks is at least two, and display each candidate virtual task corresponding to the guidance question in the second display area in a display style in which candidate virtual tasks with higher priority are shown first.
  • the second display area includes at least two sub-display areas, each of the sub-display areas corresponding to a task category; the first presentation module is further configured to, when the number of candidate virtual tasks is at least two, determine the task category to which each candidate virtual task belongs, and display the corresponding candidate virtual tasks in the corresponding sub-display areas of the second display area according to the task category.
  • the second presentation module is further configured to present the non-user character, and present a conversation bubble corresponding to the non-user character in an associated area of the non-user character; where the conversation bubble includes task guidance information corresponding to the interaction progress of the virtual object.
  • the device further includes: a second receiving module configured to receive input conversation response information for the non-user character, and present the conversation response information in the form of a conversation bubble in the associated area of the virtual object; when the conversation response information indicates that the target task is selected, a determination instruction for the target task is received.
  • the first presentation module is further configured to determine the interaction attribute of the virtual object, and the interaction attribute includes at least one of the following: interaction preference, interaction level, virtual goods obtained through interaction, and interaction environment; A corresponding interaction progress is determined based on the interaction attribute, and task guidance information corresponding to the interaction progress is presented.
  • the second presentation module is further configured to present a map corresponding to the virtual scene; in the map, position guidance information of an interaction position corresponding to the target task is presented.
  • the device further includes: a third presentation module configured to display a picture of the non-user character following the virtual object moving in the virtual scene; the second presentation module is further configured to present, during the movement of the non-user character following the virtual object, position guidance information in which the non-user character guides the virtual object to move to the interaction position corresponding to the target task.
  • after the position guidance information of the interaction position corresponding to the target task is presented, the device further includes: a motion control module configured to, in response to a control instruction for the virtual object triggered based on the position guidance information, control the virtual object to move toward the interaction position, and, when the virtual object moves to the interaction position, control the virtual object to perform the target task at the interaction position.
  • after the position guidance information of the interaction position corresponding to the target task is presented, the device further includes: a guidance output module configured to output virtual prop guidance information of the non-user character for the virtual object, where the virtual prop guidance information is used to guide the virtual object to use virtual props suitable for the target task.
  • An embodiment of the present application provides a computer program product or computer program, where the computer program product or computer program includes computer instructions, and the computer instructions are stored in a computer-readable storage medium.
  • the processor of the computer device reads the computer instruction from the computer-readable storage medium, and the processor executes the computer instruction, so that the computer device executes the task guidance method in the virtual scene described above in the embodiment of the present application.
  • the embodiment of the present application provides a computer-readable storage medium storing executable instructions.
  • when the executable instructions are executed by a processor, the processor is caused to execute the task guidance method in the virtual scene provided by the embodiment of the present application, for example, the method shown in FIG. 3.
  • the computer-readable storage medium may be a ferroelectric random access memory (FRAM), a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), flash memory, magnetic surface memory, an optical disc, a CD-ROM, etc.; it may also be any of various devices including one of the above memories or any combination thereof.
  • the executable instructions may take the form of programs, software, software modules, scripts, or code, written in any form of programming language (including compiled or interpreted languages, or declarative or procedural languages), and may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
  • the executable instructions may, but need not, correspond to files in a file system, and may be stored as part of a file that holds other programs or data, for example, in one or more scripts in a Hyper Text Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple cooperating files (for example, files that store one or more modules, subroutines, or sections of code).
  • the executable instructions may be deployed to be executed on one computing device, on multiple computing devices located at one site, or on multiple computing devices distributed across multiple sites and interconnected by a communication network.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Acoustics & Sound (AREA)
  • Human Computer Interaction (AREA)
  • Radar, Positioning & Navigation (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present application provides a task guidance method, apparatus and device in a virtual scene, a computer-readable storage medium, and a computer program product. The method includes: receiving an interaction guidance instruction for a non-user character associated with a virtual object, where the interaction guidance instruction is used to instruct the non-user character to guide the virtual object with respect to a virtual task in the virtual scene; in response to the interaction guidance instruction, presenting task guidance information corresponding to the interaction progress of the virtual object, where the task guidance information is used to guide the virtual object to execute at least one virtual task; and, based on the task guidance information and in response to a determination instruction for a target task among the at least one virtual task, presenting position guidance information of an interaction position corresponding to the target task.

Description

Task guidance method and apparatus in a virtual scene, electronic device, storage medium, and program product
Cross-reference to related applications
This application is based on and claims priority to Chinese patent application No. 202111662320.X filed on December 31, 2021, and also claims priority to Chinese patent application No. 202111319975.7, filed on November 9, 2021 and entitled "Task guidance method, apparatus, device, medium and program product in a virtual scene".
Technical Field
The present application relates to human-computer interaction technology, and in particular to a task guidance method and apparatus in a virtual scene, an electronic device, a computer-readable storage medium, and a computer program product.
Background
In virtual scene applications, when a virtual object feels lost during interaction in the virtual scene and does not know which virtual task to perform, it can seek guidance prompts from a non-user character. When providing guidance prompts to the user, the related art displays all guidance information in the virtual scene for the user to select from one by one; the user cannot conveniently locate the guidance information they need and thus lacks an interaction goal, resulting in low guidance efficiency and low user retention.
Summary
Embodiments of the present application provide a task guidance method and apparatus in a virtual scene, an electronic device, a computer-readable storage medium, and a computer program product, which can provide a virtual object with targeted virtual task guidance prompts, thereby improving the guidance efficiency of virtual tasks.
The technical solutions of the embodiments of the present application are implemented as follows:
An embodiment of the present application provides a task guidance method in a virtual scene, including:
receiving an interaction guidance instruction for a non-user character associated with a virtual object;
where the interaction guidance instruction is used to instruct the non-user character to guide the virtual object with respect to a virtual task in the virtual scene;
in response to the interaction guidance instruction, presenting task guidance information corresponding to the interaction progress of the virtual object;
where the task guidance information is used to guide the virtual object to execute at least one virtual task; and
based on the task guidance information, in response to a determination instruction for a target task among the at least one virtual task, presenting position guidance information of an interaction position corresponding to the target task.
An embodiment of the present application provides a task guidance apparatus in a virtual scene, including:
a first receiving module configured to receive an interaction guidance instruction for a non-user character associated with a virtual object;
where the interaction guidance instruction is used to instruct the non-user character to guide the virtual object with respect to a virtual task in the virtual scene;
a first presentation module configured to present, in response to the interaction guidance instruction, task guidance information corresponding to the interaction progress of the virtual object;
where the task guidance information is used to guide the virtual object to execute at least one virtual task; and
a second presentation module configured to present, based on the task guidance information and in response to a determination instruction for a target task among the at least one virtual task, position guidance information of an interaction position corresponding to the target task.
An embodiment of the present application provides an electronic device, including:
a memory configured to store executable instructions; and
a processor configured to implement, when executing the executable instructions stored in the memory, the task guidance method in a virtual scene provided by the embodiments of the present application.
An embodiment of the present application provides a computer-readable storage medium storing executable instructions which, when executed by a processor, cause the processor to implement the task guidance method in a virtual scene provided by the embodiments of the present application.
The embodiments of the present application have the following beneficial effects:
In the embodiments of the present application, guidance for virtual tasks can be provided to a virtual object in a virtual scene. When guidance prompts are provided to the virtual object, since the provided task guidance information corresponds to the interaction progress of the virtual object, guidance for virtual tasks corresponding to the current interaction progress can be provided to the virtual object in a targeted manner based on the player's current interaction progress, so that the player can quickly and conveniently locate the task guidance prompt they need. This greatly shortens the guidance path, improves the guidance efficiency of virtual tasks in the virtual scene and the hardware processing efficiency of the device, and increases user retention.
Brief Description of the Drawings
FIG. 1A is a schematic diagram of an application scenario of the task guidance method in a virtual scene provided by an embodiment of the present application;
FIG. 1B is a schematic diagram of an application scenario of the task guidance method in a virtual scene provided by an embodiment of the present application;
FIG. 2 is a schematic structural diagram of a terminal device 400 provided by an embodiment of the present application;
FIG. 3 is a schematic flowchart of the task guidance method in a virtual scene provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of triggering an interaction guidance instruction provided by an embodiment of the present application;
FIG. 5 is a schematic diagram of triggering an interaction guidance instruction provided by an embodiment of the present application;
FIG. 6 is a schematic diagram of a display interface of task guidance information provided by an embodiment of the present application;
FIG. 7 is a schematic diagram of a display interface of position guidance information provided by an embodiment of the present application;
FIG. 8 is a schematic diagram of a guidance prompt interface provided by an embodiment of the present application;
FIG. 9 is a schematic diagram of displaying guidance prompts provided by an embodiment of the present application;
FIG. 10 is a schematic flowchart of the task guidance method in a virtual scene provided by an embodiment of the present application;
FIG. 11 is a schematic diagram of screening guidance information provided by an embodiment of the present application.
Detailed Description
To make the objectives, technical solutions and advantages of the present application clearer, the present application is described in detail below with reference to the accompanying drawings. The described embodiments should not be regarded as limiting the present application, and all other embodiments obtained by those of ordinary skill in the art without creative work fall within the protection scope of the present application.
In the following description, the term "some embodiments" describes subsets of all possible embodiments; it can be understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
In the following description, the terms "first/second/..." merely distinguish similar objects and do not represent a specific ordering of the objects. It can be understood that, where permitted, "first/second/..." may be interchanged in a specific order or sequence, so that the embodiments of the present application described herein can be implemented in orders other than those illustrated or described herein.
Unless otherwise defined, all technical and scientific terms used herein have the same meanings as commonly understood by those skilled in the technical field of the present application. The terms used herein are only for the purpose of describing the embodiments of the present application and are not intended to limit the present application.
Before describing the embodiments of the present application in detail, the nouns and terms involved in the embodiments of the present application are explained; the nouns and terms involved in the embodiments of the present application are applicable to the following interpretations.
1) Client: an application running in a terminal to provide various services, such as a video playback client or a game client.
2) "In response to": used to indicate the condition or state on which an executed operation depends; when the condition or state on which it depends is satisfied, the one or more executed operations may be real-time or may have a set delay. Unless otherwise specified, there is no restriction on the execution order of multiple executed operations.
3) Virtual scene: a virtual scene displayed (or provided) when an application runs on a terminal. The virtual scene may be a simulation environment of the real world, a semi-simulated and semi-fictional virtual environment, or a purely fictional virtual environment. The virtual scene may be any of a two-dimensional virtual scene, a 2.5-dimensional virtual scene, or a three-dimensional virtual scene; the embodiments of the present application do not limit the dimension of the virtual scene. For example, the virtual scene may include sky, land, ocean, etc.; the land may include environmental elements such as deserts and cities, and the user can control a virtual object to move in the virtual scene.
4) Virtual object: the image of any person or thing that can interact in the virtual scene, or a movable object in the virtual scene. The movable object may be a virtual character, a virtual animal, an anime character, etc., such as a person or animal displayed in the virtual scene. The virtual object may be a virtual avatar representing the user in the virtual scene. The virtual scene may include multiple virtual objects, and each virtual object has its own shape and volume in the virtual scene, occupying part of the space in the virtual scene.
5) Scene data: feature data in the virtual scene, which may include, for example, the position of the virtual object in the virtual scene, the waiting time required for various functions configured in the virtual scene (depending on the number of times the same function can be used within a specific time), and attribute values of various states of game virtual objects, such as health points and mana points.
Embodiments of the present application provide a task guidance method and apparatus in a virtual scene, an electronic device, a computer-readable storage medium, and a computer program product, which can provide a virtual object with targeted virtual task guidance prompts, thereby improving the guidance efficiency of virtual tasks. To facilitate understanding of the task guidance method in a virtual scene provided by the embodiments of the present application, an exemplary implementation scenario is first described. The virtual scene provided by the embodiments of the present application may be output solely based on a terminal device or a server, or output through coordination of a terminal device and a server.
In some embodiments, the virtual scene may also be an environment for game character interaction, for example, for game characters to battle in the virtual scene; by controlling the actions of the game characters, both sides can interact in the virtual scene, so that the user can relieve the pressure of life during the game.
In one implementation scenario, referring to FIG. 1A, FIG. 1A is a schematic diagram of an application scenario of the task guidance method in a virtual scene provided by an embodiment of the present application. In this application scenario, the task guidance method in a virtual scene provided by the embodiment of the present application relies entirely on the terminal device; the relevant data computation of the virtual scene 100 can be completed using the graphics processing hardware computing capability of the terminal device 400, for example in a standalone/offline-mode game, with the output of the virtual scene completed through various types of terminal devices 400 such as smartphones, tablet computers, and virtual reality/augmented reality devices. As an example, the types of graphics processing hardware include a central processing unit (CPU) and a graphics processing unit (GPU).
When forming the visual perception of the virtual scene 100, the terminal device 400 computes the data required for display through the graphics computing hardware, completes the loading, parsing and rendering of the display data, and outputs, on the graphics output hardware, video frames capable of forming visual perception of the virtual scene, for example, presenting two-dimensional video frames on the display screen of a smartphone, or projecting video frames that achieve a three-dimensional display effect on the lenses of augmented reality/virtual reality glasses. In addition, to enrich the perception effect, the terminal device 400 can also form one or more of auditory perception, tactile perception, motion perception and taste perception by means of different hardware.
As an example, a client 410 (e.g., a standalone game application) runs on the terminal device 400. During the running of the client 410, a virtual scene 100 including role-playing is output. The virtual scene 100 may be an environment for game character interaction, for example, plains, streets, valleys, etc. for game characters to battle. The virtual scene 100 includes a virtual object 110 and a non-user character 120. The virtual object 110 may be a game character controlled by a user (or player); that is, the virtual object 110 is controlled by a real user and will move in the virtual scene in response to the real user's operations on controllers (including a touch screen, voice-activated switch, keyboard, mouse, joystick, etc.). For example, when the real user moves the joystick to the left, the virtual object 110 will move to the left in the virtual scene; it can also stay still, jump, and use various functions (such as skills and props). The non-user character 120 is an entity object in the virtual scene that is associated with the virtual object 110; it may exist in the virtual scene at all times and move following the movement of the virtual object 110, or may be presented in the virtual scene when called by the virtual object 110, and is used to provide guidance for virtual tasks of the virtual object 110 in the virtual scene, such as guiding the virtual object 110 to execute a virtual task corresponding to the current interaction progress.
As an example, the terminal receives an interaction guidance instruction for the non-user character associated with the virtual object; in response to the interaction guidance instruction, presents task guidance information corresponding to the interaction progress of the virtual object, where the task guidance information is used to guide the virtual object to execute at least one virtual task; and, based on the task guidance information and in response to a determination instruction for a target task among the at least one virtual task, presents position guidance information of an interaction position corresponding to the target task. Since the presented task guidance information corresponds to the current interaction progress of the virtual object, the guided virtual task is more targeted, which improves guidance efficiency.
In another implementation scenario, referring to FIG. 1B, FIG. 1B is a schematic diagram of an application scenario of the task guidance method in a virtual scene provided by an embodiment of the present application. In this application scenario, the computing capabilities of the terminal device 400 and the server 200 are used to complete the relevant data computation of the virtual scene, and the virtual scene is output on the terminal device 400. Taking the formation of visual perception of the virtual scene 100 as an example, the server 200 computes the display data related to the virtual scene (such as scene data) and sends it to the terminal device 400 through the network 300; the terminal device 400 relies on graphics computing hardware to complete the loading, parsing and rendering of the computed display data, and relies on graphics output hardware to output the virtual scene to form visual perception, for example presenting two-dimensional video frames on the display screen of a smartphone, or projecting video frames that achieve a three-dimensional display effect on the lenses of augmented reality/virtual reality glasses. For the perception of the form of the virtual scene, it can be understood that output can be made with the help of the corresponding hardware of the terminal device 400, for example using a microphone to form auditory perception, using a vibrator to form tactile perception, and so on.
As an example, a client 410 (e.g., a network-version game application) runs on the terminal device 400, interacting with other users in the game by connecting to the server 200 (e.g., a game server), and the terminal device 400 outputs the virtual scene 100 of the client 410. The virtual scene 100 may be an environment for game character interaction, for example, plains, streets, valleys, etc. for game characters to battle. The virtual scene 100 includes a virtual object 110 and a non-user character 120. The virtual object 110 may be a game character controlled by a user (or player); that is, the virtual object 110 is controlled by a real user and will move in the virtual scene in response to the real user's operations on controllers (including a touch screen, voice-activated switch, keyboard, mouse, joystick, etc.). For example, when the real user moves the joystick to the left, the virtual object 110 will move to the left in the virtual scene; it can also stay still, jump, and use various functions (such as skills and props). The non-user character 120 is an entity object in the virtual scene that is associated with the virtual object 110; it may exist in the virtual scene at all times and move following the movement of the virtual object 110, or may be presented in the virtual scene when called by the virtual object 110, and is used to provide guidance for virtual tasks of the virtual object 110 in the virtual scene, such as guiding the virtual object 110 to execute a virtual task corresponding to the current interaction progress.
As an example, the terminal receives an interaction guidance instruction for the non-user character associated with the virtual object; in response to the interaction guidance instruction, presents task guidance information corresponding to the interaction progress of the virtual object, where the task guidance information is used to guide the virtual object to execute at least one virtual task; and, based on the task guidance information and in response to a determination instruction for a target task among the at least one virtual task, presents position guidance information of an interaction position corresponding to the target task. Since the presented task guidance information corresponds to the current interaction progress of the virtual object, the guided virtual task is more targeted, which improves the guidance efficiency of virtual tasks in the virtual scene.
In some embodiments, the terminal device 400 can implement the task guidance method in a virtual scene provided by the embodiments of the present application by running a computer program. For example, the computer program may be a native program or software module in the operating system; a native application (APP), i.e., a program that needs to be installed in the operating system to run, such as a shooting game APP (i.e., the above client 410); a mini program, i.e., a program that only needs to be downloaded into a browser environment to run; or a game mini program that can be embedded in any APP. In short, the above computer program may be an application, module, or plug-in in any form.
Taking the computer program being an application as an example, in actual implementation, the terminal device 400 has installed and runs an application supporting the virtual scene. The application may be any of a first-person shooting game (FPS), a third-person shooting game, a virtual reality application, a three-dimensional map program, or a multiplayer gunfight survival game. The user uses the terminal device 400 to operate a virtual object located in the virtual scene to carry out activities, including but not limited to at least one of: adjusting body posture, crawling, walking, running, riding, jumping, driving, picking up, shooting, attacking, throwing, and building virtual buildings. Illustratively, the virtual object may be a virtual character, such as a simulated human character or an anime character.
In other embodiments, the embodiments of the present application may also be implemented by means of cloud technology, which refers to a hosting technology that unifies a series of resources such as hardware, software and network within a wide-area network or local-area network to realize the computation, storage, processing and sharing of data.
Cloud technology is a general term for network technology, information technology, integration technology, management platform technology, application technology, etc. applied based on the cloud computing business model; it can form a resource pool that is used on demand, flexibly and conveniently. Cloud computing technology will become an important support, as the background services of technical network systems require a large amount of computing and storage resources.
As an example, the server 200 in FIG. 1B may be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, content delivery networks (CDN), and big data and artificial intelligence platforms. The terminal device 400 may be, but is not limited to, a smartphone, tablet computer, laptop computer, desktop computer, smart speaker, smart watch, etc. The terminal device 400 and the server 200 may be connected directly or indirectly through wired or wireless communication, which is not limited in the embodiments of the present application.
In some embodiments, the method provided by the embodiments of the present application may be implemented by various electronic devices or computer devices; for example, it may be implemented by a terminal alone, by a server alone, or by a terminal and a server in coordination. The electronic device implementing the method provided by the embodiments of the present application is described below; the electronic device may be a terminal device or a server. Taking the electronic device being a terminal device as an example, the structure of the terminal device 400 shown in FIG. 1A is described. Referring to FIG. 2, FIG. 2 is a schematic structural diagram of the terminal device 400 provided by an embodiment of the present application. The terminal device 400 shown in FIG. 2 includes: at least one processor 420, a memory 460, at least one network interface 430, and a user interface 440. The components in the terminal device 400 are coupled together through a bus system 450. It can be understood that the bus system 450 is used to implement connection and communication between these components. In addition to a data bus, the bus system 450 also includes a power bus, a control bus, and a status signal bus. However, for clarity, the various buses are all labeled as the bus system 450 in FIG. 2.
The processor 420 may be an integrated circuit chip with signal processing capabilities, such as a general-purpose processor, a digital signal processor (DSP), or another programmable logic device, discrete gate or transistor logic device, or discrete hardware component, where the general-purpose processor may be a microprocessor or any conventional processor.
The user interface 440 includes one or more output devices 441 enabling the presentation of media content, including one or more speakers and/or one or more visual display screens. The user interface 440 also includes one or more input devices 442, including user interface components that facilitate user input, such as a keyboard, mouse, microphone, touch-screen display, camera, and other input buttons and controls.
The memory 460 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid-state memory, hard disk drives, optical disc drives, and the like. The memory 460 optionally includes one or more storage devices physically located away from the processor 420.
The memory 460 includes volatile memory or non-volatile memory, and may also include both volatile and non-volatile memory. The non-volatile memory may be a read-only memory (ROM), and the volatile memory may be a random access memory (RAM). The memory 460 described in the embodiments of the present application is intended to include any suitable type of memory.
In some embodiments, the memory 460 can store data to support various operations; examples of these data include programs, modules, and data structures, or subsets or supersets thereof, as exemplified below.
Operating system 461: includes system programs for handling various basic system services and performing hardware-related tasks, such as a framework layer, core library layer, and driver layer, used to implement various basic services and handle hardware-based tasks;
Network communication module 462: used to reach other computing devices via one or more (wired or wireless) network interfaces 430; exemplary network interfaces 430 include Bluetooth, Wireless Fidelity (WiFi), Universal Serial Bus (USB), etc.;
Presentation module 463: used to enable the presentation of information (e.g., a user interface for operating peripheral devices and displaying content and information) via one or more output devices 441 (e.g., display screens, speakers) associated with the user interface 440;
Input processing module 464: used to detect one or more user inputs or interactions from one of the one or more input devices 442 and translate the detected inputs or interactions.
In some embodiments, the task guidance apparatus in a virtual scene provided by the embodiments of the present application may be implemented in software. FIG. 2 shows the task guidance apparatus 465 in a virtual scene stored in the memory 460, which may be software in the form of a program, plug-in, etc., including the following software modules: a first receiving module 4651, a first presentation module 4652, and a second presentation module 4653. These modules are logical, and thus can be arbitrarily combined or further split according to the functions implemented; the function of each module is described below.
In other embodiments, the task guidance apparatus in a virtual scene provided by the embodiments of the present application may be implemented in hardware. As an example, it may be a processor in the form of a hardware decoding processor programmed to execute the task guidance method in a virtual scene provided by the embodiments of the present application; for example, the processor in the form of a hardware decoding processor may employ one or more application-specific integrated circuits (ASICs), DSPs, programmable logic devices (PLDs), complex programmable logic devices (CPLDs), field-programmable gate arrays (FPGAs), or other electronic elements.
The task guidance method in a virtual scene provided by the embodiments of the present application is described below with reference to the accompanying drawings. The task guidance method in a virtual scene provided by the embodiments of the present application may be executed by the terminal device 400 in FIG. 1A alone, or executed by the terminal device 400 and the server 200 in FIG. 1B in coordination.
In the following, the task guidance method in a virtual scene provided by the embodiments of the present application being executed by the terminal device 400 in FIG. 1A alone is taken as an example. Referring to FIG. 3, FIG. 3 is a schematic flowchart of the task guidance method in a virtual scene provided by an embodiment of the present application, which will be described with reference to the steps shown in FIG. 3. It should be noted that the method shown in FIG. 3 can be executed by various forms of computer programs running on the terminal device 400 and is not limited to the above client 410; it may also be the operating system 461, software modules, and scripts described above, so the client should not be regarded as limiting the embodiments of the present application.
Step 101: The terminal receives an interaction guidance instruction for a non-user character associated with a virtual object.
Here, the interaction guidance instruction is used to instruct the non-user character to guide the virtual object with respect to a virtual task in the virtual scene.
Here, a client is installed on the terminal; the client may be a client of the virtual scene, such as a game client, or another client with a game function, such as a video client with a game function. Taking the client being a client of the virtual scene as an example, when the terminal runs the client, the interface of the virtual scene can be displayed, and in this interface the virtual object of the currently logged-in account is displayed. Based on this virtual object, the user can interact with virtual objects of other users in the virtual scene, or execute virtual tasks in the virtual scene, etc. In practical applications, each virtual object can be associated with at least one non-user character, and the non-user character can guide the virtual object with respect to virtual tasks in the virtual scene; this guidance can be realized through an interaction guidance instruction triggered by the terminal in the interface of the virtual scene.
In some embodiments, the terminal may receive the interaction guidance instruction for the non-user character associated with the virtual object as follows: presenting the non-user character associated with the virtual object in the virtual scene, where the non-user character moves following the movement of the virtual object; and, in response to a trigger operation on the non-user character, receiving the interaction guidance instruction.
In practical applications, the non-user character may be an entity object in the virtual scene that is associated with the virtual object. The non-user character may always accompany the virtual object in the virtual scene, or may appear only in specific situations; for example, the non-user character appears to give guidance when the virtual object is in danger or confusion, or appears upon receiving a call instruction from the virtual object, and so on. The embodiments of the present application do not limit the timing or form of the non-user character's appearance. For example, referring to FIG. 4, FIG. 4 is a schematic diagram of triggering an interaction guidance instruction provided by an embodiment of the present application. After the non-user character 401 appears in the virtual scene, it moves along with the movement of the virtual object 402. When, during interaction in the virtual scene, the user is unsure what interaction operation to perform next or what virtual task to execute, the user can trigger (click, double-click, long-press, etc.) the non-user character 401; in response to this trigger operation, the terminal receives the interaction guidance instruction and presents the corresponding task guidance information to provide interaction guidance for the virtual object.
In some embodiments, the terminal may receive the interaction guidance instruction for the non-user character associated with the virtual object as follows: presenting a guidance control, where the guidance control is used to trigger a guidance session between the virtual object and the non-user character, and the guidance session is used to guide the virtual object with respect to virtual tasks in the virtual scene; and, in response to a trigger operation on the guidance control, receiving the interaction guidance instruction.
Here, the guidance control is the entry for triggering a guidance session between the virtual object and the non-user character, and is also the tool through which the user interacts with the non-user character (e.g., inputs or operates data). It can be presented in the interface of the virtual scene in the form of an icon or button, and the user can trigger the guidance control at any time to establish a session connection between the non-user character and the virtual object, and request interaction guidance from the non-user character through the established session connection. For example, referring to FIG. 5, FIG. 5 is a schematic diagram of triggering an interaction guidance instruction provided by an embodiment of the present application. When the user triggers (clicks, double-clicks, long-presses, etc.) the guidance control 501, the terminal responds to the trigger operation, establishes a session connection between the non-user character and the virtual object, and receives the interaction guidance instruction, so as to present the corresponding task guidance information for guiding the virtual object's interaction.
In some embodiments, the terminal may receive the interaction guidance instruction for the non-user character associated with the virtual object as follows: presenting a voice input control; in response to a trigger operation on the voice input control, presenting collection indication information for indicating that voice collection is in progress, and, when the collection indication information indicates that collection is complete, performing content recognition on the collected voice; and, when the content of the voice contains target content associated with the non-user character, receiving the interaction guidance instruction.
Here, the voice input control is the entry for triggering a guidance session between the virtual object and the non-user character, and is also the tool through which the user calls out the non-user character by voice to establish a guidance session. When the user clicks the voice input control, the terminal collects the voice recorded by the user and performs content recognition on the voice. When the user's content contains the target content for calling out the non-user character, the non-user character can be called out and the interaction guidance instruction is triggered, where the target content is preset content capable of calling out the non-user character associated with the virtual object.
In some embodiments, the terminal may receive the interaction guidance instruction for the non-user character associated with the virtual object as follows: obtaining a timer for periodically guiding the virtual object to execute a virtual task; and, when it is determined based on the timer that the target time has arrived and the virtual object has not executed the virtual task within the target time period before the target time, receiving the interaction guidance instruction triggered by the timer.
Here, the above target time and target time period can be set according to the actual situation. The interaction guidance instruction can be triggered periodically by the timer, that is, interaction guidance of the virtual object is triggered automatically; for example, every day from 5 pm to 7 pm, it is judged whether the virtual object has executed the virtual task of building a resting place, and when it is determined that the virtual object has not yet executed the virtual task of building a resting place, the interaction guidance instruction is automatically triggered to guide the virtual object to build a resting place for rest.
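A minimal sketch of the timer-triggered check in the example above (5 pm to 7 pm daily, resting place not yet built). The hour window and the function name are illustrative assumptions, not the patent's implementation.

```python
def should_trigger_guidance(now_hour: int, task_done: bool,
                            window=(17, 19)) -> bool:
    """Trigger the interaction guidance instruction when the target time
    window has been reached and the virtual task was not executed before it."""
    start, end = window
    return start <= now_hour < end and not task_done
```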
在实际应用中,非用户角色还可基于虚拟对象当前的交互进度,及待引导虚拟任务的必要程度,主动给虚拟对象提供强引导或弱引导,其中,强引导用于提示虚拟对象必须执行的虚拟任务,如对于在目标时间段内尚未执行但必须执行的虚拟任务,持续提示引导直至虚拟对象执行该虚拟任务;弱引导用于提示建议虚拟对象执行的虚拟任务,也即在该弱引导下,用户基于该虚拟对象可选择执行或不执行该虚拟任务,如对于在目标时间段内尚未执行但非必须执行的虚拟任务,提示若干次(例如周期性的提示3次),当提示次数达到预设的目标次数时,不再提示引导。如此,根据虚拟对象当前的交互进度和虚拟任务的类别,及时主动给予玩家适合当下场景的交互引导,提高了引导效率和交互体验。
步骤102:响应于交互引导指令,呈现与虚拟对象的交互进度相对应的任务引导信息。
其中,任务引导信息用于引导虚拟对象执行至少一种虚拟任务。在一些实施例中,虚拟任务可以包括以下任务中至少之一:执行针对虚拟场景中虚拟物品的目标操作、基于虚拟场景中的虚拟材料进行目标道具的制作,执行与至少一个目标虚拟对象的交互操作;例如挖掘地下器材配件、金钱和其他有用的东西,或收集木材、石料、矿石等资源制造工具、搭建房屋,还可以是击杀各种强大对手,等等。
在实际应用中,当终端接收到交互引导指令时,在虚拟场景的界面中呈现非用户角色给予虚拟对象当下应该执行的虚拟任务的任务引导信息,例如,图5中,在虚拟场景的界面中呈现非用户角色响应该触发操作的如“创造师,找我有什么事?”这一应答信息502,并呈现应答信息502对应的可供用户选择的候选虚拟任务,如“接下来干嘛好”这一虚拟任务503、“没事,继续走吧”这一虚拟任务504,用户可从中选择需要所需要执行的虚拟任务。
在一些实施例中,终端可通过如下方式呈现与虚拟对象的交互进度相对应的任务引导信息:确定虚拟对象的交互属性,交互属性包括以下至少之一:交互偏好、交互等级、交互所得的虚拟物资、交互环境;基于交互属性确定对应的交互进度,并呈现与交互进度相对应的任务引导信息。
其中,交互偏好用于表征虚拟对象的交互倾向性,如虚拟对象是偏好目标道具的制作(如建造房屋)、还是偏好与目标虚拟对象进行交互(如与目标对象进行战斗)等,对于具有特定交互偏好的虚拟对象而言,在不同交互进度所最需要的引导信息往往是不同的,如对于偏好建造而不偏好战斗的虚拟对象,在交互初期往往更需要有关 战斗的引导提示,在交互后期或高级阶段,往往更需要有关建造的引导提示。
交互等级是指虚拟对象的交互进阶程度,不同的交互等级对应不同难度或不同类别的虚拟任务,如交互等级越高、相应的虚拟任务就越具有挑战性。虚拟物质是指虚拟对象在交互过程中已获取的物质,根据虚拟对象所得的虚拟物质,提供与虚拟物质对应的任务引导信息,如虚拟对象已获得较多建筑材料,则给予有关建造的引导提示,以引导虚拟对象利用已有建筑材料执行建造房屋的虚拟任务;又如虚拟对象至今尚未获取任何木材,则给予有关伐树的引导提示,以引导虚拟对象执行伐树的虚拟任务,以获取木材为后续建造房屋或船只等做准备。交互环境是指虚拟对象在虚拟场景中的处境,或虚拟对象持有的虚拟道具所针对目标区域的环境,如虚拟对象所持瞄准镜瞄准存在敌人的小岛时,根据虚拟对象与敌人的实力对比,呈现是否引导虚拟对象与敌人进行交战的引导提示,如虚拟对象的实力强于敌人的实力,与敌人交战时获胜的概率较大时,呈现引导虚拟对象与敌人进行交战的引导提示;当虚拟对象的实力与敌人实力相差悬殊,虚拟对象与敌人交战获胜概率较小或无获胜可能时,呈现引导虚拟对象不与敌人进行交战的引导提示。
When guidance prompts are provided to the virtual object based on at least one of the above interaction attributes, the interaction progress of the target virtual task the virtual object is currently performing is determined from its interaction attributes, and the task guidance information is then determined based on that progress. For example, if the virtual object prefers building and is determined to be building when it requests interaction guidance, building task guidance information matching the current building progress is provided. This offers the virtual object the most urgent virtual task best suited to its current interaction situation, so guidance is efficient, helps the virtual object enjoy interacting in the virtual scene, and improves user retention.
In some embodiments, the terminal may present the task guidance information corresponding to the interaction progress of the virtual object as follows: presenting a guidance interface corresponding to the non-user character; and presenting, in the guidance interface, the task guidance information corresponding to the interaction progress of the virtual object. In some embodiments, the guidance interface includes a first display region and a second display region, and the terminal may present the task guidance information as follows: displaying, in the first display region, a guidance question posed by the non-user character to the virtual object, and displaying, in the second display region, candidate virtual tasks corresponding to the guidance question, the candidate virtual tasks corresponding to the interaction progress of the virtual object; and determining the displayed guidance question and candidate virtual tasks as the task guidance information.
Here, the task guidance information may be displayed in a floating window or pop-up overlaid on the interface of the virtual scene, or in a guidance interface independent of that interface; it may be displayed as text or played as speech. In practical applications, the task guidance information may include a guidance question posed by the non-user character to the virtual object and candidate virtual tasks for the question. For example, see Figure 6, a schematic diagram of a display interface of task guidance information provided by an embodiment of this application: the guidance interface includes a first display region 601 and a second display region 602; the guidance question is displayed in the first display region 601, and the candidate virtual tasks for the question are displayed in the second display region 602. These candidate virtual tasks all correspond to the interaction progress of the virtual object, and the user can select a target task from them to perform.
In some embodiments, the terminal may display the candidate virtual tasks corresponding to the guidance question in the second display region as follows: when there are at least two candidate virtual tasks, determining the priority of each candidate virtual task; and displaying the candidate virtual tasks in the second display region with higher-priority tasks placed first.
Here, the priority may be determined according to the importance and urgency of each candidate virtual task and the benefit it brings to the virtual object. Generally, the more important or urgent a candidate virtual task is, or the greater the benefit it brings, the higher its priority, and higher-priority candidate virtual tasks are listed first. For example, if "build" fits the current interaction progress better than "adventure", the "build" candidate virtual task is placed before the "adventure" one when the candidates are displayed. This makes it easier for the user to select the candidate virtual task most beneficial to the current interaction progress, puts the options that best fit the user's actual situation first, saves the user time when choosing among many virtual tasks, improves selection efficiency when there are multiple candidate virtual tasks, and improves the user experience.
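The priority-first ordering can be sketched in a few lines. This is a hypothetical illustration: the task names and numeric priorities are invented; in practice the priorities would come from the guidance prompt table.

```python
# Hypothetical sketch: order candidate tasks so that higher-priority ones are
# displayed first in the second display region.
tasks = [("adventure", 1), ("build", 3), ("fight", 2)]  # (name, priority)

ordered = [name for name, _ in sorted(tasks, key=lambda t: t[1], reverse=True)]
print(ordered)  # ['build', 'fight', 'adventure']
```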
In some embodiments, the second display region includes at least two sub-display regions, each corresponding to a task category; the terminal may display the candidate virtual tasks corresponding to the guidance question in the second display region as follows: when there are at least two candidate virtual tasks, determining the task category to which each candidate virtual task belongs; and displaying, by task category, the corresponding candidate virtual tasks in the corresponding sub-display regions of the second display region.
Here, in practical applications, virtual tasks are divided into multiple task categories according to their attributes; for example, the task categories may include building, adventure and combat. Usually each task category includes multiple virtual tasks: building tasks may include, without limitation, finding material A, finding material B, building house 1 and building boat 2, while adventure tasks include, without limitation, adventure task 1, adventure task 2, and so on. When the task guidance information includes multiple candidate virtual tasks, the candidate virtual tasks belonging to different task categories are displayed separately in different sub-display regions according to the category each belongs to. This avoids mixing all candidate virtual tasks together, makes it easy for the user to select and perform the candidate virtual task that suits the current task category and most benefits the current interaction progress, and improves the user experience.
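The per-category grouping can be sketched as below. This is a hypothetical illustration; the task names mirror the examples in the text, and each key of the resulting mapping would feed one sub-display region.

```python
# Hypothetical sketch: group candidate tasks by category so each category is
# shown in its own sub-display region instead of mixing all tasks together.
from collections import defaultdict

candidates = [("find material A", "build"), ("adventure task 1", "adventure"),
              ("build house 1", "build")]

by_category = defaultdict(list)
for name, category in candidates:
    by_category[category].append(name)

print(dict(by_category))
# {'build': ['find material A', 'build house 1'], 'adventure': ['adventure task 1']}
```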
In some embodiments, the terminal may present the task guidance information corresponding to the interaction progress of the virtual object as follows: presenting the non-user character, and presenting, in a region associated with the non-user character, a conversation bubble corresponding to the non-user character, the conversation bubble including the task guidance information corresponding to the interaction progress of the virtual object.
Here, while the non-user character gives the virtual object task guidance information, a conversation bubble is presented in a region associated with the non-user character, such as at some position around it. There may be one or more conversation bubbles, and each bubble may include one or more pieces of task guidance information. Displaying the task guidance information in conversation bubbles makes the prompts more lifelike, better matches a real guidance scene, and improves the user experience.
In some embodiments, the terminal receives input conversation response information directed at the non-user character and presents the conversation response information in the form of a conversation bubble in a region associated with the virtual object; when the conversation response information indicates selection of the target task, the terminal receives the confirmation instruction for the target task.
Here, when the non-user character gives the virtual object task guidance information in conversation bubbles, the virtual object may likewise feed back conversation response information to the task guidance information in conversation bubbles. In some embodiments, the terminal displays the non-user character's conversation bubbles and the virtual object's conversation bubbles differently, for example in different colors or with different bubble sizes. When the conversation response information indicates selection of a target task among multiple candidate virtual tasks, the confirmation instruction for the target task is received, and the task guidance information about the target task is presented in response to the confirmation instruction. In this way, the virtual object and the non-user character exchange conversation information through conversation bubbles, which makes the prompts more lifelike, better matches a real guidance scene, and improves the user experience.
In some embodiments, before presenting the task guidance information corresponding to the interaction progress of the virtual object, the terminal may further perform the following: obtaining interaction data of the virtual object, the interaction data indicating the interaction progress of the virtual object in the virtual scene; and calling a machine learning model to perform prediction based on the interaction data to obtain the candidate virtual tasks, the machine learning model being trained on interaction data of training samples and labeled virtual tasks. Predicting the virtual tasks matching the virtual object's current interaction progress by calling a machine learning model makes each recommended set of candidate virtual tasks fit the current interaction progress better and be more targeted, and such targeted guidance prompts improve guidance efficiency and user retention.
It should be noted that, in practical applications, the above machine learning model may be a neural network model (for example, a convolutional neural network, a deep convolutional neural network, or a fully connected neural network), a decision tree model, a gradient boosted tree, a multilayer perceptron, a support vector machine, or the like; the embodiments of this application do not specifically limit the type of the machine learning model.
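To illustrate only the interface such a predictor exposes, a toy rule-based stand-in is sketched below. This is explicitly not a trained machine learning model of the kind described above: the feature names, thresholds, and returned task names are all invented, and a real system would replace the hand-written rules with a model trained on labeled interaction data.

```python
# Hypothetical stand-in for the prediction step: map interaction data to a
# recommended candidate task. A real system would use a trained model here.
def predict_task(level: int, wood: int, prefers_building: bool) -> str:
    if wood == 0:
        return "chop trees"      # no timber yet -> gather materials first
    if prefers_building and wood >= 10:
        return "build house"     # enough materials for a building task
    return "explore"             # default recommendation

print(predict_task(level=2, wood=0, prefers_building=True))   # 'chop trees'
print(predict_task(level=5, wood=20, prefers_building=True))  # 'build house'
```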
It should be understood that the embodiments of this application involve user-related data such as login accounts and interaction data. When the embodiments of this application are applied in specific products or technologies, the user's permission or consent must be obtained, and the collection, use and processing of the relevant data must comply with the relevant laws, regulations and standards of the relevant countries and regions.
Step 103: Based on the task guidance information, in response to a confirmation instruction for a target task among the at least one virtual task, present position guidance information of the interaction position corresponding to the target task.
Here, the position guidance information indicates the interaction position at which the target task is performed; for example, when the target task is a tree-felling task, the position guidance information indicates where to fell trees (such as a forest). In some embodiments, the terminal may present the position guidance information of the interaction position corresponding to the target task as follows: presenting a map of the virtual scene; and presenting, in the map, the position guidance information of the interaction position corresponding to the target task.
Here, when the user selects a target task from the multiple candidate virtual tasks indicated by the task guidance information, the terminal, in response to the user's selection operation, receives the confirmation instruction for the target task, presents the map of the virtual scene in the interface of the virtual scene, and presents in the map the position guidance information of the interaction position corresponding to the target task, for example by highlighting the interaction position corresponding to the target task to distinguish it from other positions and using the highlighted interaction position as the position guidance information.
See Figure 7, a schematic diagram of a display interface of position guidance information provided by an embodiment of this application. When the user selects the target task 701 "I want to build", the terminal, in response to the selection operation, presents the non-user character's guidance prompt information 702 for the target task while also presenting the map 703 of the virtual scene, and shows a flashing effect at the interaction position 704 corresponding to the target task in map 703, to prompt the user to perform the target task at interaction position 704.
In some embodiments, when the virtual object moves in the virtual scene, the non-user character follows the virtual object, and the terminal may also display a picture of the non-user character following the virtual object as it moves in the virtual scene. Correspondingly, the terminal may present the position guidance information of the interaction position corresponding to the target task as follows: while the non-user character follows the virtual object, presenting position guidance information with which the non-user character guides the virtual object to move to the interaction position corresponding to the target task.
In practical applications, since the non-user character can move as the virtual object moves, when the user selects a target task from the multiple candidate virtual tasks indicated by the task guidance information, the terminal, in response to the user's selection operation, receives the confirmation instruction for the target task and presents, as text or speech, the position guidance information with which the non-user character guides the virtual object to the interaction position corresponding to the target task, such as "Walk straight ahead 10 meters, then turn left for 5 meters to reach the forest in the northeast corner of the plain." The user can control the virtual object to move according to the non-user character's position guidance information, which better matches a real guidance scene and improves the user experience. Of course, in practical applications the position guidance information may also be presented in other ways, such as graphics or animation.
In some embodiments, after presenting the position guidance information of the interaction position corresponding to the target task, the terminal may further, in response to a control instruction for the virtual object triggered based on the position guidance information, control the virtual object to move toward the interaction position; when the virtual object moves to the interaction position, the terminal controls the virtual object to perform the target task there.
In practical applications, based on the interaction position for the target task indicated by the position guidance information, the user can control the virtual object to move in the virtual scene toward the indicated interaction position by triggering a controller for controlling the virtual object's movement (including but not limited to a touch screen, a voice-activated switch, a keyboard, a mouse, a joystick, or operation controls in the interface of the virtual scene), and, when the virtual object reaches the interaction position, control the virtual object through the controller to perform the target task at that position.
In some embodiments, after presenting the position guidance information of the interaction position corresponding to the target task, the terminal may further output virtual prop guidance information of the non-user character for the virtual object, the virtual prop guidance information being used to guide the virtual object to use a virtual prop adapted to the target task.
Here, besides providing the virtual object with the interaction position for performing the target task, the non-user character can also provide guidance prompt information about the virtual props needed to perform the target task. In this way, the prop guidance is targeted as well, so the virtual object is controlled to perform the target task with the most suitable virtual props, which better matches a real guidance scene and improves the user experience.
An exemplary application of an embodiment of this application in a practical application scenario is described below. Taking a game as the virtual scene: when a player (i.e., the above virtual object) loses the game objective during play and the game loses focus (i.e., the player does not know which virtual task to perform), the player can seek guidance prompts from a guide (i.e., the above non-user character). When providing guidance prompts to players, the related art lists all prompts in the game, as shown in Figure 8, a schematic diagram of a guidance prompt interface provided by an embodiment of this application: the game interface presents uniform guidance prompt information together with a "Help" control, and when the presented prompt is not what is currently needed, the player can tap the "Help" control to switch to the next prompt, repeating this until the needed prompt appears. This approach gives all players the same guidance prompt information without any targeting, so players cannot conveniently locate the guidance they need and lose their interaction objective, reducing user retention; moreover, players sometimes even have to tap the "Help" control many times to obtain the guidance they currently need, making the guidance path long and guidance efficiency low.
To this end, an embodiment of this application provides a task guidance method in a virtual scene that can, based on the player's current game progress (i.e., the above interaction progress), provide the player with the most urgent and suitable guidance prompt information for the moment in a targeted way, so that the player quickly and conveniently locates the needed prompt, greatly shortening the guidance path and improving guidance efficiency and user retention.
See Figure 9, a schematic diagram of displaying a guidance prompt provided by an embodiment of this application. When a player needs a guidance prompt, the player can trigger a dialogue with the non-user character in the game; the guidance interface then presents a reply such as "Creator, what can I do for you?" together with selectable options corresponding to the reply, such as "What should I do next?" and "Nothing, let's keep going". When the player selects the option "What should I do next?", it means the player wants guidance; the guidance interface then displays a guidance question such as "I have lived on the plain for hundreds of years and know everything here. What would you like to do?" together with candidate virtual tasks for the question, such as "I want to build", "I want to adventure" and "I want to fight". These candidate virtual tasks all correspond to the player's current interaction progress, and the user can select a target task from them to perform. For example, when the player selects the target task "I want to build", the prompts for that target task are filtered out and presented, such as the position guidance information of the interaction position corresponding to the selected target task; during prompting, the game's minimap may be displayed in the top-left corner of the game interface with a flashing effect at the interaction position in the minimap, to prompt the player to perform the target task at that interaction position.
In an actual implementation, the guidance prompts mainly involve the following aspects: guidance conditions, guidance priority, guidance differentiation and guidance strength. The guidance condition is the condition that triggers the interaction guidance instruction; different guidance conditions correspond to different task guidance information, and the embodiments of this application strive to give the player the task guidance information best suited to the current interaction progress. For example, if there is no record of obtained timber in the player's backpack, the player is prompted to perform the tree-felling virtual task, and a flashing prompt is shown at a timber-rich location on the game minimap.
The guidance priority mainly concerns the display order of guidance prompts. In practical applications, when there are multiple candidate virtual tasks satisfying the guidance conditions, the prompt order is determined according to each candidate virtual task's guidance priority, with the prompts of higher-priority candidate virtual tasks listed first, making it easier for the user to select and perform the candidate virtual task most beneficial to the current interaction progress.
Guidance differentiation mainly concerns classifying guidance prompts according to players' different preferences and interaction depth. For example, when the task guidance information includes multiple candidate virtual tasks, the candidate virtual tasks belonging to different task categories are displayed separately in different display regions according to the category each belongs to, which avoids mixing all candidate virtual tasks together and makes it easy for the user to select and perform the candidate virtual task that suits the current task category and most benefits the current interaction progress.
Guidance strength mainly concerns dividing guidance into strong guidance and weak guidance according to its necessity. Strong guidance prompts virtual tasks the player must perform: for a virtual task not yet performed within the target time period but mandatory, prompting continues until the player performs it. Weak guidance prompts virtual tasks the player is merely advised to perform: for a virtual task not yet performed within the target time period but not mandatory, the prompt is shown several times and stops once the number of prompts reaches a preset target count. In this way, interaction guidance suited to the current scene is given to the player promptly and proactively according to the player's current interaction progress and the guidance necessity of the virtual task, improving guidance efficiency and the interaction experience.
The above aspects can be implemented through the guidance prompt tables shown in Table 1 and Table 2: Table 1 shows, for virtual tasks of different categories, the prompt type (such as building prompt, creation prompt, combat prompt), prompt priority number, prompt condition, prompt strength (number of prompts) and the dialogue content of the prompt.
Table 1
(Table 1 is provided as image PCTCN2022125059-appb-000001 in the original filing.)
Table 2 shows that, when the task guidance information indicates the interaction position corresponding to a virtual task, each virtual task corresponds to a marker ID and is displayed in the game minimap by way of that marker ID.
Table 2
(Table 2 is provided as image PCTCN2022125059-appb-000002 in the original filing.)
Next, with reference to Figure 10, a schematic flowchart of the task guidance method in a virtual scene provided by an embodiment of this application, the method is described taking a terminal and a server cooperatively implementing it as an example. The method includes:
Step 201: In response to a trigger operation on the guidance control, the terminal receives an interaction guidance instruction and sends it to the server.
Here, the guidance control is used to trigger a guidance session between the player and the non-user character, the guidance session is used to guide the player through game tasks in the game, and the interaction guidance instruction is used to request guidance prompt information from the non-user character.
Step 202: In response to the interaction guidance instruction, the server determines the reply dialogue information fed back by the non-user character.
Step 203: The server returns the reply dialogue information to the terminal.
Here, after receiving the interaction guidance instruction, the server triggers the non-user character's guidance dialogue behavior, for example controlling the non-user character to turn toward the player, and then feeds the reply dialogue information configured for the guidance dialogue behavior back to the terminal for display.
Step 204: The terminal displays the reply dialogue information.
Here, the reply dialogue information includes a reply message and reply options; for example, the reply message is "Creator, what can I do for you?", and the reply options include a first option, "What should I do next?", and a second option, "Nothing, let's keep going".
Step 205: In response to a trigger operation on the second option, the terminal ends the dialogue.
Here, when the player selects the option "Nothing, let's keep going", the terminal sends an end-of-dialogue message to the server; upon receiving the end message, the server ends the non-user character's guidance dialogue behavior, canceling the dialogue between the player and the non-user character displayed on the terminal.
Step 206: In response to a trigger operation on the first option, the terminal presents task guidance information corresponding to the interaction progress of the virtual object.
For example, when the player selects the option "What should I do next?", it means the player wants guidance, so the dialogue jumps to the next entry: the guidance interface displays the task guidance information, which includes a guidance question and candidate virtual tasks for the question, such as the guidance question "I have lived on the plain for hundreds of years and know everything here. What would you like to do?" and the candidate virtual tasks "I want to build", "I want to adventure" and "I want to fight". These candidate virtual tasks all correspond to the player's current interaction progress, and the user can select a target task from the multiple candidate virtual tasks to perform.
Step 207: Based on the task guidance information, in response to a confirmation instruction for the target task, the terminal sends a guidance request for the target task to the server.
Here, the guidance request carries the task identifier of the target task.
Step 208: Based on the guidance request, the server determines the guidance information corresponding to the target task.
Here, the server filters out the guidance information corresponding to the target task based on the task identifier in the guidance request. The guidance information may be the position guidance information of the interaction position corresponding to the target task; for example, when the target task is a tree-felling task, the guidance information indicates where to fell trees (such as a forest).
In practical applications, in order to provide suitable prompts according to the prompt type, in addition to Tables 1 and 2 above, the guidance prompt table shown in Table 3 and the prompt type table shown in Table 4 may be used to record the prompt list corresponding to each prompt type.
Table 3
(Table 3 is provided as image PCTCN2022125059-appb-000003 in the original filing.)
Table 4

Prompt type  Description         Prompt list
1            Exploration prompt  1
2            Creation prompt     2, 4
3            Combat prompt       3
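The Table 4 lookup can be sketched as a pair of mappings. This is a hypothetical illustration: the option strings follow the dialogue examples in the text, and the prompt IDs mirror Table 4's prompt lists.

```python
# Hypothetical sketch of Tables 3-4: look up the prompt list for the prompt
# type that corresponds to the option the player selected.
PROMPT_LISTS = {
    "exploration": [1],
    "creation": [2, 4],
    "combat": [3],
}

OPTION_TO_TYPE = {
    "I want to adventure": "exploration",
    "I want to build": "creation",
    "I want to fight": "combat",
}

def prompts_for_option(option: str):
    return PROMPT_LISTS[OPTION_TO_TYPE[option]]

print(prompts_for_option("I want to build"))  # [2, 4]
```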
See Figure 11, a schematic diagram of filtering guidance information provided by an embodiment of this application. Corresponding to the three options "I want to adventure", "I want to build" and "I want to fight", the guidance prompts are divided into three types: exploration prompts, creation prompts and combat prompts, and each type may contain multiple prompt lists. When the server obtains the option selected by the user, it first obtains the prompt list for the corresponding prompt type. For each guidance prompt in the prompt list, it first groups and sorts by priority: prompts with the same priority form a group, and the groups are sorted from high to low priority. It then performs steps 301 to 305 shown in Figure 11 on the group with the highest priority. In step 301, it is judged whether the prompt condition is satisfied; if so, step 302 is performed, otherwise step 305. In step 302, it is judged whether the guidance prompt is strong guidance; if so, step 304 is performed, otherwise step 303. In step 303, it is judged whether the number of times the guidance prompt has been shown has reached a preset target count (e.g., 5 times); if not, step 304 is performed, in which the guidance prompt is retained; if so, step 305 is performed, in which the guidance prompt is filtered out. In this way, prompts are filtered according to whether the prompt condition is satisfied and whether the guidance is weak, and one guidance prompt is randomly selected from the group's remaining prompts and returned to the terminal for display. If all guidance prompts in the highest-priority group are filtered out, filtering and selection proceed with the next priority group, and so on. If no guidance prompt in the prompt list satisfies the conditions, the player is informed that all content of this type of guidance has been completed, and the guidance dialogue behavior ends.
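Steps 301-305 can be sketched as the following filter-and-pick routine. This is a hypothetical illustration under assumed field names (`condition_met`, `strong`, `times_shown`, `priority`); the cap of 5 prompts follows the example in the text, and the random pick within a group matches the described behavior.

```python
# Hypothetical sketch of the Figure 11 filtering: within each priority group
# (highest first), keep a prompt if its condition holds and it is either
# strong guidance or still under its weak-guidance prompt cap, then pick one
# surviving prompt at random.
import random

def pick_prompt(prompts, max_count=5):
    """prompts: dicts with keys condition_met, strong, times_shown, priority."""
    for prio in sorted({p["priority"] for p in prompts}, reverse=True):
        group = [p for p in prompts if p["priority"] == prio]
        kept = [p for p in group
                if p["condition_met"] and (p["strong"] or p["times_shown"] < max_count)]
        if kept:
            return random.choice(kept)  # random pick within the surviving group
    return None  # everything filtered out: this guidance type is exhausted

prompts = [
    {"id": 1, "priority": 2, "condition_met": True, "strong": False, "times_shown": 5},
    {"id": 2, "priority": 1, "condition_met": True, "strong": True, "times_shown": 9},
]
print(pick_prompt(prompts)["id"])  # 2 (the priority-2 prompt hit its weak cap)
```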
Step 209: The server returns the guidance information corresponding to the target task to the terminal.
Step 210: The terminal displays the guidance information corresponding to the target task.
Here, after receiving the guidance information returned by the server, the terminal displays it in a manner matching its information category. For example, when the guidance information is position guidance information indicating the interaction position for performing the target task, the game's minimap may be displayed in the top-left corner of the game interface with a flashing effect at the interaction position in the minimap, prompting the player to perform the target task at that interaction position and providing the player with a game destination and an experience objective. When the player taps the exit control in the game interface for exiting guidance, or taps anywhere in the game interface, the terminal, in response to the user's trigger operation, sends an end-of-dialogue message to the server, and the server ends the guidance dialogue behavior upon receiving it.
In the above manner, when the player loses the game objective during play, the embodiments of this application provide the currently most urgent next game objective according to the player's personal game preferences and current game progress, helping the player focus on the next action and continue the game, so that the player quickly and conveniently locates the needed prompt, greatly shortening the guidance path and improving guidance efficiency. Moreover, when guidance prompts are provided through the non-user character, the prompt content can be wrapped in narrative copy, making the prompts more lifelike, preserving the immersion and sense of presence of an open-world game, improving the user experience and, in turn, user retention.
The following continues to describe an exemplary structure of the task guidance apparatus 465 in a virtual scene provided by an embodiment of this application implemented as software modules. In some embodiments, the software modules stored in the task guidance apparatus 465 in the memory 460 of Figure 2 may include:
a first receiving module 4651, configured to receive an interaction guidance instruction for a non-user character associated with a virtual object;
wherein the interaction guidance instruction is used to instruct the non-user character to guide the virtual object through a virtual task in the virtual scene;
a first presentation module 4652, configured to present, in response to the interaction guidance instruction, task guidance information corresponding to the interaction progress of the virtual object;
wherein the task guidance information is used to guide the virtual object to perform at least one virtual task; and
a second presentation module 4653, configured to present, based on the task guidance information and in response to a confirmation instruction for a target task among the at least one virtual task, position guidance information of the interaction position corresponding to the target task.
In some embodiments, the first receiving module is further configured to present, in the virtual scene, the non-user character associated with the virtual object, the non-user character moving as the virtual object moves; and to receive the interaction guidance instruction in response to a trigger operation on the non-user character.
In some embodiments, the first receiving module is further configured to present a guidance control, the guidance control being configured to trigger a guidance session between the virtual object and the non-user character, the guidance session being used to guide the virtual object through a virtual task in the virtual scene; and to receive the interaction guidance instruction in response to a trigger operation on the guidance control.
In some embodiments, the first receiving module is further configured to present a voice input control; in response to a trigger operation on the voice input control, present collection indication information indicating that voice collection is in progress, and perform content recognition on the collected voice when the collection indication information indicates that collection is complete; and receive the interaction guidance instruction when the content of the voice contains target content associated with the non-user character.
In some embodiments, the first receiving module is further configured to obtain a timer for periodically guiding the virtual object to perform a virtual task; and to receive the interaction guidance instruction triggered by the timer when it is determined, based on the timer, that a target moment has arrived and the virtual object has not yet performed the virtual task within a target time period before the target moment.
In some embodiments, the first presentation module is further configured to present a guidance interface corresponding to the non-user character; and to present, in the guidance interface, the task guidance information corresponding to the interaction progress of the virtual object.
In some embodiments, the guidance interface includes a first display region and a second display region, and the second presentation module is further configured to display, in the first display region, a guidance question posed by the non-user character to the virtual object, and display, in the second display region, candidate virtual tasks corresponding to the guidance question, the candidate virtual tasks corresponding to the interaction progress of the virtual object; and to determine the displayed guidance question and candidate virtual tasks as the task guidance information.
In some embodiments, the first presentation module is further configured to determine, when there are at least two candidate virtual tasks, the priority of each candidate virtual task; and to display the candidate virtual tasks corresponding to the guidance question in the second display region with higher-priority tasks placed first.
In some embodiments, the second display region includes at least two sub-display regions, each corresponding to a task category; and the first presentation module is further configured to determine, when there are at least two candidate virtual tasks, the task category to which each candidate virtual task belongs; and to display, by task category, the corresponding candidate virtual tasks in the corresponding sub-display regions of the second display region.
In some embodiments, the second presentation module is further configured to present the non-user character, and to present, in a region associated with the non-user character, a conversation bubble corresponding to the non-user character, the conversation bubble including the task guidance information corresponding to the interaction progress of the virtual object.
In some embodiments, the apparatus further includes: a second receiving module, configured to receive input conversation response information directed at the non-user character and present the conversation response information in the form of a conversation bubble in a region associated with the virtual object; and to receive the confirmation instruction for the target task when the conversation response information indicates selection of the target task.
In some embodiments, the first presentation module is further configured to determine an interaction attribute of the virtual object, the interaction attribute including at least one of: an interaction preference, an interaction level, virtual materials obtained through interaction, and an interaction environment; and to determine the corresponding interaction progress based on the interaction attribute and present the task guidance information corresponding to the interaction progress.
In some embodiments, the second presentation module is further configured to present a map of the virtual scene; and to present, in the map, the position guidance information of the interaction position corresponding to the target task.
In some embodiments, the apparatus further includes: a third presentation module, configured to display a picture of the non-user character following the virtual object as it moves in the virtual scene; and the second presentation module is further configured to present, while the non-user character follows the virtual object, position guidance information with which the non-user character guides the virtual object to move to the interaction position corresponding to the target task.
In some embodiments, after the position guidance information of the interaction position corresponding to the target task is presented, the apparatus further includes: a movement control module, configured to control, in response to a control instruction for the virtual object triggered based on the position guidance information, the virtual object to move toward the interaction position; and to control the virtual object to perform the target task at the interaction position when the virtual object moves to the interaction position.
In some embodiments, after the position guidance information of the interaction position corresponding to the target task is presented, the apparatus further includes: a guidance output module, configured to output virtual prop guidance information of the non-user character for the virtual object, the virtual prop guidance information being used to guide the virtual object to use a virtual prop adapted to the target task.
An embodiment of this application provides a computer program product or computer program, including computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, causing the computer device to perform the task guidance method in a virtual scene described above in the embodiments of this application.
An embodiment of this application provides a computer-readable storage medium storing executable instructions which, when executed by a processor, cause the processor to perform the task guidance method in a virtual scene provided by the embodiments of this application, for example, the method shown in Figure 3.
In some embodiments, the computer-readable storage medium may be a memory such as a ferroelectric random access memory (FRAM), a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), a flash memory, a magnetic surface memory, an optical disc, or a CD-ROM; or it may be any device including one or any combination of the above memories.
In some embodiments, the executable instructions may take the form of a program, software, a software module, a script or code, written in any form of programming language (including compiled or interpreted languages, or declarative or procedural languages), and may be deployed in any form, including as a standalone program or as a module, component, subroutine or other unit suitable for use in a computing environment.
As an example, the executable instructions may, but need not, correspond to a file in a file system, and may be stored in part of a file that holds other programs or data, for example in one or more scripts in a Hyper Text Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (for example, files storing one or more modules, subroutines or code sections).
As an example, the executable instructions may be deployed to be executed on one computing device, on multiple computing devices located at one site, or on multiple computing devices distributed across multiple sites and interconnected by a communication network.
The above are merely embodiments of this application and are not intended to limit the protection scope of this application. Any modification, equivalent replacement, improvement and the like made within the spirit and scope of this application shall fall within the protection scope of this application.

Claims (20)

  1. A task guidance method in a virtual scene, the method being performed by an electronic device and comprising:
    receiving an interaction guidance instruction for a non-user character associated with a virtual object, the interaction guidance instruction being used to instruct the non-user character to guide the virtual object through a virtual task in the virtual scene;
    in response to the interaction guidance instruction, presenting task guidance information corresponding to an interaction progress of the virtual object, the task guidance information being used to guide the virtual object to perform at least one virtual task; and
    based on the task guidance information, in response to a confirmation instruction for a target task among the at least one virtual task, presenting position guidance information of an interaction position corresponding to the target task.
  2. The method according to claim 1, wherein the receiving an interaction guidance instruction for a non-user character associated with a virtual object comprises:
    presenting, in the virtual scene, the non-user character associated with the virtual object, the non-user character moving as the virtual object moves; and
    receiving the interaction guidance instruction in response to a trigger operation on the non-user character.
  3. The method according to claim 1, wherein the receiving an interaction guidance instruction for a non-user character associated with a virtual object comprises:
    presenting a guidance control, the guidance control being used to trigger a guidance session between the virtual object and the non-user character, the guidance session being used to guide the virtual object to perform a virtual task in the virtual scene; and
    receiving the interaction guidance instruction in response to a trigger operation on the guidance control.
  4. The method according to claim 1, wherein the receiving an interaction guidance instruction for a non-user character associated with a virtual object comprises:
    presenting a voice input control;
    in response to a trigger operation on the voice input control, presenting collection indication information indicating that voice collection is in progress, and performing content recognition on the collected voice when the collection indication information indicates that voice collection is complete; and
    receiving the interaction guidance instruction when content of the voice contains target content associated with the non-user character.
  5. The method according to claim 1, wherein the receiving an interaction guidance instruction for a non-user character associated with a virtual object comprises:
    obtaining a timer for periodically guiding the virtual object to perform a virtual task; and
    receiving the interaction guidance instruction triggered by the timer when it is determined, based on the timer, that a target moment has arrived and the virtual object has not performed the virtual task within a target time period before the target moment.
  6. The method according to claim 1, wherein the presenting task guidance information corresponding to an interaction progress of the virtual object comprises:
    presenting a guidance interface corresponding to the non-user character; and
    presenting, in the guidance interface, the task guidance information corresponding to the interaction progress of the virtual object.
  7. The method according to claim 6, wherein the guidance interface comprises a first display region and a second display region, and the presenting task guidance information corresponding to an interaction progress of the virtual object comprises:
    displaying, in the first display region, a guidance question posed by the non-user character to the virtual object, and displaying, in the second display region, candidate virtual tasks corresponding to the guidance question, the candidate virtual tasks corresponding to the interaction progress of the virtual object; and
    determining the displayed guidance question and candidate virtual tasks as the task guidance information.
  8. The method according to claim 7, wherein the displaying, in the second display region, candidate virtual tasks corresponding to the guidance question comprises:
    determining a priority of each of the candidate virtual tasks when there are at least two candidate virtual tasks; and
    displaying the candidate virtual tasks in the second display region in descending order of priority.
  9. The method according to claim 7, wherein the second display region comprises at least two sub-display regions, each of the sub-display regions corresponding to one task category; and the displaying, in the second display region, candidate virtual tasks corresponding to the guidance question comprises:
    determining the task category to which each of the candidate virtual tasks belongs when there are at least two candidate virtual tasks; and
    displaying, in each of the sub-display regions of the second display region, the candidate virtual tasks of the corresponding task category.
  10. The method according to claim 1, wherein the presenting task guidance information corresponding to an interaction progress of the virtual object comprises:
    presenting the non-user character, and presenting, in a region associated with the non-user character, a conversation bubble corresponding to the non-user character;
    wherein the conversation bubble comprises the task guidance information corresponding to the interaction progress of the virtual object.
  11. The method according to claim 10, further comprising:
    receiving input conversation response information directed at the non-user character, and presenting the conversation response information in the form of a conversation bubble in a region associated with the virtual object; and
    receiving the confirmation instruction for the target task when the conversation response information indicates selection of the target task.
  12. The method according to claim 1, wherein the presenting task guidance information corresponding to an interaction progress of the virtual object comprises:
    determining an interaction attribute of the virtual object, the interaction attribute comprising at least one of: an interaction preference, an interaction level, virtual materials obtained through interaction, and an interaction environment; and
    determining the interaction progress of the virtual object based on the interaction attribute, and presenting the task guidance information corresponding to the interaction progress.
  13. The method according to claim 1, wherein the presenting position guidance information of an interaction position corresponding to the target task comprises:
    presenting a map of the virtual scene; and
    presenting, in the map, the position guidance information of the interaction position corresponding to the target task.
  14. The method according to claim 1, further comprising:
    controlling the non-user character to follow the virtual object as the virtual object moves in the virtual scene;
    wherein the presenting position guidance information of an interaction position corresponding to the target task comprises:
    presenting the position guidance information while the non-user character follows the virtual object, the position guidance information being used by the non-user character to guide the virtual object to move to the interaction position corresponding to the target task.
  15. The method according to claim 1, wherein after the presenting position guidance information of an interaction position corresponding to the target task, the method further comprises:
    based on the position guidance information and in response to a control instruction for the virtual object, controlling the virtual object to move toward the interaction position; and
    controlling the virtual object to perform the target task at the interaction position when the virtual object moves to the interaction position.
  16. The method according to claim 1, wherein after the presenting position guidance information of an interaction position corresponding to the target task, the method further comprises:
    outputting virtual prop guidance information of the non-user character for the virtual object;
    wherein the virtual prop guidance information is used to guide the virtual object to use a virtual prop adapted to the target task.
  17. A task guidance apparatus in a virtual scene, the apparatus comprising:
    a first receiving module, configured to receive an interaction guidance instruction for a non-user character associated with a virtual object, the interaction guidance instruction being used to instruct the non-user character to guide the virtual object through a virtual task in the virtual scene;
    a first presentation module, configured to present, in response to the interaction guidance instruction, task guidance information corresponding to an interaction progress of the virtual object, the task guidance information being used to guide the virtual object to perform at least one virtual task; and
    a second presentation module, configured to present, based on the task guidance information and in response to a confirmation instruction for a target task among the at least one virtual task, position guidance information of an interaction position corresponding to the target task.
  18. An electronic device, comprising:
    a memory, configured to store executable instructions; and
    a processor, configured to implement, when executing the executable instructions stored in the memory, the task guidance method in a virtual scene according to any one of claims 1 to 16.
  19. A computer-readable storage medium, storing executable instructions which, when executed by a processor, implement the task guidance method in a virtual scene according to any one of claims 1 to 16.
  20. A computer program product, comprising a computer program or instructions which, when executed by a processor, implement the task guidance method in a virtual scene according to any one of claims 1 to 16.
PCT/CN2022/125059 2021-11-09 2022-10-13 Task guidance method and apparatus in virtual scene, electronic device, storage medium, and program product WO2023082927A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/218,387 US20230347243A1 (en) 2021-11-09 2023-07-05 Task guidance method and apparatus in virtual scene, electronic device, storage medium, and program product

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN202111319975 2021-11-09
CN202111319975.7 2021-11-09
CN202111662320.XA CN114247141B (zh) 2021-11-09 2022-03-29 Task guidance method, apparatus, device, medium and program product in virtual scene
CN202111662320.X 2021-12-31

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/218,387 Continuation US20230347243A1 (en) 2021-11-09 2023-07-05 Task guidance method and apparatus in virtual scene, electronic device, storage medium, and program product

Publications (1)

Publication Number Publication Date
WO2023082927A1 true WO2023082927A1 (zh) 2023-05-19

Family

ID=80798994

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/125059 WO2023082927A1 (zh) 2021-11-09 2022-10-13 虚拟场景中任务引导方法、装置、电子设备、存储介质及程序产品

Country Status (3)

Country Link
US (1) US20230347243A1 (zh)
CN (1) CN114247141B (zh)
WO (1) WO2023082927A1 (zh)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114247141B (zh) 2021-11-09 2023-07-25 腾讯科技(深圳)有限公司 Task guidance method, apparatus, device, medium and program product in virtual scene
CN115068948A (zh) * 2022-07-18 2022-09-20 北京字跳网络技术有限公司 Method and apparatus for performing actions in a virtual environment
CN115328354B (zh) * 2022-08-16 2024-05-10 网易(杭州)网络有限公司 Interaction processing method and apparatus in game, electronic device and storage medium
CN117695644A (zh) * 2022-09-15 2024-03-15 网易(杭州)网络有限公司 Interaction control method and apparatus for playing audio, and electronic device
CN115509361A (zh) * 2022-10-12 2022-12-23 北京字跳网络技术有限公司 Virtual space interaction method, apparatus, device and medium
CN115601485B (zh) * 2022-12-15 2023-04-07 阿里巴巴(中国)有限公司 Data processing method of task processing model and virtual character animation generation method
CN116168704B (zh) * 2023-04-26 2023-07-18 长城汽车股份有限公司 Guidance method, apparatus, device, medium and vehicle for voice interaction
CN116540881B (zh) * 2023-06-27 2023-09-08 江西瞳爱教育科技有限公司 VR-based teaching method and system
CN117078270B (zh) * 2023-10-17 2024-02-02 彩讯科技股份有限公司 Intelligent interaction method and apparatus for online product marketing

Citations (5)

Publication number Priority date Publication date Assignee Title
US20200197812A1 (en) * 2018-12-19 2020-06-25 Nintendo Co., Ltd. Information processing system, storage medium, information processing apparatus and information processing method
US20200330870A1 (en) * 2018-06-01 2020-10-22 Tencent Technology (Shenzhen) Company Limited Information prompting method and apparatus, storage medium, and electronic device
US20210104100A1 (en) * 2019-10-02 2021-04-08 Magic Leap, Inc. Mission driven virtual character for user interaction
CN112843703A * 2021-03-11 2021-05-28 腾讯科技(深圳)有限公司 Information display method, apparatus, terminal and storage medium
CN114247141A (zh) 2021-11-09 2022-03-29 腾讯科技(深圳)有限公司 Task guidance method, apparatus, device, medium and program product in virtual scene

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
JP2002200356A (ja) * 2000-04-28 2002-07-16 Square Co Ltd Game control method for processing an interactive game, recording medium, game apparatus, and game program
CN111083509B (zh) * 2019-12-16 2021-02-09 腾讯科技(深圳)有限公司 Interactive task execution method and apparatus, storage medium and computer device
CN113760142A (zh) * 2020-09-30 2021-12-07 完美鲲鹏(北京)动漫科技有限公司 Virtual-character-based interaction method and apparatus, storage medium and computer device
CN113181650B (zh) * 2021-05-31 2023-04-25 腾讯科技(深圳)有限公司 Control method, apparatus, device and storage medium for a summoned object in a virtual scene


Also Published As

Publication number Publication date
US20230347243A1 (en) 2023-11-02
CN114247141A (zh) 2022-03-29
CN114247141B (zh) 2023-07-25


Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 22891726

Country of ref document: EP

Kind code of ref document: A1