WO2024037154A1 - Control method and apparatus for virtual object, terminal, storage medium, and program product - Google Patents

Control method and apparatus for virtual object, terminal, storage medium, and program product

Info

Publication number
WO2024037154A1
Authority
WO
WIPO (PCT)
Prior art keywords
virtual
state
control
virtual object
movement
Prior art date
Application number
PCT/CN2023/099645
Other languages
English (en)
French (fr)
Inventor
崔维健
Original Assignee
腾讯科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司 filed Critical 腾讯科技(深圳)有限公司
Publication of WO2024037154A1


Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/40 Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
    • A63F 13/42 Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
    • A63F 13/426 Processing input control signals of video game devices by mapping the input signals into game commands, involving on-screen location information, e.g. screen coordinates of an area at which the player is aiming with a light gun
    • A63F 13/44 Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment, involving timing of operations, e.g. performing an action within a time slot
    • A63F 13/50 Controlling the output signals based on the game progress
    • A63F 13/53 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
    • A63F 13/533 Controlling the output signals based on the game progress involving additional visual information provided to the game scene for prompting the player, e.g. by displaying a game menu
    • A63F 13/55 Controlling game characters or game objects based on the game progress
    • A63F 13/57 Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game
    • A63F 13/80 Special adaptations for executing a specific game genre or game mode
    • A63F 13/837 Shooting of targets

Definitions

  • Embodiments of the present application relate to the fields of computer and Internet technologies, and in particular to a virtual object control method, device, terminal, storage medium, and program product.
  • players can use a virtual joystick to control the movement of a virtual character, or use a virtual joystick to control a virtual character to enter an automatic fast running state (i.e., a sprint state).
  • Taking a shooting game application as an example, the player first slides the virtual joystick upward with the left hand to trigger the virtual character to enter a normal sprint state (for example, while holding a virtual prop), and then triggers another control with the right hand to put away the virtual prop, so that the virtual character can sprint at a faster sprint speed.
  • Embodiments of the present application provide a virtual object control method, device, terminal, storage medium and program product, which can reduce the complexity of triggering the second movement state, improve the consistency and convenience of operations, and thereby improve operation efficiency.
  • the technical solutions are as follows:
  • a method for controlling a virtual object is provided.
  • the method is executed by a terminal.
  • the method includes:
  • in response to the operation control being slid from its location to a recognition area, controlling the virtual object to automatically move in the virtual scene in a first movement state, where the recognition area is used to identify the position of the operation control after the sliding operation;
  • when the duration for which the operation control stays in the recognition area after the sliding operation reaches a threshold, controlling the virtual object to automatically move in the virtual scene in a second movement state, where the movement speed of the second movement state is greater than that of the first movement state.
  • a device for controlling a virtual object includes:
  • a display module used to display operating controls for controlling the movement of virtual objects in the virtual scene
  • a first state trigger module, configured to control the virtual object to automatically move in the virtual scene in a first movement state in response to the operation control being slid from its location to a recognition area, where the recognition area is used to identify the position of the operation control after the sliding operation;
  • a second state trigger module, configured to control the virtual object to automatically move in the virtual scene in a second movement state when the duration for which the operation control stays in the recognition area after the sliding operation reaches a threshold, where the movement speed of the second movement state is greater than that of the first movement state.
  • a terminal device includes a processor and a memory.
  • a computer program is stored in the memory.
  • the computer program is loaded and executed by the processor to implement the above control method of a virtual object.
  • a computer-readable storage medium in which a computer program is stored, and the computer program is loaded and executed by a processor to implement the above-mentioned control method of a virtual object.
  • a computer program product or a computer program is provided, the computer program product or the computer program including computer instructions, and the computer instructions are stored in a computer-readable storage medium.
  • the processor of the terminal device reads the computer instruction from the computer-readable storage medium, and the processor executes the computer instruction, so that the terminal device executes the above-mentioned control method of the virtual object.
  • When the virtual object is in the first movement state, the virtual object can be triggered to automatically move in the virtual scene in the second movement state once the duration for which the position of the operation control stays in the recognition area reaches a threshold, enabling single-finger control.
  • A single finger operating on the recognition area can thus trigger the first movement state and the second movement state coherently.
  • Compared with the related art, in which the second movement state is triggered through a two-hand or multi-finger combination, this reduces the cumbersomeness of the triggering operation and improves its coherence and convenience, thereby improving operation efficiency.
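  • As a minimal illustration (not the patent's actual implementation; all names and values below are hypothetical), the dwell-time trigger described above can be sketched as a small state machine that upgrades from the first movement state to the second once the joystick has stayed in the recognition area long enough:

```python
WALK, SPRINT_NORMAL, SPRINT_FAST = "walk", "sprint_normal", "sprint_fast"
DWELL_THRESHOLD = 2.0  # seconds; the description mentions values such as 2 s, 2.5 s, 3 s

class MovementController:
    def __init__(self):
        self.state = WALK
        self.dwell = 0.0  # how long the control has stayed inside the recognition area

    def update(self, dt: float, control_in_recognition_area: bool) -> str:
        if control_in_recognition_area:
            if self.state == WALK:
                self.state = SPRINT_NORMAL  # sliding into the area starts the first movement state
                self.dwell = 0.0
            elif self.state == SPRINT_NORMAL:
                self.dwell += dt
                if self.dwell >= DWELL_THRESHOLD:
                    self.state = SPRINT_FAST  # staying long enough triggers the faster second state
        else:
            self.dwell = 0.0  # leaving the area resets the timer
        return self.state
```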
  • Figure 1 is a schematic diagram of a computer system provided by an embodiment of the present application.
  • Figure 2 is a flow chart of a virtual object control method provided by an embodiment of the present application.
  • Figure 3 is a schematic diagram of a virtual object in a first movement state provided by an embodiment of the present application
  • Figure 4 is a schematic diagram of a virtual object in a second movement state provided by an embodiment of the present application.
  • Figure 5 is a schematic diagram of a first state control and a second state control provided by an embodiment of the present application
  • Figure 6 is a schematic diagram of triggering a virtual object under a first state control provided by an embodiment of the present application
  • Figure 7 is a schematic diagram of triggering a virtual object under a second state control provided by an embodiment of the present application.
  • Figure 8 is a flow chart of a virtual object control method provided by another embodiment of the present application.
  • Figure 9 is a flow chart of a method for controlling virtual objects in a shooting game application provided by an embodiment of the present application.
  • Figure 10 is a flow chart of a method for controlling virtual objects in a shooting game application provided by another embodiment of the present application.
  • Figure 11 is a block diagram of a virtual object control device provided by an embodiment of the present application.
  • Figure 12 is a block diagram of a virtual object control device provided by another embodiment of the present application.
  • Figure 13 is a block diagram of a terminal device provided by an embodiment of the present application.
  • FIG. 1 shows a schematic diagram of a computer system provided by an embodiment of the present application.
  • the computer system may include: a terminal 10 and a server 20 .
  • the terminal 10 may be an electronic device such as a mobile phone, a tablet computer, a game console, a multimedia playback device, a PC (Personal Computer), etc.
  • A client of a target application can be installed in the terminal 10, such as a client of a game application, a simulation learning application, a virtual reality (VR) application, an augmented reality (AR) application, a social networking application, an interactive entertainment application, etc.
  • the server 20 is used to provide background services for clients of application programs (such as game applications) in the terminal 10 .
  • the server 20 may be a backend server of the above-mentioned target application (such as a game application).
  • the server 20 may be one server, a server cluster composed of multiple servers, or a cloud computing service center.
  • the terminal 10 and the server 20 can communicate with each other through the network 30 .
  • the network 30 may be a wired network or a wireless network.
  • Illustratively, taking the client of a game application as an example, the client displays a user interface including an operation control for controlling the movement of the virtual object.
  • In response to the operation control being slid by the user from its location to the recognition area, the client controls the virtual object to automatically move in the virtual scene in the first movement state.
  • When the duration for which the operation control stays in the recognition area after the sliding operation reaches a threshold, the client controls the virtual object to automatically move in the virtual scene in the second movement state, where the movement speed of the second movement state is greater than that of the first movement state.
  • FIG 2 shows a flow chart of a virtual object control method provided by an embodiment of the present application.
  • the execution subject of each step of the method may be the terminal 10 in the computer system shown in Figure 1.
  • the method may include the following steps (step 201 to step 203):
  • Step 201 Display operation controls for controlling the movement of virtual objects in the virtual scene.
  • the virtual scene is displayed in the user interface.
  • the user interface refers to the display interface of the application program, such as the display interface of the above-mentioned target application program.
  • the user interface may be a display interface of the game, and the user interface is used to present a virtual scene of the game to the user.
  • the user interface may be a display interface of the learning scene, and the user interface is used to present the simulation environment in the learning scene to the user.
  • the user interface includes a display layer and a control layer. Among them, the display level of the control layer is higher than that of the display layer.
  • the display layer is used to display screen information (such as virtual scenes, moving pictures of virtual objects, etc.), and the control layer is used to display UI (User Interface, user interface) controls (such as operation controls, buttons, sliders, etc.).
  • the above-mentioned virtual object may refer to a virtual object controlled by the user account in an application program (such as a game application program).
  • the virtual object may refer to a virtual character controlled by the user account in the game application.
  • the virtual object may also refer to a virtual vehicle driven by a virtual character in the application program, such as a virtual vehicle, a virtual aircraft, a virtual hot air balloon, etc.
  • the embodiment of the present application does not limit the virtual object.
  • the above-mentioned virtual scene refers to the environment displayed (or provided) when the client of the application (such as a game application) is running on the terminal.
  • the virtual scene refers to the environment created for virtual objects to carry out activities (such as game play).
  • the environment can be, for example, a virtual house, a virtual island, a virtual sky, a virtual land, etc.
  • the virtual scene may be a simulation environment of the real world, a semi-simulation and semi-fictitious environment, or a purely fictitious environment, which is not limited in the embodiments of the present application.
  • the user can control the virtual object to move in the virtual scene by operating the controls.
  • the operation control may refer to a rocker control, a handle control, a direction control, etc.
  • Step 202 In response to sliding the operation control from the location of the operation control to the identification area in the user interface, control the virtual object to automatically move in the virtual scene in the first movement state.
  • the location of the operation control refers to the position of the contact point on the operation control.
  • A touch point refers to the point of contact when an operation is performed on the user interface; alternatively, a touch point refers to the contact point of the user's operation on the display screen of the terminal, such as the point of contact between the user's finger and the display screen.
  • The recognition area refers to an area associated with the operation control, which can be used to change the movement state of the virtual object; alternatively, the recognition area is used to identify the touch point slid over from the operation control. For example, through the recognition area, the virtual character can be switched from a walking state to an automatic fast running state.
  • In some embodiments, the location of the operation control refers to the position of the operation control in the game joystick; optionally, it refers to the starting, default, or origin position of the operation control in the game joystick.
  • the first movement state includes at least one of movement speed, movement posture, and movement attributes.
  • the moving speed may refer to the moving speed of the virtual character in the first moving state.
  • the movement posture may include walking, running, squatting, sprinting, holding a virtual prop, not holding a virtual prop, etc.
  • the movement attributes may include using a virtual vehicle, not using a virtual vehicle, etc.
  • the virtual character in the first movement state is: the virtual character holds a virtual prop and sprints at a moving speed of 2m/s without using a virtual vehicle.
  • the moving speed may refer to the driving speed of the virtual vehicle in the first moving state
  • the moving posture may refer to the driving posture of the virtual vehicle
  • the movement attributes may include flying in the air, traveling on water, traveling on land, etc.
  • the virtual vehicle in the first moving state is: the virtual vehicle drifts on the water surface at a traveling speed of 20 m/s.
  • the client controls the virtual object 303 to walk in the virtual scene.
  • the client displays an identification area 304 at a set distance above the rocker control 302 .
  • In response to the joystick control 302 being slid from its location to the recognition area 304, the client controls the virtual object 303 to enter the first movement state, that is, the virtual object 303 automatically runs in the virtual scene in the first movement state.
  • the first movement state can also be called a normal sprint state (ie, automatic fast running state), and the movement speed of the first movement state is greater than the movement speed of the walking state.
  • an icon corresponding to the first movement state is also displayed in the recognition area 304 to prompt the user that the virtual object 303 is in the first movement state.
  • the icon can be equipped with text information, such as "continue sprinting".
  • the virtual object in the first movement state holds the virtual prop and automatically moves in the virtual scene.
  • the virtual object 303 in the first moving state holds the virtual prop 305 and automatically runs in the virtual scene at the moving speed of the first moving state.
  • the virtual object can automatically move in the virtual scene with empty hands (that is, without holding virtual props).
  • In one example, in response to the operation control being slid from its location to the recognition area, the client displays a dynamic prompt icon in the vicinity of the recognition area. The dynamic prompt icon is used to indicate the stay duration and can dynamically display how long the operation control has stayed in the recognition area.
  • the filling degree of the elements in the dynamic prompt icon changes with the dwell time.
  • a first icon for indicating the first movement state is displayed on one end of the dynamic prompt icon
  • a second icon for indicating the second movement state is displayed on the other end of the dynamic prompt icon.
  • the dynamic prompt icon may be a progress bar, an hourglass, etc.
  • For example, referring to Figure 3, in response to the joystick control 302 being slid from its location to the recognition area 304, the client displays a dynamic prompt icon 306 above the recognition area 304.
  • the left end of the dynamic prompt icon 306 displays a first icon for indicating the first movement state
  • the right end of the dynamic prompt icon 306 displays a second icon for indicating the second movement state.
  • the moving speed of the second moving state is greater than the moving speed of the first moving state.
  • the moving posture of the second moving state may be different from the moving posture of the first moving state.
  • the movement posture of the second movement state is a half-crouch movement
  • the movement posture of the first movement state is a standing movement.
  • the virtual object in the second moving state puts away the virtual prop and automatically moves in the virtual scene.
  • the virtual object 303 in the second moving state puts away the virtual props 305 and automatically runs in the virtual scene at the moving speed of the second moving state.
  • the movement attribute of the second movement state may be the same as the movement attribute of the first movement state. For example, neither the virtual object in the second movement state nor the first movement state uses a virtual vehicle.
  • the residence time refers to the time the operation control stays in the recognition area after the sliding operation.
  • Optionally, if the operation control slides into the recognition area and leaves it immediately, the dynamic prompt icon is not displayed.
  • the filling degree of the elements in the dynamic prompt icon is positively correlated with the length of stay.
  • the filling process of the elements in the dynamic prompt icon may be as follows: while the operation control stays in the recognition area after the sliding operation, an animation of the elements in the dynamic prompt icon being dynamically filled is displayed; when the stay duration reaches the threshold, an animation of the elements being completely filled is displayed, and the dynamic prompt icon and the recognition area are then no longer displayed.
  • the visual display of the stay time through the dynamic prompt icon can allow the user to obtain the stay time more intuitively for subsequent operations, thereby improving the user experience.
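  • A small sketch of the fill behaviour described above, assuming a hypothetical fill_fraction helper: the icon's fill fraction tracks the stay duration and completes exactly when the threshold is reached, after which the icon and the recognition area are dismissed.

```python
DWELL_THRESHOLD = 2.0  # seconds

def fill_fraction(dwell: float) -> float:
    """Fraction of the dynamic prompt icon that should be filled for a given stay duration."""
    if dwell <= 0:
        return 0.0  # control left immediately, so no icon is shown
    return min(dwell / DWELL_THRESHOLD, 1.0)  # 1.0 means fully filled; the icon is then hidden
```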
  • Step 203 When the duration for which the operation control stays in the recognition area after the sliding operation reaches the threshold, control the virtual object to automatically move in the virtual scene in the second movement state, where the movement speed of the second movement state is greater than that of the first movement state.
  • Optionally, when the operation control is slid into the recognition area, the stay duration starts to be counted.
  • The threshold can be adaptively set and adjusted according to actual usage requirements; for example, the threshold may be 2 s, 2.5 s, or 3 s.
  • Illustratively, when the virtual object is in the first movement state, in response to the stay duration reaching the threshold (i.e., being greater than or equal to the threshold), the client controls the virtual object to enter the second movement state, that is, controls the virtual object to automatically move in the virtual scene according to the movement speed, movement posture, etc. corresponding to the second movement state.
  • For example, when the dynamic prompt icon 306 finishes filling, the virtual object 303 puts away the virtual prop 305 and automatically runs in the virtual scene at the movement speed corresponding to the second movement state, that is, the virtual object 303 enters the prop-stowed sprint state.
  • the icon corresponding to the second movement state is updated and displayed in the identification area 304 to prompt the user that the virtual object 303 is in the second movement state.
  • The icon may be accompanied by text information, such as "Sprinting with prop stowed".
  • In response to the user ending the trigger operation on the joystick control 302, the joystick control 302 correspondingly displays the prompt message "Sprinting with prop stowed".
  • In one example, after the virtual object enters the second movement state, the client also displays a first state control and a second state control, where the first state control is used to switch the movement state of the virtual object, and the second state control is used to control the virtual object to stop automatic movement.
  • For example, referring to Figure 5, in response to the user ending the trigger operation on the joystick control 302, the virtual object 303 remains in the second movement state, and the client displays the first state control 307 and the second state control 308 in the movable area of the joystick control 302.
  • the first status control 307 is located above the second status control 308 .
  • the first status control 307 and the second status control 308 may be buttons.
  • the first state control 307 displays prompt information corresponding to the movement state that can be switched to.
  • the second status control 308 displays prompt information corresponding to "stop automatic movement".
  • the client controls the virtual object to switch from the second movement state to the first movement state in response to a triggering operation for the first state control.
  • For example, in response to the user's trigger operation on the first state control 307, the virtual object 303 switches from automatically running fast without holding the virtual prop 305 to automatically running fast while holding the virtual prop 305.
  • the client controls the virtual object to stop automatically moving and hold the virtual prop in response to a triggering operation for the second state control.
  • For example, referring to Figure 7, in response to the user's trigger operation on the second state control 308, the virtual object 303 stops automatically running fast and takes out the virtual prop 305, or the virtual object 303 takes out the virtual prop 305 and then stops automatically running fast. In this way, drawing the virtual prop and stopping the automatic run can be achieved with one tap.
  • Compared with the related art, in which the virtual object is first controlled to stop running through the operation control and is then controlled to take out the virtual prop through the holding control corresponding to the prop, the switching operation in the technical solution provided by the embodiments of the present application is simpler and more convenient.
  • Optionally, in response to a trigger operation on the first state control or the second state control, the client hides the first state control and the second state control. In this way, the operation control can be restored to its normal state, so that the user can continue to use it normally.
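  • The two post-sprint controls described above can be sketched as follows; the Character stub and the method and attribute names are illustrative assumptions, not the patent's API. The first control switches back to the first movement state (sprinting while holding the prop), the second stops the automatic run and draws the prop in one tap, and both controls are hidden once either is used.

```python
class Character:
    """Minimal stand-in for the controlled virtual character (illustrative only)."""
    def __init__(self):
        self.state, self.holding_prop = "sprint_fast", False

class StateControls:
    def __init__(self, character: Character):
        self.character = character
        self.visible = True  # shown once the second movement state is entered

    def on_first_state_control(self):
        self.character.state = "sprint_normal"  # back to the first movement state
        self.character.holding_prop = True      # sprint while holding the virtual prop
        self.visible = False                    # hide both controls after the tap

    def on_second_state_control(self):
        self.character.state = "idle"           # stop automatic movement
        self.character.holding_prop = True      # draw the virtual prop in the same tap
        self.visible = False
```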
  • In one example, in response to the operation control being slid from its location to the recognition area, if the virtual object holds a virtual prop, the step of controlling the virtual object to automatically move in the virtual scene in the first movement state is executed; if the virtual object does not hold a virtual prop, the step of controlling the virtual object to automatically move in the virtual scene in the second movement state is executed.
  • Illustratively, in response to the operation control being slid from its location to the recognition area, if the virtual object holds a virtual prop, the client controls the virtual object to hold the prop and automatically move in the virtual scene at the movement speed of the first movement state; if the virtual object does not hold a virtual prop, the client controls the virtual object to move empty-handed and automatically move in the virtual scene at the movement speed of the second movement state.
  • In one example, the movement speed of the virtual object may be determined as follows: when the stay duration is greater than or equal to 0 and less than the threshold, the movement speed of the virtual object is determined to be the first movement speed; when the stay duration is greater than or equal to the threshold, the movement speed of the virtual object is determined to be the second movement speed, where the second movement speed is the sum of the first movement speed and the product of the stay duration and a set ratio.
  • the virtual object switches to the first movement state, that is, the movement speed of the first movement state is fixed to the first movement speed.
  • the virtual object switches to the second movement state, that is, the movement speed of the second movement state increases as the stay duration increases based on the first movement speed.
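  • The speed rule described above amounts to a simple piecewise function; a minimal sketch under assumed constant values (the set ratio and the threshold are configurable in the description, not fixed):

```python
FIRST_SPEED = 2.0      # m/s, example speed used earlier in the description
SET_RATIO = 0.5        # hypothetical value of the set ratio
DWELL_THRESHOLD = 2.0  # seconds

def movement_speed(dwell: float) -> float:
    if 0 <= dwell < DWELL_THRESHOLD:
        return FIRST_SPEED                  # first movement state: fixed first speed
    return FIRST_SPEED + dwell * SET_RATIO  # second movement state: grows with the stay duration
```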
  • In summary, in the technical solution provided by the embodiments of the present application, when the virtual object is in the first movement state, the virtual object is triggered to automatically move in the virtual scene in the second movement state once the duration for which the operation control stays in the recognition area reaches a threshold.
  • A single finger operating on the recognition area can thus trigger the first movement state and the second movement state coherently.
  • Compared with the related art, in which the second movement state is triggered through a two-hand or multi-finger combination, this reduces the cumbersomeness of the triggering operation and improves its coherence and convenience, thereby improving operation efficiency.
  • In addition, by supporting one-tap drawing of the virtual prop to stop the automatic run, compared with the related art, in which the virtual object is first controlled to stop running through the operation control and is then controlled to take out the virtual prop through the holding control corresponding to the prop, the switching operation in the technical solution provided by the embodiments of the present application is simpler and more convenient, further improving operation efficiency.
  • FIG 8 shows a flow chart of a virtual object control method provided by another embodiment of the present application.
  • the execution subject of each step of the method can be the terminal 10 in the computer system shown in Figure 1.
  • the method can include the following steps (step 801 to step 803):
  • Step 801 Display operation controls for controlling the movement of virtual objects in the virtual scene.
  • Step 802 In response to sliding the operation control from the position of the operation control to the recognition area, control the virtual object to automatically move in the virtual scene in the first movement state.
  • Steps 801 and 802 are the same as those introduced in the above embodiments. For content not explained in the embodiments of this application, please refer to the above embodiments and will not be described again here.
  • Step 803 In response to a quick click operation on the set area, control the virtual object to automatically move in the virtual scene in the second movement state.
  • the quick click operation may refer to the operation of performing multiple consecutive click operations, and in the continuous multiple click operations, the time interval between two adjacent click operations is less than a preset threshold.
  • the quick click operation may be a double click operation.
  • a double-click operation refers to an operation in which two consecutive clicks are performed, and the time interval between the two consecutive clicks is less than a preset threshold.
  • the quick click operation may also be a three-click operation, a four-click operation, etc., which is not limited in the embodiments of the present application.
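  • As a rough sketch of how the quick click operation described above could be detected (the interval value and the function name are assumptions, not from the patent), count the trailing run of taps whose adjacent gaps are all shorter than the preset interval:

```python
QUICK_CLICK_MAX_INTERVAL = 0.3  # seconds; hypothetical preset threshold

def count_quick_clicks(tap_times: list[float]) -> int:
    """Length of the trailing run of taps with adjacent gaps below the preset interval:
    2 corresponds to a double click, 3 to a triple click, and so on."""
    count = 1 if tap_times else 0
    for earlier, later in zip(tap_times, tap_times[1:]):
        count = count + 1 if later - earlier < QUICK_CLICK_MAX_INTERVAL else 1
    return count

print(count_quick_clicks([0.0, 0.2, 0.35]))  # -> 3, i.e. a triple click
```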
  • the embodiment of the present application does not limit the setting area, and the setting area may refer to the identification area.
  • For example, in response to the user ending the sliding operation on the operation control, the recognition area continues to be displayed for a set period of time so that the user can perform the quick click operation.
  • the above setting area may also refer to a newly added click operation detection area for detecting quick click operations.
  • For example, in response to the user ending the sliding operation on the operation control, the recognition area is no longer displayed, and a click operation detection area is displayed at or near the position corresponding to the recognition area so that the user can perform the quick click operation.
  • Illustratively, when the virtual object is in the first movement state, in response to the user's double-click operation on the set area, the client controls the virtual object to automatically move in the virtual scene in the second movement state.
  • the acquisition process of the movement speed corresponding to the second movement state may be as follows:
  • the attribute information of the quick click operation may refer to the number of clicks, click speed, etc. of the quick click operation.
  • the movement speed corresponding to the second movement state is positively correlated with the attribute information of the quick click operation.
  • the movement speed corresponding to the second movement state is positively correlated with the number of clicks of the quick click operation.
  • For example, when the number of clicks of the quick click operation is 2, the movement speed corresponding to the second movement state is the first-gear movement speed; when the number of clicks is 3, it is the second-gear movement speed; and when the number of clicks is 4, it is the third-gear movement speed.
  • The third-gear movement speed is greater than the second-gear movement speed, and the second-gear movement speed is greater than the first-gear movement speed.
  • the movement speed corresponding to the second movement state may also be positively correlated with the click speed of the quick click operation.
  • the client controls the virtual object to automatically move in the virtual scene at a moving speed corresponding to the second movement state determined above.
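  • A sketch of the click-count-to-gear mapping described above; the gear speed values are illustrative assumptions, and only their ordering (third gear greater than second, second greater than first) comes from the description:

```python
GEAR_SPEEDS = {2: 3.0, 3: 3.5, 4: 4.0}  # clicks -> m/s; hypothetical values

def second_state_speed(click_count: int) -> float:
    capped = min(max(click_count, 2), max(GEAR_SPEEDS))  # clamp to the defined gears
    return GEAR_SPEEDS[capped]

print(second_state_speed(3))  # -> 3.5, the second-gear movement speed
```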
  • In summary, in the technical solution provided by the embodiments of the present application, when the virtual object is in the first movement state, a quick click operation on the set area triggers the virtual object to automatically move in the virtual scene in the second movement state.
  • The first movement state and the second movement state can thus be triggered coherently by operating on the recognition area with one hand.
  • Compared with the related art, in which the second movement state is triggered through a two-hand or multi-finger combination, this reduces the cumbersomeness of the triggering operation and improves its coherence and convenience, thereby improving operation efficiency.
  • In an exemplary embodiment, taking a virtual character in a shooting game application as an example, the virtual object control method provided by the embodiments of the present application may further include the following content:
  • the client displays a user interface, and the virtual scene screen is displayed in the user interface.
  • the virtual character holds virtual shooting props.
  • a joystick control for controlling the movement of the virtual character is also displayed in the user interface.
  • In response to the user's upward sliding operation on the joystick control, the client displays a recognition area associated with the joystick control. This recognition area can be used to control the virtual character to enter the first sprint state.
  • the virtual character in the first sprint state holds virtual shooting props and automatically runs quickly in the virtual scene.
  • the client detects whether the virtual character holds virtual shooting props. If it is detected that the virtual character holds a virtual shooting prop, the client displays a progress bar (i.e., the dynamic prompt icon above) above the recognition area. Otherwise, the client does not display the progress bar and continues to detect whether the virtual character holds virtual shooting props. Among them, the progress bar is used to visually indicate how long the sliding operation continues to stay in the recognition area, and the reading progress of the progress bar is positively correlated with the length of stay.
  • a first sprint icon corresponding to the first sprint state is displayed on the left end of the progress bar
  • a second sprint icon corresponding to the second sprint state is displayed on the right end of the progress bar.
  • the virtual character in the second sprint state puts away the virtual shooting props and automatically runs quickly in the virtual scene.
  • the running speed of the second sprint state is greater than the running speed of the first sprint state.
  • The client detects whether the current position of the operation control reaches the recognition area, that is, whether the current position of the operation control touches or overlaps the recognition area. If it is detected that the current position of the operation control reaches the recognition area, the progress bar starts to fill from left to right, that is, its reading progress increases (e.g., from 0 to 100%). If not, the client continues to detect whether the current position of the operation control reaches the recognition area.
  • the client controls the virtual object to enter the first sprint state.
  • the virtual object holds a virtual shooting prop and automatically runs in the virtual scene at the running speed of the first sprint state.
  • the words "Sprint with props" are displayed in the user interface.
  • When the current position of the operation control does not leave the recognition area, the client continues to count the corresponding stay duration of the operation control and detects whether the stay duration reaches 2 seconds. If the stay duration reaches 2 s, the client controls the virtual object to enter the second sprint state: for example, the virtual object puts away the virtual shooting prop and automatically runs in the virtual scene at the running speed of the second sprint state, and the words "Collect props and sprint" are displayed in the user interface. Otherwise, the client continues to detect whether the stay duration reaches 2 seconds.
  • In one example, the client determines the running speed of the second sprint state based on the stay duration. At the same time, the client stops displaying the recognition area and shows the progress bar filling to 100% and then being dismissed.
  • the first state control and the second state control are displayed in the movable area of the joystick control.
  • the first state control is used to control the virtual object to switch from the second sprint state to the first sprint state.
  • the second state control is used to control the virtual object to stop running quickly automatically and hold virtual shooting props.
  • The client detects trigger operations on the first state control and the second state control.
  • In response to a trigger operation on the first state control, the client controls the virtual object to switch from the second sprint state to the first sprint state, and hides the first state control and the second state control, that is, the joystick returns to its default static state.
  • the words "Sprint with props" are displayed in the user interface.
  • In response to a trigger operation on the second state control, the client controls the virtual object to stop sprinting and take out the virtual shooting prop to wait for further control by the user.
  • The client hides the first state control and the second state control, and the words "Sprint with props" are no longer displayed in the user interface.
  • In summary, in the technical solution provided by the embodiments of the present application, when the virtual object is in the first movement state, the virtual object is triggered to automatically move in the virtual scene in the second movement state once the duration for which the operation control stays in the recognition area reaches a threshold.
  • A single finger operating on the recognition area can thus trigger the first movement state and the second movement state coherently.
  • Compared with the related art, in which the second movement state is triggered through a two-hand or multi-finger combination, this reduces the cumbersomeness of the triggering operation and improves its coherence and convenience, thereby improving operation efficiency.
  • FIG. 11 shows a block diagram of a virtual object control device provided by an embodiment of the present application.
  • the device has the function of implementing the above method example, and the function can be implemented by hardware, or can be implemented by hardware executing corresponding software.
  • the device can be the terminal equipment introduced above, or can be set in the terminal equipment.
  • the device 1100 includes: a display module 1101, a first state trigger module 1102 and a second state trigger module 1103.
  • the display module 1101 is used to display operating controls for controlling the movement of virtual objects in the virtual scene.
  • The first state trigger module 1102 is configured to control the virtual object to automatically move in the virtual scene in the first movement state in response to the operation control being slid from its location to the recognition area in the user interface, where the recognition area is used to identify the position of the operation control after the sliding operation.
  • The second state trigger module 1103 is configured to control the virtual object to automatically move in the virtual scene in the second movement state when the duration for which the operation control stays in the recognition area after the sliding operation reaches a threshold, where the movement speed of the second movement state is greater than that of the first movement state.
  • the device 1100 further includes: a prompt icon display module 1104.
  • The prompt icon display module 1104 is configured to display a dynamic prompt icon in the vicinity of the recognition area in response to the operation control being slid from its location to the recognition area, where the dynamic prompt icon is used to indicate the stay duration.
  • the prompt icon display module 1104 is used to:
  • a first icon for indicating the first movement state is displayed at one end of the dynamic prompt icon, and a second icon for indicating the second movement state is displayed at the other end of the dynamic prompt icon.
  • the device 1100 further includes: a status control display module 1105.
  • The state control display module 1105 is configured to display a first state control and a second state control, where the first state control is used to switch the movement state of the virtual object, and the second state control is used to control the virtual object to stop automatic movement.
  • the device 1100 further includes: a third state trigger module 1106.
  • the first state triggering module 1102 is configured to control the virtual object to switch from the second moving state to the first moving state in response to a triggering operation on the first state control.
  • the third state triggering module 1106 is configured to control the virtual object to stop automatically moving and hold virtual props in response to a triggering operation for the second state control.
  • The state control display module 1105 is configured to hide the first state control and the second state control in response to a trigger operation on the first state control or the second state control.
  • The first state trigger module 1102 is further configured to execute the step of controlling the virtual object to automatically move in the virtual scene in the first movement state in response to the operation control being slid from its location to the recognition area, if the virtual object holds a virtual prop.
  • The second state trigger module 1103 is further configured to execute the step of controlling the virtual object to automatically move in the virtual scene in the second movement state in response to the operation control being slid from its location to the recognition area, if the virtual object does not hold a virtual prop.
  • The virtual object in the first movement state holds the virtual prop and automatically moves in the virtual scene; the virtual object in the second movement state puts away the virtual prop and automatically moves in the virtual scene.
  • the device 1100 further includes: a movement speed determination module 1107.
  • the moving speed determination module 1107 is configured to determine the moving speed of the virtual object as the first moving speed when the stay duration is greater than or equal to 0 and less than the threshold.
  • The movement speed determination module 1107 is further configured to determine the movement speed of the virtual object to be the second movement speed when the stay duration is greater than or equal to the threshold, where the second movement speed is the sum of the first movement speed and the product of the stay duration and a set ratio.
  • The second state trigger module 1103 is further configured to control the virtual object to automatically move in the virtual scene in the second movement state in response to a quick click operation on the set area when the virtual object is in the first movement state.
  • In one example, the second state trigger module 1103 is further configured to: acquire attribute information of the quick click operation; determine, according to the attribute information of the quick click operation, the movement speed corresponding to the second movement state; and control the virtual object to automatically move in the virtual scene at the determined movement speed.
  • In summary, in the technical solution provided by the embodiments of the present application, when the virtual object is in the first movement state, the virtual object is triggered to automatically move in the virtual scene in the second movement state once the duration for which the operation control stays in the recognition area reaches a threshold.
  • A single finger operating on the recognition area can thus trigger the first movement state and the second movement state coherently.
  • Compared with the related art, in which the second movement state is triggered through a two-hand or multi-finger combination, this reduces the cumbersomeness of the triggering operation and improves its coherence and convenience, thereby improving operation efficiency.
  • FIG. 13 shows a structural block diagram of a terminal device 1300 provided by an embodiment of the present application.
  • the terminal device is used to implement the virtual object control method provided in the above embodiment.
  • the terminal device may be the terminal 10 in the computer system shown in FIG. 1 . Specifically:
  • the terminal device 1300 includes: a processor 1301 and a memory 1302.
  • the processor 1301 may include one or more processing cores, such as a 4-core processor, an 8-core processor, etc.
  • the processor 1301 can be implemented using at least one hardware form among DSP (Digital Signal Processing, digital signal processing), FPGA (Field Programmable Gate Array, field programmable gate array), and PLA (Programmable Logic Array, programmable logic array).
  • the processor 1301 can also include a main processor and a co-processor.
  • the main processor is a processor used to process data in the awake state, also called a CPU (Central Processing Unit); the coprocessor is a low-power processor used to process data in the standby state.
  • the processor 1301 may be integrated with a GPU (Graphics Processing Unit, image processor), and the GPU is responsible for rendering and drawing content that needs to be displayed on the display screen.
  • the processor 1301 may also include an AI (Artificial Intelligence, artificial intelligence) processor, which is used to process computing operations related to machine learning.
  • memory 1302 may include one or more computer-readable storage media, which may be non-transitory.
  • Memory 1302 may also include high-speed random access memory, and non-volatile memory, such as one or more disk storage devices, flash memory storage devices.
  • The non-transitory computer-readable storage medium is used to store a computer program, and the computer program is configured to be executed by one or more processors to implement the above control method of a virtual object.
  • the terminal device 1300 optionally further includes: a peripheral device interface 1303 and at least one peripheral device.
  • the processor 1301, the memory 1302 and the peripheral device interface 1303 may be connected through a bus or a signal line.
  • Each peripheral device can be connected to the peripheral device interface 1303 through a bus, a signal line, or a circuit board.
  • the peripheral device includes: at least one of a radio frequency circuit 1304, a display screen 1305, an audio circuit 1306, and a power supply 1307.
  • The structure shown in FIG. 13 does not constitute a limitation on the terminal device 1300; the terminal device may include more or fewer components than shown, combine certain components, or adopt a different component arrangement.
  • a computer-readable storage medium is also provided, and a computer program is stored in the storage medium.
  • the computer program When executed by a processor, the computer program implements the above control method of a virtual object.
  • the computer-readable storage medium may include: ROM (Read-Only Memory), RAM (Random-Access Memory), SSD (Solid State Drives, solid state drive) or optical disk, etc.
  • random access memory can include ReRAM (Resistance Random Access Memory, resistive random access memory) and DRAM (Dynamic Random Access Memory, dynamic random access memory).
  • a computer program product or computer program is also provided, the computer program product or computer program including computer instructions stored in a computer-readable storage medium.
  • the processor of the terminal device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the terminal device executes the above control method of the virtual object.
  • It should be noted that the information (including but not limited to subject device information, subject personal information, etc.), data (including but not limited to data used for analysis, stored data, displayed data, etc.), and signals involved in this application are all authorized by the subjects or fully authorized by all parties, and the collection, use, and processing of the relevant data must comply with the relevant laws, regulations, and standards of the relevant countries and regions. For example, the virtual objects and operations involved in this application were all obtained with full authorization.

Abstract

A control method for a virtual object (303), comprising: displaying an operation control (302) for controlling the movement of a virtual object (303) in a virtual scene; in response to the operation control (302) being slid from its location to a recognition area (304), controlling the virtual object (303) to automatically move in the virtual scene in a first movement state; and when the duration for which the operation control (302) stays in the recognition area (304) after the sliding operation reaches a threshold, controlling the virtual object (303) to automatically move in the virtual scene in a second movement state, where the movement speed of the second movement state is greater than that of the first movement state. Also provided are a control apparatus for a virtual object (303), a terminal device, a computer-readable storage medium, and a computer program product. With this control method, the first movement state and the second movement state can be triggered coherently with a single finger, improving operation efficiency and convenience.

Description

Control method and apparatus for virtual object, terminal, storage medium, and program product
This application claims priority to Chinese Patent Application No. 202210992629.3, entitled "Control method and apparatus for virtual object, terminal, storage medium, and program product", filed on August 18, 2022, the entire contents of which are incorporated herein by reference.
Technical Field
Embodiments of this application relate to the field of computer and Internet technologies, and in particular to a control method and apparatus for a virtual object, a terminal, a storage medium, and a program product.
Background
Currently, in game applications, a player can control the movement of a virtual character through a virtual joystick, and can also control the virtual character to enter an automatic fast running state (i.e., a sprint state) through the virtual joystick.
Taking a shooting game application as an example, the player first slides the virtual joystick upward with the left hand to trigger the virtual character to enter a normal sprint state (for example, while holding a virtual prop), and then triggers another control with the right hand to put away the virtual prop, so that the virtual character can sprint at a faster sprint speed.
However, the above operation for controlling the virtual character to sprint at a faster speed is too cumbersome and is not coherent or convenient enough.
Summary
Embodiments of this application provide a control method and apparatus for a virtual object, a terminal, a storage medium, and a program product, which can reduce the cumbersomeness of triggering the second movement state and improve the coherence and convenience of the operation, thereby improving operation efficiency. The technical solutions are as follows:
According to one aspect of the embodiments of this application, a control method for a virtual object is provided. The method is executed by a terminal and includes:
displaying an operation control for controlling the movement of a virtual object in a virtual scene;
in response to the operation control being slid from its location to a recognition area, controlling the virtual object to automatically move in the virtual scene in a first movement state, where the recognition area is used to identify the position of the operation control after the sliding operation; and
when the duration for which the operation control stays in the recognition area after the sliding operation reaches a threshold, controlling the virtual object to automatically move in the virtual scene in a second movement state, where the movement speed of the second movement state is greater than that of the first movement state.
According to one aspect of the embodiments of this application, a control apparatus for a virtual object is provided, including:
a display module, configured to display an operation control for controlling the movement of a virtual object in a virtual scene;
a first state trigger module, configured to control the virtual object to automatically move in the virtual scene in a first movement state in response to the operation control being slid from its location to a recognition area, where the recognition area is used to identify the position of the operation control after the sliding operation; and
a second state trigger module, configured to control the virtual object to automatically move in the virtual scene in a second movement state when the duration for which the operation control stays in the recognition area after the sliding operation reaches a threshold, where the movement speed of the second movement state is greater than that of the first movement state.
According to one aspect of the embodiments of this application, a terminal device is provided. The terminal device includes a processor and a memory, the memory stores a computer program, and the computer program is loaded and executed by the processor to implement the above control method for a virtual object.
According to one aspect of the embodiments of this application, a computer-readable storage medium is provided. The storage medium stores a computer program, and the computer program is loaded and executed by a processor to implement the above control method for a virtual object.
According to one aspect of the embodiments of this application, a computer program product or a computer program is provided. The computer program product or the computer program includes computer instructions stored in a computer-readable storage medium. A processor of a terminal device reads the computer instructions from the computer-readable storage medium and executes them, so that the terminal device performs the above control method for a virtual object.
The technical solutions provided by the embodiments of this application may include the following beneficial effects:
When the virtual object is in the first movement state, the virtual object can be triggered to automatically move in the virtual scene in the second movement state once the duration for which the position of the operation control stays in the recognition area reaches a threshold, so that a single finger operating on the recognition area can coherently trigger the first movement state and the second movement state. Compared with the related art, in which the second movement state is triggered through a two-hand or multi-finger combination, this reduces the cumbersomeness of the triggering operation and improves its coherence and convenience, thereby improving operation efficiency.
Brief Description of the Drawings
Figure 1 is a schematic diagram of a computer system provided by an embodiment of this application;
Figure 2 is a flowchart of a control method for a virtual object provided by an embodiment of this application;
Figure 3 is a schematic diagram of a virtual object in a first movement state provided by an embodiment of this application;
Figure 4 is a schematic diagram of a virtual object in a second movement state provided by an embodiment of this application;
Figure 5 is a schematic diagram of a first state control and a second state control provided by an embodiment of this application;
Figure 6 is a schematic diagram of the virtual object when the first state control is triggered, provided by an embodiment of this application;
Figure 7 is a schematic diagram of the virtual object when the second state control is triggered, provided by an embodiment of this application;
Figure 8 is a flowchart of a control method for a virtual object provided by another embodiment of this application;
Figure 9 is a flowchart of a control method for a virtual object in a shooting game application provided by an embodiment of this application;
Figure 10 is a flowchart of a control method for a virtual object in a shooting game application provided by another embodiment of this application;
Figure 11 is a block diagram of a control apparatus for a virtual object provided by an embodiment of this application;
Figure 12 is a block diagram of a control apparatus for a virtual object provided by another embodiment of this application;
Figure 13 is a block diagram of a terminal device provided by an embodiment of this application.
Detailed Description
Please refer to Figure 1, which shows a schematic diagram of a computer system provided by an embodiment of this application. The computer system may include a terminal 10 and a server 20.
The terminal 10 may be an electronic device such as a mobile phone, a tablet computer, a game console, a multimedia playback device, or a PC (Personal Computer). A client of a target application may be installed in the terminal 10, such as a client of a game application, a simulation learning application, a virtual reality (VR) application, an augmented reality (AR) application, a social networking application, or an interactive entertainment application.
The server 20 is used to provide background services for the client of the application (such as a game application) in the terminal 10. For example, the server 20 may be a backend server of the above target application (such as a game application). The server 20 may be one server, a server cluster composed of multiple servers, or a cloud computing service center.
The terminal 10 and the server 20 can communicate with each other through a network 30. The network 30 may be a wired network or a wireless network.
Illustratively, taking the client of a game application as an example, the client displays a user interface including an operation control for controlling the movement of the virtual object. In response to the operation control being slid by the user from its location to the recognition area, the client controls the virtual object to automatically move in the virtual scene in the first movement state; when the duration for which the operation control stays in the recognition area after the sliding operation reaches a threshold, the client controls the virtual object to automatically move in the virtual scene in the second movement state, where the movement speed of the second movement state is greater than that of the first movement state.
Please refer to Figure 2, which shows a flowchart of a control method for a virtual object provided by an embodiment of this application. The steps of the method may be executed by the terminal 10 in the computer system shown in Figure 1. The method may include the following steps (step 201 to step 203):
Step 201: Display an operation control for controlling the movement of a virtual object in a virtual scene.
The virtual scene is displayed in a user interface. The user interface refers to the display interface of an application, such as the display interface of the above target application. Illustratively, in a shooting game application, the user interface may be the display interface of a game match, which presents the virtual scene of the match to the user. In a simulation learning application, the user interface may be the display interface of a learning scene, which presents the simulated environment of the learning scene to the user. Optionally, the user interface includes a display layer and a control layer, where the display level of the control layer is higher than that of the display layer. The display layer is used to display picture information (such as the virtual scene and the movement of virtual objects), and the control layer is used to display UI (User Interface) controls (such as operation controls, buttons, and sliders).
The above virtual object may refer to a virtual object controlled by a user account in an application (such as a game application). Taking a shooting game application as an example, the virtual object may refer to a virtual character controlled by the user account in the game application. The virtual object may also refer to a virtual vehicle driven by a virtual character in the application, such as a virtual car, a virtual aircraft, or a virtual hot air balloon. The embodiments of this application do not limit the virtual object.
The above virtual scene refers to the environment displayed (or provided) when the client of an application (such as a game application) runs on the terminal. The virtual scene is an environment created for virtual objects to carry out activities (such as game matches), for example, a virtual house, a virtual island, a virtual sky, or virtual land. The virtual scene may be a simulation of the real world, a semi-simulated and semi-fictional environment, or a purely fictional environment, which is not limited in the embodiments of this application.
In the embodiments of this application, the user can control the virtual object to move in the virtual scene through the operation control. Illustratively, the operation control may be a joystick control, a gamepad control, a direction control, etc.
Step 202: In response to the operation control being slid from its location to a recognition area in the user interface, control the virtual object to automatically move in the virtual scene in a first movement state.
In some embodiments, the location of the operation control refers to the position of the touch point on the operation control.
A touch point refers to the point of contact when an operation is performed on the user interface; alternatively, a touch point refers to the contact point of the user's operation on the display screen of the terminal, such as the point of contact between the user's finger and the display screen. The recognition area refers to an area associated with the operation control, which can be used to change the movement state of the virtual object; alternatively, the recognition area is used to identify the touch point slid over from the operation control. For example, through the recognition area, the virtual character can be switched from a walking state to an automatic fast running state.
In some embodiments, the location of the operation control refers to the position of the operation control in the game joystick. Optionally, the location of the operation control refers to the starting, default, or origin position of the operation control in the game joystick.
In the embodiments of this application, the first movement state includes at least one of a movement speed, a movement posture, and a movement attribute.
In some embodiments, taking the virtual object being a virtual character as an example, the movement speed may refer to the movement speed of the virtual character in the first movement state; the movement posture may include walking, running, half-crouching, sprinting, holding a virtual prop, not holding a virtual prop, and the like; and the movement attribute may include using a virtual vehicle, not using a virtual vehicle, and the like.
For example, the virtual character in the first movement state is: the virtual character holds a virtual prop and sprints at a movement speed of 2 m/s without using a virtual vehicle.
In other embodiments, taking the virtual object being a virtual vehicle as an example, the movement speed may refer to the traveling speed of the virtual vehicle in the first movement state, the movement posture may refer to the traveling posture of the virtual vehicle, and the movement attribute may include flying in the air, traveling on water, traveling on land, and the like.
For example, the virtual vehicle in the first movement state is: the virtual vehicle drifts on the water surface at a traveling speed of 20 m/s.
For example, referring to Figure 3, in response to the user's trigger operation on the joystick control 302 in the user interface 301, the client controls the virtual object 303 to walk in the virtual scene. In response to the user sliding the joystick control 302 upward, the client displays a recognition area 304 at a set distance above the joystick control 302. In response to the joystick control 302 being slid from its location to the recognition area 304, the client controls the virtual object 303 to enter the first movement state, that is, the virtual object 303 automatically runs in the virtual scene in the first movement state. The first movement state may also be called the normal sprint state (i.e., the automatic fast running state), and the movement speed of the first movement state is greater than that of the walking state. Optionally, an icon corresponding to the first movement state is also displayed in the recognition area 304 to prompt the user that the virtual object 303 is in the first movement state. The icon may be accompanied by text information, such as "Continue sprinting".
In one example, when the virtual object holds a virtual prop, the virtual object in the first movement state holds the virtual prop and automatically moves in the virtual scene. For example, referring to Figure 3, the virtual object 303 in the first movement state holds the virtual prop 305 and automatically runs in the virtual scene at the movement speed of the first movement state. Optionally, when the virtual object does not hold a virtual prop, the virtual object can automatically move in the virtual scene empty-handed (i.e., without holding a virtual prop).
In one example, in response to the operation control being slid from its location to the recognition area, the client displays a dynamic prompt icon in the vicinity of the recognition area, and the dynamic prompt icon is used to indicate the stay duration.
The dynamic prompt icon can be used to dynamically display the stay duration, and the filling degree of the elements in the dynamic prompt icon changes with the stay duration. Optionally, one end of the dynamic prompt icon displays a first icon for indicating the first movement state, and the other end displays a second icon for indicating the second movement state. Illustratively, the dynamic prompt icon may be a progress bar, an hourglass, etc.
For example, referring to Figure 3, in response to the joystick control 302 being slid from its location to the recognition area 304, the client displays a dynamic prompt icon 306 above the recognition area 304. The left end of the dynamic prompt icon 306 displays a first icon for indicating the first movement state, and the right end displays a second icon for indicating the second movement state.
The movement speed of the second movement state is greater than that of the first movement state. Optionally, the movement posture of the second movement state may be different from that of the first movement state. For example, the movement posture of the second movement state is a half-crouching movement, and the movement posture of the first movement state is a standing movement. In one example, when the virtual object holds a virtual prop, the virtual object in the second movement state puts away the virtual prop and automatically moves in the virtual scene. For example, referring to Figures 3 and 4, the virtual object 303 in the second movement state puts away the virtual prop 305 and automatically runs in the virtual scene at the movement speed of the second movement state. The movement attribute of the second movement state may be the same as that of the first movement state; for example, the virtual object uses no virtual vehicle in either the second movement state or the first movement state.
In the embodiments of this application, the stay duration refers to the length of time the operation control stays in the recognition area after the sliding operation. Optionally, if the operation control slides into the recognition area and leaves it immediately, the dynamic prompt icon is not displayed.
In one example, the filling degree of the elements in the dynamic prompt icon is positively correlated with the stay duration, and the filling process of the elements in the dynamic prompt icon may be as follows: while the operation control stays in the recognition area after the sliding operation, an animation of the elements in the dynamic prompt icon being dynamically filled is displayed; when the stay duration reaches the threshold, an animation of the elements being completely filled is displayed, and the dynamic prompt icon and the recognition area are then no longer displayed.
Illustratively, referring to Figures 3, 4, and 5, as the stay duration increases, the elements in the dynamic prompt icon 306 gradually fill the dynamic prompt icon 306. When the stay duration reaches the threshold, the elements in the dynamic prompt icon 306 are completely filled, and at the same time, the client stops displaying the dynamic prompt icon 306 and the recognition area 304.
Visualizing the stay duration through the dynamic prompt icon in this way allows the user to grasp the stay duration more intuitively for subsequent operations, thereby improving the user experience.
Step 203: When the duration for which the operation control stays in the recognition area after the sliding operation reaches a threshold, control the virtual object to automatically move in the virtual scene in a second movement state, where the movement speed of the second movement state is greater than that of the first movement state.
Optionally, the stay duration starts to be counted when the operation control is slid into the recognition area. The threshold can be adaptively set and adjusted according to actual usage requirements; for example, the threshold may be 2 s, 2.5 s, or 3 s. Illustratively, when the virtual object is in the first movement state, in response to the stay duration reaching the threshold (i.e., being greater than or equal to the threshold), the client controls the virtual object to enter the second movement state, that is, controls the virtual object to automatically move in the virtual scene according to the movement speed, movement posture, etc. corresponding to the second movement state.
For example, referring to Figures 4 and 5, when the dynamic prompt icon 306 finishes filling, the virtual object 303 puts away the virtual prop 305 and automatically runs in the virtual scene at the movement speed corresponding to the second movement state, that is, the virtual object 303 enters the prop-stowed sprint state. Optionally, an icon corresponding to the second movement state is updated and displayed in the recognition area 304 to prompt the user that the virtual object 303 is in the second movement state. The icon may be accompanied by text information, such as "Sprinting with prop stowed". In response to the user ending the trigger operation on the joystick control 302, the joystick control 302 correspondingly displays the prompt message "Sprinting with prop stowed".
在一个示例中,在虚拟对象进入第二移动状态之后,客户端还显示有第一状态控件和第二状态控件;其中,第一状态控件用于切换虚拟对象的移动状态,第二状态控件用于控制虚拟对象停止自动移动。
例如，参考图5，响应于用户结束针对摇杆控件302的触发操作，客户端控制虚拟对象303持续保持第二移动状态，并在摇杆控件302的可移动区域内显示第一状态控件307和第二状态控件308。其中，第一状态控件307位于第二状态控件308的上方。可选地，第一状态控件307和第二状态控件308可以是按钮。第一状态控件307对应显示有可以切换至的移动状态的提示信息。第二状态控件308对应显示有“停止自动移动”的提示信息。
在一个示例中,客户端响应于针对第一状态控件的触发操作,控制虚拟对象从第二移动状态切换为第一移动状态。
例如，参考图5和图6，响应于用户针对第一状态控件307的触发操作，虚拟对象303从未持有虚拟道具305的自动快速奔跑状态，切换为持有虚拟道具305的自动快速奔跑状态。
在另一个示例中,客户端响应于针对第二状态控件的触发操作,控制虚拟对象停止自动移动并持有虚拟道具。
例如,参考图7,响应于用户针对第二状态控件308的触发操作,虚拟对象303停止自动快速奔跑,并掏出虚拟道具305,或者虚拟对象303掏出虚拟道具305并停止自动快速奔跑。如此可以实现一键掏虚拟道具停止自动奔跑,相比于相关技术中先通过操作控件控制虚拟对象停止奔跑,再通过虚拟道具对应的持有控件,控制虚拟对象掏出虚拟道具,本申请实施例提供的技术方案中的切换操作更加简单便捷。
可选地，客户端响应于针对第一状态控件或第二状态控件的触发操作，隐藏显示第一状态控件和第二状态控件。如此，操作控件可以恢复至正常状态，以使得用户可以继续正常使用操作控件。
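对于第二移动状态下显示的第一状态控件与第二状态控件，其触发处理可参考如下示意性草图（其中的状态命名与方法命名均为说明性假设）：

```typescript
// 示意性草图：第二移动状态下两个状态控件的触发处理
type SprintState = 'firstMove' | 'secondMove' | 'stopped';

class SprintControls {
  state: SprintState = 'secondMove';
  holdingItem = false;      // 第二移动状态下虚拟道具已收起
  controlsVisible = true;   // 第一/第二状态控件是否显示

  onFirstControlTapped(): void {
    this.state = 'firstMove';   // 从第二移动状态切换回第一移动状态（持道具自动奔跑）
    this.holdingItem = true;
    this.hideControls();
  }

  onSecondControlTapped(): void {
    this.state = 'stopped';     // 一键停止自动移动并掏出虚拟道具
    this.holdingItem = true;
    this.hideControls();
  }

  private hideControls(): void {
    this.controlsVisible = false; // 隐藏两个状态控件，操作控件恢复正常状态
  }
}

// 用法示例
const controls = new SprintControls();
controls.onSecondControlTapped();
console.log(controls.state, controls.holdingItem, controls.controlsVisible); // "stopped" true false
```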
在一个示例中,客户端响应于操作控件从操作控件所在位置滑动至识别区域,若虚拟对象持有虚拟道具,则执行控制虚拟对象以第一移动状态在虚拟场景中自动移动的步骤;若虚拟对象未持有虚拟道具,则执行控制虚拟对象以第二移动状态在虚拟场景中自动移动的步骤。
示例性地,客户端响应于操作控件从操作控件所在位置滑动至识别区域,若虚拟对象持有虚拟道具,则控制虚拟对象持有虚拟道具,以第一移动状态的移动速度在虚拟场景中自动移动;若虚拟对象未持有虚拟道具,则控制虚拟对象空手,以第二移动状态的移动速度在虚拟场景中自动移动。
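上述“依据是否持有虚拟道具选择进入第一移动状态或第二移动状态”的分支逻辑，可参考如下示意性草图（命名为说明性假设）：

```typescript
// 示意性草图：滑入识别区域时，依据是否持有虚拟道具选择移动状态
function onSlideIntoRegion(holdingItem: boolean): 'firstMove' | 'secondMove' {
  // 持有虚拟道具：以第一移动状态自动移动；未持有：直接以第二移动状态自动移动
  return holdingItem ? 'firstMove' : 'secondMove';
}

console.log(onSlideIntoRegion(true));  // 'firstMove'
console.log(onSlideIntoRegion(false)); // 'secondMove'
```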
在一个示例中,虚拟对象的移动速度的确定方法可以如下:在停留时长大于或等于0,且小于阈值的情况下,确定虚拟对象的移动速度为第一移动速度;在停留时长大于或等于阈值的情况下,确定虚拟对象的移动速度为第二移动速度,该第二移动速度为停留时长和设定比例之间的乘积,与第一移动速度的和值。
在停留时长大于或等于0,且小于阈值的情况下,虚拟对象切换至第一移动状态,也即第一移动状态的移动速度固定为第一移动速度。在停留时长大于或等于阈值的情况下,虚拟对象切换至第二移动状态,也即第二移动状态的移动速度在第一移动速度的基础上,随着停留时长的变大而变大。
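上述移动速度的确定方式可概括为：停留时长小于阈值时，速度固定为第一移动速度；达到阈值后，速度为第一移动速度与“停留时长×设定比例”之和。以下为一段示意性草图（其中的速度、比例与阈值数值均为说明性假设）：

```typescript
// 示意性草图：按停留时长确定移动速度
// dwellS：停留时长（秒）；v1：第一移动速度；ratio：设定比例；thresholdS：阈值
function moveSpeed(dwellS: number, v1 = 2.0, ratio = 0.5, thresholdS = 2.0): number {
  if (dwellS >= 0 && dwellS < thresholdS) {
    return v1;                  // 第一移动状态：速度固定为第一移动速度
  }
  return v1 + dwellS * ratio;   // 第二移动状态：在第一移动速度基础上随停留时长增大
}

console.log(moveSpeed(1.0)); // 2.0（第一移动速度）
console.log(moveSpeed(3.0)); // 3.5 = 2.0 + 3.0 × 0.5
```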
综上所述,本申请实施例提供的技术方案,通过在虚拟对象处于第一移动状态的情况下,支持针对操作控件在识别区域的停留时长达到阈值,即可触发虚拟对象以第二移动状态在虚拟场景中自动移动,实现通过单指针对识别区域进行操作,即可连贯地进行第一移动状态和第二移动状态的触发,相比于相关技术中通过双手组合或多指组合完成第二移动状态的触发,降低了第二移动状态的触发操作繁琐程度,以及提高了触发操作的连贯性和便捷性,从而提高了操作效率。
另外,通过支持一键掏虚拟道具停止自动奔跑,相比于相关技术中先通过操作控件控制虚拟对象停止奔跑,再通过虚拟道具对应的持有控件,控制虚拟对象掏出虚拟道具,本申请实施例提供的技术方案中的切换操作更加简单便捷,进一步提高了操作效率。
请参考图8，其示出了本申请另一个实施例提供的虚拟对象的控制方法的流程图，该方法各步骤的执行主体可以是图1所示计算机系统中的终端10，该方法可以包括如下几个步骤（步骤801~步骤803）：
步骤801,显示用于控制虚拟场景中的虚拟对象移动的操作控件。
步骤802,响应于将操作控件从操作控件所在位置滑动至识别区域,控制虚拟对象以第一移动状态在虚拟场景中自动移动。
步骤801和步骤802与上述实施例介绍相同,本申请实施例未说明的内容,可以参考上述实施例,这里不再赘述。
步骤803,响应于针对设定区域的快速点击操作,控制虚拟对象以第二移动状态在虚拟场景中自动移动。
其中,快速点击操作可以是指进行连续多次点击的操作,且该连续多次点击操作中,相邻两次点击操作的时间间隔小于预设阈值。可选地,该快速点击操作可以是双击操作。双击操作是指连续两次点击,且该连续两次点击的时间间隔小于预设阈值的操作。在一些实施例中,快速点击操作也可以是三击操作、四击操作等等,本申请实施例对此不作限定。本申请实施例对设定区域不作限定,上述设定区域可以是指上述识别区域。例如,响应于用户结束针对操作控件的滑动操作,识别区域仍会持续显示设定时长,以供用户实施快速点击操作。上述设定区域也可以是指新增的点击操作检测区域,以用于检测快速点击操作。例如,响应于用户结束针对操作控件的滑动操作,取消显示识别区域,并在识别区域对应的显示位置或附近显示点击操作检测区域,以供用户实施快速点击操作。
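快速点击操作（如双击、三击）的识别，通常可以“相邻两次点击的时间间隔是否小于预设阈值”为判断依据。以下为一段示意性草图（间隔阈值取300ms仅为示例，相关命名为说明性假设）：

```typescript
// 示意性草图：快速点击检测——相邻两次点击间隔小于预设阈值则计入连击
class RapidClickDetector {
  private lastClickMs = Number.NEGATIVE_INFINITY;
  private clickCount = 0;

  constructor(private readonly maxIntervalMs = 300) {}

  // 每次点击时调用，返回当前连击次数（达到2即可视为双击）
  onClick(nowMs: number): number {
    this.clickCount = nowMs - this.lastClickMs < this.maxIntervalMs ? this.clickCount + 1 : 1;
    this.lastClickMs = nowMs;
    return this.clickCount;
  }
}

const detector = new RapidClickDetector();
console.log(detector.onClick(0));    // 1
console.log(detector.onClick(200));  // 2（双击）
console.log(detector.onClick(380));  // 3（三击）
```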
示例性地,在虚拟对象处于第一移动状态的情况下,响应于用户针对设定区域的双击操作,客户端控制虚拟对象以第二移动状态在虚拟场景中自动移动。
在本申请实施例中,第二移动状态对应的移动速度的获取过程可以如下:
1、获取快速点击操作的属性信息。
可选地,快速点击操作的属性信息可以是指快速点击操作的点击次数、点击速度等。
2、根据快速点击操作的属性信息,确定第二移动状态对应的移动速度。
可选地,第二移动状态对应的移动速度与快速点击操作的属性信息呈正相关关系。
示例性地,第二移动状态对应的移动速度与快速点击操作的点击次数呈正相关关系。例如,在快速点击操作的点击次数为2的情况下,第二移动状态对应的移动速度为第一档移动速度,在快速点击操作的点击次数为3的情况下,第二移动状态对应的移动速度为第二档移动速度,在快速点击操作的点击次数为4的情况下,第二移动状态对应的移动速度为第三档移动速度。其中,第三档移动速度大于第二档移动速度,第二档移动速度大于第一档移动速度。
可选地,第二移动状态对应的移动速度还可以与快速点击操作的点击速度呈正相关关系,快速点击操作的点击速度越快,第二移动状态对应的移动速度越快。
3、控制虚拟对象以第二移动状态在虚拟场景中自动移动。
可选地,客户端控制虚拟对象以上述确定的第二移动状态对应的移动速度,在虚拟场景中自动移动。
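针对上文第2点所述“根据快速点击操作的属性信息确定第二移动状态对应的移动速度”，下面给出一段按点击次数分档取速度的示意性草图（各档速度数值均为说明性假设）：

```typescript
// 示意性草图：按快速点击操作的点击次数确定第二移动状态的移动速度（数值为示例）
function secondStateSpeed(clickCount: number): number {
  if (clickCount <= 2) return 3.0;  // 双击：第一档移动速度
  if (clickCount === 3) return 3.5; // 三击：第二档移动速度
  return 4.0;                       // 四击及以上：第三档移动速度
}

console.log(secondStateSpeed(2)); // 3.0
console.log(secondStateSpeed(3)); // 3.5
console.log(secondStateSpeed(4)); // 4.0
```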
综上所述,本申请实施例提供的技术方案,通过在虚拟对象处于第一移动状态的情况下,支持针对设定区域的快速点击操作,即可触发虚拟对象以第二移动状态在虚拟场景中自动移动,实现通过单手针对识别区域进行操作,即可连贯地进行第一移动状态和第二移动状态的触发,相比于相关技术中通过双手组合或多指组合完成第二移动状态的触发,降低了第二移动状态的触发操作繁琐程度,以及提高了触发操作的连贯性和便捷性,从而提高了操作效率。
在一个示例性实施例中,参考图9,以射击游戏类应用程序中的虚拟角色为例,本申请实施例提供的虚拟对象的控制方法还可以包括如下内容:
客户端显示用户界面,用户界面中显示虚拟场景画面。在该虚拟场景画面中,虚拟角色持有虚拟射击道具。可选地,用户界面中还显示有用于控制虚拟角色移动的摇杆控件。
响应于用户针对摇杆控件的向上滑动操作,客户端显示与摇杆控件相关联的识别区域。该识别区域可用于控制虚拟角色进入第一冲刺状态。第一冲刺状态下的虚拟角色持有虚拟射击道具,在虚拟场景中自动快速奔跑。
客户端检测虚拟角色是否持有虚拟射击道具。若检测到虚拟角色持有虚拟射击道具，客户端则在识别区域的上方显示进度条（即上文的动态提示图标）。否则，客户端不显示进度条，并继续检测虚拟角色是否持有虚拟射击道具。其中，进度条用于可视化指示滑动操作持续停留在识别区域中的时长，进度条的读条进度与该停留时长呈正相关关系。
可选地,进度条的左端显示有第一冲刺状态对应的第一冲刺图标,进度条的右端显示有第二冲刺状态对应的第二冲刺图标。其中,第二冲刺状态下的虚拟角色收起虚拟射击道具,在虚拟场景中自动快速奔跑。第二冲刺状态的奔跑速度大于第一冲刺状态的奔跑速度。
客户端检测操作控件的当前位置是否到达识别区域，即检测操作控件的当前位置是否与识别区域发生接触或重叠。若检测到操作控件的当前位置到达识别区域，进度条则开始从左向右调整，即增长读条进度（如0-100%）。若未检测到操作控件的当前位置到达识别区域，客户端则继续检测操作控件的当前位置是否到达识别区域。
同时,客户端控制虚拟对象进入第一冲刺状态。示例性地,虚拟对象持有虚拟射击道具,在虚拟场景中以第一冲刺状态的奔跑速度进行自动奔跑。用户界面中显示“持道具冲刺”字样。
在操作控件的当前位置未离开识别区域的情况下,客户端持续统计操作控件对应的停留时长,并检测停留时长是否达到2s。若停留时长达到2s,客户端则控制虚拟对象进入第二冲刺状态。示例性地,虚拟对象收起虚拟射击道具,在虚拟场景中以第二冲刺状态的奔跑速度进行自动奔跑。用户界面中显示“收道具冲刺”字样。否则,客户端继续检测停留时长是否达到2s。
在虚拟对象进入第二冲刺状态之后，响应于用户结束该滑动操作，客户端根据停留时长，确定第二冲刺状态的奔跑速度。与此同时，客户端取消显示识别区域，并在显示进度条填充至100%的画面之后，取消显示进度条。
可选地,在虚拟对象进入第二冲刺状态之后,摇杆控件的可移动区域中显示有第一状态控件和第二状态控件。其中,第一状态控件用于控制虚拟对象从第二冲刺状态切换至第一冲刺状态。第二状态控件用于控制虚拟对象停止自动快速奔跑,并持有虚拟射击道具。
参考图10,在显示有第一状态控件和第二状态控件之后,客户端对第一状态控件和第二状态控件进行检测。
在用户触发第一状态控件的情况下,客户端控制虚拟对象从第二冲刺状态切换至第一冲刺状态,并隐藏显示第一状态控件和第二状态控件,即摇杆变为默认的静止状态。用户界面中显示“持道具冲刺”字样。
在用户触发第二状态控件的情况下，客户端控制虚拟对象停止冲刺，掏出虚拟射击道具，以等待用户的进一步控制。同时，客户端隐藏显示第一状态控件和第二状态控件。用户界面中取消显示“收道具冲刺”字样。
综上所述,本申请实施例提供的技术方案,通过在虚拟对象处于第一移动状态的情况下,支持针对操作控件在识别区域的停留时长达到阈值,即可触发虚拟对象以第二移动状态在虚拟场景中自动移动,实现通过单指针对识别区域进行操作,即可连贯地进行第一移动状态和第二移动状态的触发,相比于相关技术中通过双手组合或多指组合完成第二移动状态的触发,降低了第二移动状态的触发操作繁琐程度,以及提高了触发操作的连贯性和便捷性,从而提高了操作效率。
下述为本申请装置实施例，可以用于执行本申请方法实施例。对于本申请装置实施例中未披露的细节，请参照本申请方法实施例。
请参考图11,其示出了本申请一个实施例提供的虚拟对象的控制装置的框图。该装置具有实现上述方法示例的功能,所述功能可以由硬件实现,也可以由硬件执行相应的软件实现。该装置可以是上文介绍的终端设备,也可以设置在终端设备中。如图11所示,该装置1100包括:显示模块1101、第一状态触发模块1102和第二状态触发模块1103。
显示模块1101,用于显示用于控制虚拟场景中的虚拟对象移动的操作控件。
第一状态触发模块1102,用于响应于将所述操作控件从所述操作控件所在位置滑动至所述用户界面中的识别区域,控制所述虚拟对象以第一移动状态在虚拟场景中自动移动,所述识别区域用于识别滑动操作后的操作控件的位置。
第二状态触发模块1103,用于在所述滑动操作后的所述操作控件在所述识别区域的停留时长达到阈值的情况下,控制所述虚拟对象以第二移动状态在所述虚拟场景中自动移动;其中,所述第二移动状态的移动速度大于所述第一移动状态的移动速度。
在一个示例性实施例中,如图12所示,所述装置1100,还包括:提示图标显示模块1104。
提示图标显示模块1104,用于响应于将所述操作控件从所述操作控件所在位置滑动至所述识别区域,在所述识别区域的附近区域显示动态提示图标,所述动态提示图标用于指示所述停留时长。
在一个示例性实施例中,所述提示图标显示模块1104,用于:
在所述滑动操作后的所述操作控件停留在所述识别区域的情况下,显示所述动态提示图标中的元素动态填充的动画;其中,所述动态提示图标中的元素的填充程度与所述停留时长呈正相关关系;
在所述停留时长达到所述阈值的情况下,显示所述动态提示图标中的元素填充完成的动画;
取消显示所述动态提示图标和所述识别区域。
在一个示例性实施例中,所述动态提示图标的一端显示有用于指示所述第一移动状态的第一图标,所述动态提示图标的另一端显示有用于指示所述第二移动状态的第二图标。
在一个示例性实施例中,如图12所示,所述装置1100,还包括:状态控件显示模块1105。
状态控件显示模块1105，用于显示第一状态控件和第二状态控件；其中，所述第一状态控件用于切换所述虚拟对象的移动状态，所述第二状态控件用于控制所述虚拟对象停止自动移动。
在一个示例性实施例中,如图12所示,所述装置1100,还包括:第三状态触发模块1106。
所述第一状态触发模块1102,用于响应于针对所述第一状态控件的触发操作,控制所述虚拟对象从所述第二移动状态切换为所述第一移动状态。
或者,第三状态触发模块1106,用于响应于针对所述第二状态控件的触发操作,控制所述虚拟对象停止自动移动并持有虚拟道具。
在一个示例性实施例中,所述状态控件显示模块1105用于响应于针对所述第一状态控件或所述第二状态控件的触发操作,隐藏显示所述第一状态控件和所述第二状态控件。
在一个示例性实施例中,所述第一状态触发模块1102,还用于响应于所述操作控件从所述操作控件所在位置滑动至所述识别区域,若所述虚拟对象持有虚拟道具,则执行所述控制所述虚拟对象以第一移动状态在虚拟场景中自动移动的步骤。
所述第二状态触发模块1103,还用于响应于所述操作控件从所述操作控件所在位置滑动至所述识别区域,若所述虚拟对象未持有虚拟道具,则执行所述控制所述虚拟对象以第二移动状态在所述虚拟场景中自动移动的步骤。
在一个示例性实施例中,所述第一移动状态下的所述虚拟对象持有虚拟道具,在所述虚拟场景中自动移动;所述第二移动状态下的所述虚拟对象收起所述虚拟道具,在所述虚拟场景中自动移动。
在一个示例性实施例中,如图12所示,所述装置1100,还包括:移动速度确定模块1107。
移动速度确定模块1107,用于在所述停留时长大于或等于0,且小于所述阈值的情况下,确定所述虚拟对象的移动速度为第一移动速度。
所述移动速度确定模块1107,还用于在所述停留时长大于或等于所述阈值的情况下,确定所述虚拟对象的移动速度为第二移动速度,所述第二移动速度为所述停留时长和设定比例之间的乘积,与所述第一移动速度的和值。
在一个示例性实施例中,所述第二状态触发模块1103,还用于在所述虚拟对象处于所述第一移动状态的情况下,响应于针对设定区域的快速点击操作,控制所述虚拟对象以所述第二移动状态在所述虚拟场景中自动移动。
在一个示例性实施例中,所述第二状态触发模块1103,还用于:
根据所述快速点击操作的属性信息,确定所述第二移动状态对应的移动速度;
控制所述虚拟对象以所述第二移动状态在所述虚拟场景中自动移动。
综上所述,本申请实施例提供的技术方案,通过在虚拟对象处于第一移动状态的情况下,支持针对操作控件在识别区域的停留时长达到阈值,即可触发虚拟对象以第二移动状态在虚拟场景中自动移动,实现通过单指针对识别区域进行操作,即可连贯地进行第一移动状态和第二移动状态的触发,相比于相关技术中通过双手组合或多指组合完成第二移动状态的触发,降低了第二移动状态的触发操作繁琐程度,以及提高了触发操作的连贯性和便捷性,从而提高了操作效率。
需要说明的是,上述实施例提供的装置,在实现其功能时,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即将设备的内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。另外,上述实施例提供的装置与方法实施例属于同一构思,其具体实现过程详见方法实施例,这里不再赘述。
请参考图13,其示出了本申请一个实施例提供的终端设备1300的结构框图。该终端设备用于实施上述实施例中提供的虚拟对象的控制方法。该终端设备可以是图1所示计算机系统中的终端10。具体来讲:
通常,终端设备1300包括有:处理器1301和存储器1302。
可选地，处理器1301可以包括一个或多个处理核心，比如4核心处理器、8核心处理器等。处理器1301可以采用DSP(Digital Signal Processing,数字信号处理)、FPGA(Field Programmable Gate Array,现场可编程门阵列)、PLA(Programmable Logic Array,可编程逻辑阵列)中的至少一种硬件形式来实现。处理器1301也可以包括主处理器和协处理器，主处理器是用于对在唤醒状态下的数据进行处理的处理器，也称CPU(Central Processing Unit,中央处理器)；协处理器是用于对在待机状态下的数据进行处理的低功耗处理器。在一些实施例中，处理器1301中可以集成有GPU(Graphics Processing Unit,图像处理器)，GPU用于负责显示屏所需要显示的内容的渲染和绘制。在一些实施例中，处理器1301还可以包括AI(Artificial Intelligence,人工智能)处理器，该AI处理器用于处理有关机器学习的计算操作。
可选地，存储器1302可以包括一个或多个计算机可读存储介质，该计算机可读存储介质可以是非暂态的。存储器1302还可包括高速随机存取存储器，以及非易失性存储器，比如一个或多个磁盘存储设备、闪存存储设备。在一些实施例中，存储器1302中的非暂态的计算机可读存储介质用于存储计算机程序，所述计算机程序经配置以由一个或者一个以上处理器执行，以实现上述虚拟对象的控制方法。
在一些实施例中,终端设备1300还可选包括有:外围设备接口1303和至少一个外围设备。处理器1301、存储器1302和外围设备接口1303之间可以通过总线或信号线相连。各个外围设备可以通过总线、信号线或电路板与外围设备接口1303相连。具体地,外围设备包括:射频电路1304、显示屏1305、音频电路1306和电源1307中的至少一种。
本领域技术人员可以理解,图13中示出的结构并不构成对终端设备1300的限定,可以包括比图示更多或更少的组件,或者组合某些组件,或者采用不同的组件布置。
在一个示例性实施例中，还提供了一种计算机可读存储介质，所述存储介质中存储有计算机程序，所述计算机程序在被处理器执行时实现上述虚拟对象的控制方法。
可选地,该计算机可读存储介质可以包括:ROM(Read-Only Memory,只读存储器)、RAM(Random-Access Memory,随机存储器)、SSD(Solid State Drives,固态硬盘)或光盘等。其中,随机存取记忆体可以包括ReRAM(Resistance Random Access Memory,电阻式随机存取记忆体)和DRAM(Dynamic Random Access Memory,动态随机存取存储器)。
在一个示例性实施例中,还提供了一种计算机程序产品或计算机程序,所述计算机程序产品或计算机程序包括计算机指令,所述计算机指令存储在计算机可读存储介质中。终端设备的处理器从所述计算机可读存储介质中读取所述计算机指令,所述处理器执行所述计算机指令,使得所述终端设备执行上述虚拟对象的控制方法。
需要说明的是,本申请所涉及的信息(包括但不限于对象设备信息、对象个人信息等)、数据(包括但不限于用于分析的数据、存储的数据、展示的数据等)以及信号,均为经对象授权或者经过各方充分授权的,且相关数据的收集、使用和处理需要遵守相关国家和地区的相关法律法规和标准。例如,本申请中涉及到的虚拟对象、操作等都是在充分授权的情况下获取的。

Claims (16)

  1. 一种虚拟对象的控制方法,所述方法由终端执行,所述方法包括:
    显示用于控制虚拟场景中的虚拟对象移动的操作控件;
    响应于将所述操作控件从所述操作控件所在位置滑动至识别区域,控制所述虚拟对象以第一移动状态在所述虚拟场景中自动移动,所述识别区域用于识别滑动操作后的操作控件的位置;
    在所述滑动操作后的所述操作控件在所述识别区域的停留时长达到阈值的情况下,控制所述虚拟对象以第二移动状态在所述虚拟场景中自动移动;其中,所述第二移动状态的移动速度大于所述第一移动状态的移动速度。
  2. 根据权利要求1所述的方法,其中,所述方法还包括:
    响应于将所述操作控件从所述操作控件所在位置滑动至所述识别区域,在所述识别区域的附近区域显示动态提示图标,所述动态提示图标用于指示所述停留时长。
  3. 根据权利要求2所述的方法,其中,所述方法还包括:
    在所述滑动操作后的所述操作控件停留在所述识别区域的情况下,显示所述动态提示图标中的元素动态填充的动画;其中,所述动态提示图标中的元素的填充程度与所述停留时长呈正相关关系;
    在所述停留时长达到所述阈值的情况下,显示所述动态提示图标中的元素填充完成的动画;
    取消显示所述动态提示图标和所述识别区域。
  4. 根据权利要求2所述的方法,其中,所述动态提示图标的一端显示有用于指示所述第一移动状态的第一图标,所述动态提示图标的另一端显示有用于指示所述第二移动状态的第二图标。
  5. 根据权利要求1所述的方法,其中,所述控制所述虚拟对象以第二移动状态在所述虚拟场景中自动移动之后,还包括:
    显示第一状态控件和第二状态控件;
    其中,所述第一状态控件用于切换所述虚拟对象的移动状态,所述第二状态控件用于控制所述虚拟对象停止自动移动。
  6. 根据权利要求5所述的方法,其中,所述方法还包括:
    响应于针对所述第一状态控件的触发操作，控制所述虚拟对象从所述第二移动状态切换为所述第一移动状态；
    或者,响应于针对所述第二状态控件的触发操作,控制所述虚拟对象停止自动移动并持有虚拟道具。
  7. 根据权利要求5所述的方法,其中,所述方法还包括:
    响应于针对所述第一状态控件或所述第二状态控件的触发操作,隐藏显示所述第一状态控件和所述第二状态控件。
  8. 根据权利要求1所述的方法,其中,所述方法还包括:
    响应于所述操作控件从所述操作控件所在位置滑动至所述识别区域,若所述虚拟对象持有虚拟道具,则执行所述控制所述虚拟对象以第一移动状态在虚拟场景中自动移动的步骤;若所述虚拟对象未持有虚拟道具,则执行所述控制所述虚拟对象以第二移动状态在所述虚拟场景中自动移动的步骤。
  9. 根据权利要求1至8任一项所述的方法,其中,所述第一移动状态下的所述虚拟对象持有虚拟道具,在所述虚拟场景中自动移动;所述第二移动状态下的所述虚拟对象收起所述虚拟道具,在所述虚拟场景中自动移动。
  10. 根据权利要求1至8任一项所述的方法,其中,所述方法还包括:
    在所述停留时长大于或等于0,且小于所述阈值的情况下,确定所述虚拟对象的移动速度为第一移动速度;
    在所述停留时长大于或等于所述阈值的情况下,确定所述虚拟对象的移动速度为第二移动速度,所述第二移动速度为所述停留时长和设定比例之间的乘积,与所述第一移动速度的和值。
  11. 根据权利要求1所述的方法,其中,所述方法还包括:
    在所述虚拟对象处于所述第一移动状态的情况下,响应于针对设定区域的快速点击操作,控制所述虚拟对象以所述第二移动状态在所述虚拟场景中自动移动。
  12. 根据权利要求11所述的方法,其中,所述响应于针对设定区域的快速点击操作,控制所述虚拟对象以所述第二移动状态在所述虚拟场景中自动移动,包括:
    根据所述快速点击操作的属性信息,确定所述第二移动状态对应的移动速度;
    控制所述虚拟对象以所述第二移动状态在所述虚拟场景中自动移动。
  13. 一种虚拟对象的控制装置,其中,所述装置包括:
    显示模块,用于显示用于控制虚拟场景中的虚拟对象移动的操作控件;
    第一状态触发模块，用于响应于将所述操作控件从所述操作控件所在位置滑动至识别区域，控制所述虚拟对象以第一移动状态在所述虚拟场景中自动移动，所述识别区域用于识别滑动操作后的操作控件的位置；
    第二状态触发模块,用于在所述滑动操作后的所述操作控件在所述识别区域的停留时长达到阈值的情况下,控制所述虚拟对象以第二移动状态在所述虚拟场景中自动移动;其中,所述第二移动状态的移动速度大于所述第一移动状态的移动速度。
  14. 一种终端设备,其中,所述终端设备包括处理器和存储器,所述存储器中存储有计算机程序,所述计算机程序由所述处理器加载并执行以实现如权利要求1至12任一项所述的虚拟对象的控制方法。
  15. 一种计算机可读存储介质,其中,所述计算机可读存储介质中存储有计算机程序,所述计算机程序由处理器加载并执行以实现如上述权利要求1至12任一项所述的虚拟对象的控制方法。
  16. 一种计算机程序产品,其中,所述计算机程序产品包括计算机指令,所述计算机指令存储在计算机可读存储介质中,处理器从所述计算机可读存储介质读取并执行所述计算机指令,以实现如权利要求1至12任一项所述的虚拟对象的控制方法。
PCT/CN2023/099645 2022-08-18 2023-06-12 虚拟对象的控制方法、装置、终端、存储介质及程序产品 WO2024037154A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210992629.3A CN117618903A (zh) 2022-08-18 2022-08-18 虚拟对象的控制方法、装置、终端、存储介质及程序产品
CN202210992629.3 2022-08-18

Publications (1)

Publication Number Publication Date
WO2024037154A1 true WO2024037154A1 (zh) 2024-02-22

Family

ID=89940575

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/099645 WO2024037154A1 (zh) 2022-08-18 2023-06-12 虚拟对象的控制方法、装置、终端、存储介质及程序产品

Country Status (2)

Country Link
CN (1) CN117618903A (zh)
WO (1) WO2024037154A1 (zh)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110172013A1 (en) * 2010-01-06 2011-07-14 Kabushiki Kaisha Square Enix (Also Trading As Square Enix Co., Ltd.) User interface processing apparatus, method of processing user interface, and program for processing user interface
CN108509139A (zh) * 2018-03-30 2018-09-07 腾讯科技(深圳)有限公司 虚拟对象的移动控制方法、装置、电子装置及存储介质
CN110523085A (zh) * 2019-08-30 2019-12-03 腾讯科技(深圳)有限公司 虚拟对象的控制方法、装置、终端及存储介质
CN113181651A (zh) * 2021-04-06 2021-07-30 网易(杭州)网络有限公司 游戏中控制虚拟对象移动的方法、装置、电子设备及存储介质
CN113908550A (zh) * 2021-10-20 2022-01-11 网易(杭州)网络有限公司 虚拟角色控制方法、非易失性存储介质及电子装置
CN114011063A (zh) * 2021-11-15 2022-02-08 网易(杭州)网络有限公司 游戏中虚拟角色控制方法及电子设备
CN114011062A (zh) * 2021-11-03 2022-02-08 网易(杭州)网络有限公司 信息处理方法、装置、电子设备及存储介质
CN114288659A (zh) * 2021-12-29 2022-04-08 腾讯科技(深圳)有限公司 基于虚拟对象的交互方法、装置、设备、介质及程序产品
CN114522423A (zh) * 2022-01-25 2022-05-24 网易(杭州)网络有限公司 虚拟对象的控制方法、装置、存储介质及计算机设备

Also Published As

Publication number Publication date
CN117618903A (zh) 2024-03-01

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23854059

Country of ref document: EP

Kind code of ref document: A1