WO2019153824A1 - Virtual object control method, apparatus, computer device, and storage medium


Info

Publication number
WO2019153824A1
Authority
WO
WIPO (PCT)
Prior art keywords
path
points
instruction
virtual environment
virtual object
Prior art date
Application number
PCT/CN2018/115924
Other languages
English (en)
French (fr)
Inventor
仇蒙
汪俊明
潘佳绮
张雅
张书婷
肖庆华
Original Assignee
腾讯科技(深圳)有限公司
Priority date
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司
Publication of WO2019153824A1
Priority to US16/909,954 (published as US11565181B2)

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/20 Input arrangements for video game devices
    • A63F13/21 Input arrangements for video game devices characterised by their sensors, purposes or types
    • A63F13/214 Input arrangements for locating contacts on a surface, e.g. floor mats or touch pads
    • A63F13/2145 Input arrangements for locating contacts on a surface, the surface being also a display device, e.g. touch screens
    • A63F13/40 Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
    • A63F13/42 Processing input control signals by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
    • A63F13/426 Processing input control signals involving on-screen location information, e.g. screen coordinates of an area at which the player is aiming with a light gun
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/53 Controlling the output signals involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
    • A63F13/537 Controlling the output signals using indicators, e.g. showing the condition of a game character on screen
    • A63F13/5378 Controlling the output signals using indicators for displaying an additional top view, e.g. radar screens or maps
    • A63F13/55 Controlling game characters or game objects based on the game progress
    • A63F13/56 Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for path finding
    • A63F13/80 Special adaptations for executing a specific game genre or game mode
    • A63F13/837 Shooting of targets
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from the processing unit to the output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 Interaction techniques based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04815 Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • G06F3/0487 Interaction techniques using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or tap gestures based on pressure sensed by a digitiser
    • G06F3/0488 Interaction techniques using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/30 Features characterized by output arrangements for receiving control signals generated by the game device
    • A63F2300/303 Output arrangements for displaying additional data, e.g. simulating a Head Up Display
    • A63F2300/306 Output arrangements for displaying a marker associated to an object or location in the game field
    • A63F2300/308 Details of the user interface
    • A63F2300/60 Methods for processing data by generating or executing the game program
    • A63F2300/6045 Methods for mapping control signals received from the input arrangement into game commands
    • A63F2300/80 Features specially adapted for executing a specific type of game
    • A63F2300/8076 Shooting

Definitions

  • the present application relates to the field of computer application technologies, and in particular, to a virtual object control method, apparatus, computer device, and storage medium.
  • In the related art, the passable area in the virtual environment is generally divided into several navigation grids, and a preset algorithm is used to calculate which grids need to be traversed between the grid containing the start point and the grid containing the end point, yielding a grid path list from which the grid path is obtained.
  • Generally, the A-star algorithm is used to calculate the list of navigation grid paths that need to be traversed; the list of path points to be used is then calculated according to the navigation grid path list, the path points in the list are connected to obtain a pathfinding path, and the virtual object is finally controlled to move automatically along the pathfinding path.
  • In the above solution, the pathfinding path between the start point and the end point is calculated by a fixed algorithm: as long as the start point and the end point are determined, the calculated pathfinding path is also fixed, and the moving path of the virtual object is relatively simple, resulting in a poor effect when automatically controlling the movement of the virtual object.
  • The embodiments of the present application provide a virtual object control method, apparatus, computer device, and storage medium, which can solve the problem in the related art that the calculated pathfinding path is fixed and the moving path of the virtual object is relatively simple, resulting in a poor effect when automatically controlling the movement of the virtual object. The technical solutions are as follows:
  • a virtual object control method is provided, the method being performed by a first terminal, the method comprising:
  • the moving path is a path generated according to an operation trajectory of a drawing operation performed in a path drawing interface, where the path drawing interface includes a map of a virtual environment, and the drawing operation is a sliding drawing operation performed on a map of the virtual environment;
  • a virtual object control method is provided, the method being performed by a terminal, the method comprising:
  • the drawing operation is a sliding drawing operation performed on a map of the virtual environment
  • the moving path is a path generated according to an operation trajectory of the drawing operation
  • a virtual object control method is provided, the method being performed by a terminal, the method comprising:
  • the moving path selection interface including at least one alternative path, the alternative path being a path generated according to an operation trajectory of a drawing operation performed in the path drawing interface, where the path drawing interface includes a map of the virtual environment, and the drawing operation is a sliding drawing operation performed on a map of the virtual environment;
  • controlling the virtual object to move along the moving path in the virtual environment, the moving path being the alternative path corresponding to the path selection operation.
  • a virtual object control apparatus comprising:
  • An instruction acquisition module configured to acquire a pathfinding instruction
  • a path obtaining module configured to acquire a moving path according to the pathfinding instruction, where the moving path is a path generated according to an operation trajectory of a drawing operation performed in a path drawing interface, the path drawing interface includes a map of a virtual environment, and the drawing operation is a sliding drawing operation performed on the map of the virtual environment; and
  • a control module configured to control the virtual object to move along the moving path in the virtual environment.
  • a computer device comprising a processor and a memory, the memory storing at least one instruction, at least one program, a code set, or an instruction set that is loaded and executed by the processor to implement the virtual object control method described above.
  • a computer readable storage medium having stored therein at least one instruction, at least one program, a code set, or an instruction set that is loaded and executed by a processor to implement the virtual object control method described above.
  • Through the above solution, the terminal may acquire the moving path according to the pathfinding instruction and control the virtual object to move along the moving path in the virtual environment.
  • The moving path acquired by the terminal is a path generated according to the operation trajectory of a sliding drawing operation performed by the user in a path drawing interface that includes a map of the virtual environment; that is, the user can perform a sliding drawing operation on the map of the virtual environment and use the sliding track to indicate the moving path that the virtual object subsequently follows automatically. The moving path can be flexibly set by the user according to actual needs, so that the moving paths of the automatic pathfinding for controlling the virtual object are more diversified.
  • FIG. 1 is a schematic structural diagram of a terminal provided by an exemplary embodiment of the present application.
  • FIG. 2 is a schematic diagram of a display interface of a virtual environment provided by an exemplary embodiment of the present application
  • FIG. 3 is a schematic diagram of a virtual object control flow provided by an exemplary embodiment of the present application.
  • FIG. 4 is a flowchart of a virtual object control method provided by an exemplary embodiment of the present application.
  • FIG. 5 is a schematic diagram of a display path drawing interface according to the embodiment shown in FIG. 4;
  • FIG. 6 is a schematic diagram of a drawing operation involved in the embodiment shown in FIG. 4;
  • FIG. 7 is a schematic diagram of a mutual capacitance touch screen according to the embodiment shown in FIG. 4;
  • FIG. 8 is a schematic diagram of determining an operation trajectory according to a touch event according to the embodiment shown in FIG. 4;
  • FIG. 9 is a schematic diagram of operation point sampling according to the embodiment shown in FIG. 4;
  • FIG. 10 is a schematic diagram of position point adjustment according to the embodiment shown in FIG. 4;
  • FIG. 11 is a schematic flow chart of an automatic path finding shown in an exemplary embodiment of the present application.
  • FIG. 12 is a block diagram of an execution module of an automatic path finding according to an exemplary embodiment of the present application.
  • FIG. 13 is a flowchart of a virtual object control method according to an exemplary embodiment of the present application.
  • FIG. 14 is a schematic diagram of operations for selecting a moving path according to an exemplary embodiment of the present application.
  • FIG. 15 is a structural block diagram of a virtual object control apparatus according to an exemplary embodiment of the present application.
  • FIG. 16 is a structural block diagram of a computer device according to an exemplary embodiment of the present application.
  • Virtual environment: a virtual environment that is displayed (or provided) when an application runs on a terminal.
  • The virtual environment can be a simulation environment of the real world, a semi-simulated and semi-fictional three-dimensional environment, or a purely fictional three-dimensional environment.
  • The virtual environment may be any one of a two-dimensional virtual environment, a 2.5-dimensional virtual environment, and a three-dimensional virtual environment.
  • The following embodiments are exemplified with the virtual environment being a three-dimensional virtual environment, but this is not limiting.
  • The virtual environment is also used for battles between at least two virtual characters.
  • The virtual environment is further used for battles fought with virtual firearms between at least two virtual characters.
  • The virtual environment is further used for battles fought with virtual firearms between at least two virtual characters within a target area, and the target area continuously shrinks as time passes in the virtual environment.
  • Virtual object: a movable object in a virtual environment.
  • The movable object may be at least one of a virtual character, a virtual creature, and an anime character.
  • When the virtual environment is a three-dimensional virtual environment, the virtual object is a three-dimensional model created based on skeletal animation technology. Each virtual object has its own shape and volume in the three-dimensional virtual environment and occupies a part of the space in the three-dimensional virtual environment.
  • A virtual environment is typically displayed by an application running on a computer device such as a terminal, based on hardware (such as a screen) in the terminal.
  • The terminal may be a mobile terminal such as a smart phone, a tablet computer, or an e-book reader; alternatively, the terminal may be a personal computer device such as a notebook computer or a desktop computer.
  • FIG. 1 is a schematic structural diagram of a terminal provided by an exemplary embodiment of the present application.
  • the terminal includes a main board 110, an external output/input device 120, a memory 130, an external interface 140, a capacitive touch system 150, and a power supply 160.
  • Processing elements such as a processor and a controller are integrated in the main board 110.
  • the external output/input device 120 may include a display component such as a display screen, a sound playback component such as a speaker, a sound collection component such as a microphone, and various types of keys and the like.
  • Program code and data are stored in the memory 130.
  • the external interface 140 can include a headphone interface, a charging interface, a data interface, and the like.
  • the capacitive touch system 150 can be integrated in a display component or button of the external output/input device 120 for detecting a touch operation performed by the user on the display component or the button.
  • Power source 160 is used to power other various components in the terminal.
  • the processor in the main board 110 can generate a virtual environment by executing or calling program code and data stored in the memory, and display the generated virtual environment through the external output/input device 120.
  • the capacitive touch system 150 can detect the touch operation performed when the user interacts with the virtual environment.
  • the virtual environment may be a three-dimensional virtual environment, or the virtual environment may also be a two-dimensional virtual environment.
  • the virtual environment is a three-dimensional virtual environment.
  • Referring to FIG. 2, a schematic diagram of a display interface of a virtual environment provided by an exemplary embodiment of the present application is shown.
  • As shown in FIG. 2, the display interface 200 of the virtual environment includes a virtual object 210, an environment picture 220 of the three-dimensional virtual environment, and at least one set of virtual control buttons 230, where the virtual control buttons 230 are optional control elements through which the user can manipulate the virtual object 210.
  • The virtual object 210 is a three-dimensional model in the three-dimensional virtual environment, and the environment picture of the three-dimensional virtual environment displayed in the display interface 200 consists of the objects observed from the angle of view of the virtual object 210; for example, as shown in FIG. 2, the displayed environment picture 220 of the three-dimensional virtual environment includes the ground 224, the sky 225, the horizon 223, the hill 221, and the factory 222.
  • The virtual object 210 can move in real time under the control of the user. The virtual control button 230 shown in FIG. 2 is a virtual button for controlling the movement of the virtual object 210; when the user touches the virtual control button 230, the virtual object 210 moves, in the virtual environment, in the direction of the touch point relative to the center of the virtual control button 230.
  • In addition, the virtual object in the virtual environment can also move automatically along a pre-planned moving path. The user can plan the moving path through a path planning operation; after that, the user does not need to touch the virtual control button 230 again, and the virtual object 210 moves by itself along the planned moving path in the virtual environment.
  • FIG. 3 shows a schematic diagram of a virtual object control flow provided by an exemplary embodiment of the present application.
  • the terminal running the application corresponding to the virtual environment may control the virtual object to move along the planned moving path by performing the following steps.
  • Step 31: Obtain a pathfinding instruction.
  • Step 32: Acquire a moving path according to the pathfinding instruction, the moving path being a path generated according to the operation trajectory of a drawing operation performed in the path drawing interface.
  • The path drawing interface includes a map of the virtual environment, and the drawing operation is a sliding drawing operation performed on the map of the virtual environment.
  • The specific form of the sliding drawing operation may differ according to the operation mode the user uses for the virtual object.
  • For example, the sliding drawing operation may be a touch sliding operation performed by the user in the map area of the virtual environment.
  • Alternatively, the sliding drawing operation may be an operation in which the user presses and holds a mouse button and, while keeping the button pressed, moves the cursor in the map of the virtual environment through the mouse.
  • Alternatively, the sliding drawing operation may be an operation of moving the cursor in the map of the virtual environment through the joystick of a gamepad after the user presses a button on the gamepad.
  • Step 33: Control the virtual object to move along the moving path in the virtual environment.
  • While displaying the picture of the virtual environment, the terminal can receive a movement control operation performed by the user through a physical keyboard key, a gamepad joystick, or a virtual button in the touch screen, and control the virtual object to move in the virtual environment according to the received movement control operation.
  • A path drawing interface including the map of the virtual environment may further be displayed on the upper layer of the picture of the virtual environment; the sliding operation track of a sliding drawing operation performed by the user in the map of the virtual environment can correspondingly generate a pathfinding instruction indicating the moving path, and the terminal can then automatically control the virtual object to move along the moving path indicated by the pathfinding instruction.
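  • As an illustration of steps 31 to 33, the following TypeScript sketch shows the overall control flow under stated assumptions: the names PathfindingInstruction, generateMovePath, and moveAlongPath are hypothetical and not taken from the patent, and the path generation is reduced to a pass-through for brevity.

    // Minimal sketch of the flow in steps 31-33 (illustrative names only).
    interface Point { x: number; y: number; }

    interface PathfindingInstruction {
      // sampled operation points of the sliding drawing operation on the map
      operationPoints: Point[];
    }

    interface VirtualObject { position: Point; }

    // Step 32: turn the instruction's operation points into a moving path.
    // A real implementation would map the points into the virtual environment
    // and run a path generation algorithm over the passable area.
    function generateMovePath(instruction: PathfindingInstruction): Point[] {
      return instruction.operationPoints.slice();
    }

    // Step 33: move the virtual object along the path, waypoint by waypoint.
    function moveAlongPath(obj: VirtualObject, path: Point[]): void {
      for (const waypoint of path) {
        obj.position = waypoint; // a real game would interpolate per frame
      }
    }

    // Step 31: a pathfinding instruction arrives (e.g., from the drawing UI).
    const instruction: PathfindingInstruction = {
      operationPoints: [{ x: 0, y: 0 }, { x: 5, y: 2 }, { x: 9, y: 7 }],
    };
    const avatar: VirtualObject = { position: { x: 0, y: 0 } };
    moveAlongPath(avatar, generateMovePath(instruction));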
  • In the above solution, when the application runs in the terminal and the virtual environment is generated and displayed, if the terminal receives a pathfinding instruction, the terminal can acquire the moving path according to the pathfinding instruction and control the virtual object to move along the moving path in the virtual environment.
  • The moving path acquired by the terminal is a path generated according to the sliding trajectory of the sliding drawing operation performed by the user in the map displayed by the path drawing interface; that is, through the solution shown in FIG. 3, the user can perform a sliding operation on the map displayed by the path drawing interface and use the sliding track to indicate the moving path that the virtual object subsequently follows automatically.
  • The moving path can be flexibly set by the user according to actual needs, so that the moving paths of the automatic pathfinding for controlling the virtual object are more diverse, thereby improving the control effect on the virtual objects in the virtual environment.
  • In a possible implementation, the user may perform the foregoing drawing operation in the path drawing interface displayed by the current terminal, so that the current terminal controls the virtual object in the virtual environment displayed by the current terminal to move along the moving path corresponding to the operation track of the drawing operation.
  • FIG. 4 is a flowchart of a virtual object control method provided by an exemplary embodiment of the present application, which may be applied in a first terminal. The following description takes as an example the case where the user performs a drawing operation in the path drawing interface displayed by the first terminal, so that the first terminal controls the virtual object in the virtual environment displayed by the first terminal to move along the moving path corresponding to the operation track of the drawing operation.
  • the virtual object control method can include the following steps:
  • Step 401: Acquire a first instruction triggered by a drawing operation, where the first instruction includes the position information of at least three operation points on the operation track of the drawing operation performed by the user in the path drawing interface.
  • The position information of an operation point may be information indicating the location of the corresponding operation point in the path drawing interface.
  • For example, the position information may include the coordinates of the corresponding operation point in a coordinate system corresponding to the path drawing interface.
  • the pathfinding instruction is a first instruction triggered by a drawing operation performed by the user in the path drawing interface displayed by the first terminal.
  • the first terminal may obtain the first instruction by using the following steps 401a to 401d.
  • Step 401a: Display the path drawing interface.
  • When the user needs to independently plan the moving path of the virtual object, the user may perform a predetermined operation on a drawing interface entrance in the virtual environment; after detecting the predetermined operation, the first terminal displays the path drawing interface on the upper layer of the display interface of the virtual environment.
  • The path drawing interface may be a map display interface that displays the map of the virtual environment, and the drawing interface entrance may be a thumbnail map.
  • FIG. 5 is a schematic diagram of displaying a path drawing interface according to an embodiment of the present application. As shown in FIG. 5, the upper right corner of the virtual environment 50 contains the thumbnail map 51; after the user performs the predetermined operation on the thumbnail map 51, the first terminal displays the complete map 52 on the upper layer of the virtual environment 50, and the interface where the complete map 52 is located is the path drawing interface described above.
  • Step 401b: Acquire the operation trajectory of the drawing operation performed in the path drawing interface.
  • the user can perform a drawing operation in the path drawing interface.
  • the drawing operation may be a touch sliding operation of the user's finger in the path drawing interface.
  • In a possible implementation, the first terminal may display a trigger control in the path drawing interface; upon receiving a predetermined operation on the trigger control, the first terminal controls the path drawing interface to enter the drawing-accepting state and acquires the operation trajectory of the drawing operation performed in the path drawing interface.
  • That is, after a trigger control is displayed in the path drawing interface, if the trigger control has not received the predetermined operation, the path drawing interface does not enter the drawing-accepting state; at this time, the first terminal does not detect operations performed in the path drawing interface, or the first terminal discards drawing operations performed in the path drawing interface, so as to avoid user misoperation.
  • When the user performs the predetermined operation (such as a click operation) on the trigger control, the path drawing interface enters the drawing-accepting state; while the path drawing interface is in this state, the first terminal detects the drawing operation performed in the path drawing interface.
  • FIG. 6 is a schematic diagram of a drawing operation involved in an embodiment of the present application. As shown in FIG. 6, a trigger control 52a is displayed in the top left corner of the complete map 52. After the user clicks the trigger control 52a, the first terminal controls the complete map 52 to enter the drawing-accepting state and may display a special mark, such as a bold border, a highlighted border, or a text prompt. The user can then perform a touch sliding operation in the complete map 52, and the first terminal can acquire the operation track 53 of the touch sliding operation (that is, the touch sliding track of the touch sliding operation).
  • The first terminal can obtain the operation track of the touch sliding operation at both the hardware level and the program level; the principle is as follows:
  • At the hardware level, taking a mutual-capacitance touch screen as an example, the touch screen may include a protective layer, a transparent electrode pattern layer, and a glass substrate.
  • The transparent electrode pattern layer has upper and lower layers of ITO (Indium Tin Oxide) conductive film, between which a large amount of charge is stored. When the user touches a point on the touch screen, part of the charge on the two ITO conductive films at that position is lost and transferred to the human body, generating a weak current at that point.
  • The two ITO conductive films are respectively distributed with electrodes representing the horizontal axis (X-axis) and electrodes representing the vertical axis (Y-axis); superimposed on each other, the two films constitute a precise two-dimensional coordinate system, and the mutual-capacitance touch screen can locate the point of charge loss (that is, the user's touch point) in this two-dimensional coordinate system by detecting the weak current.
  • At the program level, a touch event is triggered in the operating system of the first terminal when the user's finger is placed on the screen, slides on the screen, or leaves the screen. The touch events can be as follows:
  • touchstart event: triggered when a finger starts to touch the screen; even if there is already a finger on the screen, the event is triggered again when another finger touches the screen.
  • touchmove event: triggered continuously as the finger slides across the screen. Calling preventDefault() during this event prevents scrolling.
  • touchend event: triggered when a finger leaves the screen.
  • touchcancel event: triggered when the system stops tracking the touch.
  • Each touch event also contains the following three attributes for tracking touches:
  • touches: an array of Touch objects representing the currently tracked touch operations.
  • targetTouches: an array of Touch objects specific to the event target.
  • changedTouches: an array of Touch objects whose state changed in this event.
  • Each Touch object can contain the following attributes:
  • clientX: the x coordinate of the touch target in the viewport.
  • clientY: the y coordinate of the touch target in the viewport.
  • identifier: the unique ID identifying the touch.
  • pageX: the x coordinate of the touch target in the page.
  • pageY: the y coordinate of the touch target in the page.
  • screenX: the x coordinate of the touch target on the screen.
  • screenY: the y coordinate of the touch target on the screen.
  • target: the touched DOM (Document Object Model) node target.
  • Based on the touch events obtained at the program level above, the application in the first terminal can acquire the operation trajectory of the drawing operation performed in the path drawing interface.
  • FIG. 8 illustrates a schematic diagram of determining an operation trajectory according to touch events according to an embodiment of the present application. As shown in FIG. 8, the first terminal may acquire the operation trajectory of the drawing operation performed in the path drawing interface according to the coordinates corresponding to the touch start event, the touch end event, and the touch move events between the touch start event and the touch end event.
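  • The following is a minimal sketch, assuming a browser-style DOM environment, of how an operation trajectory could be reconstructed from the touch events listed above; the element id "virtual-environment-map" is a hypothetical placeholder for the map of the virtual environment.

    // Reconstructing the operation trajectory from touchstart/touchmove/touchend.
    const trajectory: { x: number; y: number }[] = [];
    const mapElement = document.getElementById("virtual-environment-map")!;

    mapElement.addEventListener("touchstart", (e: TouchEvent) => {
      trajectory.length = 0; // a new drawing operation begins
      const t = e.changedTouches[0];
      trajectory.push({ x: t.clientX, y: t.clientY });
    });

    mapElement.addEventListener("touchmove", (e: TouchEvent) => {
      e.preventDefault(); // prevent scrolling while drawing, as noted above
      const t = e.changedTouches[0];
      trajectory.push({ x: t.clientX, y: t.clientY });
    });

    mapElement.addEventListener("touchend", (e: TouchEvent) => {
      const t = e.changedTouches[0];
      trajectory.push({ x: t.clientX, y: t.clientY });
      // `trajectory` now holds the operation track of the drawing operation
    });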
  • Step 401c: Obtain the position information of at least three operation points from the operation track according to a preset sampling rule, where the at least three operation points include the start point, the end point, and at least one intermediate point of the operation track.
  • The operation track is composed of the position information of a number of operation points, and the first terminal samples at least three operation points from the operation points constituting the operation track, where the at least three operation points include the start point, the end point, and at least one intermediate point of the operation trajectory.
  • The at least three operation points collected above need to be able to restore the approximate contour of the operation trajectory; that is, the at least three operation points need to meet a certain quantity requirement.
  • In a possible implementation, the first terminal may sample the above at least three operation points from the several operation points at a fixed sampling rate. For example, taking a sampling rate of 1/20 as an example, the first terminal may first collect the start point and the end point of the operation track, sample one operation point from every 20 of the operation points other than the start point and the end point, and finally use the start point and the end point of the operation track together with the operation points sampled from every 20 operation points as the at least three operation points.
  • When sampling at the fixed sampling rate, the operation points other than the start point and the end point may be arranged in their order in the operation track, and samples taken at the fixed sampling rate from the arranged operation points.
  • In one possible sampling mode, the first terminal may use the operation point at a predetermined position among every 20 arranged operation points as the sampled operation point; for example, the first terminal may use the last operation point of every 20 operation points as the sampled operation point. Alternatively, in another possible sampling mode, the first terminal may take a random operation point among every 20 arranged operation points as the sampled operation point.
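  • A minimal sketch of the fixed-rate sampling described above (sampling rate 1/20, taking the last operation point of every 20 intermediate points); the function name is illustrative.

    // Keep the start point, the end point, and one point per group of 20.
    function sampleFixedRate<T>(points: T[], groupSize = 20): T[] {
      if (points.length <= 2) return points.slice();
      const middle = points.slice(1, points.length - 1);
      const sampled: T[] = [points[0]]; // start point
      for (let i = groupSize - 1; i < middle.length; i += groupSize) {
        sampled.push(middle[i]); // predetermined position: last of each group
      }
      sampled.push(points[points.length - 1]); // end point
      return sampled;
    }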
  • In another possible implementation, the first terminal may divide the operation trajectory into at least two trajectory segments, obtain the respective curvatures of the at least two trajectory segments, obtain the sampling rates corresponding to the at least two trajectory segments according to their respective curvatures, and sample the at least two trajectory segments according to their respective sampling rates to obtain the position information of the at least three operation points.
  • The smaller the curvature of the operation track, the fewer operation points are required to restore the operation track; for example, when an operation track is a straight line, only two operation points are needed to restore it. Correspondingly, the larger the curvature of the operation track, the more operation points are required to restore it.
  • In most cases, the operation trajectory of the user's drawing operation is not a straight line but is complicated and variable; in order to generate a moving path that matches the operation trajectory using as few operation points as possible, the operation trajectory may be divided into at least two trajectory segments, and the corresponding sampling rate determined according to the curvature of each trajectory segment.
  • The curvature of a trajectory segment represents the smoothness of the trajectory segment: the smaller the curvature, the smoother the segment. The curvature of a trajectory segment is proportional to the sampling rate corresponding to that segment; that is, the larger the curvature of the trajectory segment, the higher the corresponding sampling rate.
  • FIG. 9 illustrates a schematic diagram of operation point sampling according to an embodiment of the present application. As shown in FIG. 9, the first terminal may divide the operation track 90 into a track segment 91, a track segment 92, a track segment 93, and a track segment 94 (divided by the segmentation marks in FIG. 9, where the segmentation marks are introduced only for ease of understanding). The curvatures of the track segment 91 and the track segment 94 are small (close to 0), and the corresponding sampling rate is low: the first terminal samples 3 operation points in each of the track segment 91 and the track segment 94. The curvatures of the track segment 92 and the track segment 93 are larger, and the corresponding sampling rate is high: the first terminal samples 8 operation points in each of the track segment 92 and the track segment 93.
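  • The following sketch illustrates curvature-adaptive sampling under stated assumptions: the segment count, the curvature estimate (mean turning angle), and the two sampling steps are illustrative choices echoing FIG. 9, not values prescribed by the patent; a non-empty trajectory is assumed.

    type Pt = { x: number; y: number };

    // Turning angle at point b of the polyline a -> b -> c.
    function turnAngle(a: Pt, b: Pt, c: Pt): number {
      const a1 = Math.atan2(b.y - a.y, b.x - a.x);
      const a2 = Math.atan2(c.y - b.y, c.x - b.x);
      const d = Math.abs(a2 - a1);
      return d > Math.PI ? 2 * Math.PI - d : d;
    }

    // Mean turning angle as a simple stand-in for segment curvature.
    function segmentCurvature(seg: Pt[]): number {
      let total = 0;
      for (let i = 1; i + 1 < seg.length; i++) {
        total += turnAngle(seg[i - 1], seg[i], seg[i + 1]);
      }
      return seg.length > 2 ? total / (seg.length - 2) : 0;
    }

    // Higher-curvature segments are sampled at a higher rate (smaller step).
    function adaptiveSample(points: Pt[], segments = 4): Pt[] {
      const segLen = Math.ceil(points.length / segments);
      const out: Pt[] = [points[0]];
      for (let s = 0; s < segments; s++) {
        const seg = points.slice(s * segLen, (s + 1) * segLen);
        if (seg.length === 0) continue;
        const step = segmentCurvature(seg) > 0.2 ? 3 : 8; // illustrative rates
        for (let i = step - 1; i < seg.length; i += step) out.push(seg[i]);
      }
      out.push(points[points.length - 1]);
      return out;
    }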
  • Step 401d: Generate the first instruction including the position information of the at least three operation points on the operation track of the drawing operation.
  • After the position information of the at least three operation points is obtained by sampling, the first instruction including the sampled position information may be generated to trigger subsequent path generation.
  • Step 402: Generate the moving path according to the position information of the at least three operation points on the operation track of the drawing operation.
  • The first terminal may generate the path for controlling the movement of the virtual object according to the position information of the operation points included in the first instruction and the location points in the virtual environment corresponding to the operation points.
  • In a possible implementation, the first terminal acquires at least three location points in the virtual environment respectively corresponding to the position information of the at least three operation points, generates, according to the passable positions in the virtual environment, a sub-path between every two adjacent location points by using a preset path generation algorithm, and splices the sub-paths between every two adjacent ones of the at least three location points into the moving path.
  • That is, the first terminal may determine, according to the position information of the at least three operation points included in the first instruction, the location point in the virtual environment corresponding to each of the at least three operation points, obtaining the same number of (at least three) location points, determine the sub-path between every two adjacent location points according to the passable positions in the virtual environment, and then take the complete path composed of the sub-paths between every two adjacent location points as the final moving path.
  • When generating the sub-path between every two adjacent location points, the first terminal may use a preset path generation algorithm; for example, the first terminal may generate the sub-path between every two adjacent location points through the A-star algorithm.
  • The A-star algorithm, also known as the A* search algorithm, is an algorithm for finding the lowest-cost path through multiple nodes on a graph plane; it is often used for movement calculation of the player character (PC) in games, or for online movement calculation of mobile robots.
  • The A-star algorithm divides the pathfinding area into multiple connected polygon meshes (such as triangles). Each polygon mesh is a pathfinding node, and navigation-mesh pathfinding from the start point to the target point determines which grid paths need to be traversed from the grid containing the start point to the grid containing the target point.
  • The A-star algorithm can calculate the list of navigation mesh paths that need to be traversed from the start point to the target point; after this list is obtained, the list of path points to pass through is calculated, and the path points are connected to obtain the finally determined pathfinding path. With every two adjacent location points as the start point and the target point, the pathfinding path obtained by the A-star algorithm is the sub-path between those two adjacent location points.
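  • The sketch below shows the A-star search itself under simplifying assumptions: the patent applies A* over navigation meshes, while this illustration uses a square grid in which `walkable` marks the passable area. The moving path would then be the concatenation of such sub-paths computed for every pair of adjacent location points.

    type Cell = { x: number; y: number };

    function aStar(walkable: boolean[][], start: Cell, goal: Cell): Cell[] | null {
      const key = (c: Cell) => `${c.x},${c.y}`;
      // Manhattan-distance heuristic toward the target point.
      const h = (c: Cell) => Math.abs(c.x - goal.x) + Math.abs(c.y - goal.y);
      const open: Cell[] = [start];
      const g = new Map<string, number>([[key(start), 0]]);
      const cameFrom = new Map<string, Cell>();

      while (open.length > 0) {
        // Pick the open cell with the lowest f = g + h (linear scan for brevity).
        open.sort((a, b) => (g.get(key(a))! + h(a)) - (g.get(key(b))! + h(b)));
        const cur = open.shift()!;
        if (cur.x === goal.x && cur.y === goal.y) {
          const path: Cell[] = [cur];
          let c = cur;
          while (cameFrom.has(key(c))) { c = cameFrom.get(key(c))!; path.unshift(c); }
          return path; // the sub-path between two adjacent location points
        }
        for (const [dx, dy] of [[1, 0], [-1, 0], [0, 1], [0, -1]] as const) {
          const nb = { x: cur.x + dx, y: cur.y + dy };
          if (!walkable[nb.y]?.[nb.x]) continue; // off-grid or not passable
          const tentative = g.get(key(cur))! + 1;
          if (tentative < (g.get(key(nb)) ?? Infinity)) {
            g.set(key(nb), tentative);
            cameFrom.set(key(nb), cur);
            if (!open.some(o => o.x === nb.x && o.y === nb.y)) open.push(nb);
          }
        }
      }
      return null; // no passable sub-path between the two points
    }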
  • In a possible implementation, before generating the sub-paths between every two adjacent ones of the at least three location points by using the preset path generation algorithm, the first terminal further detects whether a target location point is a valid location point, where the target location point is any one of the at least three location points and a valid location point is a location point reachable by the virtual object. When the target location point is not a valid location point, the first terminal determines a first valid location point, which is a valid location point outside the at least three location points and closest to the target location point, and determines whether the distance between the target location point and the first valid location point is less than a preset distance threshold. When the distance between the target location point and the first valid location point is less than the preset distance threshold, the target location point is replaced with the first valid location point; when the distance is not less than the preset distance threshold, the target location point is removed from the at least three location points.
  • The preset distance threshold may be set in advance by a developer of the application corresponding to the virtual environment.
  • Among the location points corresponding to the operation points, there may be some location points that cannot be reached by the virtual object in the virtual environment (that is, unreachable location points); for example, some locations in the virtual environment may be in water or on a mountain. When the at least three location points contain unreachable location points, the at least three location points are adjusted: an unreachable location point is replaced with the nearest reachable location point within a preset range, and if there is no reachable location point within the preset range, the unreachable location point is removed directly from the at least three location points.
  • FIG. 10 illustrates a schematic diagram of location point adjustment according to an embodiment of the present application. As shown in part (a) of FIG. 10, the location point 101 and the location point 102 do not belong to the valid location points; there are other valid location points in the preset range area around the location point 101, while there is no valid location point in the preset range area around the location point 102. The first terminal adjusts the location point 101 and the location point 102: as shown in part (b) of FIG. 10, the first terminal replaces the location point 101 with the valid location point 103 nearest to the location point 101, and removes the location point 102.
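  • A minimal sketch of this location point adjustment, assuming hypothetical callbacks isReachable and nearestValid that query the walkable-area data of the virtual environment:

    type P = { x: number; y: number };

    function adjustLocationPoints(
      points: P[],
      isReachable: (p: P) => boolean,   // can the virtual object stand here?
      nearestValid: (p: P) => P | null, // closest reachable point, if any
      distanceThreshold: number,        // the preset distance threshold
    ): P[] {
      const dist = (a: P, b: P) => Math.hypot(a.x - b.x, a.y - b.y);
      const result: P[] = [];
      for (const p of points) {
        if (isReachable(p)) {
          result.push(p); // already a valid location point
          continue;
        }
        const candidate = nearestValid(p);
        if (candidate && dist(p, candidate) < distanceThreshold) {
          result.push(candidate); // replace with the first valid location point
        }
        // otherwise: remove the unreachable point from the list entirely
      }
      return result;
    }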
  • In a possible implementation, before generating the moving path according to the position information of the at least three operation points on the operation track of the drawing operation, the first terminal further acquires the number of valid points among the at least three operation points, where a valid point is an operation point whose corresponding location point in the virtual environment is reachable by the virtual object; when the ratio between the number of valid points and the number of the at least three operation points is higher than a preset value, the first terminal performs the step of generating the moving path according to the position information of the at least three operation points on the operation track of the drawing operation.
  • That is, the first terminal may detect whether the operation trajectory of the user's drawing operation satisfies the condition, and perform the step of generating the moving path only when the operation trajectory satisfies the condition. In a possible implementation, the ratio between the number of valid points among the at least three operation points and the total number of the at least three operation points may be calculated; when the ratio is greater than the preset value, for example greater than 95%, the operation trajectory of the user's drawing operation is considered to satisfy the condition; otherwise, it is considered not to satisfy the condition.
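  • A short sketch of this precondition, with the 95% example above as the default preset value; the names are illustrative.

    // Generate the path only when the share of valid operation points is high enough.
    function trajectorySatisfiesCondition<T>(
      operationPoints: T[],
      isValid: (p: T) => boolean,
      presetValue = 0.95,
    ): boolean {
      const validCount = operationPoints.filter(isValid).length;
      return validCount / operationPoints.length > presetValue;
    }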
  • In another possible implementation, after generating the moving path, the first terminal may calculate the matching degree between the moving path and the operation trajectory of the user's drawing operation, and perform the step of controlling the movement of the virtual object only when the matching degree is higher than a preset matching degree threshold (which may be set by the developer in advance).
  • For example, the first terminal may acquire a line drawing of the moving path and a line drawing of the operation track of the drawing operation, then calculate the similarity between the two line drawings, and take the similarity between the two line drawings as the matching degree between the moving path and the operation trajectory of the user's drawing operation.
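  • The patent leaves the line-drawing similarity measure open; as one hypothetical stand-in, the sketch below resamples both polylines to the same number of points and converts the mean point-to-point distance into a similarity score in [0, 1]. Both inputs are assumed to contain at least two points.

    type V = { x: number; y: number };

    // Linearly resample a polyline to exactly n points.
    function resample(poly: V[], n: number): V[] {
      const out: V[] = [];
      for (let i = 0; i < n; i++) {
        const t = (i / (n - 1)) * (poly.length - 1);
        const j = Math.min(Math.floor(t), poly.length - 2);
        const f = t - j;
        out.push({
          x: poly[j].x + f * (poly[j + 1].x - poly[j].x),
          y: poly[j].y + f * (poly[j + 1].y - poly[j].y),
        });
      }
      return out;
    }

    // 1 means identical shapes; `scale` normalizes distances to the map size.
    function matchingDegree(path: V[], trajectory: V[], scale = 100): number {
      const n = 64;
      const a = resample(path, n);
      const b = resample(trajectory, n);
      let mean = 0;
      for (let i = 0; i < n; i++) {
        mean += Math.hypot(a[i].x - b[i].x, a[i].y - b[i].y) / n;
      }
      return Math.max(0, 1 - mean / scale);
    }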
  • the first terminal further displays the generated moving path in the path drawing interface.
  • Step 403: Control the virtual object to move along the moving path in the virtual environment.
  • After the above moving path is obtained, the user does not need to perform control operations again, and the first terminal can automatically control the virtual object to move along the moving path in the virtual environment.
  • In summary, in the solution shown in this embodiment of the present application, the first terminal acquires the first instruction triggered by the sliding drawing operation performed by the user in the path drawing interface (including the position information of at least three operation points on the operation track of the drawing operation performed by the user in the path drawing interface), generates the moving path according to the position information of the at least three operation points, and controls the virtual object to move along the moving path in the virtual environment. The moving path can be flexibly set by the user according to actual needs, so that the moving paths of the automatic pathfinding for controlling the virtual object are more diverse, thereby improving the control effect on the virtual objects in the virtual environment.
  • In addition, at least three operation points are obtained by sampling the operation trajectory of the sliding drawing operation, and the moving path is generated from the at least three location points in the virtual environment corresponding to the sampled operation points, which simplifies the calculation of generating the moving path and improves the efficiency of moving path generation.
  • Furthermore, when the moving path is generated from the at least three location points in the virtual environment corresponding to the sampled operation points, the location points among the at least three location points that are unreachable by the virtual object are adjusted, which increases the success rate of generating the moving path.
  • That is, when controlling the movement of the virtual object, the first terminal may display the path drawing interface; when the drawing operation performed in the path drawing interface is acquired, the moving path, which is a path generated according to the operation trajectory of the drawing operation, is displayed in the path drawing interface, and the virtual object is controlled to move along the moving path in the virtual environment.
  • Taking a game scene as an example, FIG. 11 is a schematic flowchart of automatic pathfinding according to an embodiment of the present application. As shown in FIG. 11, the user can plan a path through a sliding touch operation in the map area of the game; the processor of the terminal collects the touch events corresponding to the sliding touch operation and obtains a pathfinding instruction according to the user's sliding track on the game map, where the pathfinding instruction includes the position information of a start point, an end point, and detailed path points (that is, at least one intermediate point). The terminal then judges whether the position information indicated by the pathfinding instruction (that is, the sliding trajectory path) satisfies the condition. If the condition is satisfied, the terminal determines the pathfinding path of the game character on the game scene map (that is, the above moving path) according to the position information indicated by the pathfinding instruction and the in-game walkable area data, and controls the game character to automatically complete the pathfinding process from the start point of the path (trigger start point) to the end point of the path (trigger end point) according to the pathfinding path; if the position information indicated by the pathfinding instruction does not satisfy the condition, the process ends.
  • FIG. 12 is a structural diagram of the execution modules of automatic pathfinding according to an embodiment of the present application. As shown in FIG. 12, the foregoing process may be performed by an acquisition module, a judgment module, and a pathfinding module in the terminal: the acquisition module performs the step of acquiring the pathfinding instruction; the judgment module performs the step of judging whether the position information indicated by the pathfinding instruction (that is, the sliding trajectory path) satisfies the condition; and the pathfinding module performs the steps of determining the pathfinding path of the game character on the game scene map according to the position information indicated by the pathfinding instruction and the in-game walkable area data, and of controlling the pathfinding process of the game character.
  • In a possible implementation, after the moving path is generated, path indication information may be sent to a second terminal, where the path indication information is used to instruct the second terminal to control the virtual object corresponding to the second terminal to move along the moving path in the virtual scene.
  • That is, the moving path generated by the first terminal may be displayed by the second terminal, and when the user corresponding to the second terminal selects the moving path, the second terminal controls the virtual object in the second terminal to move along the moving path.
  • Correspondingly, the first terminal can also receive path indication information sent by other terminals, and when the user corresponding to the first terminal selects a moving path generated by another terminal, the first terminal can likewise control its virtual object to move along the moving path generated by the other terminal.
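  • A sketch of the path indication information exchanged between terminals; the field names and the selection hook are assumptions for illustration, since the patent only specifies that the message indicates one or more alternative paths.

    interface PathIndicationInfo {
      senderTerminalId: string; // e.g., the first or third terminal
      alternativePaths: {
        pathId: string;
        points: { x: number; y: number }[]; // the generated moving path
      }[];
    }

    // Receiving side: when the user selects one of the indicated paths,
    // the terminal controls its virtual object along the chosen path.
    function onPathIndication(
      info: PathIndicationInfo,
      chosenPathId: string,
      moveAlong: (points: { x: number; y: number }[]) => void,
    ): void {
      const chosen = info.alternativePaths.find(p => p.pathId === chosenPathId);
      if (chosen) moveAlong(chosen.points);
    }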
  • FIG. 13 is a flowchart of a virtual object control method provided by an exemplary embodiment of the present application, which may be applied in a first terminal, taking as an example the case where the first terminal controls the virtual object to move along a moving path generated by another terminal.
  • the virtual object control method may include the following steps:
  • Step 1301: Receive path indication information sent by a third terminal.
  • The path indication information is used to indicate part or all of at least one alternative path, and the alternative path indicated by the path indication information is generated by the third terminal according to the operation trajectory of a drawing operation performed by the user of the third terminal in the path drawing interface displayed by the third terminal.
  • That is, the third terminal may generate moving paths according to the method shown in FIG. 4 above for generating a moving path from the operation trajectory of a drawing operation performed in the path drawing interface, thereby producing part or all of the at least one alternative path.
  • Step 1302: Display a moving path selection interface, where the at least one alternative path is included in the moving path selection interface.
  • When the path indication information is used to indicate only part of the at least one alternative path, the other alternative paths of the at least one alternative path may be alternative paths generated by the first terminal itself according to the method shown in FIG. 4 for generating a moving path from the operation trajectory of a drawing operation performed by the user in the path drawing interface, or may be alternative paths indicated by path indication information sent by terminals other than the third terminal.
  • Step 1303: Acquire a second instruction triggered by a path selection operation performed in the moving path selection interface, where the second instruction is used to indicate the candidate path corresponding to the path selection operation.
  • The user may select one candidate path from the candidate paths generated according to the operation trajectories of drawing operations performed in the path drawing interface. When the first terminal receives the user's path selection operation, it may generate the second instruction indicating the candidate path corresponding to the path selection operation.
  • Step 1304: Acquire the candidate path indicated by the second instruction as the moving path, and control the virtual object to move along the moving path in the virtual environment.
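  • Steps 1301 to 1304 can be pictured as a small state holder on the first terminal. The sketch below is illustrative only: the class and method names are hypothetical, and a real client would render the selection interface rather than return a list of path identifiers:

```python
from typing import Dict, List, Tuple

Point = Tuple[float, float]

class MovingPathSelector:
    """Minimal sketch of steps 1301-1304 on the first terminal."""

    def __init__(self) -> None:
        self.candidates: Dict[str, List[Point]] = {}

    def on_path_indication(self, indication: Dict) -> None:
        # Step 1301: store the candidate paths indicated by the third terminal.
        self.candidates.update(indication["paths"])

    def show_selection_interface(self) -> List[str]:
        # Step 1302: expose every candidate path for the user to pick from.
        return sorted(self.candidates)

    def on_path_selected(self, path_id: str) -> List[Point]:
        # Step 1303: the selection triggers a second instruction naming the path.
        # Step 1304: the named candidate path becomes the moving path to follow.
        return self.candidates[path_id]

selector = MovingPathSelector()
selector.on_path_indication({"paths": {"142a": [(0, 0), (5, 0)],
                                       "142b": [(0, 0), (0, 7)]}})
print(selector.show_selection_interface())  # ['142a', '142b']
print(selector.on_path_selected("142a"))    # moving path chosen by the user
```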
  • In summary, in the solution shown in this embodiment of the present application, a second instruction triggered by a path selection operation performed by the user in the path selection interface is acquired (the second instruction indicates the candidate path corresponding to the path selection operation), the candidate path indicated by the second instruction is acquired as the moving path, and the virtual object is controlled to move along the moving path in the virtual environment. The moving path can be flexibly set by the user according to actual needs, which makes the moving paths available for the automatic pathfinding of the virtual object more diverse, thereby improving the effect of controlling virtual objects in the virtual environment.
  • Through the solution shown in FIG. 13, the first terminal may display a moving path selection interface containing at least one candidate path, where each candidate path is generated according to the operation trajectory of a drawing operation performed by the user in the path drawing interface. When the first terminal acquires a path selection operation performed in the moving path selection interface, it controls the virtual object to move along the moving path in the virtual environment, the moving path being the candidate path corresponding to the path selection operation.
  • The moving path selection interface and the path drawing interface may be the same interface; for example, both may be map display interfaces.
  • FIG. 14 is a schematic diagram of an operation of selecting a moving path according to an exemplary embodiment of the present application. As shown in part (a) of FIG. 14, the upper right corner of the virtual environment 140 displayed by the first terminal contains a thumbnail map 141. After the user taps the thumbnail map 141, as shown in part (b) of FIG. 14, the first terminal displays the complete map 142 on the upper layer of the virtual environment 140; the complete map 142 is the moving path selection interface described above.
  • In addition to the map content, the complete map 142 includes at least one moving path generated and indicated by the third terminal (shown in FIG. 14 as a moving path 142a and a moving path 142b). When the user taps the moving path 142a, the first terminal controls the virtual object to move along the moving path 142a in the virtual environment; correspondingly, when the user taps the moving path 142b, the first terminal controls the virtual object to move along the moving path 142b in the virtual environment.
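  • Resolving which displayed path a tap refers to reduces to a point-to-polyline distance test. The sketch below shows one way to do it under assumed screen coordinates; the eight-pixel tolerance is an invented, finger-sized value:

```python
from typing import List, Sequence, Tuple

Point = Tuple[float, float]

def dist_to_segment(p: Point, a: Point, b: Point) -> float:
    """Distance from a tap point to one segment of a displayed path."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == dy == 0:
        return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    cx, cy = ax + t * dx, ay + t * dy
    return ((px - cx) ** 2 + (py - cy) ** 2) ** 0.5

def hit_test(tap: Point, paths: Sequence[Tuple[str, List[Point]]],
             tolerance: float = 8.0) -> str:
    """Return the id of the candidate path closest to the tap, if any lies
    within the tolerance; an empty string means the tap missed every path."""
    best_id, best_d = "", float("inf")
    for path_id, pts in paths:
        for a, b in zip(pts, pts[1:]):
            d = dist_to_segment(tap, a, b)
            if d < best_d:
                best_id, best_d = path_id, d
    return best_id if best_d <= tolerance else ""

paths = [("142a", [(0.0, 0.0), (50.0, 0.0)]), ("142b", [(0.0, 0.0), (0.0, 70.0)])]
print(hit_test((25.0, 3.0), paths))  # '142a'
```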
  • For example, in an online game scenario in which user a and user b play as a team, user a can plan a moving path by a touch sliding operation in the map area. After user a and user b select this moving path, the terminals corresponding to user a and user b can respectively control the game characters of user a and user b to move along the planned moving path in the game scene. Alternatively, user a can plan two or more candidate paths by touch sliding operations in the map area; after user a and user b each select one candidate path as their respective moving path, the terminals corresponding to user a and user b can respectively control the game characters of user a and user b to move along the respectively selected moving paths in the game scene.
  • This solution will find wide application in games that require frequent running operations, such as competitive survival games. It turns pathfinding that previously could only be triggered by setting a single target position into movement along a user-defined path, which can significantly improve the player's automatic pathfinding experience and provide more fun in the game.
  • FIG. 15 is a block diagram showing the structure of a virtual object control apparatus according to an exemplary embodiment.
  • The virtual object control apparatus can be used in a terminal to perform all or part of the steps of the method shown in any of the embodiments of FIG. 3, FIG. 4, or FIG. 13. The virtual object control apparatus can include:
  • an instruction acquisition module 1501, configured to acquire a pathfinding instruction;
  • a path obtaining module 1502, configured to acquire a moving path according to the pathfinding instruction, where the moving path is a path generated according to an operation trajectory of a drawing operation performed in a path drawing interface, the path drawing interface includes a map of a virtual environment, and the drawing operation is a sliding drawing operation performed on the map of the virtual environment; and
  • a control module 1503, configured to control the virtual object to move along the moving path in the virtual environment. The three modules can be wired together as sketched below.
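  • The following sketch is only a structural illustration of the FIG. 15 module split; the injected callables and their signatures are assumptions made for the example, not part of the application:

```python
from typing import Callable, Dict, List, Tuple

Point = Tuple[float, float]

class VirtualObjectControlApparatus:
    """Three cooperating modules, mirroring modules 1501/1502/1503."""

    def __init__(self,
                 acquire_instruction: Callable[[], Dict],
                 build_path: Callable[[Dict], List[Point]],
                 move_object: Callable[[List[Point]], None]) -> None:
        self.instruction_acquisition = acquire_instruction  # module 1501
        self.path_obtaining = build_path                    # module 1502
        self.control = move_object                          # module 1503

    def run_once(self) -> None:
        instruction = self.instruction_acquisition()    # acquire pathfinding instruction
        moving_path = self.path_obtaining(instruction)  # derive the moving path
        self.control(moving_path)                       # move along it in the environment

apparatus = VirtualObjectControlApparatus(
    acquire_instruction=lambda: {"points": [(0.0, 0.0), (3.0, 4.0)]},
    build_path=lambda ins: ins["points"],
    move_object=lambda path: print("moving along", path),
)
apparatus.run_once()
```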
  • Optionally, the instruction acquisition module 1501 is specifically configured to acquire a first instruction triggered by the drawing operation, where the first instruction includes position information of at least three operation points on the operation trajectory of the drawing operation.
  • The path obtaining module 1502 is specifically configured to generate the moving path according to the position information of the at least three operation points on the operation trajectory of the drawing operation.
  • Optionally, when acquiring the first instruction triggered by the drawing operation, the instruction acquisition module 1501 is specifically configured to: display the path drawing interface; acquire the operation trajectory of the drawing operation performed in the path drawing interface; sample the position information of the at least three operation points from the operation trajectory according to a preset sampling rule, where the at least three operation points include a start point, an end point, and at least one intermediate point of the operation trajectory; and generate the first instruction including the position information of the at least three operation points on the operation trajectory of the drawing operation.
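  • For the fixed-rate branch of the sampling rule, a first instruction might be built as below. The 1/20 rate and the keep-the-last-of-each-group choice follow the example in the embodiments, while the FirstInstruction container itself is hypothetical:

```python
from dataclasses import dataclass
from typing import List, Tuple

Point = Tuple[float, float]

@dataclass
class FirstInstruction:
    """Hypothetical carrier for the first instruction: per the embodiments it
    only needs the position information of the sampled operation points."""
    operation_points: List[Point]  # start point, intermediate points, end point

def build_first_instruction(trajectory: List[Point], rate: int = 20) -> FirstInstruction:
    """Sample the start point, the end point, and the last point out of every
    `rate` interior points (1/20 is the example sampling rate)."""
    if len(trajectory) < 3:
        raise ValueError("a drawing trajectory needs at least three points")
    interior = trajectory[1:-1]
    sampled = [trajectory[0]] + interior[rate - 1::rate] + [trajectory[-1]]
    return FirstInstruction(operation_points=sampled)
```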
  • Optionally, when sampling the position information of the at least three operation points from the operation trajectory according to the preset sampling rule, the instruction acquisition module 1501 is specifically configured to: divide the operation trajectory into at least two trajectory segments; acquire the curvature of each of the at least two trajectory segments; obtain, according to the curvatures of the at least two trajectory segments, the sampling rate corresponding to each of the at least two trajectory segments; and sample the at least two trajectory segments according to their respective sampling rates to obtain the position information of the at least three operation points.
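  • A concrete curvature-to-rate mapping is not specified in the application, so the sketch below improvises one: each segment's total turning angle stands in for its curvature, and twistier segments keep proportionally more points:

```python
import math
from typing import List, Tuple

Point = Tuple[float, float]

def turning(points: List[Point]) -> float:
    """Total turning angle of a polyline segment; straighter segments score
    lower, standing in here for the 'curvature' of a trajectory segment."""
    total = 0.0
    for (x0, y0), (x1, y1), (x2, y2) in zip(points, points[1:], points[2:]):
        a0 = math.atan2(y1 - y0, x1 - x0)
        a1 = math.atan2(y2 - y1, x2 - x1)
        total += abs(math.atan2(math.sin(a1 - a0), math.cos(a1 - a0)))
    return total

def adaptive_sample(trajectory: List[Point], n_segments: int = 4) -> List[Point]:
    """Split the trajectory into segments and sample each at a rate that grows
    with its curvature (the mapping factor 4 is an assumption). Boundary
    points may repeat; deduplication is omitted for brevity."""
    size = max(1, len(trajectory) // n_segments)
    segments = [trajectory[i:i + size + 1] for i in range(0, len(trajectory), size)]
    sampled: List[Point] = [trajectory[0]]
    for seg in segments:
        keep = max(1, min(len(seg), 1 + int(turning(seg) * 4)))  # curvature -> #points
        step = max(1, len(seg) // keep)
        sampled.extend(seg[step - 1::step])
    if sampled[-1] != trajectory[-1]:
        sampled.append(trajectory[-1])
    return sampled
```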
  • Optionally, when generating the moving path according to the position information of the at least three operation points, the path obtaining module 1502 is specifically configured to: acquire at least three position points in the virtual environment that respectively correspond to the position information of the at least three operation points; generate a sub-path between every two adjacent position points of the at least three position points by using a preset path generation algorithm; and splice the sub-paths between every two adjacent position points of the at least three position points into the moving path.
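  • The embodiments name the A* algorithm as one preset path generation algorithm. Below is a compact A* on a 4-connected grid plus the splicing step; representing the walkable area as a set of grid cells is an assumption made for the sketch:

```python
import heapq
from typing import Dict, List, Optional, Set, Tuple

Point = Tuple[int, int]

def a_star(start: Point, goal: Point, walkable: Set[Point]) -> Optional[List[Point]]:
    """Sub-path between two adjacent position points (Manhattan heuristic)."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_heap = [(h(start), 0, start)]
    came_from: Dict[Point, Optional[Point]] = {start: None}
    g: Dict[Point, int] = {start: 0}
    while open_heap:
        _, cost, cur = heapq.heappop(open_heap)
        if cur == goal:
            path: List[Point] = []
            node: Optional[Point] = cur
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        x, y = cur
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt in walkable and cost + 1 < g.get(nxt, 1 << 30):
                g[nxt] = cost + 1
                came_from[nxt] = cur
                heapq.heappush(open_heap, (cost + 1 + h(nxt), cost + 1, nxt))
    return None  # no sub-path between the two position points

def splice_moving_path(position_points: List[Point], walkable: Set[Point]) -> List[Point]:
    """Splice the sub-paths between every two adjacent position points.
    Unreachable pairs are simply skipped here; the embodiments instead adjust
    or remove such points beforehand, as described next."""
    moving_path: List[Point] = position_points[:1]
    for a, b in zip(position_points, position_points[1:]):
        sub = a_star(a, b, walkable) or []
        moving_path.extend(sub[1:])  # drop the duplicated joint point
    return moving_path
```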
  • Optionally, before generating the sub-paths by using the preset path generation algorithm, the path obtaining module 1502 is further configured to: detect whether a target position point is a valid position point, where the target position point is any one of the at least three position points and a valid position point is a position point reachable by the virtual object; when the target position point is not a valid position point, determine a first valid position point, the first valid position point being the valid position point outside the at least three position points that is closest to the target position point; determine whether the distance between the target position point and the first valid position point is less than a preset distance threshold; when the distance is less than the preset distance threshold, replace the target position point with the first valid position point; and when the distance is not less than the preset distance threshold, remove the target position point from the at least three position points.
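  • That adjust-or-remove rule might look like the following sketch. The threshold value is illustrative, and for simplicity the nearest-point search scans the whole valid set rather than only points outside the original at-least-three:

```python
from typing import List, Optional, Set, Tuple

Point = Tuple[int, int]

def nearest_valid(p: Point, valid: Set[Point]) -> Optional[Point]:
    """Closest reachable position point (brute force for clarity)."""
    return min(valid, key=lambda v: (v[0] - p[0]) ** 2 + (v[1] - p[1]) ** 2,
               default=None)

def adjust_position_points(points: List[Point], valid: Set[Point],
                           max_dist: float = 3.0) -> List[Point]:
    """Replace an unreachable point with the nearest valid one when it is
    close enough; otherwise drop the point, as the embodiment describes."""
    adjusted: List[Point] = []
    for p in points:
        if p in valid:
            adjusted.append(p)
            continue
        sub = nearest_valid(p, valid)
        if sub is not None and \
                ((sub[0] - p[0]) ** 2 + (sub[1] - p[1]) ** 2) ** 0.5 <= max_dist:
            adjusted.append(sub)  # replace with the first valid position point
        # else: remove the target position point entirely
    return adjusted
```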
  • Optionally, the apparatus further includes:
  • a quantity obtaining module 1504, configured to acquire the quantity of valid points among the at least three operation points before the path obtaining module 1502 generates the moving path according to the position information of the at least three operation points on the operation trajectory of the drawing operation, where a valid point is an operation point whose corresponding position point in the virtual environment is a position point reachable by the virtual object.
  • The path obtaining module 1502 is specifically configured to perform the step of generating the moving path according to the position information of the at least three operation points when the ratio of the quantity of valid points to the quantity of the at least three operation points is higher than a preset value.
  • Optionally, the apparatus further includes:
  • an information sending module 1505, configured to send path indication information to a second terminal, where the path indication information is used to instruct the second terminal to control the virtual object corresponding to the second terminal to move along the moving path in the virtual scene.
  • Optionally, the instruction acquisition module 1501 is specifically configured to: display a moving path selection interface, where the moving path selection interface includes at least one candidate path; and acquire a second instruction, where the second instruction is an instruction triggered by a path selection operation performed in the moving path selection interface, and the second instruction is used to indicate the candidate path corresponding to the path selection operation.
  • The path obtaining module 1502 is specifically configured to acquire the candidate path indicated by the second instruction as the moving path.
  • Optionally, the apparatus further includes:
  • an information receiving module 1506, configured to receive, before the instruction acquisition module 1501 displays the moving path selection interface, path indication information sent by a third terminal, where the path indication information is used to indicate some or all of the at least one candidate path, and a candidate path indicated by the path indication information is a path generated by the third terminal according to the operation trajectory of a drawing operation performed by the user in the path drawing interface.
  • In summary, with the apparatus provided in this embodiment of the present application, when an application runs in a terminal and generates and displays a virtual environment, if the terminal receives a pathfinding instruction, it can acquire a moving path according to the pathfinding instruction and control the virtual object to move along the moving path in the virtual environment. In this process, the moving path acquired by the terminal is a path generated according to the operation trajectory of a sliding drawing operation performed by the user in a path drawing interface containing a map of the virtual environment. That is, the user can perform a sliding drawing operation on the map of the virtual environment and use the sliding trajectory to indicate the moving path along which the virtual object subsequently performs automatic pathfinding in the virtual environment. The moving path can be flexibly set by the user according to actual needs, which makes the moving paths available for the automatic pathfinding of the virtual object more diverse, thereby improving the effect of controlling virtual objects in the virtual environment.
  • FIG. 16 is a block diagram showing the structure of a computer device 1600, according to an exemplary embodiment.
  • The computer device 1600 can be a user terminal, such as a smartphone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop computer, or a desktop computer.
  • Computer device 1600 may also be referred to as a user device, a portable terminal, a laptop terminal, a desktop terminal, and the like.
  • computer device 1600 includes a processor 1601 and a memory 1602.
  • the processor 1601 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and the like.
  • The processor 1601 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), or a PLA (Programmable Logic Array).
  • The processor 1601 may also include a main processor and a coprocessor. The main processor is a processor for processing data in an awake state, also called a CPU (Central Processing Unit); the coprocessor is a low-power processor for processing data in a standby state.
  • In some embodiments, the processor 1601 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content that the display screen needs to display.
  • In some embodiments, the processor 1601 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
  • Memory 1602 can include one or more computer readable storage media, which can be non-transitory.
  • The memory 1602 can also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices or flash storage devices.
  • The non-transitory computer-readable storage medium in the memory 1602 is configured to store at least one instruction, the at least one instruction being executed by the processor 1601 to implement the virtual object control method provided in the method embodiments of the present application.
  • computer device 1600 can also optionally include a peripheral device interface 1603 and at least one peripheral device.
  • the processor 1601, the memory 1602, and the peripheral device interface 1603 may be connected by a bus or a signal line.
  • Each peripheral device can be connected to the peripheral device interface 1603 via a bus, signal line or circuit board.
  • the peripheral device includes at least one of a radio frequency circuit 1604, a touch display screen 1605, a camera 1606, an audio circuit 1607, a positioning component 1608, and a power source 1609.
  • Peripheral device interface 1603 can be used to connect at least one peripheral device associated with I/O (Input/Output) to processor 1601 and memory 1602.
  • In some embodiments, the processor 1601, the memory 1602, and the peripheral device interface 1603 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1601, the memory 1602, and the peripheral device interface 1603 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
  • the RF circuit 1604 is configured to receive and transmit an RF (Radio Frequency) signal, also referred to as an electromagnetic signal.
  • Radio frequency circuit 1604 communicates with the communication network and other communication devices via electromagnetic signals.
  • the RF circuit 1604 converts the electrical signal into an electromagnetic signal for transmission, or converts the received electromagnetic signal into an electrical signal.
  • the radio frequency circuit 1604 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and the like.
  • the radio frequency circuit 1604 can communicate with other terminals via at least one wireless communication protocol.
  • the wireless communication protocols include, but are not limited to, the World Wide Web, a metropolitan area network, an intranet, generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks.
  • the RF circuit 1604 may also include NFC (Near Field Communication) related circuitry, which is not limited in this application.
  • the display 1605 is used to display a UI (User Interface).
  • the UI can include graphics, text, icons, video, and any combination thereof.
  • When the display 1605 is a touch display, the display 1605 also has the ability to collect touch signals on or above the surface of the display 1605.
  • the touch signal can be input to the processor 1601 as a control signal for processing.
  • the display 1605 can also be used to provide virtual buttons and/or virtual keyboards, also referred to as soft buttons and/or soft keyboards.
  • In some embodiments, there may be one display screen 1605, disposed on the front panel of the computer device 1600; in some other embodiments, there may be at least two display screens 1605, respectively disposed on different surfaces of the computer device 1600 or designed to be foldable; in still other embodiments, the display screen 1605 may be a flexible display screen disposed on a curved or folded surface of the computer device 1600. The display screen 1605 may even be set to a non-rectangular irregular shape, that is, a shaped screen.
  • the display 1605 can be made of a material such as an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode).
  • Camera component 1606 is used to capture images or video.
  • camera assembly 1606 includes a front camera and a rear camera.
  • the front camera is placed on the front panel of the terminal, and the rear camera is placed on the back of the terminal.
  • In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth-of-field camera can be fused to realize a background blurring function, and the main camera and the wide-angle camera can be fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions.
  • camera assembly 1606 can also include a flash.
  • The flash can be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash, and can be used for light compensation at different color temperatures.
  • the audio circuit 1607 can include a microphone and a speaker.
  • The microphone is used to collect sound waves from the user and the environment, and convert the sound waves into electrical signals that are input to the processor 1601 for processing or input to the radio frequency circuit 1604 for voice communication. For the purpose of stereo collection or noise reduction, there may be multiple microphones, respectively disposed at different locations of the computer device 1600.
  • the microphone can also be an array microphone or an omnidirectional acquisition microphone.
  • the speaker is then used to convert electrical signals from the processor 1601 or the RF circuit 1604 into sound waves.
  • The speaker can be a conventional thin-film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, it can convert electrical signals not only into sound waves audible to humans but also into sound waves inaudible to humans for purposes such as ranging.
  • the audio circuit 1607 can also include a headphone jack.
  • the location component 1608 is used to locate the current geographic location of the computer device 1600 to implement navigation or LBS (Location Based Service).
  • The positioning component 1608 can be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, or the Galileo system of the European Union.
  • a power supply 1609 is used to power various components in the computer device 1600.
  • the power source 1609 can be an alternating current, a direct current, a disposable battery, or a rechargeable battery.
  • When the power source 1609 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. A wired rechargeable battery is a battery charged through a wired line, and a wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charging technology.
  • computer device 1600 also includes one or more sensors 1610.
  • the one or more sensors 1610 include, but are not limited to, an acceleration sensor 1611, a gyro sensor 1612, a pressure sensor 1613, a fingerprint sensor 1614, an optical sensor 1615, and a proximity sensor 1616.
  • the acceleration sensor 1611 can detect the magnitude of the acceleration on the three coordinate axes of the coordinate system established by the computer device 1600.
  • the acceleration sensor 1611 can be used to detect components of gravity acceleration on three coordinate axes.
  • the processor 1601 can control the touch display screen 1605 to display the user interface in a landscape view or a portrait view according to the gravity acceleration signal collected by the acceleration sensor 1611.
  • the acceleration sensor 1611 can also be used for the acquisition of game or user motion data.
  • the gyro sensor 1612 can detect the body direction and angle of rotation of the computer device 1600, and the gyro sensor 1612 can cooperate with the acceleration sensor 1611 to collect 3D motions of the user on the computer device 1600. Based on the data collected by the gyro sensor 1612, the processor 1601 can implement functions such as motion sensing (such as changing the UI according to the user's tilting operation), image stabilization at the time of shooting, game control, and inertial navigation.
  • Pressure sensor 1613 can be disposed on a side frame of computer device 1600 and/or a lower layer of touch display 1605.
  • When the pressure sensor 1613 is disposed on the side frame of the computer device 1600, a holding signal applied by the user to the computer device 1600 can be detected, and the processor 1601 performs left/right hand recognition or a shortcut operation according to the holding signal collected by the pressure sensor 1613. When the pressure sensor 1613 is disposed on the lower layer of the touch display screen 1605, the processor 1601 controls the operable controls on the UI according to the user's pressure operation on the touch display screen 1605.
  • the operability control includes at least one of a button control, a scroll bar control, an icon control, and a menu control.
  • The fingerprint sensor 1614 is configured to collect the user's fingerprint, and the processor 1601 identifies the user's identity according to the fingerprint collected by the fingerprint sensor 1614, or the fingerprint sensor 1614 identifies the user's identity according to the collected fingerprint. Upon identifying the user's identity as a trusted identity, the processor 1601 authorizes the user to perform related sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, changing settings, and the like. The fingerprint sensor 1614 may be disposed on the front, back, or side of the computer device 1600. When a physical button or a vendor logo is provided on the computer device 1600, the fingerprint sensor 1614 may be integrated with the physical button or the vendor logo.
  • Optical sensor 1615 is used to collect ambient light intensity.
  • the processor 1601 can control the display brightness of the touch display 1605 based on the ambient light intensity acquired by the optical sensor 1615. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 1605 is raised; when the ambient light intensity is low, the display brightness of the touch display screen 1605 is lowered.
  • the processor 1601 can also dynamically adjust the shooting parameters of the camera assembly 1606 based on the ambient light intensity acquired by the optical sensor 1615.
  • The proximity sensor 1616, also referred to as a distance sensor, is typically disposed on the front panel of the computer device 1600. The proximity sensor 1616 is used to collect the distance between the user and the front of the computer device 1600. In one embodiment, when the proximity sensor 1616 detects that the distance between the user and the front of the computer device 1600 gradually decreases, the processor 1601 controls the touch display screen 1605 to switch from the screen-on state to the screen-off state; when the proximity sensor 1616 detects that the distance between the user and the front of the computer device 1600 gradually increases, the processor 1601 controls the touch display screen 1605 to switch from the screen-off state to the screen-on state.
  • A person skilled in the art can understand that the structure shown in FIG. 16 does not constitute a limitation on the computer device 1600, which may include more or fewer components than shown, combine some of the components, or use a different component arrangement.
  • In an exemplary embodiment, a non-transitory computer-readable storage medium including instructions is further provided, for example, a memory including at least one instruction, at least one program, a code set, or an instruction set. The at least one instruction, at least one program, code set, or instruction set may be executed by a processor to perform all or part of the steps of the method shown in any of the embodiments of FIG. 3, FIG. 4, or FIG. 13. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.

Abstract

A virtual object control method and apparatus, and a computer device, the method comprising: acquiring a pathfinding instruction; acquiring a moving path according to the pathfinding instruction, the moving path being a path generated according to the operation trajectory of a drawing operation performed by a user in a path drawing interface; and controlling a virtual object to move along the moving path in a virtual environment. In this process, the moving path acquired by the terminal is a path generated according to the operation trajectory of the drawing operation performed by the user in the path drawing interface; that is, the user can perform a drawing operation in the path drawing interface and use the operation trajectory to indicate the moving path along which the virtual object subsequently performs automatic pathfinding in the virtual environment. The moving path can be flexibly set by the user according to actual needs, which makes the moving paths available for the automatic pathfinding of the virtual object more diverse, thereby improving the effect of controlling virtual objects in the virtual environment.

Description

虚拟对象控制方法、装置、计算机设备及存储介质
本申请要求于2018年02月09日提交的申请号为2018101327521、发明名称为“虚拟对象控制方法、装置及计算机设备”的中国专利申请的优先权,上述申请的全部内容通过引用结合在本申请中。
技术领域
本申请涉及计算机应用技术领域,特别涉及一种虚拟对象控制方法、装置、计算机设备及存储介质。
背景技术
在很多构建虚拟环境的应用程序(比如虚拟现实应用程序、三维地图程序、军事仿真程序、第一人称射击游戏、多人在线战术竞技游戏等)中,自动寻路是常用的功能之一。
在相关技术中,通常将虚拟环境中的可通过区域划分成若干个导航网格,在自动寻路时,通过预设的算法计算从起点所在的网格到终点所在的网格之间需要经过的网格路径,得到网格路径列表,相关技术中一般采用A星算法计算出需要经过的导航网格路径列表,再根据导航网格路径列表计算通行的路径点列表,将路径点列表中的各个路径点连线,即得到寻路路径,最后自动控制虚拟对象沿着寻路路径移动。
然而,相关技术中按照固定的算法计算起点和终点之间的寻路路径,只要起点和终点确定,则计算出的寻路路径也就固定,虚拟对象的移动路线较为单一,导致自动控制虚拟对象进行移动的效果较差。
发明内容
本申请实施例提供了一种虚拟对象控制方法、装置、计算机设备及存储介质,可以用于解决相关技术中计算出的寻路路径固定,虚拟对象的移动路线较为单一,导致自动控制虚拟对象进行移动的效果较差的问题,技术方案如下:
一方面,提供了一种虚拟对象控制方法,所述方法由第一终端执行,所述方法包括:
获取寻路指令;
根据所述寻路指令获取移动路径,所述移动路径是根据在路径绘制界面中执行的绘制操作的操作轨迹生成的路径,所述路径绘制界面中包含虚拟环境的地图,且所述绘制操作是在所述虚拟环境的地图上执行的滑动绘制操作;
控制虚拟对象在虚拟环境中沿所述移动路径进行移动。
一方面,提供了一种虚拟对象控制方法,所述方法由终端执行,所述方法包括:
展示路径绘制界面,所述路径绘制界面中包含虚拟环境的地图;
获取到在所述路径绘制界面中执行的绘制操作时,在所述路径绘制界面中展示移动路径,所述绘制操作是在所述虚拟环境的地图上执行的滑动绘制操作,所述移动路径是根据所述绘制操作的操作轨迹生成的路径;
控制虚拟对象在虚拟环境中沿所述移动路径进行移动。
一方面,提供了一种虚拟对象控制方法,所述方法由终端执行,所述方法包括:
展示移动路径选择界面,所述移动路径选择界面中包含至少一条备选路径,所述备选路径是根据在路径绘制界面中执行的绘制操作的操作轨迹生成的路径,所述路径绘制界面中包含虚拟环境的地图,且所述绘制操作是在所述虚拟环境的地图上执行的滑动绘制操作;
获取到在所述移动路径选择界面中执行的路径选择操作时,控制虚拟对象在虚拟环境中沿移动路径进行移动;所述移动路径是所述路径选择操作对应的备选路径。
一方面,提供了一种虚拟对象控制装置,所述装置包括:
指令获取模块,用于获取寻路指令;
路径获取模块,用于根据所述寻路指令获取移动路径,所述移动路径是根据在路径绘制界面中执行的绘制操作的操作轨迹生成的路径,所述路径绘制界面中包含虚拟环境的地图,且所述绘制操作是在所述虚拟环境的地图上执行的滑动绘制操作;
控制模块,用于控制虚拟对象在虚拟环境中沿所述移动路径进行移动。
一方面,提供了一种计算机设备,所述计算机设备包含处理器和存储器,所述存储器中存储有至少一条指令、至少一段程序、代码集或指令集,所述至少一条指令、所述至少一段程序、所述代码集或指令集由所述处理器加载并执行以实现上述虚拟对象控制方法。
一方面,提供了一种计算机可读存储介质,所述存储介质中存储有至少一条指令、至少一段程序、代码集或指令集,所述至少一条指令、所述至少一段程序、所述代码集或指令集由处理器加载并执行以实现上述虚拟对象控制方法。
本申请提供的技术方案至少包括以下有益效果:
当应用程序在终端中运行,且生成并展示虚拟环境时,若终端接收到寻路指令,则可以根据该寻路指令获取移动路径,并控制虚拟对象在虚拟环境中沿移动路径进行移动,在此过程中,终端获取到的移动路径是根据用户在包含虚拟环境的地图的路径绘制界面中执行的滑动绘制操作的操作轨迹生成的路径,也就是说,用户可以通过在虚拟环境的地图上执行滑动绘制操作,并以滑动轨迹来指示后续虚拟对象在虚拟环境中自动寻路的移动路径,该移动路径可以由用户按照实际需求灵活设置,使得控制虚拟对象的自动寻路的移动路径更加多样化,从而提高对虚拟环境中的虚拟对象的控制效果。
应当理解的是,以上的一般描述和后文的细节描述仅是示例性和解释性的,并不能限制本申请。
附图说明
此处的附图被并入说明书中并构成本说明书的一部分,示出了符合本申请的实施例,并与说明书一起用于解释本申请的原理。
图1是本申请一个示例性的实施例提供的终端的结构示意图;
图2是本申请一个示例性实施例提供的虚拟环境的显示界面示意图;
图3是本申请一个示例性实施例提供的虚拟对象控制流程的示意;
图4是本申请一个示例性实施例提供的一种虚拟对象控制方法的流程图;
图5是图4所示实施例涉及的一种展示路径绘制界面的示意图;
图6是图4所示实施例涉及的一种绘制操作示意图;
图7是图4所示实施例涉及的一种互电容触控屏原理图;
图8是图4所示实施例涉及的一种根据触摸事件确定操作轨迹的示意图;
图9是图4所示实施例涉及的一种操作点采集示意图;
图10是图4所示实施例涉及的位置点调整示意图;
图11是本申请一个示例性实施例示出的一种自动寻路的流程示意图;
图12是本申请一个示例性实施例示出的一种自动寻路的执行模块架构图;
图13是本申请一个示例性实施例提供的一种虚拟对象控制方法的流程图;
图14是本申请一个示例性实施例提供的选择移动路径的操作示意图;
图15是本申请一示例性实施例提供的虚拟对象控制装置的结构方框图;
图16是本申请一示例性实施例提供的计算机设备的结构框图。
具体实施方式
这里将详细地对示例性实施例进行说明,其示例表示在附图中。下面的描述涉及附图时,除非另有表示,不同附图中的相同数字表示相同或相似的要素。以下示例性实施例中所描述的实施方式并不代表与本申请相一致的所有实施方式。相反,它们仅是与如所附权利要求书中所详述的、本申请的一些方面相一致的装置和方法的例子。
虚拟环境:是应用程序在终端上运行时显示(或提供)的虚拟环境。该虚拟环境可以是对真实世界的仿真环境,也可以是半仿真半虚构的三维环境,还可以是纯虚构的三维环境。虚拟环境可以是二维虚拟环境、2.5维虚拟环境和三维虚拟环境中的任意一种,下述实施例以虚拟环境是三维虚拟环境来举例说明,但对此不加以限定。可选地,该虚拟环境还用于至少两个虚拟角色之间的虚拟环境对战。可选地,该虚拟环境还用于至少两个虚拟角色之间使用虚拟枪械进行对战。可选地,该虚拟环境还用于在目标区域范围内,至少两个虚拟角色之间使用虚拟枪械进行对战,该目标区域范围会随虚拟环境中的时间推移而不断变小。
虚拟对象:是指在虚拟环境中的可活动对象。该可活动对象可以是虚拟人物、虚拟动物、动漫人物中的至少一种。可选地,当虚拟环境为三维虚拟环境时,虚拟对象是基于动画骨骼技术创建的三维立体模型。每个虚拟对象在三维虚拟环境中具有自身的形状和体积,占据三维虚拟环境中的一部分空间。
虚拟环境通常由终端等计算机设备中的应用程序生成基于终端中的硬件(比如屏幕)进行展示。该终端可以是智能手机、平板电脑或者电子书阅读器等移动终端;或者,该终端也可以是笔记本电脑或者固定式计算机的个人计算机设备。
请参考图1,其示出了本申请一个示例性的实施例提供的终端的结构示意图。如图1所示,该终端包括主板110、外部输出/输入设备120、存储器130、 外部接口140、电容触控系统150以及电源160。
其中,主板110中集成有处理器和控制器等处理元件。
外部输出/输入设备120可以包括显示组件(比如显示屏)、声音播放组件(比如扬声器)、声音采集组件(比如麦克风)以及各类按键等。
存储器130中存储有程序代码和数据。
外部接口140可以包括耳机接口、充电接口以及数据接口等。
电容触控系统150可以集成在外部输出/输入设备120的显示组件或者按键中,电容触控系统150用于检测用户在显示组件或者按键上执行的触控操作。
电源160用于对终端中的其它各个部件进行供电。
在本申请实施例中,主板110中的处理器可以通过执行或者调用存储器中存储的程序代码和数据生成虚拟环境,并将生成的虚拟环境通过外部输出/输入设备120进行展示。在展示虚拟环境的过程中,可以通过电容触控系统150检测用户与虚拟环境进行交互时执行的触控操作。
其中,虚拟环境可以是三维的虚拟环境,或者,虚拟环境也可以是二维的虚拟环境。以虚拟环境是三维的虚拟环境为例,请参考图2,其示出了本申请一个示例性的实施例提供的虚拟环境的显示界面示意图。如图1所示,虚拟环境的显示界面200包括虚拟对象210、三维的虚拟环境的环境画面220以及至少一组虚拟控制按钮230,其中,虚拟控制按钮230为可选的控制元素,用户可通过虚拟控制按钮230操控虚拟对象210。
在图2中,虚拟对象210是在三维的虚拟环境中的三维模型,在显示界面200中显示的三维的虚拟环境的环境画面为虚拟对象210的视角所观察到的物体,示例性的,如图2所示,在虚拟对象210的视角观察下,显示的三维虚拟环境的环境画面220为大地224、天空225、地平线223、小山221以及厂房222。
虚拟对象210可以在用户的控制下即时移动,比如,图2示出的虚拟控制按钮230是用于控制虚拟对象210移动的虚拟按钮,用户触控该虚拟控制按钮230时,虚拟对象210可以在虚拟环境中,向触控点相对于虚拟控制按钮230的中心的方向移动。
此外,在本申请中,虚拟环境中的虚拟对象还可以沿着预先规划好的移动路径自动进行移动。比如,以图2所示的虚拟环境为例,用户可以通过路径规划操作规划好移动路径,之后,不需要用户再触控虚拟控制按钮230,虚拟对 象210即可以在虚拟环境中沿着规划好的移动路径自行移动。
请参考图3,其示出了本申请一个示例性的实施例提供的虚拟对象控制流程的示意图。如图3所示,运行上述虚拟环境对应的应用程序的终端,可以通过执行以下步骤来控制虚拟对象沿规划好的移动路径自行移动。
步骤31,获取寻路指令。
步骤32,根据寻路指令获取移动路径,移动路径是根据在路径绘制界面中执行的绘制操作的操作轨迹生成的路径。
其中,该路径绘制界面中包含虚拟环境的地图,且该绘制操作是在该虚拟环境的地图上执行的滑动绘制操作。
其中,根据用户对虚拟对象的操作方式的不同,上述滑动绘制操作的具体操作形式也可以不同。
比如,当用户通过触摸屏中的虚拟按键控制虚拟对象时,上述滑动绘制操作可以是用户在虚拟环境的地图区域执行的触摸滑动操作。
或者,当用户通过键盘加鼠标控制虚拟对象时,上述滑动绘制操作可以是用户点击鼠标中某个按键并保持点击状态,之后在保持点击的状态下通过鼠标移动虚拟环境的地图中的光标的操作。
或者,当用户通过手柄控制虚拟对象时,上述滑动绘制操作可以是用户按下手柄中某个按键后,通过手柄中的摇杆移动虚拟环境的地图中的光标的操作。
步骤33,控制虚拟对象在虚拟环境中沿移动路径进行移动。
其中,终端在展示虚拟环境的显示画面时,可以接收用户通过键盘实体按键、手柄摇杆或者触摸屏触摸屏中的虚拟按键执行的移动控制操作,并根据接收到的移动控制操作控制虚拟对象在虚拟环境中移动。
进一步的,该虚拟环境的显示画面的上层还可以显示包含有虚拟环境的地图的路径绘制界面,用户在该虚拟环境的地图中执行的滑动绘制操作的滑动操作轨迹,可以对应生成指示移动路径的寻路指令,终端获取到该寻路指令后,可以根据寻路指令所指示的移动路径自动控制虚拟对象进行移动。
在本申请实施例中,通过图3所示的方案,当应用程序在终端中运行,且生成并展示虚拟环境时,若终端接收到寻路指令,则可以根据该寻路指令获取移动路径,并控制虚拟对象在虚拟环境中沿移动路径进行移动。在此过程中,终端获取到的移动路径是根据用户在路径绘制界面显示的地图中执行的滑动 绘制操作的滑动轨迹生成的路径,也就是说,通过图3所示的方案,用户可以通过在路径绘制界面显示的地图中执行滑动操作,并以滑动轨迹来指示后续虚拟对象在虚拟环境中自动寻路的移动路径,该移动路径可以由用户按照实际需求灵活设置,使得控制虚拟对象的自动寻路的移动路径更加多样化,从而提高对虚拟环境中的虚拟对象的控制效果。
在一种可能的实现方式中,用户可以在当前终端展示的路径绘制界面中执行上述绘制操作,以使得当前终端控制当前终端展示的虚拟环境中的虚拟对象沿着绘制操作的操作轨迹对应的移动路径进行移动。
请参考图4,其示出了本申请一个示例性的实施例提供的一种虚拟对象控制方法的流程图,该方法可以用于第一终端中。以用户在第一终端展示的路径绘制界面中执行绘制操作,以使得第一终端控制该第一终端展示的虚拟环境中的虚拟对象沿着绘制操作的操作轨迹对应的移动路径进行移动为例,该虚拟对象控制方法可以包括如下几个步骤:
步骤401,获取由绘制操作触发的第一指令,第一指令中包含用户在路径绘制界面中执行的绘制操作的操作轨迹上的至少三个操作点的位置信息。
其中,上述操作点的位置信息,可以是指示对应的操作点在路径绘制界面中的位置的信息,比如,该位置信息可以包括对应的操作点在路径绘制界面对应的坐标系中的坐标。
在本申请实施中,寻路指令是用户在第一终端展示的路径绘制界面中执行的绘制操作触发生成的第一指令。其中,第一终端可以通过以下步骤401a至步骤401d获取该第一指令。
步骤401a,展示路径绘制界面。
在本申请实施例中,用户需要自主规划虚拟对象的移动路径时,可以对虚拟环境中的绘制界面入口执行预定操作,第一终端检测到该预定操作后,在虚拟环境的展示界面的上层展示该路径绘制界面。
其中,上述路径绘制界面可以是地图展示界面,该地图展示界面中展示有虚拟环境的地图,而绘制界面入口可以是缩略地图。请参考图5,其示出了本申请实施例涉及的一种展示路径绘制界面的示意图。如图5的(a)部分所示,在第一终端展示的虚拟环境50中,右上角包含缩略地图51。如图5的(b)部分所示,在检测到用户点击缩略地图51后,第一终端在虚拟环境50上层展示 完整地图52,该完整地图52所在的界面即为上述的路径绘制界面。
步骤401b,获取在路径绘制界面中执行的绘制操作的操作轨迹。
第一终端展示路径绘制界面后,用户即可以在该路径绘制界面中执行绘制操作。其中,以在触摸屏中展示该虚拟环境的显示界面为例,该绘制操作可以是用户手指在路径绘制界面中的触摸滑动操作。
为了避免用户在路径绘制界面发生误操作,在本申请实施例中,第一终端可以在路径绘制界面中展示一个触发控件,当接收到对该触发控件的预定操作时,第一终端控制路径绘制界面进入接受绘制状态,并获取该接受绘制状态,在路径绘制界面中执行的绘制操作的操作轨迹。
比如,第一终端展示路径绘制界面时,会在路径绘制界面中展示一个触发控件,若该触发控件未接收到预定操作,则路径绘制界面不会进入接受绘制状态,此时第一终端也不会检测在路径绘制界面中执行的操作,或者,第一终端丢弃在路径绘制界面中执行的绘制操作,避免用户误操作。当用户需要自主规划路径时,可以对该触发控件执行预定操作(比如点击操作),此时,路径绘制界面才会进入接受绘制状态,在路径绘制界面处于接受绘制状态时,第一终端检测在路径绘制界面中执行的绘制操作。
请参考图6,其示出了本申请实施例涉及的一种绘制操作示意图。如图6所示,在图5中(b)部分的基础上,完整地图52的左上角展示有触发控件52a,检测到用户点击该触发控件52a之后,第一终端控制完整地图52进入接受绘制状态,为了提高识别度,完整地图52进入接受绘制状态时,第一终端可以显示特殊的标记,比如边框加粗、边框高亮或者出现文本提示。完整地图52进入接受绘制状态后,用户可以在完整地图52中执行触摸滑动操作,第一终端检测到该触摸滑动操作后,可以获取该触摸滑动操作的操作轨迹53(也就是触摸滑动操作的触摸滑动轨迹)。
在实际应用中,第一终端可以通过硬件层面和程序层面结合获取触摸滑动操作的操作轨迹,原理如下:
一、硬件层面:
如图7所示,其示出了本申请实施例涉及的一种互电容触控屏原理图。如图7所示,互电容触摸屏可以包含保护层、透明电极图形层以及玻璃衬底,其中,透明电极图形层内部有上下两层ITO(Indium Tin Oxides,氧化铟锡)导电膜。这两层ITO导电膜之间储存着很多电荷,当用户手指触摸到触摸屏上某 个点时,两层ITO导电膜上对应该点位置的电荷会有一部分流失并转移到人体,此时会在该点处产生微弱的电流,两层ITO导电膜分别分布着代表横轴(X轴)的电极和代表纵轴(Y轴)的电极,两层ITO导电膜相互叠加,本身就构成一套精确的二维坐标系,互电容触摸屏可以通过检测微弱的电流来定位该二维坐标系中电荷流失的点(也就是用户触摸点)。
二、程序层面:
当上述硬件层面检测到用户触摸点时,第一终端的操作系统中会触发触摸事件。其中,第一终端的操作系统中的触摸事件(touch)会在用户手指放在屏幕的时候、在屏幕上滑动的时候或者是从屏幕上移开的时候触发。触摸事件可以有如下几种:
touchstart事件(触摸开始事件):当手指开始触摸屏幕时触发,即使在已经有一个手指放在屏幕上的情况下,当有其它手指触摸屏幕时,也会触发该事件。
touchmove事件(触摸移动事件):当手指在屏幕上滑动的时候连续触发。在该事件发生期间,调用preventDefault()事件可以阻止滚动。
touchend事件(触摸结束事件):当手指从屏幕上离开的时候触发。
touchcancel事件(触摸取消事件):当系统停止跟踪触摸的时候触发。
其中,每个触摸事件还包含以下三个用于跟踪触摸的属性:
touches:表示当前跟踪的触摸操作的touch对象的数组。
targetTouches:特定于事件目标的Touch对象的数组。
changeTouches:表示自上次触摸以来发生了改变的Touch对象的数组。
其中,每个Touch对象可以包含如下属性:
clientX:触摸目标在视口中的x坐标。
clientY:触摸目标在视口中的y坐标。
identifier:标识触摸的唯一ID。
pageX:触摸目标在页面中的x坐标。
pageY:触摸目标在页面中的y坐标。
screenX:触摸目标在屏幕中的x坐标。
screenY:触摸目标在屏幕中的y坐标。
Target:触摸的DOM(Document Object Model,文档对象模型)节点目标。
第一终端中的应用程序可以通过上述程序层面获得的触摸事件获取在路 径绘制界面中执行的绘制操作的操作轨迹。比如,请参考图8,其示出了本申请实施例涉及的一种根据触摸事件确定操作轨迹的示意图。如图8所示,第一终端可以根据触摸开始事件、触摸结束事件以及触摸开始事件和触摸结束事件之间的触摸移动事件各自对应的坐标获取到在路径绘制界面中执行的绘制操作的操作轨迹。
步骤401c,按照预设的采样规则从操作轨迹中采样获得至少三个操作点的位置信息,至少三个操作点包括操作轨迹的起点、终点以及至少一个中间点。
其中,上述操作轨迹由若干个操作点的位置信息构成,为了降低后续处理过程中的计算量,在本申请实施例中,第一终端从组成操作轨迹的若干个操作点中采样获得至少三个操作点,其中,该至少三个操作点包括操作轨迹的起点、终点以及至少一个中间点。
其中,上述至少三个操作点的数量越多,该至少三个操作点之间的连线与上述操作轨迹越匹配,后续生成的移动路径与该操作轨迹也越匹配。因此,为了保证后续生成的移动路径能够与操作轨迹足够匹配,上述采集的至少三个操作点需要能够还原出操作轨迹的大致轮廓,也就是说至少三个操作点需要满足一定的数量要求。
在一种可能的实现方式中,在按照预设的采样规则从操作轨迹中采样获得至少三个操作点的位置信息时,第一终端可以按照固定的采样率,从若干个操作点中采样获得上述至少三个操作点,比如,以采样率为1/20为例,第一终端可以首先采集操作轨迹的起点和终点,并从操作轨迹中除了起点和终点之外的各个操作点中,每20个操作点中采样出1个操作点,最后将操作轨迹的起点、终点以及每20个操作点中采样出的1个操作点作为上述至少三个操作点。
其中,第一终端在按照固定的采样率在若干个操作点中进行采样时,可以将若干个操作点中除了起点和终点之外的操作点,按照在操作轨迹中的顺序进行排列,并在排列后的各个操作点中按照固定的采样率进行采样。以采样率为1/20为例,在一种可能的采样方式中,第一终端可以将排列后的每20个操作点中,处于预定位置的操作点作为采样获得的操作点,例如,第一终端可以将每20个操作点中的最后一个操作点作为采样获得的操作点;或者,在另一种可能的采样方式中,第一终端也可以将排列后的每20个操作点中,随机的一个操作点作为采样获得的操作点。
在另一种可能的实现方式中,在按照预设的采样规则从操作轨迹中采样获 得至少三个操作点的位置信息时,第一终端可以将操作轨迹划分为至少两段轨迹分段,并获取至少两段轨迹分段各自的曲率,根据至少两段轨迹分段各自的曲率获取至少两段轨迹分段各自对应的采样率,根据至少两段轨迹分段各自对应的采样率,分别对至少两段轨迹分段进行采样,获得至少三个操作点的位置信息。
曲率越小的操作轨迹,还原出该操作轨迹所需要的操作点的数量越少,比如,当某一操作轨迹为直线时,还原该操作轨迹只需要两个操作点;相应的,曲率越大的操作轨迹,还原出该操作轨迹所需要的操作点的数量越多。而在实际应用中,用户的绘制操作的操作轨迹很大情况下并不是平直的线条,而复杂多变的,为了能够通过尽可能少的操作点来生成与操作轨迹尽可能匹配的移动路径,在本申请实施例中,可以将操作轨迹划分为至少两段轨迹分段,并根据每一段轨迹分段各自的曲率确定对应的采样率。其中,在本申请实施例中,轨迹分段的曲率表示轨迹分段的平滑程度,轨迹分段的曲率越小,表示轨迹分段越平滑,并且,轨迹分段的曲率与轨迹分段对应的采样率成正比,即轨迹分段的曲率越大,轨迹分段对应的采样率越高。
比如,请参考图9,其示出了本申请实施例涉及的一种操作点采集示意图。以图6所示的操作轨迹为例,第一终端可以将操作轨迹90划分为轨迹分段91、轨迹分段92、轨迹分段93以及轨迹分段94(以图9中的分割标记进行划分,其中,分割标记是为了便于理解而引入的标记),其中,轨迹分段91和轨迹分段94的曲率较小(接近0),对应的采样率较低。在图9中,除了起点95和终点96之外,第一终端在轨迹分段91和轨迹分段94中分别采样出3个操作点,而轨迹分段92和轨迹分段93的曲率较大,对应的采样率较高,在图9中,第一终端在轨迹分段92和轨迹分段93中分别采样出8个操作点。
步骤401d,生成包含绘制操作的操作轨迹上的至少三个操作点的位置信息的第一指令。
第一终端采样获得上述至少三个操作点的位置信息之后,即可以生成包含采样获得的位置信息的第一指令,以触发后续的路径生成的步骤。
步骤402,根据绘制操作的操作轨迹上的至少三个操作点的位置信息生成移动路径。
在获得上述第一指令后,第一终端即可以根据上述第一指令包含的操作点的位置信息,以及各个操作点对应在虚拟环境中位置点,生成控制虚拟对象移 动的路径。
可选的,在根据绘制操作的操作轨迹上的至少三个操作点的位置信息生成移动路径时,第一终端获取虚拟环境中,与至少三个操作点的位置信息分别对应的至少三个位置点,根据虚拟环境中可通过的各个位置点,通过预设的路径生成算法生成至少三个位置点中每相邻两个位置点之间的子路径,并将至少三个位置点中每相邻两个位置点之间的子路径拼接为移动路径。
在本申请实施例中,第一终端可以根据第一指令中包含的至少三个操作点的位置信息,确定该至少三个操作点中每一个操作点对应在虚拟环境中的位置点,获得同等数量的至少三个位置点,并根据虚拟环境中可通过的位置确定每相邻两个位置点之间的子路径,再将每相邻两个位置点之间的子路径所组成的完整路径作为最终的移动路径。
其中,在生成至少三个位置点中每相邻两个位置点之间的子路径时,第一终端可以通过预设的路径生成算法进行子路径的生成,比如,第一终端可以通过A星算法生成每相邻两个位置点之间的子路径。
其中,A星算法也称为A*搜寻算法,是一种在图形平面上从多个节点的路径中求出最低通过成本的算法,常用于游戏中的玩家角色(Player Character,PC)的移动计算,或线上游戏的机器人(robot)的移动计算。A星算法将寻路区域分成多个相连的多边形网格(例如三角形),每个多边形网格是寻路的节点,而从起始点到目标点的导航网格寻路,就是从起始点所在的网格,到目标点所在的网格,中间需要经过哪些网格路径。通过A星算法可以计算出从起始点到目标点需要经过的导航网格路径列表,得出需要经过的导航网格路径列表后,再计算通行的路径点列表,将路径点连线,即是最终确定出的寻路路径。以每相邻两个位置点作为起始点和目标点,通过A星算法获得的寻路路径即为该相邻两个位置点之间的子路径。
可选的,在通过预设的路径生成算法生成至少三个位置点中每相邻两个位置点之间的子路径之前,第一终端还检测目标位置点是否为有效位置点,目标位置点是至少三个位置点中的任意位置点,有效位置点是虚拟对象可到达的位置点;当目标位置点不是有效位置点时,确定第一有效位置点,第一有效位置点是上述至少三个位置点之外,且距离目标位置点最近的有效位置点;判断目标位置点与第一有效位置点之间的距离是否小于预设距离阈值;当目标位置点与第一有效位置点之间的距离小于预设距离阈值时,将目标位置点替换为第一 有效位置点;当目标位置点与第一有效位置点之间的距离不小于预设距离阈值时,将目标位置点从至少三个位置点中移除。
其中,上述预设距离阈值可以是虚拟环境对应的应用程序的开发人员预先设置。
在实际应用中,虚拟环境中可能存在一些虚拟对象不可到达的位置点(即不可通过的位置点),比如,某些虚拟环境中某些位置点可能处于水中或者山上,当至少三个位置点中存在虚拟对象不可到达的位置点时,从至少三个位置点中,该不可到达的位置点的前一个位置点到该不可到达的位置点之间不存在寻路路径,此时,需要对该至少三个位置点进行调整,其调整方式可以为:将不可到达的位置点替换为预设范围内最近的一个可到达的位置点,若预设范围内不存在可到达的位置点,则可以直接将该不可到达的位置点从至少三个位置点中移除。
比如,请参考图10,其示出了本申请实施例涉及的位置点调整示意图。图10中(a)部分所示,虚拟环境中存在若干个与采集到的操作点相对应的位置点,且位置点101和位置点102不属于有效位置点,其中,位置点101周围预设范围区域内存在其它的有效位置点,而位置点102周围预设范围区域内不存在其它的有效位置点,此时,第一终端对位置点101和位置点102进行调整,如图10中(b)部分所示,第一终端通过距离位置点101最近的有效位置点103替换位置点101,并且将位置点102移除。
可选的,在根据绘制操作的操作轨迹上的至少三个操作点的位置信息生成移动路径之前,第一终端还获取至少三个操作点中的有效点的数量,有效点对应在虚拟环境中的位置点是虚拟对象可到达的位置点;当有效点的数量与至少三个操作点的数量之间的比值高于预设数值时,第一终端执行根据绘制操作的操作轨迹上的至少三个操作点的位置信息生成移动路径的步骤。
在本申请实施例中,第一终端在生成移动路径之前,可以检测用户的绘制操作的操作轨迹是否满足条件,只有当操作轨迹满足条件时,才会执行生成移动路径的步骤,在一种可能的实现方式中,第一终端检测用户的绘制操作的操作轨迹是否满足条件时,可以计算上述至少三个操作点中,有效点的数量占该至少三个操作点的数量的比值,当该比值大于预设数值,比如大于95%时,才认为用户的绘制操作的操作轨迹满足条件,否则,可以认为用户的绘制操作的操作轨迹不满足条件。
在一种可能的实现方式中,第一终端在生成移动路径之后,可以计算该移动路径与用户的绘制操作的操作轨迹之间的匹配度,当该匹配度高于预设匹配度阈值(该预设匹配度阈值可以由开发人员预先设置)时,可以执行控制虚拟对象进行移动的步骤。
在本申请实施例中,第一终端可以获取移动路径的线条图,并获取绘制操作的操作轨迹的线条图,然后计算两个线条图之间的相似度,将两个线条图之间的相似度作为移动路径与用户的绘制操作的操作轨迹之间的匹配度。
可选的,在生成上述移动路径之后,第一终端还在路径绘制界面中展示生成的上述移动路径。
步骤403,控制虚拟对象在虚拟环境中沿移动路径进行移动。
在获得上述移动路径之后,不需要用户再执行控制操作,第一终端即可以自动控制虚拟对象在虚拟环境中沿上述移动路径进行移动。
综上所述,本申请实施例所示的方案,获取由用户在路径绘制界面中执行的滑动绘制操作触发的第一指令(包含用户在路径绘制界面中执行的绘制操作的操作轨迹上的至少三个操作点的位置信息),根据至少三个操作点的位置信息生成移动路径,并控制虚拟对象在虚拟环境中沿移动路径进行移动,该移动路径可以由用户按照实际需求灵活设置,使得控制虚拟对象的自动寻路的移动路径更加多样化,从而提高对虚拟环境中的虚拟对象的控制效果。
此外,本申请实施例所示的方案,在获取第一指令时,通过对滑动绘制操作的操作轨迹进行采样获得至少三个操作点,并根据采样获得的至少三个操作点对应在虚拟环境中的至少三个位置点生成移动路径,简化了生成移动路径的计算量,提高移动路径生成的效率。
另外,本申请实施例所示的方案,在获取根据采样获得的至少三个操作点对应在虚拟环境中的至少三个位置点生成移动路径时,对至少三个位置点中虚拟对象不可到达的位置点进行调整,从而提高了生成移动路径的成功率。
通过上述图4所示的方案,第一终端在控制虚拟对象进行移动时,可以展示路径绘制界面,并获取到在路径绘制界面中执行的绘制操作时,在路径绘制界面中展示移动路径,该移动路径是根据绘制操作的操作轨迹生成的路径,并控制虚拟对象在虚拟环境中沿移动路径进行移动。
其中,上述图4所示的方案可以应用于游戏场景中,将游戏内的自动寻路 功能与画线或画图形相结合,用户可以让游戏角色在移动过程中,走出自己规划好的路径。比如,请参考图11,其示出了本申请实施例示出的一种自动寻路的流程示意图。如图11所示,用户可以在游戏的地图区域中,通过滑动触摸操作规划路径,终端的处理器收集到滑动触摸操作对应的触摸事件后,根据用户在游戏地图上的滑动轨迹获取寻路指令,该寻路指令包括起点、终点和详细路径点(即至少一个中间点)的位置信息。终端判断寻路指令所指示的位置信息(即滑动轨迹路径)是否满足条件,若满足条件,则终端根据寻路指令所指示的位置信息,以及游戏内可行走区域数据,确定游戏角色在游戏场景地图上的寻路路径(即上述移动路径),并控制游戏角色根据寻路路径,自动完成从路径起点(触发起点)到路径终点(触发终点)的寻路过程,若寻路指令所指示的位置信息不满足条件则结束。
其中,请参考图12,其示出了本申请实施例示出的一种自动寻路的执行模块架构图。如图12所示,上述过程可以由终端中的获取模块、判断模块以及寻路模块来完成,其中,获取模块执行上述获取寻路指令的步骤,判断模块执行上述判断寻路指令所指示的位置信息(即滑动轨迹路径)是否满足条件的步骤,而寻路模块执行上述根据寻路指令所指示的位置信息,以及游戏内可行走区域数据,确定游戏角色在游戏场景地图上的寻路路径,并控制游戏角色的寻路过程的步骤。
在一种可能的实现方式中,第一终端通过上述图4所示实施例中的方案生成移动路径之后,可以向第二终端发送路径指示信息,路径指示信息用于指示第二终端控制第二终端对应的虚拟对象在虚拟场景中沿该移动路径进行移动。比如,可以由第二终端展示该第一终端的生成的移动路径,当第二终端对应的用户选择该移动路径后,第二终端控制第二终端中的虚拟对象沿着该移动路径进行移动。
相应的,第一终端同样可以接收其它终端发送的路径指示信息,并且,当第一终端对应的用户选择该路径指示信息指示的,由其它终端生成的移动路径时,第二终端也可以控制虚拟对象沿着其它终端生成的移动路径进行移动。
请参考图13,其示出了本申请一个示例性实施例提供的一种虚拟对象控制方法的流程图,该方法可以用于第一终端中。以第二终端也可以控制虚拟对象沿着其它终端生成的移动路径进行移动为例,该虚拟对象控制方法可以包括如 下几个步骤:
步骤1301,接收第三终端发送的路径指示信息。
其中,上述路径指示信息用于指示至少一条备选路径中的部分或者全部备选路径,且该路径指示信息指示的备选路径是第三终端根据用户在第三终端展示的路径绘制界面中执行的绘制操作的操作轨迹生成的路径。
其中,第三终端可以按照上述图4所示的,根据用户在路径绘制界面中执行的绘制操作的操作轨迹生成移动路径的方法,生成上述至少一条备选路径中的部分或者全部备选路径。
步骤1302,展示移动路径选择界面,该移动路径选择界面中包含该至少一条备选路径。
在本申请实施例中,当上述路径指示信息用于指示至少一条备选路径中的部分备选路径时,该至少一条备选路径中的其它备选路径可以是第一终端按照上述图4所示的,根据用户在路径绘制界面中执行的绘制操作的操作轨迹生成移动路径的方法所生成的备选路径,或者,上述其他备选路径也可以是第三终端之外的其它终端通过路径指示信息所指示的备选路径。
步骤1303,获取由在移动路径选择界面中执行的路径选择操作触发的第二指令,第二指令用于指示路径选择操作对应的备选路径。
在本申请实施例中,用户可以从根据在路径绘制界面中执行的绘制操作的操作轨迹生成的各条备选路径中选择一条备选路径,第一终端接收到用户的路径选择操作时,即可以生成用于指示路径选择操作对应的备选路径的第二指令。
步骤1304,将该第二指令指示的备选路径获取为移动路径,并控制虚拟对象在虚拟环境中沿移动路径进行移动。
综上所述,本申请实施例所示的方案,获取由用户在路径选择界面中执行的路径选择操作触发的第二指令(指示路径选择操作对应的备选路径),将该第二指令指示的备选路径获取为移动路径,并控制虚拟对象在虚拟环境中沿移动路径进行移动,该移动路径可以由用户按照实际需求灵活设置,使得控制虚拟对象的自动寻路的移动路径更加多样化,从而提高对虚拟环境中的虚拟对象的控制效果。
通过上述图13所示的方案,第一终端可以展示移动路径选择界面,移动 路径选择界面中包含至少一条备选路径,备选路径是根据用户在路径绘制界面中执行的绘制操作的操作轨迹生成的路径,第一终端获取到在移动路径选择界面中执行的路径选择操作时,控制虚拟对象在虚拟环境中沿移动路径进行移动;移动路径是路径选择操作对应的备选路径。
其中,上述移动路径选择界面与路径绘制界面可以是相同的界面,比如,移动路径选择界面与路径绘制界面可以都是地图展示界面。
请参考图14,其示出了本申请一个示例性实施例提供的选择移动路径的操作示意图。如图14的(a)部分所示,在第一终端展示的虚拟环境140中,右上角包含缩略地图141。用户点击缩略地图141后,如图14的(b)部分所示,第一终端在虚拟环境140上层展示完整地图142,该完整地图142即为上述的移动路径选择界面。其中,该完整地图142中除了地图内容之外,还包含至少一条由第三终端生成并指示的移动路径(图14中示出为移动路径142a和移动路径142b),当用户点击移动路径142a时,第一终端控制虚拟对象在虚拟环境中沿移动路径142a进行移动,相应的,当用户点击移动路径142b时,第一终端控制虚拟对象在虚拟环境中沿移动路径142b进行移动。
比如,在某联机游戏场景中,用户a和用户b组队游戏,用户a可以通过在地图区域中的触摸滑动操作规划一条移动路径,用户a和用户b选择该移动路径后,用户a和用户b各自对应的终端,可以分别控制用户a和用户b的游戏角色,在游戏场景中沿着规划好的移动路径进行移动。或者,用户a可以通过在地图区域中的触摸滑动操作规划两条或者两条以上的备选路径,用户a和用户b分别选择一条备选路径作为各自的移动路径后,用户a和用户b各自对应的终端,可以分别控制用户a和用户b的游戏角色,在游戏场景中沿着各自选择的移动路径进行移动。
本方案在竞技生存这类需要频繁跑动操作的游戏中将有广泛应用,将原先只能通过设定单一目标位置寻路的方式变换为可以设定特殊路径进行移动,能显著的提升玩家的自动寻路体验,提供更多的游戏乐趣。
图15是根据一示例性实施例示出的一种虚拟对象控制装置的结构方框图。该虚拟对象控制装置可以用于终端中,以执行图3、图4或图13任一实施例所示的方法的全部或者部分步骤。该虚拟对象控制装置可以包括:
指令获取模块1501,用于获取寻路指令;
路径获取模块1502,用于根据所述寻路指令获取移动路径,所述移动路径是根据在路径绘制界面中执行的绘制操作的操作轨迹生成的路径,所述路径绘制界面中包含虚拟环境的地图,且所述绘制操作是在所述虚拟环境的地图上执行的滑动绘制操作;
控制模块1503,用于控制虚拟对象在虚拟环境中沿所述移动路径进行移动。
可选的,所述指令获取模块1501,具体用于,
获取由所述绘制操作触发的第一指令,所述第一指令中包含所述绘制操作的操作轨迹上的至少三个操作点的位置信息;
所述路径获取模块1502,具体用于,
根据所述绘制操作的操作轨迹上的至少三个操作点的位置信息生成所述移动路径。
可选的,在获取由所述绘制操作触发的第一指令时,所述指令获取模块1501,具体用于,
展示所述路径绘制界面;
获取在所述路径绘制界面中执行的所述绘制操作的操作轨迹;
按照预设的采样规则从所述操作轨迹中采样获得所述至少三个操作点的位置信息,所述至少三个操作点包括所述操作轨迹的起点、终点以及至少一个中间点;
生成包含所述绘制操作的操作轨迹上的至少三个操作点的位置信息的所述第一指令。
可选的,在按照预设的采样规则从所述操作轨迹中采样获得所述至少三个操作点的位置信息时,所述指令获取模块1501,具体用于,
将所述操作轨迹划分为至少两段轨迹分段;
获取所述至少两段轨迹分段各自的曲率;
根据所述至少两段轨迹分段各自的曲率获取所述至少两段轨迹分段各自对应的采样率;
根据所述至少两段轨迹分段各自对应的采样率,分别对所述至少两段轨迹分段进行采样,获得所述至少三个操作点的位置信息。
可选的,在根据所述绘制操作的操作轨迹上的至少三个操作点的位置信息生成所述移动路径时,所述路径获取模块1502,具体用于,
获取所述虚拟环境中,与所述至少三个操作点的位置信息分别对应的至少三个位置点;
通过预设的路径生成算法生成所述至少三个位置点中每相邻两个位置点之间的子路径;
将所述至少三个位置点中每相邻两个位置点之间的子路径拼接为所述移动路径。
可选的,在通过预设的路径生成算法生成所述至少三个位置点中每相邻两个位置点之间的子路径之前,所述路径获取模块1502,具体还用于,
检测目标位置点是否为有效位置点,所述目标位置点是所述至少三个位置点中的任意位置点,所述有效位置点是所述虚拟对象可到达的位置点;
当所述目标位置点不是有效位置点时,确定第一有效位置点,所述第一有效位置点是所述至少三个位置点之外,且距离所述目标位置点最近的有效位置点;
判断所述目标位置点与所述第一有效位置点之间的距离是否小于预设距离阈值;
当所述目标位置点与所述第一有效位置点之间的距离小于所述预设距离阈值时,将所述目标位置点替换为所述第一有效位置点;
当所述目标位置点与所述第一有效位置点之间的距离不小于所述预设距离阈值时,将所述目标位置点从所述至少三个位置点中移除。
可选的,所述装置还包括:
数量获取模块1504,用于在路径获取模块1502根据所述绘制操作的操作轨迹上的至少三个操作点的位置信息生成所述移动路径之前,获取所述至少三个操作点中的有效点的数量,所述有效点对应在所述虚拟环境中的位置点是所述虚拟对象可到达的位置点;
所述路径获取模块1502,具体用于,
当所述有效点的数量与所述至少三个操作点的数量之间的比值高于预设数值时,执行所述根据所述绘制操作的操作轨迹上的至少三个操作点的位置信息生成所述移动路径的步骤。
可选的,所述装置还包括:
信息发送模块1505,用于向第二终端发送路径指示信息,所述路径指示信息用于指示所述第二终端控制所述第二终端对应的虚拟对象在所述虚拟场景 中沿所述移动路径进行移动。
可选的,所述指令获取模块1501,具体用于,
展示移动路径选择界面,所述移动路径选择界面中包含至少一条备选路径;
获取第二指令,所述第二指令是在所述移动路径选择界面中执行的路径选择操作触发的指令,所述第二指令用于指示所述路径选择操作对应的备选路径;
所述路径获取模块1502,具体用于,
将所述第二指令指示的所述路径选择操作对应的备选路径获取为所述移动路径。
可选的,所述装置还包括:
信息接收模块1506,用于在所述指令获取模块1501展示移动路径选择界面之前,接收第三终端发送的路径指示信息,所述路径指示信息用于指示所述至少一条备选路径中的部分或者全部备选路径,所述路径指示信息指示的备选路径是所述第三终端根据用户在所述路径绘制界面中执行的绘制操作的操作轨迹生成的路径。
综上所述,通过本申请实施例提供的装置,当应用程序在终端中运行,且生成并展示虚拟环境时,若终端接收到寻路指令,则可以根据该寻路指令获取移动路径,并控制虚拟对象在虚拟环境中沿移动路径进行移动。在此过程中,终端获取到的移动路径是根据用户在包含虚拟环境的地图的路径绘制界面中执行的滑动绘制操作的操作轨迹生成的路径,也就是说,用户可以通过在虚拟环境的地图上执行滑动绘制操作,并以滑动轨迹来指示后续虚拟对象在虚拟环境中自动寻路的移动路径,该移动路径可以由用户按照实际需求灵活设置,使得控制虚拟对象的自动寻路的移动路径更加多样化,从而提高对虚拟环境中的虚拟对象的控制效果。
图16是根据一示例性实施例示出的计算机设备1600的结构框图。该计算机设备1600可以是用户终端,比如智能手机、平板电脑、MP3播放器(Moving Picture Experts Group Audio Layer III,动态影像专家压缩标准音频层面3)、MP4(Moving Picture Experts Group Audio Layer IV,动态影像专家压缩标准音频层面4)播放器、笔记本电脑或台式电脑。计算机设备1600还可能被称为用户设 备、便携式终端、膝上型终端、台式终端等其他名称。
通常,计算机设备1600包括有:处理器1601和存储器1602。
处理器1601可以包括一个或多个处理核心,比如4核心处理器、8核心处理器等。处理器1601可以采用DSP(Digital Signal Processing,数字信号处理)、FPGA(Field-Programmable Gate Array,现场可编程门阵列)、PLA(Programmable Logic Array,可编程逻辑阵列)中的至少一种硬件形式来实现。处理器1601也可以包括主处理器和协处理器,主处理器是用于对在唤醒状态下的数据进行处理的处理器,也称CPU(Central Processing Unit,中央处理器);协处理器是用于对在待机状态下的数据进行处理的低功耗处理器。在一些实施例中,处理器1601可以在集成有GPU(Graphics Processing Unit,图像处理器),GPU用于负责显示屏所需要显示的内容的渲染和绘制。一些实施例中,处理器1601还可以包括AI(Artificial Intelligence,人工智能)处理器,该AI处理器用于处理有关机器学习的计算操作。
存储器1602可以包括一个或多个计算机可读存储介质,该计算机可读存储介质可以是非暂态的。存储器1602还可包括高速随机存取存储器,以及非易失性存储器,比如一个或多个磁盘存储设备、闪存存储设备。在一些实施例中,存储器1602中的非暂态的计算机可读存储介质用于存储至少一个指令,该至少一个指令用于被处理器1601所执行以实现本申请中方法实施例提供的虚拟对象控制方法。
在一些实施例中,计算机设备1600还可选包括有:外围设备接口1603和至少一个外围设备。处理器1601、存储器1602和外围设备接口1603之间可以通过总线或信号线相连。各个外围设备可以通过总线、信号线或电路板与外围设备接口1603相连。具体地,外围设备包括:射频电路1604、触摸显示屏1605、摄像头1606、音频电路1607、定位组件1608和电源1609中的至少一种。
外围设备接口1603可被用于将I/O(Input/Output,输入/输出)相关的至少一个外围设备连接到处理器1601和存储器1602。在一些实施例中,处理器1601、存储器1602和外围设备接口1603被集成在同一芯片或电路板上;在一些其他实施例中,处理器1601、存储器1602和外围设备接口1603中的任意一个或两个可以在单独的芯片或电路板上实现,本实施例对此不加以限定。
射频电路1604用于接收和发射RF(Radio Frequency,射频)信号,也称电磁信号。射频电路1604通过电磁信号与通信网络以及其他通信设备进行通 信。射频电路1604将电信号转换为电磁信号进行发送,或者,将接收到的电磁信号转换为电信号。可选地,射频电路1604包括:天线系统、RF收发器、一个或多个放大器、调谐器、振荡器、数字信号处理器、编解码芯片组、用户身份模块卡等等。射频电路1604可以通过至少一种无线通信协议来与其它终端进行通信。该无线通信协议包括但不限于:万维网、城域网、内联网、各代移动通信网络(2G、3G、4G及5G)、无线局域网和/或WiFi(Wireless Fidelity,无线保真)网络。在一些实施例中,射频电路1604还可以包括NFC(Near Field Communication,近距离无线通信)有关的电路,本申请对此不加以限定。
显示屏1605用于显示UI(User Interface,用户界面)。该UI可以包括图形、文本、图标、视频及其它们的任意组合。当显示屏1605是触摸显示屏时,显示屏1605还具有采集在显示屏1605的表面或表面上方的触摸信号的能力。该触摸信号可以作为控制信号输入至处理器1601进行处理。此时,显示屏1605还可以用于提供虚拟按钮和/或虚拟键盘,也称软按钮和/或软键盘。在一些实施例中,显示屏1605可以为一个,设置计算机设备1600的前面板;在另一些实施例中,显示屏1605可以为至少两个,分别设置在计算机设备1600的不同表面或呈折叠设计;在再一些实施例中,显示屏1605可以是柔性显示屏,设置在计算机设备1600的弯曲表面上或折叠面上。甚至,显示屏1605还可以设置成非矩形的不规则图形,也即异形屏。显示屏1605可以采用LCD(Liquid Crystal Display,液晶显示屏)、OLED(Organic Light-Emitting Diode,有机发光二极管)等材质制备。
摄像头组件1606用于采集图像或视频。可选地,摄像头组件1606包括前置摄像头和后置摄像头。通常,前置摄像头设置在终端的前面板,后置摄像头设置在终端的背面。在一些实施例中,后置摄像头为至少两个,分别为主摄像头、景深摄像头、广角摄像头、长焦摄像头中的任意一种,以实现主摄像头和景深摄像头融合实现背景虚化功能、主摄像头和广角摄像头融合实现全景拍摄以及VR(Virtual Reality,虚拟现实)拍摄功能或者其它融合拍摄功能。在一些实施例中,摄像头组件1606还可以包括闪光灯。闪光灯可以是单色温闪光灯,也可以是双色温闪光灯。双色温闪光灯是指暖光闪光灯和冷光闪光灯的组合,可以用于不同色温下的光线补偿。
音频电路1607可以包括麦克风和扬声器。麦克风用于采集用户及环境的声波,并将声波转换为电信号输入至处理器1601进行处理,或者输入至射频 电路1604以实现语音通信。出于立体声采集或降噪的目的,麦克风可以为多个,分别设置在计算机设备1600的不同部位。麦克风还可以是阵列麦克风或全向采集型麦克风。扬声器则用于将来自处理器1601或射频电路1604的电信号转换为声波。扬声器可以是传统的薄膜扬声器,也可以是压电陶瓷扬声器。当扬声器是压电陶瓷扬声器时,不仅可以将电信号转换为人类可听见的声波,也可以将电信号转换为人类听不见的声波以进行测距等用途。在一些实施例中,音频电路1607还可以包括耳机插孔。
定位组件1608用于定位计算机设备1600的当前地理位置,以实现导航或LBS(Location Based Service,基于位置的服务)。定位组件1608可以是基于美国的GPS(Global Positioning System,全球定位系统)、中国的北斗系统或俄罗斯的伽利略系统的定位组件。
电源1609用于为计算机设备1600中的各个组件进行供电。电源1609可以是交流电、直流电、一次性电池或可充电电池。当电源1609包括可充电电池时,该可充电电池可以是有线充电电池或无线充电电池。有线充电电池是通过有线线路充电的电池,无线充电电池是通过无线线圈充电的电池。该可充电电池还可以用于支持快充技术。
在一些实施例中,计算机设备1600还包括有一个或多个传感器1610。该一个或多个传感器1610包括但不限于:加速度传感器1611、陀螺仪传感器1612、压力传感器1613、指纹传感器1614、光学传感器1615以及接近传感器1616。
加速度传感器1611可以检测以计算机设备1600建立的坐标系的三个坐标轴上的加速度大小。比如,加速度传感器1611可以用于检测重力加速度在三个坐标轴上的分量。处理器1601可以根据加速度传感器1611采集的重力加速度信号,控制触摸显示屏1605以横向视图或纵向视图进行用户界面的显示。加速度传感器1611还可以用于游戏或者用户的运动数据的采集。
陀螺仪传感器1612可以检测计算机设备1600的机体方向及转动角度,陀螺仪传感器1612可以与加速度传感器1611协同采集用户对计算机设备1600的3D动作。处理器1601根据陀螺仪传感器1612采集的数据,可以实现如下功能:动作感应(比如根据用户的倾斜操作来改变UI)、拍摄时的图像稳定、游戏控制以及惯性导航。
压力传感器1613可以设置在计算机设备1600的侧边框和/或触摸显示屏 1605的下层。当压力传感器1613设置在计算机设备1600的侧边框时,可以检测用户对计算机设备1600的握持信号,由处理器1601根据压力传感器1613采集的握持信号进行左右手识别或快捷操作。当压力传感器1613设置在触摸显示屏1605的下层时,由处理器1601根据用户对触摸显示屏1605的压力操作,实现对UI界面上的可操作性控件进行控制。可操作性控件包括按钮控件、滚动条控件、图标控件、菜单控件中的至少一种。
指纹传感器1614用于采集用户的指纹,由处理器1601根据指纹传感器1614采集到的指纹识别用户的身份,或者,由指纹传感器1614根据采集到的指纹识别用户的身份。在识别出用户的身份为可信身份时,由处理器1601授权该用户执行相关的敏感操作,该敏感操作包括解锁屏幕、查看加密信息、下载软件、支付及更改设置等。指纹传感器1614可以被设置计算机设备1600的正面、背面或侧面。当计算机设备1600上设置有物理按键或厂商Logo时,指纹传感器1614可以与物理按键或厂商Logo集成在一起。
光学传感器1615用于采集环境光强度。在一个实施例中,处理器1601可以根据光学传感器1615采集的环境光强度,控制触摸显示屏1605的显示亮度。具体地,当环境光强度较高时,调高触摸显示屏1605的显示亮度;当环境光强度较低时,调低触摸显示屏1605的显示亮度。在另一个实施例中,处理器1601还可以根据光学传感器1615采集的环境光强度,动态调整摄像头组件1606的拍摄参数。
接近传感器1616,也称距离传感器,通常设置在计算机设备1600的前面板。接近传感器1616用于采集用户与计算机设备1600的正面之间的距离。在一个实施例中,当接近传感器1616检测到用户与计算机设备1600的正面之间的距离逐渐变小时,由处理器1601控制触摸显示屏1605从亮屏状态切换为息屏状态;当接近传感器1616检测到用户与计算机设备1600的正面之间的距离逐渐变大时,由处理器1601控制触摸显示屏1605从息屏状态切换为亮屏状态。
本领域技术人员可以理解,图16中示出的结构并不构成对计算机设备1600的限定,可以包括比图示更多或更少的组件,或者组合某些组件,或者采用不同的组件布置。
在一示例性实施例中,还提供了一种包括指令的非临时性计算机可读存储介质,例如包括至少一条指令、至少一段程序、代码集或指令集的存储器,上 述至少一条指令、至少一段程序、代码集或指令集可由处理器执行以完成上述图3、图4或图13任一实施例所示的方法的全部或者部分步骤。例如,所述非临时性计算机可读存储介质可以是ROM、随机存取存储器(RAM)、CD-ROM、磁带、软盘和光数据存储设备等。
本领域技术人员在考虑说明书及实践这里公开的发明后,将容易想到本申请的其它实施方案。本申请旨在涵盖本申请的任何变型、用途或者适应性变化,这些变型、用途或者适应性变化遵循本申请的一般性原理并包括本申请未公开的本技术领域中的公知常识或惯用技术手段。说明书和实施例仅被视为示例性的,本申请的真正范围和精神由下面的权利要求指出。
应当理解的是,本申请并不局限于上面已经描述并在附图中示出的精确结构,并且可以在不脱离其范围进行各种修改和改变。本申请的范围仅由所附的权利要求来限制。

Claims (15)

  1. A virtual object control method, wherein the method is performed by a first terminal, the method comprising:
    acquiring a pathfinding instruction;
    acquiring a moving path according to the pathfinding instruction, the moving path being a path generated according to an operation trajectory of a drawing operation performed in a path drawing interface, the path drawing interface containing a map of a virtual environment, and the drawing operation being a sliding drawing operation performed on the map of the virtual environment; and
    controlling a virtual object to move along the moving path in the virtual environment.
  2. The method according to claim 1, wherein the acquiring a pathfinding instruction comprises:
    acquiring a first instruction triggered by the drawing operation, the first instruction containing position information of at least three operation points on the operation trajectory of the drawing operation; and
    the acquiring a moving path according to the pathfinding instruction comprises:
    generating the moving path according to the position information of the at least three operation points on the operation trajectory of the drawing operation.
  3. The method according to claim 2, wherein the acquiring a first instruction triggered by the drawing operation comprises:
    displaying the path drawing interface;
    acquiring the operation trajectory of the drawing operation performed in the path drawing interface;
    sampling position information of the at least three operation points from the operation trajectory according to a preset sampling rule, the at least three operation points comprising a start point, an end point, and at least one intermediate point of the operation trajectory; and
    generating the first instruction containing the position information of the at least three operation points on the operation trajectory of the drawing operation.
  4. The method according to claim 3, wherein the sampling position information of the at least three operation points from the operation trajectory according to a preset sampling rule comprises:
    dividing the operation trajectory into at least two trajectory segments;
    acquiring a curvature of each of the at least two trajectory segments;
    obtaining, according to the curvatures of the at least two trajectory segments, a sampling rate corresponding to each of the at least two trajectory segments; and
    sampling the at least two trajectory segments respectively according to the corresponding sampling rates, to obtain the position information of the at least three operation points.
  5. The method according to claim 2, wherein the generating the moving path according to the position information of the at least three operation points on the operation trajectory of the drawing operation comprises:
    acquiring at least three position points in the virtual environment respectively corresponding to the position information of the at least three operation points;
    generating a sub-path between every two adjacent position points of the at least three position points by using a preset path generation algorithm; and
    splicing the sub-paths between every two adjacent position points of the at least three position points into the moving path.
  6. The method according to claim 5, wherein before the generating a sub-path between every two adjacent position points of the at least three position points by using a preset path generation algorithm, the method further comprises:
    detecting whether a target position point is a valid position point, the target position point being any one of the at least three position points, and a valid position point being a position point reachable by the virtual object;
    determining a first valid position point when the target position point is not a valid position point, the first valid position point being a valid position point outside the at least three position points that is closest to the target position point;
    determining whether a distance between the target position point and the first valid position point is less than a preset distance threshold;
    replacing the target position point with the first valid position point when the distance between the target position point and the first valid position point is less than the preset distance threshold; and
    removing the target position point from the at least three position points when the distance between the target position point and the first valid position point is not less than the preset distance threshold.
  7. The method according to any one of claims 2 to 6, wherein before the generating the moving path according to the position information of the at least three operation points on the operation trajectory of the drawing operation, the method further comprises:
    acquiring a quantity of valid points among the at least three operation points, a valid point being an operation point whose corresponding position point in the virtual environment is a position point reachable by the virtual object; and
    the generating the moving path according to the position information of the at least three operation points on the operation trajectory of the drawing operation comprises:
    performing the step of generating the moving path according to the position information of the at least three operation points on the operation trajectory of the drawing operation when a ratio of the quantity of valid points to a quantity of the at least three operation points is higher than a preset value.
  8. The method according to any one of claims 2 to 6, wherein the method further comprises:
    sending path indication information to a second terminal, the path indication information being used to instruct the second terminal to control a virtual object corresponding to the second terminal to move along the moving path in the virtual scene.
  9. The method according to claim 1, wherein the acquiring a pathfinding instruction comprises:
    displaying a moving path selection interface, the moving path selection interface containing at least one candidate path; and
    acquiring a second instruction, the second instruction being an instruction triggered by a path selection operation performed in the moving path selection interface, and the second instruction being used to indicate a candidate path corresponding to the path selection operation; and
    the acquiring a moving path according to the pathfinding instruction comprises:
    acquiring the candidate path indicated by the second instruction as the moving path.
  10. The method according to claim 9, wherein before the displaying a moving path selection interface, the method further comprises:
    receiving path indication information sent by a third terminal, the path indication information being used to indicate some or all of the at least one candidate path, and a candidate path indicated by the path indication information being a path generated by the third terminal according to an operation trajectory of a drawing operation performed in the path drawing interface.
  11. A virtual object control method, wherein the method is performed by a terminal, the method comprising:
    displaying a path drawing interface, the path drawing interface containing a map of a virtual environment;
    displaying a moving path in the path drawing interface when a drawing operation performed in the path drawing interface is acquired, the drawing operation being a sliding drawing operation performed on the map of the virtual environment, and the moving path being a path generated according to an operation trajectory of the drawing operation; and
    controlling a virtual object to move along the moving path in the virtual environment.
  12. A virtual object control method, wherein the method is performed by a terminal, the method comprising:
    displaying a moving path selection interface, the moving path selection interface containing at least one candidate path, the candidate path being a path generated according to an operation trajectory of a drawing operation performed in a path drawing interface, the path drawing interface containing a map of a virtual environment, and the drawing operation being a sliding drawing operation performed on the map of the virtual environment; and
    controlling a virtual object to move along a moving path in the virtual environment when a path selection operation performed in the moving path selection interface is acquired, the moving path being the candidate path corresponding to the path selection operation.
  13. A virtual object control apparatus, wherein the apparatus comprises:
    an instruction acquisition module, configured to acquire a pathfinding instruction;
    a path obtaining module, configured to acquire a moving path according to the pathfinding instruction, the moving path being a path generated according to an operation trajectory of a drawing operation performed in a path drawing interface, the path drawing interface containing a map of a virtual environment, and the drawing operation being a sliding drawing operation performed on the map of the virtual environment; and
    a control module, configured to control a virtual object to move along the moving path in the virtual environment.
  14. A computer device, wherein the computer device comprises a processor and a memory, the memory storing at least one instruction, at least one program, a code set, or an instruction set, and the at least one instruction, the at least one program, the code set, or the instruction set being loaded and executed by the processor to implement the virtual object control method according to any one of claims 1 to 10.
  15. A computer-readable storage medium, wherein the storage medium stores at least one instruction, at least one program, a code set, or an instruction set, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by a processor to implement the virtual object control method according to any one of claims 1 to 10.
PCT/CN2018/115924 2018-02-09 2018-11-16 虚拟对象控制方法、装置、计算机设备及存储介质 WO2019153824A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/909,954 US11565181B2 (en) 2018-02-09 2020-06-23 Virtual object control method and apparatus, computer device, and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810132752.1 2018-02-09
CN201810132752.1A CN108245888A (zh) 2018-02-09 2018-02-09 虚拟对象控制方法、装置及计算机设备

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/909,954 Continuation US11565181B2 (en) 2018-02-09 2020-06-23 Virtual object control method and apparatus, computer device, and storage medium

Publications (1)

Publication Number Publication Date
WO2019153824A1 true WO2019153824A1 (zh) 2019-08-15

Family

ID=62744886

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/115924 WO2019153824A1 (zh) 2018-02-09 2018-11-16 虚拟对象控制方法、装置、计算机设备及存储介质

Country Status (3)

Country Link
US (1) US11565181B2 (zh)
CN (1) CN108245888A (zh)
WO (1) WO2019153824A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112562051A (zh) * 2020-11-30 2021-03-26 腾讯科技(深圳)有限公司 虚拟对象显示方法、装置、设备及存储介质
CN113298909A (zh) * 2021-04-13 2021-08-24 网易(杭州)网络有限公司 虚拟道路的生成方法、装置、存储介质和处理器
CN113694530A (zh) * 2021-08-31 2021-11-26 网易(杭州)网络有限公司 虚拟角色移动控制方法、装置、电子设备及存储介质

Families Citing this family (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108245888A (zh) * 2018-02-09 2018-07-06 腾讯科技(深圳)有限公司 虚拟对象控制方法、装置及计算机设备
CN109116990B (zh) * 2018-08-20 2019-06-11 广州市三川田文化科技股份有限公司 一种移动控制的方法、装置、设备及计算机可读存储介质
CN109350964B (zh) * 2018-09-28 2020-08-11 腾讯科技(深圳)有限公司 控制虚拟角色的方法、装置、设备及存储介质
CN109568965B (zh) * 2018-11-30 2020-10-16 广州要玩娱乐网络技术股份有限公司 目标单位移动方法、装置、存储介质和终端
CN109621420A (zh) * 2018-12-26 2019-04-16 网易(杭州)网络有限公司 游戏中的寻路方法、装置、介质及电子设备
CN109568956B (zh) * 2019-01-10 2020-03-10 网易(杭州)网络有限公司 游戏中的显示控制方法、装置、存储介质、处理器及终端
CN109806585B (zh) * 2019-02-19 2024-02-23 网易(杭州)网络有限公司 游戏的显示控制方法、装置、设备和存储介质
CN110876849B (zh) * 2019-11-14 2022-09-20 腾讯科技(深圳)有限公司 虚拟载具的控制方法、装置、设备及存储介质
CN111061375B (zh) * 2019-12-25 2023-08-01 上海褚信医学科技有限公司 基于虚拟手术的智能消毒训练方法和设备
CN111159324A (zh) * 2019-12-30 2020-05-15 上海米哈游天命科技有限公司 一种对象移动方法、装置、终端及存储介质
CN111228804B (zh) * 2020-02-04 2021-05-14 腾讯科技(深圳)有限公司 在虚拟环境中驾驶载具的方法、装置、终端及存储介质
CN111346381B (zh) * 2020-03-02 2020-12-04 腾讯科技(深圳)有限公司 游戏路径控制方法、装置、设备及计算机可读存储介质
CN111773669A (zh) * 2020-07-03 2020-10-16 珠海金山网络游戏科技有限公司 一种在虚拟环境中生成虚拟对象方法及装置
CN111760268B (zh) * 2020-07-06 2021-06-08 网易(杭州)网络有限公司 一种游戏中的寻路控制方法及装置
CN111773671A (zh) * 2020-07-13 2020-10-16 网易(杭州)网络有限公司 虚拟对象的移动控制方法、装置和终端设备
CN111744197B (zh) * 2020-08-07 2022-03-15 腾讯科技(深圳)有限公司 一种数据处理方法、装置、设备及可读存储介质
CN112076467B (zh) * 2020-09-17 2023-03-10 腾讯科技(深圳)有限公司 控制虚拟对象使用虚拟道具的方法、装置、终端及介质
US20220184506A1 (en) * 2020-11-12 2022-06-16 Tencent Technology (Shenzhen) Company Limited Method and apparatus for driving vehicle in virtual environment, terminal, and storage medium
CN112569600B (zh) * 2020-12-23 2022-07-26 腾讯科技(深圳)有限公司 虚拟场景中的路径信息发送方法、计算机设备及存储介质
CN112732081A (zh) * 2020-12-31 2021-04-30 珠海金山网络游戏科技有限公司 虚拟对象的移动方法及装置
CN113426125A (zh) * 2021-07-02 2021-09-24 网易(杭州)网络有限公司 游戏中的虚拟单位控制方法及装置、存储介质、电子设备
CN113590013B (zh) * 2021-07-13 2023-08-25 网易(杭州)网络有限公司 虚拟资源处理方法、非易失性存储介质及电子装置
CN113592997B (zh) * 2021-07-30 2023-05-30 腾讯科技(深圳)有限公司 基于虚拟场景的物体绘制方法、装置、设备及存储介质
CN113827971A (zh) * 2021-09-24 2021-12-24 网易(杭州)网络有限公司 游戏地图的标记方法及装置、电子设备、存储介质
CN114130026B (zh) * 2021-10-29 2023-08-25 腾讯科技(深圳)有限公司 导航路线的获取方法、存储介质和电子设备
CN114344907B (zh) * 2022-03-08 2022-06-03 广州极尚网络技术有限公司 图像显示方法、装置、设备及存储介质
CN115862416B (zh) * 2023-01-20 2023-05-23 北京卓翼智能科技有限公司 一种路径规划方法、装置、电子设备及存储介质

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060094502A1 (en) * 2004-11-02 2006-05-04 Nintendo Co., Ltd. Video game device and video game program
US20070265082A1 (en) * 2006-04-28 2007-11-15 Nst Gesture-based control of multiple game characters and other animated objects
CN104740875A (zh) * 2015-04-13 2015-07-01 四川天上友嘉网络科技有限公司 游戏角色移动中的指引方法
CN105209138A (zh) * 2013-07-31 2015-12-30 株式会社得那 持续存储游戏程序的存储介质以及信息处理装置
CN105597310A (zh) * 2015-12-24 2016-05-25 网易(杭州)网络有限公司 一种游戏控制方法及装置
CN108245888A (zh) * 2018-02-09 2018-07-06 腾讯科技(深圳)有限公司 虚拟对象控制方法、装置及计算机设备

Family Cites Families (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100458760B1 (ko) * 2000-08-29 2004-12-03 가부시끼가이샤 코에이 집단 캐릭터 표시방법, 기록매체 및 게임장치
JP3795856B2 (ja) * 2002-12-09 2006-07-12 株式会社スクウェア・エニックス ビデオゲーム装置、ビデオゲームの進行制御方法、プログラム及び記録媒体
JP4307310B2 (ja) * 2004-03-31 2009-08-05 任天堂株式会社 ゲーム装置及びゲームプログラム
JP2006314349A (ja) * 2005-05-10 2006-11-24 Nintendo Co Ltd ゲームプログラムおよびゲーム装置
JP4326502B2 (ja) * 2005-05-26 2009-09-09 任天堂株式会社 表示領域を移動させる画像処理プログラムおよび画像処理装置
JP4748657B2 (ja) * 2005-06-24 2011-08-17 任天堂株式会社 入力データ処理プログラムおよび入力データ処理装置
JP4125760B2 (ja) * 2006-03-15 2008-07-30 株式会社スクウェア・エニックス ビデオゲーム処理装置、ビデオゲーム処理方法、およびビデオゲーム処理プログラム
US20200125244A1 (en) * 2009-12-03 2020-04-23 Innoventions, Inc. Context-based graphical view navigation guidance system
US10010793B2 (en) * 2010-06-14 2018-07-03 Nintendo Co., Ltd. Techniques for improved user interface helping super guides
CN103198234A (zh) 2013-04-25 2013-07-10 腾讯科技(深圳)有限公司 一种寻路方法和装置
JP6192366B2 (ja) * 2013-06-04 2017-09-06 任天堂株式会社 情報処理プログラム、情報処理装置の制御方法、情報処理装置および情報処理システム
US10068373B2 (en) * 2014-07-01 2018-09-04 Samsung Electronics Co., Ltd. Electronic device for providing map information
KR102332752B1 (ko) * 2014-11-24 2021-11-30 삼성전자주식회사 지도 서비스를 제공하는 전자 장치 및 방법
CN104548598B (zh) * 2014-12-31 2017-08-08 北京像素软件科技股份有限公司 一种虚拟现实场景中寻路的方法
CN104645616A (zh) * 2015-03-16 2015-05-27 成都优聚软件有限责任公司 一种塔防游戏中游戏对象的移动路径设置方法和系统
CN104740876A (zh) * 2015-04-13 2015-07-01 四川天上友嘉网络科技有限公司 游戏角色移动方向的指向方法
JP6523111B2 (ja) * 2015-09-10 2019-05-29 株式会社バンダイナムコエンターテインメント プログラム及びゲームシステム
CN105094346B (zh) * 2015-09-29 2018-09-25 腾讯科技(深圳)有限公司 一种信息处理方法、终端及计算机存储介质
CN108355348B (zh) * 2015-10-10 2021-01-26 腾讯科技(成都)有限公司 信息处理方法、终端及计算机存储介质
CN105955628A (zh) * 2016-04-22 2016-09-21 深圳市牛蛙互动网络技术有限公司 一种手控轨迹操作在游戏角色移动中的实施方法
WO2018042466A1 (ja) * 2016-08-31 2018-03-08 任天堂株式会社 ゲームプログラム、ゲーム処理方法、ゲームシステム、およびゲーム装置
WO2018103634A1 (zh) * 2016-12-06 2018-06-14 腾讯科技(深圳)有限公司 一种数据处理的方法及移动终端
CN106621329B (zh) * 2017-01-04 2020-06-23 腾讯科技(深圳)有限公司 一种游戏数据处理的方法
JP7029888B2 (ja) * 2017-05-23 2022-03-04 任天堂株式会社 情報処理プログラム、情報処理装置、情報処理システム、及び情報処理方法
CN107096222A (zh) * 2017-06-08 2017-08-29 深圳市乃斯网络科技有限公司 游戏中定位路径规划方法及系统
CN107694089B (zh) * 2017-09-01 2019-02-12 网易(杭州)网络有限公司 信息处理方法、装置、电子设备及存储介质
CN107648848B (zh) * 2017-09-01 2018-11-16 网易(杭州)网络有限公司 信息处理方法及装置、存储介质、电子设备
US10807001B2 (en) * 2017-09-12 2020-10-20 Netease (Hangzhou) Network Co., Ltd. Information processing method, apparatus and computer readable storage medium
CN107837531B (zh) * 2017-09-28 2018-11-23 网易(杭州)网络有限公司 信息处理方法、装置、电子设备及存储介质
CN107992281A (zh) * 2017-10-27 2018-05-04 网易(杭州)网络有限公司 补偿声音信息的视觉显示方法及装置、存储介质、设备
CN108379837A (zh) * 2018-02-01 2018-08-10 网易(杭州)网络有限公司 信息处理方法及装置、存储介质、电子设备
CN108499105B (zh) * 2018-04-16 2022-02-25 腾讯科技(深圳)有限公司 在虚拟环境中进行视角调整的方法、装置及存储介质
CN108619721B (zh) * 2018-04-27 2020-08-11 腾讯科技(深圳)有限公司 虚拟场景中的距离信息显示方法、装置及计算机设备
KR102022604B1 (ko) * 2018-09-05 2019-11-04 넷마블 주식회사 주변 오디오를 시각적 표현하는 인터페이스에 기초하여 게임 서비스를 제공하는 게임 서비스 제공 서버 및 방법

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060094502A1 (en) * 2004-11-02 2006-05-04 Nintendo Co., Ltd. Video game device and video game program
US20070265082A1 (en) * 2006-04-28 2007-11-15 Nst Gesture-based control of multiple game characters and other animated objects
CN105209138A (zh) * 2013-07-31 2015-12-30 株式会社得那 持续存储游戏程序的存储介质以及信息处理装置
CN104740875A (zh) * 2015-04-13 2015-07-01 四川天上友嘉网络科技有限公司 游戏角色移动中的指引方法
CN105597310A (zh) * 2015-12-24 2016-05-25 网易(杭州)网络有限公司 一种游戏控制方法及装置
CN108245888A (zh) * 2018-02-09 2018-07-06 腾讯科技(深圳)有限公司 虚拟对象控制方法、装置及计算机设备

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112562051A (zh) * 2020-11-30 2021-03-26 腾讯科技(深圳)有限公司 虚拟对象显示方法、装置、设备及存储介质
CN112562051B (zh) * 2020-11-30 2023-06-27 腾讯科技(深圳)有限公司 虚拟对象显示方法、装置、设备及存储介质
CN113298909A (zh) * 2021-04-13 2021-08-24 网易(杭州)网络有限公司 虚拟道路的生成方法、装置、存储介质和处理器
CN113298909B (zh) * 2021-04-13 2023-07-25 网易(杭州)网络有限公司 虚拟道路的生成方法、装置、存储介质和处理器
CN113694530A (zh) * 2021-08-31 2021-11-26 网易(杭州)网络有限公司 虚拟角色移动控制方法、装置、电子设备及存储介质

Also Published As

Publication number Publication date
CN108245888A (zh) 2018-07-06
US20200316473A1 (en) 2020-10-08
US11565181B2 (en) 2023-01-31

Similar Documents

Publication Publication Date Title
WO2019153824A1 (zh) 虚拟对象控制方法、装置、计算机设备及存储介质
WO2019153750A1 (zh) 用于对虚拟环境进行视角切换的方法、装置、设备及存储介质
US20200316470A1 (en) Method and terminal for displaying distance information in virtual scene
US20200357163A1 (en) Method and apparatus for adjusting viewing angle in virtual environment, and readable storage medium
CN108710525B (zh) 虚拟场景中的地图展示方法、装置、设备及存储介质
CN110276840B (zh) 多虚拟角色的控制方法、装置、设备及存储介质
CN109529319B (zh) 界面控件的显示方法、设备及存储介质
CN110917616B (zh) 虚拟场景中的方位提示方法、装置、设备及存储介质
CN111701238A (zh) 虚拟画卷的显示方法、装置、设备及存储介质
CN108694073B (zh) 虚拟场景的控制方法、装置、设备及存储介质
CN108786110B (zh) 虚拟环境中的瞄准镜显示方法、设备及存储介质
CN108536295B (zh) 虚拟场景中的对象控制方法、装置及计算机设备
CN109407959B (zh) 虚拟场景中的虚拟对象控制方法、设备以及存储介质
WO2020156252A1 (zh) 在虚拟环境中建造建筑物的方法、装置、设备及存储介质
KR102565711B1 (ko) 관점 회전 방법 및 장치, 디바이스 및 저장 매체
CN111273780B (zh) 基于虚拟环境的动画播放方法、装置、设备及存储介质
JP2023139033A (ja) 視点回転の方法、装置、端末およびコンピュータプログラム
CN112245912B (zh) 虚拟场景中的声音提示方法、装置、设备及存储介质
CN108744511B (zh) 虚拟环境中的瞄准镜显示方法、设备及存储介质
JP2022524802A (ja) 仮想環境におけるスコープの適用方法及び装置並びにコンピュータ装置及びプログラム
US20220291791A1 (en) Method and apparatus for determining selected target, device, and storage medium
CN112755517B (zh) 虚拟对象控制方法、装置、终端及存储介质
CN110052030B (zh) 虚拟角色的形象设置方法、装置及存储介质
CN112717409B (zh) 虚拟车辆控制方法、装置、计算机设备及存储介质
CN113769397B (zh) 虚拟物体的设置方法、装置、设备、介质及程序产品

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18905585

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18905585

Country of ref document: EP

Kind code of ref document: A1