WO2019096055A1 - 对象选择方法、终端和存储介质 - Google Patents


Info

Publication number
WO2019096055A1
Authority
WO
WIPO (PCT)
Prior art keywords
candidate
skill
virtual object
virtual
offset
Prior art date
Application number
PCT/CN2018/114529
Other languages
English (en)
French (fr)
Inventor
Wu Dong (吴东)
Original Assignee
Tencent Technology (Shenzhen) Company Limited (腾讯科技(深圳)有限公司)
Priority date
Filing date
Publication date
Application filed by Tencent Technology (Shenzhen) Company Limited
Publication of WO2019096055A1
Priority to US 16/661,270 (granted as US 11090563 B2)

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/40 Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
    • A63F13/42 Processing input control signals of video game devices by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
    • A63F13/422 Processing input control signals of video game devices by mapping the input signals into game commands automatically for the purpose of assisting the player, e.g. automatic braking in a driving game
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/53 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
    • A63F13/533 Controlling the output signals involving additional visual information for prompting the player, e.g. by displaying a game menu
    • A63F13/537 Controlling the output signals involving additional visual information using indicators, e.g. showing the condition of a game character on screen
    • A63F13/55 Controlling game characters or game objects based on the game progress
    • A63F13/57 Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/10 Features characterized by input arrangements for converting player-generated signals into game device control signals
    • A63F2300/1068 Input arrangements being specially adapted to detect the point of contact of the player on a surface, e.g. floor mat, touch pad
    • A63F2300/1075 Input arrangements using a touch screen
    • A63F2300/60 Methods for processing data by generating or executing the game program
    • A63F2300/63 Methods for processing data for controlling the execution of the game in time

Definitions

  • the present application relates to the field of communications technologies, and in particular, to an object selection method, a terminal, and a storage medium.
  • Various embodiments of the present application provide an object selection method, a terminal, and a storage medium.
  • An embodiment of the present application provides an object selection method, including:
  • displaying a graphical user interface, the graphical user interface including at least one virtual object;
  • determining, from the at least one virtual object according to an object selection instruction, candidate virtual objects for skill release, to obtain a candidate object set;
  • acquiring an offset parameter of each candidate virtual object in the candidate object set relative to a reference object; and
  • selecting, from the candidate object set according to the offset parameter, a target virtual object for the skill release.
  • An embodiment of the present application further provides an object selection apparatus, including:
  • a display unit configured to display a graphical user interface, the graphical user interface including at least one virtual object;
  • a determining unit configured to determine, from the at least one virtual object according to an object selection instruction, candidate virtual objects for skill release, to obtain a candidate object set;
  • a parameter obtaining unit configured to acquire an offset parameter of each candidate virtual object in the candidate object set relative to a reference object; and
  • a selecting unit configured to select, from the candidate object set according to the offset parameter, a target virtual object for the skill release.
  • An embodiment of the present application further provides a terminal, including a memory and a processor, the memory storing computer readable instructions that, when executed by the processor, cause the processor to perform steps including:
  • selecting, from the candidate object set according to the offset parameter, a target virtual object for the skill release.
  • An embodiment of the present application further provides a non-transitory computer readable storage medium storing computer readable instructions that, when executed by one or more processors, cause the one or more processors to perform steps including:
  • selecting, from the candidate object set according to the offset parameter, a target virtual object for the skill release.
  • FIG. 1 is a schematic diagram of a scenario of an information interaction system provided by an embodiment of the present application.
  • FIG. 2 is a schematic flowchart of an object selection method provided by an embodiment of the present application.
  • FIG. 3 is a schematic diagram of a first game interface provided by an embodiment of the present application.
  • FIG. 4 is a schematic diagram of a camera field of view provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of selection of candidate virtual objects according to an embodiment of the present application.
  • FIG. 6 is a schematic diagram of a skill operation area provided by an embodiment of the present application.
  • FIG. 7 is a schematic diagram of a second game interface provided by an embodiment of the present application.
  • FIG. 8 is a schematic diagram of offset parameters of candidate virtual objects relative to a reference object according to an embodiment of the present application.
  • FIG. 9 is a schematic diagram of a third game interface provided by an embodiment of the present application.
  • FIG. 10 is a schematic diagram of a fourth game interface provided by an embodiment of the present application.
  • FIG. 11 is a schematic diagram of a fifth game interface provided by an embodiment of the present application.
  • FIG. 12 is a schematic diagram of a sixth game interface provided by an embodiment of the present application.
  • FIG. 13 is a schematic diagram of a seventh game interface provided by an embodiment of the present application.
  • FIG. 14 is a schematic diagram of an eighth game interface provided by an embodiment of the present application.
  • FIG. 15 is a schematic diagram of a ninth game interface provided by an embodiment of the present application.
  • FIG. 16 is a schematic diagram of a tenth game interface provided by an embodiment of the present application.
  • FIG. 17 is another schematic flowchart of an object selection method provided by an embodiment of the present application.
  • FIG. 18 is still another schematic flowchart of an object selection method according to an embodiment of the present application.
  • FIG. 19 is a first schematic structural diagram of an object selection apparatus according to an embodiment of the present application.
  • FIG. 20 is a second schematic structural diagram of an object selection apparatus according to an embodiment of the present application.
  • FIG. 21 is a third schematic structural diagram of an object selection apparatus according to an embodiment of the present application.
  • FIG. 22 is a schematic structural diagram of a terminal according to an embodiment of the present application.
  • an embodiment of the present application provides an information interaction system, including: a terminal 10 and a server 20, where the terminal 10 and the server 20 are connected through a network 30.
  • the network 30 includes network entities such as routers and gateways, which are not illustrated in the figure.
  • the terminal 10 can perform information interaction with the server 20 through a wired network or a wireless network to download an application, an application update data packet, application-related data information or service information, and the like from the server 20.
  • the terminal 10 can be a mobile phone, a tablet computer, a notebook computer, and the like.
  • FIG. 1 is an example in which the terminal 10 is a mobile phone.
  • The terminal 10 can be installed with various applications required by the user, such as applications having entertainment functions (for example, a video application, an audio playback application, a game application, or reading software) and applications having service functions (for example, a map navigation application or a group-purchase application).
  • the terminal 10 downloads the game application, the game application update data package, the data information related to the game application, the service information, and the like from the server 20 via the network 30 as needed.
  • In the embodiments of the present application, after the graphical user interface including at least one virtual object is displayed, candidate virtual objects for skill release may be determined from the at least one virtual object to obtain a candidate object set; an offset parameter of each candidate virtual object in the candidate object set relative to a reference object is acquired; and a target virtual object for the skill release is selected from the candidate object set according to the offset parameter.
  • Because the target virtual object for the skill release can be selected in a fuzzy manner based on the offset parameters of the virtual objects relative to the reference object, the user can quickly determine the object of the skill release without performing a precise skill release operation. This improves the accuracy of the interaction result, increases the efficiency with which interaction results are generated, and saves terminal resources.
  • FIG. 1 is only a system architecture example for implementing the embodiment of the present application.
  • the embodiment of the present application is not limited to the system structure described in FIG. 1 above, and various embodiments of the present application are proposed based on the system architecture.
  • An embodiment of the present application provides an object selection method, which may be executed by a processor of a terminal. As shown in FIG. 2, the object selection method includes:
  • 201 Display a graphical user interface, the graphical user interface including at least one virtual object.
  • a graphical user interface is obtained by executing a software application on a processor of the terminal and rendering on the display of the terminal.
  • at least one virtual object can also be rendered in the graphical user interface.
  • the graphical user interface can include various scene images, such as game screens, social screens, and the like.
  • the scene picture can be a two-dimensional picture or a three-dimensional picture.
  • the virtual object is a virtual resource object, which may be a virtual object in the graphical user interface.
  • The virtual objects can include various types of objects in the graphical user interface, for example, virtual character objects representing characters (such as a user character object representing a player, or a character object representing a robot), and objects representing the background, such as buildings, trees, clouds, and defense towers.
  • the virtual object may be a virtual character object.
  • For example, when the graphical user interface is the game interface of an FPS (First-Person Shooter) game, the game interface includes multiple virtual character objects, which may be virtual enemy character objects, that is, enemy targets.
  • The game interface may further include: virtual objects representing the background, such as buildings, the sky, and clouds; objects representing the user state (such as a health value or a vitality value); objects representing the user's skills, equipment, and the like; and direction button objects controlling the movement of the user's position, such as a circular virtual joystick.
  • the virtual objects may include: virtual character objects, virtual background objects, and the like.
  • In an embodiment, an object selection instruction may be acquired, and candidate virtual objects for skill release are then determined from the at least one virtual object according to the object selection instruction.
  • the candidate object set may include one or more candidate virtual objects.
  • For example, candidate virtual character objects for skill release may be determined from the at least one virtual character object to obtain the candidate object set.
  • the candidate virtual objects may be determined based on the field of view of the camera component of the scene scene in the graphical user interface.
  • the camera component is used to render a corresponding scene picture in a graphical user interface.
  • The camera component can be a rendering component in Unity that renders the corresponding picture in the graphical user interface based on the configured height, width, and field of view.
  • A camera component is a tool through which the player captures and displays the scene picture.
  • There can be any number of camera components in a scene, for example two camera components, and they can be set to render in any order.
  • That is, the step of “determining candidate virtual objects for skill release from the at least one virtual object according to the object selection instruction” may include: determining the candidate virtual objects for skill release from the at least one virtual object according to the field of view of the camera component.
  • The field of view of the camera component, also referred to as the FOV (Field of View), is the angular range within which the camera component renders the scene picture.
  • For example, the field of view angle may be 40 degrees, 50 degrees, and the like.
  • Although the camera component is not a physical camera, it implements the same function as a physical camera.
  • The field of view of the camera component is equivalent to the field of view of a physical camera.
  • The angle of view of a physical camera is the angle formed, with the camera lens as the vertex, by the two edges of the maximum range through which the object image of the target can pass. Taking an FPS game as an example, referring to FIG. 4, the field of view of the camera component is the shooter's angle of view, which is 2θ.
  • selecting the candidate virtual object through the field of view of the camera component can improve the accuracy of the target object selection, the accuracy of the output of the interaction result, and the user experience.
  • Virtual objects outside the field of view may be culled, and the candidate virtual objects for skill release are determined from the virtual objects within the field of view; that is, virtual objects located within the field of view of the camera component are selected as the candidate virtual objects.
  • For example, referring to FIG. 5, it is assumed that there are virtual objects A, B, and C, where A and B are located within the field of view and C is outside it. In this case, virtual objects A and B are determined as the candidate virtual objects for skill release.
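As a minimal sketch of the culling step above, the following Python snippet keeps only the objects whose bearing from the camera falls within the field of view. The 2-D coordinates, the facing direction, and the 90-degree field of view are illustrative assumptions, not values from the embodiment.

```python
import math

def cull_by_field_of_view(camera_pos, facing_deg, fov_deg, objects):
    """Keep only objects whose bearing from the camera lies within the
    field of view; fov_deg is the full angle of view, i.e. 2*theta."""
    half = fov_deg / 2.0
    candidates = []
    for name, (x, y) in objects.items():
        bearing = math.degrees(math.atan2(y - camera_pos[1], x - camera_pos[0]))
        # smallest signed difference between the bearing and the facing direction
        diff = (bearing - facing_deg + 180.0) % 360.0 - 180.0
        if abs(diff) <= half:
            candidates.append(name)
    return candidates

# Shooter at the origin facing along +x with a 90-degree field of view;
# C sits behind the shooter and is culled, as in FIG. 5.
objects = {"A": (5.0, 1.0), "B": (4.0, -2.0), "C": (-3.0, 0.0)}
print(cull_by_field_of_view((0.0, 0.0), 0.0, 90.0, objects))  # ['A', 'B']
```

The same test applied per frame keeps the candidate object set in sync with what the camera component actually renders.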
  • In the user-operation trigger mode, a skill operation area may be set in the graphical user interface, and when it is detected that the user performs a corresponding operation in the skill operation area, generation of the object selection instruction may be triggered.
  • To facilitate the skill release operation, avoid a large number of misoperations, and improve the accuracy of the interaction process, a skill object may also be set in the skill operation area, so that the user can trigger generation of the object selection instruction by operating the skill object. That is, the step of “determining candidate virtual objects for skill release from the at least one virtual object according to the object selection instruction” may include:
  • when a skill release trigger operation on the skill object is detected, triggering generation of the object selection instruction; and
  • determining candidate virtual objects for skill release from the at least one virtual object according to the object selection instruction.
  • the skill object may be an object that represents a skill in a graphical user interface, such as a skill button.
  • The skill release trigger operation on the skill object may include pressing the skill object, for example, long-pressing it.
  • For example, when the user long-presses the skill object, the terminal triggers generation of the object selection instruction.
  • In an embodiment, auxiliary control objects, such as a virtual joystick object and an operation aperture, can also be provided to help the user quickly and accurately trigger the skill release. Specifically, when the skill release trigger operation on the skill object is detected, a skill release auxiliary control object is displayed at a preset position in the graphical user interface, and generation of the object selection instruction is triggered; then, according to an operation on the auxiliary control object, the skill release position of the skill object is adjusted correspondingly within the graphical user interface, and the object selection instruction is triggered again.
  • In this way, human-computer interaction and the accuracy of the interaction result are improved.
  • In an embodiment, the skill release auxiliary control object may include a skill release control aperture object and a virtual joystick object located within the radiation range of the skill release control aperture object.
  • When an operation on the virtual joystick object is detected, the skill release position of the skill object is adjusted correspondingly in the graphical user interface, the object selection instruction is re-triggered, and the process returns to the step of determining the candidate virtual objects.
  • For example, when a skill release trigger operation acting on skill object 1 is acquired, a skill release auxiliary control object is rendered; the skill release auxiliary control object includes a skill release control aperture object 31 and a virtual joystick object 32.
  • When it is detected that the virtual joystick object 32 moves following a drag of the skill release operation gesture and deviates from the center of the skill release control aperture object 31, the skill release control operation is triggered, while the skill release control aperture object 31 remains in the same position.
  • That is, when the user presses skill object 1, the terminal triggers generation of the object selection instruction; the user can then drag the virtual joystick object 32 within the skill release control aperture object 31 to adjust the skill release position of the skill object and regenerate the object selection instruction. In other words, the object selection method of this embodiment of the present application can be triggered by dragging the virtual joystick object 32.
  • The skill release control aperture object 31 can be wheel-shaped, square, or triangular, and its specific shape can be set according to actual needs.
  • The virtual joystick object 32 can be circular, square, or ring-shaped, and can be referred to as a joystick. In practical applications, the shapes of the skill release control aperture object 31 and the virtual joystick object 32 may be the same, for convenience of the skill release operation.
  • In the automatic trigger mode, the terminal may automatically trigger the object selection instruction in real time, for example, automatically triggering generation of the object selection instruction at regular intervals.
  • In an embodiment, a continuous release mode (corresponding to the automatic trigger mode) may be set for the skill release; in the continuous release mode, the terminal automatically triggers generation of the object selection instruction, with no user operation required. That is, the step of “determining candidate virtual objects for skill release from the at least one virtual object according to the object selection instruction” may include:
  • the object selection instruction is automatically triggered
  • a candidate virtual object of skill release is determined from the at least one virtual object according to the object selection instruction.
  • When the skill release is in the continuous release mode, after the target virtual object of the skill release is selected, the skill may be executed on the target virtual object when a skill release confirmation operation on the skill object is detected.
  • In an embodiment, the skill release trigger operation may further include clicking the skill object. For example, referring to FIG. 7, when the skill release is in the continuous release mode, that is, the continuous trigger mode, clicking the skill button 33 triggers generation of an object selection instruction and thereby triggers object selection for the skill release. In this case, the skill release speed is increased, thereby increasing the output speed of the interaction result.
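The continuous release mode just described can be sketched as a simple timer loop that generates object selection instructions with no user operation; the interval, the tick count, and the callback name are illustrative assumptions rather than values from the embodiment.

```python
import time

def continuous_release(generate_instruction, interval_s=0.5, ticks=3):
    """Continuous release mode: automatically trigger generation of the
    object selection instruction at regular intervals, with no user
    operation required."""
    for _ in range(ticks):
        generate_instruction()
        time.sleep(interval_s)

# Record each automatically generated instruction.
events = []
continuous_release(lambda: events.append("object_selection_instruction"),
                   interval_s=0.0, ticks=3)
print(len(events))  # 3
```

In a real game loop the trigger would typically run on the engine's update tick rather than a blocking sleep.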
  • The reference object may be a virtual object in the graphical user interface and can be set according to actual needs, for example, an object in the background, or the user character object representing the user.
  • In an embodiment, the reference object may also be the camera component.
  • an offset parameter of the candidate virtual object relative to the reference object in the graphical user interface may be acquired.
  • The offset parameter is offset information of the candidate virtual object, such as a virtual character object, relative to the reference object, and may include at least one of an offset angle and an offset distance. In other embodiments, the offset parameter may also include an offset direction, and the like.
  • The offset angle is the offset angle of the candidate virtual object relative to the reference object in a preset plane or in the three-dimensional scene picture; the preset plane may include the screen where the graphical user interface is located.
  • For example, referring to FIG. 8, the graphical user interface contains a target point A representing a virtual object a, a target point B representing a virtual object b, and a target point C representing a reference object c. The offset angle of the target point A relative to the target point C is α, and the offset angle of the target point B relative to the target point C is α + β.
  • The offset distance may be the offset distance of the candidate virtual object relative to the reference object in the preset plane or in the three-dimensional scene picture, that is, the distance between the candidate virtual object and the reference object in the preset plane or the three-dimensional scene picture; the preset plane can include the screen where the graphical user interface is located.
  • For example, referring to FIG. 8, the offset distance of the target point A relative to the target point C is La, and the offset distance of the target point B relative to the target point C is Lb.
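The offset angle and offset distance can be computed in the screen plane as follows. This is a sketch under stated assumptions: points are 2-D coordinates, and the angle is measured relative to an assumed facing direction of the reference object, since the embodiment does not fix a coordinate system.

```python
import math

def offset_parameters(candidate, reference, facing_deg=0.0):
    """Offset distance and offset angle (relative to the reference
    object's facing direction) of a candidate in the screen plane."""
    dx = candidate[0] - reference[0]
    dy = candidate[1] - reference[1]
    distance = math.hypot(dx, dy)
    bearing = math.degrees(math.atan2(dy, dx))
    # fold the angle into [0, 180] so it is a magnitude, not a direction
    angle = abs((bearing - facing_deg + 180.0) % 360.0 - 180.0)
    return angle, distance

# Reference point C at the origin facing +x; A and B use illustrative coordinates.
C = (0.0, 0.0)
A = (3.0, 4.0)
B = (0.0, 2.0)
print(offset_parameters(A, C))  # roughly (53.13, 5.0)
print(offset_parameters(B, C))  # roughly (90.0, 2.0)
```

The returned pair corresponds to the (offset angle, offset distance) of FIG. 8, e.g. (α, La) for target point A.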
  • the target virtual object of the skill release may be selected from the candidate object set according to the offset parameter of each candidate virtual object in the candidate object set relative to the reference object.
  • the target virtual object of the skill release may be one or more, and may be configured according to actual needs.
  • For example, the target virtual object for the skill release may be selected from the candidate object set based on the magnitudes of the offset parameter values, for example, selecting the candidate virtual object with the smallest offset angle and/or the smallest offset distance as the target virtual object.
  • the currently selected target virtual object may be prompted in the graphical user interface.
  • the selected identifier can be displayed on the target virtual object to remind the user that the target virtual object is currently selected.
  • the selected identifier may include a marquee, a color marker, and the like.
  • For example, when the object selection method of this embodiment is used to select the left virtual object as the release object of the skill object, a selection box is displayed on the left virtual object to remind the user.
  • In an embodiment of the present application, different object selection manners may be adopted based on the number of objects included in the candidate object set, as follows:
  • When the candidate object set includes at least two candidate virtual objects, one or at least two target virtual objects for the skill release may be selected from the candidate object set according to the offset parameter of each candidate virtual object relative to the reference object.
  • In an embodiment, the step of “selecting a target virtual object for the skill release from the candidate object set according to the offset parameter” may include: setting a selection weight of each candidate virtual object based on its offset parameter, and then selecting one or at least two target virtual objects for the skill release from the candidate object set according to the selection weights.
  • The selection weight of a candidate virtual object may be the probability or proportion with which the virtual object is selected as the skill release object, for example, 30%.
  • The embodiment of the present application may set the selection weight of a candidate virtual object based on the principle that the larger the offset parameter value, the smaller the weight. For example, in an embodiment, when the offset parameter includes the offset angle, the selection weight may be set according to the rule that the larger the offset angle, the smaller the weight, and the smaller the offset angle, the larger the weight. For example, referring to FIG. 8, the offset angle of the virtual object a is smaller than the offset angle of the virtual object b; therefore, the selection weight of the virtual object a is higher than the selection weight of the virtual object b.
  • the selection weight of the candidate virtual object may be set according to the rule that the larger the offset distance is, the smaller the weight is, and the smaller the offset distance is, the larger the weight is. For example, referring to FIG. 8, the offset distance of the virtual object a is smaller than the offset distance of the virtual object b; therefore, the selection weight of the virtual object a is higher than the selection weight of the virtual object b.
  • the weight reference parameter of the candidate virtual object may be acquired according to the offset angle and the offset distance of the candidate virtual object, and then the selection weight of the candidate virtual object may be set based on the weight reference parameter.
  • the weight reference parameter indicates a parameter based on which the selection weight is acquired or set, and the parameter may be a custom parameter.
  • the weight reference parameter can be obtained in various manners. For example, the weighted sum of the offset angle and the offset distance can be calculated, and the weighted sum is used as the weight reference parameter.
  • for example, the offset angle of the target point A with respect to the target point C is α, and the offset distance is La.
  • the selection weight of the candidate virtual object may be set according to the rule that the larger the weight reference parameter is, the smaller the weight is, and the smaller the weight reference parameter is, the larger the weight is.
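The rule above (larger weight reference parameter, smaller selection weight) can be sketched as follows. The function names, the inverse mapping 1/(1+r), and the normalization to a sum of 1 are illustrative assumptions, not details from this application:

```python
def weight_reference(angle, distance, p1=1.0, p2=1.0):
    """Weighted sum of offset angle and offset distance (p1, p2 are hypothetical)."""
    return p1 * angle + p2 * distance

def selection_weights(candidates, p1=1.0, p2=1.0):
    """Map each candidate's offset parameters to a selection weight.

    A larger weight reference parameter yields a smaller weight; the
    weights are normalized to sum to 1 (the normalization scheme is an
    assumption for illustration).
    """
    refs = {name: weight_reference(a, d, p1, p2) for name, (a, d) in candidates.items()}
    inv = {name: 1.0 / (1.0 + r) for name, r in refs.items()}
    total = sum(inv.values())
    return {name: v / total for name, v in inv.items()}

# Candidate virtual objects with (offset_angle_deg, offset_distance) pairs.
weights = selection_weights({"a": (10.0, 2.0), "b": (30.0, 5.0)})
# Object a has the smaller offsets, so it receives the larger weight.
```

Any monotonically decreasing mapping from the weight reference parameter to the weight would satisfy the rule; the reciprocal form is only one choice.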
  • when configured to select one target virtual object, the candidate virtual object with the highest selection weight may be selected as the target virtual object released by the skill; or
  • a candidate virtual object whose selection weight is between the highest and the lowest may also be selected as the target virtual object released by the skill.
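The single-target configuration described above amounts to an argmax over the selection weights; a minimal sketch (the function name and weight values are illustrative):

```python
def select_target(weights):
    """Pick the candidate with the highest selection weight as the skill's target."""
    return max(weights, key=weights.get)

# Weights as in the walkthrough below: object A at 0.7, object B at 0.3.
target = select_target({"A": 0.7, "B": 0.3})
```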
  • the game interface includes virtual objects A and B; when the object selection instruction is triggered to be generated, the skill release control aperture object 31 and the virtual joystick object 32 are displayed at a predetermined position.
  • the terminal may determine, according to the object selection instruction, that the virtual objects A and B are candidate virtual objects released by the skill; then, the terminal acquires the offset parameters (such as the offset angle and/or the offset distance) of the objects A and B relative to the reference object (such as a game camera), and calculates, according to the offset parameters of the objects A and B, that the selection weight of the object A is 0.7 and the selection weight of the object B is 0.3.
  • the candidate virtual object A can then be selected as the target virtual object released by the skill, and a selection box is displayed on the virtual object A of the game interface.
  • the skill release position will be adjusted accordingly within the game user interface. Since the skill release position is related to the game camera, in practice, the deflection or movement of the game camera can be adjusted according to the drag operation of the virtual joystick object 32. At this time, the skill release position will change.
  • a crosshair such as a firearm can be set to represent the skill release position, and when the game camera changes, the crosshair of the firearm in the game interface will also change.
  • when the skill release position changes, the terminal will reacquire the offset parameters (such as the offset angle and/or the offset distance) of the objects A and B relative to the reference object (such as a game camera), and calculate, according to the offset parameters of the objects A and B, that the selection weight of the object A is 0.4 and the selection weight of the object B is 0.6. At this time, the candidate virtual object B is selected as the target virtual object released by the skill, and a selection box is displayed on the virtual object B of the game interface.
  • the terminal may determine, according to the object selection instruction, that the virtual objects A and B are candidate virtual objects released by the skill; then, the terminal acquires offset parameters of the objects A and B relative to the reference object (such as a game camera) (eg, an offset angle and / or offset distance), the terminal calculates the selection weight of the object A to be 0.9 according to the offset parameters of the objects A and B, and the selection weight of the object B is 0.1.
  • the candidate virtual object A can then be selected as the target virtual object released by the skill, and a selection box is displayed on the virtual object A of the game interface.
  • when configured to select multiple target virtual objects, after the selection weight of each candidate virtual object is acquired, the first several candidate virtual objects with the highest selection weights may be selected as the target virtual objects released by the skill.
  • the game interface includes virtual objects A, B, and C; when the object selection instruction is triggered to be generated, the skill release control aperture object 31 and the virtual joystick object 32 are displayed at a predetermined position.
  • the terminal may determine, according to the object selection instruction, that the virtual objects A, B, and C are candidate virtual objects released by the skill; then, the terminal acquires the offset parameters (such as the offset angle and/or the offset distance) of the objects A, B, and C relative to the reference object (such as a game camera), and calculates, according to the offset parameters of the objects A, B, and C, that the selection weight of the object A is 0.5, the selection weight of the object B is 0.3, and the selection weight of the object C is 0.2.
  • the candidate virtual objects A and B are selected as the target virtual objects released by the skill, and selection boxes are displayed on the virtual objects A and B of the game interface.
  • the skill release position will be adjusted accordingly within the game user interface.
  • the terminal will reacquire the offset parameters (such as the offset angle and/or the offset distance) of the objects A, B, and C relative to the reference object (such as a game camera), and calculate, according to the offset parameters of the objects A, B, and C, that the selection weight of the object A is 0.2, the selection weight of the object B is 0.3, and the selection weight of the object C is 0.5.
  • the candidate virtual objects B and C are then selected as the target virtual objects released by the skill, and selection boxes are displayed on the virtual objects B and C of the game interface.
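The multi-target walkthrough above can be sketched as a top-k selection by weight; the helper name and the choice k=2 are illustrative assumptions:

```python
def select_targets(weights, k=2):
    """Pick the k candidates with the highest selection weights as skill targets."""
    return sorted(weights, key=weights.get, reverse=True)[:k]

# First pass: A=0.5, B=0.3, C=0.2 -> objects A and B are selected.
first = select_targets({"A": 0.5, "B": 0.3, "C": 0.2}, k=2)
# After the skill release position changes: A=0.2, B=0.3, C=0.5 -> B and C win.
second = select_targets({"A": 0.2, "B": 0.3, "C": 0.5}, k=2)
```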
  • the candidate object set includes one candidate virtual object
  • the step of "selecting the candidate virtual object as the target virtual object of the skill release" may include: determining whether the offset parameter of the candidate virtual object relative to the reference object satisfies a preset condition, and if yes, selecting the candidate virtual object as the target virtual object of the skill release.
  • the preset condition may be configured according to actual needs of the user.
  • the preset condition may include: the offset angle is smaller than the preset angle; that is, the step "determining whether the offset parameter of the candidate virtual object relative to the reference object satisfies the preset condition" may include:
  • the preset condition may include: the offset distance is less than the preset distance; that is, the step “determining whether the offset parameter satisfies the preset condition” may include:
  • the preset condition may include: the offset angle is within the preset angle range, and the offset distance is within the preset distance range; that is, the step "determining whether the offset parameter satisfies the preset condition" may include:
  • Determining the object released by the skill through various offset parameters can improve the accuracy of the object selection, so that the selected object satisfies the user's needs, thereby improving the human-computer interaction and the accuracy of the interaction result.
  • the preset angle, the preset distance, the preset angle range, and the preset distance range may all be set according to actual needs.
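The three variants of the preset condition can be sketched as follows; the threshold values (45 degrees, 50 units) are illustrative placeholders for presets that, as stated above, are set according to actual needs:

```python
def angle_ok(angle, preset_angle=45.0):
    """Preset condition variant 1: offset angle smaller than the preset angle."""
    return angle < preset_angle

def distance_ok(distance, preset_distance=50.0):
    """Preset condition variant 2: offset distance smaller than the preset distance."""
    return distance < preset_distance

def in_ranges(angle, distance, angle_range=(0.0, 45.0), distance_range=(0.0, 50.0)):
    """Preset condition variant 3: angle and distance both within preset ranges."""
    return (angle_range[0] <= angle <= angle_range[1]
            and distance_range[0] <= distance <= distance_range[1])

def select_single_candidate(angle, distance):
    """Select the lone candidate only when its offsets satisfy the condition."""
    return angle_ok(angle) and distance_ok(distance)
```

Checking several offset parameters together, rather than any single one, is what improves the accuracy of the object selection.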
  • the terminal determines, according to the object selection instruction, that the field of view of the game camera contains only the virtual object A; the candidate object set thus includes only the virtual object A. The terminal can acquire the offset parameter of the virtual object A relative to the camera; when the offset parameter satisfies the preset condition, the virtual object A may be selected as the target virtual object released by the skill, and a selection box is displayed on the virtual object A to remind the user.
  • the skill release position when the user drags the virtual joystick object 32, such as dragging to the left of the skill release control aperture 31, the skill release position will be adjusted accordingly within the game user interface and the object selection command will be retriggered. Since the skill release position is related to the game camera, in practice, the deflection or movement of the game camera can be adjusted according to the drag operation of the virtual joystick object 32. At this time, the skill release position will change. In the game interface, a crosshair such as a firearm can be set to represent the skill release position, and when the game camera changes, the crosshair of the firearm in the game interface will also change.
  • the terminal determines, according to the object selection instruction, that the field of view of the game camera contains only the virtual object A; the candidate object set thus includes only the virtual object A. The terminal reacquires the offset parameter of the virtual object A relative to the reference object, and when the offset parameter satisfies the preset condition, the virtual object A may be selected again as the target virtual object released by the skill, and a selection box is displayed on the virtual object A to remind the user.
  • in an embodiment, the object selection method provided by the embodiment of the present application described above may perform a skill release operation on the target virtual object after the skill release object is selected.
  • when the skill release triggering operation of the skill object is detected, that is, when the object selection instruction is triggered to be generated, the skill release confirmation operation of the skill object may be detected, and the skill release operation of the skill object may be performed on the target virtual object.
  • the skill release confirmation operation may be various.
  • the skill release confirmation operation may include a release operation of the drag operation of the virtual joystick object. That is, when the release operation of the drag operation is detected, the skill release operation of the skill object is performed on the target virtual object.
  • the skill release operation of the skill object is performed for the target virtual object.
  • as can be seen from the above, the embodiment of the present application displays a graphical user interface, where the graphical user interface includes at least one virtual object; determines, according to the object selection instruction, the candidate virtual objects released by the skill from the at least one virtual object to obtain a candidate object set; acquires the offset parameter of each candidate virtual object in the set relative to the reference object; and selects, according to the offset parameter, the target virtual object released by the skill from the candidate object set.
  • the solution can select the target virtual object released by the skill based on the offset parameter of the virtual object relative to the reference object, and can quickly determine the object released by the skill without the user performing a precise skill release operation, thereby improving the accuracy of the interaction result and saving the resources of the terminal.
  • the skill release auxiliary control object may be displayed at a preset position on the graphical user interface; since the skill release control object appears at a default fixed position, the user can respond quickly during the information interaction process, avoiding reaction time wasted searching the graphical user interface.
  • the object selection method of the present application is further described by taking a graphical user interface as an FPS game interface and a virtual object as a virtual person object as an example.
  • the game interface includes a two-dimensional game screen or a three-dimensional game screen.
  • the avatar object is a user character object representing a player user, or a character object representing a robot.
  • the skill object may be an object that represents a skill in a graphical user interface, such as a skill button.
  • the skill release triggering operation may include multiple types, such as pressing, clicking, sliding skill objects, and the like.
  • the accuracy and precision of the interaction process are thereby improved.
  • the skill release assist control object is displayed at a preset position on the game interface, and the object selection instruction is triggered to be generated; the skill release assist control object includes a skill release control aperture object and a virtual joystick object within the radiation range of the skill release control aperture object.
  • the skill release position of the skill object is controlled to be adjusted in the graphical user interface, and the object selection instruction is retriggered to select the target virtual character again.
  • the skill release triggering operation acting on the skill object 1 is acquired, and the skill release assist control object is rendered; the skill release assist control object includes the skill release control aperture object 31 and the virtual joystick object 32.
  • a subsequently triggered skill release control operation causes the skill release control aperture object 31 to remain in position.
  • when it is detected that the virtual joystick object 32 moves following the drag of the skill release operation gesture and deviates from the center of the skill release control aperture object 31, the skill release control operation is triggered, and the skill release control aperture object is kept in the same position.
  • when the user presses the skill object 1, the terminal triggers the generation of the object selection instruction; then the user can drag the virtual joystick object 32 to move within the skill release control aperture object 31 to adjust the skill release position of the skill object and regenerate the object selection instruction. That is, the object selection method of the embodiment of the present application can be triggered by dragging the virtual joystick object 32.
  • the game camera is a component, that is, a camera component, which can be used to render a corresponding scene image in the game interface.
  • the camera component can be a rendering component in Unity that renders the corresponding image in the graphical user interface based on the configured height, width, and field of view.
  • the field of view of the camera component, also referred to as the FOV (Field of view), is the range of angles within which the camera component renders the scene image.
  • the virtual character objects outside the field of view are removed, and the candidate virtual character objects released by the skill are determined from the virtual person objects within the field of view.
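Filtering candidates by the camera's field of view, as described above, can be sketched in 2D; the vector math, function names, and coordinates are assumptions for illustration, not the patent's implementation:

```python
import math

def in_field_of_view(camera_pos, camera_dir, fov_deg, obj_pos):
    """Return True when obj_pos lies within the camera's horizontal FOV cone."""
    dx, dy = obj_pos[0] - camera_pos[0], obj_pos[1] - camera_pos[1]
    angle_to_obj = math.degrees(math.atan2(dy, dx))
    facing = math.degrees(math.atan2(camera_dir[1], camera_dir[0]))
    # Wrap the angular difference into [-180, 180] before comparing.
    diff = abs((angle_to_obj - facing + 180.0) % 360.0 - 180.0)
    return diff <= fov_deg / 2.0

def candidate_set(camera_pos, camera_dir, fov_deg, objects):
    """Remove objects outside the field of view; the rest are candidates."""
    return {name: pos for name, pos in objects.items()
            if in_field_of_view(camera_pos, camera_dir, fov_deg, pos)}

# Camera at the origin facing +x with a 90-degree FOV; only A is in view.
candidates = candidate_set((0.0, 0.0), (1.0, 0.0), 90.0,
                           {"A": (10.0, 1.0), "B": (0.0, 10.0), "C": (-10.0, 0.0)})
```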
  • the offset parameter is offset information of the avatar object relative to the game camera, and the offset parameter may include at least one of an offset angle and an offset distance; in other embodiments, it may also include an offset direction or the like.
  • the offset parameter is offset information of the candidate avatar object, such as the avatar object, with respect to the reference object, and the offset parameter may include at least one of an offset angle and an offset distance. In other embodiments, It may include an offset direction or the like.
  • the offset angle is an offset angle of the candidate avatar object relative to the reference object in the preset plane or the three-dimensional scene picture; the preset plane may include a screen where the graphical user interface is located.
  • a target point A represents the virtual person object a, a target point B represents the virtual person object b, and a target point C represents the reference object c; the offset angle of the target point A with respect to the target point C is α, and the offset angle of the target point B with respect to the target point C is α+β.
  • the offset distance may be the offset distance of the candidate avatar object relative to the reference object in the preset plane or the three-dimensional scene picture, that is, the distance between the candidate avatar object and the reference object in the preset plane or the three-dimensional scene picture; the preset plane may include the screen on which the graphical user interface is located.
  • a target point A represents the virtual person object a, a target point B represents the virtual person object b, and a target point C represents the reference object c; the offset distance of the target point A with respect to the target point C is La, and the offset distance of the target point B with respect to the target point C is Lb.
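A minimal 2D sketch of computing the two offset parameters (the offset angle relative to the reference object's facing direction, and the offset distance); the coordinates and the facing-direction convention are illustrative assumptions:

```python
import math

def offset_parameters(reference_point, facing_deg, target_point):
    """Offset angle and offset distance of a target point relative to the
    reference object, measured in the preset plane."""
    dx = target_point[0] - reference_point[0]
    dy = target_point[1] - reference_point[1]
    angle_to_target = math.degrees(math.atan2(dy, dx))
    # Angular deviation from the reference object's facing direction, in [0, 180].
    offset_angle = abs((angle_to_target - facing_deg + 180.0) % 360.0 - 180.0)
    offset_distance = math.hypot(dx, dy)
    return offset_angle, offset_distance

# Target point C (the reference object) at the origin, facing along +x;
# target point A at (3, 4) gives the classic 3-4-5 distance La = 5.
angle_a, dist_a = offset_parameters((0.0, 0.0), 0.0, (3.0, 4.0))
angle_b, dist_b = offset_parameters((0.0, 0.0), 0.0, (0.0, 6.0))
```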
  • step 306. Determine whether the offset parameter meets a preset condition, and if yes, perform step 307.
  • the preset condition may be configured according to actual needs of the user.
  • the preset condition may include: the offset angle is smaller than the preset angle; that is, the step "determining whether the offset parameter of the candidate virtual character object relative to the reference object satisfies the preset condition" may include:
  • the preset condition may include: the offset distance is less than the preset distance; that is, the step “determining whether the offset parameter satisfies the preset condition” may include:
  • the preset condition may include: the offset angle is within the preset angle range, and the offset distance is within the preset distance range; that is, the step "determining whether the offset parameter satisfies the preset condition" may include:
  • the preset angle, the preset distance, the preset angle range, and the preset distance range may all be set according to actual needs.
  • the object selection instruction is triggered to be generated, and at the same time, the skill release control aperture object 31 and the virtual joystick object 32 are displayed at predetermined positions; the terminal determines, according to the object selection instruction, that the field of view of the game camera contains only the avatar object A, so the candidate object set contains only the avatar object A. The terminal can acquire the offset parameter (offset angle and/or offset distance) of the avatar object A relative to the camera; when the offset parameter satisfies the preset condition, the virtual character object A may be selected as the target virtual character object released by the skill, and a selection box is displayed on the virtual character object A to remind the user.
  • a selection box may also be displayed on the target avatar object to remind the user.
  • the skill release confirmation operation may be various.
  • the skill release confirmation operation may include a release operation of the drag operation of the virtual joystick object. That is, when the release operation of the drag operation is detected, the skill release operation of the skill object is performed on the target avatar object.
  • the skill release operation of the skill object is performed for the target avatar object.
  • as can be seen from the above, the embodiment of the present application renders at least one avatar object in the game interface, determines, according to the object selection instruction, the candidate avatar objects released by the skill from the at least one avatar object, acquires the offset parameter of each candidate avatar object relative to the game camera, and selects, according to the offset parameter, the target avatar object released by the skill from the candidate object set.
  • the solution can select the target virtual character object released by the skill based on the offset parameter of the virtual character object relative to the reference object, and can quickly determine the object released by the skill without the user performing an accurate skill release operation, thereby improving the accuracy of the interaction result and saving the resources of the terminal.
  • the object selection method of the present application is further described by taking a graphical user interface as an FPS game interface and a virtual object as a virtual person object as an example.
  • an object selection method is as follows:
  • the game interface includes a two-dimensional game screen or a three-dimensional game screen.
  • the avatar object is a user character object representing a player user, or a character object representing a robot.
  • the skill object may be an object that represents a skill in a graphical user interface, such as a skill button.
  • the skill release triggering operation may include multiple types, such as pressing, clicking, sliding skill objects, and the like.
  • the accuracy and precision of the interaction process are thereby improved.
  • the skill release assist control object is displayed at a preset position on the game interface, and the object selection instruction is triggered to be generated; the skill release assist control object includes a skill release control aperture object and a virtual joystick object within the radiation range of the skill release control aperture object.
  • the skill release position of the skill object is controlled to be adjusted in the graphical user interface, and the object selection instruction is retriggered to select the target virtual character again.
  • the skill release triggering operation acting on the skill object 1 is acquired, and the skill release assist control object is rendered; the skill release assist control object includes the skill release control aperture object 31 and the virtual joystick object 32.
  • a subsequently triggered skill release control operation causes the skill release control aperture object 31 to remain in position.
  • when it is detected that the virtual joystick object 32 moves following the drag of the skill release operation gesture and deviates from the center of the skill release control aperture object 31, the skill release control operation is triggered, and the skill release control aperture object is kept in the same position.
  • when the user presses the skill object 1, the terminal triggers the generation of the object selection instruction; then the user can drag the virtual joystick object 32 to move within the skill release control aperture object 31 to adjust the skill release position of the skill object and regenerate the object selection instruction. That is, the object selection method of the embodiment of the present application can be triggered by dragging the virtual joystick object 32.
  • the game camera is a component, that is, a camera component, which can be used to render a corresponding scene image in the game interface.
  • the camera component can be a rendering component in Unity that renders the corresponding image in the graphical user interface based on the configured height, width, and field of view.
  • the field of view of the camera component, also referred to as the FOV (Field of view), is the range of angles within which the camera component renders the scene image.
  • the avatar objects outside the field of view are also removed, and the candidate avatar objects released by the skill are determined from the avatar objects within the field of view.
  • the candidate object set includes at least two candidate virtual character objects.
  • the offset parameter is offset information of the avatar object relative to the game camera, and the offset parameter may include at least one of an offset angle and an offset distance; in other embodiments, it may also include an offset direction or the like.
  • the offset parameter is offset information of the candidate avatar object, such as the avatar object, with respect to the reference object, and the offset parameter may include at least one of an offset angle and an offset distance. In other embodiments, It may include an offset direction or the like.
  • the offset angle is an offset angle of the candidate avatar object relative to the reference object in the preset plane or the three-dimensional scene picture; the preset plane may include a screen where the graphical user interface is located.
  • a target point A represents the virtual person object a, a target point B represents the virtual person object b, and a target point C represents the reference object c; the offset angle of the target point A with respect to the target point C is α, and the offset angle of the target point B with respect to the target point C is α+β.
  • the offset distance may be the offset distance of the candidate avatar object relative to the reference object in the preset plane or the three-dimensional scene picture, that is, the distance between the candidate avatar object and the reference object in the preset plane or the three-dimensional scene picture; the preset plane may include the screen on which the graphical user interface is located.
  • a target point A represents the virtual person object a, a target point B represents the virtual person object b, and a target point C represents the reference object c; the offset distance of the target point A with respect to the target point C is La, and the offset distance of the target point B with respect to the target point C is Lb.
  • the selection weight of the candidate avatar object may be set based on the principle that the larger the offset parameter value is, the smaller the weight is.
  • the selection weight of the candidate avatar object may be set according to the rule that the larger the offset angle is, the smaller the weight is, and the smaller the offset angle is, the larger the weight is.
  • the offset angle of the avatar object a is smaller than the offset angle of the avatar object b; therefore, the selection weight of the avatar object a is higher than the selection weight of the avatar object b.
  • the selection weight of the candidate avatar object may be set according to the rule that the larger the offset distance is, the smaller the weight is, and the smaller the offset distance is, the larger the weight is. For example, referring to FIG. 8, the offset distance of the virtual person object a is smaller than the offset distance of the virtual person object b; therefore, the selection weight of the virtual person object a is higher than the selection weight of the virtual person object b.
  • the weight reference parameter of the candidate avatar object may be acquired according to the offset angle and the offset distance of the candidate avatar object, and then the selection weight of the candidate avatar object may be set based on the weight reference parameter.
  • the weight reference parameter indicates a parameter based on which the selection weight is acquired or set, and the parameter may be a custom parameter.
  • the weight reference parameter can be obtained in various manners. For example, the weighted sum of the offset angle and the offset distance can be calculated, and the weighted sum is used as the weight reference parameter.
  • for example, the offset angle of the target point A with respect to the target point C is α and the offset distance is La; the weight reference parameter may then be the weighted sum p1·α + p2·La, where p1 is the weight value of the offset angle and p2 is the weight value of the offset distance.
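The weighted sum described above can be written as p1·α + p2·La; a one-line sketch (the numeric values of p1, p2, α, and La are illustrative):

```python
def weight_reference_parameter(offset_angle, offset_distance, p1, p2):
    """Weighted sum of the offset angle and the offset distance.
    p1 weights the offset angle, p2 weights the offset distance."""
    return p1 * offset_angle + p2 * offset_distance

# Target point A relative to target point C: angle alpha = 20 degrees, La = 4.
w_a = weight_reference_parameter(20.0, 4.0, p1=0.5, p2=0.5)  # 0.5*20 + 0.5*4 = 12.0
```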
  • the candidate avatar object with the highest weight may be selected as the target avatar object released by the skill; or, a candidate avatar object whose weight is between the highest and the lowest may be selected as the target avatar object released by the skill.
  • the game interface includes virtual character objects A and B; the object selection instruction is triggered to be generated, and at the same time the skill release control aperture object 31 and the virtual joystick object 32 are displayed at a predetermined position.
  • the terminal may determine, according to the object selection instruction, that the avatar objects A and B are candidate avatar objects released by the skill; then, the terminal acquires the offset parameters (such as the offset angle and/or the offset distance) of the objects A and B relative to the reference object (such as a game camera), and calculates, according to the offset parameters of the objects A and B, that the selection weight of the object A is 0.7 and the selection weight of the object B is 0.3. The candidate virtual character object A can then be selected as the target avatar object released by the skill, and a selection box is displayed on the avatar object A of the game interface.
  • when configured to select multiple target avatar objects, the selection weight of each candidate avatar object may be acquired, and then the first several candidate virtual character objects with the highest selection weights are selected as the target virtual character objects released by the skill.
  • the game interface includes virtual character objects A, B, and C; the object selection instruction is triggered to be generated, and at the same time the skill release control aperture object 31 and the virtual joystick object 32 are displayed at a predetermined position.
  • the terminal may determine, according to the object selection instruction, that the avatar objects A, B, and C are candidate avatar objects released by the skill; then, the terminal acquires the offset parameters of the objects A, B, and C relative to the reference object (such as a game camera), and calculates, according to the offset parameters of the objects A, B, and C, that the selection weight of the object A is 0.5, the selection weight of the object B is 0.3, and the selection weight of the object C is 0.2.
  • the candidate virtual character objects A and B can be selected as the target virtual character objects released by the skill, and selection boxes will be displayed on the virtual character objects A and B of the game interface.
  • the skill release confirmation operation may be various.
  • the skill release confirmation operation may include a release operation of the drag operation of the virtual joystick object. That is, when the release operation of the drag operation is detected, the skill release operation of the skill object is performed on the target avatar object.
  • the skill release operation of the skill object is performed for the target avatar object.
  • as can be seen from the above, the embodiment of the present application renders at least one virtual character object in the game interface, determines, according to the object selection instruction, a plurality of candidate virtual character objects released by the skill from the at least one virtual character object to obtain a candidate object set, acquires the offset parameter of each candidate avatar object in the set relative to the reference object, and selects, according to the offset parameter, the target avatar object released by the skill from the candidate object set.
  • the solution can select the target virtual character object released by the skill based on the offset parameter of the virtual character object relative to the reference object, and can quickly determine the object released by the skill without the user performing an accurate skill release operation, thereby improving the accuracy of the interaction result and saving the resources of the terminal.
  • an object selection apparatus is further provided in one embodiment.
  • the meanings of the terms are the same as in the object selection method described above.
  • the object selection device may include: a display unit 501, a determination unit 502, a parameter acquisition unit 503, and a selection unit 504, as follows:
  • a display unit 501 configured to display a graphical user interface, where the graphical user interface includes at least one virtual object;
  • a determining unit 502 configured to determine, from the at least one virtual object according to the object selection instruction, candidate virtual objects for the skill release, to obtain a candidate object set;
  • a parameter obtaining unit 503 configured to acquire the offset parameters of the candidate virtual objects in the candidate object set relative to the reference object;
  • a selecting unit 504 configured to select, from the candidate object set according to the offset parameters, a target virtual object for the skill release.
  • the candidate object set includes at least two candidate virtual objects; referring to FIG. 20, the selection unit 504 includes:
  • the weight acquisition sub-unit 5041 is configured to obtain a selection weight of the candidate virtual object according to the offset parameter, and obtain a selection weight of each candidate virtual object in the candidate object set;
  • the selecting subunit 5042 is configured to select one target virtual object or at least two target virtual objects released by the skill from the candidate object set according to the selection weight of each candidate virtual object in the candidate object set.
  • the offset parameter includes an offset angle and an offset distance; the weight acquisition subunit 5041 is configured to: acquire a weight reference parameter of the candidate virtual object according to the offset angle and the offset distance; and acquire the selection weight of the candidate virtual object according to the weight reference parameter.
  • the candidate object set includes one candidate virtual object; the selecting unit 504 is configured to: determine whether the offset parameter satisfies a preset condition; and if so, select the candidate virtual object as the target virtual object for the skill release.
  • the determining unit 502 is configured to: acquire the field-of-view angle of the camera component according to the object selection instruction, the camera component being used to render the scene in the graphical user interface; and determine, from the at least one virtual object according to the field-of-view angle, the candidate virtual objects for the skill release.
  • the graphical user interface further includes a skill operation area, and the skill operation area includes a skill object; referring to FIG. 21, the determining unit 502 may be configured to: trigger generation of an object selection instruction when a skill release trigger operation on the skill object is detected; and determine, from the at least one virtual object according to the object selection instruction, the candidate virtual objects for the skill release;
  • the object selection device further includes: an execution unit 506;
  • the executing unit 506 is configured to perform a skill release operation of the skill object on the target virtual object when detecting a skill release confirmation operation of the skill object.
  • the determining unit 502 is configured to display a skill release auxiliary control object at a preset position on the graphical user interface when detecting a skill release trigger operation on the skill object;
  • the skill release auxiliary control object may include a skill release control aperture object and a virtual joystick object within the radiation range of the skill release control aperture object;
  • the execution unit 506 can be configured to perform the skill release operation of the skill object on the target virtual object when the skill release confirmation operation on the skill object is detected.
  • the execution unit 506 can be configured to: when a drag operation on the virtual joystick object is detected, adjust the skill release position of the skill object correspondingly in the graphical user interface, re-trigger the object selection instruction, and return to the step of determining candidate virtual objects.
  • the execution unit 506 is configured to perform a skill release operation of the skill object on the target virtual object when the release operation of the drag operation is detected.
  • the determining unit 502 is configured to: automatically trigger generation of an object selection instruction when the skill release is in sustained release mode; and determine, from the at least one virtual object according to the object selection instruction, the candidate virtual objects for the skill release.
  • the foregoing units may be implemented as independent entities, or combined arbitrarily and implemented as one or several entities; for their specific implementation, refer to the foregoing method embodiments, and details are not described herein again.
  • the object selection device may be integrated into the terminal, for example, in the form of a client, and the terminal may be a device such as a mobile phone or a tablet computer.
  • the object selection apparatus of the embodiments of this application displays, through the display unit 501, a graphical user interface including at least one virtual object; the determining unit 502 determines, from the at least one virtual object according to the object selection instruction, the candidate virtual objects for the skill release to obtain a candidate object set; the parameter obtaining unit 503 acquires the offset parameters of the candidate virtual objects in the set relative to the reference object; and the selecting unit 504 selects, from the candidate object set according to the offset parameters, the target virtual object for the skill release.
  • the solution can select the target virtual object for the skill release based on the offset parameters of the virtual objects relative to the reference object, and can quickly determine the release target without the user performing a precise skill release operation, thereby improving the accuracy of the interaction result and saving terminal resources.
  • the embodiment of the present application further provides a terminal, which may be a device such as a mobile phone or a tablet computer.
  • an embodiment of the present application provides a terminal 600, which may include components such as a processor 601 with one or more processing cores, a memory 602 with one or more computer-readable storage media, a radio frequency (RF) circuit 603, a power supply 604, an input unit 605, and a display unit 606.
  • the processor 601 is the control center of the terminal, connecting all parts of the entire terminal through various interfaces and lines; by running or executing the software programs and/or modules stored in the memory 602 and recalling the data stored in the memory 602, it performs the various functions of the terminal and processes data, thereby monitoring the terminal as a whole.
  • the processor 601 may include one or more processing cores; preferably, the processor 601 may integrate an application processor and a modem processor, where the application processor mainly processes an operating system, a user interface, an application, and the like.
  • the modem processor primarily handles wireless communications. It can be understood that the above modem processor may not be integrated into the processor 601.
  • the memory 602 can be used to store software programs and modules, and the processor 601 executes various functional applications and data processing by running software programs and modules stored in the memory 602.
  • the RF circuit 603 can be used for receiving and transmitting signals during information sending and receiving; specifically, after receiving downlink information from a base station, it hands the information to one or more processors 601 for processing, and it sends uplink-related data to the base station.
  • the terminal also includes a power source 604 (such as a battery) that supplies power to the various components.
  • the power source can be logically coupled to the processor 601 through a power management system to manage functions such as charging, discharging, and power management through the power management system.
  • the power supply 604 can also include any one or more of a DC or AC power source, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and the like.
  • the terminal can also include an input unit 605 that can be used to receive entered numeric or character information and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control.
  • the terminal can also include a display unit 606 that can be used to display information entered by the user or provided to the user, as well as the various graphical user interfaces of the terminal, which can be composed of graphics, text, icons, video, and any combination thereof.
  • the display unit 606 can include a display panel; optionally, the display panel can be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
  • the processor 601 in the terminal loads, according to the following computer-readable instructions, the executable files corresponding to the processes of one or more applications into the memory 602, and the processor 601 runs the applications stored in the memory 602 to implement various functions as follows:
  • displaying a graphical user interface including at least one virtual object; determining, from the at least one virtual object according to the object selection instruction, candidate virtual objects for the skill release to obtain a candidate object set; acquiring the offset parameters of the candidate virtual objects in the set relative to the reference object; and selecting, from the candidate object set according to the offset parameters, the target virtual object for the skill release.
  • the program may be stored in a computer-readable storage medium, and the storage medium may include a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disc, or the like.

Abstract

An object selection method includes the following steps: displaying a graphical user interface that includes virtual objects; determining, from the virtual objects according to an object selection instruction, candidate virtual objects for a skill release to obtain a candidate object set; acquiring offset parameters of the candidate virtual objects in the candidate object set relative to a reference object; and selecting, from the candidate object set according to the offset parameters, a target virtual object for the skill release.

Description

Object selection method, terminal, and storage medium

This application claims priority to Chinese Patent Application No. 201711132666.2, entitled "Object selection method, apparatus, terminal, and storage medium", filed with the Chinese Patent Office on November 15, 2017, which is incorporated herein by reference in its entirety.

Technical Field

This application relates to the field of communications technologies, and in particular, to an object selection method, a terminal, and a storage medium.

Background

With the increasing popularity of intelligent terminals, the processing capability of their processors keeps growing, which has given rise to many applications controlled through human-computer interaction on the screen of an intelligent terminal. During such control, multiple users can form groups in one-to-one, one-to-many, or many-to-many fashion and run different interaction modes to obtain different interaction results. For example, in a graphical user interface rendered on the screen of an intelligent terminal, after multiple users are divided into different groups, control processing in human-computer interaction allows information to be exchanged between different groups, with different interaction results obtained according to the responses to the information exchange; control processing also allows information to be exchanged among members of the same group, again with different interaction results obtained according to the responses.

At present, during information interaction, a specific capability (skill) can be triggered and released on a virtual object to enrich the form and content of the information, and different forms and contents eventually lead to different interaction results. However, current ways of releasing a specific capability require the user to operate precisely to determine the object on which the skill is released; once the user's operation deviates even slightly, the release target cannot be determined, causing biased and inaccurate interaction results. Moreover, to produce a correct interaction result, the user has to keep adjusting the operation until it meets the precision requirement, and during this adjustment the terminal must continuously respond to the user's adjustments, which consumes considerable terminal resources.
Summary

Various embodiments of this application provide an object selection method, a terminal, and a storage medium.

An embodiment of this application provides an object selection method, including:

displaying a graphical user interface, the graphical user interface including at least one virtual object;

determining, from the at least one virtual object according to an object selection instruction, candidate virtual objects for a skill release to obtain a candidate object set;

acquiring offset parameters of the candidate virtual objects in the candidate object set relative to a reference object; and

selecting, from the candidate object set according to the offset parameters, a target virtual object for the skill release.

Correspondingly, an embodiment of this application further provides an object selection apparatus, including:

a display unit, configured to display a graphical user interface, the graphical user interface including at least one virtual object;

a determining unit, configured to determine, from the at least one virtual object according to an object selection instruction, candidate virtual objects for a skill release to obtain a candidate object set;

a parameter acquiring unit, configured to acquire offset parameters of the candidate virtual objects in the candidate object set relative to a reference object; and

a selecting unit, configured to select, from the candidate object set according to the offset parameters, a target virtual object for the skill release.

Correspondingly, an embodiment of this application further provides a terminal, including a memory and a processor, the memory storing computer-readable instructions that, when executed by the processor, cause the processor to perform the following steps:

displaying a graphical user interface, the graphical user interface including virtual objects;

determining, from the virtual objects according to an object selection instruction, candidate virtual objects for a skill release to obtain a candidate object set;

acquiring offset parameters of the candidate virtual objects in the candidate object set relative to a reference object; and

selecting, from the candidate object set according to the offset parameters, a target virtual object for the skill release.

Correspondingly, an embodiment of this application further provides a non-volatile computer-readable storage medium storing computer-readable instructions that, when executed by one or more processors, cause the one or more processors to perform the following steps:

displaying a graphical user interface, the graphical user interface including virtual objects;

determining, from the virtual objects according to an object selection instruction, candidate virtual objects for a skill release to obtain a candidate object set;

acquiring offset parameters of the candidate virtual objects in the candidate object set relative to a reference object; and

selecting, from the candidate object set according to the offset parameters, a target virtual object for the skill release.
Brief Description of the Drawings

To describe the technical solutions in the embodiments of this application more clearly, the following briefly introduces the accompanying drawings required for describing the embodiments. Apparently, the accompanying drawings in the following description show merely some embodiments of this application, and a person skilled in the art may still derive other drawings from these drawings without creative efforts.

FIG. 1 is a schematic diagram of a scenario of an information interaction system according to an embodiment of this application;

FIG. 2 is a schematic flowchart of an object selection method according to an embodiment of this application;

FIG. 3 is a schematic diagram of a first game interface according to an embodiment of this application;

FIG. 4 is a schematic diagram of a camera field-of-view angle according to an embodiment of this application;

FIG. 5 is a schematic diagram of candidate virtual object selection according to an embodiment of this application;

FIG. 6 is a schematic diagram of a skill operation area according to an embodiment of this application;

FIG. 7 is a schematic diagram of a second game interface according to an embodiment of this application;

FIG. 8 is a schematic diagram of offset parameters according to an embodiment of this application;

FIG. 9 is a schematic diagram of a third game interface according to an embodiment of this application;

FIG. 10 is a schematic diagram of a fourth game interface according to an embodiment of this application;

FIG. 11 is a schematic diagram of a fifth game interface according to an embodiment of this application;

FIG. 12 is a schematic diagram of a sixth game interface according to an embodiment of this application;

FIG. 13 is a schematic diagram of a seventh game interface according to an embodiment of this application;

FIG. 14 is a schematic diagram of an eighth game interface according to an embodiment of this application;

FIG. 15 is a schematic diagram of a ninth game interface according to an embodiment of this application;

FIG. 16 is a schematic diagram of a tenth game interface according to an embodiment of this application;

FIG. 17 is another schematic flowchart of the object selection method according to an embodiment of this application;

FIG. 18 is still another schematic flowchart of the object selection method according to an embodiment of this application;

FIG. 19 is a first schematic structural diagram of an object selection apparatus according to an embodiment of this application;

FIG. 20 is a second schematic structural diagram of the object selection apparatus according to an embodiment of this application;

FIG. 21 is a third schematic structural diagram of the object selection apparatus according to an embodiment of this application;

FIG. 22 is a schematic structural diagram of a terminal according to an embodiment of this application.
Detailed Description

The following clearly and completely describes the technical solutions in the embodiments of this application with reference to the accompanying drawings in the embodiments. Apparently, the described embodiments are merely some rather than all of the embodiments of this application. All other embodiments obtained by a person skilled in the art based on the embodiments of this application without creative efforts shall fall within the protection scope of this application.

Referring to FIG. 1, an embodiment of this application provides an information interaction system, including a terminal 10 and a server 20 connected through a network 30. The network 30 includes network entities such as routers and gateways, which are not shown in the figure. The terminal 10 can exchange information with the server 20 through a wired or wireless network, for example, to download applications, application update packages, or application-related data or service information from the server 20. The terminal 10 may be a device such as a mobile phone, a tablet computer, or a notebook computer; FIG. 1 takes a mobile phone as an example. Various applications required by users may be installed on the terminal 10, such as applications with entertainment functions (for example, video, audio playback, game, and reading applications) and applications with service functions (for example, map navigation and group-purchase applications).

Based on the system shown in FIG. 1 and taking a game scenario as an example, the terminal 10 downloads, on demand through the network 30, a game application, game application update packages, or game-related data or service information from the server 20. With the embodiments of this application, after the terminal 10 starts the game application and enters a rendered game interface (which includes at least one virtual object), candidate virtual objects for a skill release can be determined from the at least one virtual object to obtain a candidate object set; offset parameters of the candidate virtual objects in the set relative to a reference object are acquired; and a target virtual object for the skill release is selected from the candidate object set according to the offset parameters. Because the target virtual object for the skill release can be fuzzily selected based on the offset parameters of the virtual objects relative to the reference object, the release target can be determined quickly without the user performing a precise skill release operation, which improves the accuracy of the interaction result and the efficiency of generating it, and saves terminal resources.

The example in FIG. 1 is only one system architecture instance for implementing the embodiments of this application; the embodiments of this application are not limited to the system structure of FIG. 1, and the various embodiments of this application are proposed based on this system architecture.
In one embodiment, an object selection method is provided, which may be performed by a processor of a terminal. As shown in FIG. 2, the object selection method includes:

201. Display a graphical user interface, the graphical user interface including at least one virtual object.

For example, a software application is executed on the processor of the terminal and rendered on the display of the terminal to obtain the graphical user interface, in which at least one virtual object may also be rendered.

The graphical user interface may contain various scene pictures, such as game pictures and social pictures; a scene picture may be two-dimensional or three-dimensional.

A virtual object is a virtual resource object, that is, an object virtualized in the graphical user interface. It may include various types of objects in the graphical user interface, for example, virtual character objects representing persons (such as user role objects representing player users and role objects representing robots), and objects representing the background such as buildings, trees, clouds, and tower-defense structures.

In one embodiment, the virtual object may be a virtual character object. For example, referring to FIG. 3, when the graphical user interface is a game interface of an FPS (First-Person Shooter) game, the game interface contains multiple virtual character objects, which may be virtual enemy character objects, that is, enemy targets. As shown in FIG. 3, the game interface may further include virtual objects representing the background such as buildings, sky, and clouds; objects representing user status (such as health and vitality values); virtual objects representing user skills and equipment; and direction button objects, such as a circular virtual joystick, that control changes in the user's position. In other embodiments, virtual objects may include virtual character objects, virtual background objects, and the like.
202. Determine, from the at least one virtual object according to an object selection instruction, candidate virtual objects for a skill release to obtain a candidate object set.

For example, the object selection instruction may be acquired, and the candidate virtual objects for the skill release are then determined from the at least one virtual object according to the object selection instruction.

The candidate object set may include one or more candidate virtual objects.

For example, candidate virtual character objects for the skill release may be determined from at least one virtual character object to obtain the candidate object set.

In one embodiment, the candidate virtual objects may be determined based on the field-of-view angle of the camera component of the scene picture in the graphical user interface. The camera component is used to render the corresponding scene picture in the graphical user interface; it may be a rendering component in Unity and renders the corresponding picture in the graphical user interface according to its configured height, width, and field-of-view angle.

For example, in a game, a camera component is a component that captures and displays scene pictures for the player. A scene picture may have any number of camera components, for example two, and they can be set to render the scene picture in any order.

The step "determining, from the at least one virtual object according to the object selection instruction, candidate virtual objects for the skill release" may include:

acquiring the field-of-view angle of the camera component according to the object selection instruction, the camera being the reference object; and

determining, from the at least one virtual object according to the field-of-view angle, the candidate virtual objects for the skill release.

The field-of-view angle of the camera component, also called the FOV (Field of view), is the angular range within which the camera component renders the scene picture, for example, 40 or 50 degrees. Although the camera component is not a physical camera, it implements the same functions as one; its field-of-view angle corresponds to that of a physical camera, which can be defined as the angle formed, with the camera lens as the vertex, by the two edges of the maximum range through which the image of the measured target can pass through the lens. Taking an FPS game as an example, referring to FIG. 4, the field-of-view angle of the camera component in the FPS game is the shooter's field of view, which here is 2θ.

Because the camera component's field-of-view angle matches the user's perspective, selecting candidate virtual objects through it can improve the accuracy of selecting the skill release target, the accuracy of the output interaction result, and the user experience.

In one embodiment, virtual objects outside the field of view may be culled, and the candidate virtual objects for the skill release are determined from the virtual objects within it; that is, the virtual objects within the camera component's field-of-view range are selected as candidate virtual objects. Referring to FIG. 5, suppose virtual objects A, B, and C exist. As the figure shows, A and B lie within the field-of-view range while C lies outside it, so A and B are determined to be the candidate virtual objects for the skill release.
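The culling step above can be sketched in code. The following is a minimal illustration, not the patent's implementation: it assumes a simplified 2D top-down scene where the camera has a position, a facing direction, and a full field-of-view angle 2θ as in FIG. 4, and an object is a candidate exactly when the angle between the facing direction and the camera-to-object vector is at most θ. All names here are hypothetical.

```python
import math

def candidates_in_fov(camera_pos, camera_dir, fov_deg, objects):
    """Return the names of objects inside the camera's field of view.

    camera_pos: (x, y) camera position
    camera_dir: (x, y) facing direction (need not be unit length)
    fov_deg:    full field-of-view angle (the 2-theta of FIG. 4)
    objects:    mapping of name -> (x, y) position
    """
    half_fov = math.radians(fov_deg) / 2.0
    dlen = math.hypot(*camera_dir)
    selected = []
    for name, (ox, oy) in objects.items():
        vx, vy = ox - camera_pos[0], oy - camera_pos[1]
        vlen = math.hypot(vx, vy)
        if vlen == 0:
            continue  # object sits on the camera; skip it
        cos_a = (vx * camera_dir[0] + vy * camera_dir[1]) / (vlen * dlen)
        angle = math.acos(max(-1.0, min(1.0, cos_a)))
        if angle <= half_fov:  # inside the view cone -> keep as candidate
            selected.append(name)
    return selected
```

With the camera at the origin facing +y and a 90-degree FOV, objects ahead such as A at (0, 5) and B at (2, 5) are kept, while an object behind the camera such as C at (5, -1) is culled, mirroring FIG. 5.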
In the embodiments of this application, the object selection instruction can be triggered in multiple ways, for example, in two modes: triggered by a user operation, or triggered automatically.

In one embodiment, in the user-operation trigger mode, a skill operation area may be set in the graphical user interface; when it is detected that the user performs a corresponding operation in the skill operation area, generation of the object selection instruction can be triggered.

For example, in one embodiment, to facilitate the user's skill release operation, avoid a large number of misoperations, and improve the precision and accuracy of interaction processing, a skill object may also be set in the skill operation area, and the user can trigger generation of the object selection instruction by operating the skill object. That is, the step "determining, from the at least one virtual object according to the object selection instruction, candidate virtual objects for the skill release" may include:

triggering generation of an object selection instruction when a skill release trigger operation on the skill object is detected; and

determining, from the at least one virtual object according to the object selection instruction, the candidate virtual objects for the skill release.

The skill object may be an object representing a skill in the graphical user interface, such as a skill button. The skill release trigger operation on the skill object may include pressing the skill object, for example, long-pressing it. When the user long-presses the skill button, the terminal triggers generation of the object selection instruction.

To execute the skill release operation on the skill object precisely, thereby avoiding a large number of misoperations and improving the precision and accuracy of interaction processing, in one embodiment, auxiliary control objects such as a virtual joystick object and an operation aperture may also be provided to help the user trigger the skill release quickly and accurately. Specifically, when the skill release trigger operation on the skill object is detected, a skill release auxiliary control object is displayed at a preset position on the graphical user interface and generation of the object selection instruction is triggered; according to the operation on the skill release auxiliary control object, the skill release position of the skill object is adjusted correspondingly within the graphical user interface, and the object selection instruction is triggered.

In one embodiment, to facilitate user operation and improve the accuracy of the skill release, and hence the human-computer interactivity and the accuracy of the interaction result, the skill release auxiliary control object may include a skill release control aperture object and a virtual joystick object within the radiation range of the skill release control aperture object.

When a drag operation on the virtual joystick object is detected, the skill release position of the skill object is adjusted correspondingly within the graphical user interface, the object selection instruction is re-triggered, and the process then returns to the step of determining candidate virtual objects.

Referring to FIG. 6, in a skill operation area 34 of the graphical user interface, a skill release trigger operation acting on skill object 1 is acquired, and the skill release auxiliary control object is rendered, including a skill release control aperture object 31 and a virtual joystick object 32. A skill release control operation is subsequently triggered, so that the skill release control aperture object 31 keeps its position unchanged.

For example, when it is detected that the virtual joystick object 32 moves following the drag of the skill release operation gesture and deviates from the center of the skill release control aperture object 31, a skill release control operation is triggered that keeps the position of the skill release control aperture object unchanged.

When the user presses and holds skill object 1, the terminal triggers generation of the object selection instruction; the user can then drag the virtual joystick object 32 within the skill release control aperture object 31 to adjust the skill release position of the skill object and regenerate the object selection instruction. That is, the object selection method of this embodiment can be triggered by dragging the virtual joystick object 32.

The skill release control aperture object 31 may be wheel-shaped, square, or triangular, and its exact form can be set according to actual requirements. The virtual joystick object 32 may be ring-shaped, square, or circle-shaped, and may be called a joystick. In practice, for ease of operation, the shapes of the skill release control aperture object 31 and the virtual joystick object 32 may be the same.

In one embodiment, in the automatic trigger mode of the object selection instruction, the terminal may trigger the object selection instruction automatically in real time, for example, automatically generating it at regular intervals.

In practice, to facilitate user operation, improve the output efficiency of interaction results, and save resources, a sustained release mode (corresponding to the automatic instruction trigger mode) may be set for the skill release; in this mode, the terminal can trigger generation of the object selection instruction automatically without user operation. That is, the step "determining, from the at least one virtual object according to the object selection instruction, candidate virtual objects for the skill release" may include:

automatically triggering generation of an object selection instruction when the skill release is in sustained release mode; and

determining, from the at least one virtual object according to the object selection instruction, the candidate virtual objects for the skill release.

In one embodiment, when the skill release is in sustained release mode, after the target virtual object for the skill release is selected, the skill release operation of the skill object may be performed on the target virtual object when a skill release confirmation operation on the skill object is detected.

In one embodiment, the skill release trigger operation may also include clicking the skill object. For example, referring to FIG. 7, when the skill release is in sustained release mode, that is, in the sustained trigger mode, clicking the skill button 33 triggers generation of the object selection instruction, which in turn triggers the object selection for the skill release. This speeds up the skill release and hence the output of the interaction result.
203. Acquire offset parameters of the candidate virtual objects in the candidate object set relative to a reference object.

The reference object may be a virtual object in the graphical user interface and can be set according to actual requirements, for example, an object in the background, or the user role object representing the user.

The reference object may also be the camera component; in this case, the offset parameters of the candidate virtual objects in the graphical user interface relative to the reference object are acquired.

The offset parameter is offset information of a candidate virtual object, such as a virtual character object, relative to the reference object, and may include at least one of an offset angle and an offset distance; in other embodiments, it may also include an offset direction and the like.

The offset angle is the angle of the candidate virtual object relative to the reference object in a preset plane or in the three-dimensional scene picture; the preset plane may include the screen where the graphical user interface is located. For example, referring to FIG. 8, the graphical user interface includes target point A representing virtual object a, target point B representing virtual object b, and target point C representing reference object c; the offset angle of target point A relative to target point C is θ, and the offset angle of target point B relative to target point C is θ+β.

The offset distance may be the distance of the candidate virtual object relative to the reference object in the preset plane or the three-dimensional scene picture, that is, the distance between the candidate virtual object and the reference object in the preset plane or the three-dimensional scene picture; the preset plane may include the screen where the graphical user interface is located. For example, referring to FIG. 8, the offset distance of target point A relative to target point C is La, and the offset distance of target point B relative to target point C is Lb.
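The two offset parameters can be computed from point coordinates as follows. This is a hedged sketch with assumed conventions, not the patent's code: points are 2D coordinates in the preset plane, the reference object carries a facing direction, and the offset angle is measured between that facing direction and the line from the reference point to the candidate point, as in FIG. 8.

```python
import math

def offset_parameters(candidate, reference):
    """Offset distance and offset angle (degrees) of a candidate point
    relative to a reference object given as (position, facing_direction).
    """
    (cx, cy), ((rx, ry), (fx, fy)) = candidate, reference
    vx, vy = cx - rx, cy - ry               # vector reference -> candidate
    distance = math.hypot(vx, vy)           # the offset distance (e.g. La)
    cos_a = (vx * fx + vy * fy) / (distance * math.hypot(fx, fy))
    angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))
    return distance, angle                  # the offset angle (e.g. theta)
```

For a reference at the origin facing +y and a candidate at (3, 4), the offset distance is 5 and the offset angle is acos(4/5) ≈ 36.87 degrees.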
204. Select, from the candidate object set according to the offset parameters, a target virtual object for the skill release.

For example, the target virtual object for the skill release may be selected from the candidate object set according to the offset parameter of each candidate virtual object in the set relative to the reference object.

There may be one or more target virtual objects for the skill release, configurable according to actual requirements.

In one embodiment, the target virtual object may be selected from the candidate object set based on the magnitude of the offset parameter values, for example, selecting the candidate virtual object with the smallest offset angle and/or the smallest offset distance as the target virtual object.

In one embodiment, after the target virtual object is selected, the currently selected target virtual object may also be indicated in the graphical user interface, for example, by displaying a selection marker on it to remind the user that it is currently selected. The selection marker may include a selection box, a color mark, and the like.

For example, taking an FPS game interface as the graphical user interface, referring to FIG. 9, when the object selection method of this embodiment selects the virtual character object on the left as the release target of the skill object, a selection box is displayed on that object to remind the user.

Optionally, the method of this application may use different object selection manners based on the number of objects contained in the candidate object set, as follows:

(1) The candidate object set includes at least two candidate virtual objects.

In this case, one or at least two target virtual objects for the skill release may be selected from the candidate object set according to the offset parameter of each candidate virtual object in the set relative to the reference object.

Specifically, in one embodiment, to improve the accuracy of object selection and the precision of the interaction result, the selection weight of each candidate virtual object may be set based on its offset parameter, and one or at least two target virtual objects for the skill release are then selected from the candidate object set according to these selection weights. That is, the step "selecting, from the candidate object set according to the offset parameters, a target virtual object for the skill release" may include:

acquiring, according to the offset parameters, the selection weight of each candidate virtual object in the candidate object set; and

selecting one target virtual object or at least two target virtual objects from the candidate object set according to the selection weight of each candidate virtual object in the candidate object set.

The selection weight of a candidate virtual object may be the probability or proportion of the virtual object being the skill release target, for example, 30%.

In the embodiments of this application, the selection weights of candidate virtual objects may be set on the principle that a larger offset parameter value yields a smaller selection weight. For example, in one embodiment, when the offset parameter includes the offset angle, the selection weights may be set by the rule that a larger offset angle yields a smaller weight and a smaller offset angle a larger one. Referring to FIG. 8, the offset angle of virtual object a is smaller than that of virtual object b, so the selection weight of virtual object a is higher than that of virtual object b.

For another example, in one embodiment, when the offset parameter includes the offset distance, the selection weights may be set by the rule that a larger offset distance yields a smaller weight and a smaller offset distance a larger one. Referring to FIG. 8, the offset distance of virtual object a is smaller than that of virtual object b, so the selection weight of virtual object a is higher than that of virtual object b.

In one embodiment, when the offset parameter includes both the offset angle and the offset distance, a weight reference parameter of the candidate virtual object may be obtained from its offset angle and offset distance, and its selection weight is then set based on the weight reference parameter. The weight reference parameter indicates the parameter from which the selection weight is obtained or set, and may be a custom parameter. Setting the weight reference parameter from both the offset angle and the offset distance improves the accuracy of weight setting and hence of object selection.

The weight reference parameter can be obtained in multiple ways; for example, the weighted sum of the offset angle and the offset distance may be computed and used as the weight reference parameter. For example, referring to FIG. 8, the offset angle of target point A relative to target point C is θ and the offset distance is La; then the weight reference parameter of candidate virtual object A is QA = θ*p1 + La*p2, where p1 is the weight value of the offset angle and p2 the weight value of the offset distance.

After the weight reference parameter is obtained, the selection weights may be set by the rule that a larger weight reference parameter yields a smaller selection weight and a smaller weight reference parameter a larger selection weight.
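The weighted-sum rule above can be sketched as follows. This is an illustrative assumption, not the patent's formula for the final weights: it computes Q = angle*p1 + distance*p2 per candidate (as in the description) and then, since a larger Q must yield a smaller weight, normalizes the reciprocals of Q into selection weights that sum to 1. It assumes every Q is positive.

```python
def selection_weights(offsets, p1=1.0, p2=1.0):
    """Turn per-candidate (offset_angle, offset_distance) pairs into
    normalized selection weights; larger Q -> smaller weight.

    offsets: mapping of name -> (offset_angle, offset_distance)
    p1, p2:  weight values of the offset angle and offset distance
    """
    # Weight reference parameter Q = angle * p1 + distance * p2
    q = {name: angle * p1 + dist * p2 for name, (angle, dist) in offsets.items()}
    inv = {name: 1.0 / value for name, value in q.items()}  # assumes Q > 0
    total = sum(inv.values())
    return {name: value / total for name, value in inv.items()}
```

For candidates A with (30, 10) and B with (60, 20) and p1 = p2 = 1, QA = 40 and QB = 80, so A receives twice B's weight (about 0.67 versus 0.33), consistent with the smaller-offset-wins rule.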
In one embodiment, when configured to select one target virtual object, after the selection weight of each candidate virtual object is obtained, the candidate virtual object with the highest selection weight may be chosen as the target virtual object for the skill release; alternatively, to improve the flexibility of selection, a candidate virtual object whose selection weight lies between the highest and the lowest may be chosen as the target virtual object for the skill release.

For example, referring to FIG. 10 and taking an FPS game interface as the graphical user interface, suppose the game interface includes virtual objects A and B. When the user presses and holds the right skill button, generation of the object selection instruction is triggered, and the skill release control aperture object 31 and the virtual joystick object 32 are simultaneously displayed at a predetermined position. The terminal may determine, according to the object selection instruction, that virtual objects A and B are the candidate virtual objects for the skill release; the terminal then acquires the offset parameters (such as the offset angle and/or offset distance) of objects A and B relative to the reference object (such as the game camera), and calculates from them a selection weight of 0.7 for object A and 0.3 for object B. Candidate virtual object A can then be selected as the target virtual object for the skill release, and a selection box is displayed on virtual object A in the game interface.

Referring to FIG. 11, when the user drags the virtual joystick object 32, for example, toward the lower right of the skill release control aperture 31, the skill release position is adjusted correspondingly within the game user interface. Because the skill release position is related to the game camera, in practice the camera's rotation or movement can be adjusted according to the drag operation on the virtual joystick object 32, and the skill release position then changes. The game interface may use a marker, such as a firearm's crosshair, to represent the skill release position, so when the game camera changes, the crosshair in the game interface also changes.

Referring to FIG. 11, when the skill release position changes, the terminal re-acquires the offset parameters (such as the offset angle and/or offset distance) of objects A and B relative to the reference object (such as the game camera) and calculates a selection weight of 0.4 for object A and 0.6 for object B. Candidate virtual object B will then be selected as the target virtual object for the skill release, and a selection box is displayed on virtual object B in the game interface.

For another example, referring to FIG. 12 and again taking an FPS game interface as the graphical user interface, suppose the game interface includes virtual objects A and B. In the sustained trigger selection mode, clicking the right skill button triggers generation of the object selection instruction. The terminal may then determine, according to the instruction, that virtual objects A and B are the candidates for the skill release; the terminal acquires their offset parameters (such as the offset angle and/or offset distance) relative to the reference object (such as the game camera) and calculates a selection weight of 0.9 for object A and 0.1 for object B. Candidate virtual object A can then be selected as the target virtual object for the skill release, and a selection box is displayed on virtual object A in the game interface.

In one embodiment, when configured to select multiple target virtual objects, after the selection weight of each candidate virtual object is obtained, the top few candidates with the highest selection weights may be chosen as the target virtual objects for the skill release.

For example, referring to FIG. 13 and taking an FPS game interface as the graphical user interface, suppose the game interface includes virtual objects A, B, and C. When the user presses and holds the right skill button, generation of the object selection instruction is triggered, and the skill release control aperture object 31 and the virtual joystick object 32 are simultaneously displayed at a predetermined position. The terminal may determine, according to the instruction, that virtual objects A, B, and C are the candidates for the skill release; the terminal then acquires their offset parameters (such as the offset angle and/or offset distance) relative to the reference object (such as the game camera) and calculates selection weights of 0.5 for object A, 0.3 for object B, and 0.2 for object C. Candidate virtual objects A and B can then be selected as the target virtual objects for the skill release, and selection boxes are displayed on virtual objects A and B in the game interface.

Referring to FIG. 14, when the user drags the virtual joystick object 32, for example, toward the lower right of the skill release control aperture 31, the skill release position is adjusted correspondingly within the game user interface. When it changes, the terminal re-acquires the offset parameters (such as the offset angle and/or offset distance) of objects A, B, and C relative to the reference object (such as the game camera) and calculates selection weights of 0.2 for object A, 0.3 for object B, and 0.5 for object C. Candidate virtual objects B and C will then be selected as the target virtual objects for the skill release, and selection boxes are displayed on virtual objects B and C in the game interface.
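Picking the single best candidate, or the top few, from the computed weights is a straightforward ranking step. A minimal sketch (hypothetical helper, not the patent's code):

```python
def pick_targets(weights, k=1):
    """Pick the k candidates with the highest selection weight as the
    target virtual objects of the skill release.

    weights: mapping of candidate name -> selection weight
    """
    ranked = sorted(weights, key=weights.get, reverse=True)
    return ranked[:k]
```

With the weights from the FIG. 13 example (A: 0.5, B: 0.3, C: 0.2), k=1 picks A, and k=2 picks A and B, matching the selection boxes described above.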
(2) The candidate object set includes one candidate virtual object.

In this case, to improve the accuracy of object selection, it may be determined whether the offset parameter of this candidate virtual object satisfies a preset condition; if so, the candidate virtual object is selected as the target virtual object for the skill release. That is, the step "selecting the candidate virtual object as the target virtual object for the skill release" may include:

determining whether the offset parameter satisfies a preset condition; and

if so, selecting the candidate virtual object as the target virtual object for the skill release.

The preset condition can be configured according to the user's actual requirements. For example, when the offset parameter includes the offset angle, the preset condition may include: the offset angle is smaller than a preset angle. That is, the step "determining whether the offset parameter of the candidate virtual object relative to the reference object satisfies the preset condition" may include:

determining whether the offset angle of the candidate virtual object is smaller than the preset angle; if so, determining that the preset condition is satisfied, and if not, that it is not satisfied.

For another example, when the offset parameter includes the offset distance, the preset condition may include: the offset distance is smaller than a preset distance. That is, the step "determining whether the offset parameter satisfies the preset condition" may include:

determining whether the offset distance is smaller than the preset distance; if so, determining that the preset condition is satisfied, and if not, that it is not satisfied.

For yet another example, when the offset parameter includes both the offset distance and the offset angle, the preset condition may include: the offset angle is within a preset angle range and the offset distance is within a preset distance range. In this case, the step "determining whether the offset parameter satisfies the preset condition" may include:

determining that the offset parameter satisfies the preset condition when the offset angle is within the preset angle range and the offset distance is within the preset distance range.

Determining the skill release target through multiple offset parameters improves the accuracy of object selection, so that the selected object meets the user's needs, which in turn improves the human-computer interactivity and the accuracy of the interaction result.

The preset angle, preset distance, preset angle range, and preset distance range can all be set according to actual requirements.
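The single-candidate rule above reduces to a threshold check. A minimal sketch; the threshold values here are illustrative placeholders, since the patent leaves them to actual requirements:

```python
def satisfies_preset(angle, distance, max_angle=45.0, max_distance=30.0):
    """Single-candidate rule: the lone candidate becomes the skill release
    target only if its offset angle and offset distance both fall inside
    the preset ranges (thresholds are assumed, not from the patent)."""
    return angle < max_angle and distance < max_distance
```

An offset of (30 degrees, 10 units) passes under these thresholds, while exceeding either threshold alone fails the check, so the candidate would not be selected as the target.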
For example, referring to FIG. 15 and taking an FPS game interface as the graphical user interface: when the user presses and holds the right skill button, generation of the object selection instruction is triggered, and the skill release control aperture object 31 and the virtual joystick object 32 are simultaneously displayed at a predetermined position. The terminal determines, according to the object selection instruction, that only virtual object A is in the game camera's field of view, so the candidate object set contains only virtual object A. The terminal then acquires A's offset parameter (offset angle and/or offset distance) relative to the camera; when the offset parameter satisfies the preset condition, virtual object A can be selected as the target virtual object for the skill release, and a selection box is displayed on virtual object A to remind the user.

Referring to FIG. 16, when the user drags the virtual joystick object 32, for example, toward the left of the skill release control aperture 31, the skill release position is adjusted correspondingly within the game user interface and the object selection instruction is re-triggered. Because the skill release position is related to the game camera, in practice the camera's rotation or movement can be adjusted according to the drag operation on the virtual joystick object 32, and the skill release position then changes. The game interface may use a marker, such as a firearm's crosshair, to represent the skill release position, which changes along with the game camera. In FIG. 16, when the skill release position changes, the terminal determines according to the object selection instruction that only virtual object A is in the camera's field of view, so the candidate object set contains only virtual object A; the terminal re-acquires A's offset parameter (offset angle and/or offset distance) relative to the camera, and when the offset parameter still satisfies the preset condition, virtual object A can again be selected as the target virtual object for the skill release, with a selection box displayed on it to remind the user.
In the object selection method provided by the embodiments of this application introduced above, in one embodiment, after the skill release object is selected, the skill release operation may also be performed on the target virtual object.

For example, in the case where generation of the object selection instruction is triggered when a skill release trigger operation on the skill object is detected, the skill release operation of the skill object may be performed on the target virtual object when a skill release confirmation operation on the skill object is detected.

There may be multiple kinds of skill release confirmation operations; for example, when the skill release auxiliary control object is displayed, the confirmation operation may include the release of the drag operation on the virtual joystick object. That is, when the release of the drag operation is detected, the skill release operation of the skill object is performed on the target virtual object.

For example, referring to FIG. 10, FIG. 11, FIG. 12, and FIG. 15, when the release of the drag operation is detected, the skill release operation of the skill object is performed on the target virtual object.

As can be seen from the above, this embodiment of this application displays a graphical user interface including at least one virtual object; determines, from the at least one virtual object according to an object selection instruction, candidate virtual objects for the skill release to obtain a candidate object set; acquires the offset parameters of the candidate virtual objects in the set relative to a reference object; and selects, from the candidate object set according to the offset parameters, the target virtual object for the skill release. This solution can select the skill release target based on the offset parameters of virtual objects relative to the reference object and can determine the release target quickly without the user performing a precise skill release operation, which improves the accuracy of the interaction result and saves terminal resources.

Moreover, during object selection, when a skill release trigger operation on the skill object in the skill operation area is detected, the skill release auxiliary control object can be displayed at a preset position on the graphical user interface. Because the auxiliary control object appears at this default fixed position, the user can respond quickly during information interaction, avoiding the reaction time otherwise wasted searching the graphical user interface. When a drag operation on the virtual joystick object is detected, the skill release position of the skill object is adjusted correspondingly within the graphical user interface, and even if the virtual joystick object deviates from the center of the skill release control aperture object, the aperture object always keeps its position unchanged; since it stays in place, the control area is stable and helps the user quickly locate the releasable range of the skill. When the release of the drag operation is detected, the skill release operation on the skill object is executed according to the release position and/or direction of the skill object obtained by moving the virtual joystick object along with the drag of the skill release operation gesture within the releasable range; the skill release operation on the skill object can thus be executed precisely, avoiding a large number of misoperations and improving the precision and accuracy of interaction processing.
In an embodiment, further details are given based on the method described above.

This embodiment of this application further describes the object selection method of this application by taking an FPS game interface as the graphical user interface and virtual character objects as the virtual objects.

As shown in FIG. 17, an object selection method proceeds as follows:

301. Render at least one virtual character object in the game interface.

The game interface contains a two-dimensional or a three-dimensional game picture.

A virtual character object is a user role object representing a player user, or a role object representing a robot.

302. When a skill release trigger operation on a skill object in the skill operation area of the game interface is detected, trigger generation of an object selection instruction.

The skill object may be an object representing a skill in the graphical user interface, such as a skill button.

The skill release trigger operation may be of multiple kinds, such as pressing, clicking, or sliding the skill object.

Optionally, to execute the skill release operation on the skill object precisely, thereby avoiding a large number of misoperations and improving the precision and accuracy of interaction processing: when the skill release trigger operation on the skill object is detected, a skill release auxiliary control object may be displayed at a preset position on the game interface and generation of the object selection instruction triggered, the skill release auxiliary control object including a skill release control aperture object and a virtual joystick object within the radiation range of the skill release control aperture object.

When a drag operation on the virtual joystick object is detected, the skill release position of the skill object is adjusted correspondingly within the graphical user interface, and the object selection instruction is re-triggered, so that a target virtual character object is selected again.

Referring to FIG. 6, in the skill operation area 34 of the graphical user interface, a skill release trigger operation acting on skill object 1 is acquired, and the skill release auxiliary control object is rendered, including the skill release control aperture object 31 and the virtual joystick object 32. A skill release control operation is subsequently triggered, so that the skill release control aperture object 31 keeps its position unchanged.

For example, when it is detected that the virtual joystick object 32 moves following the drag of the skill release operation gesture and deviates from the center of the skill release control aperture object 31, a skill release control operation is triggered that keeps the position of the skill release control aperture object unchanged.

When the user presses and holds skill object 1, the terminal triggers generation of the object selection instruction; the user can then drag the virtual joystick object 32 within the skill release control aperture object 31 to adjust the skill release position of the skill object and regenerate the object selection instruction. That is, the object selection method of this embodiment can be triggered by dragging the virtual joystick object 32.
303. Acquire the field-of-view angle of the game camera according to the object selection instruction.

The game camera is a component, that is, a camera component, which can be used to render the corresponding scene picture in the game interface. It may be a rendering component in Unity and renders the corresponding picture in the graphical user interface according to its configured height, width, and field-of-view angle.

The field-of-view angle of the camera component, also called the FOV (Field of view), is the angular range within which the camera component renders the scene picture.

304. Select, from the at least one virtual character object, the virtual character object within the field-of-view range as the candidate virtual character object for the skill release.

That is, virtual character objects outside the field of view are culled, and the candidate virtual character object for the skill release is determined from those within the field of view.

305. When there is one candidate virtual character object, acquire the offset parameter of the virtual character object relative to the game camera.

The offset parameter is offset information of the candidate virtual character object relative to the game camera (the reference object), and may include at least one of an offset angle and an offset distance; in other embodiments, it may also include an offset direction and the like.

The offset angle is the angle of the candidate virtual character object relative to the reference object in a preset plane or in the three-dimensional scene picture; the preset plane may include the screen where the graphical user interface is located. For example, referring to FIG. 8, the graphical user interface includes target point A representing virtual character object a, target point B representing virtual character object b, and target point C representing reference object c; the offset angle of target point A relative to target point C is θ, and the offset angle of target point B relative to target point C is θ+β.

The offset distance may be the distance of the candidate virtual character object relative to the reference object in the preset plane or the three-dimensional scene picture, that is, the distance between the candidate virtual character object and the reference object in the preset plane or the three-dimensional scene picture; the preset plane may include the screen where the graphical user interface is located. For example, referring to FIG. 8, the offset distance of target point A relative to target point C is La, and the offset distance of target point B relative to target point C is Lb.
306. Determine whether the offset parameter satisfies a preset condition; if so, perform step 307.

The preset condition can be configured according to the user's actual requirements. For example, when the offset parameter includes the offset angle, the preset condition may include: the offset angle is smaller than a preset angle. That is, the step "determining whether the offset parameter of the candidate virtual character object relative to the reference object satisfies the preset condition" may include:

determining whether the offset angle of the candidate virtual character object is smaller than the preset angle; if so, determining that the preset condition is satisfied, and if not, that it is not satisfied.

For another example, when the offset parameter includes the offset distance, the preset condition may include: the offset distance is smaller than a preset distance. That is, the step "determining whether the offset parameter satisfies the preset condition" may include:

determining whether the offset distance is smaller than the preset distance; if so, determining that the preset condition is satisfied, and if not, that it is not satisfied.

For yet another example, when the offset parameter includes both the offset distance and the offset angle, the preset condition may include: the offset angle is within a preset angle range and the offset distance is within a preset distance range. In this case, the step "determining whether the offset parameter satisfies the preset condition" may include:

determining that the offset parameter satisfies the preset condition when the offset angle is within the preset angle range and the offset distance is within the preset distance range.

The preset angle, preset distance, preset angle range, and preset distance range can all be set according to actual requirements.

For example, referring to FIGS. 15-16: when the user presses and holds the right skill button, generation of the object selection instruction is triggered, and the skill release control aperture object 31 and the virtual joystick object 32 are simultaneously displayed at a predetermined position. The terminal determines, according to the object selection instruction, that only virtual character object A is in the game camera's field of view, so the candidate object set contains only virtual character object A. The terminal then acquires A's offset parameter (offset angle and/or offset distance) relative to the camera; when the offset parameter satisfies the preset condition, virtual character object A can be selected as the target virtual character object for the skill release, and a selection box is displayed on virtual character object A to remind the user.

307. Select the candidate virtual character object as the target virtual character object for the skill release.

Optionally, a selection box may also be displayed on the target virtual character object to remind the user.

308. When a skill release confirmation operation on the skill object is detected, perform the skill release operation of the skill object on the target virtual character object.

There may be multiple kinds of skill release confirmation operations; for example, when the skill release auxiliary control object is displayed, the confirmation operation may include the release of the drag operation on the virtual joystick object. That is, when the release of the drag operation is detected, the skill release operation of the skill object is performed on the target virtual character object.

For example, referring to FIGS. 15-16, when the release of the drag operation is detected, the skill release operation of the skill object is performed on the target virtual character object.

As can be seen from the above, this embodiment of this application renders at least one virtual character object in the game interface, determines one candidate virtual character object for the skill release from the at least one virtual character object according to the object selection instruction, acquires the candidate's offset parameter relative to the game camera, and selects the target virtual character object for the skill release according to the offset parameter. This solution can select the skill release target based on the offset parameter of the virtual character object relative to the reference object and determine the release target quickly without the user performing a precise skill release operation, which improves the accuracy of the interaction result and saves terminal resources.
In an embodiment, further details are given based on the method described above.

This embodiment of this application further describes the object selection method of this application by taking an FPS game interface as the graphical user interface and virtual character objects as the virtual objects.

As shown in FIG. 18, an object selection method proceeds as follows:

401. Render at least one virtual character object in the game interface.

The game interface contains a two-dimensional or a three-dimensional game picture.

A virtual character object is a user role object representing a player user, or a role object representing a robot.

402. When a skill release trigger operation on a skill object in the skill operation area of the game interface is detected, trigger generation of an object selection instruction.

The skill object may be an object representing a skill in the graphical user interface, such as a skill button.

The skill release trigger operation may be of multiple kinds, such as pressing, clicking, or sliding the skill object.

Optionally, to execute the skill release operation on the skill object precisely, thereby avoiding a large number of misoperations and improving the precision and accuracy of interaction processing: when the skill release trigger operation on the skill object is detected, a skill release auxiliary control object may be displayed at a preset position on the game interface and generation of the object selection instruction triggered, the skill release auxiliary control object including a skill release control aperture object and a virtual joystick object within the radiation range of the skill release control aperture object.

When a drag operation on the virtual joystick object is detected, the skill release position of the skill object is adjusted correspondingly within the graphical user interface, and the object selection instruction is re-triggered, so that target virtual character objects are selected again.

Referring to FIG. 6, in the skill operation area 40 of the graphical user interface, a skill release trigger operation acting on skill object 1 is acquired, and the skill release auxiliary control object is rendered, including the skill release control aperture object 31 and the virtual joystick object 32. A skill release control operation is subsequently triggered, so that the skill release control aperture object 31 keeps its position unchanged.

For example, when it is detected that the virtual joystick object 32 moves following the drag of the skill release operation gesture and deviates from the center of the skill release control aperture object 31, a skill release control operation is triggered that keeps the position of the skill release control aperture object unchanged.

When the user presses and holds skill object 1, the terminal triggers generation of the object selection instruction; the user can then drag the virtual joystick object 32 within the skill release control aperture object 31 to adjust the skill release position of the skill object and regenerate the object selection instruction. That is, the object selection method of this embodiment can be triggered by dragging the virtual joystick object 32.
403. Acquire the field-of-view angle of the game camera according to the object selection instruction.

The game camera is a component, that is, a camera component, which can be used to render the corresponding scene picture in the game interface. It may be a rendering component in Unity and renders the corresponding picture in the graphical user interface according to its configured height, width, and field-of-view angle.

The field-of-view angle of the camera component, also called the FOV (Field of view), is the angular range within which the camera component renders the scene picture.

404. Select, from the at least one virtual character object, the virtual character objects within the field-of-view range as multiple candidate virtual character objects for the skill release, to obtain a candidate object set.

That is, virtual character objects outside the field of view are culled, and the candidate virtual character objects for the skill release are determined from those within the field of view.

The candidate object set includes at least two candidate virtual character objects.

405. Acquire the offset parameters of the candidate virtual character objects relative to the game camera.

The offset parameter is offset information of a candidate virtual character object relative to the game camera (the reference object), and may include at least one of an offset angle and an offset distance; in other embodiments, it may also include an offset direction and the like.

The offset angle is the angle of the candidate virtual character object relative to the reference object in a preset plane or in the three-dimensional scene picture; the preset plane may include the screen where the graphical user interface is located. For example, referring to FIG. 8, the graphical user interface includes target point A representing virtual character object a, target point B representing virtual character object b, and target point C representing reference object c; the offset angle of target point A relative to target point C is θ, and the offset angle of target point B relative to target point C is θ+β.

The offset distance may be the distance of the candidate virtual character object relative to the reference object in the preset plane or the three-dimensional scene picture, that is, the distance between the candidate virtual character object and the reference object in the preset plane or the three-dimensional scene picture; the preset plane may include the screen where the graphical user interface is located. For example, referring to FIG. 8, the offset distance of target point A relative to target point C is La, and the offset distance of target point B relative to target point C is Lb.
406. Acquire, according to the offset parameters, the selection weight of each candidate virtual character object in the candidate object set.

The selection weights of candidate virtual character objects may be set on the principle that a larger offset parameter value yields a smaller selection weight.

For example, in one embodiment, when the offset parameter includes the offset angle, the selection weights may be set by the rule that a larger offset angle yields a smaller weight and a smaller offset angle a larger one. Referring to FIG. 8, the offset angle of virtual character object a is smaller than that of virtual character object b, so the selection weight of virtual character object a is higher than that of virtual character object b.

For another example, in one embodiment, when the offset parameter includes the offset distance, the selection weights may be set by the rule that a larger offset distance yields a smaller weight and a smaller offset distance a larger one. Referring to FIG. 8, the offset distance of virtual character object a is smaller than that of virtual character object b, so the selection weight of virtual character object a is higher than that of virtual character object b.

For example, when the offset parameter includes both the offset angle and the offset distance, a weight reference parameter of the candidate virtual character object may be obtained from its offset angle and offset distance, and its selection weight is then set based on the weight reference parameter. The weight reference parameter indicates the parameter from which the selection weight is obtained or set, and may be a custom parameter.

The weight reference parameter can be obtained in multiple ways; for example, the weighted sum of the offset angle and the offset distance may be computed and used as the weight reference parameter. For example, referring to FIG. 8, the offset angle of target point A relative to target point C is θ and the offset distance is La; then the weight reference parameter of candidate virtual character object A is QA = θ*p1 + La*p2, where p1 is the weight value of the offset angle and p2 the weight value of the offset distance.

407. Select one or at least two target virtual character objects from the candidate object set according to the selection weights of the candidate virtual character objects.

In one embodiment, when configured to select one target virtual character object, after the selection weight of each candidate virtual character object is obtained, the candidate with the highest selection weight may be chosen as the target virtual character object for the skill release; alternatively, a candidate whose selection weight lies between the highest and the lowest may be chosen as the target virtual character object for the skill release.

For example, referring to FIG. 10 and taking an FPS game interface as the graphical user interface, suppose the game interface includes virtual character objects A and B. When the user presses and holds the right skill button, generation of the object selection instruction is triggered, and the skill release control aperture object 31 and the virtual joystick object 32 are simultaneously displayed at a predetermined position. The terminal may determine, according to the object selection instruction, that virtual character objects A and B are the candidates for the skill release; the terminal then acquires the offset parameters (such as the offset angle and/or offset distance) of objects A and B relative to the reference object (such as the game camera), and calculates from them a selection weight of 0.7 for object A and 0.3 for object B. Candidate virtual character object A can then be selected as the target virtual character object for the skill release, and a selection box is displayed on virtual character object A in the game interface.

In one embodiment, when configured to select multiple target virtual character objects, after the selection weight of each candidate virtual character object is obtained, the top few candidates with the highest selection weights may be chosen as the target virtual character objects for the skill release.

For example, referring to FIG. 13 and taking an FPS game interface as the graphical user interface, suppose the game interface includes virtual character objects A, B, and C. When the user presses and holds the right skill button, generation of the object selection instruction is triggered, and the skill release control aperture object 31 and the virtual joystick object 32 are simultaneously displayed at a predetermined position. The terminal may determine, according to the instruction, that virtual character objects A, B, and C are the candidates for the skill release; the terminal then acquires their offset parameters (such as the offset angle and/or offset distance) relative to the reference object (such as the game camera) and calculates selection weights of 0.5 for object A, 0.3 for object B, and 0.2 for object C. Candidate virtual character objects A and B can then be selected as the target virtual character objects for the skill release, and selection boxes are displayed on virtual character objects A and B in the game interface.
408. When a skill release confirmation operation on the skill object is detected, perform the skill release operation of the skill object on the one or more target virtual character objects.

There may be multiple kinds of skill release confirmation operations; for example, when the skill release auxiliary control object is displayed, the confirmation operation may include the release of the drag operation on the virtual joystick object. That is, when the release of the drag operation is detected, the skill release operation of the skill object is performed on the target virtual character object.

For example, referring to FIG. 10, when the release of the drag operation is detected, the skill release operation of the skill object is performed on the target virtual character object.

As can be seen from the above, this embodiment of this application renders at least one virtual character object in the game interface, determines multiple candidate virtual character objects for the skill release from the at least one virtual character object according to the object selection instruction to obtain a candidate object set, acquires the offset parameters of the candidates in the set relative to the reference object, and selects the target virtual character objects for the skill release from the candidate object set according to the offset parameters. This solution can select the skill release targets based on the offset parameters of the virtual character objects relative to the reference object and determine the release targets quickly without the user performing a precise skill release operation, which improves the accuracy of the interaction result and saves terminal resources.
To better implement the object selection method provided by the embodiments of this application, an object selection apparatus is further provided in one embodiment. The meanings of the terms are the same as in the object selection method above; for specific implementation details, refer to the description in the method embodiments.

In one embodiment, an object selection apparatus is further provided. As shown in FIG. 19, the object selection apparatus may include a display unit 501, a determining unit 502, a parameter acquiring unit 503, and a selecting unit 504, as follows:

a display unit 501, configured to display a graphical user interface, the graphical user interface including at least one virtual object;

a determining unit 502, configured to determine, from the at least one virtual object according to an object selection instruction, candidate virtual objects for a skill release to obtain a candidate object set;

a parameter acquiring unit 503, configured to acquire offset parameters of the candidate virtual objects in the candidate object set relative to a reference object; and

a selecting unit 504, configured to select, from the candidate object set according to the offset parameters, a target virtual object for the skill release.

In one embodiment, the candidate object set includes at least two candidate virtual objects. Referring to FIG. 20, the selecting unit 504 includes:

a weight acquiring subunit 5041, configured to acquire, according to the offset parameters, selection weights of the candidate virtual objects, obtaining the selection weight of each candidate virtual object in the candidate object set; and

a selecting subunit 5042, configured to select one target virtual object or at least two target virtual objects for the skill release from the candidate object set according to the selection weight of each candidate virtual object in the candidate object set.

In one embodiment, the offset parameters include an offset angle and an offset distance; the weight acquiring subunit 5041 is configured to:

acquire a weight reference parameter of the candidate virtual object according to the offset angle and the offset distance; and

acquire the selection weight of the candidate virtual object according to the weight reference parameter of the candidate virtual object.

In one embodiment, the candidate object set includes one candidate virtual object; the selecting unit 504 is configured to: determine whether the offset parameter satisfies a preset condition; and if so, select the candidate virtual object as the target virtual object for the skill release.

In one embodiment, the determining unit 502 is configured to:

acquire the field-of-view angle of a camera component according to the object selection instruction, the camera component being used to render the scene in the graphical user interface; and

determine, from the at least one virtual object according to the field-of-view angle, the candidate virtual objects for the skill to be released.

In one embodiment, the graphical user interface further includes a skill operation area, and the skill operation area includes a skill object. Referring to FIG. 21, the determining unit 502 may be configured to: trigger generation of an object selection instruction when a skill release trigger operation on the skill object is detected; and determine, from the at least one virtual object according to the object selection instruction, the candidate virtual objects for the skill release.

The object selection apparatus further includes an execution unit 506.

The execution unit 506 is configured to perform the skill release operation of the skill object on the target virtual object when a skill release confirmation operation on the skill object is detected. In one embodiment, the determining unit 502 may be configured to display a skill release auxiliary control object at a preset position on the graphical user interface when a skill release trigger operation on the skill object is detected;

and, according to the operation on the skill release auxiliary control object, adjust the skill release position of the skill object correspondingly within the graphical user interface and trigger the object selection instruction.

The skill release auxiliary control object may include a skill release control aperture object and a virtual joystick object within the radiation range of the skill release control aperture object.

The execution unit 506 may be configured to perform the skill release operation of the skill object on the target virtual object when the skill release confirmation operation on the skill object is detected.

For example, the execution unit 506 may be configured to: when a drag operation on the virtual joystick object is detected, adjust the skill release position of the skill object correspondingly within the graphical user interface, re-trigger the object selection instruction, and return to the step of determining candidate virtual objects.

In one embodiment, the execution unit 506 is configured to perform the skill release operation of the skill object on the target virtual object when the release of the drag operation is detected.

In one embodiment, the determining unit 502 may be configured to: automatically trigger generation of an object selection instruction when the skill release is in sustained release mode; and determine, from the at least one virtual object according to the object selection instruction, the candidate virtual objects for the skill release.

In specific implementation, the above units may be implemented as independent entities, or combined arbitrarily and implemented as one or several entities; for their specific implementation, refer to the foregoing method embodiments, and details are not described here again.

The object selection apparatus may be integrated into a terminal, for example, in the form of a client, and the terminal may be a device such as a mobile phone or a tablet computer.

As can be seen from the above, the object selection apparatus of this embodiment of this application uses the display unit 501 to display a graphical user interface including at least one virtual object; the determining unit 502 determines, from the at least one virtual object according to an object selection instruction, candidate virtual objects for the skill release to obtain a candidate object set; the parameter acquiring unit 503 acquires the offset parameters of the candidate virtual objects in the set relative to a reference object; and the selecting unit 504 selects, from the candidate object set according to the offset parameters, the target virtual object for the skill release. This solution can select the skill release target based on the offset parameters of virtual objects relative to the reference object and determine the release target quickly without the user performing a precise skill release operation, which improves the accuracy of the interaction result and saves terminal resources.
Embodiment 4

To better implement the above methods, an embodiment of this application further provides a terminal, which may be a device such as a mobile phone or a tablet computer.

Referring to FIG. 22, an embodiment of this application provides a terminal 600, which may include components such as a processor 601 with one or more processing cores, a memory 602 with one or more computer-readable storage media, a radio frequency (RF) circuit 603, a power supply 604, an input unit 605, and a display unit 606. A person skilled in the art will understand that the terminal structure shown in the figure does not constitute a limitation on the terminal, which may include more or fewer components than shown, combine some components, or use a different component arrangement. Here:

The processor 601 is the control center of the terminal, connecting all parts of the entire terminal through various interfaces and lines. By running or executing the software programs and/or modules stored in the memory 602 and invoking the data stored in the memory 602, it performs the terminal's various functions and processes data, thereby monitoring the terminal as a whole. Optionally, the processor 601 may include one or more processing cores; preferably, it may integrate an application processor, which mainly handles the operating system, user interface, applications, and the like, and a modem processor, which mainly handles wireless communication. It can be understood that the modem processor may alternatively not be integrated into the processor 601.

The memory 602 may be used to store software programs and modules, and the processor 601 executes various functional applications and data processing by running the software programs and modules stored in the memory 602.

The RF circuit 603 may be used for receiving and transmitting signals during information sending and receiving; in particular, after receiving downlink information from a base station, it hands the information to one or more processors 601 for processing, and it sends uplink-related data to the base station.

The terminal further includes a power supply 604 (such as a battery) that supplies power to the components. Preferably, the power supply may be logically connected to the processor 601 through a power management system, so that charging, discharging, and power consumption management are implemented through the power management system. The power supply 604 may also include any components such as one or more DC or AC power sources, a recharging system, a power failure detection circuit, a power converter or inverter, and a power status indicator.

The terminal may further include an input unit 605, which may be used to receive entered numeric or character information and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control.

The terminal may further include a display unit 606, which may be used to display information entered by or provided to the user, as well as the various graphical user interfaces of the terminal; these graphical user interfaces may be composed of graphics, text, icons, video, and any combination thereof. The display unit 606 may include a display panel, which may optionally be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like.

Specifically, in this embodiment, the processor 601 in the terminal loads, according to the following computer-readable instructions, the executable files corresponding to the processes of one or more applications into the memory 602, and the processor 601 runs the applications stored in the memory 602 to implement various functions, as follows:

displaying a graphical user interface including at least one virtual object; determining, from the at least one virtual object according to an object selection instruction, candidate virtual objects for a skill release to obtain a candidate object set; acquiring the offset parameters of the candidate virtual objects in the set relative to a reference object; and selecting, from the candidate object set according to the offset parameters, a target virtual object for the skill release.

A person of ordinary skill in the art will understand that all or some of the steps of the various methods in the above embodiments can be completed by a program instructing relevant hardware. The program may be stored in a computer-readable storage medium, and the storage medium may include a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or the like.

The object selection method, apparatus, terminal, and storage medium provided in the embodiments of this application are described in detail above. The principles and implementations of this application are illustrated herein through specific examples, and the descriptions of the embodiments are only intended to help understand the method and core idea of this application. Meanwhile, a person skilled in the art may make changes to the specific implementations and the application scope according to the idea of this application. In conclusion, the content of this specification should not be construed as a limitation on this application.

Claims (20)

  1. An object selection method, applied to a terminal, comprising:
    displaying a graphical user interface, the graphical user interface comprising virtual objects;
    determining, according to an object selection instruction, a candidate virtual object for skill release from the virtual objects, to obtain a candidate object set;
    obtaining an offset parameter of the candidate virtual object in the candidate object set relative to a reference object; and
    selecting, according to the offset parameter, a target virtual object for skill release from the candidate object set.
  2. The object selection method according to claim 1, wherein the candidate object set comprises at least two candidate virtual objects, and selecting, according to the offset parameter, the target virtual object for skill release from the candidate object set comprises:
    obtaining a selection weight of a candidate virtual object according to the offset parameter, to obtain a selection weight of each candidate virtual object in the candidate object set; and
    selecting, according to the selection weight of each candidate virtual object in the candidate object set, one target virtual object or at least two target virtual objects for skill release from the candidate object set.
  3. The object selection method according to claim 2, wherein the offset parameter comprises an offset angle and an offset distance, and obtaining the selection weight of the candidate virtual object according to the offset parameter comprises:
    obtaining a weight reference parameter of the candidate virtual object according to the offset angle and the offset distance; and
    obtaining the selection weight of the candidate virtual object according to the weight reference parameter of the candidate virtual object.
  4. The object selection method according to claim 1, wherein the candidate object set comprises one candidate virtual object, and selecting, according to the offset parameter, the target virtual object for skill release from the candidate object set comprises:
    determining whether the offset parameter satisfies a preset condition; and
    if so, selecting the candidate virtual object as the target virtual object for skill release.
  5. The object selection method according to claim 1, wherein determining, according to the object selection instruction, the candidate virtual object for skill release from the virtual objects comprises:
    obtaining a field-of-view angle of a camera component according to the object selection instruction, the camera component being used to render a scene in the graphical user interface; and
    determining, according to the field-of-view angle, a candidate virtual object for the to-be-released skill from the virtual objects.
  6. The object selection method according to claim 1, wherein the graphical user interface further comprises a skill operation area, and the skill operation area comprises a skill object; determining, according to the object selection instruction, the candidate virtual object for skill release from the virtual objects comprises:
    when a skill release trigger operation on the skill object is detected, triggering generation of an object selection instruction; and
    determining, according to the object selection instruction, the candidate virtual object for skill release from the virtual objects; and
    after the target virtual object for skill release is selected, the object selection method further comprises: when a skill release confirmation operation on the skill object is detected, performing a skill release operation of the skill object on the target virtual object.
  7. The object selection method according to claim 6, wherein, when the skill release trigger operation on the skill object is detected, triggering generation of the object selection instruction comprises:
    when the skill release trigger operation on the skill object is detected, displaying a skill release auxiliary control object at a preset position on the graphical user interface; and
    according to an operation on the skill release auxiliary control object, controlling the skill release position of the skill object to be adjusted accordingly within the graphical user interface, and triggering the object selection instruction.
  8. The object selection method according to claim 7, wherein the skill release auxiliary control object comprises a skill release control halo object and a virtual joystick object located within the radiation range of the skill release control halo object; the operation on the skill release auxiliary control object comprises a drag operation on the virtual joystick object; and, when the skill release confirmation operation on the skill object is detected, performing the skill release operation of the skill object on the target virtual object comprises:
    when a release of the drag operation is detected, performing the skill release operation of the skill object on the target virtual object.
  9. The object selection method according to claim 1, wherein the graphical user interface further comprises a skill operation area, and the skill operation area comprises a skill object; determining, according to the object selection instruction, the candidate virtual object for skill release from the virtual objects comprises:
    when skill release is in a continuous release mode, automatically triggering generation of an object selection instruction; and
    determining, according to the object selection instruction, the candidate virtual object for skill release from the virtual objects; and
    after the target virtual object for skill release is selected, the object selection method further comprises: when a skill release confirmation operation on the skill object is detected, performing a skill release operation of the skill object on the target virtual object.
  10. A terminal, comprising a memory and a processor, the memory storing computer-readable instructions that, when executed by one or more processors, cause the one or more processors to perform the following steps:
    displaying a graphical user interface, the graphical user interface comprising virtual objects;
    determining, according to an object selection instruction, a candidate virtual object for skill release from the virtual objects, to obtain a candidate object set;
    obtaining an offset parameter of the candidate virtual object in the candidate object set relative to a reference object; and
    selecting, according to the offset parameter, a target virtual object for skill release from the candidate object set.
  11. The terminal according to claim 10, wherein the candidate object set comprises at least two candidate virtual objects, and when performing the step of selecting, according to the offset parameter, the target virtual object for skill release from the candidate object set, the computer-readable instructions cause the processor to perform the following steps:
    obtaining a selection weight of a candidate virtual object according to the offset parameter, to obtain a selection weight of each candidate virtual object in the candidate object set; and
    selecting, according to the selection weight of each candidate virtual object in the candidate object set, one target virtual object or at least two target virtual objects for skill release from the candidate object set.
  12. The terminal according to claim 11, wherein the offset parameter comprises an offset angle and an offset distance, and when performing the step of obtaining the selection weight of the candidate virtual object according to the offset parameter, the computer-readable instructions cause the processor to perform the following steps:
    obtaining a weight reference parameter of the candidate virtual object according to the offset angle and the offset distance; and
    obtaining the selection weight of the candidate virtual object according to the weight reference parameter of the candidate virtual object.
  13. The terminal according to claim 10, wherein the candidate object set comprises one candidate virtual object, and when performing the step of selecting, according to the offset parameter, the target virtual object for skill release from the candidate object set, the computer-readable instructions cause the processor to perform the following steps:
    determining whether the offset parameter satisfies a preset condition; and
    if so, selecting the candidate virtual object as the target virtual object for skill release.
  14. The terminal according to claim 10, wherein, when performing the step of determining, according to the object selection instruction, the candidate virtual object for skill release from the virtual objects, the computer-readable instructions cause the processor to perform the following steps:
    obtaining a field-of-view angle of a camera component according to the object selection instruction, the camera component being used to render a scene in the graphical user interface; and
    determining, according to the field-of-view angle, a candidate virtual object for the to-be-released skill from the virtual objects.
  15. The terminal according to claim 10, wherein the graphical user interface further comprises a skill operation area, and the skill operation area comprises a skill object; when performing the step of determining, according to the object selection instruction, the candidate virtual object for skill release from the virtual objects, the computer-readable instructions cause the processor to perform the following steps:
    when a skill release trigger operation on the skill object is detected, triggering generation of an object selection instruction; and
    determining, according to the object selection instruction, the candidate virtual object for skill release from the virtual objects; and
    after the target virtual object for skill release is selected, the computer-readable instructions further cause the processor to perform the following step: when a skill release confirmation operation on the skill object is detected, performing a skill release operation of the skill object on the target virtual object.
  16. A non-volatile computer-readable storage medium, storing computer-readable instructions that, when executed by one or more processors, cause the one or more processors to perform the following steps:
    displaying a graphical user interface, the graphical user interface comprising virtual objects;
    determining, according to an object selection instruction, a candidate virtual object for skill release from the virtual objects, to obtain a candidate object set;
    obtaining an offset parameter of the candidate virtual object in the candidate object set relative to a reference object; and
    selecting, according to the offset parameter, a target virtual object for skill release from the candidate object set.
  17. The storage medium according to claim 16, wherein the candidate object set comprises at least two candidate virtual objects, and when performing the step of selecting, according to the offset parameter, the target virtual object for skill release from the candidate object set, the computer-readable instructions cause the one or more processors to perform the following steps:
    obtaining a selection weight of a candidate virtual object according to the offset parameter, to obtain a selection weight of each candidate virtual object in the candidate object set; and
    selecting, according to the selection weight of each candidate virtual object in the candidate object set, one target virtual object or at least two target virtual objects for skill release from the candidate object set.
  18. The storage medium according to claim 17, wherein the offset parameter comprises an offset angle and an offset distance, and when performing the step of obtaining the selection weight of the candidate virtual object according to the offset parameter, the computer-readable instructions cause the one or more processors to perform the following steps:
    obtaining a weight reference parameter of the candidate virtual object according to the offset angle and the offset distance; and
    obtaining the selection weight of the candidate virtual object according to the weight reference parameter of the candidate virtual object.
  19. The storage medium according to claim 16, wherein the candidate object set comprises one candidate virtual object, and when performing the step of selecting, according to the offset parameter, the target virtual object for skill release from the candidate object set, the computer-readable instructions cause the one or more processors to perform the following steps:
    determining whether the offset parameter satisfies a preset condition; and
    if so, selecting the candidate virtual object as the target virtual object for skill release.
  20. The storage medium according to claim 16, wherein, when performing the step of determining, according to the object selection instruction, the candidate virtual object for skill release from the virtual objects, the computer-readable instructions cause the one or more processors to perform the following steps:
    obtaining a field-of-view angle of a camera component according to the object selection instruction, the camera component being used to render a scene in the graphical user interface; and
    determining, according to the field-of-view angle, a candidate virtual object for the to-be-released skill from the virtual objects.
PCT/CN2018/114529 2017-11-15 2018-11-08 Object selection method, terminal and storage medium WO2019096055A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/661,270 US11090563B2 (en) 2017-11-15 2019-10-23 Object selection method, terminal and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711132666.2 2017-11-15
CN201711132666.2A CN107837529B (zh) 2017-11-15 2017-11-15 一种对象选择方法、装置、终端和存储介质

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/661,270 Continuation US11090563B2 (en) 2017-11-15 2019-10-23 Object selection method, terminal and storage medium

Publications (1)

Publication Number Publication Date
WO2019096055A1 true WO2019096055A1 (zh) 2019-05-23

Family

ID=61678856

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/114529 WO2019096055A1 (zh) 2017-11-15 2018-11-08 对象选择方法、终端和存储介质

Country Status (3)

Country Link
US (1) US11090563B2 (zh)
CN (1) CN107837529B (zh)
WO (1) WO2019096055A1 (zh)

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107837529B (zh) 2017-11-15 2019-08-27 腾讯科技(上海)有限公司 Object selection method, apparatus, terminal and storage medium
CN109117050A (zh) * 2018-08-27 2019-01-01 广州要玩娱乐网络技术股份有限公司 Multi-unit, multi-skill operation method, computer storage medium and terminal
CN109151320B (zh) * 2018-09-29 2022-04-22 联想(北京)有限公司 Target object selection method and apparatus
CN109758764B (zh) * 2018-12-11 2023-02-03 网易(杭州)网络有限公司 Game skill control method and apparatus, electronic device, and storage medium
CN109999493B (zh) * 2019-04-03 2022-04-15 网易(杭州)网络有限公司 In-game information processing method and apparatus, mobile terminal, and readable storage medium
CN110115838B (zh) * 2019-05-30 2021-10-29 腾讯科技(深圳)有限公司 Method, apparatus, device and storage medium for generating mark information in a virtual environment
CN110215709B (zh) * 2019-06-04 2022-12-06 网易(杭州)网络有限公司 Object selection method and apparatus, storage medium, and electronic device
US10943388B1 (en) * 2019-09-06 2021-03-09 Zspace, Inc. Intelligent stylus beam and assisted probabilistic input to element mapping in 2D and 3D graphical user interfaces
CN111151007B (zh) * 2019-12-27 2023-09-08 上海米哈游天命科技有限公司 Object selection method, apparatus, terminal, and storage medium
CN111589126B (zh) * 2020-04-23 2023-07-04 腾讯科技(深圳)有限公司 Virtual object control method, apparatus, device, and storage medium
CN111672117B (zh) * 2020-06-05 2021-07-09 腾讯科技(深圳)有限公司 Virtual object selection method, apparatus, device, and storage medium
CN111672118B (zh) * 2020-06-05 2022-02-18 腾讯科技(深圳)有限公司 Virtual object aiming method, apparatus, device, and medium
CN112107861B (zh) * 2020-09-18 2023-03-24 腾讯科技(深圳)有限公司 Virtual prop control method and apparatus, storage medium, and electronic device
JP6928709B1 (ja) * 2020-12-28 2021-09-01 プラチナゲームズ株式会社 Information processing program, information processing apparatus, and information processing method
CN113101662B (zh) * 2021-03-29 2023-10-17 北京达佳互联信息技术有限公司 Method, apparatus, and storage medium for determining a target virtual attack object
CN113239172A (zh) * 2021-06-09 2021-08-10 腾讯科技(深圳)有限公司 Session interaction method, apparatus, device, and storage medium in a robot group
CN113599829B (zh) * 2021-08-06 2023-08-22 腾讯科技(深圳)有限公司 Virtual object selection method, apparatus, terminal, and storage medium
CN114425161A (zh) * 2022-01-25 2022-05-03 网易(杭州)网络有限公司 Target locking method, apparatus, electronic device, and storage medium
CN117942570A (zh) * 2022-10-18 2024-04-30 腾讯科技(成都)有限公司 Virtual object interaction method, apparatus, device, and computer-readable storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007215756A (ja) * 2006-02-16 2007-08-30 Konami Digital Entertainment:Kk Game program, game apparatus and game method
US20130241829A1 (en) * 2012-03-16 2013-09-19 Samsung Electronics Co., Ltd. User interface method of touch screen terminal and apparatus therefor
CN105194873A (zh) * 2015-10-10 2015-12-30 腾讯科技(深圳)有限公司 Information processing method, terminal and computer storage medium
CN105597325A (zh) * 2015-10-30 2016-05-25 广州银汉科技有限公司 Aiming assistance method and system
CN107029428A (zh) * 2016-02-04 2017-08-11 网易(杭州)网络有限公司 Control system, method and terminal for a shooting game
CN107837529A (zh) * 2017-11-15 2018-03-27 腾讯科技(上海)有限公司 Object selection method, apparatus, terminal and storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8210943B1 (en) * 2006-05-06 2012-07-03 Sony Computer Entertainment America Llc Target interface
US9327191B2 (en) * 2006-05-08 2016-05-03 Nintendo Co., Ltd. Method and apparatus for enhanced virtual camera control within 3D video games or other computer graphics presentations providing intelligent automatic 3D-assist for third person viewpoints
JP5563633B2 (ja) * 2012-08-31 2014-07-30 株式会社スクウェア・エニックス Video game processing apparatus and video game processing program
US10350487B2 (en) * 2013-06-11 2019-07-16 We Made Io Co., Ltd. Method and apparatus for automatically targeting target objects in a computer game
US20150231509A1 (en) * 2014-02-17 2015-08-20 Xaviant, LLC (a GA Limited Liability Company) System, Method, and Apparatus for Smart Targeting


Also Published As

Publication number Publication date
CN107837529A (zh) 2018-03-27
US20200054947A1 (en) 2020-02-20
US11090563B2 (en) 2021-08-17
CN107837529B (zh) 2019-08-27


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 18879836; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 18879836; Country of ref document: EP; Kind code of ref document: A1)