WO2020168877A1 - Object control method and apparatus, storage medium, and electronic apparatus - Google Patents

Object control method and apparatus, storage medium, and electronic apparatus

Info

Publication number
WO2020168877A1
WO2020168877A1 (PCT/CN2020/072635)
Authority
WO
WIPO (PCT)
Prior art keywords
target
virtual
button
virtual button
state
Prior art date
Application number
PCT/CN2020/072635
Other languages
English (en)
French (fr)
Inventor
黄雄飞 (Huang Xiongfei)
Original Assignee
Tencent Technology (Shenzhen) Company Limited (腾讯科技(深圳)有限公司)
Priority date
Filing date
Publication date
Application filed by Tencent Technology (Shenzhen) Company Limited
Priority to KR1020217013308A (KR102549758B1)
Priority to SG11202103686VA
Priority to JP2021536060A (JP7238136B2)
Publication of WO2020168877A1
Priority to US17/320,051 (US11938400B2)
Priority to US18/444,415 (US20240189711A1)

Classifications

    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/44: Processing input control signals of video game devices involving timing of operations, e.g. performing an action within a time slot
    • A63F 13/2145: Input arrangements characterised by their sensors, purposes or types, for locating contacts on a surface, the surface being also a display device, e.g. touch screens
    • A63F 13/22: Setup operations, e.g. calibration, key configuration or button assignment
    • A63F 13/23: Input arrangements for interfacing with the game device, e.g. specific interfaces between game controller and console
    • A63F 13/42: Processing input control signals by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
    • A63F 13/422: Mapping input signals into game commands automatically for the purpose of assisting the player, e.g. automatic braking in a driving game
    • A63F 13/55: Controlling game characters or game objects based on the game progress
    • A63F 13/60: Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
    • A63F 13/803: Special adaptations for executing a specific game genre or game mode, driving vehicles or craft, e.g. cars, airplanes, ships, robots or tanks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/2431: Pattern recognition; classification techniques relating to the number of classes; multiple classes
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00: Machine learning
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/08: Learning methods

Definitions

  • This application relates to the field of computers, and in particular to an object control technology.
  • In current racing game applications, game scenes are often designed with different racing tracks in order to enrich the player's experience, and the racing tracks include curves with different turning angles.
  • In order to shorten the time it takes for the player-controlled target object to drive through a curve, the player often uses control buttons provided in the human-computer interaction interface to make the target object perform a drifting action.
  • While controlling the target object to drift, the player is usually required to manually adjust the control operation on the control buttons according to game experience, so as to determine the drift angle adopted by the target object during the drift and make the target object continue driving after drifting at that angle.
  • If the player is not proficient at controlling the target object, drift errors are likely to occur, which affects the result of the race.
  • In summary, the object control method provided in the related art places high operational demands on players, and control accuracy is poor when the controlled object is made to drift.
  • The embodiments of the present application provide an object control method and apparatus, a storage medium, and an electronic apparatus, so as to at least solve the technical problem of poor control accuracy in the process of controlling an object to perform a target action.
  • According to one aspect of the embodiments of the present application, an object control method applied to a terminal device is provided, including: acquiring an operation instruction generated by performing a long-press operation on a first virtual button and a second virtual button in a human-computer interaction interface displayed by a client, where the first virtual button is used to adjust the forward direction of a target object controlled through the client, and the second virtual button is used to trigger the target object to perform a target action.
  • In response to the operation instruction, the target object is controlled to perform the target action on the current path, and a target angle generated by the target object in the process of performing the target action is detected, the target angle being the angle between the forward direction of the target object and the sliding direction of the target object.
  • When it is detected that the long-press operation is currently being performed on the first virtual button and the second virtual button, and the target angle reaches a first angle threshold matching the current path, the button states of the first virtual button and the second virtual button are adjusted to a disabled state, so that the target object enters a state of passively performing the target action, where the disabled state indicates that the button response logic of the first virtual button and the second virtual button is suspended.
  • According to another aspect of the embodiments of the present application, an object control apparatus is provided, including: a first acquiring unit, configured to acquire an operation instruction generated by performing a long-press operation on a first virtual button and a second virtual button in a human-computer interaction interface displayed by a client, where the first virtual button is used to adjust the forward direction of a target object controlled through the client, and the second virtual button is used to trigger the target object to perform a target action; a first control unit, configured to, in response to the operation instruction, control the target object to perform the target action on the current path and detect a target angle generated by the target object in the process of performing the target action, the target angle being the angle between the forward direction of the target object and the sliding direction of the target object; and a first adjusting unit, configured to, when it is detected that the long-press operation is currently being performed on the first virtual button and the second virtual button and the target angle reaches a first angle threshold matching the current path, adjust the button states of the first virtual button and the second virtual button to a disabled state, so that the target object enters a state of passively performing the target action.
  • According to yet another aspect of the embodiments of the present application, a storage medium is further provided, in which a computer program is stored, and the computer program is configured to execute the above object control method when run.
  • According to yet another aspect of the embodiments of the present application, an electronic apparatus is further provided, including a memory, a processor, and a computer program stored in the memory and runnable on the processor, where the processor executes the above object control method through the computer program.
  • According to yet another aspect of the embodiments of the present application, a computer program product including instructions is further provided, which, when run on a computer, causes the computer to execute the above object control method.
  • In the object control method provided in the embodiments of the present application, while the client of a human-computer interaction application is running, an operation instruction generated by performing a long-press operation on the first virtual button and the second virtual button in the human-computer interaction interface displayed by the client is acquired.
  • In response to the operation instruction, the target object is controlled to perform the target action on the current path, and the target angle generated by the target object in the process of performing the target action is detected.
  • When it is detected that the long-press operation is currently being performed on the first virtual button and the second virtual button, and the target angle reaches the first angle threshold matching the current path, the button states of the first virtual button and the second virtual button are adjusted to the disabled state.
  • In other words, while the target object controlled by the client performs the target action on the current path, the relative relationship between the generated target angle and the first angle threshold is detected, and the target object is thereby controlled to automatically enter the state of passively performing the target action instead of relying on the player's game experience.
  • The player therefore does not need to manually adjust the control operation according to game experience to determine the angle required for the target object to perform the target action, which reduces the difficulty of the player's operation, improves the control accuracy when controlling the target object to perform the target action, and overcomes the problem in the related art of poor control accuracy caused by the player being unskilled at controlling the target object.
  • FIG. 1 is a schematic diagram of a hardware environment of an optional object control method according to an embodiment of the present application;
  • FIG. 2 is a schematic diagram of a hardware environment of another optional object control method according to an embodiment of the present application;
  • FIG. 3 is a flowchart of an optional object control method according to an embodiment of the present application;
  • FIG. 4 is a schematic diagram of an optional object control method according to an embodiment of the present application;
  • FIG. 5 is a schematic diagram of another optional object control method according to an embodiment of the present application;
  • FIG. 6 is a flowchart of another optional object control method according to an embodiment of the present application;
  • FIG. 7 is a schematic diagram of yet another optional object control method according to an embodiment of the present application;
  • FIG. 8 is a schematic diagram of yet another optional object control method according to an embodiment of the present application;
  • FIG. 9 is a schematic diagram of a target classification model in an optional object control method according to an embodiment of the present application;
  • FIG. 10 is a schematic diagram of an initial classification model in yet another optional object control method according to an embodiment of the present application;
  • FIG. 11 is a schematic diagram of a configuration interface of an optional object control method according to an embodiment of the present application;
  • FIG. 12 is a schematic structural diagram of an optional object control apparatus according to an embodiment of the present application;
  • FIG. 13 is a schematic structural diagram of an optional electronic apparatus according to an embodiment of the present application.
  • According to an aspect of the embodiments of the present application, an object control method is provided. As an optional implementation, the object control method may be, but is not limited to being, applied to the hardware environment shown in FIG. 1.
  • It is assumed that a client of a human-computer interaction application (shown in FIG. 1 as a racing game application client) is installed in the user equipment 102.
  • While the client is running, in step S102, an operation instruction generated by performing a long-press operation on the first virtual button (the direction keys shown in the lower left corner of FIG. 1) and the second virtual button (the action control button shown in the lower right corner of FIG. 1) in the human-computer interaction interface displayed by the client is acquired.
  • The user equipment 102 includes a human-computer interaction screen 104, a processor 106, and a memory 108.
  • The human-computer interaction screen 104 is used to obtain human-computer interaction operations.
  • The processor 106 is used to generate corresponding operation instructions according to the human-computer interaction operations and, in response to the operation instructions, control the target object to perform corresponding actions.
  • The target object is a virtual object controlled by the user through the client, such as a racing car in a racing game application.
  • The memory 108 is used to store the above operation instructions and attribute information related to the target object.
  • The user equipment 102 may then execute step S104 to send the operation instruction to the server 112 via the network 110.
  • The server 112 includes a database 114 and a processing engine 116.
  • In step S106, the server 112 calls the processing engine 116 to determine, from the database 114, a first angle threshold that matches the current path on which the target object is located.
  • The server 112 then executes step S108 to send the determined first angle threshold to the user equipment 102 via the network 110, so that the user equipment 102 uses the acquired first angle threshold to execute step S110 and controls the target object to perform the target action on the current path. A minimal sketch of this client-server exchange follows.
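  • The following is a minimal sketch of the server-side lookup in steps S104 to S108, assuming a simple mapping from path identifiers to angle thresholds; the names (ProcessingEngine, lookup_threshold, the instruction format) are illustrative and not taken from the patent.

```python
# Illustrative sketch of steps S104-S108: the client forwards the operation
# instruction together with the current path id, and the server's processing
# engine looks up the matching first angle threshold in its database.
# All names and values here are hypothetical; the patent does not specify an API.

DEFAULT_THRESHOLD_DEG = 30.0  # assumed fallback value, not from the patent

class ProcessingEngine:
    def __init__(self, path_thresholds):
        # path_thresholds: dict mapping path_id -> first angle threshold (degrees)
        self._path_thresholds = path_thresholds

    def lookup_threshold(self, path_id):
        """Return the first angle threshold that matches the given path (S106)."""
        return self._path_thresholds.get(path_id, DEFAULT_THRESHOLD_DEG)

def handle_operation_instruction(engine, instruction):
    """Server entry point: receives the instruction sent in S104 and returns
    the threshold that the client uses in S110 (S108)."""
    threshold = engine.lookup_threshold(instruction["path_id"])
    return {"path_id": instruction["path_id"], "first_angle_threshold": threshold}

# Usage example
engine = ProcessingEngine({"curve_07": 42.5})
print(handle_operation_instruction(engine, {"path_id": "curve_07", "op": "long_press_both"}))
```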
  • In addition, as an optional implementation, the above object control method may also be, but is not limited to being, applied to the hardware environment shown in FIG. 2. It is again assumed that a client of a human-computer interaction application (shown in FIG. 2 as a racing game application client) is installed in the user equipment 102.
  • While the client is running, in step S202, an operation instruction generated by performing a long-press operation on the first virtual button (the direction keys shown in the lower left corner of FIG. 2) and the second virtual button (the action control button shown in the lower right corner of FIG. 2) in the human-computer interaction interface displayed by the client is acquired.
  • The subsequent operations after the operation instruction is acquired may be, but are not limited to being, performed in an independent processing device with strong processing capability, without data interaction with the server 112. If the independent processing device is still the user equipment 102, the processor 106 in the user equipment 102 responds to the operation instruction and performs steps S204-S208: controlling the target object to perform the target action on the current path, and detecting the target angle generated by the target object in the process of performing the target action; then, when it is detected that the long-press operation is currently being performed on the first virtual button and the second virtual button, and the target angle reaches the first angle threshold matching the current path, adjusting the button states of the first virtual button and the second virtual button to the disabled state, so that the target object enters the state of passively performing the target action.
  • In this case, the first angle threshold may be, but is not limited to being, pre-stored in the memory 108 of the user equipment 102. The above is only an example, and no limitation is imposed on this in this embodiment.
  • It should be noted that, in the object control method provided in this embodiment, while the client of a human-computer interaction application is running, an operation instruction generated by performing a long-press operation on the first virtual button and the second virtual button in the human-computer interaction interface displayed by the client is acquired.
  • In response to the operation instruction, the target object controlled by the client is controlled to perform the target action on the current path, and the target angle generated by the target object in the process of performing the target action is detected.
  • When it is detected that the long-press operation is currently being performed on the first virtual button and the second virtual button, and the target angle reaches the first angle threshold matching the current path, the button states of the first virtual button and the second virtual button are adjusted to the disabled state.
  • In other words, while the target object controlled by the client performs the target action on the current path, the relative relationship between the generated target angle and the first angle threshold is detected, and the target object is thereby controlled to automatically enter the state of passively performing the target action instead of relying on the player's game experience, so that the player does not need to manually adjust the control operation according to game experience to determine the angle required for the target object to perform the target action. This reduces the difficulty of the player's operation, improves the control accuracy when the target action is performed, and overcomes the problem in the related art of poor control accuracy caused by the player being unskilled at controlling the target object.
  • Optionally, in this embodiment, the above user equipment may be, but is not limited to, a terminal device that supports running an application client, such as a mobile phone, a tablet computer, a notebook computer, or a PC.
  • The above server and user equipment may, but are not limited to, exchange data through a network, and the network may include, but is not limited to, a wireless network or a wired network.
  • The wireless network includes Bluetooth, WIFI, and other networks that implement wireless communication; the wired network may include, but is not limited to, a wide area network, a metropolitan area network, and a local area network. The above is only an example, and no limitation is imposed on this in this embodiment.
  • Optionally, as an optional implementation, as shown in FIG. 3, the above object control method includes: S302, acquiring an operation instruction generated by performing a long-press operation on the first virtual button and the second virtual button in the human-computer interaction interface displayed by the client; S304, in response to the operation instruction, controlling the target object to perform the target action on the current path and detecting the target angle generated by the target object in the process of performing the target action; and S306, when it is detected that the long-press operation is currently being performed on the first virtual button and the second virtual button and the target angle reaches the first angle threshold matching the current path, adjusting the button states of the first virtual button and the second virtual button to the disabled state, so that the target object enters the state of passively performing the target action.
  • Optionally, in this embodiment, the above object control method may be, but is not limited to being, applied to scenarios in which an object controlled by the client of a human-computer interaction application is controlled automatically. For example, the human-computer interaction application may be, but is not limited to, a racing game application.
  • The target object may correspond to a virtual object manipulated in the racing game application, such as a virtual character, virtual equipment, or a virtual vehicle.
  • The above target action may be, but is not limited to, a drifting action in a racing game scene, and the corresponding target angle may be, but is not limited to, a drift angle.
  • In other words, when the target object performs a drifting action on the current path, with the solution provided in this embodiment the target angle generated by the target object during the drifting action can be obtained in real time and compared with the first angle threshold matching the current path. When the comparison result indicates that the target angle has reached the first angle threshold, the button response logic of the first virtual button and the second virtual button in the human-computer interaction interface is adjusted to the suspended state (i.e. the disabled state), so that the target object automatically enters a passive drift state. The above is only an example, and no limitation is imposed on this in this embodiment.
  • It should be noted that, while the target object controlled by the client performs the target action on the current path, the relative relationship between the target angle generated by the target object and the first angle threshold is detected to control the target object to automatically enter the state of passively performing the target action, instead of relying on the player's game experience. The player therefore does not need to manually adjust the control operation according to game experience to determine the angle required for the target object to perform the target action, which reduces the difficulty of the player's operation and improves the control accuracy when the target action is performed.
  • Optionally, in this embodiment, the target action performed by the target object on the current path may be, but is not limited to, a drifting action in a racing scene. It should be noted that the drifting action is triggered only when it is detected that the first virtual button and the second virtual button are being long-pressed at the same time.
  • The first virtual button may be, but is not limited to, a direction key used to control the forward direction of the target object, such as the "left direction key" and "right direction key" shown in FIG. 4; the second virtual button may be, but is not limited to, a trigger control key used to trigger the target object to perform the target action, such as the "drift start key" shown in FIG. 4.
  • When a long-press operation is performed, the display state of the corresponding virtual button indicates that the button is in the "operated state": for example, the direction key being long-pressed is displayed with a "solid line", and the drift start key (i.e. the second virtual button) being long-pressed is displayed with "grid filling".
  • Further, in this embodiment, the button state of a virtual button may include, but is not limited to, a valid state and a disabled state.
  • The disabled state is used to indicate that the button response logic of the virtual button is suspended; that is, when a pressing operation (such as a long-press operation) is performed on a virtual button in the disabled state, the button response logic of the virtual button is not executed in the background.
  • The valid state is used to indicate that the button response logic of the virtual button is normal; that is, when a pressing operation (such as a long-press operation) is performed on a virtual button in the valid state, the button response logic of the virtual button is executed in the background.
  • Optionally, in this embodiment, after the button states of the first virtual button and the second virtual button are adjusted to the disabled state and the target object enters the state of passively performing the target action, the display states of the first virtual button and the second virtual button in the disabled state may be, but are not limited to being, kept consistent in the human-computer interaction interface with their display states while the long-press operation is being performed. That is, when the target angle reaches the first angle threshold, the display state of the virtual buttons is maintained, so that the target object is controlled to complete the target action on the current path without the user perceiving the change; a sketch of this separation between response logic and display state is given below.
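  • A minimal sketch of this button model follows, assuming a simple class in which the response-logic flag and the on-screen appearance are tracked separately; the names (VirtualButton, on_press, set_disabled) are illustrative, not from the patent.

```python
# Hypothetical sketch: the button state (valid/disabled) gates the response
# logic, while the visual state shown in the interface is tracked separately,
# so disabling a button does not change what the player sees.

class VirtualButton:
    def __init__(self, name, handler):
        self.name = name
        self._handler = handler        # button response logic
        self.enabled = True            # valid state (True) vs disabled state (False)
        self.visual_state = "idle"     # what the UI renders: "idle" or "pressed"

    def set_disabled(self, disabled, keep_visual=True):
        """Suspend or restore the response logic; by default the visual state
        is left untouched so the change is imperceptible to the player."""
        self.enabled = not disabled
        if not keep_visual:
            self.visual_state = "idle"

    def on_press(self):
        self.visual_state = "pressed"
        if self.enabled:               # response logic only runs in the valid state
            self._handler()

# Usage: once the target angle reaches the threshold, suspend both buttons
# but keep their pressed appearance.
steer_right = VirtualButton("right direction key", handler=lambda: print("steering"))
drift_start = VirtualButton("drift start key", handler=lambda: print("drifting"))
for button in (steer_right, drift_start):
    button.on_press()
    button.set_disabled(True)          # long press continues, but logic is suspended
```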
  • Optionally, in this embodiment, the aforementioned first angle threshold (hereinafter also referred to as the sensitivity) may be determined by a target classification model.
  • The target classification model is a model obtained after machine training with sample data and is used to determine the angle threshold that matches the path information of a path, the angle threshold being the angle at which the target action is completed in that path in the shortest time.
  • In addition, the first angle threshold determined by the target classification model may also be optimized through configuration. The configuration method may include, but is not limited to, performing a configuration operation on an angle threshold configuration item in the configuration interface of the client.
  • The angle threshold configuration item can be used to flexibly configure the angle threshold, so as to improve the flexibility of controlling the target object to perform the target action.
  • Optionally, in this embodiment, the background of the client may directly monitor the target angle of the controlled target object in real time, or may, but is not limited to, first acquire the forward direction of the target object and the sliding direction of the target object and then calculate the target angle of the target object from them.
  • For example, in a racing game application, the forward direction of the target object may correspond to the direction in which the front of the virtual vehicle points, and the sliding direction of the target object may correspond to the direction in which the body of the virtual vehicle is actually sliding; these two directions are used to determine the target angle generated by the virtual vehicle while performing a drifting action, as sketched below.
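  • As a minimal sketch, assuming the two directions are available as 2D vectors (the function name and vector format are illustrative), the target angle can be computed as the unsigned angle between the forward direction and the sliding direction:

```python
import math

def target_angle_deg(forward_dir, sliding_dir):
    """Unsigned angle, in degrees, between the forward direction (front of the
    vehicle) and the sliding direction (direction of the actual velocity).
    Both inputs are 2D vectors (x, y); names are illustrative only."""
    fx, fy = forward_dir
    sx, sy = sliding_dir
    dot = fx * sx + fy * sy
    norm = math.hypot(fx, fy) * math.hypot(sx, sy)
    if norm == 0.0:
        return 0.0  # undefined when either vector is zero; treat as no drift
    cos_angle = max(-1.0, min(1.0, dot / norm))
    return math.degrees(math.acos(cos_angle))

# Example: the front points along +x while the car slides 30 degrees off that direction.
print(target_angle_deg((1.0, 0.0), (math.cos(math.radians(30)), math.sin(math.radians(30)))))
```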
  • Optionally, in this embodiment, after the button states of the first virtual button and the second virtual button are adjusted to the disabled state and the target object enters the state of passively performing the target action, the remaining time for the target object to complete the target action is determined according to the frictional resistance matching the current path, and the target object is controlled to complete the target action within that remaining time; a simple sketch follows.
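  • The patent does not give a formula for this remaining time, so the following is only an assumed sketch in which the residual drift angle decays at a friction-dependent rate until the drift finishes; the linear model and all parameter values are illustrative.

```python
# Assumed sketch only: estimate how long the passive drift lasts by letting the
# frictional resistance matched to the current path reduce the drift angle at a
# constant rate. decay_per_unit_friction and the linear model are illustrative.

def remaining_drift_time(target_angle_deg, friction_resistance, decay_per_unit_friction=5.0):
    """Rough remaining time (seconds) for the passive drift to finish, assuming the
    drift angle shrinks linearly at a rate proportional to the path's friction."""
    decay_rate_deg_per_s = friction_resistance * decay_per_unit_friction
    if decay_rate_deg_per_s <= 0.0:
        raise ValueError("friction resistance must be positive")
    return target_angle_deg / decay_rate_deg_per_s

# Example: a 40-degree drift angle on a path with frictional resistance 2.0.
print(remaining_drift_time(40.0, 2.0))  # -> 4.0 seconds under these assumptions
```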
  • Specifically, the description is given with reference to steps S602-S608 shown in FIG. 6, still taking a racing game application client as an example: the target object is a virtual vehicle participating in a race and controlled by the client, the target action is a drifting action, the first virtual button is a direction key, and the second virtual button is a drift start key.
  • In step S602, the client acquires the operation instruction generated by simultaneously long-pressing the direction key (assume the right direction key) and the drift start key.
  • The client then executes step S604 to control the corresponding virtual vehicle to start drifting on the current path, during which the target angle of the virtual vehicle continues to increase.
  • In step S606, it is detected whether the generated target angle has reached the first angle threshold. If the target angle has not reached the first angle threshold, the process returns to step S604 and the steering force is maintained to continue the steering drift; if the target angle has reached the first angle threshold, step S608 is executed to control the virtual vehicle to enter the passive drift state. A sketch of this control loop is given below.
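  • Below is an illustrative per-frame sketch of this S602-S608 loop; the target angle is assumed to have been measured separately (for example with the vector sketch above), and every name is hypothetical rather than taken from the patent.

```python
# Illustrative sketch of one pass through steps S602-S608. The target angle is
# assumed to have been measured elsewhere; all names here are hypothetical.

def drift_step(state, buttons_held, target_angle, first_angle_threshold):
    """state: dict with 'passive_drift' and 'buttons_disabled' flags.
    buttons_held: True while the direction key and the drift start key are both long-pressed."""
    if state["passive_drift"] or not buttons_held:
        return state                  # nothing to do outside an active player-driven drift

    # S604: the steering force is maintained, so the target angle keeps increasing frame by frame.
    # S606/S608: once the angle reaches the threshold matched to the current path,
    # suspend both buttons (their displayed state is unchanged) and drift passively.
    if target_angle >= first_angle_threshold:
        state["buttons_disabled"] = True
        state["passive_drift"] = True
    return state

# Usage: a 45-degree drift against a 40-degree threshold triggers the passive drift state.
state = {"passive_drift": False, "buttons_disabled": False}
print(drift_step(state, buttons_held=True, target_angle=45.0, first_angle_threshold=40.0))
```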
  • The target angle generated by the virtual vehicle during the drifting action is shown in FIG. 7: it is the angle between the forward direction of the virtual vehicle (i.e. the direction of the front of the vehicle) and the sliding direction of the virtual vehicle (i.e. the direction of the actual velocity vector of the vehicle).
  • In this embodiment, while the target object controlled by the client performs the target action on the current path, the relative relationship between the target angle generated by the target object and the first angle threshold is detected to control the target object to automatically enter the state of passively performing the target action, instead of relying on the player's game experience. The player therefore does not need to manually adjust the control operation according to game experience to determine the angle required for the target object to perform the target action, which reduces the difficulty of the player's operation, improves the control accuracy when the target action is performed, and overcomes the problem in the related art of poor control accuracy caused by the player being unskilled at controlling the target object.
  • As an optional solution, after the button states of the first virtual button and the second virtual button are adjusted to the disabled state, the method further includes: in the human-computer interaction interface, keeping the display states of the first virtual button and the second virtual button that are in the disabled state consistent with their display states while the long-press operation is being performed on them.
  • Optionally, in this embodiment, the display state of a virtual button may be, but is not limited to being, presented through the UI representation of the virtual button in the human-computer interaction interface. When an operation is performed on a virtual button, its display state is the "operated state"; when no operation is performed on the virtual button, its display state is the "not operated state".
  • Specifically, the description is given with reference to the example shown in FIG. 8: the first virtual button may be, but is not limited to, the direction keys shown in the lower left corner of FIG. 8, including the "left direction key" and the "right direction key", and the second virtual button may be, but is not limited to, the "drift start key" shown in the lower right corner of FIG. 8.
  • When it is detected that the user long-presses the "right direction key", its display state is the "operated state", shown in FIG. 8 as a "solid line", while the "left direction key", on which no user operation is detected, is in the "not operated state", shown in FIG. 8 as a "dashed line"; when it is detected that the user long-presses the "drift start key", its display state is the "operated state", shown in FIG. 8 as "grid filling".
  • Further, the target angle generated by the virtual vehicle during the drifting action is detected; under the combined effect of the steering force F and the frictional resistance f produced while the drifting action is performed, the target angle continues to increase.
  • When it is detected that the "right direction key" and the "drift start key" are being long-pressed and the target angle reaches the first angle threshold matching the current path, the button states of the "right direction key" and the "drift start key" are adjusted to the disabled state, so that the virtual vehicle enters the passive drift state.
  • In this embodiment, the display states of the first virtual button and the second virtual button that are in the disabled state are controlled to remain consistent with their display states while the buttons are long-pressed, so that the target object is controlled to complete the target action on the current path without the user perceiving the change.
  • In other words, when the user does not perceive it, the target action on the current path is automatically completed according to the first angle threshold, which reduces the difficulty of the user's operation and avoids errors caused by unskilled operation.
  • As an optional solution, before the operation instruction generated by performing the long-press operation on the first virtual button and the second virtual button in the human-computer interaction interface displayed by the client is acquired, the method further includes: inputting the path information of the current path into the target classification model, and determining, according to the output result of the target classification model, the first angle threshold that matches the current path.
  • Optionally, in this embodiment, the target classification model may be, but is not limited to being, used to classify the driving difficulty of the current path according to the path information of the current path, determine the first angle threshold matching the current path according to the classification result, and output the first angle threshold as the output result. The first angle threshold may also be used, but is not limited to being used, to indicate the sensitivity with which the player controls the target object through the current path.
  • Optionally, in this embodiment, the target classification model may be as shown in FIG. 9. Classifying the driving difficulty of the current path according to the path information of the current path may include, but is not limited to: extracting the path features of the current path (path features 1-k shown in FIG. 9) through the target classification model 900, where the path features may include, but are not limited to, curve angle, curve length, and frictional resistance, and storing these path features in the database 902; then performing deep learning on the path features of the current path in the database 902 through the embedding function 904-2 and the neural network layer 904-4 in the deep network 904, and determining the classification level of the driving difficulty of the current path through the classifier 906.
  • Further, the angle threshold adapted to the classification level is obtained as the first angle threshold matching the current path, giving the output result 908. The first angle threshold is the angle at which the target action is performed in the shortest time in a path of that classification level. A sketch of such a model is given below.
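  • As a rough sketch of such a pipeline (the layer sizes, the use of PyTorch, the number of difficulty levels, and the mapping from level to threshold are all assumptions, not details from the patent):

```python
import torch
import torch.nn as nn

NUM_PATH_FEATURES = 3      # e.g. curve angle, curve length, frictional resistance (assumed)
NUM_DIFFICULTY_LEVELS = 5  # assumed number of classification levels

# Assumed mapping from difficulty level to the angle threshold that completes
# the drift fastest for paths of that level; the values are illustrative only.
LEVEL_TO_THRESHOLD_DEG = {0: 20.0, 1: 30.0, 2: 40.0, 3: 50.0, 4: 60.0}

class TargetClassificationModel(nn.Module):
    """Sketch of FIG. 9: embedding of path features -> neural network layer -> classifier."""
    def __init__(self):
        super().__init__()
        self.embedding = nn.Linear(NUM_PATH_FEATURES, 16)   # plays the role of the embedding function
        self.hidden = nn.Sequential(nn.ReLU(), nn.Linear(16, 16), nn.ReLU())
        self.classifier = nn.Linear(16, NUM_DIFFICULTY_LEVELS)

    def forward(self, path_features):
        return self.classifier(self.hidden(self.embedding(path_features)))

def first_angle_threshold(model, path_features):
    """Classify the path's driving difficulty and return the matching threshold (output 908)."""
    with torch.no_grad():
        level = model(path_features).argmax(dim=-1).item()
    return LEVEL_TO_THRESHOLD_DEG[level]

# Usage: curve angle 90 degrees, curve length 120 m, friction 0.8 (all illustrative).
model = TargetClassificationModel()
print(first_angle_threshold(model, torch.tensor([[90.0, 120.0, 0.8]])))
```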
  • As an optional solution, before the path information of the current path is input into the target classification model, the method further includes: acquiring sample data generated when the target action is performed in N sample paths, where the sample data includes the angle used to perform the target action in the i-th sample path and the time taken to complete the target action, with i being an integer greater than or equal to 1 and less than or equal to N; and inputting the sample data into a pre-built initial classification model, and adjusting the parameters in the initial classification model according to the output results of the initial classification model so as to train the target classification model.
  • Specifically, an initial classification model is pre-built, and it is assumed that the sample data generated when the target action is performed in the N sample paths has been acquired. The sample data may include, but is not limited to, the angle used when the target action is performed in each sample path and the time taken to complete the target action; the angle may include, but is not limited to, the range [angle_min, angle_max] used when the target action is performed in the sample path and the corresponding durations.
  • Further, the path features of each sample path, such as curve angle, curve length, and frictional resistance, are acquired, and deep learning is performed on the path features and sample data of the N sample paths.
  • Taking sample path 1 as an example and as shown in FIG. 10: the path features of sample path 1 (path features 1 to k shown in FIG. 10) and the corresponding sample data 1 are acquired and stored in the database 1002; the path features 1 to k and sample data 1 are then input into the deep network 1004, deep learning is performed on them through the embedding function 1004-2 and the neural network layer 1004-4 in the deep network 1004, and the output result 1008 is obtained through the classifier 1006.
  • By comparing the output results of the sample paths, the parameters in the deep network 1004 of the initial classification model are adjusted and optimized to train a target classification model whose results converge, so that the target classification model can be used to determine the optimal angle threshold for performing the target action on each path; a training sketch follows.
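  • A minimal training sketch under the same assumptions as the model sketch above; the label construction, loss choice, and optimizer settings are illustrative, since the patent only states that the parameters are adjusted according to the output results.

```python
import torch
import torch.nn as nn

NUM_PATH_FEATURES, NUM_DIFFICULTY_LEVELS = 3, 5   # assumed, as in the earlier sketch

# Initial classification model: embedding -> neural network layer -> classifier (FIG. 10).
initial_model = nn.Sequential(
    nn.Linear(NUM_PATH_FEATURES, 16), nn.ReLU(),  # embedding function 1004-2 (assumed form)
    nn.Linear(16, 16), nn.ReLU(),                 # neural network layer 1004-4
    nn.Linear(16, NUM_DIFFICULTY_LEVELS),         # classifier 1006
)

# Assumed sample data: per sample path, its path features and a difficulty label
# derived offline from the (angle, completion time) measurements, e.g. by taking
# the angle with the shortest completion time and bucketing it into a level.
path_features = torch.tensor([[90.0, 120.0, 0.8],
                              [45.0,  60.0, 0.6],
                              [130.0, 200.0, 0.9]])
difficulty_labels = torch.tensor([2, 0, 4])

optimizer = torch.optim.Adam(initial_model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for _ in range(200):                              # adjust parameters according to the output results
    optimizer.zero_grad()
    loss = loss_fn(initial_model(path_features), difficulty_labels)
    loss.backward()
    optimizer.step()

# After convergence this trained model plays the role of the target classification model.
print(initial_model(path_features).argmax(dim=-1))
```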
  • In this embodiment, the path information of the current path is input into the target classification model, so that the target classification model can be used to determine the optimal angle threshold for performing the target action on the current path, which shortens the time the target object takes to perform the target action on the current path and improves the accuracy of controlling the target object to perform the target action.
  • As an optional solution, after the first angle threshold matching the current path is determined according to the output result of the target classification model, the method further includes: acquiring a configuration instruction generated by performing a configuration operation on the angle threshold configuration item in the configuration interface of the client, and adjusting the first angle threshold in response to the configuration instruction to obtain an adjusted first angle threshold.
  • That is, after the first angle threshold is determined based on the target classification model, a configuration interface may also be, but is not limited to being, displayed on the human-computer interaction interface; the configuration interface may include an angle threshold configuration item, such as the "angle threshold" item shown in FIG. 11. Further, a configuration instruction generated by performing a configuration operation on the parameter value of the angle threshold configuration item is acquired, so as to optimize, through configuration, the first angle threshold determined by the target classification model.
  • In this embodiment, the configuration instruction generated by performing the configuration operation on the angle threshold configuration item in the configuration interface is acquired, and the first angle threshold is further adjusted and optimized according to the configuration instruction, so that the adjusted first angle threshold fits the player's operating habits; different players can thus flexibly adjust the first angle threshold, improving the flexibility of object control. A small sketch follows.
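  • As a trivial sketch of that override (the configuration format and names are assumptions; the patent does not specify how the configured value is stored):

```python
# Assumed sketch: the player's configured value, if present, overrides the
# threshold produced by the target classification model for the current path.

def effective_angle_threshold(model_threshold, player_config):
    """player_config: dict that may contain an 'angle_threshold' entry set via
    the configuration item in the client's configuration interface (assumed format)."""
    configured = player_config.get("angle_threshold")
    return configured if configured is not None else model_threshold

print(effective_angle_threshold(40.0, {"angle_threshold": 35.0}))  # -> 35.0
print(effective_angle_threshold(40.0, {}))                         # -> 40.0
```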
  • As an optional solution, after the button states of the first virtual button and the second virtual button are adjusted to the disabled state, the method further includes the following cases.
  • When it is detected that the long-press operation is currently being performed on the first virtual button but no pressing operation is currently being performed on the second virtual button, the button state of the first virtual button is controlled to remain in the disabled state.
  • When it is detected that no pressing operation is currently being performed on the first virtual button but the long-press operation is currently being performed on the second virtual button, the button state of the second virtual button is controlled to return to the valid state, where the valid state is used to indicate that the button response logic of the second virtual button has returned to normal.
  • When it is detected that no pressing operation is currently being performed on either the first virtual button or the second virtual button, the button states of the first virtual button and the second virtual button are both controlled to return to the valid state.
  • In this way, keeping the first virtual button in the disabled state allows the user to complete the target action in a short time without perceiving the change, while restoring the button response logic of the second virtual button allows it to be restarted quickly in response to another user operation, shortening the start-up time of the next execution of the target action. The virtual buttons are thus controlled to execute different operation logic according to the user's different operation modes (such as pressing or releasing), achieving the effect of extending the buttons' operation functions; a sketch of these rules is given below.
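  • A compact sketch of these post-drift recovery rules; the boolean interface and names are illustrative.

```python
# Illustrative sketch: after both buttons were disabled when the threshold was
# reached, their states are updated per frame according to what is still held.

def update_button_states(first_held, second_held, states):
    """states: dict with 'first' and 'second' mapped to 'disabled' or 'valid'."""
    if first_held and not second_held:
        states["first"] = "disabled"            # keep suspended while only the direction key is held
    elif second_held and not first_held:
        states["second"] = "valid"              # drift key responds again, ready for the next drift
    elif not first_held and not second_held:
        states["first"] = states["second"] = "valid"   # full release restores both buttons
    return states

print(update_button_states(False, True, {"first": "disabled", "second": "disabled"}))
```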
  • According to another aspect of the embodiments of the present application, an object control apparatus for implementing the above object control method is further provided. As shown in FIG. 12, the apparatus includes:
  • a first acquiring unit 1202, configured to acquire an operation instruction generated by performing a long-press operation on the first virtual button and the second virtual button in the human-computer interaction interface displayed by the client, where the first virtual button is used to adjust the forward direction of the target object controlled through the client, and the second virtual button is used to trigger the target object to perform the target action;
  • a first control unit 1204, configured to, in response to the operation instruction, control the target object to perform the target action on the current path and detect the target angle generated by the target object in the process of performing the target action, where the target angle is the angle between the forward direction of the target object and the sliding direction of the target object; and
  • a first adjusting unit 1206, configured to, when it is detected that the long-press operation is currently being performed on the first virtual button and the second virtual button and the target angle reaches the first angle threshold matching the current path, adjust the button states of the first virtual button and the second virtual button to the disabled state, so that the target object enters the state of passively performing the target action, where the disabled state is used to indicate that the button response logic of the first virtual button and the second virtual button is suspended.
  • As an optional solution, the above apparatus further includes a display unit, configured to, when the button states of the first virtual button and the second virtual button are adjusted to the disabled state, control the display states of the first virtual button and the second virtual button that are in the disabled state in the human-computer interaction interface to remain consistent with their display states while the long-press operation is being performed on them.
  • In this embodiment, the display states of the first virtual button and the second virtual button in the disabled state are controlled to remain consistent with the display states of the button identifiers while the long-press operation is performed, so that the target object is controlled to complete the target action on the current path without the user perceiving the change.
  • In other words, when the user does not perceive it, the target action on the current path is automatically completed according to the first angle threshold, which reduces the difficulty of the user's operation and avoids errors caused by unskilled operation.
  • As an optional solution, the above apparatus further includes:
  • an input unit, configured to, before the operation instruction generated by performing the long-press operation on the first virtual button and the second virtual button in the human-computer interaction interface displayed by the client is acquired, input the path information of the current path into the target classification model, where the target classification model is a model obtained after machine training with sample data and is used to determine the angle threshold matching the path information of a path, the angle threshold being the angle at which the target action is completed in that path in the shortest time; and
  • a determining unit, configured to determine, according to the output result of the target classification model, the first angle threshold matching the current path.
  • As an optional solution, the above apparatus further includes:
  • a second acquiring unit, configured to, before the path information of the current path is input into the target classification model, acquire the sample data generated when the target action is performed in the N sample paths, where the sample data includes the angle used to perform the target action in the i-th sample path and the time taken to complete the target action, with i being an integer greater than or equal to 1 and less than or equal to N; and
  • a training unit, configured to input the sample data into the pre-built initial classification model and adjust the parameters in the initial classification model according to the output results of the initial classification model, so as to train the target classification model.
  • In this embodiment, the path information of the current path is input into the target classification model, so that the target classification model can be used to determine the optimal angle threshold for performing the target action on the current path, which shortens the time the target object takes to perform the target action on the current path and improves the accuracy of controlling the target object to perform the target action.
  • As an optional solution, the above apparatus further includes:
  • a third acquiring unit, configured to, after the first angle threshold matching the current path is determined according to the output result of the target classification model, acquire the configuration instruction generated by performing a configuration operation on the angle threshold configuration item in the configuration interface of the client; and
  • a second adjusting unit, configured to adjust the first angle threshold in response to the configuration instruction to obtain the adjusted first angle threshold.
  • In this embodiment, the configuration instruction generated by performing the configuration operation on the angle threshold configuration item in the configuration interface is acquired, and the first angle threshold is further adjusted and optimized according to the configuration instruction, so that the adjusted first angle threshold fits the player's operating habits; different players can thus flexibly adjust the first angle threshold, improving the flexibility of object control.
  • As an optional solution, the above apparatus further includes:
  • a second control unit, configured to, after the button states of the first virtual button and the second virtual button are adjusted to the disabled state, control the button state of the first virtual button to remain in the disabled state when it is detected that the long-press operation is currently being performed on the first virtual button but no pressing operation is currently being performed on the second virtual button;
  • a third control unit, configured to control the button state of the second virtual button to return to the valid state when it is detected that no pressing operation is currently being performed on the first virtual button but the long-press operation is currently being performed on the second virtual button, where the valid state is used to indicate that the button response logic of the second virtual button has returned to normal; and
  • a fourth control unit, configured to control the button states of the first virtual button and the second virtual button to return to the valid state when it is detected that no pressing operation is currently being performed on either the first virtual button or the second virtual button, where the valid state is used to indicate that the button response logic of the first virtual button and the second virtual button has returned to normal.
  • In this embodiment, the virtual buttons are controlled to execute different operation logic according to the user's different operation modes on them (such as pressing or releasing), achieving the effect of extending the buttons' operation functions.
  • According to yet another aspect of the embodiments of the present application, an electronic apparatus for implementing the above object control method is further provided. As shown in FIG. 13, the electronic apparatus includes a memory 1302 and a processor 1304; a computer program is stored in the memory 1302, and the processor 1304 is configured to execute the steps in any one of the above method embodiments through the computer program.
  • Optionally, in this embodiment, the above electronic apparatus may be located in at least one of multiple network devices in a computer network.
  • Optionally, in this embodiment, the above processor may be configured to execute, through the computer program, the steps of the object control method described above.
  • Optionally, a person of ordinary skill in the art can understand that the structure shown in FIG. 13 is only illustrative, and the electronic apparatus may also be a terminal device such as a smart phone (for example, an Android phone or an iOS phone), a tablet computer, a palmtop computer, a mobile Internet device (MID), or a PAD.
  • FIG. 13 does not limit the structure of the above electronic apparatus; for example, the electronic apparatus may also include more or fewer components (such as a network interface) than shown in FIG. 13, or have a configuration different from that shown in FIG. 13.
  • The memory 1302 may be used to store software programs and modules, such as the program instructions/modules corresponding to the object control method and apparatus in the embodiments of the present application; the processor 1304 executes various functional applications and data processing by running the software programs and modules stored in the memory 1302, thereby implementing the above object control method.
  • The memory 1302 may include a high-speed random access memory, and may also include a non-volatile memory, such as one or more magnetic storage devices, a flash memory, or another non-volatile solid-state memory. In some examples, the memory 1302 may further include memories remotely located with respect to the processor 1304, and these remote memories may be connected to the terminal through a network.
  • The memory 1302 may be specifically, but is not limited to being, used to store information such as the operation instruction, the first angle threshold, and the target angle.
  • As an example, the memory 1302 may include, but is not limited to, the first acquiring unit 1202, the first control unit 1204, and the first adjusting unit 1206 in the above object control apparatus, and may also include, but is not limited to, other module units in the above object control apparatus, which are not repeated in this example.
  • The above transmission device 1306 is used to receive or send data via a network. Specific examples of the network may include a wired network and a wireless network. In one example, the transmission device 1306 includes a network interface controller (NIC), which can be connected to other network devices and to a router via a network cable so as to communicate with the Internet or a local area network. In another example, the transmission device 1306 is a radio frequency (RF) module, which is used to communicate with the Internet in a wireless manner.
  • In addition, the above electronic apparatus further includes: a display 1308, used to display the above human-computer interaction interface and the screen in which the target object performs the target action on the current path; and a connection bus 1310, used to connect the module components in the above electronic apparatus.
  • According to yet another aspect of the embodiments of the present application, a storage medium is further provided, in which a computer program is stored, and the computer program is configured to execute, when run, the steps in any one of the above method embodiments.
  • Optionally, in this embodiment, the above storage medium may be configured to store a computer program for executing the steps of the object control method described above.
  • Optionally, in this embodiment, the storage medium may include a flash disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or the like.
  • If the integrated unit in the above embodiments is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in the above computer-readable storage medium.
  • Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the related art, or all or part of the technical solution, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes a number of instructions that cause one or more computer devices (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the embodiments of the present application.
  • In the several embodiments provided in the present application, it should be understood that the disclosed client may be implemented in other ways. The apparatus embodiments described above are only illustrative; for example, the division of the units is only a division by logical function, and there may be other division manners in actual implementation. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • In addition, the displayed or discussed mutual coupling, direct coupling, or communication connection may be indirect coupling or communication connection through some interfaces, units, or modules, and may be in electrical or other forms.
  • The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The above integrated unit may be implemented in the form of hardware or in the form of a software functional unit.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Medical Informatics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Health & Medical Sciences (AREA)
  • User Interface Of Digital Computer (AREA)
  • Optics & Photonics (AREA)

Abstract

An object control method and apparatus, a storage medium, and an electronic apparatus. The method includes: acquiring an operation instruction generated by performing a long-press operation on a first virtual button and a second virtual button in a human-computer interaction interface displayed by a client (S302); in response to the operation instruction, controlling a target object to perform a target action on the current path, and detecting a target angle generated by the target object in the process of performing the target action (S304); and, when it is detected that the long-press operation is currently being performed on the first virtual button and the second virtual button and the target angle reaches a first angle threshold matching the current path, adjusting the button states of the first virtual button and the second virtual button to a disabled state, so that the target object enters a state of passively performing the target action (S306). The method at least alleviates the technical problem of poor control accuracy in the process of controlling an object to perform a target action.

Description

Object control method and apparatus, storage medium, and electronic apparatus
This application claims priority to Chinese Patent Application No. 201910130187X, entitled "Object control method and apparatus, storage medium, and electronic apparatus" and filed with the Chinese Patent Office on February 21, 2019, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of computers, and in particular to an object control technology.
Background
At present, in racing game applications, in order to enrich the player's user experience, game scenes are often designed with different racing tracks, and the racing tracks include curves with different turning angles. In order to shorten the time it takes for the player-controlled target object to drive through a curve, the player often uses control buttons provided in the human-computer interaction interface to make the target object perform a drifting action.
However, while the player controls the target object to perform the drifting action, the player is usually required to manually adjust the control operation on the control buttons according to game experience so as to determine the drift angle adopted by the target object during the drift, and the target object then continues driving after drifting at the determined drift angle. In practical applications, if the player is not proficient at controlling the target object, drift errors are likely to occur, which in turn affects the result of the race.
In summary, the object control method provided in the related art places high operational demands on players, and control accuracy is poor in the process of controlling the object to drift.
For the above problem, no effective solution has yet been proposed.
Summary
The embodiments of the present application provide an object control method and apparatus, a storage medium, and an electronic apparatus, so as to at least solve the technical problem of poor control accuracy in the process of controlling an object to perform a target action.
According to one aspect of the embodiments of the present application, an object control method applied to a terminal device is provided, including: acquiring an operation instruction generated by performing a long-press operation on a first virtual button and a second virtual button in a human-computer interaction interface displayed by a client, where the first virtual button is used to adjust the forward direction of a target object controlled through the client, and the second virtual button is used to trigger the target object to perform a target action; in response to the operation instruction, controlling the target object to perform the target action on the current path, and detecting a target angle generated by the target object in the process of performing the target action, where the target angle is the angle between the forward direction of the target object and the sliding direction of the target object; and, when it is detected that the long-press operation is currently being performed on the first virtual button and the second virtual button and the target angle reaches a first angle threshold matching the current path, adjusting the button states of the first virtual button and the second virtual button to a disabled state, so that the target object enters a state of passively performing the target action, where the disabled state is used to indicate that the button response logic of the first virtual button and the second virtual button is suspended.
According to another aspect of the embodiments of the present application, an object control apparatus is further provided, including: a first acquiring unit, configured to acquire an operation instruction generated by performing a long-press operation on a first virtual button and a second virtual button in a human-computer interaction interface displayed by a client, where the first virtual button is used to adjust the forward direction of a target object controlled through the client, and the second virtual button is used to trigger the target object to perform a target action; a first control unit, configured to, in response to the operation instruction, control the target object to perform the target action on the current path and detect a target angle generated by the target object in the process of performing the target action, where the target angle is the angle between the forward direction of the target object and the sliding direction of the target object; and a first adjusting unit, configured to, when it is detected that the long-press operation is currently being performed on the first virtual button and the second virtual button and the target angle reaches a first angle threshold matching the current path, adjust the button states of the first virtual button and the second virtual button to a disabled state, so that the target object enters a state of passively performing the target action, where the disabled state is used to indicate that the button response logic of the first virtual button and the second virtual button is suspended.
According to yet another aspect of the embodiments of the present application, a storage medium is further provided, in which a computer program is stored, and the computer program is configured to execute the above object control method when run.
According to yet another aspect of the embodiments of the present application, an electronic apparatus is further provided, including a memory, a processor, and a computer program stored in the memory and runnable on the processor, where the processor executes the above object control method through the computer program.
According to yet another aspect of the embodiments of the present application, a computer program product including instructions is further provided, which, when run on a computer, causes the computer to execute the above object control method.
In the embodiments of the present application, with the provided object control method, while the client of a human-computer interaction application is running, after an operation instruction generated by performing a long-press operation on the first virtual button and the second virtual button in the human-computer interaction interface displayed by the client is acquired, the target object is controlled, in response to the operation instruction, to perform the target action on the current path, and the target angle generated by the target object in the process of performing the target action is detected. Then, when it is detected that the long-press operation is currently being performed on the first virtual button and the second virtual button and the target angle reaches the first angle threshold matching the current path, the button states of the first virtual button and the second virtual button are adjusted to the disabled state. In other words, while the target object controlled by the client performs the target action on the current path, the relative relationship between the generated target angle and the first angle threshold is detected, and the target object is thereby controlled to automatically enter the state of passively performing the target action instead of relying on the player's game experience, so that the player does not need to manually adjust the control operation according to game experience to determine the angle required for the target object to perform the target action. This reduces the difficulty of the player's operation, improves the control accuracy when the target object is controlled to perform the target action, and overcomes the problem in the related art of poor control accuracy caused by the player being unskilled at controlling the target object.
附图说明
此处所说明的附图用来提供对本申请的进一步理解,构成本申请的一部分,本申请的示意性实施例及其说明用于解释本申请,并不构成对本申请的不当限定。在附图中:
图1是根据本申请实施例的一种可选的对象控制方法的硬件环境的示意图;
图2是根据本申请实施例的另一种可选的对象控制方法的硬件环境的示意图;
图3是根据本申请实施例的一种可选的对象控制方法的流程图;
图4是根据本申请实施例的一种可选的对象控制方法的示意图;
图5是根据本申请实施例的另一种可选的对象控制方法的示意图;
图6是根据本申请实施例的另一种可选的对象控制方法的流程图;
图7是根据本申请实施例的又一种可选的对象控制方法的示意图;
图8是根据本申请实施例的又一种可选的对象控制方法的示意图;
图9是根据本申请实施例的一种可选的对象控制方法中目标分类模型的示意图;
图10是根据本申请实施例的又一种可选的对象控制方法中初始分类模型的示意图;
图11是根据本申请实施例的一种可选的对象控制方法的配置界面示意图;
图12是根据本申请实施例的一种可选的对象控制装置的结构示意图;
图13是根据本申请实施例的一种可选的电子装置的结构示意图。
具体实施方式
为了使本技术领域的人员更好地理解本申请方案,下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本申请一部分的实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都应当属于本申请保护的范围。
需要说明的是,本申请的说明书和权利要求书及上述附图中的术语“第一”、“第二”等是用于区别类似的对象,而不必用于描述特定的顺序或先后次序。应该理解这样使用的数据在适当情况下可以互换,以便这里描述的本申请的实施例能够以除了在这里图示或描述的那些以外的顺序实施。此外,术语“包括”和“具有”以及他们的任何变形,意图在于覆盖不排他的包含,例如,包含了一系列步骤或单元的过程、方法、系统、产品或设备不必限于清楚地列出的那些步骤或单元,而是可包括没有清楚地列出的或对于这些过程、方法、产品或设备固有的其它步骤或单元。
根据本申请实施例的一个方面,提供了一种对象控制方法,可选地,作 为一种可选的实施方式,上述对象控制方法可以但不限于应用于如图1所示的硬件环境中。假设用户设备102中安装有人机交互应用的客户端(如图1所示为竞速类游戏应用客户端),在该客户端运行的过程中,获取对该客户端显示的人机交互界面中的第一虚拟按键(如图1左下角所示方向键)和第二虚拟按键(如图1右下角所示动作控制按键)执行长按操作生成的操作指令,如步骤S102。
其中,用户设备102中包括人机交互屏幕104,处理器106及存储器108。人机交互屏幕104用于获取人机交互操作;处理器106用于根据人机交互操作生成对应的操作指令,并响应操作指令控制目标对象执行对应的动作,该目标对象是用户通过客户端控制的虚拟对象,如竞速类游戏应用中的赛车等。存储器108用于存储上述操作指令及目标对象相关的属性信息。
然后,用户设备102可以执行步骤S104,通过网络110发送操作指令至服务器112。服务器112中包括数据库114及处理引擎116。如步骤S106,服务器112调用处理引擎116,从数据库114中确定出与上述目标对象所在当前路径相匹配的第一角度阈值。然后,服务器112执行步骤S108,通过网络110将确定出的第一角度阈值发送给用户设备102,以使用户设备102利用获取到的第一角度阈值执行步骤S110,控制目标对象在当前路径中执行目标动作。
此外,作为一种可选的实施方式,上述对象控制方法还可以但不限于应用于如图2所示的硬件环境中。仍假设用户设备102中安装有人机交互应用的客户端(如图2所示为竞速类游戏应用客户端),在该客户端运行的过程中,获取对该客户端显示的人机交互界面中的第一虚拟按键(如图2左下角所示方向键)和第二虚拟按键(如图2右下角所示动作控制按键)执行长按操作生成的操作指令,如步骤S202。
在获取到操作指令之后的后续操作可以但不限于应用于处理能力较强大的独立的处理设备中,而无需与服务器112进行数据交互,如该独立的处理设备仍为用户设备102,则用户设备102中的处理器106将响应上述操作指令,执行步骤S204-S208:控制目标对象在当前路径中执行目标动作,并检测目标对象在执行目标动作的过程中产生的目标角度;然后,在检测到当前针对第一虚拟按键与第二虚拟按键正在执行长按操作,且目标角度达到与当前路径相匹配的第一角度阈值的情况下,将第一虚拟按键和第二虚拟按键的按键状态调整为失效状态,以使目标对象进入被动执行目标动作的状态。其中,上述第一角度阈值可以但不限于预先存储在用户设备102的存储器108中。上述仅是一种示例,本实施例中对此不作任何限定。
需要说明的是,本实施例中所提供的对象控制方法,在人机交互应用的客户端运行的过程中,获取到对客户端显示的人机交互界面中的第一虚拟按键和第二虚拟按键执行长按操作生成的操作指令之后,响应该操作指令,控制由上述客户端控制的目标对象在当前路径执行目标动作,并检测该目标对 象在执行目标动作的过程中产生的目标角度。然后,在检测到当前针对第一虚拟按键与第二虚拟按键正在执行长按操作,且目标角度达到与当前路径相匹配的第一角度阈值的情况下,将上述第一虚拟按键和第二虚拟按键的按键状态调整为失效状态。也就是说,在客户端控制的目标对象在当前路径中执行目标动作的过程中,通过检测产生的目标角度与第一角度阈值之间的相对关系,以控制目标对象自动进入被动执行目标动作的状态,而不再依赖于玩家的游戏经验,使玩家无需根据游戏经验手动调整控制操作,以确定目标对象执行目标动作时所需的角度,从而降低玩家的操作难度,提高执行目标动作时的控制准确性,克服了相关技术中玩家对目标对象的控制操作不熟练所导致的控制准确性较差的问题。
可选地,在本实施例中,上述用户设备可以但不限于为手机、平板电脑、笔记本电脑、PC机等支持运行应用客户端的终端设备。上述服务器和用户设备可以但不限于通过网络实现数据交互,上述网络可以包括但不限于无线网络或有线网络。其中,该无线网络包括:蓝牙、WIFI及其他实现无线通信的网络。上述有线网络可以包括但不限于:广域网、城域网、局域网。上述仅是一种示例,本实施例中对此不作任何限定。
可选地,作为一种可选的实施方式,如图3所示,上述对象控制方法包括:
S302,获取对客户端显示的人机交互界面中的第一虚拟按键及第二虚拟按键执行长按操作生成的操作指令,其中,第一虚拟按键用于调整通过客户端控制的目标对象的前进朝向,第二虚拟按键用于触发目标对象执行目标动作;
S304,响应操作指令,控制目标对象在当前路径中执行目标动作,并检测目标对象在执行目标动作的过程中产生的目标角度,其中,目标角度为目标对象的前进朝向和目标对象的滑行方向之间的夹角;
S306,在检测到当前针对第一虚拟按键与第二虚拟按键执行长按操作,且目标角度达到与当前路径相匹配的第一角度阈值的情况下,将第一虚拟按键和第二虚拟按键的按键状态调整为失效状态,以使目标对象进入被动执行目标动作的状态,其中,失效状态用于指示第一虚拟按键和第二虚拟按键的按键响应逻辑处于中止状态。
可选地,在本实施例中,上述对象控制方法可以但不限于应用于对人机交互应用的客户端控制的对象实现自动控制的场景中,例如该人机交互应用可以但不限于为竞速类游戏应用,该目标对象可以对应为竞速类游戏应用中被操控的虚拟对象,如虚拟角色、虚拟装备、虚拟车辆等。上述目标动作可以但不限于为竞速游戏场景下的漂移动作,对应的目标角度可以但不限于为漂移角度。
也就是说,当目标对象在当前路径中执行漂移动作的情况下,通过本实 施例中提供的方案,可以实时获取该目标对象在执行漂移动作的过程中产生的目标角度,比对该目标角度及与当前路径相匹配的第一角度阈值,并在比对结果指示目标角度达到第一角度阈值的情况下,调整人机交互界面中的第一虚拟按键及第二虚拟按键的按键响应逻辑进入中止状态(即失效状态),以使目标对象自动进入被动漂移的状态。上述仅是一种示例,本实施例中对此不作任何限定。
需要说明的是,在本实施例中,在客户端控制的目标对象在当前路径中执行目标动作的过程中,通过检测目标对象产生的目标角度与第一角度阈值之间的相对关系,以控制目标对象自动进入被动执行目标动作的状态,而不再依赖于玩家的游戏经验,使玩家无需根据游戏经验来手动调整控制操作,以确定目标对象执行目标动作时所需的角度,从而降低玩家的操作难度,以提高执行目标动作时的控制准确性。
此外,在本实施例中,上述目标对象在当前路径中执行的目标动作可以但不限于为竞速场景中的漂移动作。需要说明的是,上述漂移动作需要在检测到针对第一虚拟按键及第二虚拟按键同时执行长按操作的状态下被触发执行。其中,第一虚拟按键可以但不限于为用于控制目标对象的前进朝向的方向键,如图4所示“左方向按键”和“右方向按键”。而第二虚拟按键可以但不限于为用于触发目标对象执行目标动作的触发控制键,如图4所示“漂移启动键”。
进一步,在本实施例中,在检测到用户针对上述虚拟按键执行操作时,对应的虚拟按键的显示状态表征该虚拟按键处于“操作状态”,如图4所示,检测到用户针对右方向按键(即第一虚拟按键)执行长按操作,则对应的虚拟按键显示为“实线”,检测到用户针对漂移启动键(即第二虚拟按键)执行长按操作,则对应的虚拟按键显示为“网格填充”。而在没有检测到用户针对上述虚拟按键执行操作时,对应的虚拟按键的显示状态表征该虚拟按键处于“无操作状态”,如图4所示检测到没有针对左方向按键执行长按操作时,则对应的虚拟按键显示为“虚线”,而检测到没有针对漂移启动键(即第二虚拟按键)执行长按操作,则对应的虚拟按键可以修改为“无填充”(图3中未示出)。
可选地,在本实施例中,上述虚拟按键的按键状态可以包括但不限于:有效状态、失效状态。其中,上述失效状态用于指示虚拟按键的按键响应逻辑处于中止状态。即对处于失效状态下的虚拟按键执行按压操作(如长按操作)时,后台将不执行该虚拟按键的按键响应逻辑。上述有效状态用于指示虚拟按键的按键响应逻辑正常。即对处于有效状态下的虚拟按键执行按压操作(如长按操作)时,后台将执行该虚拟按键的按键响应逻辑。
进一步,结合上述说明,在本实施例中,在第一虚拟按键和第二虚拟按键的按键状态被调整为失效状态,而目标对象进入被动执行目标动作的状态 后,可以但不限于:在人机交互界面中,使处于失效状态下的第一虚拟按键和第二虚拟按键的显示状态,与针对第一虚拟按键和第二虚拟按键正在执行长按操作时的显示状态保持一致。也就是说,目标角度达到第一角度阈值的情况下,保持虚拟按键的显示状态,使用户在无感知的情况下,控制目标对象完成在当前路径的目标动作。
可选地,在本实施例中,可以但不限于利用目标分类模型确定上述第一角度阈值(下文也可称作灵敏度),该目标分类模型为利用样本数据进行机器训练后得到的,用于确定与路径的路径信息相匹配的角度阈值,角度阈值为在路径中完成目标动作所用时长最短的角度。
进一步,在本实施例中,还可以对上述目标分类模型确定的第一角度阈值进行优化配置,配置方式可以包括但不限于:对客户端的配置界面中的角度阈值配置项执行配置操作实现。也就是说,针对登录客户端的不同用户账号,可以利用上述角度阈值配置项实现对角度阈值的灵活配置,以提升控制目标对象执行目标动作时的灵活性。
可选地,在本实施例中,上述客户端的后台可以但不限于实时直接监控到被控制的目标对象的目标角度,也可以但不限于在获取目标对象的前进朝向及目标对象的滑行方向后,再计算得到上述目标对象的目标角度。
例如,假设目标对象为竞速类车辆游戏应用中的虚拟车辆,目标对象的前进朝向可以对应为虚拟车辆的车头方向,目标对象的滑行朝向可以对应为虚拟车辆的车身实际滑动方向。进一步,利用上述两个方向来确定该虚拟车辆在执行漂移动作时所产生的目标角度。
可选地,在本实施例中,在将第一虚拟按键和第二虚拟按键的按键状态调整为失效状态之后,还包括:在目标对象进入被动执行目标动作的状态后,根据与当前路径相匹配的摩擦阻力,确定目标对象完成目标动作的剩余时长;在剩余时长内,控制目标对象完成目标动作。
需要说明的是,在同时长按第一虚拟按键和第二虚拟按键生成操作指令,以控制目标对象执行目标动作的过程中,如图5所示,在与目标对象前进朝向的垂直方向上会产生一个扭力F,该扭力F用于控制目标对象维持快速转向的状态,使目标角度不断变大,从而实现在当前路径下执行目标动作(如漂移动作,也可称作“甩尾动作”)。进一步,在目标角度达到第一角度阈值的情况下,在将第一虚拟按键与第二虚拟按键的按键状态调整为失效状态之后,该扭力F也将随之消失,目标对象进入被动执行目标动作的状态(即被动漂移)。随着当前路径的摩擦阻力对目标对象的影响,目标对象会在被动执行目标动作的过程中完成该目标动作,即,使目标对象脱离漂移动作,进入正常行驶状态。
具体结合图6所示步骤S602-S608进行说明:假设仍以竞速类游戏应用客户端为例进行说明,目标对象为客户端所控制的参与竞速的虚拟车辆,目 标动作为漂移动作。第一虚拟按键为方向键,第二虚拟按键为漂移启动键。
如步骤S602,客户端获取同时长按方向键(假设长按右方向按键)和漂移启动键生成的操作指令。响应该操作指令,客户端将执行步骤S604,控制对应的虚拟车辆在当前路径开始执行漂移动作。在执行漂移动作的过程中,虚拟车辆的目标角度将不断增大。如步骤S606,检测所产生的目标角度是否达到第一角度阈值α。若检测到目标角度尚未达到第一角度阈值α,则返回步骤S604,保持转向力继续转向漂移;若检测到目标角度达到第一角度阈值α,则执行步骤S608,控制虚拟车辆进入被动漂移状态。
其中,上述虚拟车辆在执行漂移动作的过程中产生的目标角度β可以如图7所示,是虚拟车辆的前进朝向(即车头方向)和虚拟车辆的滑行方向(即车身实际速度的矢量方向)之间的夹角。上述仅是一种示例,本实施例中对此不作任何限定。
通过本申请提供的实施例,在客户端控制的目标对象在当前路径上执行目标动作的过程中,通过检测目标对象产生的目标角度与第一角度阈值之间的相对关系,控制目标对象自动进入被动执行目标动作的状态,而不再依赖于玩家的游戏经验,使玩家无需根据游戏经验来手动调整控制操作,确定目标对象执行目标动作时所需的角度,降低玩家的操作难度,提高执行目标动作时的控制准确性,克服了相关技术中玩家对目标对象的控制操作不熟练所导致的控制准确性较差的问题。
作为一种可选的方案,在将第一虚拟按键和第二虚拟按键的按键状态调整为失效状态时,还包括:
S1,在人机交互界面中,控制处于失效状态的第一虚拟按键和第二虚拟按键的显示状态,与针对第一虚拟按键和第二虚拟按键执行长按操作时的显示状态保持一致。
需要说明的是,在本实施例中,虚拟按键的显示状态可以但不限于通过人机交互界面中虚拟按键的UI表现来呈现。其中,在针对上述虚拟按键正在执行操作时,对应的虚拟按键的显示状态为“操作状态”,而在未针对上述虚拟按键执行操作时,对应的虚拟按键的显示状态为“无操作状态”。
具体结合图8所示界面来进行说明,如图8所示,假设仍以竞速类游戏应用客户端为例进行说明。其中,第一虚拟按键可以但不限于如图8左下角所示方向键,包括“左方向按键”和“右方向按键”。第二虚拟按键可以但不限于如图8右下角所示“漂移启动键”。在检测到左右拇指分别对“右方向按键”和“漂移启动键”执行长按操作后,“右方向按键”的显示状态为“操作状态”,如图8所示呈现“实线”,而对未检测到用户执行操作的“左方向按键”的显示状态为“无操作状态”,如图8所示呈现“虚线”。“漂移启动键”的显示状态为“操作状态”,如图8所示呈现“网格填充”。
进一步,如图8(a)所示,检测虚拟车辆在执行漂移动作的过程中产生 的目标角度β。随着漂移动作执行过程中产生的扭力F和摩擦阻力f之间的相互作用,该目标角度β将不断增大。在检测到虚拟车辆在执行漂移动作的过程中产生的目标角度β达到第一角度阈值α的情况下,将“右方向按键”和“漂移启动键”的按键状态调整为失效状态,以使虚拟车辆进入被动漂移的状态。在上述情况下,人机交互界面如图8(b)所示,“右方向按键”和“漂移启动键”的显示状态仍呈现长按操作对应的“操作状态”,即“右方向按键”呈现“实线”,“漂移启动键”呈现“网格填充”。也就是说,在“右方向按键”和“漂移启动键”的按键状态调整为失效状态的情况下,其显示状态将继续保持不变。
通过本申请提供的实施例,在目标角度达到第一角度阈值的情况下,控制处于失效状态的第一虚拟按键和第二虚拟按键的显示状态,保持与针对虚拟按键执行长按操作时的显示状态一致,以使用户在无感知的情况下,控制目标对象在当前路径完成目标动作。从而达到在用户无感知的情况下,自动按照第一角度阈值完成当前路径中的目标动作的执行,降低用户操作的操作难度,避免操作不熟练导致的失误问题。
作为一种可选的方案,在获取对客户端显示的人机交互界面中的第一虚拟按键及第二虚拟按键执行长按操作生成的操作指令之前,还包括:
S1,将当前路径的路径信息输入目标分类模型,其中,目标分类模型是利用样本数据进行机器训练后所得到的模型,用于确定与路径的路径信息相匹配的角度阈值,角度阈值为在路径中完成目标动作所用时长最短的角度;
S2,根据目标分类模型的输出结果,确定与当前路径相匹配的第一角度阈值。
需要说明的是,在本实施例中,上述目标分类模型可以但不限于用于根据当前路径的路径信息对当前路径的行驶难度进行分类,并根据分类结果确定与当前路径相匹配的第一角度阈值,并将该第一角度阈值作为输出结果输出。其中,该第一角度阈值还可以但不限于用于表示玩家控制目标对象通过当前路径的灵敏度。
其中,上述目标分类模型可以如图9所示,根据当前路径的路径信息对当前路径的行驶难度进行分类可以包括但不限于:通过目标分类模型900提取当前路径的路径特征(如图9所示路径特征1~k),其中,该路径特征可以包括但不限于:弯道角度、弯道长度、摩擦阻力等。并将上述路径特征存储至数据库902中。然后,通过深度网络904中的嵌入函数904-2和神经网络层904-4,对上述数据库902中当前路径的路径特征进行深度学习,并通过分类器906确定该当前路径的行驶难度所属的分类等级。进一步,获取与该分类等级相适配的角度阈值作为与当前路径相匹配的第一角度阈值,得到输出结果908。其中,该第一角度阈值为在上述分类等级对应的路径中执行目标动作所用时长最短的角度。
可选地,在本实施例中,在将当前路径的路径信息输入目标分类模型之前,还包括:获取在N个样本路径中执行目标动作时产生的样本数据,其中,样本数据包括在第i个样本路径中执行目标动作时使用的角度及完成目标动作所用时长,i为大于等于1,且小于等于N的整数;将样本数据输入预先构建的初始分类模型,并根据初始分类模型的输出结果调整初始分类模型中的参数,以训练得到目标分类模型。
具体结合图10所示示例进行说明。预先构建初始分类模型,假设获取在N个样本路径中执行目标动作时产生的样本数据,其中,上述样本数据可以包括但不限于:在每个样本路径中执行目标动作时使用的角度及完成目标动作所用时长。其中,上述角度可以包括但不限于在样本路径中执行目标动作时的[角度 min,角度 max]及对应所用时长。进一步,获取上述样本路径的路径特征,如弯道角度、弯道长度、摩擦阻力等。
进一步,对上述N个样本路径的路径特征及样本数据进行深度学习。以样本路径1为例,获取样本路径1的路径特征(如图10所示路径特征1~k)及对应的样本数据1存储至数据库1002,并将上述路径特征1~k及样本数据1输入深度网络1004,通过深度网络1004中的嵌入函数1004-2和神经网络层1004-4,对上述样本路径1的路径特征及样本数据1进行深度学习,通过分类器1006得到输出结果1008。利用输出结果1008的反馈,调整优化初始分类模型中深度网络1004中的参数,以训练得到结果收敛的目标分类模型。以便于利用该目标分类模型确定在各个路径下执行目标动作的最优角度阈值。
通过本申请提供的实施例,将当前路径的路径信息输入目标分类模型,以便于利用该目标分类模型确定在当前路径下执行目标动作时的最优角度阈值。从而缩短目标对象在当前路径下执行目标动作的时长,提高控制目标对象执行目标动作的准确性。
作为一种可选的方案,在根据目标分类模型的输出结果确定与当前路径相匹配的第一角度阈值之后,还包括:
S1,获取对客户端的配置界面中的角度阈值配置项执行配置操作生成的配置指令;
S2,响应配置指令,调整第一角度阈值,得到调整后的第一角度阈值。
具体结合图11所示进行说明,仍以上述实施例中提供的确定第一角度阈值的方式为例,在基于目标分类模型确定出第一角度阈值之后,还可以但不限于在人机交互界面上显示配置界面,该配置界面中可以包括角度阈值配置项,如图11所示“角度阈值”。进一步,获取对角度阈值配置项的参数值α执行配置操作生成的配置指令,以实现对目标分类模型确定出的第一角度阈值进行配置优化。
通过本申请提供的实施例,在根据目标分类模型的输出结果确定与当前 路径相匹配的第一角度阈值之后,获取对配置界面中的角度阈值配置项执行配置操作生成的配置指令,根据该配置指令对第一角度阈值进行进一步的优化调整,以使调整后的第一角度阈值与玩家的操作习惯相适配。从而便于不同玩家灵活调整不同的第一角度阈值,达到提高对象控制的灵活性。
作为一种可选的方案,在将第一虚拟按键和第二虚拟按键的按键状态调整为失效状态之后,还包括:
1)在检测到当前针对第一虚拟按键执行长按操作,而当前未针对第二虚拟按键执行按压操作的情况下,控制第一虚拟按键的按键状态保持失效状态;或者
2)在检测到当前未针对第一虚拟按键执行按压操作,而当前针对第二虚拟按键执行长按操作的情况下,控制第二虚拟按键的按键状态恢复为有效状态,其中,有效状态用于指示第二虚拟按键的按键响应逻辑恢复正常;或者
3)在检测到当前未针对第一虚拟按键与第二虚拟按键执行按压操作的情况下,控制第一虚拟按键与第二虚拟按键的按键状态恢复为有效状态,其中,有效状态用于指示第一虚拟按键与第二虚拟按键的按键响应逻辑恢复正常。
需要说明的是,在本实施例中,在将第一虚拟按键和第二虚拟按键的按键状态调整为失效状态之后,为了恢复第一虚拟按键与第二虚拟按键的按键响应逻辑,可以但不限于释放对两个按键的按压,即不再对第一虚拟按键与第二虚拟按键执行按压操作。
此外,在检测到用户仍针对第一虚拟按键执行长按操作,而释放对于第二虚拟按键的按压(即不再执行按压操作)的情况下,则控制第一虚拟按键的按键状态保持失效状态,以使用户在较短时间内无感知的情况下,完成目标动作。而在检测到用户释放对于第一虚拟按键的按压(即不再执行按压操作),而针对第二虚拟按键仍然执行长按操作的情况下,则恢复第二虚拟按键的按键响应逻辑,以使第二虚拟按键可以再次响应用户操作快速重新启动,缩短下一次执行目标动作的启动时长。
通过本申请提供的实施例,在将第一虚拟按键和第二虚拟按键的按键状态调整为失效状态之后,根据用户对于虚拟按键不同的操作方式(如按压或抬起来),控制虚拟按键执行不同操作逻辑,以达到扩展按键操作功能的效果。
需要说明的是,对于前述的各方法实施例,为了简单描述,故将其都表述为一系列的动作组合,但是本领域技术人员应该知悉,本申请并不受所描述的动作顺序的限制,因为依据本申请,某些步骤可以采用其他顺序或者同时进行。其次,本领域技术人员也应该知悉,说明书中所描述的实施例均属于优选实施例,所涉及的动作和模块并不一定是本申请所必须的。
根据本申请实施例的另一个方面,还提供了一种用于实施上述对象控制方法的对象控制装置。如图12所示,该装置包括:
1)第一获取单元1202,用于获取对客户端显示的人机交互界面中的第一虚拟按键及第二虚拟按键执行长按操作生成的操作指令,其中,第一虚拟按键用于调整通过客户端控制的目标对象的前进朝向,第二虚拟按键用于触发目标对象执行目标动作;
2)第一控制单元1204,用于响应操作指令,控制目标对象在当前路径中执行目标动作,并检测目标对象在执行目标动作的过程中产生的目标角度,其中,目标角度为目标对象的前进朝向和目标对象的滑行方向之间的夹角;
3)第一调整单元1206,用于在检测到当前针对第一虚拟按键与第二虚拟按键执行长按操作,且目标角度达到与当前路径相匹配的第一角度阈值的情况下,将第一虚拟按键和第二虚拟按键的按键状态调整为失效状态,以使目标对象进入被动执行目标动作的状态,其中,失效状态用于指示第一虚拟按键和第二虚拟按键的按键响应逻辑处于中止状态。
作为一种可选的方案,上述装置还包括:
1)显示单元,用于在将第一虚拟按键和第二虚拟按键的按键状态调整为失效状态时,在人机交互界面中,控制处于失效状态的第一虚拟按键和第二虚拟按键的显示状态,与针对第一虚拟按键和第二虚拟按键执行长按操作时第一虚拟按键和第二虚拟按键的显示状态保持一致。
通过本申请提供的实施例,在目标角度达到第一角度阈值的情况下,控制处于失效状态的第一虚拟按键和第二虚拟按键的显示状态,保持与执行长按操作时虚拟按键的按键标识的显示状态一致,以使用户在无感知的情况下,控制目标对象完成在当前路径的目标动作。从而达到在对用户无感知的情况下,自动按照第一角度阈值完成当前路径中的目标动作的执行,降低用户操作的操作难度,避免操作不熟练所导致的失误问题。
作为一种可选的方案,上述装置还包括:
1)输入单元,用于在获取对客户端显示的人机交互界面中的第一虚拟按键及第二虚拟按键执行长按操作生成的操作指令之前,将当前路径的路径信息输入目标分类模型,其中,目标分类模型是利用样本数据进行机器训练后所得到的模型,用于确定与路径的路径信息相匹配的角度阈值,角度阈值为在路径中完成目标动作所用时长最短的角度;
2)确定单元,用于根据目标分类模型的输出结果,确定与当前路径相匹配的第一角度阈值。
可选地,在本实施例中,上述装置还包括:
1)第二获取单元,用于在将当前路径的路径信息输入目标分类模型之前,获取在N个样本路径中执行目标动作时产生的样本数据,其中,样本数据包括在第i个样本路径中执行目标动作时所使用的角度及完成目标动作所用时长,i为大于等于1,且小于等于N的整数;
2)训练单元,用于将样本数据输入预先构建的初始分类模型,并根据初 始分类模型的输出结果调整初始分类模型中的参数,以训练得到目标分类模型。
通过本申请提供的实施例,将当前路径的路径信息输入目标分类模型,以便于利用该目标分类模型确定在当前路径下执行目标动作时的最优角度阈值。从而缩短目标对象在当前路径下执行目标动作的时长,提高控制目标对象执行目标动作的准确性。
作为一种可选的方案,上述装置还包括:
1)第三获取单元,用于在根据目标分类模型的输出结果确定与当前路径相匹配的第一角度阈值之后,获取对客户端的配置界面中的角度阈值配置项执行配置操作生成的配置指令;
2)第二调整单元,用于响应配置指令,调整第一角度阈值,得到调整后的第一角度阈值。
通过本申请提供的实施例,在根据目标分类模型的输出结果确定与当前路径相匹配的第一角度阈值之后,获取对配置界面中的角度阈值配置项执行配置操作生成的配置指令,根据该配置指令对第一角度阈值进行进一步的优化调整,以使调整后的第一角度阈值与玩家的操作习惯相适配。从而使得不同玩家可以灵活调整出不同的第一角度阈值,达到提高对象控制的灵活性。
作为一种可选的方案,上述装置还包括:
1)第二控制单元,用于在将第一虚拟按键和第二虚拟按键的按键状态调整为失效状态之后,在检测到当前针对第一虚拟按键执行长按操作,而当前未针对第二虚拟按键执行按压操作的情况下,控制第一虚拟按键的按键状态保持失效状态;或者
2)第三控制单元,用于在检测到当前未针对第一虚拟按键执行按压操作,而当前针对第二虚拟按键执行长按操作的情况下,控制第二虚拟按键的按键状态恢复为有效状态,其中,有效状态用于指示第二虚拟按键的按键响应逻辑恢复正常;或者
3)第四控制单元,用于在检测到当前未针对第一虚拟按键与第二虚拟按键执行按压操作的情况下,控制第一虚拟按键与第二虚拟按键的按键状态恢复为有效状态,其中,有效状态用于指示第一虚拟按键与第二虚拟按键的按键响应逻辑恢复正常。
通过本申请提供的实施例,在将第一虚拟按键和第二虚拟按键的按键状态调整为失效状态之后,根据用户对于虚拟按键不同的操作方式(如按压或抬起来),控制虚拟按键执行不同操作逻辑,以达到扩展按键操作功能的效果。
根据本申请实施例的又一个方面,还提供了一种用于实施上述对象控制方法的电子装置,如图13所示,该电子装置包括存储器1302和处理器1304,该存储器1302中存储有计算机程序,该处理器1304被设置为通过计算机程序执行上述任一项方法实施例中的步骤。
可选地,在本实施例中,上述电子装置可以位于计算机网络的多个网络设备中的至少一个网络设备。
可选地,在本实施例中,上述处理器可以被设置为通过计算机程序执行以下步骤:
S1,获取对客户端显示的人机交互界面中的第一虚拟按键及第二虚拟按键执行长按操作生成的操作指令,其中,第一虚拟按键用于调整通过客户端控制的目标对象的前进朝向,第二虚拟按键用于触发目标对象执行目标动作;
S2,响应操作指令,控制目标对象在当前路径中执行目标动作,并检测目标对象在执行目标动作的过程中产生的目标角度,其中,目标角度为目标对象的前进朝向和目标对象的滑行方向之间的夹角;
S3,在检测到当前针对第一虚拟按键与第二虚拟按键执行长按操作,且目标角度达到与当前路径相匹配的第一角度阈值的情况下,将第一虚拟按键和第二虚拟按键的按键状态调整为失效状态,以使目标对象进入被动执行目标动作的状态,其中,失效状态用于指示第一虚拟按键和第二虚拟按键的按键响应逻辑处于中止状态。
可选地,本领域普通技术人员可以理解,图13所示的结构仅为示意,电子装置也可以是智能手机(如Android手机、iOS手机等)、平板电脑、掌上电脑以及移动互联网设备(Mobile Internet Devices,MID)、PAD等终端设备。图13其并不对上述电子装置的结构造成限定。例如,电子装置还可包括比图13中所示更多或者更少的组件(如网络接口等),或者具有与图13所示不同的配置。
其中,存储器1302可用于存储软件程序以及模块,如本申请实施例中的对象控制方法和装置对应的程序指令/模块,处理器1304通过运行存储在存储器1302内的软件程序以及模块,从而执行各种功能应用以及数据处理,即实现上述的对象控制方法。存储器1302可包括高速随机存储器,还可以包括非易失性存储器,如一个或者多个磁性存储装置、闪存、或者其他非易失性固态存储器。在一些实例中,存储器1302可进一步包括相对于处理器1304远程设置的存储器,这些远程存储器可以通过网络连接至终端。上述网络的实例包括但不限于互联网、企业内部网、局域网、移动通信网及其组合。其中,存储器1302具体可以但不限于用于操作指令、第一角度阈值及目标角度等信息。作为一种示例,如图13所示,上述存储器1302中可以但不限于包括上述对象控制装置中的提取单元1102、确定单元1104、生成单元1106及处理单元1108。此外,还可以包括但不限于上述对象控制装置中的其他模块单元,本示例中不再赘述。
可选地,上述的传输装置1306用于经由一个网络接收或者发送数据。上述的网络具体实例可包括有线网络及无线网络。在一个实例中,传输装置1306包括一个网络适配器(Network Interface Controller,NIC),其可通过网线与 其他网络设备与路由器相连从而可与互联网或局域网进行通讯。在一个实例中,传输装置1306为射频(Radio Frequency,RF)模块,其用于通过无线方式与互联网进行通讯。
此外,上述电子装置还包括:显示器1308,用于显示上述人机交互界面及目标对象在当前路径中执行目标动作的画面;和连接总线1310,用于连接上述电子装置中的各个模块部件。
根据本申请的实施例的又一方面,还提供了一种存储介质,该存储介质中存储有计算机程序,其中,该计算机程序被设置为运行时执行上述任一项方法实施例中的步骤。
可选地,在本实施例中,上述存储介质可以被设置为存储用于执行以下步骤的计算机程序:
S1,获取对客户端显示的人机交互界面中的第一虚拟按键及第二虚拟按键执行长按操作生成的操作指令,其中,第一虚拟按键用于调整通过客户端控制的目标对象的前进朝向,第二虚拟按键用于触发目标对象执行目标动作;
S2,响应操作指令,控制目标对象在当前路径中执行目标动作,并检测目标对象在执行目标动作的过程中产生的目标角度,其中,目标角度为目标对象的前进朝向和目标对象的滑行方向之间的夹角;
S3,在检测到当前针对第一虚拟按键与第二虚拟按键执行长按操作,且目标角度达到与当前路径相匹配的第一角度阈值的情况下,将第一虚拟按键和第二虚拟按键的按键状态调整为失效状态,以使目标对象进入被动执行目标动作的状态,其中,失效状态用于指示第一虚拟按键和第二虚拟按键的按键响应逻辑处于中止状态。
可选地,在本实施例中,本领域普通技术人员可以理解上述实施例的各种方法中的全部或部分步骤是可以通过程序来指令终端设备相关的硬件来完成,该程序可以存储于一计算机可读存储介质中,存储介质可以包括:闪存盘、只读存储器(Read-Only Memory,ROM)、随机存取器(Random Access Memory,RAM)、磁盘或光盘等。
上述本申请实施例序号仅仅为了描述,不代表实施例的优劣。
上述实施例中的集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在上述计算机可读取的存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在存储介质中,包括若干指令用以使得一台或多台计算机设备(可为个人计算机、服务器或者网络设备等)执行本申请各个实施例所述方法的全部或部分步骤。
在本申请的上述实施例中,对各个实施例的描述都各有侧重,某个实施 例中没有详述的部分,可以参见其他实施例的相关描述。
在本申请所提供的几个实施例中,应该理解到,所揭露的客户端,可通过其它的方式实现。其中,以上所描述的装置实施例仅仅是示意性的,例如所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,单元或模块的间接耦合或通信连接,可以是电性或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
以上所述仅是本申请的优选实施方式,应当指出,对于本技术领域的普通技术人员来说,在不脱离本申请原理的前提下,还可以做出若干改进和润饰,这些改进和润饰也应视为本申请的保护范围。

Claims (16)

  1. 一种对象控制方法,应用于终端设备,包括:
    获取对客户端显示的人机交互界面中的第一虚拟按键及第二虚拟按键执行长按操作生成的操作指令,其中,所述第一虚拟按键用于调整通过所述客户端控制的目标对象的前进朝向,所述第二虚拟按键用于触发所述目标对象执行目标动作;
    响应所述操作指令,控制所述目标对象在当前路径中执行所述目标动作,并检测所述目标对象在执行所述目标动作的过程中产生的目标角度,其中,所述目标角度为所述目标对象的前进朝向和所述目标对象的滑行方向之间的夹角;
    在检测到当前针对所述第一虚拟按键与所述第二虚拟按键执行所述长按操作,且所述目标角度达到与所述当前路径相匹配的第一角度阈值的情况下,将所述第一虚拟按键和所述第二虚拟按键的按键状态调整为失效状态,以使所述目标对象进入被动执行所述目标动作的状态,其中,所述失效状态用于指示所述第一虚拟按键和所述第二虚拟按键的按键响应逻辑处于中止状态。
  2. 根据权利要求1所述的方法,在将所述第一虚拟按键和所述第二虚拟按键的按键状态调整为失效状态时,还包括:
    在所述人机交互界面中,控制处于所述失效状态的所述第一虚拟按键和所述第二虚拟按键的显示状态,与针对所述第一虚拟按键和所述第二虚拟按键执行所述长按操作时所述第一虚拟按键和所述第二虚拟按键的显示状态保持一致。
  3. 根据权利要求1所述的方法,在所述获取对客户端显示的人机交互界面中的第一虚拟按键及第二虚拟按键执行长按操作生成的操作指令之前,还包括:
    将所述当前路径的路径信息输入目标分类模型,其中,所述目标分类模型是利用样本数据进行机器训练后得到的模型,用于确定与路径的路径信息相匹配的角度阈值,所述角度阈值为在所述路径中完成所述目标动作所用时长最短的角度;
    根据所述目标分类模型的输出结果,确定与所述当前路径相匹配的所述第一角度阈值。
  4. 根据权利要求3所述的方法,在所述将所述当前路径的路径信息输入目标分类模型之前,还包括:
    获取在N个样本路径中执行所述目标动作时产生的所述样本数据,其中,所述样本数据包括在第i个样本路径中执行所述目标动作时使用的角度及完成所述目标动作所用时长,i为大于等于1,且小于等于N的整数;
    将所述样本数据输入预先构建的初始分类模型,并根据所述初始分类模型的输出结果调整所述初始分类模型中的参数,以训练得到所述目标分类模 型。
  5. 根据权利要求3所述的方法,在所述根据所述目标分类模型的输出结果确定与所述当前路径相匹配的所述第一角度阈值之后,还包括:
    获取对所述客户端的配置界面中的角度阈值配置项执行配置操作生成的配置指令;
    响应所述配置指令,调整所述第一角度阈值,得到调整后的所述第一角度阈值。
  6. 根据权利要求1所述的方法,在所述将所述第一虚拟按键和所述第二虚拟按键的按键状态调整为失效状态之后,还包括:
    在所述目标对象进入被动执行所述目标动作的状态后,根据与所述当前路径相匹配的摩擦阻力,确定所述目标对象完成所述目标动作的剩余时长;
    在所述剩余时长内,控制所述目标对象完成所述目标动作。
  7. 根据权利要求1至6中任一项所述的方法,在所述将所述第一虚拟按键和所述第二虚拟按键的按键状态调整为失效状态之后,还包括:
    在检测到当前针对所述第一虚拟按键执行所述长按操作,而当前未针对所述第二虚拟按键执行按压操作的情况下,控制所述第一虚拟按键的按键状态保持所述失效状态;或者
    在检测到当前未针对所述第一虚拟按键执行按压操作,而当前针对所述第二虚拟按键执行所述长按操作的情况下,控制所述第二虚拟按键的按键状态恢复为有效状态,其中,所述有效状态用于指示所述第二虚拟按键的按键响应逻辑恢复正常;或者
    在检测到当前未针对所述第一虚拟按键与所述第二虚拟按键执行按压操作的情况下,控制所述第一虚拟按键与所述第二虚拟按键的按键状态恢复为有效状态,其中,所述有效状态用于指示所述第一虚拟按键与所述第二虚拟按键的按键响应逻辑恢复正常。
  8. 一种对象控制装置,包括:
    第一获取单元,用于获取对客户端显示的人机交互界面中的第一虚拟按键及第二虚拟按键执行长按操作生成的操作指令,其中,所述第一虚拟按键用于调整通过所述客户端控制的目标对象的前进朝向,所述第二虚拟按键用于触发所述目标对象执行目标动作;
    第一控制单元,用于响应所述操作指令,控制所述目标对象在当前路径中执行所述目标动作,并检测所述目标对象在执行所述目标动作的过程中产生的目标角度,其中,所述目标角度为所述目标对象的前进朝向和所述目标对象的滑行方向之间的夹角;
    第一调整单元,用于在检测到当前针对所述第一虚拟按键与所述第二虚拟按键执行所述长按操作,且所述目标角度达到与所述当前路径相匹配的第一角度阈值的情况下,将所述第一虚拟按键和所述第二虚拟按键的按键状态 调整为失效状态,以使所述目标对象进入被动执行所述目标动作的状态,其中,所述失效状态用于指示所述第一虚拟按键和所述第二虚拟按键的按键响应逻辑处于中止状态。
  9. 根据权利要求8所述的装置,所述装置还包括:
    显示单元,用于在将所述第一虚拟按键和所述第二虚拟按键的按键状态调整为失效状态时,在所述人机交互界面中,控制处于所述失效状态的所述第一虚拟按键和所述第二虚拟按键的显示状态,与针对所述第一虚拟按键和所述第二虚拟按键执行所述长按操作时所述第一虚拟按键和所述第二虚拟按键的显示状态保持一致。
  10. 根据权利要求8所述的装置,还包括:
    输入单元,用于在所述获取对客户端显示的人机交互界面中的第一虚拟按键及第二虚拟按键执行长按操作生成的操作指令之前,将所述当前路径的路径信息输入目标分类模型,其中,所述目标分类模型是利用样本数据进行机器训练后得到的,用于确定与路径的路径信息相匹配的角度阈值,所述角度阈值为在所述路径中完成所述目标动作所用时长最短的角度;
    确定单元,用于根据所述目标分类模型的输出结果,确定与所述当前路径相匹配的所述第一角度阈值。
  11. 根据权利要求10所述的装置,还包括:
    第二获取单元,用于在所述将所述当前路径的路径信息输入目标分类模型之前,获取在N个样本路径中执行所述目标动作时产生的所述样本数据,其中,所述样本数据包括在第i个样本路径中执行所述目标动作时使用的角度及完成所述目标动作所用时长,i为大于等于1,且小于等于N的整数;
    训练单元,用于将所述样本数据输入预先构建的初始分类模型,并根据所述初始分类模型的输出结果调整所述初始分类模型中的参数,以训练得到所述目标分类模型。
  12. 根据权利要求10所述的装置,还包括:
    第三获取单元,用于在所述根据所述目标分类模型的输出结果确定与所述当前路径相匹配的所述第一角度阈值之后,获取对所述客户端的配置界面中的角度阈值配置项执行配置操作生成的配置指令;
    第二调整单元,用于响应所述配置指令,调整所述第一角度阈值,得到调整后的所述第一角度阈值。
  13. 根据权利要求8至12中任一项所述的装置,还包括:
    第二控制单元,用于在所述将所述第一虚拟按键和所述第二虚拟按键的按键状态调整为失效状态之后,在检测到当前针对所述第一虚拟按键执行所述长按操作,而当前未针对所述第二虚拟按键执行按压操作的情况下,控制所述第一虚拟按键的按键状态保持所述失效状态;或者
    第三控制单元,用于在检测到当前未针对所述第一虚拟按键执行按压操 作,而当前针对所述第二虚拟按键执行所述长按操作的情况下,控制所述第二虚拟按键的按键状态恢复为有效状态,其中,所述有效状态用于指示所述第二虚拟按键的按键响应逻辑恢复正常;或者
    第四控制单元,用于在检测到当前未针对所述第一虚拟按键与所述第二虚拟按键执行按压操作的情况下,控制所述第一虚拟按键与所述第二虚拟按键的按键状态恢复为有效状态,其中,所述有效状态用于指示所述第一虚拟按键与所述第二虚拟按键的按键响应逻辑恢复正常。
  14. 一种存储介质,所述存储介质包括存储的程序,其中,所述程序运行时执行上述权利要求1至7任一项中所述的方法。
  15. 一种电子装置,包括存储器和处理器,其特征在于,所述存储器中存储有计算机程序,所述处理器被设置为通过所述计算机程序执行所述权利要求1至7任一项中所述的方法。
  16. 一种计算机程序产品,包括指令,当其在计算机上运行时,使得计算机执行权利要求1至7任一项中所述的方法。
PCT/CN2020/072635 2019-02-21 2020-01-17 对象控制方法和装置、存储介质及电子装置 WO2020168877A1 (zh)

Priority Applications (5)

Application Number Priority Date Filing Date Title
KR1020217013308A KR102549758B1 (ko) 2019-02-21 2020-01-17 오브젝트 제어 방법 및 장치, 저장 매체 및 전자 장치
SG11202103686VA SG11202103686VA (en) 2019-02-21 2020-01-17 Object control method and apparatus, storage medium, and electronic apparatus
JP2021536060A JP7238136B2 (ja) 2019-02-21 2020-01-17 オブジェクト制御方法とオブジェクト制御装置、コンピュータ・プログラム、および電子装置
US17/320,051 US11938400B2 (en) 2019-02-21 2021-05-13 Object control method and apparatus, storage medium, and electronic apparatus
US18/444,415 US20240189711A1 (en) 2019-02-21 2024-02-16 Drift control assistance in virtual environment

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910130187.XA CN109806590B (zh) 2019-02-21 2019-02-21 对象控制方法和装置、存储介质及电子装置
CN201910130187.X 2019-02-21

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/320,051 Continuation US11938400B2 (en) 2019-02-21 2021-05-13 Object control method and apparatus, storage medium, and electronic apparatus

Publications (1)

Publication Number Publication Date
WO2020168877A1 true WO2020168877A1 (zh) 2020-08-27

Family

ID=66607100

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/072635 WO2020168877A1 (zh) 2019-02-21 2020-01-17 对象控制方法和装置、存储介质及电子装置

Country Status (6)

Country Link
US (2) US11938400B2 (zh)
JP (1) JP7238136B2 (zh)
KR (1) KR102549758B1 (zh)
CN (1) CN109806590B (zh)
SG (1) SG11202103686VA (zh)
WO (1) WO2020168877A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113476823A (zh) * 2021-07-13 2021-10-08 网易(杭州)网络有限公司 虚拟对象控制方法、装置、存储介质及电子设备

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109806590B (zh) * 2019-02-21 2020-10-09 腾讯科技(深圳)有限公司 对象控制方法和装置、存储介质及电子装置
CN110207716B (zh) * 2019-04-26 2021-08-17 纵目科技(上海)股份有限公司 一种参考行驶线快速生成方法、系统、终端和存储介质
CN110201387B (zh) 2019-05-17 2021-06-25 腾讯科技(深圳)有限公司 对象控制方法和装置、存储介质及电子装置
CN111388991B (zh) * 2020-03-12 2023-12-01 安徽艺像网络科技有限公司 一种基于多点触控的游戏交互方法
CN116481781B (zh) * 2022-12-01 2024-08-23 广州星际悦动股份有限公司 按键测试方法及系统、按键测试设备及存储介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170228102A1 (en) * 2014-09-28 2017-08-10 Zte Corporation Method and device for operating a touch screen
CN108985367A (zh) * 2018-07-06 2018-12-11 中国科学院计算技术研究所 计算引擎选择方法和基于该方法的多计算引擎平台
CN109107152A (zh) * 2018-07-26 2019-01-01 网易(杭州)网络有限公司 控制虚拟对象漂移的方法和设备
CN109806590A (zh) * 2019-02-21 2019-05-28 腾讯科技(深圳)有限公司 对象控制方法和装置、存储介质及电子装置

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3165768B2 (ja) * 1993-10-21 2001-05-14 株式会社ナムコ ビデオゲーム装置
JP2016120131A (ja) * 2014-12-25 2016-07-07 株式会社バンダイナムコエンターテインメント ゲームシステム及びサーバ
US9687741B1 (en) * 2015-03-10 2017-06-27 Kabam, Inc. System and method for providing separate drift and steering controls
JP6869692B2 (ja) * 2016-10-19 2021-05-12 任天堂株式会社 ゲームプログラム、ゲーム処理方法、ゲームシステム、およびゲーム装置
WO2018216080A1 (ja) * 2017-05-22 2018-11-29 任天堂株式会社 ゲームプログラム、情報処理装置、情報処理システム、および、ゲーム処理方法
EP3441120A4 (en) * 2017-05-22 2020-01-22 Nintendo Co., Ltd. GAME PROGRAM, INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING SYSTEM, AND GAME PROCESSING METHOD
CN109491579B (zh) * 2017-09-12 2021-08-17 腾讯科技(深圳)有限公司 对虚拟对象进行操控的方法和装置
CN108939546B (zh) * 2018-05-21 2021-09-03 网易(杭州)网络有限公司 虚拟对象的漂移控制方法及装置、电子设备、存储介质
CN109513210B (zh) * 2018-11-28 2021-02-12 腾讯科技(深圳)有限公司 虚拟世界中的虚拟车辆漂移方法、装置及存储介质
CN109806586B (zh) * 2019-02-28 2022-02-22 腾讯科技(深圳)有限公司 游戏辅助功能的开启方法、装置、设备及可读存储介质

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170228102A1 (en) * 2014-09-28 2017-08-10 Zte Corporation Method and device for operating a touch screen
CN108985367A (zh) * 2018-07-06 2018-12-11 中国科学院计算技术研究所 计算引擎选择方法和基于该方法的多计算引擎平台
CN109107152A (zh) * 2018-07-26 2019-01-01 网易(杭州)网络有限公司 控制虚拟对象漂移的方法和设备
CN109806590A (zh) * 2019-02-21 2019-05-28 腾讯科技(深圳)有限公司 对象控制方法和装置、存储介质及电子装置

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
GAME FRONTLINE INFORMATION ANALYSIS: "QQ Speed ​​Mobile Games: speak from the data! The B car data you want is here!", BAIJIAHAO.BAIDU.COM, BAIDU, CN, 26 February 2018 (2018-02-26), CN, pages 1 - 10, XP055730719, Retrieved from the Internet <URL:https://baijiahao.baidu.com/s?id=1593329611890592319&wfr=spider&for=pc> [retrieved on 20200915] *
SHARK GIRL MOBILE GAMES VIDEO: "QQ Speed, advanced drift teaching, new fast drift CWW jet", BILIBILI.COM, 17 April 2018 (2018-04-17), CN, pages 1 - 2, XP054980895, [retrieved on 20200915] *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113476823A (zh) * 2021-07-13 2021-10-08 网易(杭州)网络有限公司 虚拟对象控制方法、装置、存储介质及电子设备
CN113476823B (zh) * 2021-07-13 2024-02-27 网易(杭州)网络有限公司 虚拟对象控制方法、装置、存储介质及电子设备

Also Published As

Publication number Publication date
CN109806590A (zh) 2019-05-28
JP7238136B2 (ja) 2023-03-13
US20210260478A1 (en) 2021-08-26
CN109806590B (zh) 2020-10-09
KR102549758B1 (ko) 2023-06-29
US20240189711A1 (en) 2024-06-13
SG11202103686VA (en) 2021-05-28
KR20210064373A (ko) 2021-06-02
US11938400B2 (en) 2024-03-26
JP2022520699A (ja) 2022-04-01

Similar Documents

Publication Publication Date Title
WO2020168877A1 (zh) 对象控制方法和装置、存储介质及电子装置
JP7077463B2 (ja) スマートデバイスの識別および制御
WO2020199820A1 (zh) 对象控制方法和装置、存储介质及电子装置
US11526325B2 (en) Projection, control, and management of user device applications using a connected resource
CN109952757B (zh) 基于虚拟现实应用录制视频的方法、终端设备及存储介质
US9400548B2 (en) Gesture personalization and profile roaming
WO2020224361A1 (zh) 动作执行方法和装置、存储介质及电子装置
US9019201B2 (en) Evolving universal gesture sets
WO2020238636A1 (zh) 虚拟对象控制方法和装置、存储介质及电子装置
WO2017133500A1 (zh) 应用程序的处理方法和装置
WO2020233395A1 (zh) 对象控制方法和装置、存储介质及电子装置
WO2022142626A1 (zh) 虚拟场景的适配显示方法、装置、电子设备、存储介质及计算机程序产品
CN111367488B (zh) 语音设备及语音设备的交互方法、设备、存储介质
US10678327B2 (en) Split control focus during a sustained user interaction
WO2020216018A1 (zh) 操作控制方法和装置、存储介质及设备
US9952668B2 (en) Method and apparatus for processing virtual world
US9437158B2 (en) Electronic device for controlling multi-display and display control method thereof
US11314344B2 (en) Haptic ecosystem
CN108536367B (zh) 一种交互页面卡顿的处理方法、终端及可读存储介质
US20170161011A1 (en) Play control method and electronic client
KR102405307B1 (ko) 전자 장치, 그 제어 방법 및 컴퓨터 판독가능 기록 매체
CN113413590B (zh) 一种信息验证方法、装置、计算机设备及存储介质
US20160078635A1 (en) Avatar motion modification
US9075880B2 (en) Method of associating multiple applications
CN114281185B (zh) 基于嵌入式平台的体态识别及体感交互系统和方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20758881

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 20217013308

Country of ref document: KR

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2021536060

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20758881

Country of ref document: EP

Kind code of ref document: A1