WO2020168877A1 - Object control method and device, storage medium, and electronic device - Google Patents
Object control method and device, storage medium, and electronic device
- Publication number
- WO2020168877A1 (PCT/CN2020/072635; CN 2020072635 W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- target
- virtual
- button
- virtual button
- state
- Prior art date
Classifications
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/20—Input arrangements for video game devices
- A63F13/21—Input arrangements for video game devices characterised by their sensors, purposes or types
- A63F13/214—Input arrangements for video game devices characterised by their sensors, purposes or types for locating contacts on a surface, e.g. floor mats or touch pads
- A63F13/2145—Input arrangements for video game devices for locating contacts on a surface, the surface being also a display device, e.g. touch screens
- A63F13/22—Setup operations, e.g. calibration, key configuration or button assignment
- A63F13/23—Input arrangements for video game devices for interfacing with the game device, e.g. specific interfaces between game controller and console
- A63F13/40—Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
- A63F13/42—Processing input control signals of video game devices by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
- A63F13/422—Processing input control signals of video game devices by mapping the input signals into game commands automatically for the purpose of assisting the player, e.g. automatic braking in a driving game
- A63F13/44—Processing input control signals of video game devices involving timing of operations, e.g. performing an action within a time slot
- A63F13/55—Controlling game characters or game objects based on the game progress
- A63F13/60—Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
- A63F13/80—Special adaptations for executing a specific game genre or game mode
- A63F13/803—Driving vehicles or craft, e.g. cars, airplanes, ships, robots or tanks
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/243—Classification techniques relating to the number of classes
- G06F18/2431—Multiple classes
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/08—Learning methods
Definitions
- This application relates to the computer field, specifically, to an object control technology.
- In a racing game, the racing track includes curves with different turning angles.
- In order to shorten the time for the target object controlled by the player to drive through a curve, the player often controls the target object to perform a drifting action through control buttons provided in the human-computer interaction interface.
- However, the player is usually required to manually adjust the control operation of the control buttons according to game experience to determine the drift angle of the target object during the drift, so that the target object continues driving after drifting at the determined drift angle.
- If the player is not proficient in controlling the target object, drift errors easily occur, which affects the race result.
- In other words, the object control method provided in the related technology places relatively high operation requirements on players, and suffers from poor control accuracy in the process of controlling an object to perform a drift.
- the embodiments of the present application provide an object control method and device, a storage medium, and an electronic device to at least solve the technical problem of poor control accuracy in the process of controlling an object to achieve a target action.
- an object control method applied to a terminal device, including: acquiring an operation instruction generated by performing a long-press operation on a first virtual button and a second virtual button in a human-computer interaction interface displayed on a client,
- wherein the first virtual button is used to adjust the forward direction of the target object controlled by the client,
- and the second virtual button is used to trigger the target object to perform the target action;
- in response to the operation instruction, controlling the target object to execute the target action on the current path, and detecting the target angle generated by the target object during the execution of the target action, wherein the target angle is the angle between the forward direction of the target object and the sliding direction of the target object;
- in the case of detecting that the long-press operation is currently being performed on the first virtual button and the second virtual button, and that the target angle reaches a first angle threshold matching the current path, adjusting the button
- states of the first virtual button and the second virtual button to a disabled state, so that the target object enters a state of passively performing the target action, wherein the disabled state indicates that the button response logic is suspended.
- an object control device, including: a first obtaining unit, configured to obtain an operation instruction generated by performing a long-press operation on the first virtual button and the second virtual button in the human-computer interaction interface displayed on the client, wherein the first virtual button is used to adjust the forward direction of the target object controlled by the client, and the second virtual button is used to trigger the target object to perform the target action; a first control unit, configured to respond to the above operation instruction, control the target object to perform the target action on the current path, and detect the target angle generated by the target object during the execution of the target action, wherein the target angle is the angle between the forward direction of the target object and the sliding direction of the target object; and a first adjusting unit, configured to adjust the button states of the first virtual button and the second virtual button to a disabled state in the case of detecting that the long-press operation is currently being performed on the first virtual button and the second virtual button and that the target angle reaches the first angle threshold matching the current path, so that the target object enters a state of passively performing the target action.
- a storage medium in which a computer program is stored, wherein the computer program is configured to execute the above object control method when running.
- an electronic device, including a memory, a processor, and a computer program stored in the memory and runnable on the processor, wherein the processor executes the above object control method through the computer program.
- a computer program product including instructions, which when run on a computer, cause the computer to execute the above object control method.
- The object control method provided in this embodiment acquires, during the operation of the client of a human-computer interaction application, an operation instruction generated by performing a long-press operation on the first virtual button and the second virtual button in the human-computer interaction interface displayed on the client.
- In response to the operation instruction, the target object is controlled to perform the target action on the current path, and the target angle generated by the target object during the execution of the target action is detected.
- When the target angle reaches the first angle threshold that matches the current path, the button states of the first virtual button and the second virtual button are adjusted to the disabled state.
- In this way, while the target object controlled by the client executes the target action on the current path, the relative relationship between the generated target angle and the first angle threshold is detected so that the target object automatically enters the state of passively performing the target action,
- instead of relying on the player's game experience; the player therefore does not need to manually adjust the control operation based on game experience to determine the angle required for the target object to perform the target action, which reduces the difficulty of the player's operation and improves the control accuracy when the target object performs the target action,
- overcoming the problem of poor control accuracy caused by the player's unskilled control of the target object in the related technology.
- FIG. 1 is a schematic diagram of a hardware environment of an optional object control method according to an embodiment of the present application.
- FIG. 2 is a schematic diagram of the hardware environment of another optional object control method according to an embodiment of the present application.
- FIG. 3 is a flowchart of an optional object control method according to an embodiment of the present application.
- FIG. 4 is a schematic diagram of an optional object control method according to an embodiment of the present application.
- FIG. 5 is a schematic diagram of another optional object control method according to an embodiment of the present application.
- FIG. 6 is a flowchart of another optional object control method according to an embodiment of the present application.
- FIG. 7 is a schematic diagram of yet another optional object control method according to an embodiment of the present application.
- FIG. 8 is a schematic diagram of yet another optional object control method according to an embodiment of the present application.
- FIG. 9 is a schematic diagram of a target classification model in an optional object control method according to an embodiment of the present application.
- FIG. 10 is a schematic diagram of an initial classification model in yet another optional object control method according to an embodiment of the present application.
- FIG. 11 is a schematic diagram of a configuration interface of an optional object control method according to an embodiment of the present application.
- FIG. 12 is a schematic structural diagram of an optional object control device according to an embodiment of the present application.
- FIG. 13 is a schematic structural diagram of an optional electronic device according to an embodiment of the present application.
- an object control method is provided.
- The above object control method can be, but is not limited to being, applied to the hardware environment shown in FIG. 1.
- A client of a human-computer interaction application is installed in the user equipment 102 (shown in FIG. 1 as a racing game application client).
- During the running of the client, in step S102, an operation instruction generated by performing a long-press operation on
- the first virtual button (the direction key shown in the lower left corner of FIG. 1)
- and the second virtual button (the action control button shown in the lower right corner of FIG. 1) in the human-computer interaction interface displayed on the client is obtained.
- the user equipment 102 includes a human-computer interaction screen 104, a processor 106, and a memory 108.
- the human-computer interaction screen 104 is used to obtain human-computer interaction operations;
- the processor 106 is used to generate corresponding operation instructions according to the human-computer interaction operations, and respond to the operation instructions to control the target object to perform corresponding actions.
- The target object is a virtual object controlled by the user through the client, such as a racing car in a racing game application.
- the memory 108 is used to store the aforementioned operation instructions and attribute information related to the target object.
- the user equipment 102 may execute step S104 to send an operation instruction to the server 112 via the network 110.
- the server 112 includes a database 114 and a processing engine 116.
- In step S106, the server 112 calls the processing engine 116 to determine, from the database 114, a first angle threshold that matches the current path where the target object is located.
- In step S108, the server 112 sends the determined first angle threshold to the user equipment 102 via the network 110, so that the user equipment 102 uses the acquired first angle threshold to execute step S110 and control the target object to execute the target action on the current path.
- The above object control method can also be applied, but is not limited to being applied, to the hardware environment shown in FIG. 2. It is still assumed that a client of a human-computer interaction application is installed in the user equipment 102 (shown in FIG. 2 as a racing game application client), and during the running of the client,
- in step S202, the operation instruction generated by performing a long-press operation on the first virtual button (the direction key shown in the lower left corner of FIG. 2) and the second virtual button (the action control button shown in the lower right corner of FIG. 2) in the human-computer interaction interface displayed on the client is acquired.
- Subsequent operations after obtaining the operation instruction can be, but are not limited to being, performed by an independent processing device with stronger processing capabilities, without data interaction with the server 112. If the independent processing device is still the user equipment 102, the processor 106 in the user equipment 102 responds to the above operation instruction and performs steps S204-S208: controlling the target object to perform the target action on the current path, and detecting the target angle generated by the target object in the process of performing the target action; then, upon detecting that the first virtual button and the second virtual button are currently being long-pressed and that the target angle reaches the first angle threshold matching the current path, adjusting the button states of the first virtual button and the second virtual button to the invalid state, so that the target object enters the state of passively performing the target action.
- the foregoing first angle threshold may be, but not limited to, pre-stored in the memory 108 of the user equipment 102. The foregoing is only an example, and there is no limitation on this in this embodiment.
- To sum up, in the object control method provided in this embodiment, during the operation of the client of the human-computer interaction application, the operation instruction generated by performing a long-press operation on the first virtual button and the second virtual button in the human-computer interaction interface displayed on the client is acquired.
- In response to the operation instruction, the target object controlled by the client is controlled to execute the target action on the current path,
- and the target angle generated by the target object during the execution of the target action is detected.
- When the target angle reaches the first angle threshold that matches the current path, the button states of the first virtual button and the second virtual button are adjusted to the disabled state.
- In this way, the relative relationship between the generated target angle and the first angle threshold is used to control the target object to automatically enter the state of passively executing the target action,
- instead of relying on the player's game experience, so that the player does not need to manually adjust the control operation according to game experience to determine the angle required by the target object to perform the target action, thereby reducing the difficulty of the player's operation and improving the control accuracy when performing the target action,
- overcoming the problem of poor control accuracy caused by the player's unskilled control of the target object in the related technology.
- the above-mentioned user equipment may be, but not limited to, a mobile phone, a tablet computer, a notebook computer, a PC, and other terminal devices that support running application clients.
- the foregoing server and user equipment may, but are not limited to, implement data interaction through a network
- the foregoing network may include, but is not limited to, a wireless network or a wired network.
- the wireless network includes: Bluetooth, WIFI and other networks that realize wireless communication.
- the aforementioned wired network may include, but is not limited to: wide area network, metropolitan area network, and local area network. The foregoing is only an example, and there is no limitation on this in this embodiment.
- the foregoing object control method includes:
- the above-mentioned object control method can be but not limited to be applied to scenarios in which objects controlled by the client of a human-computer interaction application are automatically controlled.
- the human-computer interaction application can be, but is not limited to, a competitive application such as a racing game application.
- the target object may correspond to virtual objects that are manipulated in racing game applications, such as virtual characters, virtual equipment, virtual vehicles, etc.
- the aforementioned target action may be, but not limited to, a drifting action in a racing game scene, and the corresponding target angle may be, but not limited to, a drifting angle.
- In the above scenario, the target angle generated by the target object during the drifting action can be obtained in real time and compared with the first angle threshold that matches the current path; when the comparison result indicates that the target angle reaches the first angle threshold, the button response logic of the first virtual button and the second virtual button in the human-computer interaction interface is adjusted to enter a suspended state (i.e., the disabled state), so that the target object automatically enters a passive drift state.
- In this way, while the target object controlled by the client executes the target action on the current path,
- the relative relationship between the target angle generated by the target object and the first angle threshold is detected, so as to control
- the target object to automatically enter the state of passively performing the target action, instead of relying on the player's game experience; the player thus does not need to manually adjust the control operation according to game experience to determine the angle required by the target object to perform the target action, thereby reducing the player's difficulty of operation and improving the accuracy of control when performing the target action.
- the target action performed by the target object in the current path may be, but not limited to, a drift action in a racing scene. It should be noted that the above drifting action needs to be triggered to be executed in a state where it is detected that the first virtual key and the second virtual key are simultaneously long-pressed.
- the first virtual key may be, but is not limited to, a direction key used to control the forward direction of the target object, such as "left direction key” and “right direction key” as shown in FIG. 4.
- the second virtual key can be, but is not limited to, a trigger control key used to trigger the target object to perform the target action, such as the "drift start key" shown in FIG. 4.
- When a long-press operation on a virtual key is detected, the display state of the corresponding virtual key indicates that the virtual key is in the “operating state”.
- For example, a direction key being pressed is displayed with a “solid line”,
- while the drift start key (i.e., the second virtual button)
- being pressed is displayed with “grid filling”.
- the key state of the virtual key may include, but is not limited to: a valid state and an invalid state.
- the above-mentioned invalid state is used to indicate that the button response logic of the virtual button is in a suspended state. That is, when a pressing operation (such as a long-press operation) is performed on a virtual button in an invalid state, the button response logic of the virtual button will not be executed in the background.
- the above valid state is used to indicate that the key response logic of the virtual key is normal. That is, when a pressing operation (such as a long press operation) is performed on a virtual button in a valid state, the button response logic of the virtual button will be executed in the background.
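- The valid/invalid key-state behavior described above can be sketched as follows. This is an illustrative Python sketch, not code from the patent; the class and method names are assumptions.

```python
class VirtualButton:
    """Minimal sketch of a virtual key whose response logic can be suspended."""

    def __init__(self, name, on_press):
        self.name = name
        self.on_press = on_press   # the button response logic
        self.enabled = True        # True = valid state, False = invalid state
        self.pressed = False       # current long-press status (drives the UI)

    def set_state(self, enabled):
        # Adjusting the key state; False corresponds to the "invalid"
        # (suspended) state described above.
        self.enabled = enabled

    def long_press(self):
        # The press is always recorded, so the display state can stay
        # unchanged, but in the invalid state the response logic is not
        # executed in the background.
        self.pressed = True
        if self.enabled:
            self.on_press(self.name)

events = []
btn = VirtualButton("drift_start", lambda name: events.append(name))
btn.long_press()        # valid state: the handler runs
btn.set_state(False)
btn.long_press()        # invalid state: the handler is suppressed
```

- In this sketch, after set_state(False) the pressed flag remains True while on_press no longer fires, mirroring how the interface keeps its display state while the key logic is suspended.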
- After the key states of the first virtual key and the second virtual key are adjusted to the disabled state and the target object enters the state of passively performing the target action, the method may, but is not limited to: keeping the display state of the first virtual button and the second virtual button in the disabled state consistent with their display state while the long-press operation was being performed in the human-computer interaction interface. That is, when the target angle reaches the first angle threshold, the display state of the virtual keys is maintained, so that the user can control the target object to complete the target action on the current path without perceiving the change.
- In some embodiments, a target classification model determines the aforementioned first angle threshold (hereinafter also referred to as sensitivity).
- The target classification model is obtained through machine training using sample data and is used to determine the angle threshold that matches the path information of a path,
- where the angle threshold is the angle at which the target action is completed in the path in the shortest time.
- the first angle threshold determined by the target classification model may also be optimized.
- the configuration method may include but is not limited to: performing a configuration operation on the angle threshold configuration item in the configuration interface of the client.
- the above angle threshold configuration items can be used to implement flexible configuration of the angle threshold, so as to improve the flexibility of controlling the target object to perform the target action.
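- Such a configuration item could be modeled as a simple override of the model-determined value, as in the sketch below. The configuration key names are hypothetical, introduced only for illustration.

```python
def effective_angle_threshold(model_threshold, config):
    """Use the threshold determined by the target classification model unless
    the player has configured a manual override in the client settings."""
    if config.get("drift.angle_threshold.auto", True):
        return model_threshold
    return config["drift.angle_threshold.value"]

# Automatic mode: the model-determined threshold is used as-is.
auto = effective_angle_threshold(42.0, {"drift.angle_threshold.auto": True})

# Manual mode: the player's configured value takes precedence.
manual = effective_angle_threshold(42.0, {"drift.angle_threshold.auto": False,
                                          "drift.angle_threshold.value": 35.0})
```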
- The background of the client can directly monitor the target angle of the controlled target object in real time, or can, but is not limited to, first acquire the forward direction of the target object and the sliding direction of the target object, and then calculate the target angle of the target object from the two directions.
- the forward direction of the target object may correspond to the front direction of the virtual vehicle
- the sliding direction of the target object may correspond to the actual sliding direction of the body of the virtual vehicle.
- the above two directions are used to determine the target angle generated by the virtual vehicle when performing a drifting action.
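- Assuming the two directions are available as 2-D vectors, the target angle can be computed as the angle between them; the function below is an illustrative sketch whose name and signature are assumptions.

```python
import math

def target_angle(forward, sliding):
    """Angle in degrees between the forward (front-of-vehicle) direction
    and the sliding (actual velocity) direction of the virtual vehicle."""
    fx, fy = forward
    sx, sy = sliding
    dot = fx * sx + fy * sy
    norm = math.hypot(fx, fy) * math.hypot(sx, sy)
    # Clamp to guard against floating-point drift outside [-1, 1].
    cos_a = max(-1.0, min(1.0, dot / norm))
    return math.degrees(math.acos(cos_a))

# Vehicle pointing along +x while actually sliding 45 degrees off its nose:
angle = target_angle((1.0, 0.0), (1.0, 1.0))
```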
- In some embodiments, after adjusting the key states of the first virtual key and the second virtual key to the disabled state, the method further includes: after the target object enters the state of passively executing the target action, determining, according to the frictional resistance matching the current path, the remaining time for the target object to complete the target action; and controlling the target object to complete the target action within the remaining time.
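- The patent states only that the remaining time is determined from the frictional resistance matching the current path; the concrete relation below is an assumed linear model for illustration, in which higher friction unwinds the remaining drift angle faster and therefore ends the drift sooner.

```python
def remaining_drift_time(remaining_angle, friction_resistance, gain=2.0):
    """Assumed model: the angle still to be unwound is closed out at an
    angular recovery rate proportional to the path's frictional resistance
    (degrees per second). The gain factor is a placeholder constant."""
    recovery_rate = gain * friction_resistance
    return remaining_angle / recovery_rate

# 30 degrees of drift angle left on a path with friction resistance 5.0:
t = remaining_drift_time(remaining_angle=30.0, friction_resistance=5.0)
```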
- In steps S602-S608 shown in FIG. 6, a racing game application client is still taken as an example for description: the target object is a virtual vehicle participating in a race controlled by the client, and the target action is a drifting action.
- the first virtual key is a direction key, and the second virtual key is a drift start key.
- In step S602, the client obtains the operation instruction generated by simultaneously long-pressing the direction key (assuming the right direction key is long-pressed) and the drift start key.
- the client will execute step S604 to control the corresponding virtual vehicle to start drifting on the current path.
- the target angle of the virtual vehicle will continue to increase.
- In step S606, it is detected whether the generated target angle reaches the first angle threshold. If the target angle has not reached the first angle threshold, the process returns to step S604 to maintain the steering force and continue the steering drift; if the target angle reaches the first angle threshold, step S608 is executed to control the virtual vehicle to enter a passive drift state.
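- The S602-S608 flow can be condensed into a control loop like the following Python sketch; the per-tick angle increment and the threshold value are placeholders, not values from the patent.

```python
def drift_control(initial_angle, angle_per_tick, first_angle_threshold):
    """Mimics steps S604-S608: while the long press is held, the steering
    force keeps increasing the target angle; once the angle reaches the
    threshold, the keys are disabled and the vehicle enters passive drift."""
    angle = initial_angle
    ticks = 0
    passive = False
    while not passive:
        ticks += 1
        angle += angle_per_tick             # S604: keep steering drift
        if angle >= first_angle_threshold:  # S606: compare against threshold
            passive = True                  # S608: enter passive drift state
    return ticks, angle, passive

ticks, angle, passive = drift_control(0.0, 5.0, 30.0)
```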
- The target angle θ generated by the virtual vehicle during the drifting action can be shown as in FIG. 7: it is the angle between the forward direction of the virtual vehicle (i.e., the front direction) and the sliding direction of the virtual vehicle (i.e., the vector direction of the actual vehicle speed).
- In this way, while the target object controlled by the client executes the target action on the current path, the relative relationship between the target angle generated by the target object and the first angle threshold is detected so as to control the target object to automatically enter the state of passively executing the target action, instead of relying on the player's game experience; the player thus does not need to manually adjust the control operation according to game experience to determine the angle required by the target object to perform the target action, which reduces the difficulty of the player's operation and improves
- the control accuracy during the target action, overcoming the problem of poor control accuracy caused by the player's unskilled control of the target object in the related technology.
- the method further includes:
- the display state of the virtual key may be, but not limited to, presented through the UI performance of the virtual key in the human-computer interaction interface.
- When an operation on a virtual key is detected, the display state of the corresponding virtual key is the “operation state”;
- when no operation is performed on the virtual key,
- the display state of the corresponding virtual key is the “no operation state”.
- the first virtual button can be, but is not limited to, the direction keys shown in the lower left corner of FIG. 8, including "left direction button” and “right direction button”.
- the second virtual key can be, but is not limited to, the "drift start key” as shown in the lower right corner of FIG. 8.
- When the user is detected long-pressing the “right direction button”, its display state is the “operating state”, shown in FIG. 8 as a “solid line”, while the “left direction button”, on which no user operation is detected, is in the “no operation state”, shown in FIG. 8 as a “dashed line”.
- Similarly, the display state of the “drift start key” is the “operating state”, shown in FIG. 8 as “grid filling”.
- The target angle θ generated by the virtual vehicle during the drifting action is then detected. Under the interaction between the torsion force F and the frictional resistance f generated during the execution of the drift action, the target angle θ continues to increase.
- When the target angle reaches the first angle threshold, the button states of the “right direction button” and the “drift start button” are adjusted to the disabled state, so that the virtual vehicle enters a state of passive drift.
- Meanwhile, the display state of the first virtual button and the second virtual button in the disabled state is controlled to remain consistent with the display state while the virtual buttons were being long-pressed,
- so that the user can control the target object to complete the target action on the current path without perceiving the change.
- the execution of the target action in the current path is thus completed automatically according to the first angle threshold, without the user's perception, which reduces the difficulty of the user's operation and avoids errors caused by unskilled operation.
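The per-frame check described above can be sketched as follows. This is a minimal illustration, not the patented implementation; the class and attribute names (`DriftController`, `on_frame`, `passive_drift`) are illustrative, and the threshold value is assumed to come from the path-matching step described later.

```python
from dataclasses import dataclass, field

@dataclass
class VirtualButton:
    enabled: bool = True     # False -> button response logic is suspended
    pressed: bool = False    # True while a long-press is held

@dataclass
class DriftController:
    first_angle_threshold: float                  # threshold matched to the current path
    direction_key: VirtualButton = field(default_factory=VirtualButton)
    drift_key: VirtualButton = field(default_factory=VirtualButton)
    passive_drift: bool = False

    def on_frame(self, target_angle: float) -> None:
        # Both keys are long-pressed and the drift angle has reached the
        # path-matched threshold: suspend key response logic so the
        # vehicle completes the drift passively.  The display state of
        # the keys is intentionally left unchanged, so the player
        # perceives no difference while the keys are disabled.
        if (self.direction_key.pressed and self.drift_key.pressed
                and target_angle >= self.first_angle_threshold
                and not self.passive_drift):
            self.direction_key.enabled = False
            self.drift_key.enabled = False
            self.passive_drift = True
```

Keeping the visual state untouched while the response logic is suspended is what makes the hand-off to passive drift imperceptible to the player.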
- before the operation instruction generated by the long-press operation on the first virtual button and the second virtual button in the human-computer interaction interface displayed on the client is obtained, the method further includes:
- the above-mentioned target classification model can be, but is not limited to, used to classify the driving difficulty of the current path according to the path information of the current path, determine the first angle threshold matching the current path according to the classification result, and output the first angle threshold as the output result.
- the first angle threshold may also be used, but not limited to, to indicate the sensitivity with which the player controls the target object through the current path.
- the above-mentioned target classification model may be as shown in FIG. 9.
- the classification of the driving difficulty of the current path according to its path information may include, but is not limited to: extracting the path features of the current path through the target classification model 900 (path features 1 to k as shown in FIG. 9), where the path features may include, but are not limited to, curve angle, curve length, and frictional resistance, and storing these path features in the database 902. Then, through the embedding function 904-2 and the neural network layer 904-4 in the deep network 904, deep learning is performed on the path features of the current path in the database 902, and the classifier 906 determines the classification grade of the driving difficulty of the current path.
- the angle threshold adapted to the classification grade is obtained as the first angle threshold matching the current path, giving the output result 908.
- the first angle threshold is the angle at which the target action takes the shortest time to execute in a path of the above classification grade.
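The inference flow above (path features → difficulty grade → grade-specific angle threshold) can be sketched as follows. This is a stand-in for the learned deep network of FIG. 9: the scoring weights, grade boundaries, and per-grade thresholds are purely illustrative assumptions, not values from the patent.

```python
def classify_difficulty(curve_angle: float, curve_length: float,
                        friction: float) -> int:
    """Map path features to a coarse difficulty grade (0 = easy .. 2 = hard).
    A real system would use the trained classifier; this linear score is
    only a placeholder."""
    score = 0.01 * curve_angle + 0.002 * curve_length + 5.0 * friction
    if score < 1.0:
        return 0
    if score < 2.0:
        return 1
    return 2

# Per-grade angle thresholds, assumed to have been chosen during training
# as the angle with the shortest drift-completion time for that grade.
THRESHOLD_BY_GRADE = {0: 25.0, 1: 32.0, 2: 40.0}

def first_angle_threshold(curve_angle: float, curve_length: float,
                          friction: float) -> float:
    """Look up the threshold matching the path's difficulty grade."""
    grade = classify_difficulty(curve_angle, curve_length, friction)
    return THRESHOLD_BY_GRADE[grade]
```

The key design point is that the model's output is not used directly as an angle; it selects a difficulty grade, and the threshold is the grade's time-optimal angle found during training.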
- before the path information of the current path is input into the target classification model, the method further includes: acquiring sample data generated when the target action is executed in N sample paths, where the sample data includes the angle used when executing the target action in the i-th sample path and the time taken to complete the target action, i being an integer greater than or equal to 1 and less than or equal to N; and inputting the sample data into a pre-built initial classification model and adjusting the parameters of the initial classification model according to its output results, so as to train the target classification model.
- the initial classification model is constructed in advance, and it is assumed that the sample data generated when the target action is executed in N sample paths is obtained.
- the above sample data may include, but is not limited to: the angle used when the target action is executed in each sample path and the time taken to complete the target action.
- the above-mentioned angle may include, but is not limited to, the range [angle_min, angle_max] and the corresponding duration when the target action is executed in the sample path.
- the path characteristics of the aforementioned sample path, such as the curve angle, the curve length, and the frictional resistance, are obtained.
- deep learning is performed on the path features and sample data of the above N sample paths.
- for sample path 1, the path features of sample path 1 (path features 1 to k as shown in FIG. 10) and the corresponding sample data 1 are obtained and stored in the database 1002; the above path features 1 to k and sample data 1 are then input into the deep network 1004, which uses the embedding function 1004-2 and the neural network layer 1004-4 to perform deep learning on the path features of sample path 1 and on sample data 1, and the output result 1008 is obtained through the classifier 1006.
- the parameters in the deep network 1004 of the initial classification model are adjusted and optimized to train a target classification model whose results converge, so that the target classification model can be used to determine the optimal angle threshold for executing the target action in each path.
- the path information of the current path is input into the target classification model, so that the target classification model can be used to determine the optimal angle threshold when the target action is performed under the current path. This shortens the time for the target object to perform the target action under the current path, and improves the accuracy of controlling the target object to perform the target action.
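The training objective described above — for each class of path, find the angle whose drift completes in the shortest time — can be illustrated by the following reduction over the sample data. This sketch assumes the samples have already been labeled with a difficulty grade; the tuple layout and function name are hypothetical, and the real system learns this mapping through the deep network rather than by direct lookup.

```python
def best_angle_per_grade(samples):
    """samples: iterable of (grade, angle, duration) tuples drawn from the
    N sample paths.  Returns, per difficulty grade, the angle whose target
    action completed in the shortest time -- the value the trained model
    should output as that grade's first angle threshold."""
    best = {}  # grade -> (angle, shortest duration seen so far)
    for grade, angle, duration in samples:
        if grade not in best or duration < best[grade][1]:
            best[grade] = (angle, duration)
    return {grade: angle for grade, (angle, _) in best.items()}
```

Example usage: with two grades and two trials each, the angle paired with the smaller duration wins in each grade.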
- the method further includes:
- after the first angle threshold is determined based on the target classification model, a configuration interface can also be, but is not limited to being, displayed on the human-computer interaction interface. The configuration interface can include an angle threshold configuration item, shown as "angle threshold" in FIG. 11. Further, a configuration instruction generated by performing a configuration operation on the parameter value θ of the angle threshold configuration item is acquired, so as to optimize the configuration of the first angle threshold determined by the target classification model.
- the configuration instruction generated by performing the configuration operation on the angle threshold configuration item in the configuration interface is obtained, and the first angle threshold is further optimized and adjusted according to the configuration instruction, so that the adjusted first angle threshold is adapted to the player's operating habits. This makes it convenient for different players to flexibly adjust different first angle thresholds, improving the flexibility of object control.
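A player-side override of the model-determined threshold might look like the following. The function name and the clamp bounds are illustrative assumptions; the patent only specifies that a configuration instruction adjusts the first angle threshold.

```python
from typing import Optional

def apply_player_config(model_threshold: float,
                        configured: Optional[float],
                        lo: float = 10.0, hi: float = 60.0) -> float:
    """Return the effective first angle threshold: the value configured by
    the player via the 'angle threshold' item (FIG. 11) if present,
    clamped to an assumed sane range, otherwise the model's output."""
    if configured is None:
        return model_threshold
    return max(lo, min(hi, configured))
```

Clamping the configured value is a defensive choice so an accidental extreme setting cannot make the passive-drift hand-off unreachable.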
- the method further includes:
- the button state of the second virtual button is controlled to return to the valid state, where the valid state is used to indicate that the button response logic of the second virtual button returns to normal;
- the button state of the first virtual button is controlled to remain in the disabled state, so that the user can complete the target action in a short time without perceiving the change.
- the button response logic of the second virtual button is restored, so that the second virtual button can quickly respond to the user's next operation, shortening the start-up time of the next execution of the target action.
- the virtual key is controlled to execute different operation logic according to the user's different operation modes on the virtual key (such as pressing or lifting), achieving the effect of expanding key operation functions.
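The three release rules above can be sketched as a small state-update function. This is a minimal illustration under the assumption that both keys start in the disabled state after passive drift begins; combinations the description does not cover (both keys still held) are deliberately left unchanged.

```python
def update_key_states(first_pressed: bool, second_pressed: bool,
                      states: dict) -> None:
    """Mutates states ('first'/'second' -> 'disabled'|'enabled') per the
    three cases in the description."""
    if first_pressed and not second_pressed:
        # Case 1: direction key still long-pressed, drift key released:
        # the first key's response logic stays suspended.
        states["first"] = "disabled"
    elif (not first_pressed) and second_pressed:
        # Case 2: direction key released, drift key still long-pressed:
        # restore the second key so the next drift can start quickly.
        states["second"] = "enabled"
    elif not first_pressed and not second_pressed:
        # Case 3: both released: restore normal response logic for both.
        states["first"] = "enabled"
        states["second"] = "enabled"
```

Dispatching on the press combination is what lets one physical gesture (lifting a finger) carry different meanings for the two keys.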
- an object control device for implementing the above object control method.
- the device includes:
- the first obtaining unit 1202 is configured to obtain the operation instruction generated by the long-press operation on the first virtual button and the second virtual button in the human-computer interaction interface displayed on the client, where the first virtual button is used to adjust the forward direction of the target object controlled by the client, and the second virtual key is used to trigger the target object to perform the target action;
- the first control unit 1204 is configured to respond to the operation instruction, control the target object to perform the target action in the current path, and detect the target angle generated by the target object during execution of the target action, where the target angle is the angle between the forward direction of the target object and the sliding direction of the target object;
- the first adjustment unit 1206 is configured to: when it is detected that the long-press operation is currently performed on the first virtual button and the second virtual button, and the target angle reaches the first angle threshold matching the current path, adjust the key states of the first virtual key and the second virtual key to the disabled state, so that the target object enters the state of passively performing the target action, where the disabled state is used to indicate that the key response logic of the first virtual key and the second virtual key is suspended.
- the foregoing device further includes:
- the display unit is configured to, when the button states of the first and second virtual buttons are adjusted to the disabled state, control the display state of the first and second virtual buttons in the disabled state in the human-computer interaction interface to remain consistent with their display state when the long-press operation is performed on the first virtual button and the second virtual button.
- the display state of the first virtual button and the second virtual button in the disabled state is controlled to remain consistent with the display state of the button identifiers during the long-press operation, so that the user, without perceiving the change, controls the target object to complete the target action on the current path.
- the execution of the target action in the current path is completed automatically according to the first angle threshold without the user's perception, which reduces the difficulty of the user's operation and avoids errors caused by unskilled operation.
- the foregoing device further includes:
- the input unit is configured to input the path information of the current path into the target classification model before the operation instruction generated by the long-press operation on the first virtual key and the second virtual key in the human-computer interaction interface displayed on the client is obtained, where the target classification model is a model obtained through machine training using sample data and is used to determine an angle threshold matching the path information of a path, the angle threshold being the angle at which the target action takes the shortest time to complete in the path;
- the determining unit is used to determine the first angle threshold that matches the current path according to the output result of the target classification model.
- the foregoing device further includes:
- the second acquiring unit is configured to acquire, before the path information of the current path is input into the target classification model, the sample data generated when the target action is executed in N sample paths, where the sample data includes the angle used when executing the target action in the i-th sample path and the time taken to complete the target action, i being an integer greater than or equal to 1 and less than or equal to N;
- the training unit is used to input sample data into the pre-built initial classification model, and adjust the parameters in the initial classification model according to the output results of the initial classification model to train to obtain the target classification model.
- the path information of the current path is input into the target classification model, so that the target classification model can be used to determine the optimal angle threshold when the target action is performed under the current path. This shortens the time for the target object to perform the target action under the current path, and improves the accuracy of controlling the target object to perform the target action.
- the foregoing device further includes:
- the third acquiring unit is configured to acquire a configuration instruction generated by performing a configuration operation on the angle threshold configuration item in the configuration interface of the client after determining the first angle threshold that matches the current path according to the output result of the target classification model;
- the second adjustment unit is configured to adjust the first angle threshold in response to the configuration instruction to obtain the adjusted first angle threshold.
- the configuration instruction generated by performing the configuration operation on the angle threshold configuration item in the configuration interface is obtained, and the first angle threshold is further optimized and adjusted according to the configuration instruction, so that the adjusted first angle threshold is adapted to the player's operating habits.
- different players can flexibly adjust different first angle thresholds, so as to improve the flexibility of object control.
- the foregoing device further includes:
- the second control unit is configured to, after the button states of the first virtual button and the second virtual button are adjusted to the disabled state, control the button state of the first virtual button to remain in the disabled state when it is detected that the long-press operation is currently performed on the first virtual button but no pressing operation is currently performed on the second virtual button;
- the third control unit is configured to control the button state of the second virtual button to return to the valid state when it is detected that no pressing operation is currently performed on the first virtual button but the long-press operation is currently performed on the second virtual button, where the valid state is used to indicate that the button response logic of the second virtual button returns to normal; or
- the fourth control unit is configured to control the button states of the first virtual button and the second virtual button to return to the valid state when it is detected that no pressing operation is currently performed on the first virtual button and the second virtual button, where the valid state is used to indicate that the button response logic of the first virtual button and the second virtual button returns to normal.
- the virtual key is controlled to execute different operation logic according to the user's different operation modes on the virtual key (such as pressing or lifting), achieving the effect of expanding key operation functions.
- an electronic device for implementing the above object control method is further provided.
- the electronic device includes a memory 1302 and a processor 1304.
- the memory 1302 stores a computer program, and
- the processor 1304 is configured to execute the steps in any one of the foregoing method embodiments through a computer program.
- the above electronic device may be located in at least one network device among multiple network devices in the computer network.
- the foregoing processor may be configured to execute the following steps through a computer program:
- FIG. 13 is only for illustration, and the electronic device may also be a smart phone (such as an Android phone or an iOS phone), a tablet computer, a palmtop computer, a mobile Internet device (Mobile Internet Device, MID), a PAD, or another terminal device.
- FIG. 13 does not limit the structure of the above electronic device.
- the electronic device may also include more or fewer components (such as a network interface) than shown in FIG. 13, or have a configuration different from that shown in FIG. 13.
- the memory 1302 can be used to store software programs and modules, such as program instructions/modules corresponding to the object control method and device in the embodiments of the present application.
- the processor 1304 runs the software programs and modules stored in the memory 1302, thereby executing various functional applications and data processing, that is, realizing the above-mentioned object control method.
- the memory 1302 may include a high-speed random access memory, and may also include a non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory.
- the memory 1302 may further include a memory remotely provided with respect to the processor 1304, and these remote memories may be connected to the terminal through a network.
- the memory 1302 can be specifically, but not limited to, used for storing information such as the operation instruction, the first angle threshold, and the target angle.
- the memory 1302 may include, but is not limited to, the first obtaining unit 1202, the first control unit 1204, and the first adjustment unit 1206 in the above object control device.
- the memory 1302 may also include, but is not limited to, other module units in the above object control device, which will not be repeated in this example.
- the aforementioned transmission device 1306 is used to receive or send data via a network.
- the above-mentioned specific examples of networks may include wired networks and wireless networks.
- the transmission device 1306 includes a network adapter (Network Interface Controller, NIC), which can be connected to other network devices and routers via a network cable so as to communicate with the Internet or a local area network.
- the transmission device 1306 is a radio frequency (RF) module, which is used to communicate with the Internet in a wireless manner.
- the above-mentioned electronic device further includes: a display 1308 for displaying the above-mentioned human-computer interaction interface and a screen of the target object performing the target action in the current path; and a connection bus 1310 for connecting each module component in the above-mentioned electronic device.
- a storage medium in which a computer program is stored, wherein the computer program is configured to execute the steps in any of the foregoing method embodiments when running.
- the foregoing storage medium may be configured to store a computer program for executing the following steps:
- the storage medium may include: a flash disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, an optical disk, or the like.
- if the integrated unit in the foregoing embodiments is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in the foregoing computer-readable storage medium.
- the technical solution of this application, in essence, or the part contributing to the prior art, or all or part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes a number of instructions for causing one or more computer devices (which may be personal computers, servers, network devices, or the like) to execute all or part of the steps of the methods described in the various embodiments of the present application.
- the disclosed client can be implemented in other ways.
- the device embodiments described above are only illustrative.
- the division of the units is only a logical functional division; in actual implementation there may be other division manners, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
- the displayed or discussed mutual coupling, direct coupling, or communication connection may be implemented through some interfaces, or as indirect coupling or communication connection between units or modules, and may be in electrical or other forms.
- the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
- the functional units in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
- the above-mentioned integrated unit can be implemented in the form of hardware or software functional unit.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Human Computer Interaction (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Physics & Mathematics (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Computation (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Software Systems (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Life Sciences & Earth Sciences (AREA)
- Mathematical Physics (AREA)
- Computing Systems (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Computational Biology (AREA)
- Medical Informatics (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Health & Medical Sciences (AREA)
- User Interface Of Digital Computer (AREA)
- Optics & Photonics (AREA)
Abstract
Description
Claims (16)
- An object control method, applied to a terminal device, comprising: obtaining an operation instruction generated by performing a long-press operation on a first virtual button and a second virtual button in a human-computer interaction interface displayed on a client, wherein the first virtual button is used to adjust a forward direction of a target object controlled through the client, and the second virtual button is used to trigger the target object to perform a target action; in response to the operation instruction, controlling the target object to perform the target action in a current path, and detecting a target angle generated by the target object in the process of performing the target action, wherein the target angle is an included angle between the forward direction of the target object and a sliding direction of the target object; and when it is detected that the long-press operation is currently performed on the first virtual button and the second virtual button, and the target angle reaches a first angle threshold matching the current path, adjusting button states of the first virtual button and the second virtual button to a disabled state, so that the target object enters a state of passively performing the target action, wherein the disabled state is used to indicate that button response logic of the first virtual button and the second virtual button is suspended.
- The method according to claim 1, wherein when the button states of the first virtual button and the second virtual button are adjusted to the disabled state, the method further comprises: in the human-computer interaction interface, controlling a display state of the first virtual button and the second virtual button in the disabled state to remain consistent with a display state of the first virtual button and the second virtual button when the long-press operation is performed on the first virtual button and the second virtual button.
- The method according to claim 1, wherein before the obtaining of the operation instruction generated by performing the long-press operation on the first virtual button and the second virtual button in the human-computer interaction interface displayed on the client, the method further comprises: inputting path information of the current path into a target classification model, wherein the target classification model is a model obtained through machine training using sample data and is used to determine an angle threshold matching the path information of a path, the angle threshold being the angle at which the target action takes the shortest time to complete in the path; and determining, according to an output result of the target classification model, the first angle threshold matching the current path.
- The method according to claim 3, wherein before the inputting of the path information of the current path into the target classification model, the method further comprises: acquiring the sample data generated when the target action is performed in N sample paths, wherein the sample data comprises an angle used when the target action is performed in an i-th sample path and a time taken to complete the target action, i being an integer greater than or equal to 1 and less than or equal to N; and inputting the sample data into a pre-built initial classification model, and adjusting parameters in the initial classification model according to an output result of the initial classification model, so as to obtain the target classification model through training.
- The method according to claim 3, wherein after the determining, according to the output result of the target classification model, of the first angle threshold matching the current path, the method further comprises: acquiring a configuration instruction generated by performing a configuration operation on an angle threshold configuration item in a configuration interface of the client; and adjusting the first angle threshold in response to the configuration instruction to obtain an adjusted first angle threshold.
- The method according to claim 1, wherein after the adjusting of the button states of the first virtual button and the second virtual button to the disabled state, the method further comprises: after the target object enters the state of passively performing the target action, determining, according to frictional resistance matching the current path, a remaining duration for the target object to complete the target action; and controlling the target object to complete the target action within the remaining duration.
- The method according to any one of claims 1 to 6, wherein after the adjusting of the button states of the first virtual button and the second virtual button to the disabled state, the method further comprises: when it is detected that the long-press operation is currently performed on the first virtual button but no pressing operation is currently performed on the second virtual button, controlling the button state of the first virtual button to remain in the disabled state; or when it is detected that no pressing operation is currently performed on the first virtual button but the long-press operation is currently performed on the second virtual button, controlling the button state of the second virtual button to return to a valid state, wherein the valid state is used to indicate that the button response logic of the second virtual button returns to normal; or when it is detected that no pressing operation is currently performed on the first virtual button and the second virtual button, controlling the button states of the first virtual button and the second virtual button to return to a valid state, wherein the valid state is used to indicate that the button response logic of the first virtual button and the second virtual button returns to normal.
- An object control apparatus, comprising: a first obtaining unit, configured to obtain an operation instruction generated by performing a long-press operation on a first virtual button and a second virtual button in a human-computer interaction interface displayed on a client, wherein the first virtual button is used to adjust a forward direction of a target object controlled through the client, and the second virtual button is used to trigger the target object to perform a target action; a first control unit, configured to control, in response to the operation instruction, the target object to perform the target action in a current path, and detect a target angle generated by the target object in the process of performing the target action, wherein the target angle is an included angle between the forward direction of the target object and a sliding direction of the target object; and a first adjustment unit, configured to adjust, when it is detected that the long-press operation is currently performed on the first virtual button and the second virtual button and the target angle reaches a first angle threshold matching the current path, button states of the first virtual button and the second virtual button to a disabled state, so that the target object enters a state of passively performing the target action, wherein the disabled state is used to indicate that button response logic of the first virtual button and the second virtual button is suspended.
- The apparatus according to claim 8, further comprising: a display unit, configured to control, when the button states of the first virtual button and the second virtual button are adjusted to the disabled state, a display state of the first virtual button and the second virtual button in the disabled state in the human-computer interaction interface to remain consistent with a display state of the first virtual button and the second virtual button when the long-press operation is performed on the first virtual button and the second virtual button.
- The apparatus according to claim 8, further comprising: an input unit, configured to input path information of the current path into a target classification model before the operation instruction generated by performing the long-press operation on the first virtual button and the second virtual button in the human-computer interaction interface displayed on the client is obtained, wherein the target classification model is obtained through machine training using sample data and is used to determine an angle threshold matching the path information of a path, the angle threshold being the angle at which the target action takes the shortest time to complete in the path; and a determining unit, configured to determine, according to an output result of the target classification model, the first angle threshold matching the current path.
- The apparatus according to claim 10, further comprising: a second acquiring unit, configured to acquire, before the path information of the current path is input into the target classification model, the sample data generated when the target action is performed in N sample paths, wherein the sample data comprises an angle used when the target action is performed in an i-th sample path and a time taken to complete the target action, i being an integer greater than or equal to 1 and less than or equal to N; and a training unit, configured to input the sample data into a pre-built initial classification model and adjust parameters in the initial classification model according to an output result of the initial classification model, so as to obtain the target classification model through training.
- The apparatus according to claim 10, further comprising: a third acquiring unit, configured to acquire, after the first angle threshold matching the current path is determined according to the output result of the target classification model, a configuration instruction generated by performing a configuration operation on an angle threshold configuration item in a configuration interface of the client; and a second adjustment unit, configured to adjust the first angle threshold in response to the configuration instruction to obtain an adjusted first angle threshold.
- The apparatus according to any one of claims 8 to 12, further comprising: a second control unit, configured to control, after the button states of the first virtual button and the second virtual button are adjusted to the disabled state, the button state of the first virtual button to remain in the disabled state when it is detected that the long-press operation is currently performed on the first virtual button but no pressing operation is currently performed on the second virtual button; or a third control unit, configured to control the button state of the second virtual button to return to a valid state when it is detected that no pressing operation is currently performed on the first virtual button but the long-press operation is currently performed on the second virtual button, wherein the valid state is used to indicate that the button response logic of the second virtual button returns to normal; or a fourth control unit, configured to control the button states of the first virtual button and the second virtual button to return to a valid state when it is detected that no pressing operation is currently performed on the first virtual button and the second virtual button, wherein the valid state is used to indicate that the button response logic of the first virtual button and the second virtual button returns to normal.
- A storage medium, comprising a stored program, wherein the program, when run, executes the method according to any one of claims 1 to 7.
- An electronic apparatus, comprising a memory and a processor, wherein the memory stores a computer program, and the processor is configured to execute, through the computer program, the method according to any one of claims 1 to 7.
- A computer program product, comprising instructions which, when run on a computer, cause the computer to execute the method according to any one of claims 1 to 7.
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020217013308A KR102549758B1 (ko) | 2019-02-21 | 2020-01-17 | 오브젝트 제어 방법 및 장치, 저장 매체 및 전자 장치 |
SG11202103686VA SG11202103686VA (en) | 2019-02-21 | 2020-01-17 | Object control method and apparatus, storage medium, and electronic apparatus |
JP2021536060A JP7238136B2 (ja) | 2019-02-21 | 2020-01-17 | オブジェクト制御方法とオブジェクト制御装置、コンピュータ・プログラム、および電子装置 |
US17/320,051 US11938400B2 (en) | 2019-02-21 | 2021-05-13 | Object control method and apparatus, storage medium, and electronic apparatus |
US18/444,415 US20240189711A1 (en) | 2019-02-21 | 2024-02-16 | Drift control assistance in virtual environment |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910130187.XA CN109806590B (zh) | 2019-02-21 | 2019-02-21 | 对象控制方法和装置、存储介质及电子装置 |
CN201910130187.X | 2019-02-21 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/320,051 Continuation US11938400B2 (en) | 2019-02-21 | 2021-05-13 | Object control method and apparatus, storage medium, and electronic apparatus |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020168877A1 true WO2020168877A1 (zh) | 2020-08-27 |
Family
ID=66607100
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/072635 WO2020168877A1 (zh) | 2019-02-21 | 2020-01-17 | 对象控制方法和装置、存储介质及电子装置 |
Country Status (6)
Country | Link |
---|---|
US (2) | US11938400B2 (zh) |
JP (1) | JP7238136B2 (zh) |
KR (1) | KR102549758B1 (zh) |
CN (1) | CN109806590B (zh) |
SG (1) | SG11202103686VA (zh) |
WO (1) | WO2020168877A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113476823A (zh) * | 2021-07-13 | 2021-10-08 | 网易(杭州)网络有限公司 | 虚拟对象控制方法、装置、存储介质及电子设备 |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109806590B (zh) * | 2019-02-21 | 2020-10-09 | 腾讯科技(深圳)有限公司 | 对象控制方法和装置、存储介质及电子装置 |
CN110207716B (zh) * | 2019-04-26 | 2021-08-17 | 纵目科技(上海)股份有限公司 | 一种参考行驶线快速生成方法、系统、终端和存储介质 |
CN110201387B (zh) | 2019-05-17 | 2021-06-25 | 腾讯科技(深圳)有限公司 | 对象控制方法和装置、存储介质及电子装置 |
CN111388991B (zh) * | 2020-03-12 | 2023-12-01 | 安徽艺像网络科技有限公司 | 一种基于多点触控的游戏交互方法 |
CN116481781B (zh) * | 2022-12-01 | 2024-08-23 | 广州星际悦动股份有限公司 | 按键测试方法及系统、按键测试设备及存储介质 |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170228102A1 (en) * | 2014-09-28 | 2017-08-10 | Zte Corporation | Method and device for operating a touch screen |
CN108985367A (zh) * | 2018-07-06 | 2018-12-11 | 中国科学院计算技术研究所 | 计算引擎选择方法和基于该方法的多计算引擎平台 |
CN109107152A (zh) * | 2018-07-26 | 2019-01-01 | 网易(杭州)网络有限公司 | 控制虚拟对象漂移的方法和设备 |
CN109806590A (zh) * | 2019-02-21 | 2019-05-28 | 腾讯科技(深圳)有限公司 | 对象控制方法和装置、存储介质及电子装置 |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3165768B2 (ja) * | 1993-10-21 | 2001-05-14 | 株式会社ナムコ | ビデオゲーム装置 |
JP2016120131A (ja) * | 2014-12-25 | 2016-07-07 | 株式会社バンダイナムコエンターテインメント | ゲームシステム及びサーバ |
US9687741B1 (en) * | 2015-03-10 | 2017-06-27 | Kabam, Inc. | System and method for providing separate drift and steering controls |
JP6869692B2 (ja) * | 2016-10-19 | 2021-05-12 | 任天堂株式会社 | ゲームプログラム、ゲーム処理方法、ゲームシステム、およびゲーム装置 |
WO2018216080A1 (ja) * | 2017-05-22 | 2018-11-29 | 任天堂株式会社 | ゲームプログラム、情報処理装置、情報処理システム、および、ゲーム処理方法 |
EP3441120A4 (en) * | 2017-05-22 | 2020-01-22 | Nintendo Co., Ltd. | GAME PROGRAM, INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING SYSTEM, AND GAME PROCESSING METHOD |
CN109491579B (zh) * | 2017-09-12 | 2021-08-17 | 腾讯科技(深圳)有限公司 | 对虚拟对象进行操控的方法和装置 |
CN108939546B (zh) * | 2018-05-21 | 2021-09-03 | 网易(杭州)网络有限公司 | 虚拟对象的漂移控制方法及装置、电子设备、存储介质 |
CN109513210B (zh) * | 2018-11-28 | 2021-02-12 | 腾讯科技(深圳)有限公司 | 虚拟世界中的虚拟车辆漂移方法、装置及存储介质 |
CN109806586B (zh) * | 2019-02-28 | 2022-02-22 | 腾讯科技(深圳)有限公司 | 游戏辅助功能的开启方法、装置、设备及可读存储介质 |
-
2019
- 2019-02-21 CN CN201910130187.XA patent/CN109806590B/zh active Active
-
2020
- 2020-01-17 JP JP2021536060A patent/JP7238136B2/ja active Active
- 2020-01-17 WO PCT/CN2020/072635 patent/WO2020168877A1/zh active Application Filing
- 2020-01-17 SG SG11202103686VA patent/SG11202103686VA/en unknown
- 2020-01-17 KR KR1020217013308A patent/KR102549758B1/ko active IP Right Grant
-
2021
- 2021-05-13 US US17/320,051 patent/US11938400B2/en active Active
-
2024
- 2024-02-16 US US18/444,415 patent/US20240189711A1/en active Pending
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170228102A1 (en) * | 2014-09-28 | 2017-08-10 | Zte Corporation | Method and device for operating a touch screen |
CN108985367A (zh) * | 2018-07-06 | 2018-12-11 | 中国科学院计算技术研究所 | 计算引擎选择方法和基于该方法的多计算引擎平台 |
CN109107152A (zh) * | 2018-07-26 | 2019-01-01 | 网易(杭州)网络有限公司 | 控制虚拟对象漂移的方法和设备 |
CN109806590A (zh) * | 2019-02-21 | 2019-05-28 | 腾讯科技(深圳)有限公司 | 对象控制方法和装置、存储介质及电子装置 |
Non-Patent Citations (2)
Title |
---|
GAME FRONTLINE INFORMATION ANALYSIS: "QQ Speed Mobile Games: speak from the data! The B car data you want is here!", BAIJIAHAO.BAIDU.COM, BAIDU, CN, 26 February 2018 (2018-02-26), CN, pages 1 - 10, XP055730719, Retrieved from the Internet <URL:https://baijiahao.baidu.com/s?id=1593329611890592319&wfr=spider&for=pc> [retrieved on 20200915] * |
SHARK GIRL MOBILE GAMES VIDEO: "QQ Speed, advanced drift teaching, new fast drift CWW jet", BILIBILI.COM, 17 April 2018 (2018-04-17), CN, pages 1 - 2, XP054980895, [retrieved on 20200915] * |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113476823A (zh) * | 2021-07-13 | 2021-10-08 | 网易(杭州)网络有限公司 | 虚拟对象控制方法、装置、存储介质及电子设备 |
CN113476823B (zh) * | 2021-07-13 | 2024-02-27 | 网易(杭州)网络有限公司 | 虚拟对象控制方法、装置、存储介质及电子设备 |
Also Published As
Publication number | Publication date |
---|---|
CN109806590A (zh) | 2019-05-28 |
JP7238136B2 (ja) | 2023-03-13 |
US20210260478A1 (en) | 2021-08-26 |
CN109806590B (zh) | 2020-10-09 |
KR102549758B1 (ko) | 2023-06-29 |
US20240189711A1 (en) | 2024-06-13 |
SG11202103686VA (en) | 2021-05-28 |
KR20210064373A (ko) | 2021-06-02 |
US11938400B2 (en) | 2024-03-26 |
JP2022520699A (ja) | 2022-04-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020168877A1 (zh) | 对象控制方法和装置、存储介质及电子装置 | |
JP7077463B2 (ja) | スマートデバイスの識別および制御 | |
WO2020199820A1 (zh) | 对象控制方法和装置、存储介质及电子装置 | |
US11526325B2 (en) | Projection, control, and management of user device applications using a connected resource | |
CN109952757B (zh) | 基于虚拟现实应用录制视频的方法、终端设备及存储介质 | |
US9400548B2 (en) | Gesture personalization and profile roaming | |
WO2020224361A1 (zh) | 动作执行方法和装置、存储介质及电子装置 | |
US9019201B2 (en) | Evolving universal gesture sets | |
WO2020238636A1 (zh) | 虚拟对象控制方法和装置、存储介质及电子装置 | |
WO2017133500A1 (zh) | 应用程序的处理方法和装置 | |
WO2020233395A1 (zh) | 对象控制方法和装置、存储介质及电子装置 | |
WO2022142626A1 (zh) | 虚拟场景的适配显示方法、装置、电子设备、存储介质及计算机程序产品 | |
CN111367488B (zh) | 语音设备及语音设备的交互方法、设备、存储介质 | |
US10678327B2 (en) | Split control focus during a sustained user interaction | |
WO2020216018A1 (zh) | 操作控制方法和装置、存储介质及设备 | |
US9952668B2 (en) | Method and apparatus for processing virtual world | |
US9437158B2 (en) | Electronic device for controlling multi-display and display control method thereof | |
US11314344B2 (en) | Haptic ecosystem | |
CN108536367B (zh) | 一种交互页面卡顿的处理方法、终端及可读存储介质 | |
US20170161011A1 (en) | Play control method and electronic client | |
KR102405307B1 (ko) | 전자 장치, 그 제어 방법 및 컴퓨터 판독가능 기록 매체 | |
CN113413590B (zh) | 一种信息验证方法、装置、计算机设备及存储介质 | |
US20160078635A1 (en) | Avatar motion modification | |
US9075880B2 (en) | Method of associating multiple applications | |
CN114281185B (zh) | 基于嵌入式平台的体态识别及体感交互系统和方法 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20758881 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 20217013308 Country of ref document: KR Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 2021536060 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20758881 Country of ref document: EP Kind code of ref document: A1 |