CN109529340B - Virtual object control method and device, electronic equipment and storage medium - Google Patents

Virtual object control method and device, electronic equipment and storage medium

Info

Publication number
CN109529340B
CN109529340B
Authority
CN
China
Prior art keywords: body part, information, area, matching degree, user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811393396.5A
Other languages
Chinese (zh)
Other versions
CN109529340A (en)
Inventor
仇蒙
潘佳绮
崔维健
张书婷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201811393396.5A
Publication of CN109529340A
Application granted
Publication of CN109529340B
Legal status: Active

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/55 Controlling game characters or game objects based on the game progress
    • A63F 13/40 Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
    • A63F 13/42 Processing input control signals of video game devices by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
    • A63F 13/428 Processing input control signals of video game devices involving motion or position input signals, e.g. signals representing the rotation of an input controller or a player's arm motions sensed by accelerometers or gyroscopes
    • A63F 13/70 Game security or game management aspects
    • A63F 13/79 Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
    • A63F 13/795 Game security or game management aspects involving player-related data for finding other players; for building a team; for providing a buddy list
    • A63F 13/80 Special adaptations for executing a specific game genre or game mode
    • A63F 13/822 Strategy games; Role-playing games
    • A63F 13/837 Shooting of targets
    • A63F 13/843 Special adaptations for executing a specific game genre or game mode involving concurrently two or more players on the same game device, e.g. requiring the use of a plurality of controllers or of a specific view of game data for each player

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Business, Economics & Management (AREA)
  • Computer Security & Cryptography (AREA)
  • General Business, Economics & Management (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a virtual object control method and apparatus, an electronic device, and a storage medium, belonging to the field of computer technology. The method comprises the following steps: displaying a user interface in which a virtual object is displayed; acquiring area information of the trigger area of an interactive operation according to the interactive operation on the user interface; acquiring body part indication information according to the area information of the trigger area, the body part indication information indicating the body part with which the user performs the interactive operation; acquiring an action control instruction corresponding to the body part indication information; and responding to the action control instruction by controlling the virtual object to execute the corresponding action. With the invention, the user only needs to touch the interface with a body part and no longer performs multiple click operations, so the virtual object is controlled efficiently; the operation mode is also richer than clicking, which strengthens the user's sense of immersion and makes the virtual object control process more engaging.

Description

Virtual object control method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a method and an apparatus for controlling a virtual object, an electronic device, and a storage medium.
Background
With the development of computer technology and the diversification of terminal functions, the variety of electronic games that can be played on terminals keeps growing. A shooting game is one such electronic game. In these games, the terminal can control a virtual object to perform different actions based on user operations.
At present, in a typical virtual object control method, the user performs a multi-step click operation to select one action from several provided actions, after which the terminal controls the virtual object to execute the selected action. For example, as shown in fig. 1, the terminal displays an action selection button in the interface; when the user clicks it and the terminal detects the click, the terminal displays the action control button panel shown in fig. 2. The user then clicks one of the action control buttons, as shown in fig. 3, and when the terminal detects that click it controls the virtual object to execute the corresponding action.
With this method, the user must perform multiple click operations, so the operation process is cumbersome and time-consuming and the control efficiency of the virtual object is low; moreover, since clicking is the only operation mode, the interaction is monotonous, lacks a sense of immersion, and offers little enjoyment.
Disclosure of Invention
The embodiments of the present invention provide a virtual object control method and apparatus, an electronic device, and a storage medium, which solve the problems in the related art that the user operation process is cumbersome, the control efficiency of the virtual object is low, the operation mode is monotonous, and the sense of immersion and enjoyment are poor. The technical scheme is as follows:
in one aspect, a virtual object control method is provided, and the method includes:
displaying a user interface in which a virtual object is displayed;
acquiring the area information of a trigger area of the interactive operation according to the interactive operation on the user interface;
acquiring body part indication information according to the area information of the trigger area, wherein the body part indication information is used for indicating the body part used by the user to perform the interactive operation;
acquiring action control instructions corresponding to the body part indication information;
and responding to the action control instruction, and controlling the virtual object to execute the action corresponding to the action control instruction.
In one aspect, a virtual object control apparatus is provided, the apparatus including:
the display module is used for displaying a user interface, and a virtual object is displayed in the user interface;
the acquisition module is used for acquiring the area information of the trigger area of the interactive operation according to the interactive operation on the user interface;
the acquisition module is further configured to acquire body part indication information according to the area information of the trigger area, where the body part indication information is used to indicate a body part of the user for performing the interactive operation;
the acquisition module is further used for acquiring action control instructions corresponding to the body part indication information;
and the control module is used for responding to the action control instruction and controlling the virtual object to execute the action corresponding to the action control instruction.
In one aspect, an electronic device is provided, including a processor and a memory, where at least one instruction is stored in the memory, and the instruction is loaded and executed by the processor to implement the operations performed by the virtual object control method.
In one aspect, a computer-readable storage medium is provided, in which at least one instruction is stored, and the instruction is loaded and executed by a processor to implement the operations performed by the virtual object control method.
In the embodiments of the present invention, the body part with which the user performs the interactive operation is determined from the area information of the trigger area of that operation, so that the action control instruction corresponding to the determined body part can be acquired and responded to, realizing action control of the virtual object. The user only needs to touch the interface with a body part and no longer performs multiple click operations, so the control efficiency of the virtual object is high; the operation mode is also richer than clicking, which strengthens the user's sense of immersion and makes the virtual object control process more engaging.
Drawings
In order to describe the technical solutions in the embodiments of the present invention more clearly, the drawings required by the description of the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic view of a terminal interface provided with an action selection button according to the background art of the present invention;
fig. 2 is a schematic diagram of a terminal interface provided with a motion control button according to the background art of the present invention;
FIG. 3 is a schematic diagram of a terminal interface for controlling a virtual object to execute an action according to the background art of the present invention;
fig. 4 is a schematic diagram of a terminal interface before a virtual object enters a virtual scene according to an embodiment of the present invention;
fig. 5 is a schematic diagram of a terminal interface where a virtual object is located in a virtual scene according to an embodiment of the present invention;
fig. 6 is a flowchart of a virtual object control method according to an embodiment of the present invention;
FIG. 7 is a diagram illustrating a user operation manner according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of a trigger region provided by an embodiment of the present invention;
FIG. 9 is a schematic diagram of a terminal interface for controlling a virtual object to perform an action according to an embodiment of the present invention;
FIG. 10 is a diagram illustrating a user operation manner according to an embodiment of the present invention;
FIG. 11 is a schematic diagram of a trigger region provided by an embodiment of the present invention;
FIG. 12 is a schematic diagram of a terminal interface for controlling a virtual object to perform an action according to an embodiment of the present invention;
FIG. 13 is a diagram illustrating a user operation manner according to an embodiment of the present invention;
FIG. 14 is a schematic diagram of a trigger region provided by an embodiment of the present invention;
FIG. 15 is a schematic diagram of a terminal interface for controlling a virtual object to perform an action according to an embodiment of the present invention;
FIG. 16 is a flowchart of a method for controlling a virtual object according to an embodiment of the present invention;
fig. 17 is a schematic structural diagram of a terminal according to an embodiment of the present invention;
FIG. 18 is a flowchart of a method for controlling a virtual object according to an embodiment of the present invention;
fig. 19 is a schematic structural diagram of a virtual object control apparatus according to an embodiment of the present invention;
fig. 20 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
The embodiments of the present invention mainly relate to an electronic game scene or a simulated training scene. Taking the electronic game scene as an example, the user may operate the terminal in advance; after detecting the operation, the terminal downloads a game configuration file of the electronic game, which may include the application program of the game, interface display data, virtual scene data, and the like, so that the file can be called to render and display the game interface when the user logs in to the electronic game on the terminal. The user may then perform a touch operation on the terminal; after detecting it, the terminal determines the game data corresponding to the touch operation and renders and displays that data, where the game data may include virtual scene data, behavior data of virtual objects in the virtual scene, and the like.
The virtual scene in the present invention may simulate a three-dimensional virtual space or a two-dimensional virtual space, either of which may be an open space. The virtual scene may simulate a real environment; for example, it may include sky, land, and sea, and the land may include environmental elements such as deserts and cities. The user can control a virtual object to move in the virtual scene. The virtual object may be a virtual avatar representing the user, in any form such as a human or an animal, which the invention does not limit.
When the virtual object is in the virtual scene, the scene may further include other virtual objects; that is, the virtual scene may include a plurality of virtual objects, each having its own shape and volume and occupying part of the space of the virtual scene. In a possible implementation manner, the terminal may also display the virtual object in the game interface before controlling it to enter the virtual scene; of course, if the user performs a team formation operation, the terminal may also form a team in the interface from the virtual object and the other virtual objects belonging to the same team. For example, virtual objects belonging to the same team may be placed in the same virtual room.
Taking a shooting game as an example, the user may control the virtual object to free-fall, glide, or open a parachute to descend through the sky of the virtual scene; to run, jump, crawl, or move while bending forward on land; or to swim, float, or dive in the sea. The user can also control the virtual object to fight other virtual objects with weapons, which may be cold weapons or hot weapons; the embodiments of the present invention are not specifically limited in this respect.
In the embodiments of the present invention, the user may further control the virtual object to perform different actions in the virtual scene, such as waving a hand, waving a palm, blowing a kiss, nodding, or shaking the head; for example, as shown in fig. 3, the terminal may control the virtual object to perform a kissing action in the virtual scene. Of course, the user may also control the virtual object to dance in the virtual scene, and the dance may be of multiple types not listed here; the actions of the virtual object may also include others, such as laughing, which the embodiments of the present invention do not limit. Having the virtual object perform different actions in the virtual scene enriches the actions the user can trigger, makes the virtual object more lifelike, and increases interaction among users, which effectively improves the enjoyment of the electronic game and the user experience. In one possible implementation manner, the user can also control the virtual object to perform the above actions before it enters the virtual scene, to increase enjoyment and interaction with other users at that stage. For example, as shown in fig. 4, the user may control the virtual object to perform an action before it enters the virtual scene, and as shown in fig. 5, the user may do so while the virtual object is in a team state with other virtual objects before entering the virtual scene.
Fig. 6 is a flowchart of a virtual object control method according to an embodiment of the present invention. The method may be applied to an electronic device, which may be a terminal or a server; the following description takes a terminal as an example. Referring to fig. 6, the method may include the following steps:
600. The terminal displays a user interface in which the virtual object is displayed.
This step 600 may be performed when a game application is opened, or when the terminal detects that the user is in an electronic game. That is, step 600 covers two situations. In one, when a game application is started, the terminal may display a user interface of that application in which a virtual object is displayed; the user interface may also include the user's information and other electronic game function buttons, for example the user's rating, nickname, or buddy list, or a start button for an electronic game. In the other, when the user is in an electronic game, the terminal may display a user interface containing the virtual object, and the interface may of course also include information about the virtual object, the virtual scene where it is located, information about the electronic game, and the like. These two situations correspond to the virtual object before entering the virtual scene and after entering the virtual scene; the embodiments of the present invention neither describe them in more detail here nor limit the display content of the user interface.
601. The terminal acquires the area information of the trigger area of the interactive operation according to the interactive operation on the user interface.
The terminal can provide an action control function: when the user wants to control the virtual object to execute an action, the user can perform an interactive operation on the user interface displayed by the terminal, and when the terminal detects the interactive operation it controls the virtual object accordingly. The terminal may further have a biometric recognition capability, and the user may perform the interactive operation on the user interface with a body part; for example, if the screen of the terminal is a capacitive touch screen, the user may press a palm against the screen, hold an ear to the screen, or tap the screen with a knuckle. After detecting the interactive operation, the terminal can identify with which body part the user performed it.
In the embodiment of the invention, in the action control function provided by the terminal, when a user uses different body parts to perform interactive operation on the user interface of the terminal, the terminal can control the virtual object to execute different actions according to different body parts used by the user. That is, when the interactive operations performed by the user are different, the terminal may process the interactive operations in different manners.
In one possible implementation manner, the terminal may provide an action control start button. When the user wants to control the virtual object to perform different actions with different body parts, the user performs a touch operation on this button to start the action control function. When the terminal detects the touch operation on the button, it starts the action control function; that is, once an interactive operation is detected, the terminal can execute the steps provided by the embodiments of the present invention to realize action control of the virtual object through interactive operations with different body parts.
That is, there may be a correspondence between body parts and actions of the virtual object, with different body parts corresponding to different actions. When the user wants to control the virtual object to execute a certain action, the user performs an interactive operation on the terminal with a certain body part; when the terminal detects the operation, it is triggered to execute step 601 and then, based on the area information of the trigger area of the interactive operation, to execute the subsequent steps that identify the body part used, thereby determining which action the interactive operation is meant to trigger.
The trigger area may be an area where a contact point of the body part and the user interface is located when the user performs the interactive operation, or another area obtained based on the area where the contact point is located. Specifically, this step 601 may include two possible scenarios:
in the first case, the terminal acquires the area information of the area where the contact point of the interactive operation is located on the user interface.
In the first case, when the user interacts directly with the terminal, the area where the body part contacts the user interface is used as the trigger area, and the terminal obtains the area information of that trigger area. In one possible implementation, the area information may include the area and the shape of the trigger region. Because different body parts differ in shape and size, the contact regions they produce on the user interface also differ in area and shape, so the terminal can identify the body part used for the interactive operation from the area and shape of the trigger region. Of course, the region information may also include other information, for example texture information, which the embodiments of the present invention do not limit.
In the second situation, the terminal acquires the area where the contact point of the interactive operation is located on the user interface, the terminal acquires the trigger area of the interactive operation based on the area where the contact point is located, and the terminal acquires the area information of the trigger area.
In the second case, the terminal first acquires the area where the body part contacts the user interface during the interactive operation. Since the user may not perform the interactive operation in a standard way every time, the terminal can process the area actually contacted by the user's body part to obtain the required trigger area, and then acquire the area information of that trigger area for the subsequent identification step.
Specifically, the process of acquiring the trigger area of the interactive operation based on the area where the contact point is located may include at least one of the following steps (an illustrative code sketch of both steps is given after step two):
step one, the terminal carries out smoothing processing on the area where the contact point is located to obtain a trigger area of interactive operation.
In step one, it should be understood that when the user touches the user interface with a body part, some places of the body part are in contact with the interface while others are not, so the area or shape of the contact region obtained by the terminal may deviate from the area and shape actually covered by the body part. The terminal therefore smooths the region where the contact point is located, filling the holes left where the body part did not touch the interface, so that the resulting trigger area better matches the real body part and the accuracy of body part identification is improved.
And step two, the terminal zooms the area where the contact point is located according to the target zooming proportion to obtain the triggering area of the interactive operation.
In step two, the sizes of body parts differ between users; for example, a child's palm and an adult's palm differ in size. The terminal may therefore scale the region where the contact point is located after acquiring it, and perform the body part identification step on the trigger region obtained after scaling. In a possible implementation manner, the target scaling ratio may depend on the area information of the contact region: for example, if the area is greater than an area threshold, a first scaling ratio may be used to reduce the region to a smaller trigger area; if the area is smaller than the area threshold, a second scaling ratio may be used to enlarge it. Of course, the target scaling ratio may also be determined in other ways, for example based on the body part information to be matched subsequently, which the embodiments of the present invention do not limit.
It should be noted that the scaling is proportional, so the original shape of the region where the contact point is located is preserved. The target scaling ratio may be preset by a relevant technician, and its value is not specifically limited by the embodiments of the present invention.
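To make the two preprocessing steps concrete, the following is a minimal sketch in Python, not part of the patent disclosure. It assumes the raw contact is captured as a boolean grid of touched screen cells; the grid model, the area threshold, and the two scaling ratios are illustrative assumptions.

```python
# Sketch of trigger-area preprocessing: step one fills holes left where
# the body part did not fully touch the screen, step two scales the
# region proportionally. All values here are illustrative assumptions.

def fill_holes(mask):
    """Step one: flood-fill from the border; any untouched cell that is
    unreachable from outside the contact is a hole and gets filled."""
    h, w = len(mask), len(mask[0])
    outside = [[False] * w for _ in range(h)]
    stack = [(r, c) for r in range(h) for c in range(w)
             if (r in (0, h - 1) or c in (0, w - 1)) and not mask[r][c]]
    while stack:
        r, c = stack.pop()
        if 0 <= r < h and 0 <= c < w and not mask[r][c] and not outside[r][c]:
            outside[r][c] = True
            stack.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    # a cell that is neither touched nor reachable from outside is a hole
    return [[mask[r][c] or not outside[r][c] for c in range(w)] for r in range(h)]

def rescale(mask, ratio):
    """Step two: nearest-neighbour scaling with one ratio on both axes,
    so the shape of the region is preserved."""
    h, w = len(mask), len(mask[0])
    nh, nw = max(1, round(h * ratio)), max(1, round(w * ratio))
    return [[mask[min(h - 1, int(r / ratio))][min(w - 1, int(c / ratio))]
             for c in range(nw)] for r in range(nh)]

def preprocess(mask, area_threshold=400, shrink=0.8, grow=1.25):
    """Fill holes, then shrink large contacts and enlarge small ones."""
    filled = fill_holes(mask)
    area = sum(cell for row in filled for cell in row)
    return rescale(filled, shrink if area > area_threshold else grow)
```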
602. The terminal matches the region information of the trigger region against at least one piece of body part information to obtain at least one matching degree.
In one possible implementation, at least one body part information may be stored in the terminal in advance, and each body part information may include target area information of a corresponding body part. In one possible implementation, each body part information may include a target area and a target shape of the corresponding body part.
The target area and target shape serve as the reference for the area and shape of the contact region between the body part and the user interface; for example, the target area may be an area range. For each piece of body part information, the terminal may check whether the area of the trigger region falls within the area range in the body part information and whether the shape of the trigger region matches the shape in the body part information. If both conditions hold, the trigger region matches the body part information; if either fails, the match fails. Of course, the body part information may also include other information, which the embodiments of the present invention do not limit.
In a possible implementation manner, in step 602 the terminal matches the area information of the trigger area against the at least one piece of body part information to obtain at least one matching degree, each matching degree being that between the area information of the trigger area and one piece of body part information. From the matching degree between the trigger area and each piece of body part information, the terminal can determine with which body part the user performed the interactive operation; that is, based on the at least one matching degree, the terminal obtains the body part indication information indicating the body part used for the interactive operation.
In a specific embodiment, instead of matching directly on the area information of the trigger region, the terminal may generate an image including the trigger region and match that image against at least one stored body part image; accordingly, the at least one piece of body part information may be at least one image. That is, the terminal generates an image including the trigger region, matches it against the image of at least one body part, and thereby acquires the body part indication information corresponding to the matched image. The embodiments of the present invention provide only examples and do not limit which implementation the terminal adopts in the matching process.
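As one way to picture the matching of step 602, here is a rough sketch that assumes each stored piece of body part information holds a target area range and a reference mask on the same grid as the preprocessed trigger region. The IoU-style shape score is an illustrative choice; the patent does not fix a particular similarity metric.

```python
# Sketch of step 602: one matching degree per stored body part,
# combining an area-range check with a mask-overlap shape score.
# The metric and the template layout are assumptions for illustration.

def match_degree(region_mask, template):
    """Return a 0..1 matching degree between the trigger region and one
    body-part template of the form {"area_range": (lo, hi), "mask": grid}."""
    area = sum(cell for row in region_mask for cell in row)
    lo, hi = template["area_range"]
    if not lo <= area <= hi:
        return 0.0  # area outside the target range: no match
    tmpl = template["mask"]  # assumed to share the trigger region's grid size
    inter = union = 0
    for r in range(len(tmpl)):
        for c in range(len(tmpl[0])):
            a, b = region_mask[r][c], tmpl[r][c]
            inter += a and b
            union += a or b
    return inter / union if union else 0.0

def match_all(region_mask, templates):
    """Step 602 proper: the matching degree for every stored body part."""
    return {name: match_degree(region_mask, t) for name, t in templates.items()}
```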
603. The terminal acquires the body part indication information corresponding to the body part information whose matching degree meets the target condition.
In step 602, the terminal matches the region information of the trigger region with at least one piece of body part information, and if matched body part information exists, the terminal may perform step 603. It can be understood that the user performed the interactive operation with the body part corresponding to the matched body part information, which is why that interactive operation was detected in the trigger area on the user interface; that is, the matching process establishes that the user performed the interactive operation with the body part indicated by the body part indication information.
In step 603, after the terminal obtains at least one matching degree, it may determine whether each matching degree meets the target condition. The target condition may be preset by a relevant technician, which is not limited in the embodiment of the present invention, and the following two possible cases of the target condition are used to describe the step executed by the terminal in step 603:
in the first case, the terminal acquires body part indication information corresponding to body part information with the maximum matching degree in the at least one matching degree.
In the first case, the terminal performs matching to obtain at least one matching degree, where each matching degree represents the probability that the interactive operation producing the trigger region was performed with the body part indicated by the corresponding body part indication information; the terminal may select the body part indication information with the highest probability.
In the second case, when only one matching degree is greater than the threshold value of the matching degree, the terminal acquires body part indication information corresponding to the body part information of which the matching degree is greater than the threshold value of the matching degree; or when the at least one matching degree comprises a plurality of matching degrees which are greater than the threshold value of the matching degree, the terminal acquires body part indicating information corresponding to body part information with the maximum matching degree in the plurality of matching degrees which are greater than the threshold value of the matching degree.
In the second case, a matching degree threshold may be set in the terminal. For the trigger region and a given piece of body part information, if the matching degree is less than or equal to the threshold, it is unlikely that the user performed the interactive operation with the corresponding body part. The threshold may be preset by a relevant technician, and its specific value is not limited by the embodiments of the present invention; for example, it may be 90%, so that the trigger region need not match the body part information exactly for the body part indication information to be determined. Because the trigger area obtained from each operation may differ slightly, this fault tolerance prevents an over-strict match from failing to recognize the user's operation.
Among the at least one matching degree corresponding to the at least one piece of body part information, either only one matching degree or several may exceed the threshold, so the second case includes two scenarios:
in the first scenario, only one of the at least one matching degree is greater than the threshold matching degree.
For body part information whose matching degree is greater than the threshold, it is likely that the user performed the interactive operation with the body part indicated by the corresponding body part indication information, while the body parts indicated by the other body part information are unlikely. The terminal can therefore directly acquire the body part indication information corresponding to the body part information whose matching degree exceeds the threshold.
In a second scenario, the at least one matching degree includes a plurality of matching degrees greater than a threshold matching degree.
If several matching degrees exceed the threshold, the terminal selects the body part indication information corresponding to the highest of them. In this way the threshold gives the identification of user operations a certain fault tolerance while still ensuring the accuracy of the result.
For example, with a matching threshold of 80%, suppose the terminal obtains three matching degrees: 85%, 50%, and 90%. Since 50% is below the threshold, the corresponding body part is disregarded; between 85% and 90%, the terminal takes the body part corresponding to 90% as the body part indication information. As another example, with the same threshold, suppose the terminal obtains matching degrees of 50%, 40%, and 30%: all are below the threshold, so none of the corresponding body parts is likely to be the one used, and the matching is considered to have failed.
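The target condition of the second case can be sketched as follows; the 0.80 threshold mirrors the 80% example above, and the function name is an assumption.

```python
# Sketch of the selection in step 603, second case: keep only matching
# degrees above the threshold and take the maximum; if none passes,
# matching fails.

MATCH_THRESHOLD = 0.80  # example value from the text above

def select_body_part(degrees):
    """Return the best-matching body part name, or None on failure."""
    candidates = {k: v for k, v in degrees.items() if v > MATCH_THRESHOLD}
    if not candidates:
        return None  # matching failed: ignore or treat as a normal touch
    return max(candidates, key=candidates.get)

# e.g. select_body_part({"palm": 0.85, "ear": 0.50, "fist": 0.90}) -> "fist"
# e.g. select_body_part({"palm": 0.50, "ear": 0.40, "fist": 0.30}) -> None
```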
The above step 602 and step 603 are processes of acquiring body part indication information according to the area information of the trigger area, where the body part indication information is used for indicating a body part of the user for performing the interactive operation. Through the matching process, the terminal can obtain the body part indication information.
In the second case there is also a matching failure scenario: when every one of the at least one matching degree is less than or equal to the threshold, the terminal either ignores the interactive operation or executes the instruction corresponding to the position of the trigger area in the user interface. In this scenario the matching result indicates that the body part used for the interactive operation is not any preset body part, or that the user used a body part in an unexpected way, in which case the terminal may simply ignore the operation. Alternatively, the interactive operation may not be an action-control operation at all but a normal one: for example, a touch on the virtual joystick region to move the virtual object in the virtual scene, a view angle rotation to change the viewing angle of the virtual scene, or a click on a preset button to open or close an interface or enable or disable a function. The terminal then carries out the function of that normal interactive operation, executing the corresponding instruction based on the location of the trigger region in the user interface.
It should be noted that, owing to the matching degree threshold, when the matching degrees between the trigger area and every piece of body part information are small, the matching is considered failed and the body part indication information with the largest matching degree is not used; this avoids the inaccurate identification that would result from accepting body part information with a low matching degree.
In a possible implementation manner, the at least one piece of body part information may also be entered by the user on the terminal in advance. Then, when another user performs the interactive operation and the terminal performs matching, the action control operation is rejected because that user's body part information does not match, thereby providing an action control function customized to the user.
Specifically, the terminal may display an input prompt in the user interface; on seeing it, the user may enter his or her body part information, so that the terminal acquires at least one piece of body part information entered by the user and stores it in association with that user's identifier. Later, when an interactive operation is detected on the user interface, the terminal may look up the at least one piece of body part information corresponding to the identifier of the currently logged-in user, so that in step 602 the terminal matches the area information of the trigger area against that user's body part information. Two cases then arise depending on the matching result:
in case of successful matching, the terminal may obtain body part indication information corresponding to the matched body part information. In the case of a failure in matching, when the matching result indicates that the area information of the trigger area and the at least one body part information corresponding to the user identifier both fail to match, the terminal may display a user mismatch prompt message. That is, the user performing the interactive operation is not the currently logged-in user, the body part information is not matched, and the user sees the user mismatch prompt information, so that it can be known that the user does not have the right to perform the interactive operation on the user interface of the currently logged-in user to realize the action control function.
604. The terminal acquires the action control instruction corresponding to the body part indication information.
After acquiring the body part indication information, the terminal can acquire the corresponding action control instruction and then control the virtual object to execute the corresponding action based on it; in this way, by performing interactive operations with different body parts, the user controls the virtual object to execute different actions.
In the terminal, different body part indication information may correspond to different action control instructions. The correspondence between body part indication information and action control instructions may be preset by a relevant technician, or adjusted or set by the user according to his or her usage habits, which the embodiments of the present invention do not limit.
For user setting or adjustment, the user performs a setting operation on the terminal, and the terminal sets the correspondence when it receives the action setting instruction triggered by that operation. Specifically, on receiving an action setting instruction, the terminal sets the correspondence between body part indication information and action control instructions based on that instruction. In this case, step 604 is specifically: the terminal acquires the action control instruction corresponding to the body part indication information, based on the correspondence between body part indication information and action control instructions and on the body part indication information itself.
For example, a relevant technician presets the following: the palm corresponds to action control instruction A and the fist to action control instruction B. The user can adjust this to suit his or her own habits; after adjustment, the palm corresponds to action control instruction B and the fist to action control instruction A. Of course, the user may also reset the correspondence so that the palm corresponds to action control instruction C and the fist to action control instruction A.
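The adjustable correspondence can be pictured as a simple mapping, as sketched below; the action names are placeholders, not identifiers from the patent.

```python
# Sketch of the body-part-to-instruction correspondence and the action
# setting instruction that rebinds it, as in the palm/fist example.

action_bindings = {"palm": "ACTION_A", "fist": "ACTION_B"}  # preset defaults

def rebind(body_part, action):
    """Handle an action setting instruction: remap one body part."""
    action_bindings[body_part] = action

rebind("palm", "ACTION_B")  # after the user's adjustment...
rebind("fist", "ACTION_A")  # ...palm triggers B and fist triggers A
```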
In the case above, where the body part indication information is associated with an action control instruction, the terminal acquires that instruction. It is also possible that, after the body part indication information is determined, no action control instruction is associated with it; in that case the terminal may ignore the interactive operation, that is, the current interactive operation is invalid. In a possible implementation manner, an action control instruction may further have a state, which may be an executable state or an execution-prohibited state. After determining the body part indication information, the terminal may also determine the state of the corresponding action control instruction; if that state is execution-prohibited, the terminal does not respond to the instruction and ignores the interactive operation, so the operation is invalid this time.
That is, when the body part indication information is not associated with an action control instruction, or the state of the associated instruction is execution-prohibited, the terminal ignores the interactive operation. For example, some action control instructions may be executed directly while others must first be unlocked: if the palm corresponds to action control instruction A but instruction A is still locked, then when the terminal determines that the body part indication information is the palm, it does not acquire instruction A and simply ignores the interactive operation.
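Putting the two failure modes together, instruction lookup might look like the sketch below, reusing action_bindings from above; the state table is an assumption matching the locked-instruction example.

```python
# Sketch of step 604 with its failure modes: an unbound body part, or an
# instruction still in the execution-prohibited (e.g. locked) state.

instruction_state = {"ACTION_A": "prohibited", "ACTION_B": "executable"}

def resolve_instruction(body_part):
    """Return the executable action control instruction, or None when
    the interactive operation should be ignored."""
    action = action_bindings.get(body_part)
    if action is None:
        return None  # no associated instruction: operation is invalid
    if instruction_state.get(action) != "executable":
        return None  # execution-prohibited (locked): operation is ignored
    return action
```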
605. The terminal responds to the action control instruction and controls the virtual object to execute the action corresponding to the action control instruction.
After acquiring the action control instruction, the terminal can respond to it directly, without any further user operation, and control the virtual object to execute the corresponding action. The user therefore only needs to touch the terminal's user interface with different body parts, and the terminal automatically controls the virtual object based on the recognized body part indication information. Compared with the related art, no multi-step click operation is needed, the operation is simple, the operation modes are varied, and since the user can operate with the same body part the virtual object uses, the sense of immersion is strong and the process is engaging.
The virtual object control process described above is illustrated with three examples. In example one, as shown in fig. 7, the user may stick out the tongue and press it against the user interface. When the terminal detects the operation, it performs step 601 to obtain the area information of the trigger area, as shown in fig. 8; it then performs steps 602 and 603 to determine that the body part indication information is the tongue, and thus acquires and executes the action control instruction corresponding to the tongue, for example a blow-kiss action control instruction, so that, as shown in fig. 9, the terminal controls the virtual object to perform a blow-kiss action.
In one possible implementation, the terminal may recognize the body part through a body-print (Bodyprint) based biometric authentication system. With such a system the terminal identifies the body part used for the interactive operation, and since body part indication information is associated with action control instructions, the terminal automatically responds with the corresponding action control instruction after obtaining the indication information. The operation is simple and entertaining, and can effectively improve both the efficiency and the enjoyment of the virtual object control process.
In a specific possible embodiment, the same body part indication information may also correspond to multiple action control instructions, and the body part indication information may have different states, for example a single state or a combined state. For the palm, say, the single state may correspond to pressing the palm against the user interface, while the combined state may correspond to tapping the user interface several times with the palm, and each state of the palm corresponds to one action control instruction; see examples two and three below.
In example two, as shown in fig. 10, the user may press a palm on the user interface. In step 601 the terminal obtains the trigger area and its area information as shown in fig. 11, then determines that the body part indication information is the palm in its single state. Taking the instruction for the single palm state to be a hand-waving action control instruction, the terminal controls the virtual object to perform a hand-waving action, as shown in fig. 12.
In example three, as shown in fig. 13, the user may tap the user interface several times with the palm. In step 601 the terminal obtains the trigger region and its region information as shown in fig. 14, then determines that the body part indication information is the palm in its combined state. Taking the instruction for the combined palm state to be a clapping action control instruction, the terminal controls the virtual object to perform a clapping action, as shown in fig. 15.
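One plausible way to tell the single state from the combined state is by counting touches within a short window, as sketched below; the one-second window is an assumption, not a value from the patent.

```python
# Sketch distinguishing the single and combined states of one body part:
# a lone palm press reads as "single" (wave), several palm taps in quick
# succession read as "combined" (clap).

import time

_tap_log = []

def palm_state(now=None, window=1.0):
    """Record a palm touch and return 'single' or 'combined'."""
    now = time.monotonic() if now is None else now
    _tap_log.append(now)
    _tap_log[:] = [t for t in _tap_log if now - t <= window]
    return "combined" if len(_tap_log) > 1 else "single"
```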
Fig. 16 is a flowchart of a virtual object control method according to an embodiment of the present invention. Referring to fig. 16, when the terminal detects an interactive operation during virtual object control, it first determines, through steps 601 to 603 above, whether the user performed the interactive operation with a body part. If not, the flow ends: the terminal does not provide the action control function, and may ignore the interactive operation or perform the function corresponding to its position. If so, the terminal next judges whether the identified body part indication information is associated with an "expression", that is, an action control instruction of the virtual object. If it is, the terminal plays the corresponding expression action: it acquires and executes the action control instruction corresponding to the body part indication information and controls the virtual object to perform the corresponding action, which may be an action animation that the terminal plays. If the body part indication information is not associated with an expression, the terminal again does not provide the action control function, and may ignore the interactive operation or perform the function corresponding to its position. Through this control process, the user can trigger different expression actions simply by touching different body parts to the user interface, making control more efficient while greatly increasing the sense of human-computer interaction, the realism of the simulation, and the enjoyment of the game. Furthermore, the association between expression actions and body parts can be customized by the user, improving the personalization of expressions.
During the virtual object control process, the structural units of the terminal perform steps 601 to 605 above. Fig. 17 is a schematic structural diagram of a terminal according to an embodiment of the present invention. Referring to fig. 17, the user performs the interactive operation on the input unit of the terminal with a body part; the terminal detects the operation and recognizes the body part through the capacitive touch screen and the processor, where the recognition process may be implemented based on an image sensor, and after determining the action control instruction to be executed, the processor executes it, for example by playing an expression action.
In the embodiments of the present invention, the body part with which the user performs the interactive operation is determined from the area information of the trigger area of that operation, so that the action control instruction corresponding to the determined body part can be acquired and responded to, realizing action control of the virtual object. The user only needs to touch the interface with a body part and no longer performs multiple click operations, so the control efficiency of the virtual object is high; the operation mode is also richer than clicking, which strengthens the user's sense of immersion and makes the virtual object control process more engaging.
All the above-mentioned optional technical solutions can be combined arbitrarily to form the optional embodiments of the present invention, and are not described herein again.
Although the virtual object control method has been described in the embodiment shown in fig. 6, a specific flow of the virtual object control method is further described below by using a specific example in the embodiment shown in fig. 18, so as to facilitate understanding of the timing and logical association of each step executed by the terminal in the virtual object control method. Fig. 18 is a flowchart of a virtual object control method according to an embodiment of the present invention, and referring to fig. 18, the method may include the following steps:
1800. The terminal displays a user interface in which the virtual object is displayed.
1801. The terminal acquires the area information of the trigger area of the interactive operation according to the interactive operation on the user interface.
1802. The terminal matches the area information of the trigger area with at least one body part information; if the matching succeeds, step 1803 is executed, and if the matching fails, step 1808 is executed.
After the terminal acquires the area information of the trigger area, it may determine whether the interactive operation was performed by the user with a certain body part. If so, the terminal may continue with the subsequent steps; if not, the terminal does not need to provide the action control function, and may perform step 1808 to ignore the interactive operation.
1803. The terminal acquires body part indication information corresponding to the body part information whose matching degree meets the target condition.
Steps 1800 to 1803 are similar to steps 600 to 603 and are not described again here in this embodiment of the present invention.
1804. The terminal determines whether an action control instruction is associated with the body part indication information; if so, it executes step 1805, and if not, it executes step 1808.
After the terminal acquires the body part indication information, there are two cases: if the body part indication information is associated with an action control instruction, the terminal may provide the action control function; if it is not, the terminal may not provide the action control function, and step 1808 may be executed to ignore the interactive operation.
1805. The terminal acquires the state of the action control instruction; if the state is executable, it executes steps 1806 and 1807, and if the state is execution-prohibited, it executes step 1808.
The action control instruction may be in one of two states: an executable state, for example an unlocked state, or an execution-prohibited state, for example a locked state. When the states differ, the terminal executes different steps.
1806. The terminal acquires the action control instruction corresponding to the body part indication information.
1807. The terminal responds to the action control instruction and controls the virtual object to execute the action corresponding to the action control instruction.
Steps 1806 and 1807 are the same as steps 604 and 605 and are not described again here.
In other words, steps 1806 and 1807 are performed when the body part indication information is associated with an action control instruction and the state of that instruction is executable: the terminal acquires and responds to the action control instruction corresponding to the body part indication information.
1808. The terminal ignores the interactive operation.
Here, the terminal ignoring the interactive operation is taken only as an example. In a possible scenario, the interactive operation may be a normal touch operation performed by the user on an element of the user interface, and the terminal may instead execute a corresponding instruction based on the position of the interactive operation on the user interface; this is not limited in the embodiments of the present invention.
The above step 1804 together with step 1808, or step 1805 together with step 1808, constitutes the process of ignoring the interactive operation when the body part indication information is not associated with an action control instruction, or when the state of the action control instruction corresponding to the body part indication information is the execution prohibition state.
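The branch logic of steps 1800 to 1808, including the instruction-state check of step 1805, can be summarized in one sketch. The data layout below (the ActionInstruction class, the unlocked flag standing in for the executable versus execution-prohibited states, and the table contents) is an assumption made for illustration, not a structure from the disclosure.

```python
# Sketch of the step 1800-1808 branches; the data layout is assumed.
from dataclasses import dataclass

@dataclass
class ActionInstruction:
    animation: str
    unlocked: bool  # True: executable state; False: execution-prohibited (locked)

# Hypothetical association of body part indication info with instructions.
INSTRUCTIONS = {
    "palm": ActionInstruction("wave_animation", unlocked=True),
    "fist": ActionInstruction("punch_animation", unlocked=False),
}

def handle_operation(matched_part):
    """matched_part: the result of steps 1801-1803, or None if matching failed."""
    if matched_part is None:
        return "step 1808: ignored"                  # matching failed
    instruction = INSTRUCTIONS.get(matched_part)     # step 1804
    if instruction is None:
        return "step 1808: ignored"                  # no associated instruction
    if not instruction.unlocked:                     # step 1805
        return "step 1808: ignored"                  # execution prohibited
    return f"steps 1806-1807: play {instruction.animation}"

print(handle_operation("palm"))  # steps 1806-1807: play wave_animation
print(handle_operation("fist"))  # step 1808: ignored (instruction still locked)
```

Note that all three failure branches converge on step 1808, which matches the flow above: the terminal treats a failed match, a missing association, and a locked instruction the same way.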
According to the embodiment of the present invention, after an interactive operation is detected, it can be determined whether the interactive operation was performed by the user with a body part, whether the determined body part is associated with an action control instruction, and whether that action control instruction is in the executable state; if so, the action control instruction corresponding to the body part indication information is automatically acquired and responded to, thereby realizing action control of the virtual object.
Fig. 19 is a schematic structural diagram of a virtual object control apparatus according to an embodiment of the present invention, and referring to fig. 19, the apparatus includes:
a display module 1901, configured to display a user interface, where a virtual object is displayed in the user interface;
an obtaining module 1902, configured to obtain, according to an interactive operation on the user interface, area information of a trigger area of the interactive operation;
the obtaining module 1902 is further configured to obtain body part indication information according to the area information of the trigger area, where the body part indication information is used to indicate a body part used by the user to perform the interactive operation;
the obtaining module 1902 is further configured to obtain an action control instruction corresponding to the body part indication information;
a control module 1903, configured to respond to the action control instruction and control the virtual object to execute the action corresponding to the action control instruction.
In one possible implementation, the obtaining module 1902 is configured to obtain area information of an area where a contact point of the interactive operation is located on the user interface.
In one possible implementation, the obtaining module 1902 is configured to:
acquiring the area where the contact point of the interactive operation is located on the user interface;
acquiring a trigger area of the interactive operation based on the area where the contact point is located;
and acquiring the area information of the trigger area.
In one possible implementation, the obtaining module 1902 is configured to perform at least one of the following, as illustrated in the sketch after this list:
carrying out smoothing processing on the area where the contact point is located to obtain a trigger area of interactive operation;
and zooming the area where the contact point is located according to the target zooming scale to obtain the trigger area of the interactive operation.
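As an illustration of the two derivations just listed, the sketch below represents the contact region as a list of boundary points, smooths it with a circular moving average, and zooms it about its centroid at a target scale. The boundary-point representation, the moving-average smoother, and the function names are assumptions about how the processing might be realized, not the patent's concrete algorithm.

```python
# Sketches of the two trigger-area derivations; the boundary-point
# representation and the moving-average smoother are assumptions.

def smooth_region(points, window=1):
    """Smooth the contact-region boundary with a circular moving average."""
    n = len(points)
    smoothed = []
    for i in range(n):
        neighbors = [points[(i + k) % n] for k in range(-window, window + 1)]
        smoothed.append((sum(p[0] for p in neighbors) / len(neighbors),
                         sum(p[1] for p in neighbors) / len(neighbors)))
    return smoothed

def zoom_region(points, scale):
    """Zoom the contact region about its centroid by a target zoom scale."""
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    return [(cx + (x - cx) * scale, cy + (y - cy) * scale) for x, y in points]

# Example: enlarge a small, jittery contact outline by 20%, then smooth it.
outline = [(0, 0), (2, 0.3), (4, 0), (4.2, 2), (4, 4), (0, 4)]
trigger_area = smooth_region(zoom_region(outline, 1.2))
```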
In one possible implementation, the region information includes an area and a shape of the trigger region.
In one possible implementation, the obtaining module 1902 is configured to:
matching the area information of the trigger area with at least one body part information to obtain at least one matching degree;
and acquiring body part indication information corresponding to the body part information with the matching degree meeting the target condition.
In one possible implementation manner, the obtaining module 1902 is configured to obtain body part indication information corresponding to the body part information with the largest matching degree in the at least one matching degree.
In one possible implementation, the obtaining module 1902 is configured to:
when only one matching degree in the at least one matching degree is greater than a matching degree threshold value, acquiring body part indication information corresponding to the body part information of which the matching degree is greater than the matching degree threshold value; or,
when the at least one matching degree comprises a plurality of matching degrees which are greater than the matching degree threshold value, acquiring body part indication information corresponding to the body part information with the maximum matching degree in the plurality of matching degrees which are greater than the matching degree threshold value.
In a possible implementation manner, the control module 1903 is further configured to ignore the interactive operation when each of the at least one matching degree is less than or equal to the matching degree threshold value, or execute an instruction corresponding to the position based on the position of the trigger area in the user interface.
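A sketch of these selection rules follows. The matching-degree formula (equal weights on area similarity and shape equality) and the 0.8 threshold are illustrative assumptions; the patent only requires that the area and shape of the trigger area be compared against a target area and target shape, not this particular metric.

```python
# Sketch of matching-degree selection; the scoring formula and the threshold
# value are assumptions, not the patent's concrete metric.

def matching_degree(region_area, region_shape, target_area, target_shape):
    area_sim = min(region_area, target_area) / max(region_area, target_area)
    shape_sim = 1.0 if region_shape == target_shape else 0.0
    return 0.5 * area_sim + 0.5 * shape_sim

def select_body_part(region_area, region_shape, catalog, threshold=0.8):
    """catalog: {indication_info: (target_area, target_shape)}."""
    degrees = {part: matching_degree(region_area, region_shape, a, s)
               for part, (a, s) in catalog.items()}
    above = {part: d for part, d in degrees.items() if d > threshold}
    if not above:
        return None  # ignore, or fall back to a position-based instruction
    # Either the single qualifying degree, or the largest of several.
    return max(above, key=above.get)

catalog = {"palm": (5000.0, "oval"), "fist": (2500.0, "round")}
print(select_body_part(4800.0, "oval", catalog))  # -> "palm"
```

Returning None for the no-match case corresponds to the fallback just described: the control module either ignores the interactive operation or executes the instruction corresponding to the touch position.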
In a possible implementation manner, the control module 1903 is further configured to ignore the interactive operation when the body part indication information is not associated with an action control instruction, or the state of the action control instruction corresponding to the body part indication information is the execution prohibition state.
In one possible implementation manner, the obtaining module 1902 is further configured to obtain at least one body part information input by any user based on the input prompt information;
the device also includes:
the storage module is used for storing the at least one body part information and the user identification of any user in a correlation manner;
correspondingly, the obtaining module 1902 is further configured to match the area information of the trigger area with at least one body part information corresponding to the user identifier of the currently logged-in user;
the obtaining module 1902 is further configured to obtain body part indication information corresponding to the matched body part information; or, the display module 1901 is further configured to display a user mismatch prompt message when the matching result indicates that the area information of the trigger area fails to match all of the at least one body part information corresponding to the user identifier.
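The per-user storage and lookup could look like the following sketch. The in-memory dictionary, the inlined scoring (reusing the illustrative formula from the earlier sketch), and the prompt text are all assumptions made for illustration.

```python
# Sketch of per-user body part registration and matching; the in-memory
# store and the inlined scoring are assumptions for illustration.

user_profiles = {}  # user_id -> {indication_info: (target_area, target_shape)}

def register_body_parts(user_id, entries):
    """Store body part info entered at the input prompt, keyed by user id."""
    user_profiles.setdefault(user_id, {}).update(entries)

def match_for_user(user_id, region_area, region_shape, threshold=0.8):
    best, best_degree = None, threshold
    for part, (t_area, t_shape) in user_profiles.get(user_id, {}).items():
        degree = (0.5 * min(region_area, t_area) / max(region_area, t_area)
                  + (0.5 if region_shape == t_shape else 0.0))
        if degree > best_degree:
            best, best_degree = part, degree
    if best is None:
        print("prompt: touch does not match the logged-in user's profile")
    return best

register_body_parts("user_42", {"palm": (5000.0, "oval")})
match_for_user("user_42", 1200.0, "round")  # mismatch -> prompt displayed
```

Keying the profile store by user identifier is what lets the terminal detect that the person currently touching the screen is not the logged-in user and display the mismatch prompt.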
In one possible implementation, the apparatus further includes:
and the setting module is used for setting the corresponding relation between the body part indication information and the action control instruction based on the action setting instruction when the action setting instruction is received.
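The setting module's correspondence update can be sketched in a few lines; the binding table and function names here are hypothetical.

```python
# Sketch of the setting module: an action setting instruction rebinds a
# body part to an action control instruction. Names are hypothetical.

action_bindings = {"palm": "wave_animation"}  # assumed default binding

def on_action_setting(indication_info, action_instruction):
    """Update the body-part-to-action correspondence on user request."""
    action_bindings[indication_info] = action_instruction

on_action_setting("palm", "salute_animation")  # user customizes the palm action
```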
According to the apparatus provided by the embodiment of the present invention, the body part with which the user performs the interactive operation is determined based on the area information of the trigger area of the interactive operation, so that the action control instruction corresponding to the determined body part can be acquired and responded to, realizing action control of the virtual object. In the operation process, the user only needs to perform the interactive operation with a body part, without multiple click operations, so the control efficiency of the virtual object is high; moreover, this operation mode is richer than click operations, which can increase the user's sense of immersion and the interest of the virtual object control process.
It should be noted that: the virtual object control apparatus provided in the above embodiments is only illustrated by the division of the functional modules when controlling a virtual object, and in practical applications, the functions may be distributed by different functional modules according to needs, that is, the internal structure of the electronic device may be divided into different functional modules to complete all or part of the functions described above. In addition, the virtual object control apparatus provided in the above embodiments and the virtual object control method embodiments belong to the same concept, and specific implementation processes thereof are described in the method embodiments, and are not described herein again.
Fig. 20 is a schematic structural diagram of an electronic device 2000 according to an embodiment of the present invention. The electronic device 2000 may vary considerably in configuration or performance, and may include one or more processors (CPUs) 2001 and one or more memories 2002, where the memory 2002 stores at least one instruction that is loaded and executed by the processor 2001 to implement the virtual object control method provided by the foregoing method embodiments. Of course, the electronic device may further include components such as a wired or wireless network interface, a keyboard, and an input/output interface for input and output, and may further include other components for implementing the functions of the device, which are not described here again.
In an exemplary embodiment, there is also provided a computer readable storage medium, such as a memory, comprising instructions executable by a processor to perform the virtual object control method in the above embodiments. For example, the computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a Compact Disc Read-Only Memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, and the like.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, and the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only exemplary of the present invention and should not be taken as limiting the invention, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (22)

1. A virtual object control method, characterized in that the method comprises:
displaying a user interface in which a virtual object is displayed;
acquiring region information of a trigger region of the interactive operation according to the interactive operation on the user interface, wherein the region information comprises the area and the shape of the trigger region;
matching the region information of the trigger region with at least one body part information to obtain at least one matching degree, wherein the body part information comprises a target area and a target shape of a corresponding body part;
acquiring body part indication information corresponding to body part information with a matching degree meeting a target condition, wherein the body part indication information is used for indicating the body part used by the user to perform the interactive operation;
acquiring action control instructions corresponding to the body part indication information;
and responding to the action control instruction, and controlling the virtual object to execute the action corresponding to the action control instruction.
2. The method according to claim 1, wherein the obtaining, according to the interactive operation on the user interface, area information of a trigger area of the interactive operation includes:
and acquiring the area information of the area where the contact point of the interactive operation is located on the user interface.
3. The method according to claim 1, wherein the obtaining, according to the interactive operation on the user interface, area information of a trigger area of the interactive operation includes:
acquiring the area where the contact point of the interactive operation is located on the user interface;
acquiring a trigger area of the interactive operation based on the area where the contact point is located;
and acquiring the area information of the trigger area.
4. The method according to claim 3, wherein the obtaining of the trigger area of the interactive operation based on the area where the contact point is located comprises at least one of the following steps:
carrying out smoothing processing on the area where the contact point is located to obtain a trigger area of interactive operation;
and zooming the area where the contact point is located according to the target zooming scale to obtain the trigger area of the interactive operation.
5. The method according to claim 1, wherein the obtaining body part indication information corresponding to the body part information with the matching degree meeting the target condition comprises:
and acquiring body part indication information corresponding to the body part information with the maximum matching degree in the at least one matching degree.
6. The method according to claim 5, wherein the obtaining body part indication information corresponding to the body part information with the matching degree meeting the target condition comprises:
when only one matching degree in the at least one matching degree is greater than a matching degree threshold value, acquiring body part indication information corresponding to the body part information of which the matching degree is greater than the matching degree threshold value; or,
when the at least one matching degree comprises a plurality of matching degrees which are greater than the matching degree threshold value, acquiring body part indication information corresponding to body part information with the maximum matching degree in the plurality of matching degrees which are greater than the matching degree threshold value.
7. The method of claim 6, further comprising:
and when the at least one matching degree is less than or equal to the threshold value of the matching degree, ignoring the interaction operation, or executing an instruction corresponding to the position based on the position of the trigger area in the user interface.
8. The method according to claim 1, wherein after acquiring body part indication information corresponding to body part information with matching degree meeting target conditions, the method further comprises:
and when the body part indication information is not associated with an action control instruction or the state of the action control instruction corresponding to the body part indication information is an execution prohibition state, ignoring the interactive operation.
9. The method according to claim 1, wherein before the obtaining of the area information of the trigger area of the interactive operation according to the interactive operation on the user interface, the method further comprises:
acquiring at least one body part information input by any user based on the input prompt information;
storing the at least one body part information in association with a user identification of the any user;
correspondingly, the acquiring body part indication information according to the area information of the trigger area comprises:
matching the area information of the trigger area with at least one body part information corresponding to the user identification of the currently logged-in user;
acquiring body part indication information corresponding to the matched body part information; or, when the matching result indicates that the area information of the trigger area fails to match all of the at least one body part information corresponding to the user identification, displaying a user mismatch prompt message.
10. The method of claim 1, further comprising:
when receiving an action setting instruction, setting a corresponding relation between the body part indication information and the action control instruction based on the action setting instruction.
11. An apparatus for controlling a virtual object, the apparatus comprising:
the display module is used for displaying a user interface, and a virtual object is displayed in the user interface;
the acquisition module is used for acquiring the region information of a trigger region of the interactive operation according to the interactive operation on the user interface, wherein the region information comprises the area and the shape of the trigger region; matching the region information of the trigger region with at least one body part information to obtain at least one matching degree, wherein the body part information comprises a target area and a target shape of a corresponding body part; acquiring body part indication information corresponding to body part information with a matching degree meeting a target condition, wherein the body part indication information is used for indicating the body part used by the user to perform the interactive operation;
the acquisition module is further used for acquiring action control instructions corresponding to the body part indication information;
and the control module is used for responding to the action control instruction and controlling the virtual object to execute the action corresponding to the action control instruction.
12. The apparatus of claim 11, wherein the obtaining module is configured to:
and acquiring the area information of the area where the contact point of the interactive operation is located on the user interface.
13. The apparatus of claim 11, wherein the obtaining module is configured to:
acquiring the area where the contact point of the interactive operation is located on the user interface;
acquiring a trigger area of the interactive operation based on the area where the contact point is located;
and acquiring the area information of the trigger area.
14. The apparatus of claim 13, wherein the obtaining module is configured to:
carrying out smoothing processing on the area where the contact point is located to obtain a trigger area of interactive operation;
and zooming the area where the contact point is located according to the target zooming scale to obtain the trigger area of the interactive operation.
15. The apparatus of claim 11, wherein the obtaining module is configured to:
and acquiring body part indication information corresponding to the body part information with the maximum matching degree in the at least one matching degree.
16. The apparatus of claim 15, wherein the means for obtaining is configured to:
when only one matching degree in the at least one matching degree is greater than a matching degree threshold value, acquiring body part indication information corresponding to the body part information of which the matching degree is greater than the matching degree threshold value; or,
when the at least one matching degree comprises a plurality of matching degrees which are greater than the matching degree threshold value, acquiring body part indication information corresponding to body part information with the maximum matching degree in the plurality of matching degrees which are greater than the matching degree threshold value.
17. The apparatus of claim 16, wherein the control module is further configured to:
and when the at least one matching degree is less than or equal to the threshold value of the matching degree, ignoring the interaction operation, or executing an instruction corresponding to the position based on the position of the trigger area in the user interface.
18. The apparatus of claim 11, wherein the control module is further configured to:
and when the body part indication information is not associated with an action control instruction or the state of the action control instruction corresponding to the body part indication information is an execution prohibition state, ignoring the interactive operation.
19. The apparatus according to claim 11, wherein the obtaining module is further configured to obtain at least one body part information input by any user based on the input prompt information;
the device further comprises:
the storage module is used for storing the at least one body part information and the user identification of any user in an associated manner;
correspondingly, the acquisition module is further configured to match the area information of the trigger area with at least one body part information corresponding to the user identifier of the currently logged-in user;
the acquisition module is also used for acquiring body part indication information corresponding to the matched body part information; or, when the matching result indicates that the area information of the trigger area fails to match all of the at least one body part information corresponding to the user identification, displaying a user mismatch prompt message.
20. The apparatus of claim 11, further comprising:
and the setting module is used for setting the corresponding relation between the body part indication information and the action control instruction based on the action setting instruction when the action setting instruction is received.
21. An electronic device, comprising a processor and a memory, wherein at least one instruction is stored in the memory, and wherein the instruction is loaded and executed by the processor to perform the operations performed by the virtual object control method of any one of claims 1 to 10.
22. A computer-readable storage medium having stored therein at least one instruction, which is loaded and executed by a processor to perform operations performed by the virtual object control method of any one of claims 1 to 10.
CN201811393396.5A 2018-11-21 2018-11-21 Virtual object control method and device, electronic equipment and storage medium Active CN109529340B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811393396.5A CN109529340B (en) 2018-11-21 2018-11-21 Virtual object control method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN109529340A CN109529340A (en) 2019-03-29
CN109529340B true CN109529340B (en) 2020-08-11

Family

ID=65849033

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811393396.5A Active CN109529340B (en) 2018-11-21 2018-11-21 Virtual object control method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN109529340B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110585708B (en) * 2019-09-12 2020-09-18 腾讯科技(深圳)有限公司 Method, device and readable storage medium for landing from aircraft in virtual environment
CN110928411B (en) * 2019-11-18 2021-03-26 珠海格力电器股份有限公司 AR-based interaction method and device, storage medium and electronic equipment
CN112657200B (en) * 2020-12-23 2023-02-10 上海米哈游天命科技有限公司 Role control method, device, equipment and storage medium
CN116681872A (en) * 2022-02-22 2023-09-01 Oppo广东移动通信有限公司 Content display method and device and electronic equipment

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101069778A (en) * 2007-03-29 2007-11-14 腾讯科技(深圳)有限公司 Control method and system for computer game
KR20140135276A (en) * 2013-05-07 2014-11-26 (주)위메이드엔터테인먼트 Method and Apparatus for processing a gesture input on a game screen
CN106474738A (en) * 2016-11-17 2017-03-08 成都中科创达软件有限公司 A kind of virtual electronic organ playing method based on fingerprint recognition and device
CN108671539A (en) * 2018-05-04 2018-10-19 网易(杭州)网络有限公司 Target object exchange method and device, electronic equipment, storage medium

Also Published As

Publication number Publication date
CN109529340A (en) 2019-03-29

Similar Documents

Publication Publication Date Title
CN109529340B (en) Virtual object control method and device, electronic equipment and storage medium
CN108379844B (en) Method, device, electronic device and storage medium for controlling movement of virtual object
US10628478B2 (en) Method and device thereof for user interaction based on virtual objects and non-volatile storage medium
CN109529356B (en) Battle result determining method, device and storage medium
TWI818343B (en) Method of presenting virtual scene, device, electrical equipment, storage medium, and computer program product
CN108681402A (en) Identify exchange method, device, storage medium and terminal device
US11270087B2 (en) Object scanning method based on mobile terminal and mobile terminal
WO2022267729A1 (en) Virtual scene-based interaction method and apparatus, device, medium, and program product
CN113350779A (en) Game virtual character action control method and device, storage medium and electronic equipment
CN112995687B (en) Interaction method, device, equipment and medium based on Internet
TWI729323B (en) Interactive gamimg system
WO2018000612A1 (en) Touchpad-based method for unlocking terminal and electronic device
EP4378552A1 (en) Method and apparatus for interaction in virtual environment
JP7163526B1 (en) Information processing system, program and information processing method
JP7286856B2 (en) Information processing system, program and information processing method
JP7286857B2 (en) Information processing system, program and information processing method
US20240177435A1 (en) Virtual interaction methods, devices, and storage media
WO2024060895A1 (en) Group establishment method and apparatus for virtual scene, and device and storage medium
CN117224947A (en) AR interaction method, AR interaction device, electronic equipment and computer-readable storage medium
CN116983649A (en) Virtual object control method, device, equipment and storage medium
CN113873162A (en) Shooting method, shooting device, electronic equipment and readable storage medium
CN115811623A (en) Live broadcasting method and system based on virtual image
CN115888094A (en) Game control method, device, terminal equipment and storage medium
CN116679847A (en) Chat message processing method, device, equipment and medium
JP2023015979A (en) Information processing system, program, and information processing method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant