CN111061360A - Control method, device, medium and electronic equipment based on head action of user - Google Patents

Control method, device, medium and electronic equipment based on head action of user

Info

Publication number
CN111061360A
CN111061360A (application CN201911102941.5A)
Authority
CN
China
Prior art keywords
head
weapon
user
preset
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911102941.5A
Other languages
Chinese (zh)
Other versions
CN111061360B (en)
Inventor
李云飞
张前川
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing ByteDance Network Technology Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd filed Critical Beijing ByteDance Network Technology Co Ltd
Priority to CN201911102941.5A priority Critical patent/CN111061360B/en
Publication of CN111061360A publication Critical patent/CN111061360A/en
Application granted granted Critical
Publication of CN111061360B publication Critical patent/CN111061360B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/012Head tracking input arrangements
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/55Controlling game characters or game objects based on the game progress

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention provides a control method, apparatus, medium and electronic device based on head actions of a user. The control method comprises the following steps: displaying a user interface of an application program on a sensing control screen of a terminal, wherein the user interface comprises a virtual environment interface and a real interface for displaying a head portrait; receiving a sensing operation generated based on a head action of the user; and determining whether the sensing operation satisfies an execution condition, and controlling a virtual character to perform a target action when the sensing operation satisfies the execution condition. By receiving the sensing operation generated based on the user's head action and controlling the virtual character to perform the target action when the sensing operation satisfies the execution condition, the method and the device allow the user interface of the application program to be controlled more conveniently and flexibly, free the user's hands, and improve the user experience.

Description

Control method, device, medium and electronic equipment based on head action of user
Technical Field
The invention relates to the technical field of computers, in particular to a control method, a control device, a control medium and electronic equipment based on head actions of a user.
Background
Most battle games are role-playing games (RPGs); their control schemes are mostly gesture-based, which detracts from the player's enjoyment of inhabiting the role and is inconvenient to operate on mobile phones with smaller screens.
Therefore, through long-term research and development, the inventors have conducted extensive study of game control methods and propose a control method based on the head actions of the user to solve at least one of the above technical problems.
Disclosure of Invention
An object of the present invention is to provide a control method, apparatus, medium and electronic device based on head actions of a user, which can solve at least one of the above technical problems. The specific scheme is as follows:
According to a specific implementation of the invention, in a first aspect, the invention provides a control method based on head actions of a user, comprising the following steps:
displaying a user interface of an application program on a sensing control screen of a terminal, wherein the user interface comprises a virtual environment interface and a real interface for displaying a head portrait;
receiving a sensing operation generated based on the head action of a user;
and determining whether the sensing operation satisfies an execution condition, and controlling a virtual character to perform a target action when the sensing operation satisfies the execution condition.
Optionally, before the determining whether the sensing operation satisfies the execution condition, the method further includes:
reading the execution condition;
wherein the execution condition at least includes:
the sensing operation is an operation of sensing a specified action performed by the user, and the specified action comprises at least one of: pointing the nose, while nodding, at a first position area of the sensing control screen of the terminal corresponding to the nodding direction; and pointing an ear, while shaking the head, at a second position area of the sensing control screen of the terminal corresponding to the head-shaking direction.
Optionally, before controlling the virtual character to perform the target action, the method further comprises:
reading the target action;
wherein the target action comprises:
at least one of: a first target action of firing with a first preset-type weapon; a second target action of firing with a second preset-type weapon; a third target action of firing with a third preset-type weapon; a fourth target action of loading ammunition; a fifth target action of changing ammunition; a sixth target action of switching among the preset weapons; and a seventh target action of unloading ammunition.
Optionally, before receiving the sensing operation generated based on the head action of the user, the method further comprises:
determining the head action of the user according to the coordinate data of a plurality of feature points selected from the head image of the user.
Optionally, the determining the head action of the user according to the coordinate data of the plurality of feature points selected from the head image of the user includes:
acquiring continuous head coordinate data corresponding to a plurality of feature points selected from the user head image within a preset time period;
determining the change condition of the head coordinate of the user according to the continuous head coordinate data of any one of the plurality of feature points;
and determining the head action of the user according to the coordinate change condition of the head of the user.
Optionally, before controlling the virtual character to perform the target action, the method further comprises:
acquiring the nodding strength corresponding to the head action of the user;
and acquiring a first correspondence between preset nodding strength and weapon types, and determining, according to the first correspondence, the weapon to be used by the virtual character from at least a burst-type weapon corresponding to the first preset-type weapon and a shooting-type weapon corresponding to the second preset-type weapon.
Optionally, the determining, according to the first correspondence, the weapon to be used by the virtual character from at least a burst-type weapon corresponding to the first preset-type weapon and a shooting-type weapon corresponding to the second preset-type weapon comprises:
if the nodding strength is at a first nodding-strength level, determining that the weapon type used by the virtual character is the burst-type weapon; or,
if the nodding strength is at a second nodding-strength level, determining that the weapon type used by the virtual character is the shooting-type weapon;
wherein the first nodding-strength level is lower than the second nodding-strength level, and a first lethality attribute corresponding to the burst-type weapon is lower than a second lethality attribute corresponding to the shooting-type weapon.
Optionally, before controlling the virtual character to perform the target action, the method further comprises:
acquiring the nodding frequency corresponding to the head action of the user;
and acquiring a second correspondence between preset nodding frequency and weapon continuous-fire frequency, and determining the continuous-fire frequency of the virtual character's weapon according to the second correspondence.
Optionally, before controlling the virtual character to perform the target action, the method further comprises:
acquiring the head-shaking amplitude corresponding to the head action of the user;
and acquiring a third correspondence between preset head-shaking amplitude and weapon types, and determining, according to the third correspondence, the weapon to be used by the virtual character from at least a shooting-type weapon corresponding to the second preset-type weapon and a bomb-type weapon corresponding to the third preset-type weapon.
Optionally, the determining, according to the third correspondence, the weapon to be used by the virtual character from at least a shooting-type weapon corresponding to the second preset-type weapon and a bomb-type weapon corresponding to the third preset-type weapon comprises:
if the head-shaking amplitude is at a first head-shaking amplitude level, determining that the weapon type used by the virtual character is the bomb-type weapon; or,
if the head-shaking amplitude is at a second head-shaking amplitude level, determining that the weapon type used by the virtual character is the shooting-type weapon;
wherein the first head-shaking amplitude level is lower than the second head-shaking amplitude level, and a third lethality attribute corresponding to the bomb-type weapon is higher than the second lethality attribute corresponding to the shooting-type weapon.
Optionally, before controlling the virtual character to perform the target action, the method further comprises:
acquiring the head-shaking duration corresponding to the head action of the user;
and acquiring a fourth correspondence between preset head-shaking duration and the number of weapons fired, and determining the number of weapons fired by the virtual character according to the fourth correspondence.
Optionally, the determining the number of weapons fired by the virtual character according to the fourth correspondence comprises:
the longer the head-shaking duration, the greater the corresponding number of weapons fired.
Optionally, before controlling the virtual character to perform the target action, the method further comprises:
acquiring the head movement direction corresponding to the head action of the user;
and acquiring a fifth correspondence between preset head movement directions and firing a weapon in the preset direction indicated by the ear, and determining, according to the fifth correspondence, that the virtual character fires a weapon in the preset direction indicated by the ear.
Optionally, the determining, according to the fifth correspondence, that the virtual character fires a weapon in the preset direction indicated by the ear comprises:
if the user's head twists to the right, determining that the virtual character fires a weapon in any one of the preset directions of the upper right, the right side and the lower right of the virtual environment interface; or,
if the user's head twists to the left, determining that the virtual character fires a weapon in any one of the preset directions of the upper left, the left side and the lower left of the virtual environment interface.
According to a second aspect of the present invention, there is provided a control device based on head movements of a user, comprising:
the display unit is used for displaying a user interface of an application program on a sensing control screen of the terminal, wherein the user interface comprises a virtual environment interface and a real interface for displaying a head portrait;
the receiving unit is used for receiving sensing operation generated based on head movement of a user;
and the processing unit is used for determining whether the sensing operation received by the receiving unit satisfies an execution condition, and controlling the virtual character to perform the target action when the sensing operation satisfies the execution condition.
According to a third aspect, the present invention provides a computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing the control method based on head actions of a user as described in any one of the above.
According to a fourth aspect of the present invention, there is provided an electronic device comprising: one or more processors; and a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the control method based on head actions of a user as described in any one of the above.
Compared with the prior art, the scheme of the embodiments of the invention has at least the following beneficial effects: by receiving a sensing operation generated based on the user's head action and controlling the virtual character to perform the target action when the sensing operation satisfies the execution condition, the user interface of the application program can be controlled more conveniently and flexibly, the user's hands are freed, and the user experience is improved.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention. It is obvious that the drawings in the following description are only some embodiments of the invention, and that for a person skilled in the art, other drawings can be derived from them without inventive effort. In the drawings:
FIG. 1 is a diagram illustrating an application scenario of a control method based on a head action of a user according to an embodiment of the present invention;
FIG. 2 shows a flow chart of a control method based on user head movements according to an embodiment of the invention;
FIG. 3 shows a schematic diagram of determining head movements of a user by an image capture device according to an embodiment of the invention;
FIG. 4 is a schematic diagram of a control device based on head movements of a user according to an embodiment of the present invention;
fig. 5 shows a schematic diagram of an electronic device connection structure according to an embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention will be described in further detail with reference to the accompanying drawings, and it is apparent that the described embodiments are only a part of the embodiments of the present invention, not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terminology used in the embodiments of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the examples of the present invention and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise, and "a plurality" typically includes at least two.
It should be understood that the term "and/or" as used herein merely describes an association between associated objects, meaning that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates that the former and latter associated objects are in an "or" relationship.
It should be understood that although the terms first, second, third, etc. may be used in embodiments of the present invention to describe various elements, these elements should not be limited by these terms. These terms are used only to distinguish one element from another. For example, a first element could also be termed a second element and, similarly, a second element could also be termed a first element, without departing from the scope of embodiments of the present invention.
The words "if", as used herein, may be interpreted as "at … …" or "at … …" or "in response to a determination" or "in response to a detection", depending on the context. Similarly, the phrases "if determined" or "if detected (a stated condition or event)" may be interpreted as "when determined" or "in response to a determination" or "when detected (a stated condition or event)" or "in response to a detection (a stated condition or event)", depending on the context.
It is also noted that the terms "comprises", "comprising", or any other variation thereof, are intended to cover a non-exclusive inclusion, such that an article or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such article or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the article or apparatus that comprises the element.
Alternative embodiments of the present invention are described in detail below with reference to the accompanying drawings.
Example 1
As shown in fig. 1, which is a diagram of an application scenario according to an embodiment of the present invention, a user operates, through a terminal device such as a mobile phone, a client installed on the terminal device, and the client communicates data with a background server through a network. A specific application scenario is one in which a virtual character is controlled to perform a target action based on head actions of the user; however, this is not the only applicable scenario, and any scenario to which this embodiment can be applied is included.
As shown in fig. 2, according to an embodiment of the present invention, in a first aspect, the present invention provides a control method based on a head movement of a user, which is applied to a terminal, and specifically includes the following method steps:
s202: and displaying a user interface of the application program on a sensing control screen of the terminal, wherein the user interface comprises a virtual environment interface and a real interface for displaying the head portrait.
In this step, the user interface includes not only the virtual environment interface but also a real interface displaying the avatar.
The virtual environment interface is displayed in real time, so that the user can see the game situation in real time and can continuously adjust the game strategy, change weapons, and change the number of weapons used.
In addition, the user interface also comprises a real interface for displaying the head portrait, so that, based on the head image data acquired by an image acquisition device installed on the terminal, such as a camera, the user can continuously perform nodding, head-shaking, left-twisting or right-twisting operations, in order to control the virtual object to perform the target action through the user's head actions.
S204: receiving a sensing operation generated based on the head action of the user.
As shown in fig. 3, a schematic diagram of determining head movements of a user by an image capturing device according to an embodiment of the present invention is shown.
As shown in fig. 3, on the real interface displaying the head portrait, a large amount of head image data of the current user can be obtained through an image acquisition device such as a camera; a plurality of feature points, such as feature point A, feature point B, feature point C and feature point D shown in fig. 3, are selected, and the head action of the user is determined according to the coordinate data of these feature points.
Specifically, the step of determining the head action of the user according to the coordinate data of the plurality of feature points selected from the head image of the user comprises the following steps:
acquiring continuous head coordinate data corresponding to a plurality of feature points selected from a user head image within a preset time period;
determining the change condition of the head coordinate of the user according to the continuous head coordinate data of any one of the plurality of feature points;
and determining the head action of the user according to the coordinate change condition of the head of the user.
Finally, as shown in fig. 3, the head action of the user is determined by analyzing the changes in the coordinate data of feature point A, feature point B, feature point C and feature point D, wherein the determined head action includes nodding, head shaking, twisting the head left, or twisting the head right.
Determining the user's head action from the changes in the coordinate data of the plurality of feature points is a conventional technique with corresponding existing algorithms, and is not described in detail here; a minimal illustrative sketch is given below.
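Purely for illustration (the patent does not disclose a concrete algorithm), the following sketch shows one simple way such an algorithm might classify the head action from the coordinate track of a single feature point; the thresholds, the excursion heuristic, and all names are assumptions.

from typing import List, Tuple

Point = Tuple[float, float]  # (x, y) screen coordinates of one feature point

def classify_head_action(track: List[Point],
                         nod_threshold: float = 20.0,
                         shake_threshold: float = 20.0) -> str:
    """Classify a head action from the coordinate track of one feature
    point (e.g. the nose tip) sampled over the preset time period."""
    if len(track) < 2:
        return "none"
    xs = [x for x, _ in track]
    ys = [y for _, y in track]
    dx = max(xs) - min(xs)          # total horizontal excursion
    dy = max(ys) - min(ys)          # total vertical excursion
    if dy > nod_threshold and dy >= dx:
        return "nod"                # dominant up-down motion
    if dx > shake_threshold:
        net = xs[-1] - xs[0]        # net horizontal displacement
        if abs(net) < dx / 2:
            return "shake"          # back-and-forth motion: head shaking
        return "twist_right" if net > 0 else "twist_left"
    return "none"

# Example: a mostly vertical excursion is classified as a nod.
print(classify_head_action([(100, 100), (101, 140), (100, 104)]))  # -> "nod"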
In order to control the virtual character to perform the target action through the user's head action more conveniently and accurately, the head action is combined with the action of the user's nose and ears: for example, while nodding, the nose points at a first position area of the sensing control screen of the terminal corresponding to the nodding direction; and while shaking the head, an ear points at a second position area of the sensing control screen of the terminal corresponding to the head-shaking direction.
S206: determining whether the sensing operation satisfies the execution condition, and controlling the virtual character to perform the target action when the sensing operation satisfies the execution condition.
In this step, when the sensing operation satisfies the execution condition, the target action that the virtual character is controlled to perform comprises at least one of: a first target action of firing with a first preset-type weapon; a second target action of firing with a second preset-type weapon; a third target action of firing with a third preset-type weapon; a fourth target action of loading ammunition; a fifth target action of changing ammunition; a sixth target action of switching among the preset weapons; and a seventh target action of unloading ammunition.
Only common target actions are listed here; new skills can be added according to user requirements, corresponding execution conditions can be set for the added skills, and the virtual character is controlled to perform the new target action when the sensing operation satisfies the new execution condition. This is not described further here; an illustrative modeling of the listed target actions follows.
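As a hypothetical sketch only (these names and values are not from the patent), the seven target actions could be modeled as an enumeration, with new skills added by extending it and registering an execution condition for each:

from enum import Enum

class TargetAction(Enum):
    FIRE_FIRST_PRESET_WEAPON  = 1  # fire with the first preset-type weapon (burst)
    FIRE_SECOND_PRESET_WEAPON = 2  # fire with the second preset-type weapon (shooting)
    FIRE_THIRD_PRESET_WEAPON  = 3  # fire with the third preset-type weapon (bomb)
    LOAD_AMMUNITION           = 4
    CHANGE_AMMUNITION         = 5
    SWITCH_WEAPON             = 6  # switch among the preset weapons
    UNLOAD_AMMUNITION         = 7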
Optionally, before determining whether the sensing operation satisfies the execution condition, the method further comprises: reading the execution condition; wherein the execution condition at least includes: the sensing operation is an operation of sensing a specified action performed by the user, and the specified action comprises at least one of: pointing the nose, while nodding, at a first position area of the sensing control screen of the terminal corresponding to the nodding direction; and pointing an ear, while shaking the head, at a second position area of the sensing control screen of the terminal corresponding to the head-shaking direction.
In this step, pointing the nose, while nodding, at the first position area of the sensing control screen corresponding to the nodding direction, and pointing an ear, while shaking the head, at the second position area corresponding to the head-shaking direction, makes controlling the virtual character to perform the target action through the user's head action more convenient and accurate.

The ranges of the first position area and the second position area are not specifically limited in this step. If controlling the virtual character to perform the target action requires operating a burst-type weapon with a low lethality attribute, such as spraying fire, the values corresponding to the ranges of the first and second position areas may be set smaller, for example both within a first preset range. Conversely, if a bomb-type weapon with a high lethality attribute needs to be operated, for example firing a bomb, the values corresponding to the ranges of the first and second position areas may be set larger, for example both within a second preset range. If a shooting-type weapon needs to be operated, the values corresponding to the ranges of the first and second position areas are set within a third preset range, where the first value corresponding to the first preset range is smaller than the third value corresponding to the third preset range, and the third value corresponding to the third preset range is smaller than the second value corresponding to the second preset range. The first value, the second value and the third value are not specifically limited here; a sketch of this ordering follows.
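The description above only constrains the ordering of the three preset ranges. A minimal sketch with entirely hypothetical radii and a circular position-area test (assumptions, not the patent's method) might read:

import math

# Hypothetical radii for the position areas; only their ordering
# (first < third < second) is taken from the description above.
FIRST_PRESET_RANGE = 40.0    # burst-type weapon, low lethality
THIRD_PRESET_RANGE = 60.0    # shooting-type weapon
SECOND_PRESET_RANGE = 90.0   # bomb-type weapon, high lethality
assert FIRST_PRESET_RANGE < THIRD_PRESET_RANGE < SECOND_PRESET_RANGE

def pointing_hits_area(pointed: tuple, area_center: tuple, radius: float) -> bool:
    """True if the nose/ear pointing position falls inside the circular
    position area of the given radius around area_center."""
    return math.dist(pointed, area_center) <= radius

# Example: a point 50 px from the area center satisfies the shooting-type
# range but not the burst-type range.
print(pointing_hits_area((150, 100), (100, 100), THIRD_PRESET_RANGE))  # True
print(pointing_hits_area((150, 100), (100, 100), FIRST_PRESET_RANGE))  # False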
Optionally, before controlling the virtual character to perform the target action, the method further comprises: reading the target action; wherein the target action comprises at least one of: a first target action of firing with a first preset-type weapon; a second target action of firing with a second preset-type weapon; a third target action of firing with a third preset-type weapon; a fourth target action of loading ammunition; a fifth target action of changing ammunition; a sixth target action of switching among the preset weapons; and a seventh target action of unloading ammunition.
Optionally, before receiving the sensing operation generated based on the head action of the user, the method further comprises: determining the head action of the user according to the coordinate data of a plurality of feature points selected from the head image of the user.
Optionally, before controlling the virtual character to execute the target action, the method further includes:
acquiring the nodding strength corresponding to the head action of the user;
and acquiring a first correspondence between preset nodding strength and weapon types, and determining, according to the first correspondence, the weapon to be used by the virtual character from at least a burst-type weapon corresponding to the first preset-type weapon and a shooting-type weapon corresponding to the second preset-type weapon.
Optionally, the determining, according to the first correspondence, the weapon to be used by the virtual character from at least a burst-type weapon corresponding to the first preset-type weapon and a shooting-type weapon corresponding to the second preset-type weapon comprises:
if the nodding strength is at the first nodding-strength level, determining that the weapon type used by the virtual character is the burst-type weapon; or,
if the nodding strength is at the second nodding-strength level, determining that the weapon type used by the virtual character is the shooting-type weapon;
wherein the first nodding-strength level is lower than the second nodding-strength level, and the first lethality attribute corresponding to the burst-type weapon is lower than the second lethality attribute corresponding to the shooting-type weapon.
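For illustration only, a minimal sketch of this first correspondence, assuming a normalized 0-1 strength scale and a hypothetical level boundary (the patent specifies neither):

def weapon_for_nod_strength(strength: float) -> str:
    """Map a normalized nodding strength (assumed 0.0-1.0 scale) to a
    weapon type per the first correspondence."""
    FIRST_LEVEL_MAX = 0.5          # assumed boundary between the two levels
    if strength <= FIRST_LEVEL_MAX:
        return "burst-type"        # first level -> lower-lethality weapon
    return "shooting-type"         # second level -> higher-lethality weapon

print(weapon_for_nod_strength(0.3))  # -> "burst-type"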
Optionally, before controlling the virtual character to execute the target action, the method further includes:
acquiring the nodding frequency corresponding to the head action of the user;
and acquiring a second correspondence between preset nodding frequency and weapon continuous-fire frequency, and determining the continuous-fire frequency of the virtual character's weapon according to the second correspondence.
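A sketch of the second correspondence; the linear form and its coefficient are assumptions, since the patent only states that a preset mapping exists:

def continuous_fire_rate(nods_per_second: float) -> float:
    """Second correspondence: map nodding frequency to the weapon's
    continuous-fire rate (rounds per second). Linear mapping assumed."""
    ROUNDS_PER_NOD = 3.0           # assumed scaling factor
    return nods_per_second * ROUNDS_PER_NOD

print(continuous_fire_rate(2.0))   # -> 6.0 rounds per second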
Optionally, before controlling the virtual character to execute the target action, the method further includes:
acquiring the head-shaking amplitude corresponding to the head action of the user;
and acquiring a third correspondence between preset head-shaking amplitude and weapon types, and determining, according to the third correspondence, the weapon to be used by the virtual character from at least a shooting-type weapon corresponding to the second preset-type weapon and a bomb-type weapon corresponding to the third preset-type weapon.
Optionally, the determining, according to the third correspondence, the weapon to be used by the virtual character from at least a shooting-type weapon corresponding to the second preset-type weapon and a bomb-type weapon corresponding to the third preset-type weapon comprises:
if the head-shaking amplitude is at the first head-shaking amplitude level, determining that the weapon type used by the virtual character is the bomb-type weapon; or,
if the head-shaking amplitude is at the second head-shaking amplitude level, determining that the weapon type used by the virtual character is the shooting-type weapon;
wherein the first head-shaking amplitude level is lower than the second head-shaking amplitude level, and the third lethality attribute corresponding to the bomb-type weapon is higher than the second lethality attribute corresponding to the shooting-type weapon.
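A sketch of the third correspondence under an assumed amplitude unit (degrees) and a hypothetical level boundary, neither of which the patent specifies:

def weapon_for_shake_amplitude(amplitude_deg: float) -> str:
    """Third correspondence: a smaller (first-level) shake selects the
    bomb-type weapon; a larger (second-level) shake the shooting-type."""
    FIRST_LEVEL_MAX = 25.0         # assumed boundary in degrees
    if amplitude_deg <= FIRST_LEVEL_MAX:
        return "bomb-type"         # first amplitude level, higher lethality
    return "shooting-type"         # second amplitude level

print(weapon_for_shake_amplitude(15.0))  # -> "bomb-type"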
Optionally, before controlling the virtual character to execute the target action, the method further includes:
acquiring the head-shaking duration corresponding to the head action of the user;
and acquiring a fourth correspondence between preset head-shaking duration and the number of weapons fired, and determining the number of weapons fired by the virtual character according to the fourth correspondence.
Optionally, the determining the number of weapons fired by the virtual character according to the fourth correspondence comprises:
the longer the head-shaking duration, the greater the corresponding number of weapons fired.
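A sketch of the fourth correspondence; the patent only requires that the count grow with duration, so the linear rate here is an assumption:

def weapons_fired_for_shake_duration(seconds: float) -> int:
    """Fourth correspondence: the longer the head-shaking duration, the
    more weapons are fired. Assumed linear rate, minimum of one."""
    WEAPONS_PER_SECOND = 2         # assumed rate
    return max(1, int(seconds * WEAPONS_PER_SECOND))

print(weapons_fired_for_shake_duration(2.5))  # -> 5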
Optionally, before controlling the virtual character to execute the target action, the method further includes:
acquiring the head movement direction corresponding to the head action of the user;
and acquiring a fifth correspondence between preset head movement directions and firing a weapon in the preset direction indicated by the ear, and determining, according to the fifth correspondence, that the virtual character fires a weapon in the preset direction indicated by the ear.
Optionally, the determining, according to the fifth correspondence, that the virtual character fires a weapon in the preset direction indicated by the ear comprises:
if the user's head twists to the right, determining that the virtual character fires a weapon in any one of the preset directions of the upper right, the right side and the lower right of the virtual environment interface; or,
if the user's head twists to the left, determining that the virtual character fires a weapon in any one of the preset directions of the upper left, the left side and the lower left of the virtual environment interface.
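The fifth correspondence is naturally a lookup table. A sketch with illustrative direction names (not from the patent):

FIFTH_CORRESPONDENCE = {
    "twist_right": ("upper_right", "right", "lower_right"),
    "twist_left":  ("upper_left",  "left",  "lower_left"),
}

def firing_directions(head_move: str) -> tuple:
    """Return the preset directions in which the virtual character may
    fire for a given head movement; empty if the movement is unmapped."""
    return FIFTH_CORRESPONDENCE.get(head_move, ())

print(firing_directions("twist_right"))  # -> ('upper_right', 'right', 'lower_right')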
The invention provides a control method based on head actions of a user, in which a sensing operation generated based on the user's head action is received and the virtual character is controlled to perform the target action when the sensing operation satisfies the execution condition, so that the user interface of the application program can be controlled more conveniently and flexibly, the user's hands are freed, and the user experience is improved.
Example 2
As shown in fig. 1, which is a diagram of an application scenario according to an embodiment of the present invention, a user operates, through a terminal device such as a mobile phone, a client installed on the terminal device, and the client communicates data with a background server through a network. A specific application scenario is one in which a virtual character is controlled to perform a target action based on head actions of the user; however, this is not the only applicable scenario, and any scenario to which this embodiment can be applied is included. Since this embodiment implements the method steps described in embodiment 1 on the basis of the same names and meanings, its explanation is similar to that of embodiment 1 and it achieves the same technical effects as embodiment 1; the description is therefore not repeated here.
Referring to fig. 4, according to an embodiment of the present invention, in a second aspect, the present invention provides a control device based on head movements of a user, which specifically includes a display unit 402, a receiving unit 404, and a processing unit 406, and specifically as follows:
a display unit 402, configured to display a user interface of an application program on a sensing control screen of the terminal, where the user interface includes a virtual environment interface and a real interface for displaying a head portrait;
a receiving unit 404, configured to receive a sensing operation generated based on a head motion of a user;
the processing unit 406 is configured to determine whether the sensing operation received by the receiving unit 404 satisfies an execution condition, and control the virtual character to execute the target action when the sensing operation satisfies the execution condition.
Optionally, the apparatus further comprises:
a reading unit (not shown in fig. 4) for reading the execution condition before the processing unit 406 determines whether the sensing operation satisfies the execution condition;
the execution condition read by the reading unit at least includes:
the sensing operation is an operation of sensing a specified action performed by the user, and the specified action comprises at least one of: pointing the nose, while nodding, at a first position area of the sensing control screen of the terminal corresponding to the nodding direction; and pointing an ear, while shaking the head, at a second position area of the sensing control screen of the terminal corresponding to the head-shaking direction.
Optionally, the reading unit is further configured to:
reading the target action before the processing unit 406 controls the virtual character to execute the target action;
the target action read by the reading unit comprises:
the method comprises at least one of a first target action of firing by using a first preset type weapon, a second target action of firing by using a second preset type weapon, a third target action of firing by using a third preset type weapon, a fourth target action of ammunition loading, a fifth target action of ammunition changing, a sixth target action of switching weapons among various preset weapons, and a seventh target action of unloading ammunition.
Optionally, the apparatus further comprises:
a determining unit (not shown in fig. 4) configured to determine the head movement of the user according to the coordinate data of the plurality of feature points selected from the head image of the user before the receiving unit 404 receives the sensing operation generated based on the head movement of the user.
Optionally, the determining unit is specifically configured to:
acquiring continuous head coordinate data corresponding to a plurality of feature points selected from a user head image within a preset time period;
determining the change condition of the head coordinate of the user according to the continuous head coordinate data of any one of the plurality of feature points;
and determining the head action of the user according to the coordinate change condition of the head of the user.
Optionally, the apparatus further comprises:
an obtaining unit (not shown in fig. 4) configured to obtain a nodding strength corresponding to the head action of the user before the processing unit 406 controls the virtual character to execute the target action; and
acquiring a first correspondence between preset nodding strength and weapon types;
the determination unit is further configured to:
and determining, according to the first correspondence acquired by the obtaining unit, the weapon to be used by the virtual character from at least a burst-type weapon corresponding to the first preset-type weapon and a shooting-type weapon corresponding to the second preset-type weapon.
Optionally, the determining unit is specifically configured to:
if the nodding strength is at the first nodding-strength level, determine that the weapon type used by the virtual character is the burst-type weapon; or,
if the nodding strength is at the second nodding-strength level, determine that the weapon type used by the virtual character is the shooting-type weapon;
wherein the first nodding-strength level is lower than the second nodding-strength level, and the first lethality attribute corresponding to the burst-type weapon is lower than the second lethality attribute corresponding to the shooting-type weapon.
Optionally, the obtaining unit is further configured to:
before the processing unit 406 controls the virtual character to execute the target action, acquiring a nodding frequency corresponding to the head action of the user; and
acquiring a second correspondence between preset nodding frequency and weapon continuous-fire frequency;
the determination unit is further configured to:
and determining the continuous-fire frequency of the virtual character's weapon according to the second correspondence acquired by the obtaining unit.
Optionally, the obtaining unit is further configured to:
before the processing unit 406 controls the virtual character to execute the target action, acquiring a head shaking amplitude corresponding to the head action of the user; and
acquiring a third correspondence between preset head-shaking amplitude and weapon types;
the determination unit is further configured to:
and determining, according to the third correspondence acquired by the obtaining unit, the weapon to be used by the virtual character from at least a shooting-type weapon corresponding to the second preset-type weapon and a bomb-type weapon corresponding to the third preset-type weapon.
Optionally, the determining unit is further specifically configured to:
if the head-shaking amplitude is at the first head-shaking amplitude level, determine that the weapon type used by the virtual character is the bomb-type weapon; or,
if the head-shaking amplitude is at the second head-shaking amplitude level, determine that the weapon type used by the virtual character is the shooting-type weapon;
wherein the first head-shaking amplitude level is lower than the second head-shaking amplitude level, and the third lethality attribute corresponding to the bomb-type weapon is higher than the second lethality attribute corresponding to the shooting-type weapon.
Optionally, the obtaining unit is further configured to:
before the processing unit 406 controls the virtual character to execute the target action, acquiring a head shaking duration corresponding to the head action of the user; and
acquiring a fourth correspondence between preset head-shaking duration and the number of weapons fired;
the determination unit is further configured to:
and determining the number of weapons fired by the virtual character according to the fourth correspondence acquired by the obtaining unit.
Optionally, the determining unit is further specifically configured to:
the longer the head-shaking duration, the greater the corresponding number of weapons fired.
Optionally, the obtaining unit is further configured to:
before the processing unit 406 controls the virtual character to execute the target action, acquiring a head moving direction corresponding to the head action of the user; and
acquiring a fifth correspondence between preset head movement directions and firing a weapon in the preset direction indicated by the ear;
the determination unit is further configured to:
and determining, according to the fifth correspondence acquired by the obtaining unit, that the virtual character fires a weapon in the preset direction indicated by the ear.
Optionally, the determining unit is further specifically configured to:
if the user's head twists to the right, determine that the virtual character fires a weapon in any one of the preset directions of the upper right, the right side and the lower right of the virtual environment interface; or,
if the user's head twists to the left, determine that the virtual character fires a weapon in any one of the preset directions of the upper left, the left side and the lower left of the virtual environment interface.
The invention provides a control device based on head actions of a user, which receives, through the receiving unit, a sensing operation generated based on the user's head action and controls the virtual character to perform the target action when the sensing operation received by the receiving unit satisfies the execution condition, so that the user interface of the application program can be controlled more conveniently and flexibly, the user's hands are freed, and the user experience is improved.
Example 3
As shown in fig. 5, the present embodiment provides an electronic device for a control method based on head actions of a user, the electronic device comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to cause the at least one processor to: receive a sensing operation generated based on a head action of the user, and control the virtual character to perform the target action when the sensing operation satisfies the execution condition, so that the user interface of the application program can be controlled more conveniently and flexibly, the user's hands are freed, and the user experience is improved.
Example 4
The disclosed embodiments provide a non-volatile computer storage medium storing computer-executable instructions that can perform a control method based on a user head action in any of the above method embodiments.
Example 5
Referring now to FIG. 5, shown is a schematic diagram of an electronic device suitable for use in implementing embodiments of the present disclosure. The terminal device in the embodiments of the present disclosure may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle terminal (e.g., a car navigation terminal), and the like, and a stationary terminal such as a digital TV, a desktop computer, and the like. The electronic device shown in fig. 5 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 5, the electronic device may include a processing means (e.g., central processing unit, graphics processor, etc.) 501 that may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM)502 or a program loaded from a storage means 508 into a Random Access Memory (RAM) 503. In the RAM 503, various programs and data necessary for the operation of the electronic apparatus are also stored. The processing device 501, the ROM 502, and the RAM 503 are connected to each other through a bus 504. An input/output (I/O) interface 505 is also connected to bus 504.
Generally, the following devices may be connected to the I/O interface 505: input devices 506 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 507 including, for example, a Liquid Crystal Display (LCD), speakers, vibrators, and the like; storage devices 508 including, for example, magnetic tape, hard disk, etc.; and a communication device 509. The communication means 509 may allow the electronic device to communicate with other devices wirelessly or by wire to exchange data. While fig. 5 illustrates an electronic device having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 509, or installed from the storage means 508, or installed from the ROM 502. The computer program performs the above-described functions defined in the methods of the embodiments of the present disclosure when executed by the processing device 501.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: by receiving the sensing operation generated based on the head action of the user and controlling the virtual character to execute the target action under the condition that the sensing operation meets the execution condition, the user interface of the application program can be controlled more conveniently and flexibly, the hands of the user are freed, and the user experience is improved.
Alternatively, the computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: by receiving the sensing operation generated based on the head action of the user and controlling the virtual character to execute the target action under the condition that the sensing operation meets the execution condition, the user interface of the application program can be controlled more conveniently and flexibly, the hands of the user are freed, and the user experience is improved.
Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. The name of a unit does not, in some cases, constitute a limitation on the unit itself.

Claims (17)

1. A control method based on head actions of a user is characterized by comprising the following steps:
displaying a user interface of an application program on a sensing control screen of a terminal, wherein the user interface comprises a virtual environment interface and a real interface for displaying a head portrait;
receiving a sensing operation generated based on the head action of a user;
and determining whether the sensing operation satisfies an execution condition, and controlling a virtual character to perform a target action when the sensing operation satisfies the execution condition.
2. The method of claim 1, wherein before the determining whether the sensing operation satisfies an execution condition, the method further comprises:
reading the execution condition;
wherein the execution condition at least includes:
the sensing operation is an operation of sensing a specified action performed by the user, and the specified action comprises at least one of: pointing the nose, while nodding, at a first position area of the sensing control screen of the terminal corresponding to the nodding direction; and pointing an ear, while shaking the head, at a second position area of the sensing control screen of the terminal corresponding to the head-shaking direction.
3. The method of claim 2, wherein, before controlling the virtual character to perform the target action, the method further comprises:
reading the target action;
wherein the target action comprises:
at least one of: a first target action of firing with a first preset-type weapon; a second target action of firing with a second preset-type weapon; a third target action of firing with a third preset-type weapon; a fourth target action of loading ammunition; a fifth target action of changing ammunition; a sixth target action of switching among the preset weapons; and a seventh target action of unloading ammunition.
4. The method of claim 1, wherein, before the receiving a sensing operation generated based on a head action of the user, the method further comprises:
determining the head action of the user according to the coordinate data of a plurality of feature points selected from the head image of the user.
5. The method of claim 4, wherein the determining the head action of the user according to the coordinate data of the plurality of feature points selected from the head image of the user comprises:
acquiring continuous head coordinate data corresponding to a plurality of feature points selected from the head image of the user within a preset time period;
determining the change in the user's head coordinates according to the continuous head coordinate data of any one of the plurality of feature points;
and determining the head action of the user according to the change in the user's head coordinates.
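A minimal sketch of the feature-point analysis in claims 4 and 5, assuming 2-D screen coordinates and a purely illustrative threshold; the patent fixes neither the coordinate system nor any threshold value.

    from typing import List, Tuple

    def classify_head_action(track: List[Tuple[float, float]],
                             threshold: float = 10.0) -> str:
        """Classify a head action from one feature point's (x, y) samples
        collected over a preset time period."""
        xs = [x for x, _ in track]
        ys = [y for _, y in track]
        dx = max(xs) - min(xs)  # horizontal coordinate change (head shake)
        dy = max(ys) - min(ys)  # vertical coordinate change (head nod)
        if dy > dx and dy > threshold:
            return "nod"
        if dx > dy and dx > threshold:
            return "shake"
        return "none"

    print(classify_head_action([(0, 0), (1, 12), (0, 2)]))  # -> nod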
6. The method of claim 3, wherein prior to the controlling the virtual character to execute the target action, the method further comprises:
acquiring a nodding strength corresponding to the head action of the user;
and acquiring a first correspondence between preset nodding strength and weapon type, and determining, according to the first correspondence, the weapon to be used by the virtual character from at least a burst-type weapon corresponding to the first preset-type weapon and a shooting-type weapon corresponding to the second preset-type weapon.
7. The method of claim 6, wherein the determining, according to the first correspondence, the weapon to be used by the virtual character from at least a burst-type weapon corresponding to the first preset-type weapon and a shooting-type weapon corresponding to the second preset-type weapon comprises:
if the nodding strength is at a first nodding strength level, determining that the type of weapon used by the virtual character is the burst-type weapon; or,
if the nodding strength is at a second nodding strength level, determining that the type of weapon used by the virtual character is the shooting-type weapon;
wherein the first nodding strength level is lower than the second nodding strength level, and a first lethality attribute corresponding to the burst-type weapon is lower than a second lethality attribute corresponding to the shooting-type weapon.
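The first correspondence of claims 6 and 7, sketched in Python; the 0-to-1 strength scale and the 0.5 boundary between the two strength levels are assumptions made for illustration.

    def weapon_for_nod_strength(strength: float) -> str:
        """Map nodding strength to a weapon type: a lower strength level
        selects the burst-type (lower-lethality) weapon, a higher level
        the shooting-type (higher-lethality) weapon."""
        if strength < 0.5:                 # first nodding strength level
            return "burst-type weapon"
        return "shooting-type weapon"      # second nodding strength level

    print(weapon_for_nod_strength(0.3))  # -> burst-type weapon
    print(weapon_for_nod_strength(0.8))  # -> shooting-type weapon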
8. The method of claim 7, wherein prior to the controlling the virtual character to execute the target action, the method further comprises:
acquiring a nodding frequency corresponding to the head action of the user;
and acquiring a second correspondence between preset nodding frequency and weapon continuous-firing frequency, and determining the continuous-firing frequency of the virtual character's weapon according to the second correspondence.
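One way to realize the second correspondence of claim 8; the linear mapping and the 10-rounds-per-second cap are illustrative assumptions, as the claim only requires some preset correspondence.

    def continuous_fire_rate(nod_hz: float) -> float:
        """Map the user's nodding frequency (Hz) to the weapon's
        continuous-firing frequency (rounds per second)."""
        return min(nod_hz * 3.0, 10.0)  # assumed 3x gain, capped at 10

    print(continuous_fire_rate(2.0))  # -> 6.0 rounds per second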
9. The method of claim 7, wherein prior to the controlling the virtual character to execute the target action, the method further comprises:
acquiring a head shaking amplitude corresponding to the head action of the user;
and acquiring a third correspondence between preset head shaking amplitude and weapon type, and determining, according to the third correspondence, the weapon to be used by the virtual character from at least a shooting-type weapon corresponding to the second preset-type weapon and a bomb-type weapon corresponding to the third preset-type weapon.
10. The method of claim 9, wherein the determining, according to the third correspondence, the weapon to be used by the virtual character from at least a shooting-type weapon corresponding to the second preset-type weapon and a bomb-type weapon corresponding to the third preset-type weapon comprises:
if the head shaking amplitude is at a first head shaking amplitude level, determining that the type of weapon used by the virtual character is the bomb-type weapon; or,
if the head shaking amplitude is at a second head shaking amplitude level, determining that the type of weapon used by the virtual character is the shooting-type weapon;
wherein the first head shaking amplitude level is lower than the second head shaking amplitude level, and the third lethality attribute corresponding to the bomb-type weapon is higher than the second lethality attribute corresponding to the shooting-type weapon.
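The third correspondence of claims 9 and 10, sketched with an assumed amplitude measured in degrees and an illustrative 30-degree boundary between the two amplitude levels.

    def weapon_for_shake_amplitude(amplitude_deg: float) -> str:
        """Map head shaking amplitude to a weapon type: a smaller amplitude
        selects the bomb-type (higher-lethality) weapon, a larger amplitude
        the shooting-type weapon."""
        if amplitude_deg < 30.0:           # first head shaking amplitude level
            return "bomb-type weapon"
        return "shooting-type weapon"      # second head shaking amplitude level

    print(weapon_for_shake_amplitude(20.0))  # -> bomb-type weapon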
11. The method of claim 9, wherein prior to the controlling the virtual character to execute the target action, the method further comprises:
acquiring a head shaking duration corresponding to the head action of the user;
and acquiring a fourth correspondence between preset head shaking duration and the number of weapons fired, and determining the number of weapons fired by the virtual character according to the fourth correspondence.
12. The method of claim 11, wherein the determining the number of weapons fired by the virtual character according to the fourth correspondence comprises:
the longer the head shaking duration, the greater the corresponding number of weapons fired.
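Claim 12 only requires the number of weapons fired to grow with the head shaking duration; the 2-per-second rate and the cap of 20 below are assumptions chosen for the sketch.

    def weapons_fired(shake_seconds: float) -> int:
        """Monotonically map head shaking duration to the number of
        weapons fired, with an assumed rate and upper bound."""
        return min(int(shake_seconds * 2), 20)

    print(weapons_fired(3.5))  # -> 7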
13. The method of claim 9, wherein prior to the controlling the virtual character to execute the target action, the method further comprises:
acquiring a head movement direction corresponding to the head action of the user;
and acquiring a fifth correspondence between a preset head movement direction and firing a weapon in a preset direction indicated by an ear, and determining, according to the fifth correspondence, that the virtual character fires a weapon in the preset direction indicated by the ear.
14. The method of claim 13, wherein the determining, according to the fifth correspondence, that the virtual character fires a weapon in the preset direction indicated by the ear comprises:
if the user's head turns to the right, determining that the virtual character fires a weapon in any preset direction among the upper right, the right side, and the lower right of the virtual environment interface; or,
if the user's head turns to the left, determining that the virtual character fires a weapon in any preset direction among the upper left, the left side, and the lower left of the virtual environment interface.
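A sketch of the fifth correspondence of claims 13 and 14; picking uniformly at random among the three right-hand (or left-hand) preset directions is an assumption, since the claims allow any of them.

    import random

    PRESET_DIRECTIONS = {
        "right": ["upper right", "right side", "lower right"],
        "left": ["upper left", "left side", "lower left"],
    }

    def fire_direction(head_turn: str) -> str:
        """Select a preset firing direction on the virtual environment
        interface from the side the user's head turns toward."""
        return random.choice(PRESET_DIRECTIONS[head_turn])

    print(fire_direction("right"))  # e.g. -> lower right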
15. A control device based on head actions of a user, comprising:
a display unit, configured to display a user interface of an application program on a sensing control screen of a terminal, wherein the user interface comprises a virtual environment interface and a real interface for displaying the user's head portrait;
a receiving unit, configured to receive a sensing operation generated based on a head action of the user;
and a processing unit, configured to judge whether the sensing operation received by the receiving unit meets an execution condition, and to control a virtual character to execute a target action if the sensing operation meets the execution condition.
16. A computer-readable storage medium, on which a computer program is stored, wherein the program, when executed by a processor, carries out the method according to any one of claims 1 to 14.
17. An electronic device, comprising:
one or more processors;
storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to carry out the method of any one of claims 1 to 14.
CN201911102941.5A 2019-11-12 2019-11-12 Control method and device based on user head motion, medium and electronic equipment Active CN111061360B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911102941.5A CN111061360B (en) 2019-11-12 2019-11-12 Control method and device based on user head motion, medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN111061360A true CN111061360A (en) 2020-04-24
CN111061360B CN111061360B (en) 2023-08-22

Family

ID=70298023

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911102941.5A Active CN111061360B (en) 2019-11-12 2019-11-12 Control method and device based on user head motion, medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN111061360B (en)

Patent Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006277076A (en) * 2005-03-28 2006-10-12 Fuji Electric Device Technology Co Ltd Image interface device
CN2892214Y (en) * 2006-04-30 2007-04-25 吴铁励 Entertainment machine by human's body gesture operation
CN201945946U (en) * 2011-01-20 2011-08-24 叶尔肯·拜山 Head control mouse
US20180321903A1 (en) * 2013-08-23 2018-11-08 Tobii Ab Systems and methods for providing audio to a user based on gaze input
JP2016122177A (en) * 2014-12-25 2016-07-07 セイコーエプソン株式会社 Display device and control method of display device
CN104820542A (en) * 2015-05-27 2015-08-05 网易(杭州)网络有限公司 Display method and device for mobile game operating interface
CN106407985A (en) * 2016-08-26 2017-02-15 中国电子科技集团公司第三十八研究所 Three-dimensional human head point cloud feature extraction method and device thereof
KR20180056035A (en) * 2016-11-18 2018-05-28 엘지전자 주식회사 Head mounted display and method for controlling the same
CN107357432A (en) * 2017-07-18 2017-11-17 歌尔科技有限公司 Exchange method and device based on VR
WO2019060889A1 (en) * 2017-09-25 2019-03-28 Ventana 3D, Llc Artificial intelligence (a) character system capable of natural verbal and visual interactions with a human
US20190118078A1 (en) * 2017-10-23 2019-04-25 Netease (Hangzhou) Network Co.,Ltd. Information Processing Method and Apparatus, Storage Medium, and Electronic Device
JP2018124981A (en) * 2017-10-25 2018-08-09 株式会社コロプラ Information processing method, information processing device and program causing computer to execute information processing method
JP2019136066A (en) * 2018-02-06 2019-08-22 グリー株式会社 Application processing system, application processing method, and application processing program
CN110420457A (en) * 2018-09-30 2019-11-08 网易(杭州)网络有限公司 A kind of suspension procedure method, apparatus, terminal and storage medium
CN109568937A (en) * 2018-10-31 2019-04-05 北京市商汤科技开发有限公司 Game control method and device, game terminal and storage medium
CN109558243A (en) * 2018-11-30 2019-04-02 Oppo广东移动通信有限公司 Processing method, device, storage medium and the terminal of virtual data
CN110308792A (en) * 2019-07-01 2019-10-08 北京百度网讯科技有限公司 Control method, device, equipment and the readable storage medium storing program for executing of virtual role

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114222021A (en) * 2020-09-03 2022-03-22 荣耀终端有限公司 Screen-off display method and electronic equipment
US11823603B2 (en) 2020-09-03 2023-11-21 Honor Device Co., Ltd. Always-on-display method and electronic device

Also Published As

Publication number Publication date
CN111061360B (en) 2023-08-22

Similar Documents

Publication Publication Date Title
CN111265869B (en) Virtual object detection method, device, terminal and storage medium
US20210001218A1 (en) Virtual character control method and apparatus, terminal, and computer-readable storage medium
CN109445662B (en) Operation control method and device for virtual object, electronic equipment and storage medium
US20190091561A1 (en) Method and apparatus for controlling virtual character, electronic device, and storage medium
CN109091869B (en) Method and device for controlling action of virtual object, computer equipment and storage medium
CN110141859B (en) Virtual object control method, device, terminal and storage medium
CN108837507A (en) Virtual item control method and device, electronic equipment, storage medium
EP3726843B1 (en) Animation implementation method, terminal and storage medium
US20230076343A1 (en) Virtual item selection interface
US20220193550A1 (en) Action Generation Method, Electronic Device, and Non-Transitory Computer-Readable Medium
CN112569596B (en) Video picture display method and device, computer equipment and storage medium
CN110755844B (en) Skill activation method and device, electronic equipment and storage medium
US20170161011A1 (en) Play control method and electronic client
US20230356075A1 (en) Method, computer device, and storage medium for virtual object switching
US20220105432A1 (en) Virtual object control method and apparatus, terminal, and storage medium
US20230271083A1 (en) Method and apparatus for controlling ar game, electronic device and storage medium
CN110992947B (en) Voice-based interaction method, device, medium and electronic equipment
CN111061360B (en) Control method and device based on user head motion, medium and electronic equipment
CN109731337B (en) Method and device for creating special effect of particles in Unity, electronic equipment and storage medium
CN111318020B (en) Virtual object control method, device, equipment and storage medium
CN110882537B (en) Interaction method, device, medium and electronic equipment
CN108769149B (en) Application partition processing method and device and computer readable storage medium
CN114245031B (en) Image display method and device, electronic equipment and storage medium
CN111013139B (en) Role interaction method, system, medium and electronic equipment
CN111068308A (en) Data processing method, device, medium and electronic equipment based on mouth movement

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant