CN110928410A - Interaction method, device, medium and electronic equipment based on multiple expression actions - Google Patents
Interaction method, device, medium and electronic equipment based on multiple expression actions
- Publication number
- CN110928410A (application number CN201911102952.3A)
- Authority
- CN
- China
- Prior art keywords
- region
- head
- expression image
- mouth
- eye
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/012—Head tracking input arrangements
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/20—Input arrangements for video game devices
- A63F13/21—Input arrangements for video game devices characterised by their sensors, purposes or types
- A63F13/213—Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/40—Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
- A63F13/42—Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
- A63F13/428—Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle involving motion or position input signals, e.g. signals representing the rotation of an input controller or a player's arm motions sensed by accelerometers or gyroscopes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F2300/00—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
- A63F2300/10—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by input arrangements for converting player-generated signals into game device control signals
- A63F2300/1087—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by input arrangements for converting player-generated signals into game device control signals comprising photodetecting means, e.g. a camera
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The disclosure provides an interaction method, device, medium and electronic equipment based on a plurality of expression actions. The method comprises the following steps: acquiring an expression image of a user through a camera, and displaying the expression image on an interactive interface; monitoring the motion characteristics of the expression image in real time, wherein the motion characteristics comprise head, eye and mouth motion characteristics; and when at least two action characteristics occur simultaneously, triggering an offensive identifier so that the offensive identifier is displayed on the interactive interface in a preset form. On one hand, the method provides an additional interaction mode: the character in the interface terminal, or the interaction among several characters, can be controlled through expression actions, freeing the hands from gesture control and making the control mode simpler and more convenient. On the other hand, the user's real avatar is mapped to the interactive interface, so the user participates more deeply in the character interaction and the fun of face-based interaction is increased.
Description
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to an interaction method, an interaction device, an interaction medium, and an electronic device based on multiple expressions and actions.
Background
With the rapid development of the mobile internet, interactive terminals have gradually shifted from PCs to mobile terminals. Mobile games, for example, are still developed and controlled through physical touch. In some scenarios, two-handed touch control is clearly too cumbersome in landscape orientation; for casual mini-program entertainment played anytime and anywhere, such complicated controls are even more awkward, and a mobile phone with a smaller screen is even less convenient to operate.
BRIEF SUMMARY OF THE PRESENT DISCLOSURE
The present disclosure is directed to an interaction method, an interaction device, an interaction medium, and an electronic device based on a plurality of expression actions, which are capable of solving at least one of the above-mentioned technical problems. The specific scheme is as follows:
according to a specific implementation manner of the present disclosure, in a first aspect, the present disclosure provides an interaction method based on multiple expression actions, including:
the method comprises the steps that an expression image of a user is obtained through a camera, and the expression image is displayed on an interactive interface, wherein the interactive interface comprises a first area for displaying a head portrait of the user and a second area for displaying a virtual head portrait;
monitoring the motion characteristics of the expression images in real time, wherein the motion characteristics comprise: head, eye, and mouth motion characteristics;
and when at least two action characteristics occur simultaneously, triggering the aggressive identification to display the aggressive identification on the interactive interface in a preset form.
Optionally, the obtaining of the expression image of the user through the camera and displaying the expression image on the interactive interface include:
the method comprises the steps that a real expression image of a user is obtained through a camera, and the real expression image is displayed on an interactive interface in real time; or,
the method comprises the steps of obtaining a real expression image of a user through a camera, selecting a model head portrait matched with the real expression image, and displaying the model head portrait on an interactive interface in real time.
Optionally, monitoring the motion characteristics of the expression image in real time, wherein the motion characteristics include head motion characteristics, eye motion characteristics and mouth motion characteristics, includes:
dividing the expression image into a head region, an eye region and a mouth region;
the head region, the eye region and the mouth region each comprise a plurality of feature regions;
selecting at least one feature point in each feature region;
and monitoring the motion characteristics of the expression image through the position change of the at least one feature point.
Optionally, the head region, the eye region and the mouth region include a plurality of feature regions, including:
the characteristic region of the head region includes: ears and nose;
the characteristic region of the mouth region includes: upper lip, lower lip and chin;
the characteristic region of the eye region includes: eyebrows, eyelids, and pupils.
Optionally, the determining the motion vector R of the at least one feature point through the change of the position information includes:
at a first time t1, acquiring first position coordinate information of one feature point;
at a second time t2, acquiring second position coordinate information of the same feature point;
calculating a movement vector R of the feature point according to the first position coordinate information and the second position coordinate information;
and monitoring the motion characteristics of the expression image through the movement vector R of the at least one feature point.
Optionally, the interval Δt between the second time t2 and the first time t1 is within a preset range.
Optionally, the monitoring the motion characteristics of the expression image through the position change of the at least one feature point includes:
acquiring two continuous frames of images of the expression image;
comparing the position information of all the characteristic points in the two continuous frames of images;
monitoring whether the position information of each feature point is changed;
and determining the action characteristics of the head region, the eye region and/or the mouth region according to the change of the position information of each feature point.
Optionally, when at least two of the motion characteristics occur simultaneously, triggering an offensive identifier, so that the offensive identifier is displayed on the interactive interface in a preset form, including:
when any two or three regions of the head region, the eye region and the mouth region simultaneously generate respective action characteristics;
triggering at least one offensive mark of the head area, the eye area and the mouth area, so that the offensive mark is displayed on the interactive interface in a preset form.
Optionally, the head motion characteristic, the eye motion characteristic and the mouth motion characteristic respectively have respective offensive identifiers, and the offensive identifiers include an injury feature value, a speed feature value and a distance feature value.
When at least two action characteristics occur simultaneously, triggering an aggressive identifier to enable the aggressive identifier to be displayed on the interactive interface in a preset form, wherein the method comprises the following steps:
when the eye action characteristic and/or the mouth action characteristic occur together with the head action characteristic, triggering the eye and/or mouth offensive identifier in combination with the head action characteristic, so that the eye and/or mouth offensive identifier is displayed on the interactive interface in a preset form according to the head action characteristic.
Optionally, the method further includes:
acquiring gesture direction information through a camera;
determining a projection direction of the different offensive marks based on the gesture direction information.
According to a second aspect, the present disclosure provides an interactive device based on a plurality of expression actions, including:
the system comprises an acquisition unit, a display unit and a display unit, wherein the acquisition unit is used for acquiring an expression image of a user through a camera and displaying the expression image on an interactive interface, and the interactive interface comprises a first area for displaying a head portrait of the user and a second area for displaying a virtual head portrait;
the monitoring unit is used for monitoring the action characteristics of the expression image in real time, and the action characteristics comprise: head, eye, and mouth motion characteristics;
and the triggering unit is used for triggering the aggressive identification when at least two action characteristics occur simultaneously, so that the aggressive identification is displayed on the interactive interface in a preset form.
According to a third aspect, the present disclosure provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method of any one of the above.
According to a fourth aspect, the present disclosure provides an electronic device, comprising: one or more processors; and storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method of any one of the above.
Compared with the prior art, the scheme of the embodiments of the disclosure has at least the following beneficial effects. According to the interaction method, device, medium and electronic equipment based on a plurality of expression actions, on one hand an interaction mode different from touch control is provided: the character in the interface terminal, or the interaction among several characters, can be controlled through the coordination of multiple expression actions, freeing the hands from gesture control and making the control mode simpler and more convenient. On the other hand, the user's real avatar is mapped to the interactive interface, so the user participates more deeply in the character interaction and the fun of face-based interaction is increased.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure. It is to be understood that the drawings in the following description are merely exemplary of the disclosure, and that other drawings may be derived from those drawings by one of ordinary skill in the art without the exercise of inventive faculty. In the drawings:
fig. 1 illustrates an application scene diagram of an interaction method based on facial expression actions according to an embodiment of the present disclosure;
FIG. 2 illustrates a flow diagram of an interaction method based on facial expression actions according to an embodiment of the present disclosure;
FIG. 3 shows a schematic diagram of an interaction based on a facial expression action according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram illustrating a structure of an interactive device based on facial expression actions according to an embodiment of the present disclosure;
fig. 5 shows an electronic device connection structure schematic according to an embodiment of the present disclosure.
Detailed Description
To make the objects, technical solutions and advantages of the present disclosure clearer, the present disclosure will be described in further detail with reference to the accompanying drawings, and it is apparent that the described embodiments are only a part of the embodiments of the present disclosure, rather than all embodiments. All other embodiments, which can be derived by one of ordinary skill in the art from the embodiments disclosed herein without making any creative effort, shall fall within the scope of protection of the present disclosure.
The terminology used in the embodiments of the present disclosure is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used in the disclosed embodiments and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise, and "a plurality" typically includes at least two.
It should be understood that the term "and/or" as used herein is merely one type of association that describes an associated object, meaning that three relationships may exist, e.g., a and/or B may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship.
It should be understood that although the terms first, second, third, etc. may be used in embodiments of the present disclosure to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element could also be referred to as a second element and, similarly, a second element could also be referred to as a first element, without departing from the scope of embodiments of the present disclosure.
The words "if", as used herein, may be interpreted as "at … …" or "at … …" or "in response to a determination" or "in response to a detection", depending on the context. Similarly, the phrases "if determined" or "if detected (a stated condition or event)" may be interpreted as "when determined" or "in response to a determination" or "when detected (a stated condition or event)" or "in response to a detection (a stated condition or event)", depending on the context.
It is also noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that an article or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such article or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in the article or device in which the element is included.
Alternative embodiments of the present disclosure are described in detail below with reference to the accompanying drawings.
Example 1
As shown in fig. 1, the application scene according to an embodiment of the present disclosure is an applet that provides real-time interaction using real expressions. The applet includes a first area and a second area. The first area is used for displaying the real avatar of the user together with a character tag displayed alongside it; the real avatar is acquired by the camera of the mobile terminal and then projected onto the display interface. The second area provides data interaction with the first area. In one embodiment, the second area contains a plurality of adversarial characters that compete with the user of the first area; in another embodiment, the second area shows the real avatar of another user, the two users battle each other, and the two areas fight by launching various weapons whose launch is controlled by the characters' expression actions. The present invention is not limited to the above application scenarios, and any interaction scenario to which this embodiment can be applied is included; for convenience of description, this embodiment takes the character-confrontation applet described above as an example.
As shown in fig. 2, according to a specific embodiment of the present disclosure, the present disclosure provides an interaction method based on a plurality of expression actions, which includes the following steps:
step S202: the method comprises the steps of obtaining an expression image of a user through a camera, and displaying the expression image on an interactive interface.
The camera can be any camera carried by a terminal that can conveniently capture the user's head image, and is particularly suitable for devices such as mobile terminals and mobile phones. The camera stores the acquired expression images of the user in the terminal memory; the application then retrieves them and displays them on the application interface.
After the mobile terminal client obtains the expression image information, it sends it to the display interface. In one implementation, the user's real facial information is displayed on the display interface and moves in real time with the user's facial expressions; this display mode increases the user's interest in the interaction. In another embodiment, after the user's real facial expression is obtained, the user can choose through the interface whether to display it. If not, a model face can be selected for display: every position of the model face corresponds one-to-one to the corresponding position of the user's face, and when the facial expression changes, the expression at the corresponding position of the model reflects the same change. For example, any preset avatar such as a dinosaur model or a cartoon model can be selected.
As shown in fig. 1, the application program is usually an applet that can be opened at any time. An applet has the advantage of flexible and convenient access and can be used for entertainment anytime and anywhere. In one embodiment, the applet interface is divided into an upper area and a lower area, and the characters in the two areas interact. For example, the applet includes a first area and a second area; the first area is used for displaying the real avatar of the user and a character tag displayed alongside it, such as the character's life value, the user's money amount, score, weapon count, and so on. The real avatar is acquired by the camera of the mobile terminal and then projected onto the display interface, and the second area provides data interaction with the first area. The first area receives the user's real-time avatar and interacts with the second area in multiple ways through the cooperation of the user's real-time facial expressions and head actions. The interaction mode is not limited; for example, the mouth launches a fireball toward the second area, or an ear launches a weapon such as a bullet toward the second area. The second area contains a battle-type character that can use evasion and defense to counter weapons such as fireballs and bullets launched from the first area; the interactive objects (e.g., characters) of the second area may likewise launch fireballs, bullets or other marks, so the user must also defend against offensive interactions from the second area. When either party is hit, the information used to determine the end of the interaction (such as its life value) is reduced; once one party's value drops to zero, the interaction ends.
Optionally, the obtaining of the expression image of the user through the camera and displaying the expression image on the interactive interface include:
the method comprises the steps that a real expression image of a user is obtained through a camera, and the real expression image is displayed on an interactive interface in real time; or,
the method comprises the steps of obtaining a real expression image of a user through a camera, selecting a model head portrait matched with the real expression image, and displaying the model head portrait on an interactive interface in real time.
Whether to display the real avatar or a model avatar is selected when entering the interactive interface. If a model avatar is chosen, the user can further select which model avatar to display, for example an animal avatar model such as a dinosaur or a frog, or a cartoon avatar model.
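As an illustration of the one-to-one correspondence between the user's real facial positions and the selected model avatar described above, the following is a minimal sketch assuming facial positions are available as 2D coordinates; the function name `retarget_to_model` and the neutral-pose offset scheme are illustrative assumptions, not taken from the patent.

```python
from typing import Dict, Tuple

Point = Tuple[float, float]

def retarget_to_model(user_landmarks: Dict[str, Point],
                      user_neutral: Dict[str, Point],
                      model_landmarks: Dict[str, Point]) -> Dict[str, Point]:
    """Apply the user's expression change to the model avatar point-for-point.

    Each model position is shifted by the same offset as the corresponding user
    feature point relative to the user's neutral pose, so a change in the real
    expression is reflected at the matching position of the model avatar.
    """
    retargeted: Dict[str, Point] = {}
    for point_id, (mx, my) in model_landmarks.items():
        ux, uy = user_landmarks.get(point_id, user_neutral.get(point_id, (0.0, 0.0)))
        nx, ny = user_neutral.get(point_id, (ux, uy))
        retargeted[point_id] = (mx + (ux - nx), my + (uy - ny))
    return retargeted
```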
Step S204: monitoring the motion characteristics of the expression images in real time, wherein the motion characteristics comprise: head motion characteristics, eye motion characteristics, and mouth motion characteristics.
As shown in fig. 3, after the expression image of the real avatar or the model avatar of the user is acquired, the change process of the expression image needs to be monitored in real time in a software and/or hardware manner, and whether there is any motion in the head, eyes and mouth region is monitored.
Wherein the monitoring of the head region is by monitoring whether the position of the ears and/or nose changes; monitoring the eye region by monitoring whether the positions of eyebrows, eyelids and pupils are changed; the mouth region is monitored by monitoring whether the position of the upper lip, lower lip, and chin changes.
Optionally, monitoring the motion characteristics of the expression image in real time, wherein the motion characteristics include head motion characteristics, eye motion characteristics and mouth motion characteristics, comprises the following sub-steps:
step S204-1: the expression image is divided into a head region, an eye region, and a mouth region, as shown in fig. 3.
Optionally, the head region, the eye region and the mouth region include a plurality of feature regions, including: the characteristic region of the head region includes: ears and nose; the characteristic region of the mouth region includes: upper lip, lower lip and chin; the characteristic region of the eye region includes: eyebrows, eyelids, and pupils.
Step S204-2: at least one feature point is selected in each of the feature regions.
Feature points are points defined, somewhat arbitrarily, for analyzing the movement of each region of the face; one or more feature points may be selected. As shown in fig. 3, for example, the feature points selected in the head region may be a feature point a1 on either ear and a feature point a2 on the tip of the nose; the feature points selected in the mouth region may be a feature point b1 on the upper lip, a feature point b2 on the lower lip, and a feature point b3 on the chin; and the feature points selected in the eye region may be a feature point c1 on either eyebrow, a feature point c2 on either eyelid, and a feature point c3 on either pupil. Although feature points can be selected arbitrarily, positions where the feature region moves with a clear amplitude, such as the center of the lips or the lowest point of the chin, are preferably chosen so that the movement vectors of the feature points can be recorded accurately.
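The following sketch shows one possible way to organize the three regions, their feature regions, and the sampled feature points a1, a2, b1–b3, c1–c3. It assumes a generic 2D facial-landmark detector; the names `FeaturePoint`, `FeatureRegion`, `EXPRESSION_REGIONS` and `build_regions` are illustrative and do not appear in the patent.

```python
from dataclasses import dataclass, field
from typing import Dict, Tuple

@dataclass
class FeaturePoint:
    """A tracked landmark inside a feature region, e.g. a1 on an ear or b3 on the chin."""
    name: str
    position: Tuple[float, float]  # (x, y) coordinates on the interactive interface

@dataclass
class FeatureRegion:
    """A monitored region (head/eye/mouth) holding its selected feature points."""
    name: str
    points: Dict[str, FeaturePoint] = field(default_factory=dict)

# The three monitored regions, their feature regions, and the feature points sampled in them.
EXPRESSION_REGIONS = {
    "head":  {"ear": ["a1"], "nose": ["a2"]},
    "mouth": {"upper_lip": ["b1"], "lower_lip": ["b2"], "chin": ["b3"]},
    "eye":   {"eyebrow": ["c1"], "eyelid": ["c2"], "pupil": ["c3"]},
}

def build_regions(landmarks: Dict[str, Tuple[float, float]]) -> Dict[str, FeatureRegion]:
    """Group detected landmark coordinates (e.g. {"a1": (120.0, 80.5), ...}) by region."""
    regions: Dict[str, FeatureRegion] = {}
    for region_name, feature_regions in EXPRESSION_REGIONS.items():
        region = FeatureRegion(name=region_name)
        for feature_name, point_ids in feature_regions.items():
            for pid in point_ids:
                if pid in landmarks:
                    region.points[pid] = FeaturePoint(name=f"{feature_name}:{pid}",
                                                      position=landmarks[pid])
        regions[region_name] = region
    return regions
```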
Step S204-3: and monitoring the motion characteristics of the expression image through the position change of the at least one feature point.
As an optional implementation, the monitoring the motion characteristic of the expression image through the position change of the at least one feature point includes:
determining location information of the at least one feature point. Determining a movement vector R of the at least one feature point by the change of the position information. And monitoring the motion characteristics of the expression image through the movement vector R of the at least one feature point.
The movement vector R of at least one feature point is determined from the change in its position information. For example, at a first time t1, first position coordinate information of one feature point is acquired; at a second time t2, second position coordinate information of the same feature point is acquired; and the movement vector R of the feature point is calculated from the first and second position coordinate information. The interval Δt between the second time t2 and the first time t1 is within a preset range, for example 0.1–2 s; if the interval is too long, the change can be considered not to be an effective voluntary control action.
After the mobile terminal acquires the user's face information through the camera, it monitors in real time whether the coordinate information of the defined feature points changes. For example, after the interaction starts, the position coordinates of feature point a1 are first recorded, and the terminal then continuously monitors whether the position coordinates of feature point a1 change.
At the second time t2, the mobile terminal can determine whether the position of feature point a1 has changed by comparing its coordinate parameters. The position change is usually within a reasonable range; if it exceeds that range, the change can be judged to be abnormal, for example because the camera moved too much or the user's head moved to another position, causing an unintended action.
From the coordinate parameters of the two positions, a movement vector R including a movement distance and a movement direction can be obtained; by judging the movement direction, the specific motion of each part of the facial expression can be determined.
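A minimal sketch of this movement-vector computation follows, assuming positions are plain 2D screen coordinates. The 0.1–2 s interval check and the "reasonable range" check come from the description above; the function name and the 200-pixel travel threshold are illustrative assumptions.

```python
import math
from typing import Optional, Tuple

def movement_vector(p1: Tuple[float, float], t1: float,
                    p2: Tuple[float, float], t2: float,
                    min_interval_s: float = 0.1,
                    max_interval_s: float = 2.0,
                    max_travel_px: float = 200.0) -> Optional[Tuple[float, float]]:
    """Return the movement vector R = p2 - p1 of one feature point, or None if invalid.

    The interval Δt = t2 - t1 must lie within the preset range; a displacement beyond
    a reasonable range is treated as abnormal (camera shake, the user's head leaving
    the frame) rather than as a deliberate action.
    """
    dt = t2 - t1
    if not (min_interval_s <= dt <= max_interval_s):
        return None  # too long an interval does not count as effective voluntary control
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    if math.hypot(rx, ry) > max_travel_px:
        return None  # abnormal jump, ignore
    return (rx, ry)  # movement distance = |R|, movement direction = atan2(ry, rx)
```

For example, `movement_vector((120, 80), 0.0, (120, 95), 0.5)` returns `(0, 15)`: feature point b2 on the lower lip has moved downward, which can be interpreted as the mouth opening.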
As another optional implementation, the monitoring the motion characteristics of the expression image through the position change of the at least one feature point includes:
acquiring two continuous frames of images of the expression image; comparing the position information of all the characteristic points in the two continuous frames of images; monitoring whether the position information of each feature point is changed; and determining the action characteristics of the head region, the eye region and/or the mouth region according to the change of the position information of each feature point.
After the mobile terminal acquires the image of the user's mouth region, the images captured after the applet is started are stored in a back-end database in real time, so the database continuously stores the user's mouth images. For the stored mouth images, the back-end server compares the feature-point position information of two consecutive frames in real time. If the two frames are the same, the identical earlier frame is deleted; if the feature-point position information has changed, the action characteristic of the mouth is determined from that change. For example, it can be determined that, relative to the previous frame, the mouth in the later frame is opened, pouted, or curled to one side.
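The frame-differencing variant just described might look like the following sketch. It assumes each frame has already been reduced to a dictionary of feature-point coordinates (the points a1–c3 defined earlier); the `changed_regions` helper, the point-to-region mapping and the pixel tolerance are illustrative assumptions.

```python
from typing import Dict, Set, Tuple

Landmarks = Dict[str, Tuple[float, float]]  # e.g. {"a1": (120.0, 80.5), "b2": (64.0, 210.2), ...}

POINT_TO_REGION = {"a1": "head", "a2": "head",
                   "b1": "mouth", "b2": "mouth", "b3": "mouth",
                   "c1": "eye", "c2": "eye", "c3": "eye"}

def changed_regions(prev: Landmarks, curr: Landmarks, tol: float = 2.0) -> Set[str]:
    """Compare two consecutive frames and return which regions (head/eye/mouth) moved.

    If no feature point moved more than `tol` pixels, the frames are considered
    identical and the older one can be discarded, as described for the back-end database.
    """
    moved: Set[str] = set()
    for pid, (x2, y2) in curr.items():
        if pid not in prev:
            continue
        x1, y1 = prev[pid]
        if abs(x2 - x1) > tol or abs(y2 - y1) > tol:
            moved.add(POINT_TO_REGION.get(pid, "unknown"))
    return moved
```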
Step S206: and when at least two action characteristics occur simultaneously, triggering the aggressive identification to display the aggressive identification on the interactive interface in a preset form.
When at least two of the above action characteristics occur simultaneously, for example the head moves (shaking or nodding), the eyes move (such as blinking or raising the eyebrows), or the mouth moves (such as continuous pouting), these actions can be combined so that the eyes and/or mouth fire a weapon while the head action indicates its direction.
The offensive identifier can be presented in any form, such as a line or a circle. Preferably, the offensive identifier is an attacking weapon matched to the actions of the head, eyes and mouth, and different types of offensive identifiers can be defined, configured and rendered on the interactive interface according to the requirements of different programs. For confrontation-type interaction, alternative offensive identifiers include, but are not limited to, laser beams, bullets, shells, fireballs and the like, each type having its own characteristics, such as an injury characteristic value, a speed characteristic value and a distance characteristic value, each represented as data. For example, a fireball may have an injury characteristic value of 10000, a speed characteristic value of 100 and a distance characteristic value of 20; a shell may have an injury characteristic value of 8000, a speed characteristic value of 200 and a distance characteristic value of 30; and a bullet may have an injury characteristic value of 1000, a speed characteristic value of 300 and a distance characteristic value of 40, and so on.
Different types of offensive identifiers need to be rendered so that the confrontation effect is better; different weapons can be rendered with different effects according to their realistic appearance characteristics. The rendering method is well known and is not described further here. Specific effects may include, for example, opening the mouth to release a fireball from the mouth, pouting the mouth to release a shell, or curling the mouth to release a bullet. The above description of attacking weapons is not exhaustive.
Different offensive identifiers are triggered by different expression action characteristics, and each offensive identifier comprises an injury characteristic value, a speed characteristic value and a distance characteristic value. The injury characteristic value is the amount by which the life value of an interactive character is reduced after being hit by a weapon with that injury characteristic value; for example, a life value of 100000 is reduced by 10000 after being hit by a fireball. The speed characteristic value is the speed at which the weapon moves across the screen after being fired; for example, a fireball's speed of 100 may correspond to a movement of 100 pixels, or to some other agreed scale. The distance characteristic value is the distance over which the weapon can fire; for example, a fireball may only reach a position directly below the mouth, while a bullet can cross the entire screen. The characteristics of an attacking weapon may also include its damage range; for example, a fireball has a diameter, and its damage range may be slightly larger than that diameter, so that any target within the range is damaged.
For example:
opening the mouth triggers a first offensive identifier, wherein the first offensive identifier has a first injury characteristic value, a first speed characteristic value and a first distance characteristic value; and/or,
pouting the mouth triggers a second offensive identifier having a second injury characteristic value, a second speed characteristic value and a second distance characteristic value; and/or,
curling the mouth triggers a third offensive identifier having a third injury characteristic value, a third speed characteristic value and a third distance characteristic value.
The first injury characteristic value is greater than a second injury characteristic value, which is greater than a third injury characteristic value; the first speed characteristic value is less than a second speed characteristic value, which is less than a third speed characteristic value; the first distance eigenvalue is less than a second distance eigenvalue, and the second distance eigenvalue is less than a third distance eigenvalue.
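One way to tabulate the offensive identifiers and their characteristic values is sketched below, using the example numbers given above (fireball 10000/100/20, shell 8000/200/30, bullet 1000/300/40) and the mouth-action mapping (open → fireball, pout → shell, curl → bullet). The class and field names are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class OffensiveIdentifier:
    name: str
    injury: int    # life-value loss inflicted on the character that is hit
    speed: int     # on-screen travel speed after firing
    distance: int  # firing range: how far across the screen the weapon travels

# First/second/third offensive identifiers triggered by mouth action characteristics.
MOUTH_WEAPONS = {
    "open": OffensiveIdentifier("fireball", injury=10000, speed=100, distance=20),
    "pout": OffensiveIdentifier("shell",    injury=8000,  speed=200, distance=30),
    "curl": OffensiveIdentifier("bullet",   injury=1000,  speed=300, distance=40),
}

def apply_hit(life_value: int, weapon: OffensiveIdentifier) -> int:
    """Reduce the target's life value by the weapon's injury characteristic value."""
    return max(0, life_value - weapon.injury)  # e.g. 100000 -> 90000 after a fireball hit
```

Note that the example values respect the ordering stated above: injuries decrease (10000 > 8000 > 1000) while speeds and distances increase from the first to the third identifier.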
Optionally, when at least two of the motion characteristics occur simultaneously, triggering an offensive identifier to display the offensive identifier on the interactive interface in a preset form, includes the following substeps:
step S206-1: when any two or three regions of the head region, the eye region and the mouth region simultaneously generate respective action characteristics.
Step S206-2: triggering at least one offensive mark of the head area, the eye area and the mouth area, so that the offensive mark is displayed on the interactive interface in a preset form.
Optionally, the head motion characteristic, the eye motion characteristic and the mouth motion characteristic respectively have respective offensive identifiers, and the offensive identifiers include an injury feature value, a speed feature value and a distance feature value.
Optionally, when at least two of the motion characteristics occur simultaneously, triggering an offensive identifier, so that the offensive identifier is displayed on the interactive interface in a preset form, including:
when the eye action characteristic and/or the mouth action characteristic occur together with the head action characteristic, triggering the eye and/or mouth offensive identifier in combination with the head action characteristic, so that the eye and/or mouth offensive identifier is displayed on the interactive interface in a preset form according to the head action characteristic.
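The triggering rule of step S206 — fire only when at least two regions act at the same time, and let the head action characteristic supply the firing direction — might be sketched as follows. It builds on the hypothetical helpers introduced earlier (`changed_regions`, `MOUTH_WEAPONS`); the head-angle convention and the eye weapon choice are assumptions for illustration.

```python
from typing import Optional, Set, Tuple

def trigger_offensive_identifier(active_regions: Set[str],
                                 mouth_action: Optional[str],
                                 head_direction_deg: float) -> Optional[Tuple[str, float]]:
    """Return (weapon name, projection direction) when at least two action
    characteristics occur simultaneously, otherwise None.

    `active_regions` is the output of changed_regions(); `mouth_action` is one of
    "open"/"pout"/"curl"; `head_direction_deg` comes from the head action characteristic
    (e.g. the direction of the nose-tip movement vector).
    """
    if len(active_regions) < 2:
        return None  # a single action characteristic does not trigger anything
    if "mouth" in active_regions and mouth_action in MOUTH_WEAPONS:
        weapon = MOUTH_WEAPONS[mouth_action]
        # the mouth offensive identifier is displayed in combination with the
        # head action characteristic, which steers its projection direction
        return (weapon.name, head_direction_deg)
    if "eye" in active_regions:
        return ("laser_beam", head_direction_deg)  # illustrative eye offensive identifier
    return None
```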
This method provides an interaction approach based on a plurality of expression actions cooperating with each other. On one hand, it offers an interaction mode different from touch control: the character in the interface terminal, or the interaction between several characters, can be controlled through the coordination of multiple expression actions, freeing the hands from gesture control and making the control mode simpler and more convenient. On the other hand, the user's real avatar is mapped to the interactive interface, so the user participates more deeply in the character interaction and the fun of face-based interaction is increased.
Example 2
This embodiment implements the method steps described in Embodiment 1. Terms with the same names have the same meanings as in Embodiment 1, and the same technical effects are achieved, so the explanations given there are not repeated here.
As shown in fig. 4, according to a specific embodiment of the present disclosure, the present disclosure provides an interactive device based on a plurality of expression actions, including: an acquisition unit 402, a monitoring unit 404 and a triggering unit 406.
The obtaining unit 402 is configured to obtain an expression image of a user through a camera, and display the expression image on an interactive interface.
Optionally, the obtaining of the expression image of the user through the camera and displaying the expression image on the interactive interface include:
the method comprises the steps that a real expression image of a user is obtained through a camera, and the real expression image is displayed on an interactive interface in real time; or,
the method comprises the steps of obtaining a real expression image of a user through a camera, selecting a model head portrait matched with the real expression image, and displaying the model head portrait on an interactive interface in real time.
A monitoring unit 404, configured to monitor motion characteristics of the expression image in real time, where the motion characteristics include: head motion characteristics, eye motion characteristics, and mouth motion characteristics.
Optionally, the monitoring unit 404 is further configured to:
first, the expression image is divided into a head region, an eye region, and a mouth region, as shown in fig. 3.
Optionally, the head region, the eye region and the mouth region include a plurality of feature regions, including: the characteristic region of the head region includes: ears and nose; the characteristic region of the mouth region includes: upper lip, lower lip and chin; the characteristic region of the eye region includes: eyebrows, eyelids, and pupils.
Second, at least one feature point is selected in each of the feature regions.
Thirdly, the motion characteristics of the expression image are monitored through the position change of the at least one feature point.
As an embodiment, the monitoring the motion characteristics of the expression image through the position change of the at least one feature point includes:
1) determining location information of the at least one feature point.
2) Determining a movement vector R of the at least one feature point by the change of the position information.
3) And monitoring the motion characteristics of the expression image through the movement vector R of the at least one feature point.
Optionally, the determining the location information of the at least one feature point includes the following sub-steps:
and determining a reference position O of the interactive interface. Based on the reference position O, coordinate information of each feature point is determined. And determining the position information of at least one characteristic point on the interactive interface according to the coordinate information.
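A small sketch of how coordinates relative to the reference position O could be obtained is given below; choosing O as a fixed corner of the interactive interface and using pixel units are assumptions for illustration, not details from the patent.

```python
from typing import Dict, Tuple

def to_interface_coordinates(detected: Dict[str, Tuple[float, float]],
                             reference_o: Tuple[float, float]) -> Dict[str, Tuple[float, float]]:
    """Express each feature point's detected position relative to the reference
    position O of the interactive interface, yielding the position information
    that the monitoring unit compares over time."""
    ox, oy = reference_o
    return {pid: (x - ox, y - oy) for pid, (x, y) in detected.items()}
```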
Optionally, the determining the motion vector R of the at least one feature point through the change of the position information includes:
(1) At a first time t1, first position coordinate information of one feature point is acquired.
(2) At a second time t2, second position coordinate information of the same feature point is acquired.
(3) And calculating the movement vector R of the characteristic point according to the first position coordinate information and the second position coordinate information.
Optionally, the interval Δt between the second time t2 and the first time t1 is within a preset range, for example 0.1–2 s; if the interval is too long, the change can be considered not to be an effective voluntary control action.
As another embodiment, the monitoring the motion characteristic of the expression image through the position change of the at least one feature point includes the following sub-steps:
4) and acquiring two continuous frames of images of the expression image.
5) And comparing the position information of the feature points in the two continuous frames of images.
6) And monitoring whether the position information of the feature points changes.
7) And if so, determining the action characteristics of the head region, the eye region and/or the mouth region according to the change.
The triggering unit 406 is configured to trigger the aggressive identifier when at least two of the motion characteristics occur simultaneously, so that the aggressive identifier is displayed on the interactive interface in a preset form.
Optionally, the triggering unit 406 is further configured to:
first, when any two or three regions of the head region, the eye region and the mouth region simultaneously generate respective action characteristics.
Secondly, triggering at least one offensive mark of the head area, the eye area and the mouth area, so that the offensive mark is displayed on the interactive interface in a preset form.
Optionally, the head motion characteristic, the eye motion characteristic and the mouth motion characteristic respectively have respective offensive identifiers, and the offensive identifiers include an injury feature value, a speed feature value and a distance feature value.
Optionally, when at least two of the motion characteristics occur simultaneously, triggering an offensive identifier, so that the offensive identifier is displayed on the interactive interface in a preset form, including:
when the eye action characteristic and/or the mouth action characteristic occur together with the head action characteristic, triggering the eye and/or mouth offensive identifier in combination with the head action characteristic, so that the eye and/or mouth offensive identifier is displayed on the interactive interface in a preset form according to the head action characteristic.
By providing an interactive device based on a plurality of expression actions cooperating with each other, on one hand an interaction mode different from touch control is provided: the character in the interface terminal, or the interaction between several characters, can be controlled through the coordination of multiple expression actions, freeing the hands from gesture control and making the control mode simpler and more convenient. On the other hand, the user's real avatar is mapped to the interactive interface, so the user participates more deeply in the character interaction and the fun of face-based interaction is increased.
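As a rough structural sketch (not the patent's implementation), the three units of the device — acquisition unit 402, monitoring unit 404 and triggering unit 406 — can be chained as below, reusing the hypothetical helpers from Embodiment 1 (`changed_regions`, `trigger_offensive_identifier`). The `camera` and `interface` objects and their methods are placeholders.

```python
class InteractionDevice:
    """Pipeline of the three units: acquire -> monitor -> trigger."""

    def __init__(self, camera, interface):
        self.camera = camera        # acquisition unit 402: reads expression images
        self.interface = interface  # interactive interface with first and second areas
        self.prev_landmarks = None  # state kept by the monitoring unit 404

    def step(self):
        landmarks, mouth_action, head_direction = self.camera.read_expression()
        self.interface.show_avatar(landmarks)  # display the real or model avatar (unit 402)
        if self.prev_landmarks is not None:
            active = changed_regions(self.prev_landmarks, landmarks)             # unit 404
            fired = trigger_offensive_identifier(active, mouth_action, head_direction)  # unit 406
            if fired:
                weapon_name, direction = fired
                self.interface.render_offensive_identifier(weapon_name, direction)
        self.prev_landmarks = landmarks
```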
Example 3
As shown in fig. 5, the present embodiment provides an electronic device, where the electronic device is used for interaction, and the electronic device includes: at least one processor; and a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the one processor to cause the at least one processor to perform the method steps of the above embodiments.
Example 4
The disclosed embodiments provide a non-volatile computer storage medium having stored thereon computer-executable instructions that may perform the method steps as described in the embodiments above.
Example 5
Referring now to FIG. 5, shown is a schematic diagram of an electronic device suitable for use in implementing embodiments of the present disclosure. The terminal device in the embodiments of the present disclosure may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle terminal (e.g., a car navigation terminal), and the like, and a stationary terminal such as a digital TV, a desktop computer, and the like. The electronic device shown in fig. 5 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 5, the electronic device may include a processing means (e.g., central processing unit, graphics processor, etc.) 501 that may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM)502 or a program loaded from a storage means 508 into a Random Access Memory (RAM) 503. In the RAM 503, various programs and data necessary for the operation of the electronic apparatus are also stored. The processing device 501, the ROM 502, and the RAM 503 are connected to each other by a bus 505. An input/output (I/O) interface 505 is also connected to bus 505.
Generally, the following devices may be connected to the I/O interface 505: input devices 506 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 505 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, etc.; storage devices 508 including, for example, magnetic tape, hard disk, etc.; and a communication device 505. The communication means 505 may allow the electronic device to communicate with other devices wirelessly or by wire to exchange data. While fig. 5 illustrates an electronic device having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 505, or installed from the storage means 508, or installed from the ROM 502. The computer program performs the above-described functions defined in the methods of the embodiments of the present disclosure when executed by the processing device 501.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or by hardware. In some cases, the name of a unit does not constitute a limitation on the unit itself.
Claims (11)
1. An interaction method based on a plurality of expression actions, characterized by comprising the following steps:
acquiring an expression image of a user through a camera and displaying the expression image on an interactive interface, wherein the interactive interface comprises a first area for displaying an avatar of the user and a second area for displaying a virtual avatar;
monitoring action characteristics of the expression image in real time, wherein the action characteristics comprise head action characteristics, eye action characteristics, and mouth action characteristics;
and when at least two of the action characteristics occur simultaneously, triggering an offensive identifier so that the offensive identifier is displayed on the interactive interface in a preset form.
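A minimal sketch of how the "at least two simultaneous action characteristics" condition could be checked per frame is shown below; the detector callables and the display function are hypothetical placeholders, not names taken from the disclosure.

```python
# Sketch of the trigger logic: if at least two of the three action
# characteristics (head, eye, mouth) are detected in the same frame,
# an offensive identifier is shown on the interactive interface.
# The detector functions and display callback are assumed placeholders.
from typing import Callable, Dict, List

def check_offensive_trigger(
    frame,
    detectors: Dict[str, Callable],   # e.g. {"head": detect_head_action, ...}
    show_identifier: Callable[[List[str]], None],
) -> bool:
    """Return True and display the offensive identifier when at least
    two action characteristics occur simultaneously."""
    active = [name for name, detect in detectors.items() if detect(frame)]
    if len(active) >= 2:
        show_identifier(active)       # display in a preset form
        return True
    return False
```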
2. The interaction method according to claim 1, wherein the acquiring of the expression image of the user through the camera and the displaying of the expression image on the interactive interface comprises:
acquiring a real expression image of the user through the camera and displaying the real expression image in the first area in real time; or
acquiring the real expression image of the user through the camera, selecting a model avatar that matches the real expression image, and displaying the model avatar in the first area in real time.
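As an illustration of the acquisition step, the sketch below uses OpenCV to read frames from a camera and show them in a window standing in for the first area; the window name and loop structure are assumptions for demonstration, not part of the claimed method.

```python
# Illustrative acquisition loop: read the user's real expression image
# from a camera and display it in real time in a window that stands in
# for the "first area" of the interactive interface.
import cv2

cap = cv2.VideoCapture(0)              # default camera
try:
    while True:
        ok, frame = cap.read()         # one real expression image per iteration
        if not ok:
            break
        cv2.imshow("first_area_user_avatar", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):   # quit on 'q'
            break
finally:
    cap.release()
    cv2.destroyAllWindows()
```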
3. The interaction method according to claim 1, wherein the monitoring, in real time, of the action characteristics of the expression image, the action characteristics comprising head action characteristics, eye action characteristics, and mouth action characteristics, comprises:
dividing the expression image into a head region, an eye region, and a mouth region, wherein the head region, the eye region, and the mouth region each comprise a plurality of feature regions;
selecting at least one feature point in each feature region;
and monitoring the action characteristics of the expression image through position changes of the at least one feature point.
4. The interaction method according to claim 3, wherein the plurality of feature regions comprised in the head region, the eye region, and the mouth region are as follows:
the feature regions of the head region comprise the ears and the nose;
the feature regions of the mouth region comprise the upper lip, the lower lip, and the chin;
and the feature regions of the eye region comprise the eyebrows, the eyelids, and the pupils.
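One way to organize these feature regions is as a nested mapping from facial region to feature region to landmark indices; the index values below are illustrative placeholders rather than indices from any particular landmark model.

```python
# Hypothetical grouping of facial landmark indices into the three regions
# and their feature regions. The index numbers are placeholders; a real
# system would use the indices of its own landmark detector.
FEATURE_REGIONS = {
    "head": {
        "ears": [0, 16],
        "nose": [30],
    },
    "mouth": {
        "upper_lip": [50, 51, 52],
        "lower_lip": [56, 57, 58],
        "chin": [8],
    },
    "eye": {
        "eyebrows": [19, 24],
        "eyelids": [37, 41, 43, 47],
        "pupils": [68, 69],
    },
}

def feature_points(landmarks, region: str):
    """Collect the (x, y) coordinates of all feature points in one region."""
    return [landmarks[i] for fr in FEATURE_REGIONS[region].values() for i in fr]
```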
5. The interaction method according to claim 4, wherein the monitoring of the action characteristics of the expression image through position changes of the at least one feature point comprises:
acquiring first position coordinate information of a feature point at a first time t1, and acquiring second position coordinate information of the same feature point at a second time t2, wherein the interval Δt between the second time t2 and the first time t1 is within a preset range;
calculating a movement vector R of the feature point according to the first position coordinate information and the second position coordinate information;
and monitoring the action characteristics of the expression image through the movement vector R of the at least one feature point.
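A sketch of this movement-vector computation, assuming feature-point coordinates sampled at two times t1 and t2; the bound used for the preset range of Δt is an assumed value.

```python
# Movement vector R of one feature point between a first time t1 and a
# second time t2. MAX_INTERVAL stands in for the "preset range" of Δt.
import math

MAX_INTERVAL = 0.2  # seconds; assumed value for the preset range

def movement_vector(p1, t1, p2, t2):
    """Return ((dx, dy), magnitude) if t2 - t1 lies within the preset
    range, otherwise None. p1 and p2 are (x, y) coordinates."""
    dt = t2 - t1
    if not (0 < dt <= MAX_INTERVAL):
        return None
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    return (dx, dy), math.hypot(dx, dy)
```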
6. The interaction method according to claim 5, wherein the monitoring of the action characteristics of the expression image through position changes of the at least one feature point comprises:
acquiring two consecutive frames of the expression image;
comparing the position information of all the feature points in the two consecutive frames;
monitoring whether the position information of each feature point changes;
and determining the action characteristics of the head region, the eye region, and/or the mouth region according to the changes in the position information of each feature point.
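The frame-to-frame comparison can be sketched as follows, assuming each frame has already been reduced to a dictionary of feature-point coordinates; the movement threshold and the point-to-region mapping are assumptions for illustration.

```python
# Compare all feature points in two consecutive frames and report which
# facial regions show a position change. The threshold and the mapping
# from point id to region are illustrative assumptions.
MOVE_THRESHOLD = 2.0  # pixels; assumed minimum displacement that counts as motion

def changed_regions(prev_points, curr_points, point_region):
    """prev_points / curr_points: {point_id: (x, y)} for two consecutive
    frames; point_region: {point_id: "head" | "eye" | "mouth"}."""
    regions = set()
    for pid, (x1, y1) in prev_points.items():
        x2, y2 = curr_points[pid]
        if abs(x2 - x1) > MOVE_THRESHOLD or abs(y2 - y1) > MOVE_THRESHOLD:
            regions.add(point_region[pid])
    return regions   # e.g. {"head", "mouth"} -> those action characteristics occurred
```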
7. The interaction method according to claim 1, wherein the triggering of an offensive identifier when at least two of the action characteristics occur simultaneously, so that the offensive identifier is displayed on the interactive interface in a preset form, comprises:
when any two or all three of the head region, the eye region, and the mouth region simultaneously produce their respective action characteristics,
triggering at least one offensive identifier of the head region, the eye region, and the mouth region, so that the offensive identifier is displayed on the interactive interface in a preset form.
8. The interaction method according to claim 1, wherein the head action characteristics, the eye action characteristics, and the mouth action characteristics each have a respective offensive identifier, and each offensive identifier comprises an injury characteristic value, a speed characteristic value, and a distance characteristic value;
and wherein the triggering of an offensive identifier when at least two of the action characteristics occur simultaneously, so that the offensive identifier is displayed on the interactive interface in a preset form, comprises:
when the eye action characteristics and/or the mouth action characteristics occur together with the head action characteristics, triggering the eye and/or mouth offensive identifier, so that the eye and/or mouth offensive identifier is displayed on the interactive interface in a preset form in combination with the head action characteristics.
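A possible representation of this combination rule is sketched below, with each offensive identifier carrying injury, speed, and distance characteristic values; the numeric values and the way the head action modifies the result are illustrative assumptions only.

```python
# Sketch: each region's offensive identifier carries injury, speed, and
# distance characteristic values, and eye/mouth identifiers are returned
# for display only when a head action occurs at the same time.
# All values and the combination rule are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class OffensiveIdentifier:
    region: str        # "head", "eye", or "mouth"
    injury: float      # injury characteristic value
    speed: float       # speed characteristic value
    distance: float    # distance characteristic value

IDENTIFIERS = {
    "eye":   OffensiveIdentifier("eye",   injury=5.0,  speed=8.0, distance=12.0),
    "mouth": OffensiveIdentifier("mouth", injury=10.0, speed=4.0, distance=6.0),
}

def trigger_with_head(active_regions):
    """If eye and/or mouth actions occur together with a head action,
    return the identifiers to display in combination with the head motion."""
    if "head" in active_regions:
        return [IDENTIFIERS[r] for r in ("eye", "mouth") if r in active_regions]
    return []
```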
9. An interactive device based on a plurality of expression actions, comprising:
an acquisition unit, configured to acquire an expression image of a user through a camera and display the expression image on an interactive interface, wherein the interactive interface comprises a first area for displaying an avatar of the user and a second area for displaying a virtual avatar;
a monitoring unit, configured to monitor action characteristics of the expression image in real time, wherein the action characteristics comprise head action characteristics, eye action characteristics, and mouth action characteristics;
and a triggering unit, configured to trigger an offensive identifier when at least two of the action characteristics occur simultaneously, so that the offensive identifier is displayed on the interactive interface in a preset form.
10. A computer-readable storage medium, on which a computer program is stored, which program, when executed by a processor, carries out the method according to any one of claims 1 to 8.
11. An electronic device, comprising:
one or more processors;
storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to carry out the method of any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911102952.3A CN110928410A (en) | 2019-11-12 | 2019-11-12 | Interaction method, device, medium and electronic equipment based on multiple expression actions |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110928410A (en) | 2020-03-27 |
Family
ID=69852797
Family Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911102952.3A Pending CN110928410A (en) | 2019-11-12 | 2019-11-12 | Interaction method, device, medium and electronic equipment based on multiple expression actions |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110928410A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113536262A (en) * | 2020-09-03 | 2021-10-22 | 腾讯科技(深圳)有限公司 | Unlocking method and device based on facial expression, computer equipment and storage medium |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2007087346A (en) * | 2005-09-26 | 2007-04-05 | Canon Inc | Information processing device, control method therefor, computer program, and memory medium |
CN101393599A (en) * | 2007-09-19 | 2009-03-25 | 中国科学院自动化研究所 | Game role control method based on human face expression |
US20160042548A1 (en) * | 2014-03-19 | 2016-02-11 | Intel Corporation | Facial expression and/or interaction driven avatar apparatus and method |
CN105797376A (en) * | 2014-12-31 | 2016-07-27 | 深圳市亿思达科技集团有限公司 | Method and terminal for controlling role model behavior according to expression of user |
CN106139519A (en) * | 2016-07-29 | 2016-11-23 | 韩莹光 | A kind of universal treadmill of mixed reality and application process thereof |
CN108771865A (en) * | 2018-05-28 | 2018-11-09 | 网易(杭州)网络有限公司 | Interaction control method, device in game and electronic equipment |
CN108905193A (en) * | 2018-07-03 | 2018-11-30 | 百度在线网络技术(北京)有限公司 | Game manipulates processing method, equipment and storage medium |
CN109568937A (en) * | 2018-10-31 | 2019-04-05 | 北京市商汤科技开发有限公司 | Game control method and device, game terminal and storage medium |
Similar Documents
Publication | Title | |
---|---|---|
CN109445662B (en) | Operation control method and device for virtual object, electronic equipment and storage medium | |
CN110917619B (en) | Interactive property control method, device, terminal and storage medium | |
CN109529356B (en) | Battle result determining method, device and storage medium | |
CN111672099B (en) | Information display method, device, equipment and storage medium in virtual scene | |
US10139901B2 (en) | Virtual reality distraction monitor | |
CN112245921B (en) | Virtual object control method, device, equipment and storage medium | |
CN111589133A (en) | Virtual object control method, device, equipment and storage medium | |
CN110585706B (en) | Interactive property control method, device, terminal and storage medium | |
CN111760278A (en) | Skill control display method, device, equipment and medium | |
CN111659117A (en) | Virtual object display method and device, computer equipment and storage medium | |
CN111589144B (en) | Virtual character control method, device, equipment and medium | |
CN110992947B (en) | Voice-based interaction method, device, medium and electronic equipment | |
US20200391109A1 (en) | Method and system for managing emotional relevance of objects within a story | |
JP2019074962A (en) | Program for providing virtual experience, computer and method | |
JP2022118720A (en) | Method and system for providing tactical support to player in shooting video game | |
CN112704875A (en) | Virtual item control method, device, equipment and storage medium | |
CN111659122A (en) | Virtual resource display method and device, electronic equipment and storage medium | |
CN111013139B (en) | Role interaction method, system, medium and electronic equipment | |
CN112774195B (en) | Information display method, device, terminal and storage medium | |
CN110928410A (en) | Interaction method, device, medium and electronic equipment based on multiple expression actions | |
CN111651616B (en) | Multimedia resource generation method, device, equipment and medium | |
JP2020052775A (en) | Program, virtual space providing method, and information processor | |
CN111068308A (en) | Data processing method, device, medium and electronic equipment based on mouth movement | |
CN111589147A (en) | User interface display method, device, equipment and storage medium | |
CN111013135A (en) | Interaction method, device, medium and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20200327 |