CN109670385B - Method and device for updating expression in application program

Method and device for updating expression in application program

Info

Publication number
CN109670385B
CN109670385B (application CN201710959429.7A)
Authority
CN
China
Prior art keywords
expression
target
data
facial
matching
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710959429.7A
Other languages
Chinese (zh)
Other versions
CN109670385A (en)
Inventor
汪俊明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN201710959429.7A
Publication of CN109670385A
Application granted
Publication of CN109670385B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 Facial expression recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/757 Matching configurations of points or features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/169 Holistic features and representations, i.e. based on the facial image taken as a whole
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention provides a method and a device for updating expressions in an application program. The method comprises the following steps: during the running of an application program, detecting a human face and acquiring a first image comprising the human face; extracting expression data from the first image, wherein the expression data represents the current facial expression of the human face; acquiring a target expression matched with the expression data; and updating the expression of the target virtual character in the application program with the target expression. By this method, the expression required by the user can be identified automatically, solving the prior-art problem that manually selecting the required expression from an expression library is cumbersome.

Description

Method and device for updating expression in application program
Technical Field
The invention relates to the technical field of mobile terminals, in particular to a method and a device for updating expressions in an application program.
Background
With the development of internet technology and the popularization of intelligent terminal devices, more and more application programs provide social functions among users, such as WeChat, QQ, and the card game Dou Dizhu (Fight the Landlord). While socializing, users often want to express different emotions, such as anger, worry, and joy.
However, with the social functions provided by the prior art, when a user wants to use an expression while socializing, the user must manually select the desired expression from the expression library built into the application program; the expression cannot be selected automatically, which makes the operation cumbersome.
Disclosure of Invention
The present invention is directed to solving, at least in part, one of the technical problems in the related art.
Therefore, a first objective of the present invention is to provide a method for updating an expression in an application program, so as to automatically identify the expression required by the user and solve the prior-art problem that manually selecting the required expression from an expression library is cumbersome.
The second objective of the present invention is to provide a device for updating expressions in an application.
A third object of the invention is to propose a non-transitory computer-readable storage medium.
A fourth object of the invention is to propose a computer program product.
To achieve the above object, an embodiment of a first aspect of the present invention provides a method for updating an expression in an application, including:
in the running process of an application program, detecting a human face and acquiring a first image comprising the human face;
extracting expression data from the first image; the expression data is used for representing the current facial expression of the face;
acquiring a target expression matched with the expression data;
and updating the expression of the target virtual character in the application program by using the target expression.
According to the method for updating the expression in the application program, disclosed by the embodiment of the invention, the face is detected and the first image comprising the face is acquired in the running process of the application program, the expression data is extracted from the first image, the target expression matched with the expression data is acquired, and the expression of the target virtual character in the application program is updated by utilizing the target expression. Therefore, the expression required by the user can be automatically identified according to the facial expression of the user, manual selection of the user is not needed, the complexity of operation in the social process is reduced, and the user experience and social interaction are improved.
To achieve the above object, a second embodiment of the present invention provides an apparatus for updating an expression in an application, including:
the acquisition module is used for detecting a face and acquiring a first image comprising the face in the running process of an application program;
the expression extraction module is used for extracting expression data from the first image; wherein the expression data is used for representing the current facial expression of the human face;
the expression matching module is used for acquiring a target expression matched with the expression data;
and the updating module is used for updating the expression of the target virtual character in the application program by using the target expression.
According to the device for updating the expression in the application program disclosed by the embodiment of the invention, the face is detected and the first image comprising the face is acquired while the application program runs, the expression data is extracted from the first image, the target expression matched with the expression data is acquired, and the expression of the target virtual character in the application program is updated using the target expression. Therefore, the expression required by the user can be automatically identified from the user's facial expression, manual selection by the user is not needed, the operational complexity of socializing is reduced, and user experience and social interaction are improved.
To achieve the above object, a third embodiment of the present invention provides a non-transitory computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the method for updating an expression in an application program according to the first embodiment.
To achieve the above object, a fourth aspect of the present invention provides a computer program product, where instructions in the computer program product, when executed by a processor, perform a method for updating an expression in an application program according to an embodiment of the first aspect.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The foregoing and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 (a) is a schematic diagram of social expressions;
FIG. 1 (b) is a schematic diagram of a 3D chat emoticon;
fig. 2 is a flowchart illustrating a method for updating an expression in an application according to an embodiment of the present invention;
fig. 3 is a flowchart illustrating a method for updating an expression in an application according to another embodiment of the present invention;
fig. 4 (a) is a first schematic diagram illustrating the recognized expression displayed in the expression display area corresponding to the virtual character;
fig. 4 (b) is a second schematic diagram illustrating the recognized expression displayed in the expression display area corresponding to the virtual character;
fig. 4 (c) is a schematic diagram of a result after the expression displayed in the expression display area is sent out;
fig. 5 is a flowchart illustrating a method for updating an expression in an application according to another embodiment of the present invention;
FIG. 6 (a) is a schematic diagram of a process of matching a target expression;
FIG. 6 (b) is a diagram illustrating the result of replacing the facial expression of the virtual character with the target expression;
fig. 7 is a schematic flowchart of a process of acquiring a target expression matched with expression data according to an embodiment of the present invention;
fig. 8 is a schematic flowchart of another process for obtaining a target expression matched with expression data according to an embodiment of the present invention;
FIG. 9 is a flowchart illustrating a method for updating expressions in an application according to yet another embodiment of the present invention;
FIG. 10 is a schematic diagram of a process of obtaining expression data of a user;
FIG. 11 is a diagram illustrating the hardware architecture of a system for updating an expression in an application according to an embodiment of the present invention;
fig. 12 is a schematic structural diagram of an apparatus for updating an expression in an application according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the same or similar elements or elements having the same or similar functions throughout. The embodiments described below with reference to the drawings are illustrative and intended to be illustrative of the invention and are not to be construed as limiting the invention.
The following describes a method and an apparatus for updating an expression in an application program according to an embodiment of the present invention with reference to the accompanying drawings.
To facilitate an understanding of the invention, before explaining specific embodiments of the invention in detail, terms that may be used in the invention are first explained as follows:
the social emoticon refers to a graphic or an image for expressing an emotion when a user is socialized in a chat software or a game software, such as various emoticons shown in fig. 1 (a).
3D chat expressions are emotions, such as happiness, anger, sadness, joy, and surprise, presented on the face of a virtual character when the user socializes in chat or game software; see the various expressions in fig. 1 (b).
Fig. 2 is a flowchart illustrating a method for updating an expression in an application according to an embodiment of the present invention. The method may be applied to an intelligent terminal or a server. The intelligent terminal may be an intelligent electronic device with a camera, such as a mobile phone or a tablet computer, on which an application program with a social function is installed. The method is explained in detail below, taking application to an intelligent terminal as an example.
As shown in fig. 2, the method for updating the expression in the application program includes the following steps:
and S11, in the running process of the application program, detecting the face and collecting a first image comprising the face.
While the user uses the application program, the application can call the front-facing camera of the intelligent terminal to detect a face, and when a face is detected, the front-facing camera collects a first image comprising the face.
And S12, extracting expression data from the first image.
The expression data represents the current facial expression of the face and may include feature information of each organ in the user's current facial expression, including but not limited to the forehead, the eyebrows, and the eyes.
When a user uses an application with a social function, for example chatting with friends on social software such as WeChat or QQ, or communicating with other players during a game of Dou Dizhu, the true emotion in the user's heart often shows on the face. When the user feels happy, the facial expression is happy; when the user feels embarrassed, the face often shows embarrassment. That is, the user's facial expression reflects the emotional changes in the user's mind.
Therefore, in this embodiment, while the user uses the application program, the camera built into the intelligent terminal on which the application runs may collect a first image including the face, and the user's expression data may be extracted from the first image.
Specifically, the changes in the user's facial muscles can be captured in real time using the front-facing camera built into the intelligent terminal, and corresponding expression data can be acquired from those changes based on a face recognition technology. Alternatively, a photo or picture can be read from the terminal's gallery, or a picture drawn by the user can be collected by the camera, and used as the first image; the face in the photo or picture is then recognized using a face recognition technology, and expression data representing the user's facial expression is obtained by extracting facial features.
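As an illustrative sketch of this capture step (not the patented implementation), the following Python snippet uses OpenCV's bundled Haar-cascade detector to grab one camera frame and crop the detected face; the detector choice and its parameters are assumptions standing in for the unspecified face recognition technology.

```python
# Minimal sketch of steps S11/S12: grab a frame from the front camera,
# detect a face, and crop the face region for later feature extraction.
import cv2

def capture_face_image(camera_index: int = 0):
    """Return the cropped face region from one camera frame, or None."""
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(camera_index)   # front camera on a handset
    ok, frame = cap.read()
    cap.release()
    if not ok:
        return None
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None                        # no face detected: keep polling
    x, y, w, h = faces[0]
    return frame[y:y + h, x:x + w]         # the "first image" containing the face
```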
And S13, acquiring the target expression matched with the expression data.
In this embodiment, after the expression data is extracted, the target expression matched with it can be acquired. As an example, assume the expression data includes feature information of the forehead, the eyebrows, the eyes, and the lower half of the face; facial expressions can then be defined in advance, as shown in Table 1. The acquired expression data is matched against the feature information of each part shown in Table 1, where for a part containing several pieces of feature information at least one piece must match. When the feature information of every part of one expression matches, that expression is determined to be the target expression corresponding to the acquired expression data, and a social expression or a 3D expression is then acquired as the target expression from the expression library shown in fig. 1 (a) or fig. 1 (b). For example, when the expression data indicates that the eyebrows are slightly bent downward, crow's feet spread outward from the outer corners of the eyes, the corners of the mouth are drawn back, and the teeth are exposed, matching against the feature information of each part in Table 1 determines that the matching expression is "happy"; "happy" is the target expression and is then selected from the expression library shown in fig. 1 (a).
TABLE 1
[Table 1 is published as an image (GDA0004083442420000051) in the original document; it defines, for each predefined expression, the characteristic feature information of facial parts such as the forehead, eyebrows, eyes, nose, and mouth.]
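A minimal sketch of this table-driven lookup is given below. Because Table 1 survives only as an image, the feature descriptors and expression names here are hypothetical stand-ins; only the matching logic (every facial part of one expression must match) follows the description above.

```python
# Hypothetical reconstruction of the Table-1 lookup. The feature strings
# below are invented for illustration; the real table is an image.
from typing import Optional

EXPRESSION_TABLE = {
    "happy": {"eyebrows": "slightly bent downward",
              "eyes": "crow's feet spread outward",
              "mouth": "corners drawn back, teeth exposed"},
    "angry": {"eyebrows": "wrinkled and pressed together",
              "eyes": "wide and glaring",
              "mouth": "lips closed, corners turned down"},
}

def match_expression(expression_data: dict) -> Optional[str]:
    """Return the expression whose every part matches the data, else None."""
    for name, parts in EXPRESSION_TABLE.items():
        if all(expression_data.get(part) == feature
               for part, feature in parts.items()):
            return name          # the target expression, e.g. "happy"
    return None
```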
And S14, updating the expression of the target virtual character in the application program by using the target expression.
In this embodiment, after the target expression matched with the expression data is obtained, the expression of the target virtual character in the application program can be updated with it. The target virtual character is the virtual character used by the current user in the application program, for example, the current user's player character in Dou Dizhu.
For example, when the obtained expression data indicates that the eyebrows are wrinkled and pressed together, the eyes are wide and glaring, the lips are closed with the corners turned down, and the nostrils are flared, matching against the feature information of each part in Table 1 determines that the corresponding expression is anger, and the expression of the user's target virtual character in the application program can be updated to angry.
According to the method for updating the expression in the application program, the face is detected and the first image comprising the face is collected in the running process of the application program, the expression data are extracted from the first image, the target expression matched with the expression data is obtained, and the expression of the target virtual character in the application program is updated by utilizing the target expression. Therefore, the expression required by the user can be automatically identified according to the facial expression of the user, manual selection of the user is not needed, the complexity of operation in the social process is reduced, and the user experience and social interaction are improved.
While using an application with a social function, the user often wants to send an expression to convey a real emotion. For example, during a game of Dou Dizhu, if the user's or a teammate's hand is going well, the user may want to send a "happy" expression to the opposite-end user to express excitement. In order to automatically display the "happy" expression in the dialog box corresponding to the user's virtual character in the application program, an embodiment of the present invention provides another method for updating an expression in an application program; fig. 3 is a flowchart illustrating this method according to another embodiment of the present invention.
As shown in fig. 3, on the basis of the embodiment shown in fig. 2, step S14 may include the following steps:
and S21, displaying the target expression in the expression display area corresponding to the target virtual character to replace the currently displayed expression in the expression display area.
In this embodiment, after the matched target expression is obtained according to the expression data, the target expression may be displayed in an expression display area corresponding to the target virtual character in the application program.
Specifically, when the expression display area corresponding to the target virtual character in the application program shows no expression, the target expression can be displayed there directly; when an expression is already shown, the currently displayed expression is replaced with the matched target expression.
As an example, as shown in fig. 4 (a), during a game of Dou Dizhu a teammate plays a strong card and the user feels happy; the user's current facial expression shows this happiness. The intelligent terminal captures the facial expression through the front camera, collects a first image comprising the face, acquires the user's expression data from the first image, determines from the expression data that the target expression is "happy", and displays the "happy" expression in the expression display area 401 corresponding to the target virtual character.
Further, to let the user act on the target expression displayed in the expression display area, in a possible implementation manner of the embodiment of the present invention a "send" button may be placed at a suitable position outside the expression display area corresponding to the virtual character, as shown in fig. 4 (b). When the user triggers the "send" button 402, the target expression displayed in the expression display area is sent out by the user's intelligent terminal, as shown in fig. 4 (c), and is simultaneously displayed in the dialog box 403 corresponding to the user's target virtual character. The sent target expression is forwarded, through the server hosting the application program, to the intelligent terminal of the opposite-end user, where it is displayed in the corresponding application.
To further improve user experience, in a possible implementation manner of the embodiment of the present invention, the application program may be configured to send the target expression out directly after displaying it in the expression display area corresponding to the target virtual character, without any user operation; this further reduces required user participation and improves user experience.
According to the method for updating the expression in the application program, the target expression is displayed in the expression display area corresponding to the target virtual character to replace the currently displayed expression in the expression display area, so that the expression which meets the current mood of the user can be automatically displayed to the user without manual selection of the user, and the user experience is improved.
Preferably, to make the social process more realistic and strengthen its social attributes, the user's facial expression may be displayed visually through the virtual character. An embodiment of the present invention therefore provides another method for updating an expression in an application program; fig. 5 is a flowchart of this method according to another embodiment of the present invention.
As shown in fig. 5, on the basis of the embodiment shown in fig. 2, step S14 may include the following steps:
and S31, replacing the current facial expression of the target virtual character by the target expression.
In this embodiment, after the target expression matched with the expression data is obtained, the current facial expression of the target virtual character may be replaced by the target expression, so that the target virtual character presents the facial expression the same as or similar to that of the user.
For example, the obtained expression data may be compared with the feature information of each part corresponding to each expression shown in Table 1; when the feature information of every part of one expression matches the obtained expression data, that expression is determined to be the target expression, which is then selected from the expression library shown in fig. 1 (b) and used to replace the current facial expression of the target virtual character.
Fig. 6 (a) is a schematic diagram of the process of matching a target expression. As shown in fig. 6 (a), the facial expression data of the player, i.e., the user, such as feature information of the mouth, eyes, and eyebrows, can be acquired through the front camera of a smartphone. The smartphone processes the acquired expression data, matches the corresponding expression, and selects the matched expression from the expression library. As can be seen from fig. 6 (a), the user's current facial expression is happy, so after processing the expression data the smartphone can match an expression similar to the user's facial expression from the library. Finally, the matched expression is presented by the user's target virtual character; that is, the matched target expression replaces the virtual character's current facial expression, producing the display effect shown in fig. 6 (b).
According to the method for updating the expression in the application program, the target expression is used for replacing the current facial expression of the virtual character, so that 3D display of the target expression can be achieved, the intuitiveness and the authenticity of the expression display are enhanced, and the user experience is further improved.
Further, in order to more clearly illustrate the implementation process of acquiring the target expression matched with the expression data in the above embodiment, the embodiment of the present invention provides two possible implementation manners of acquiring the target expression matched with the expression data.
As one possible implementation manner, as shown in fig. 7, on the basis of the foregoing embodiment, acquiring the target expression matched with the expression data may include the following steps:
s41, matching is carried out in a preset expression library according to the expression data, and the matching degree of the first expression represented by the expression data and each expression in the expression library is obtained.
In this embodiment, after the user's expression data is obtained, it may be compared with the feature information of each part under each expression shown in Table 1 to obtain the first expression represented by the expression data; the first expression is then matched in a preset expression library to obtain its matching degree with each expression in the library.
Specifically, in a possible implementation manner of the embodiment of the present invention, matching the expression data in a preset expression library to obtain a matching degree between a first expression represented by the expression data and each expression in the expression library may include: for each facial organ in the first expression, matching the feature data of the facial organ with the first feature data of the facial organ in each expression in the expression library to obtain the matching degree of the facial organ; and for all expressions in the expression library, weighting the matching degree of each facial organ in the first expression corresponding to the second expression in the expression library one by one according to respective preset weight to obtain the matching degree of the first expression and the second expression, wherein the second expression is any one expression in the expression library.
That is to say, for the facial organ of each expression in the expression library, the facial organ is respectively matched with the facial organ corresponding to the first expression to obtain the matching degree of the facial organ, and then for each expression in the expression library, the matching degree of each facial organ corresponding to each expression is weighted and summed according to the preset weight of each facial organ to obtain the matching degree of each expression in the expression library and the first expression.
Calculating the per-organ matching degree first and then the matching degree of each whole expression improves the accuracy of expression matching.
And S42, taking the expression with the highest matching degree in the expression library as a target expression.
In this embodiment, after the matching degree between each expression in the expression library and the first expression represented by the expression data is obtained, the target expression may be determined according to the matching degree, and the expression in the expression library with the highest matching degree with the first expression may be used as the target expression.
According to the expression updating method in the application program, the matching degree of the first expression represented by the expression data and each expression in the expression library is obtained by matching the expression data in the preset expression library, and the expression with the highest matching degree in the expression library is used as the target expression, so that the accuracy of expression matching can be improved.
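The following Python sketch illustrates the weighting just described: a per-organ matching degree is computed first, combined into a per-expression degree using preset weights (step S41), and the library expression with the highest degree becomes the target expression (step S42). The weights and the equality-based similarity function are illustrative assumptions, not values from the patent.

```python
# Sketch of steps S41/S42: weighted per-organ matching degrees.
ORGAN_WEIGHTS = {"eyebrows": 0.2, "eyes": 0.3, "mouth": 0.4, "nose": 0.1}  # assumed

def organ_match_degree(a, b) -> float:
    """Per-organ matching degree in [0, 1]; equality is a placeholder measure."""
    return 1.0 if a == b else 0.0

def expression_match_degree(first_expr: dict, library_expr: dict) -> float:
    """Weighted sum of the per-organ matching degrees (step S41)."""
    return sum(weight * organ_match_degree(first_expr.get(organ),
                                           library_expr.get(organ))
               for organ, weight in ORGAN_WEIGHTS.items())

def target_expression(first_expr: dict, library: dict) -> str:
    """Step S42: the library expression with the highest matching degree wins."""
    return max(library,
               key=lambda name: expression_match_degree(first_expr,
                                                        library[name]))
```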
As another possible implementation manner, as shown in fig. 8, on the basis of the foregoing embodiment, acquiring the target expression matched with the expression data may include the following steps:
s51, one or more facial organs are selected from all the facial organs as matching facial organs.
Since a human face includes a plurality of facial organs, such as the nose, mouth, and eyes, different expressions involve different organs. For example, as can be seen from Table 1, for a "sad" expression the most relevant facial organs are the eyebrows, eyes, and mouth, but not the nose; for a "disgusted" expression, the relevant organs include the eyebrows, eyes, mouth, and nose.
Therefore, in this embodiment, one or more facial organs may be selected from all facial organs included in the expression data as matching facial organs for matching with expressions in the expression library.
And S52, matching in an expression library according to the feature data of the matched facial organ to obtain all expressions comprising the feature data of the matched facial organ as a candidate expression set.
Wherein each candidate expression in the set of candidate expressions comprises feature data of a matching facial organ.
In this embodiment, after the matched facial organs are extracted from the expression data, matching may be performed in the expression library according to the feature data of the matched facial organs, so as to obtain a candidate expression set, where feature information of the facial organs of each expression in the candidate expression set is consistent with the feature data of the matched facial organs.
And S53, screening the candidate expression set by using the feature data of the rest facial organs to obtain a target expression.
In this embodiment, after the candidate expression set is obtained according to the matching of the matched facial organs, the candidate expression set may be further screened according to the feature data of the remaining facial organs except the matched facial organs in the expression data, so as to obtain the target expression from the candidate expression set.
Specifically, the screening of the candidate expression set by using the feature data of the remaining facial organs to obtain the target expression may include: extracting first feature data of the remaining facial organs from the candidate expression set; comparing the feature data of the remaining facial organs with the first feature data, and judging whether a target candidate expression exists in the candidate expression set, wherein the first feature data of the remaining facial organs in the target candidate expression are consistent with the feature data; and if the target candidate expression exists, taking the target candidate expression as the target expression.
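A sketch of this two-stage matching is shown below, under stated assumptions: which organs serve as "matching" organs, and equality as the per-feature comparison, are illustrative choices only.

```python
# Sketch of steps S51-S53: build a coarse candidate set from a few
# matching facial organs, then screen it with the remaining organs.
from typing import Optional

def candidate_set(library: dict, probe: dict, matching_organs) -> dict:
    """S51/S52: keep expressions whose matching-organ features equal the probe's."""
    return {name: feats for name, feats in library.items()
            if all(feats.get(o) == probe.get(o) for o in matching_organs)}

def screen_candidates(candidates: dict, probe: dict,
                      remaining_organs) -> Optional[str]:
    """S53: return a candidate whose remaining-organ features also match."""
    for name, feats in candidates.items():
        if all(feats.get(o) == probe.get(o) for o in remaining_organs):
            return name          # the target candidate expression
    return None                  # no target expression found
```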
According to the method for updating the expression in the application program, one or more facial organs are selected from all facial organs to serve as matched facial organs, matching is performed in the expression library according to the feature data of the matched facial organs, all expressions comprising the feature data of the matched facial organs are obtained to serve as candidate expression sets, the candidate expression sets are screened by using the feature data of the rest facial organs, and the target expression is obtained. The candidate expression set is obtained by adopting the matched facial organs, so that the complexity and the operation amount of expression matching can be reduced; the target expression is determined from the candidate expression set by using the feature data of the remaining facial organs, so that the matching accuracy can be improved.
In order to implement synchronous display of a target expression in peer-to-peer equipment, an embodiment of the present invention provides another method for updating an expression in an application program, and fig. 9 is a flowchart illustrating the method for updating an expression in an application program according to another embodiment of the present invention.
As shown in fig. 9, the method for updating an expression in an application program may include the following steps:
and S61, in the running process of the application program, detecting the face and collecting a first image comprising the face.
And S62, extracting expression data from the first image.
The expression data is used for representing the current facial expression of the human face.
Specifically, extracting expression data from the first image may include: extracting a face image from the first image; identifying each facial organ from the face image and extracting the feature data of each facial organ; expression data is formed using feature data of all facial organs.
As an example, fig. 10 is a schematic diagram of the process of acquiring the user's expression data. As shown in fig. 10, a face image is acquired first, either with the built-in camera of the intelligent terminal or by reading a photo containing a face from a static file such as the gallery. The acquired image is preprocessed so that the face can be detected accurately. Face detection is then performed on the preprocessed image using a face recognition technology, and the face image is cut out of the original image at a proportion chosen to suit the facial-feature extraction and expression classification methods in use. According to the chosen expression feature extraction method, the face image is preprocessed further, including geometric processing and grayscale processing. Finally, expression features are extracted from the preprocessed face image to obtain the expression data.
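A minimal sketch of this preprocessing chain, assuming OpenCV is available; the crop box source, target size, and grayscale conversion are assumptions standing in for whatever geometric and gray processing a concrete expression-feature method requires.

```python
# Sketch of the fig. 10 preprocessing chain: crop the detected face,
# normalise geometry to a fixed size, and convert to grayscale before
# expression features are extracted.
import cv2

def preprocess_face(frame, box, size=(96, 96)):
    """Cut the face from the original image and normalise it."""
    x, y, w, h = box                         # box comes from the face detector
    face = frame[y:y + h, x:x + w]           # cut at the chosen face proportion
    face = cv2.resize(face, size)            # geometric processing
    return cv2.cvtColor(face, cv2.COLOR_BGR2GRAY)  # gray processing
```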
And S63, acquiring the target expression matched with the expression data.
In this embodiment, the social expression and the 3D expression may be obtained from the expression library shown in fig. 1 (a) or the expression library shown in fig. 1 (b) according to the expression data, and the specific obtaining process may refer to the related description in the foregoing embodiment, and will not be described in detail here to avoid redundancy.
And S64, updating the expression of the target virtual character in the application program by using the target expression.
It should be noted that, in the embodiment of the present invention, for the description of step S63 to step S64, reference may be made to the description of step S13 to step S14 in the foregoing embodiment, and the implementation principle is similar, and is not described herein again.
S65, synchronizing the target expression to the opposite terminal equipment according to the login information of the application program on the opposite terminal equipment and updating the expression of the target virtual character on the opposite terminal equipment.
When a user starts an application program, for example entering the Dou Dizhu client to start a game, the intelligent terminal held by the user must establish a communication connection with the intelligent terminal of the opposite-end user (the opposite-end device), and this communication uses the server to which the application program belongs as its carrier. After the connection is established, the server can send the login information of the application program on the opposite-end device to the user's intelligent terminal, which stores the received login information in memory.
After the target expression is obtained, the intelligent terminal held by the user can extract the login information of the application program on the opposite terminal device from the storage, synchronize the obtained target expression to the opposite terminal device according to the login information, and simultaneously update the expression of the target virtual role of the user in the application program on the opposite terminal device.
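A hypothetical sketch of this synchronization step follows. The endpoint path, payload fields, and token name are invented for illustration; the patent only requires that the target expression reach the opposite-end device through the application's server using the stored login information.

```python
# Sketch of step S65: push the matched target expression to the
# opposite-end device through the application server.
import json
import urllib.request

def sync_expression(server_url: str, peer_login: dict,
                    expression_id: str) -> bool:
    """Send the target expression to the peer via the app server."""
    payload = json.dumps({
        "peer_token": peer_login["token"],   # login info stored earlier (hypothetical field)
        "expression": expression_id,         # e.g. "happy"
    }).encode("utf-8")
    request = urllib.request.Request(
        server_url + "/sync_expression",     # hypothetical endpoint
        data=payload,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(request) as response:
        return response.status == 200        # peer updates the avatar on receipt
```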
According to this method for updating the expression in the application program, the face is detected and the first image comprising the face is collected while the application program runs, the expression data is extracted from the first image, the target expression matched with the expression data is obtained, and the target expression is synchronized to the opposite-end device according to the login information of the application program on that device, updating the expression of the target virtual character there. Synchronous display of the expression is thus achieved: the identified expression is also displayed on the intelligent terminal of the opposite-end user, which further strengthens the social attributes of the application program and makes it more engaging.
The foregoing embodiment describes in detail a specific implementation process of the method for updating an expression in an application program according to the embodiment of the present invention when the method is applied to an intelligent terminal, but does not indicate that the method can only be applied to an intelligent terminal, and it should be understood that the method may also be applied to a server.
When the method for updating the expression in the application program is executed by the server, the intelligent terminal still acquires the expression data of the user in the process that the user uses the application program, and the intelligent terminal sends the acquired expression data to the server. And after receiving the expression data sent by the intelligent terminal, the server identifies the corresponding expression according to the expression data, and then updates the expression of the target virtual character of the user in the application program.
It should be noted that, the foregoing description of identifying an expression according to expression data and updating an expression of a virtual character when the intelligent terminal executes the method is also applicable to a case when the method is executed by a server, and the implementation principle is similar, but the execution subject is different, and the specific implementation process when the server executes the method is not described any more.
When the method for updating the expression in the application program is executed by the server, the server obtains the user's expression data from the intelligent terminal, identifies the corresponding expression from the expression data, and updates the expression of the user's target virtual character in the application program. The expression is still identified automatically, while the stuttering caused by excessive memory usage when the intelligent terminal executes the method itself is avoided, further improving user experience.
In order to implement the foregoing embodiment, the present invention further provides a system for updating an expression in an application program, and fig. 11 is a hardware architecture diagram of the system for updating an expression in an application program according to an embodiment of the present invention. As shown in fig. 11, the system for updating the expression in the application program includes the following hardware: the power supply is used for supplying power to each piece of hardware; the memory is used for storing programs required for realizing the method, static file resources, acquired information and the like; the processor is used for processing the face image of the human face and acquiring expression data; the expression matching unit is mainly used for matching the target expression according to the expression data; the input unit mainly comprises a camera, a sensor and other input equipment and is used for providing the acquired face image for the processor; and the execution unit comprises a display panel and is mainly used for displaying the identified target expression or replacing the current facial expression of the target virtual character with the target expression.
Through the system for updating the expression in the application program of this embodiment, the expression required by the user can be automatically identified from the user's facial expression, manual selection by the user is not needed, the operational complexity of socializing is reduced, and user experience and social interaction are improved.
In order to implement the above embodiment, the present invention further provides a device for updating an expression in an application program.
Fig. 12 is a schematic structural diagram of an apparatus for updating an expression in an application according to an embodiment of the present invention.
As shown in fig. 12, the apparatus 10 for updating expression in the application program includes: an obtaining module 110, an expression extracting module 120, an expression matching module 130, and an updating module 140. Wherein the content of the first and second substances,
the obtaining module 110 is configured to detect a face and acquire a first image including the face during an operation process of an application program.
The expression extraction module 120 is configured to extract expression data from the first image.
The expression data is used for representing the current facial expression of the face.
Optionally, in a possible implementation manner of the embodiment of the present invention, the expression extraction module 120 is specifically configured to extract a face image from the first image; identifying each facial organ from the face image and extracting feature data of each facial organ; expression data is formed using feature data of all facial organs.
And the expression matching module 130 is configured to acquire a target expression matched with the expression data.
Optionally, in a possible implementation manner of the embodiment of the present invention, the expression matching module 130 is specifically configured to perform matching in a preset expression library according to the expression data, and obtain a matching degree between a first expression represented by the expression data and each expression in the expression library; and taking the expression with the highest matching degree in the expression library as a target expression.
Further, when the expression matching module 130 obtains the matching degree between the first expression represented by the expression data and each expression in the expression library, the feature data of the facial organ may be matched with the first feature data of the facial organ in each expression in the expression library for each facial organ in the first expression to obtain the matching degree of the facial organ; further, for all expressions in the expression library, weighting the matching degree of each facial organ in the first expression corresponding to the second expression in the expression library one by one according to respective preset weight to obtain the matching degree of the first expression and the second expression; and the second expression is any one expression in the expression library.
Optionally, in a possible implementation manner of the embodiment of the present invention, the expression matching module 130 is specifically configured to select one or more facial organs from all facial organs as matching facial organs; matching in an expression library according to the feature data of the matched facial organs to obtain all expressions comprising the feature data of the matched facial organs as a candidate expression set; wherein each candidate expression in the candidate expression set comprises feature data of a matching facial organ; and screening the candidate expression set by utilizing the feature data of the rest facial organs to obtain a target expression.
Further, when the expression matching module 130 screens the candidate expression set to obtain the target expression, first feature data of the remaining facial organs may be extracted from the candidate expression set; comparing the feature data of the remaining facial organs with the first feature data, and judging whether a target candidate expression exists in the candidate expression set; wherein the first feature data of the remaining facial organs in the target candidate expression is consistent with the feature data; and if the target candidate expression exists, taking the target candidate expression as the target expression.
And the updating module 140 is configured to update the expression of the target virtual character in the application program by using the target expression.
Optionally, in a possible implementation manner of the embodiment of the present invention, the updating module 140 is specifically configured to replace the current facial expression of the target virtual character with the target expression.
Optionally, in a possible implementation manner of the embodiment of the present invention, the updating module 140 is further specifically configured to display the target expression in the expression display area corresponding to the target virtual character to replace the currently displayed expression in the expression display area.
Optionally, in another possible implementation manner of the embodiment of the present invention, the updating module 140 is further configured to synchronize the target expression to the peer device according to the login information of the application on the peer device, and update the expression of the target virtual character on the peer device.
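As a non-authoritative sketch, the four modules of apparatus 10 could be wired together as follows; the class and method names are illustrative, not taken from the patent.

```python
# Sketch of how the modules described above could be composed.
class ExpressionUpdater:
    def __init__(self, acquirer, extractor, matcher, updater):
        self.acquirer = acquirer     # obtaining module 110
        self.extractor = extractor   # expression extraction module 120
        self.matcher = matcher       # expression matching module 130
        self.updater = updater       # updating module 140

    def run_once(self):
        """One pass: capture, extract, match, update."""
        image = self.acquirer.capture_face_image()
        if image is None:
            return                   # no face detected this pass
        data = self.extractor.extract(image)
        target = self.matcher.match(data)
        if target is not None:
            self.updater.update(target)
```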
It should be noted that the foregoing explanation of the method embodiment for updating an expression in an application program is also applicable to the apparatus for updating an expression in an application program of this embodiment, and the implementation principle is similar, and is not described herein again.
The device for updating the expression in the application program of this embodiment detects a face and collects a first image including the face while the application program runs, extracts expression data from the first image, acquires a target expression matched with the expression data, and updates the expression of the target virtual character in the application program with the target expression. Therefore, the expression required by the user can be automatically identified from the user's facial expression, manual selection by the user is not needed, the operational complexity of socializing is reduced, and user experience and social interaction are improved.
In order to implement the foregoing embodiments, the present invention further proposes a non-transitory computer-readable storage medium, on which a computer program is stored, which when executed by a processor is capable of implementing the method for updating an expression in an application program according to the foregoing embodiments.
In order to implement the foregoing embodiments, the present invention further provides a computer program product, wherein when instructions in the computer program product are executed by a processor, the method for updating expressions in an application program according to the foregoing embodiments is performed.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means at least two, e.g., two, three, etc., unless explicitly specified otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing steps of a custom logic function or process, and alternate implementations are included within the scope of the preferred embodiment of the present invention in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present invention.
The logic and/or steps represented in the flowcharts or otherwise described herein, such as an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Further, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are well known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disk, or the like. Although embodiments of the present invention have been shown and described above, it will be understood that the above embodiments are exemplary and are not to be construed as limiting the present invention; those of ordinary skill in the art may make changes, modifications, substitutions, and alterations to the above embodiments within the scope of the present invention.

Claims (10)

1. A method for updating an expression in an application program is characterized by comprising the following steps:
in the running process of an application program, detecting a human face and acquiring a first image comprising the human face;
extracting expression data from the first image; the expression data is used for representing the current facial expression of the face;
acquiring a target expression matched with the expression data, wherein the target expression is an expression matched with the expression data in a preset expression library, and the acquiring comprises: matching the expression data with feature information of each part corresponding to each expression in a predefined expression table, wherein, for a currently matched expression in the expression table, a part containing a plurality of pieces of feature information matches when at least one piece of its feature information matches the expression data; determining the currently matched expression as a first expression represented by the expression data when the feature information of every part of the same currently matched expression matches the expression data; and acquiring, from the preset expression library according to the first expression, a social expression and a 3D expression that match the first expression as the target expression;
updating the expression of the target virtual character in the application program by using the target expression, wherein the updating comprises the following steps:
replacing the current facial expression of the target virtual character with the 3D expression in the target expression, so that the target virtual character presents a facial expression that is the same as or similar to that of the human face; and
displaying the social expression in the target expression in an expression display area corresponding to the target virtual character to replace the expression currently displayed in the expression display area; and, when it is detected that a sending key arranged outside the expression display area is triggered, sending the social expression to an opposite-end device, displaying the social expression in a dialog box corresponding to the target virtual character, and hiding the content displayed in the expression display area.
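As a reading aid only, the sketch below illustrates the table-driven matching described in claim 1: a part that carries several feature entries counts as matched if any one entry matches, an expression counts as matched only when all of its parts match, and the matched first expression then keys a (social expression, 3D expression) pair in the library. All names and data shapes here (EXPRESSION_TABLE, EXPRESSION_LIBRARY, match_feature) are illustrative assumptions, not the patented implementation.

```python
# Predefined expression table (assumed layout): expression name -> part ->
# list of acceptable feature descriptors; a part may carry several alternatives.
EXPRESSION_TABLE = {
    "smile": {"mouth": ["corners_up", "open_slight"], "eyes": ["narrowed"]},
    "surprise": {"mouth": ["open_wide"], "eyes": ["widened"], "brows": ["raised"]},
}

# Preset expression library (assumed): first expression -> (social sticker, 3D asset).
EXPRESSION_LIBRARY = {
    "smile": ("smile_sticker.png", "smile_3d.anim"),
    "surprise": ("surprise_sticker.png", "surprise_3d.anim"),
}

def match_feature(expression_data, part, feature):
    # Placeholder predicate: does the extracted data for this part match the feature?
    return expression_data.get(part) == feature

def find_target_expression(expression_data):
    # Return the (social, 3D) pair for the first expression whose every part matches.
    for name, parts in EXPRESSION_TABLE.items():
        # A part matches when at least one of its feature entries matches (claim 1).
        if all(any(match_feature(expression_data, part, f) for f in feats)
               for part, feats in parts.items()):
            return EXPRESSION_LIBRARY.get(name)  # social expression + 3D expression
    return None

print(find_target_expression({"mouth": "corners_up", "eyes": "narrowed"}))
# -> ('smile_sticker.png', 'smile_3d.anim')
```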
2. The method of claim 1, wherein extracting expression data from the first image comprises:
extracting a face image from the first image;
identifying each facial organ from the face image and extracting feature data of each facial organ;
and forming the expression data using the feature data of all facial organs.
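A minimal sketch of the extraction pipeline in claim 2, assuming hypothetical helpers (detect_face_region, locate_organs, describe_organ) in place of whatever detector a real implementation would use; only the data flow is taken from the claim.

```python
def detect_face_region(image):
    """Stub: return the sub-image containing the face (claim 2, step 1)."""
    ...

def locate_organs(face_image):
    """Stub: return {organ_name: organ_sub_image} for eyes, brows, mouth, ..."""
    ...

def describe_organ(organ_image):
    """Stub: return a compact feature descriptor for one facial organ."""
    ...

def extract_expression_data(first_image):
    face = detect_face_region(first_image)
    organs = locate_organs(face)
    # The expression data is the union of all organs' feature data (claim 2, step 3).
    return {name: describe_organ(img) for name, img in organs.items()}
```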
3. The method of claim 2, wherein the obtaining of the target expression matching the expression data comprises:
performing matching in a preset expression library according to the expression data, and acquiring a matching degree between a first expression represented by the expression data and each expression in the expression library;
and taking the expression with the highest matching degree in the expression library as the target expression.
4. The method of claim 3, wherein the matching in a preset expression library according to the expression data to obtain the matching degree between the first expression represented by the expression data and each expression in the expression library comprises:
for each facial organ in the first expression, matching the feature data of the facial organ with the first feature data of the facial organ in each expression in the expression library to obtain the matching degree of the facial organ;
for all expressions in the expression library, weighting the matching degrees of the facial organs of the first expression with respect to a second expression in the expression library one by one according to respective preset weights, to obtain a matching degree between the first expression and the second expression; wherein the second expression is any one expression in the expression library.
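The weighted matching of claims 3 and 4 might look like the following sketch; the per-organ similarity function, the weight table ORGAN_WEIGHTS, and the feature representation are all assumptions made for illustration.

```python
ORGAN_WEIGHTS = {"mouth": 0.5, "eyes": 0.3, "brows": 0.2}  # assumed preset weights

def organ_similarity(a, b):
    # Toy per-organ matching degree in [0, 1]; a real system would compare descriptors.
    return 1.0 if a == b else 0.0

def expression_match_degree(first_expression, second_expression):
    # Weighted sum of per-organ matching degrees (claim 4).
    return sum(
        ORGAN_WEIGHTS[organ] * organ_similarity(feat, second_expression.get(organ))
        for organ, feat in first_expression.items()
    )

def best_expression(first_expression, library):
    # The library expression with the highest matching degree is the target (claim 3).
    return max(library, key=lambda name: expression_match_degree(first_expression, library[name]))

library = {
    "smile": {"mouth": "corners_up", "eyes": "narrowed", "brows": "neutral"},
    "surprise": {"mouth": "open_wide", "eyes": "widened", "brows": "raised"},
}
print(best_expression({"mouth": "corners_up", "eyes": "narrowed", "brows": "neutral"}, library))
# -> 'smile'
```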
5. The method of claim 2, wherein the obtaining of the target expression matching the expression data comprises:
selecting one or more facial organs from all facial organs as matching facial organs;
performing matching in the expression library according to the feature data of the matching facial organ to obtain, as a candidate expression set, all expressions comprising the feature data of the matching facial organ; wherein each candidate expression in the candidate expression set comprises the feature data of the matching facial organ;
and screening the candidate expression set by using the feature data of the remaining facial organs to obtain the target expression.
6. The method of claim 5, wherein the screening the candidate expression set using the feature data of the remaining facial organs to obtain the target expression comprises:
extracting first feature data of the remaining facial organs from the candidate expression set;
comparing the feature data of the remaining facial organs with the first feature data, and judging whether a target candidate expression exists in the candidate expression set; wherein the first feature data of the remaining facial organs in the target candidate expression is consistent with the feature data;
and if the target candidate expression exists, taking the target candidate expression as the target expression.
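Claims 5 and 6 describe a two-stage lookup: pre-filter the library by a few selected "matching facial organs", then screen the candidate set with the remaining organs' feature data. The sketch below, with invented data shapes, shows one plausible reading.

```python
def candidate_set(library, expression_data, matching_organs):
    # Stage 1 (claim 5): every library expression whose matching organs agree with the data.
    return {
        name: feats for name, feats in library.items()
        if all(feats.get(o) == expression_data.get(o) for o in matching_organs)
    }

def screen_candidates(candidates, expression_data, remaining_organs):
    # Stage 2 (claim 6): keep a candidate whose remaining-organ feature data is consistent.
    for name, feats in candidates.items():
        if all(feats.get(o) == expression_data.get(o) for o in remaining_organs):
            return name  # target candidate expression -> target expression
    return None

library = {
    "smile": {"mouth": "corners_up", "eyes": "narrowed", "brows": "neutral"},
    "grin": {"mouth": "corners_up", "eyes": "widened", "brows": "neutral"},
}
data = {"mouth": "corners_up", "eyes": "narrowed", "brows": "neutral"}
cands = candidate_set(library, data, matching_organs=["mouth"])
print(screen_candidates(cands, data, remaining_organs=["eyes", "brows"]))
# -> 'smile'
```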
7. The method of claim 1, wherein after the acquiring of the target expression matched with the expression data, the method further comprises:
and synchronizing the target expression to the opposite-end device according to login information of the application program on the opposite-end device, and updating the expression of the target virtual character on the opposite-end device.
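Claim 7's synchronization step could, for example, push an identifier of the target expression to the opposite-end device so that it updates its avatar too; the message schema and the send_to_peer transport below are invented for this sketch and are not specified by the patent.

```python
import json

def send_to_peer(payload: bytes) -> None:
    """Stub transport: deliver payload to the opposite-end device."""
    ...

def sync_target_expression(expression_id: str, peer_login: dict) -> None:
    # Hypothetical message; the patent only says the sync uses the peer's login info.
    message = {
        "type": "update_avatar_expression",
        "expression_id": expression_id,               # identifies the target expression
        "session": peer_login.get("session_token"),   # login info on the peer device
    }
    send_to_peer(json.dumps(message).encode("utf-8"))
```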
8. An apparatus for updating an expression in an application, comprising:
the acquisition module is used for detecting a face and acquiring a first image comprising the face in the running process of an application program;
the expression extraction module is used for extracting expression data from the first image; the expression data is used for representing the current facial expression of the face;
the expression matching module is used for acquiring a target expression matched with the expression data, wherein the target expression is an expression matched with the expression data in a preset expression library, and the acquiring comprises: matching the expression data with feature information of each part corresponding to each expression in a predefined expression table, wherein, for a currently matched expression in the expression table, a part containing a plurality of pieces of feature information matches when at least one piece of its feature information matches the expression data; determining the currently matched expression as a first expression represented by the expression data when the feature information of every part of the same currently matched expression matches the expression data; and acquiring, from the preset expression library according to the first expression, a social expression and a 3D expression that match the first expression as the target expression;
an updating module, configured to update an expression of a target virtual character in the application program by using the target expression, including:
replacing the current facial expression of the target virtual character with the 3D expression contained in the target expression, so that the target virtual character presents a facial expression that is the same as or similar to that of the human face; and
displaying the social expression in the target expression in an expression display area corresponding to the target virtual character to replace the expression currently displayed in the expression display area; and, when it is detected that a sending key arranged outside the expression display area is triggered, sending the social expression to an opposite-end device, displaying the social expression in a dialog box corresponding to the target virtual character, and hiding the content displayed in the expression display area.
9. The apparatus of claim 8, wherein the expression extraction module is specifically configured to:
extracting a face image from the first image;
identifying each facial organ from the face image and extracting feature data of each facial organ;
and forming the expression data using the feature data of all facial organs.
10. The apparatus of claim 8, further comprising:
a synchronization module, configured to synchronize the target expression to the opposite-end device according to login information of the application program on the opposite-end device and to update the expression of the target virtual character on the opposite-end device.
CN201710959429.7A 2017-10-16 2017-10-16 Method and device for updating expression in application program Active CN109670385B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710959429.7A CN109670385B (en) 2017-10-16 2017-10-16 Method and device for updating expression in application program

Publications (2)

Publication Number Publication Date
CN109670385A CN109670385A (en) 2019-04-23
CN109670385B (en) 2023-04-18

Family

ID=66140268

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710959429.7A Active CN109670385B (en) 2017-10-16 2017-10-16 Method and device for updating expression in application program

Country Status (1)

Country Link
CN (1) CN109670385B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110136228B (en) * 2019-05-16 2023-04-18 腾讯科技(深圳)有限公司 Face replacement method, device, terminal and storage medium for virtual character
WO2020263672A1 (en) * 2019-06-27 2020-12-30 Raitonsa Dynamics Llc Assisted expressions
CN110837294B (en) * 2019-10-14 2023-12-12 成都西山居世游科技有限公司 Facial expression control method and system based on eyeball tracking
US11380037B2 (en) 2019-10-30 2022-07-05 Tencent Technology (Shenzhen) Company Limited Method and apparatus for generating virtual operating object, storage medium, and electronic device
CN110755847B (en) * 2019-10-30 2021-03-16 腾讯科技(深圳)有限公司 Virtual operation object generation method and device, storage medium and electronic device
CN113099150B (en) * 2020-01-08 2022-12-02 华为技术有限公司 Image processing method, device and system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006263122A (en) * 2005-03-24 2006-10-05 Sega Corp Game apparatus, game system, game data processing method, program for game data processing method and storage medium
CN105975563A (en) * 2016-04-29 2016-09-28 腾讯科技(深圳)有限公司 Facial expression recommendation method and apparatus
CN106355629A (en) * 2016-08-19 2017-01-25 腾讯科技(深圳)有限公司 Virtual image configuration method and device
CN107153496A (en) * 2017-07-04 2017-09-12 北京百度网讯科技有限公司 Method and apparatus for inputting emotion icons

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant