CN112000828B - Method, device, electronic equipment and readable storage medium for searching expression picture

Method, device, electronic equipment and readable storage medium for searching expression picture

Info

Publication number
CN112000828B
CN112000828B (application CN202010700613.1A)
Authority
CN
China
Prior art keywords
expression
picture
expression data
user
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010700613.1A
Other languages
Chinese (zh)
Other versions
CN112000828A (en)
Inventor
饶少艳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202010700613.1A
Publication of CN112000828A
Application granted
Publication of CN112000828B
Legal status: Active

Links

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50: Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/53: Querying
    • G06F16/535: Filtering based on additional data, e.g. user or group profiles
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50: Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/53: Querying
    • G06F16/532: Query formulation, e.g. graphical querying

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application discloses a method, a device, electronic equipment and a readable storage medium for searching expression pictures, and relates to the technical fields of artificial intelligence, computer vision and image processing. The scheme adopted by the application when searching expression pictures is as follows: acquiring a picture to be processed input by a user; generating first expression data according to the face information in the picture to be processed; and determining a target expression picture recommended to the user according to the first expression data. The method and the device can improve the accuracy and convenience of expression picture searching.

Description

Method, device, electronic equipment and readable storage medium for searching expression picture
Technical Field
The application relates to the field of artificial intelligence, in particular to the technical field of image processing. The application provides a method, a device, electronic equipment and a readable storage medium for searching expression pictures.
Background
With the continuous development of terminal technology, social applications on terminal devices are becoming increasingly diverse. When using social applications, users often like to express their moods, ideas and the like through expression pictures. Communication through expression pictures has therefore become an important part of daily social life, and expression pictures can help bring people closer together.
In the prior art, users usually search for expression pictures by entering keywords. In many cases, however, a user may need to input several keywords before finding an expression picture that meets the requirement, or may fail to find one at all, so expression pictures cannot be recommended to the user accurately and conveniently.
Disclosure of Invention
To solve the above technical problem, the application provides a method for searching expression pictures, comprising the following steps: acquiring a picture to be processed input by a user; generating first expression data according to the face information in the picture to be processed; and determining a target expression picture recommended to the user according to the first expression data.
To solve the above technical problem, the application provides a device for searching expression pictures, comprising: a first acquisition unit, configured to acquire a picture to be processed input by a user; a generating unit, configured to generate first expression data according to the face information in the picture to be processed; and a processing unit, configured to determine a target expression picture recommended to the user according to the first expression data.
An electronic device, comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method described above.
A non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the above method.
A computer program product comprising a computer program which, when executed by a processor, implements the method described above.
One embodiment of the above application has the following advantages or benefits: it can improve the convenience and accuracy of expression picture searching. Because the technical means of searching for expression pictures according to first expression data generated from the picture to be processed is adopted, the technical problem in the prior art that expression picture searching is neither accurate nor convenient because the user must input keywords is solved, and the technical effect of improving the accuracy and convenience of expression picture searching is achieved.
Other effects of the above alternative implementations will be described below in connection with specific embodiments.
Drawings
The drawings are included to provide a better understanding of the present application and are not to be construed as limiting the application. Wherein:
FIG. 1 is a schematic diagram of a first embodiment according to the present application;
FIG. 2 is a schematic diagram of a second embodiment according to the present application;
FIG. 3 is a schematic diagram of a third embodiment according to the present application;
FIG. 4 is a schematic diagram of a fourth embodiment according to the present application;
FIG. 5 is a schematic diagram of a fifth embodiment according to the present application;
Fig. 6 is a block diagram of an electronic device for implementing a method of searching for an expression picture according to an embodiment of the application.
Detailed Description
Exemplary embodiments of the present application will now be described with reference to the accompanying drawings, in which various details of the embodiments of the present application are included to facilitate understanding, and are to be considered merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the application. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Fig. 1 is a schematic diagram according to a first embodiment of the present application. As shown in fig. 1, the method for searching for an expression picture in this embodiment may specifically include the following steps:
S101, acquiring a picture to be processed input by a user;
S102, generating first expression data according to face information in the picture to be processed;
S103, determining a target expression picture recommended to the user according to the first expression data.
According to the method for searching expression pictures of this embodiment, after the first expression data is generated from the face information in the picture to be processed, the first expression data is used to search for expression pictures, so that target expression pictures matching the user's expression are selected and recommended to the user, which improves the accuracy and convenience of expression picture searching.
In this embodiment, when S101 is executed to obtain the picture to be processed input by the user, the picture to be processed may be obtained by receiving a shot picture uploaded by the user, or by receiving a picture shot by the user in real time.
After executing S101 to obtain the to-be-processed picture, the embodiment executes S102 to generate first expression data according to face information in the to-be-processed picture. The first expression data generated in this embodiment is a picture.
Specifically, when executing S102 to generate the first expression data according to the face information in the picture to be processed, the present embodiment may adopt the following optional implementation: extracting a face picture from the picture to be processed; determining face information of the user according to the extracted face picture, where the determined face information may include the user's face contour, eyes, eyebrows, nose, mouth, ears, hair, face angle, facial lines and the like; and removing face information of a first preset type from the face picture to generate the first expression data. The removed face information of the first preset type is face information irrelevant to facial expression, such as the face contour, ears and hair, so that only face information relevant to the expression is retained, such as the eyes, eyebrows, nose, mouth, facial lines formed by the expression, and face angle.
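By way of a non-limiting sketch, this step can be approximated with off-the-shelf face-landmark tooling. The snippet below assumes the face_recognition and Pillow libraries; treating the chin contour as the only removable landmark group is an illustrative assumption (these libraries expose no landmarks for ears or hair), not the patent's definition:

```python
# Sketch of S102, assuming the face_recognition and Pillow libraries.
# Which landmark groups count as "expression-relevant" is an assumption.
import face_recognition
from PIL import Image, ImageDraw

IRRELEVANT = {"chin"}  # face contour: first-preset-type information

def first_expression_data(path: str) -> Image.Image:
    image = face_recognition.load_image_file(path)
    faces = face_recognition.face_landmarks(image)
    if not faces:
        raise ValueError("no face found in the to-be-processed picture")
    landmarks = faces[0]  # use the first detected face

    # Draw only expression-relevant landmark groups (eyes, eyebrows,
    # nose, lips) onto a blank canvas, dropping the identity-specific
    # face contour, so the result depicts the expression alone.
    canvas = Image.new("RGB", (image.shape[1], image.shape[0]), "white")
    draw = ImageDraw.Draw(canvas)
    for feature, points in landmarks.items():
        if feature not in IRRELEVANT:
            draw.line(points, fill="black", width=2)
    return canvas
```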
In the first expression data generated in this way, face information irrelevant to the expression is excluded and only face information relevant to the user's expression is contained. This ensures that the target expression pictures obtained by searching have an expression similar to that of the face in the picture to be processed, thereby improving the accuracy of expression picture searching.
It can be appreciated that, after executing S102 to remove the face information of the first preset type from the face picture, the present embodiment may further adjust face information of a second preset type in the face picture and use the adjustment result as the first expression data. The face information of the second preset type corresponds to facial features specific to the user, such as eye size, nose size and mouth shape. When adjusting it, this embodiment may perform operations such as making large eyes smaller, making small eyes larger, or changing the shape of the mouth.
In this way, facial features unique to the user can be altered by adjusting the face information of the second preset type in the face picture, so that richer target expression pictures can be found according to the adjusted first expression data and recommended to the user, which improves the recall rate of expression picture searching.
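A minimal sketch of such an adjustment, assuming facial features are represented as landmark point sets as in the previous snippet (the 0.8 scale factor is an arbitrary illustrative value):

```python
# Sketch of adjusting second-preset-type face information: scale one
# landmark group (e.g. an eye) about its centroid. A factor below 1
# makes large eyes smaller; above 1 makes small eyes larger.
def scale_feature(points, factor=0.8):
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    return [(cx + (x - cx) * factor, cy + (y - cy) * factor)
            for x, y in points]

# Example with a hypothetical left-eye landmark polygon.
left_eye = [(100, 120), (110, 115), (120, 115), (130, 120),
            (120, 125), (110, 125)]
print(scale_feature(left_eye, 0.8))
```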
After executing S102 to generate the first expression data, the present embodiment executes S103 to determine a target expression picture recommended to the user according to the generated first expression data. The first expression data used in this embodiment serves to retrieve expression pictures of real persons.
In this embodiment, when S103 is executed to determine the target expression picture recommended to the user according to the first expression data, the following optional implementation may be adopted: calculating the matching degree between the first expression data and each candidate expression picture, for example expression pictures on the Internet or in a preset database; and selecting several expression pictures as target expression pictures according to the matching degree, for example selecting the top-N ranked expression pictures, where N is a positive integer greater than or equal to 1.
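The embodiment does not fix a particular matching-degree measure; one hedged stand-in is cosine similarity over image embeddings (produced by any image encoder), followed by top-N selection:

```python
# Sketch of S103's matching step: cosine similarity over embeddings
# is an assumed stand-in for the unspecified matching degree.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def top_n_matches(query_vec, candidates, n=5):
    """candidates: iterable of (picture_id, embedding) pairs."""
    scored = [(pid, cosine(query_vec, vec)) for pid, vec in candidates]
    scored.sort(key=lambda item: item[1], reverse=True)
    return scored[:n]  # the top-N target expression pictures
```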
It can be appreciated that, when S103 is executed to determine the target expression picture recommended to the user according to the first expression data, the present embodiment may further include the following: acquiring a search term input by a user; and determining a target expression picture recommended to the user according to the first expression data and the search term. That is, when determining the target expression picture, the present embodiment may further use the search term input by the user as the auxiliary information, thereby acquiring a greater number of target expression pictures.
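A hedged sketch of using the search term as auxiliary information: the keyword first narrows the candidate pool, then picture matching ranks what remains (the tags field is an illustrative assumption):

```python
# Sketch of combining the search term with picture matching: the term
# filters candidates by their (assumed) text tags before ranking.
def search_with_term(query_vec, candidates, term, n=5):
    """candidates: list of (picture_id, embedding, tags) triples."""
    pool = [(pid, vec) for pid, vec, tags in candidates if term in tags] \
        or [(pid, vec) for pid, vec, _ in candidates]  # fall back to all
    return top_n_matches(query_vec, pool, n)  # defined in the S103 sketch
```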
In addition, when S103 is executed to determine the target expression picture recommended to the user according to the first expression data, the present embodiment may further: extract action information of the user, such as the user's body posture and hand gestures, from the picture to be processed; and determine the target expression picture recommended to the user according to the first expression data and the action information. That is, when determining the target expression picture, this embodiment can acquire expression pictures matching the user's actions in addition to expression pictures matching the user's expression, thereby acquiring richer target expression pictures.
It can be appreciated that the same expression picture may be obtained using the first expression data together with the search term and/or the action information. To avoid recommending duplicate target expression pictures to the user, this embodiment needs to deduplicate the obtained target expression pictures, for example deduplicating multiple expression pictures whose sources or links are identical and whose content matches completely.
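A sketch of the deduplication step, assuming each candidate carries a source link and its raw content bytes (the field names are illustrative):

```python
# Sketch of deduplication: drop candidates whose source/link repeats
# or whose content bytes hash identically to an earlier candidate.
import hashlib

def dedupe(pictures):
    """pictures: list of dicts with 'link' (str) and 'content' (bytes)."""
    seen_links, seen_hashes, unique = set(), set(), []
    for pic in pictures:
        digest = hashlib.md5(pic["content"]).hexdigest()
        if pic["link"] in seen_links or digest in seen_hashes:
            continue  # identical source/link or fully matching content
        seen_links.add(pic["link"])
        seen_hashes.add(digest)
        unique.append(pic)
    return unique
```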
After the target expression pictures recommended to the user are determined in S103, this embodiment recommends them to the user, for example by displaying each target expression picture on the screen.
By adopting the method provided by the embodiment, after the first expression data is generated according to the face information in the picture to be processed, the first expression data is utilized to search the expression picture, so that the target expression picture matched with the expression information of the user is selected and recommended to the user, and the accuracy of the expression picture search is improved.
Fig. 2 is a schematic diagram according to a second embodiment of the present application. As shown in fig. 2, the method for searching for an expression picture of the present embodiment may specifically include the following steps:
S201, obtaining a picture to be processed input by a user;
S202, generating first expression data according to face information in the picture to be processed;
S203, acquiring an expression symbol corresponding to the first expression data as second expression data;
S204, determining a target expression picture recommended to the user according to the first expression data and the second expression data.
The second expression data obtained in S203 is a cartoon-style, two-dimensional expression picture corresponding to the first expression data. That is, the present embodiment removes the human features contained in the first expression data by flattening it, and uses the expression symbol corresponding to the first expression data as the second expression data. The second expression data in this embodiment is used to retrieve cartoon- or emoji-type expression pictures.
In this embodiment, when S203 is executed to acquire the expression symbol corresponding to the first expression data as the second expression data, the expression symbol on the Internet or in a preset database that best matches the first expression data may be used as the second expression data; alternatively, each piece of face information in the first expression data may be replaced by a corresponding graphic and the generated expression symbol used as the second expression data, for example replacing the eyes, eyebrows and mouth in the first expression data with horizontal lines of different lengths and curvatures.
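As a hedged sketch of the second option, each landmark group could be replaced by a horizontal stroke whose length and vertical position track the feature; this stroke mapping is an illustrative assumption:

```python
# Sketch of S203: flatten first expression data into an expression
# symbol by replacing each landmark group with a horizontal stroke.
from PIL import Image, ImageDraw

def to_expression_symbol(landmarks, size=(256, 256)):
    """landmarks: dict mapping feature name -> list of (x, y) points."""
    canvas = Image.new("RGB", size, "white")
    draw = ImageDraw.Draw(canvas)
    for points in landmarks.values():
        xs = [p[0] for p in points]
        ys = [p[1] for p in points]
        y = sum(ys) / len(ys)  # mean height of the feature
        # Stroke length follows the feature's width, so a wide smile
        # yields a long line and narrowed eyes yield short ones.
        draw.line([(min(xs), y), (max(xs), y)], fill="black", width=3)
    return canvas
```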
In this embodiment, when S204 is executed to determine the target expression picture recommended to the user according to the first expression data and the second expression data, the following optional implementation may be adopted: determining a first weight value corresponding to the first expression data and a second weight value corresponding to the second expression data, where the first weight value is larger than the second weight value; calculating the matching degree between the first expression data and each expression picture in combination with the first weight value; calculating the matching degree between the second expression data and each expression picture in combination with the second weight value; and selecting several expression pictures as target expression pictures recommended to the user according to the calculated matching degrees, for example selecting the top-M ranked expression pictures, where M is a positive integer greater than or equal to 1.
For example, suppose the first weight value is 0.7 and the second weight value is 0.4. If the picture matching degree between expression picture 1 and the first expression data is 0.8, this embodiment determines the final matching degree between expression picture 1 and the first expression data to be 0.7 × 0.8 = 0.56; if the picture matching degree between expression picture 2 and the second expression data is 0.8, the final matching degree between expression picture 2 and the second expression data is 0.4 × 0.8 = 0.32.
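A minimal sketch of this weighted fusion; the weights reproduce the example above, the per-picture maximum and the rounding are illustrative choices, and the pattern extends directly to the third weight value of the later embodiments:

```python
# Sketch of weighted matching-degree fusion (weights as in the text).
WEIGHTS = {"first": 0.7, "second": 0.4}

def fused_scores(match_degrees):
    """match_degrees: {picture_id: {source: degree, ...}}.
    Returns each picture's best weighted matching degree."""
    return {pid: round(max(WEIGHTS[s] * d for s, d in degrees.items()), 2)
            for pid, degrees in match_degrees.items()}

# Reproduces the figures in the text: 0.7 * 0.8 and 0.4 * 0.8.
print(fused_scores({"pic1": {"first": 0.8}, "pic2": {"second": 0.8}}))
# -> {'pic1': 0.56, 'pic2': 0.32}
```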
It can be appreciated that the same expression picture may be acquired using the first expression data and the second expression data. To avoid recommending duplicate target expression pictures to the user, this embodiment needs to deduplicate the acquired target expression pictures, for example deduplicating multiple expression pictures whose sources or links are identical and whose content matches completely.
In addition, in this embodiment, when S204 is executed to determine the target expression picture recommended to the user according to the first expression data and the second expression data, the search term and/or the action information may also be acquired and used in the expression picture search at the same time.
By adopting the method of this embodiment, after the first expression data is generated according to the face information in the picture to be processed, the second expression data is obtained from the generated first expression data, so that person-type, cartoon-type and emoji-type expression pictures are searched simultaneously, which improves the recall rate of expression picture searching.
Fig. 3 is a schematic diagram of a third embodiment according to the present application. As shown in fig. 3, the method for searching for an expression picture of the present embodiment may specifically include the following steps:
S301, obtaining a picture to be processed input by a user;
S302, generating first expression data according to face information in the picture to be processed;
S303, acquiring emotion information corresponding to the first expression data as third expression data;
S304, determining a target expression picture recommended to the user according to the first expression data and the third expression data.
The third expression data obtained by executing S303 in this embodiment is emotion information corresponding to the first expression data, specifically keywords corresponding to the user's emotion; the third expression data obtained in S303 may include a plurality of keywords. That is, in addition to performing a picture search using the first expression data, this embodiment may perform a keyword search using text.
In this embodiment, when executing S303 to use the emotion information corresponding to the first expression data as the third expression data, the emotion information may be obtained by an existing emotion recognition technique, for example a deep learning model, which is not described in detail here.
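A sketch of this step, with recognize_emotion() standing in for whatever emotion-recognition model is deployed; the label set and the dummy probabilities are assumptions for illustration:

```python
# Sketch of S303: map first expression data to emotion keywords.
EMOTION_LABELS = ["happy", "sad", "angry", "surprised", "neutral"]

def recognize_emotion(expression_picture):
    """Placeholder for an existing emotion-recognition model (e.g. a
    deep learning classifier); returns one probability per label."""
    return [0.60, 0.05, 0.05, 0.25, 0.05]  # dummy output

def third_expression_data(expression_picture, top_k=2):
    probs = recognize_emotion(expression_picture)
    ranked = sorted(zip(EMOTION_LABELS, probs), key=lambda t: -t[1])
    return [label for label, _ in ranked[:top_k]]  # emotion keywords

print(third_expression_data(None))  # -> ['happy', 'surprised']
```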
In this embodiment, when S304 is executed to determine the target expression picture recommended to the user according to the first expression data and the third expression data, the following optional implementation may be adopted: determining a first weight value corresponding to the first expression data and a third weight value corresponding to the third expression data, where the first weight value is larger than the second weight value, and the second weight value is larger than the third weight value; calculating the matching degree between the first expression data and each expression picture in combination with the first weight value; calculating the matching degree between the third expression data and each expression picture in combination with the third weight value; and selecting several expression pictures as target expression pictures recommended to the user according to the calculated matching degrees, for example selecting the top-K ranked expression pictures, where K is a positive integer greater than or equal to 1.
For example, suppose the first weight value is 0.7 and the third weight value is 0.3. If the picture matching degree between expression picture 1 and the first expression data is 0.8, this embodiment determines the final matching degree between expression picture 1 and the first expression data to be 0.7 × 0.8 = 0.56; if the picture matching degree between expression picture 3 and the third expression data is 0.8, the final matching degree between expression picture 3 and the third expression data is 0.3 × 0.8 = 0.24.
It can be appreciated that the same expression picture may be acquired using the first expression data and the third expression data. To avoid recommending duplicate target expression pictures to the user, this embodiment needs to deduplicate the acquired target expression pictures, for example deduplicating multiple expression pictures whose sources or links are identical and whose content matches completely.
In addition, in this embodiment, when S304 is executed to determine the target expression picture recommended to the user according to the first expression data and the third expression data, the search term and/or the action information may also be acquired and used in the expression picture search at the same time.
Fig. 4 is a schematic view of a fourth embodiment according to the present application. As shown in fig. 4, the method for searching for an expression picture of the present embodiment may specifically include the following steps:
S401, acquiring a picture to be processed input by a user;
S402, generating first expression data according to face information in the picture to be processed;
S403, acquiring an expression symbol corresponding to the first expression data as second expression data;
S404, acquiring emotion information corresponding to the first expression data as third expression data;
S405, determining a target expression picture recommended to the user according to the first expression data, the second expression data and the third expression data.
According to the above method, the expression picture search is carried out according to the first expression data, the second expression data and the third expression data simultaneously, where the first expression data and the second expression data are used for picture matching and the third expression data is used for keyword matching, which improves the comprehensiveness of the target expression pictures obtained by the search.
In this embodiment, when S405 is executed to determine the target expression picture recommended to the user according to the first expression data, the second expression data and the third expression data, the following optional implementation may be adopted: determining a first weight value corresponding to the first expression data, a second weight value corresponding to the second expression data and a third weight value corresponding to the third expression data, where the first weight value is larger than the second weight value, and the second weight value is larger than the third weight value; calculating the matching degree between the first expression data and each expression picture in combination with the first weight value; calculating the matching degree between the second expression data and each expression picture in combination with the second weight value; calculating the matching degree between the third expression data and each expression picture in combination with the third weight value; and selecting several expression pictures as target expression pictures recommended to the user according to the calculated matching degrees, for example selecting the top-L ranked expression pictures, where L is a positive integer greater than or equal to 1.
It can be appreciated that the same expression picture may be acquired using the first expression data, the second expression data and the third expression data. To avoid recommending duplicate target expression pictures to the user, this embodiment needs to deduplicate the acquired target expression pictures, for example deduplicating multiple expression pictures whose sources or links are identical and whose content matches completely.
In addition, in this embodiment, when S405 is executed to determine the target expression picture recommended to the user according to the first expression data, the second expression data and the third expression data, the search term and/or the action information may also be acquired and used in the expression picture search at the same time.
Fig. 5 is a schematic diagram according to a fifth embodiment of the present application. As shown in fig. 5, the apparatus for searching for an expression picture of the present embodiment includes:
a first obtaining unit 501, configured to obtain a to-be-processed picture input by a user;
a generating unit 502, configured to generate first expression data according to face information in the to-be-processed picture;
a processing unit 503, configured to determine a target expression picture recommended to the user according to the first expression data.
When obtaining the to-be-processed picture input by the user, the first obtaining unit 501 may receive a shot picture uploaded by the user, or receive a picture shot by the user in real time.
After the first obtaining unit 501 obtains the to-be-processed picture, the generating unit 502 generates first expression data according to face information in the to-be-processed picture. The first expression data generated by the generating unit 502 is a picture.
Specifically, when the generating unit 502 generates the first expression data according to the face information in the to-be-processed picture, the optional implementation manner may be: extracting a face picture in the picture to be processed; determining face information of a user according to the extracted face picture; and removing the face information of the first preset type in the face picture, and generating first expression data.
The first expression data generated by the generating unit 502 in this way excludes face information irrelevant to the expression and contains only face information relevant to the user's expression, which ensures that the target expression pictures obtained by searching have an expression similar to that of the face in the to-be-processed picture, thereby improving the accuracy of expression picture searching.
It may be appreciated that, after removing the first preset type of face information from the face picture, the generating unit 502 may further adjust face information of a second preset type in the face picture and use the adjustment result as the first expression data. When adjusting the face information of the second preset type, the generating unit 502 may perform operations such as making large eyes smaller, making small eyes larger, or changing the shape of the mouth.
In this way, the generating unit 502 can alter facial features unique to the user by adjusting the face information of the second preset type in the face picture, so that richer target expression pictures can be found according to the adjusted first expression data and recommended to the user, which improves the recall rate of expression picture searching.
After the generating unit 502 generates the first expression data, the processing unit 503 determines a target expression picture recommended to the user according to the generated first expression data. The first expression data used by the processing unit 503 serves to retrieve expression pictures of real persons.
When determining the target expression picture recommended to the user according to the first expression data, the processing unit 503 may adopt the following alternative implementation manners: calculating the matching degree between the first expression data and each expression picture; and selecting a plurality of expression pictures as target expression pictures according to the matching degree.
When determining the target expression picture recommended to the user according to the first expression data, the processing unit 503 may adopt the following alternative implementation manners: acquiring an expression symbol corresponding to the first expression data as second expression data; and determining a target expression picture recommended to the user according to the first expression data and the second expression data.
The second expression data obtained by the processing unit 503 is a cartoon-style, two-dimensional expression picture corresponding to the first expression data. That is, the processing unit 503 removes the human features contained in the first expression data by flattening it, and uses the expression symbol corresponding to the first expression data as the second expression data. The second expression data is used by the processing unit 503 to retrieve cartoon- or emoji-type expression pictures.
When the processing unit 503 obtains the expression symbol corresponding to the first expression data as the second expression data, the expression symbol that is most matched with the first expression data in the internet or a preset database may be used as the second expression data; the generated expression symbol may be used as the second expression data after each face information in the first expression data is replaced by the corresponding graphic.
When determining the target expression picture recommended to the user according to the first expression data and the second expression data, the processing unit 503 may adopt the following optional implementation: determining a first weight value corresponding to the first expression data and a second weight value corresponding to the second expression data; calculating the matching degree between the first expression data and each expression picture in combination with the first weight value; calculating the matching degree between the second expression data and each expression picture in combination with the second weight value; and selecting several expression pictures as target expression pictures recommended to the user according to the calculated matching degrees.
It will be appreciated that the same expression picture may be acquired using the first expression data and the second expression data, and the processing unit 503 needs to deduplicate the acquired target expression picture in order to avoid recommending a repeated target expression picture to the user.
When determining the target expression picture recommended to the user according to the first expression data, the processing unit 503 may adopt the following alternative implementation manners: acquiring emotion information corresponding to the first expression data as third expression data; and determining a target expression picture recommended to the user according to the first expression data and the third expression data.
The third expression data obtained by the processing unit 503 is emotion information corresponding to the first expression data, specifically keywords corresponding to the user's emotion; the obtained third expression data may include a plurality of keywords. That is, in addition to performing a picture search using the first expression data, the processing unit 503 may perform a keyword search using text.
When the emotion information corresponding to the first expression data is used as the third expression data, the processing unit 503 may obtain the emotion information corresponding to the first expression data through an existing emotion recognition technology, for example, a deep learning model, which is not described herein in detail.
When determining the target expression picture recommended to the user according to the first expression data and the third expression data, the processing unit 503 may adopt the following optional implementation: determining a first weight value corresponding to the first expression data and a third weight value corresponding to the third expression data; calculating the matching degree between the first expression data and each expression picture in combination with the first weight value; calculating the matching degree between the third expression data and each expression picture in combination with the third weight value; and selecting several expression pictures as target expression pictures recommended to the user according to the calculated matching degrees.
It will be appreciated that the same expression picture may be acquired using the first expression data and the third expression data, and the processing unit 503 needs to deduplicate the acquired target expression picture in order to avoid recommending a repeated target expression picture to the user.
When determining the target expression picture recommended to the user according to the first expression data, the processing unit 503 may adopt the following alternative implementation manners: acquiring an expression symbol corresponding to the first expression data as second expression data; acquiring emotion information corresponding to the first expression data as third expression data; and determining a target expression picture recommended to the user according to the first expression data, the second expression data and the third expression data.
The processing unit 503 performs the searching of the expression picture according to the first expression data, the second expression data and the third expression data simultaneously through the above method, so as to improve the comprehensiveness of the target expression picture obtained by searching.
When determining the target expression picture recommended to the user according to the first expression data, the second expression data and the third expression data, the processing unit 503 may adopt the following optional implementation: determining a first weight value corresponding to the first expression data, a second weight value corresponding to the second expression data and a third weight value corresponding to the third expression data; calculating the matching degree between the first expression data and each expression picture in combination with the first weight value; calculating the matching degree between the second expression data and each expression picture in combination with the second weight value; calculating the matching degree between the third expression data and each expression picture in combination with the third weight value; and selecting several expression pictures as target expression pictures recommended to the user according to the calculated matching degrees.
It will be appreciated that the same expression image may be obtained using the first expression data, the second expression data, and the third expression data, so as to avoid recommending a repeated target expression image to the user, the processing unit 503 needs to perform deduplication on the obtained target expression image, for example, performing deduplication on multiple expression images with identical sources or links and completely matched content.
The apparatus for searching for an expression picture in this embodiment may further include a second obtaining unit 504, configured to obtain a search term input by a user, so that the processing unit 503 determines, according to the first expression data and the search term, a target expression picture recommended to the user. That is, the processing unit 503 may further use the search word input by the user as auxiliary information when determining the target expression picture, thereby acquiring a greater number of target expression pictures.
In addition, the apparatus for searching expression pictures in this embodiment may further include a third obtaining unit 505, configured to extract action information of the user from the picture to be processed, so that the processing unit 503 determines, according to the first expression data and the action information, a target expression picture recommended to the user. That is, when determining the target expression picture, the processing unit 503 may acquire expression pictures matching the user's actions in addition to expression pictures matching the user's expression, thereby acquiring richer target expression pictures.
It will be appreciated that the same expression picture may be acquired using the first expression data together with the search term and/or the action information, and the processing unit 503 needs to deduplicate the acquired target expression pictures in order to avoid recommending duplicate target expression pictures to the user.
According to embodiments of the present application, the present application also provides an electronic device, a computer-readable storage medium, and a computer program product.
As shown in fig. 6, there is a block diagram of an electronic device of a method of searching for an expression picture according to an embodiment of the application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the application described and/or claimed herein.
As shown in fig. 6, the electronic device includes: one or more processors 601, a memory 602, and interfaces for connecting the components, including high-speed interfaces and low-speed interfaces. The components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions executed within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output device, such as a display device coupled to an interface. In other embodiments, multiple processors and/or multiple buses may be used, if desired, along with multiple memories. Also, multiple electronic devices may be connected, each providing a portion of the necessary operations (e.g., as a server array, a set of blade servers, or a multiprocessor system). One processor 601 is illustrated in fig. 6.
The memory 602 is a non-transitory computer readable storage medium provided by the present application. The memory stores instructions executable by the at least one processor, so that the at least one processor performs the method for searching expression pictures provided by the present application. The non-transitory computer-readable storage medium of the present application stores computer instructions for causing a computer to execute the method for searching expression pictures provided by the present application.
The memory 602, as a non-transitory computer readable storage medium, may be used to store non-transitory software programs, non-transitory computer executable programs, and modules, such as the program instructions/modules corresponding to the method of searching for an expression picture in the embodiment of the application (e.g., the first obtaining unit 501, the generating unit 502, the processing unit 503, the second obtaining unit 504, and the third obtaining unit 505 shown in fig. 5). The processor 601 performs the various functional applications of the server and data processing, that is, implements the method of searching for expression pictures in the above method embodiments, by running the non-transitory software programs, instructions, and modules stored in the memory 602.
The memory 602 may include a storage program area and a storage data area, wherein the storage program area may store an operating system and at least one application program required for a function; the storage data area may store data created according to the use of the electronic device, and the like. In addition, the memory 602 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some embodiments, the memory 602 may optionally include memory remotely located with respect to the processor 601, which may be connected to the electronic device of the method of searching for expression pictures through a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device of the method of searching for an expression picture may further include: an input device 603 and an output device 604. The processor 601, memory 602, input device 603 and output device 604 may be connected by a bus or otherwise, for example in fig. 6.
The input device 603 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device of the method of searching for expression pictures; examples of such input devices include a touch screen, a keypad, a mouse, a track pad, a touch pad, a joystick, one or more mouse buttons, and a track ball. The output device 604 may include a display device, auxiliary lighting devices (e.g., LEDs), tactile feedback devices (e.g., vibration motors), and the like. The display device may include, but is not limited to, a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) display, and a plasma display. In some implementations, the display device may be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application-specific integrated circuits (ASICs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also referred to as programs, software, software applications, or code) include machine instructions for a programmable processor, and may be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic disks, optical disks, memory, programmable logic devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local area networks (LANs), wide area networks (WANs), and the Internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
According to the technical scheme provided by the embodiments of the application, after the first expression data is generated according to the face information in the picture to be processed, the generated first expression data is used to search for expression pictures, so that target expression pictures matching the user's expression are selected and recommended to the user, which improves the accuracy and convenience of expression picture searching.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps described in the present application may be performed in parallel, sequentially, or in a different order, so long as the desired results of the technical solution disclosed in the present application can be achieved; no limitation is imposed herein.
The above embodiments do not limit the scope of the present application. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present application should be included in the scope of the present application.

Claims (15)

1. A method of searching for an expression picture, comprising:
acquiring a picture to be processed input by a user;
generating first expression data according to the face information in the picture to be processed;
determining a target expression picture recommended to a user according to the first expression data;
wherein, according to the first expression data, determining the target expression picture recommended to the user includes:
acquiring an expression symbol corresponding to the first expression data as second expression data;
acquiring emotion information corresponding to the first expression data as third expression data;
and determining a target expression picture recommended to the user according to the first expression data, the second expression data and the third expression data.
2. The method of claim 1, wherein the generating first expression data according to face information in the to-be-processed picture comprises:
extracting a face picture in the picture to be processed;
determining face information of a user according to the extracted face picture;
and removing the face information of the first preset type in the face picture, and generating the first expression data.
3. The method of claim 1, wherein the determining a target expression picture recommended to a user from the first expression data comprises:
acquiring an expression symbol corresponding to the first expression data as second expression data;
and determining a target expression picture recommended to the user according to the first expression data and the second expression data.
4. The method of claim 1, wherein the determining a target expression picture recommended to the user from the first expression data and the second expression data comprises:
acquiring emotion information corresponding to the first expression data as third expression data;
and determining a target expression picture recommended to the user according to the first expression data and the third expression data.
5. The method of claim 1, further comprising,
acquiring a search term input by a user;
and determining a target expression picture recommended to the user according to the first expression data and the search term.
6. The method of claim 1, further comprising,
extracting action information of a user from the picture to be processed;
and determining a target expression picture recommended to the user according to the first expression data and the action information.
7. An apparatus for searching for an expression picture, comprising:
the first acquisition unit is used for acquiring a picture to be processed input by a user;
the generating unit is used for generating first expression data according to the face information in the picture to be processed;
the processing unit is used for determining a target expression picture recommended to the user according to the first expression data;
wherein, when determining a target expression picture recommended to the user according to the first expression data, the processing unit specifically executes:
acquiring an expression symbol corresponding to the first expression data as second expression data;
acquiring emotion information corresponding to the first expression data as third expression data;
and determining a target expression picture recommended to the user according to the first expression data, the second expression data and the third expression data.
8. The apparatus of claim 7, wherein the generating unit, when generating the first expression data according to the face information in the to-be-processed picture, specifically performs:
extracting a face picture in the picture to be processed;
determining face information of a user according to the extracted face picture;
and removing the face information of the first preset type in the face picture, and generating the first expression data.
9. The apparatus of claim 7, wherein the processing unit, when determining the target expression picture recommended to the user according to the first expression data, specifically performs:
acquiring an expression symbol corresponding to the first expression data as second expression data;
and determining a target expression picture recommended to the user according to the first expression data and the second expression data.
10. The apparatus of claim 7, wherein the processing unit when determining a target expression picture recommended to a user from the first expression data comprises:
acquiring emotion information corresponding to the first expression data as third expression data;
and determining a target expression picture recommended to the user according to the first expression data and the third expression data.
11. The apparatus of claim 7, further comprising a second acquisition unit,
configured to acquire a search term input by the user, so that the processing unit determines a target expression picture recommended to the user according to the first expression data and the search term.
12. The apparatus of claim 7, further comprising a third acquisition unit,
configured to extract action information of the user from the picture to be processed, so that the processing unit determines a target expression picture recommended to the user according to the first expression data and the action information.
13. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-6.
14. A non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1-6.
15. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any of claims 1-6.
CN202010700613.1A 2020-07-20 2020-07-20 Method, device, electronic equipment and readable storage medium for searching expression picture Active CN112000828B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010700613.1A CN112000828B (en) 2020-07-20 2020-07-20 Method, device, electronic equipment and readable storage medium for searching expression picture

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010700613.1A CN112000828B (en) 2020-07-20 2020-07-20 Method, device, electronic equipment and readable storage medium for searching expression picture

Publications (2)

Publication Number Publication Date
CN112000828A CN112000828A (en) 2020-11-27
CN112000828B (en) 2024-05-24

Family

ID=73466513

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010700613.1A Active CN112000828B (en) 2020-07-20 2020-07-20 Method, device, electronic equipment and readable storage medium for searching expression picture

Country Status (1)

Country Link
CN (1) CN112000828B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112784700B (en) * 2021-01-04 2024-05-03 北京小米松果电子有限公司 Method, device and storage medium for displaying face image

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019128558A1 (en) * 2017-12-28 2019-07-04 北京达佳互联信息技术有限公司 Analysis method and system of user limb movement and mobile terminal
CN110489578A (en) * 2019-08-12 2019-11-22 腾讯科技(深圳)有限公司 Image processing method, device and computer equipment

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
A novel approach based on an extended cuckoo search algorithm for the classification of tweets which contain Emoticon and Emoji; Molly Redmond; IEEE; 2017-12-11; full text *
基于WEB的表情图片模块的动态管理与实现 (Dynamic management and implementation of a Web-based expression picture module); 屈佳; 胡志勇; 电子设计工程 (Electronic Design Engineering); 2016-05-05 (Issue 09); full text *

Also Published As

Publication number Publication date
CN112000828A (en) 2020-11-27

Similar Documents

Publication Publication Date Title
CN111860167B (en) Face fusion model acquisition method, face fusion model acquisition device and storage medium
CN111221984A (en) Multimodal content processing method, device, equipment and storage medium
US20210264197A1 (en) Point cloud data processing method, apparatus, electronic device and computer readable storage medium
CN111709875B (en) Image processing method, device, electronic equipment and storage medium
CN111563855B (en) Image processing method and device
CN112084366B (en) Method, apparatus, device and storage medium for retrieving image
CN112001180A (en) Multi-mode pre-training model acquisition method and device, electronic equipment and storage medium
US20210312172A1 (en) Human body identification method, electronic device and storage medium
CN111259183B (en) Image recognition method and device, electronic equipment and medium
CN111507111B (en) Pre-training method and device of semantic representation model, electronic equipment and storage medium
CN111291729B (en) Human body posture estimation method, device, equipment and storage medium
CN111443801B (en) Man-machine interaction method, device, equipment and storage medium
CN111783601B (en) Training method and device of face recognition model, electronic equipment and storage medium
CN111858880B (en) Method, device, electronic equipment and readable storage medium for obtaining query result
CN112101552A (en) Method, apparatus, device and storage medium for training a model
CN112116525A (en) Face-changing identification method, device, equipment and computer-readable storage medium
CN112000828B (en) Method, device, electronic equipment and readable storage medium for searching expression picture
CN111291184A (en) Expression recommendation method, device, equipment and storage medium
CN112508964B (en) Image segmentation method, device, electronic equipment and storage medium
CN112016523B (en) Cross-modal face recognition method, device, equipment and storage medium
CN112464009A (en) Method and device for generating pairing image, electronic equipment and storage medium
JP2021170391A (en) Commodity guidance method, apparatus, device, storage medium, and program
CN111428489B (en) Comment generation method and device, electronic equipment and storage medium
CN112381927A (en) Image generation method, device, equipment and storage medium
CN112270303A (en) Image recognition method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant