CN112000828A - Method and device for searching emoticons, electronic equipment and readable storage medium - Google Patents

Method and device for searching emoticons, electronic equipment and readable storage medium Download PDF

Info

Publication number
CN112000828A
Authority
CN
China
Prior art keywords
expression
picture
expression data
data
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010700613.1A
Other languages
Chinese (zh)
Other versions
CN112000828B (en)
Inventor
饶少艳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202010700613.1A priority Critical patent/CN112000828B/en
Priority claimed from CN202010700613.1A external-priority patent/CN112000828B/en
Publication of CN112000828A publication Critical patent/CN112000828A/en
Application granted granted Critical
Publication of CN112000828B publication Critical patent/CN112000828B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F 16/53 Querying
    • G06F 16/535 Filtering based on additional data, e.g. user or group profiles
    • G06F 16/532 Query formulation, e.g. graphical querying


Abstract

The application discloses a method and a device for searching for expression pictures, an electronic device, and a readable storage medium, and relates to the technical fields of artificial intelligence, computer vision, and image processing. The scheme adopted when searching for an expression picture is as follows: acquiring a picture to be processed input by a user; generating first expression data according to the face information in the picture to be processed; and determining a target expression picture recommended to the user according to the first expression data. The method and the device can improve the accuracy and convenience of searching for expression pictures.

Description

Method and device for searching emoticons, electronic equipment and readable storage medium
Technical Field
The application relates to the field of artificial intelligence, in particular to the technical field of image processing. Specifically, the application provides a method and a device for searching emoticons, electronic equipment and a readable storage medium.
Background
With the continuous development of terminal technology, the variety of social applications on terminal devices keeps increasing. When using a social application, users generally prefer to express their moods, thoughts, and the like through expression pictures. Communicating through expression pictures is therefore an important part of daily social life, and expression pictures can help bring people closer together.
In the prior art, users usually search for expression pictures by inputting keywords. However, in many cases a user may need to input several keywords before finding an expression picture that meets the requirement, or may not find a suitable one at all, so expression pictures cannot be recommended to the user accurately and conveniently.
Disclosure of Invention
The technical scheme adopted by the application for solving the technical problem is to provide a method for searching expression pictures, which comprises the following steps: acquiring a picture to be processed input by a user; generating first expression data according to the face information in the picture to be processed; and determining a target expression picture recommended to the user according to the first expression data.
To solve the above technical problem, the application further provides an apparatus for searching for expression pictures, comprising: a first acquisition unit, configured to acquire a picture to be processed input by a user; a generating unit, configured to generate first expression data according to the face information in the picture to be processed; and a processing unit, configured to determine a target expression picture recommended to the user according to the first expression data.
An electronic device, comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the above method.
A non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the above method.
One embodiment of the above application has the following advantages or benefits: it can improve the convenience and accuracy of searching for expression pictures. Because the technical means of searching for expression pictures according to first expression data generated from the picture to be processed is adopted, the technical problem in the prior art that a user must input keywords, which lowers the accuracy and convenience of expression picture searching, is overcome, achieving the technical effect of improving the accuracy and convenience of expression picture searching.
Other effects of the above-described alternative will be described below with reference to specific embodiments.
Drawings
The drawings are included to provide a better understanding of the present solution and are not intended to limit the present application. Wherein:
FIG. 1 is a schematic diagram according to a first embodiment of the present application;
FIG. 2 is a schematic diagram according to a second embodiment of the present application;
FIG. 3 is a schematic illustration according to a third embodiment of the present application;
FIG. 4 is a schematic illustration according to a fourth embodiment of the present application;
FIG. 5 is a schematic diagram according to a fifth embodiment of the present application;
Fig. 6 is a block diagram of an electronic device for implementing a method of searching for an emoticon according to an embodiment of the present application.
Detailed Description
The following description of the exemplary embodiments of the present application, taken in conjunction with the accompanying drawings, includes various details of the embodiments of the application for the understanding of the same, which are to be considered exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present application. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Fig. 1 is a schematic diagram according to a first embodiment of the present application. As shown in fig. 1, the method for searching for an expression picture in this embodiment may specifically include the following steps:
S101, acquiring a picture to be processed input by a user;
S102, generating first expression data according to the face information in the picture to be processed;
S103, determining a target expression picture recommended to the user according to the first expression data.
According to the method for searching the expression pictures, after the first expression data are generated according to the face information in the picture to be processed, the expression pictures are searched by using the generated first expression data, so that the target expression pictures matched with the expression information of the user are selected and recommended to the user, and the accuracy and convenience of expression picture searching are improved.
In this embodiment, when S101 is executed to acquire a to-be-processed picture input by a user, the to-be-processed picture may be a photographed picture uploaded by the user, or may be a picture photographed by the user in real time.
After executing S101 to acquire a to-be-processed picture, executing S102 to generate first expression data according to face information in the to-be-processed picture. The first expression data generated in the embodiment is a picture.
Specifically, when executing S102 to generate the first expression data according to the face information in the picture to be processed, this embodiment may adopt the following optional implementation manner: extracting a face picture from the picture to be processed; determining the face information of the user according to the extracted face picture, where the determined face information may include the user's face contour, eyes, eyebrows, nose, mouth, ears, hair, face angle, facial lines, and the like; and removing a first preset type of face information from the face picture to generate the first expression data. The removed first preset type of face information is face information irrelevant to facial expression, such as the face contour, ears, and hair, so that only face information relevant to the expression is retained, such as the eyes, eyebrows, nose, mouth, facial lines formed by the expression, and the face angle.
Therefore, the first expression data generated by the above method avoids containing face information irrelevant to the expression and retains only the face information relevant to the user's expression, which ensures that the target expression picture obtained by the search has an expression similar to that of the face in the picture to be processed, thereby improving the accuracy of the expression picture search.
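As a concrete illustration of this filtering step, the following is a minimal Python sketch of generating first expression data, assuming the face information has already been extracted as named feature sets; the feature names and the exact split between the first preset type (expression-irrelevant) and the retained features follow the examples above but are otherwise illustrative assumptions, not the patent's concrete implementation.

```python
from dataclasses import dataclass, field

# Assumed split of face information; the patent only gives examples
# (expression-irrelevant: face contour, ears, hair; expression-relevant:
# eyes, eyebrows, nose, mouth, expression lines, face angle).
FIRST_PRESET_TYPE = {"face_contour", "ears", "hair"}
EXPRESSION_RELEVANT = {"eyes", "eyebrows", "nose", "mouth",
                       "expression_lines", "face_angle"}

@dataclass
class FaceInfo:
    # feature name -> geometric payload, e.g. a list of landmark points
    features: dict = field(default_factory=dict)

def generate_first_expression_data(face: FaceInfo) -> FaceInfo:
    """S102: remove the first preset type of face information so that
    only expression-related features remain."""
    kept = {name: data for name, data in face.features.items()
            if name in EXPRESSION_RELEVANT and name not in FIRST_PRESET_TYPE}
    return FaceInfo(features=kept)

# Usage: a face with both kinds of information.
face = FaceInfo(features={"eyes": [(10, 12)], "hair": [(0, 0)],
                          "mouth": [(15, 30)], "face_contour": [(1, 2)]})
first_expression_data = generate_first_expression_data(face)
# -> only "eyes" and "mouth" survive
```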
It can be understood that, after the step in S102 of removing the first preset type of face information from the face picture, this embodiment may further include the following: adjusting a second preset type of face information in the face picture, where the second preset type of face information corresponds to the user's individual facial features, such as eye size, nose size, and mouth shape, and using the adjustment result as the first expression data. When adjusting the second preset type of face information in the face picture, this embodiment may perform operations such as making large eyes smaller, making small eyes larger, and changing the shape of the mouth.
Therefore, by adjusting the second preset type of face information in the face picture, this embodiment can play down the user's unique facial features, so that richer target expression pictures can be found according to the adjusted first expression data and recommended to the user, thereby increasing the recall rate of the expression picture search.
After executing S102 to generate the first expression data, this embodiment executes S103 to determine a target expression picture recommended to the user according to the generated first expression data. The first expression data is used here to retrieve expression pictures of persons.
In this embodiment, when S103 is executed to determine the target expression picture recommended to the user according to the first expression data, an optional implementation manner is as follows: calculating the matching degree between the first expression data and each expression picture, for example the expression pictures on the internet or in a preset database; and selecting a plurality of expression pictures as target expression pictures according to the matching degree, for example selecting the expression pictures ranked in the top N positions, where N is a positive integer greater than or equal to 1.
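The patent does not fix how this matching degree is computed. A common realization, assumed here purely for illustration, is cosine similarity between a feature vector of the first expression data and a feature vector of each candidate expression picture:

```python
import numpy as np

def matching_degree(query_vec: np.ndarray, candidate_vec: np.ndarray) -> float:
    """Cosine similarity as a stand-in for the unspecified matching degree."""
    denom = float(np.linalg.norm(query_vec) * np.linalg.norm(candidate_vec))
    return float(query_vec @ candidate_vec) / denom if denom else 0.0

def top_n_expressions(query_vec, candidates, n=5):
    """S103: rank candidate expression pictures by matching degree and
    keep the top N. `candidates` is a list of (picture_id, vector) pairs."""
    scored = [(pid, matching_degree(query_vec, vec)) for pid, vec in candidates]
    scored.sort(key=lambda item: item[1], reverse=True)
    return scored[:n]
```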
It can be understood that, when S103 is executed to determine the target expression picture recommended to the user according to the first expression data, the following may also be included in the embodiment: acquiring a search word input by a user; and determining a target expression picture recommended to the user according to the first expression data and the search terms. That is to say, in the embodiment, when determining the target expression picture, the search term input by the user may also be used as auxiliary information, so as to obtain a greater number of target expression pictures.
In addition, when S103 is executed to determine the target expression picture recommended to the user according to the first expression data, the embodiment may further include the following: extracting motion information of a user from the picture to be processed, such as body posture, gesture and other information of the user; and determining a target expression picture recommended to the user according to the first expression data and the action information. That is to say, when determining the target expression picture, the embodiment can acquire the expression picture matched with the expression of the user, and also can acquire the expression picture matched with the action of the user, thereby acquiring a richer target expression picture.
It is understood that the same expression picture may be obtained by using the first expression data together with the search term and/or the motion information; to avoid recommending repeated target expression pictures to the user, this embodiment needs to deduplicate the obtained target expression pictures, for example by removing multiple expression pictures whose sources or links are identical and whose contents match completely.
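A sketch of this deduplication step under the same caveat: each candidate is keyed on its source link plus a hash of its content, and the field names ("url", "content") are assumptions for illustration.

```python
import hashlib

def dedupe_expressions(pictures):
    """Drop expression pictures whose source/link and content both repeat.
    `pictures` is a list of dicts with "url" and "content" (bytes) keys."""
    seen, unique = set(), []
    for pic in pictures:
        key = (pic["url"], hashlib.sha256(pic["content"]).hexdigest())
        if key not in seen:
            seen.add(key)
            unique.append(pic)
    return unique
```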
After the target expression pictures recommended to the user are determined in S103, the determined target expression pictures are recommended to the user; for example, each target expression picture may be displayed on the screen.
By adopting the method provided by the embodiment, after the first expression data is generated according to the face information in the picture to be processed, the expression picture is searched by utilizing the generated first expression data, so that the target expression picture matched with the expression information of the user is selected and recommended to the user, and the accuracy of expression picture searching is improved.
Fig. 2 is a schematic diagram according to a second embodiment of the present application. As shown in fig. 2, the method for searching for an expression picture in this embodiment may specifically include the following steps:
S201, acquiring a picture to be processed input by a user;
S202, generating first expression data according to the face information in the picture to be processed;
S203, obtaining an emoticon corresponding to the first expression data as second expression data;
S204, determining a target expression picture recommended to the user according to the first expression data and the second expression data.
In this embodiment, the second expression data obtained by executing S203 is a two-dimensional expression picture corresponding to the first expression data. That is, in the present embodiment, the human features included in the first expression data are removed by performing the flattening processing on the first expression data, so that the emoticon corresponding to the first expression data is used as the second expression data. The second expression data in this embodiment is used to retrieve an expression picture corresponding to a cartoon or emoji.
In this embodiment, when S203 is executed to acquire the emoticon corresponding to the first expression data as the second expression data, the emoticon in the internet or a preset database that best matches the first expression data may be used as the second expression data; alternatively, after the face information in the first expression data is replaced with corresponding graphics, the generated emoticon may be used as the second expression data, for example by replacing the eyes, eyebrows, mouth, and the like in the first expression data with horizontal lines of different lengths and radians.
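As a rough sketch of this flattening step, the code below reduces each retained facial feature to a simple arc described by a length and a curvature, echoing the example of horizontal lines of different lengths and radians; the heuristic is an illustrative assumption, not the patent's method.

```python
def arc_from_landmarks(points):
    """Summarize a landmark polyline as (length, curvature sign).
    A crude illustrative heuristic: positive curvature means the middle
    of the feature sits above its endpoints (a smile-like arc)."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    length = max(xs) - min(xs)
    curvature = (ys[0] + ys[-1]) / 2 - ys[len(ys) // 2]
    return length, curvature

def to_emoticon(first_expression_data):
    """S203: flatten first expression data into a schematic emoticon by
    replacing each expression-related feature with a simple arc."""
    return {feature: arc_from_landmarks(pts)
            for feature, pts in first_expression_data.items()
            if feature in ("eyes", "eyebrows", "mouth")}
```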
In this embodiment, when the target expression picture recommended to the user is determined according to the first expression data and the second expression data in S204, an optional implementation manner is as follows: determining a first weight value corresponding to the first expression data and a second weight value corresponding to the second expression data, where the first weight value is greater than the second weight value; calculating the matching degree between the first expression data and each expression picture in combination with the first weight value; calculating the matching degree between the second expression data and each expression picture in combination with the second weight value; and selecting a plurality of expression pictures as target expression pictures recommended to the user according to the calculated matching degrees, for example selecting the expression pictures ranked in the top M positions, where M is a positive integer greater than or equal to 1.
For example, suppose the first weight value is 0.7 and the second weight value is 0.4. If the picture matching degree between expression picture 1 and the first expression data is 0.8, this embodiment determines that the final matching degree of expression picture 1 is 0.7 × 0.8 = 0.56; if the picture matching degree between expression picture 2 and the second expression data is 0.8, this embodiment determines that the final matching degree of expression picture 2 is 0.4 × 0.8 = 0.32.
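In code, this worked example is just a per-channel product of weight and picture matching degree (the weight values are the example's, not fixed by the patent):

```python
first_weight, second_weight = 0.7, 0.4  # example values from above

final_degree_pic1 = first_weight * 0.8   # vs first expression data  -> 0.56
final_degree_pic2 = second_weight * 0.8  # vs second expression data -> 0.32
```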
It is understood that the same expression picture may be obtained by using both the first expression data and the second expression data; to avoid recommending repeated target expression pictures to the user, this embodiment needs to deduplicate the obtained target expression pictures, for example by removing multiple expression pictures whose sources or links are identical and whose contents match completely.
In addition, in this embodiment, when the target expression picture recommended to the user is determined according to the first expression data and the second expression data in S204, the search term and/or the action information may also be acquired and used in the search at the same time.
By adopting the method provided by the embodiment, after the first expression data is generated according to the face information in the picture to be processed, the second expression data is obtained according to the generated first expression data, so that the expression pictures corresponding to people, cartoons or emoji are searched at the same time, and the recall rate of the expression picture searching is improved.
Fig. 3 is a schematic diagram according to a third embodiment of the present application. As shown in fig. 3, the method for searching for an expression picture in this embodiment may specifically include the following steps:
S301, acquiring a picture to be processed input by a user;
S302, generating first expression data according to the face information in the picture to be processed;
S303, obtaining emotion information corresponding to the first expression data as third expression data;
S304, determining a target expression picture recommended to the user according to the first expression data and the third expression data.
In this embodiment, the third expression data obtained by executing S303 is emotion information corresponding to the first expression data, specifically one or more keywords corresponding to the user's emotion; the third expression data obtained in S303 may include a plurality of keywords. That is, this embodiment can perform a keyword search using text in addition to a picture search using the first expression data.
In this embodiment, when S303 is executed to use the emotion information corresponding to the first expression data as the third expression data, the emotion information corresponding to the first expression data may be obtained through an existing emotion recognition technology, for example, a deep learning model, which is not described in detail herein.
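Assuming such a recognizer outputs one probability per emotion label, extracting the third expression data can be as simple as thresholding; the label set and the threshold below are illustrative assumptions, not part of the patent.

```python
EMOTION_LABELS = ["happy", "sad", "angry", "surprised", "neutral"]

def third_expression_data(probabilities, threshold=0.3):
    """S303: keep every emotion keyword the recognizer is confident
    about, so the result may contain several keywords."""
    return [label for label, p in zip(EMOTION_LABELS, probabilities)
            if p >= threshold]

# e.g. model output [0.55, 0.02, 0.01, 0.38, 0.04]
# -> ["happy", "surprised"] as keywords for the text search
```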
In this embodiment, when S304 is executed to determine the target expression picture recommended to the user according to the first expression data and the third expression data, an optional implementation manner is as follows: determining a first weight value corresponding to the first expression data and a third weight value corresponding to the third expression data, where the first weight value is greater than the third weight value; calculating the matching degree between the first expression data and each expression picture in combination with the first weight value; calculating the matching degree between the third expression data and each expression picture in combination with the third weight value; and selecting a plurality of expression pictures as target expression pictures recommended to the user according to the calculated matching degrees, for example selecting the expression pictures ranked in the top K positions, where K is a positive integer greater than or equal to 1.
For example, suppose the first weight value is 0.7 and the third weight value is 0.3. If the picture matching degree between expression picture 1 and the first expression data is 0.8, this embodiment determines that the final matching degree of expression picture 1 is 0.7 × 0.8 = 0.56; if the picture matching degree between expression picture 3 and the third expression data is 0.8, this embodiment determines that the final matching degree of expression picture 3 is 0.3 × 0.8 = 0.24.
It is understood that the same expression picture may be obtained by using both the first expression data and the third expression data; to avoid recommending repeated target expression pictures to the user, this embodiment needs to deduplicate the obtained target expression pictures, for example by removing multiple expression pictures whose sources or links are identical and whose contents match completely.
In addition, in this embodiment, when the target expression picture recommended to the user is determined according to the first expression data and the third expression data in S304, the search term and/or the action information may also be acquired and used in the search at the same time.
Fig. 4 is a schematic diagram according to a fourth embodiment of the present application. As shown in fig. 4, the method for searching for an expression picture in this embodiment may specifically include the following steps:
S401, acquiring a picture to be processed input by a user;
S402, generating first expression data according to the face information in the picture to be processed;
S403, obtaining an emoticon corresponding to the first expression data as second expression data;
S404, obtaining emotion information corresponding to the first expression data as third expression data;
S405, determining a target expression picture recommended to the user according to the first expression data, the second expression data, and the third expression data.
According to the method, the expression picture is searched simultaneously according to the first expression data, the second expression data and the third expression data, picture matching is conducted on the first expression data and the second expression data, keyword matching is conducted on the third expression data, and the comprehensiveness of the target expression picture obtained through searching can be improved.
In this embodiment, when S405 is executed to determine the target expression picture recommended to the user according to the first expression data, the second expression data, and the third expression data, an optional implementation manner is as follows: determining a first weight value corresponding to the first expression data, a second weight value corresponding to the second expression data, and a third weight value corresponding to the third expression data, where the first weight value is greater than the second weight value and the second weight value is greater than the third weight value; calculating the matching degree between the first expression data and each expression picture in combination with the first weight value; calculating the matching degree between the second expression data and each expression picture in combination with the second weight value; calculating the matching degree between the third expression data and each expression picture in combination with the third weight value; and selecting a plurality of expression pictures as target expression pictures recommended to the user according to the calculated matching degrees, for example selecting the expression pictures ranked in the top L positions, where L is a positive integer greater than or equal to 1.
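A sketch of this three-channel fusion under the stated assumptions: each kind of expression data contributes a weighted candidate list, a duplicate picture (same key, e.g. source link plus content hash) keeps its best score, and the top-L pictures are returned. The weights and the duplicate key are illustrative.

```python
def fuse_and_rank(channels, num_results):
    """S405: merge per-channel candidate lists into one ranking.
    `channels` is a list of (weight, candidates) pairs, one per kind of
    expression data, where candidates are (picture_key, matching_degree)
    pairs and w_first > w_second > w_third."""
    best = {}
    for weight, candidates in channels:
        for key, degree in candidates:
            score = weight * degree
            if score > best.get(key, 0.0):
                best[key] = score  # duplicates keep their best score
    ranked = sorted(best.items(), key=lambda kv: kv[1], reverse=True)
    return ranked[:num_results]

# Illustrative call with weights 0.7 > 0.4 > 0.3:
results = fuse_and_rank(
    [(0.7, [("pic1", 0.8)]),
     (0.4, [("pic1", 0.9), ("pic2", 0.8)]),
     (0.3, [("pic3", 0.8)])],
    num_results=2)
# pic1 keeps max(0.56, 0.36) = 0.56; ranking: pic1 (0.56), pic2 (0.32)
```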
It can be understood that the same expression picture may be obtained by using the first expression data, the second expression data, and the third expression data; to avoid recommending repeated target expression pictures to the user, this embodiment needs to deduplicate the obtained target expression pictures, for example by removing multiple expression pictures whose sources or links are identical and whose contents match completely.
In addition, in this embodiment, when the target expression picture recommended to the user is determined according to the first expression data, the second expression data, and the third expression data in S405, the search term and/or the action information may also be acquired and used in the search at the same time.
Fig. 5 is a schematic diagram according to a fifth embodiment of the present application. As shown in fig. 5, the apparatus for searching for an emoticon of the present embodiment includes:
the first obtaining unit 501 is configured to obtain a to-be-processed picture input by a user;
the generating unit 502 is configured to generate first expression data according to the face information in the picture to be processed;
the processing unit 503 is configured to determine a target expression picture recommended to the user according to the first expression data.
When acquiring a to-be-processed picture input by a user, the first acquiring unit 501 may receive a picture that has been taken and uploaded by the user, or may receive a picture that is taken by the user in real time.
After the first acquisition unit 501 acquires the picture to be processed, the generation unit 502 generates first expression data according to the face information in the picture to be processed. The first expression data generated by the generating unit 502 is a picture.
Specifically, when the generating unit 502 generates the first expression data according to the face information in the picture to be processed, the optional implementation manner that can be adopted is as follows: extracting a face picture in a picture to be processed; determining face information of the user according to the extracted face picture; and removing the face information of the first preset type in the face picture to generate first expression data.
With the above method, the generating unit 502 ensures that the generated first expression data does not contain face information irrelevant to the expression and contains only face information relevant to the user's expression, which guarantees that the target expression picture obtained by the search has an expression similar to that of the face in the picture to be processed, thereby improving the accuracy of the expression picture search.
It can be understood that, after removing the first preset type of face information from the face picture, the generating unit 502 may further adjust a second preset type of face information in the face picture and use the adjustment result as the first expression data. When adjusting the second preset type of face information, the generating unit 502 may perform operations such as making large eyes smaller, making small eyes larger, and changing the shape of the mouth.
Therefore, the generating unit 502 can change the unique facial features of the user by adjusting the face information of the second preset type in the face picture, so that a richer target expression picture can be searched according to the adjusted first expression data to be recommended to the user, thereby increasing the recall rate of the expression picture search.
After the generating unit 502 generates the first expression data, the processing unit 503 determines a target expression picture recommended to the user according to the generated first expression data. The first expression data used by the processing unit 503 is used to retrieve expression pictures of persons.
When determining the target expression picture recommended to the user according to the first expression data, the processing unit 503 may adopt an optional implementation manner as follows: calculating the matching degree between the first expression data and each expression picture; and selecting a plurality of expression pictures as target expression pictures according to the matching degree.
When determining the target expression picture recommended to the user according to the first expression data, the processing unit 503 may adopt an optional implementation manner as follows: acquiring an emoticon corresponding to the first expression data as second expression data; and determining a target expression picture recommended to the user according to the first expression data and the second expression data.
The second expression data obtained by the processing unit 503 is a two-dimensional expression picture corresponding to the first expression data. That is, the processing unit 503 removes the human features included in the first expression data by performing flattening processing on it, thereby taking the emoticon corresponding to the first expression data as the second expression data. The second expression data used by the processing unit 503 is used to retrieve expression pictures corresponding to cartoons or emoji.
When acquiring the emoticon corresponding to the first expression data as the second expression data, the processing unit 503 may use the emoticon in the internet or a preset database that best matches the first expression data as the second expression data; alternatively, after the face information in the first expression data is replaced with corresponding graphics, the generated emoticon may be used as the second expression data.
When determining the target expression picture recommended to the user according to the first expression data and the second expression data, the processing unit 503 may adopt an optional implementation manner as follows: determining a first weight value corresponding to the first expression data and a second weight value corresponding to the second expression data; calculating the matching degree between the first expression data and each expression picture in combination with the first weight value; calculating the matching degree between the second expression data and each expression picture in combination with the second weight value; and selecting a plurality of expression pictures as target expression pictures recommended to the user according to the calculated matching degrees.
It is understood that the same expression picture may be obtained by using both the first expression data and the second expression data; to avoid recommending repeated target expression pictures to the user, the processing unit 503 needs to deduplicate the obtained target expression pictures.
When determining the target expression picture recommended to the user according to the first expression data, the processing unit 503 may adopt an optional implementation manner as follows: acquiring emotion information corresponding to the first expression data as third expression data; and determining a target expression picture recommended to the user according to the first expression data and the third expression data.
The third expression data obtained by the processing unit 503 is emotion information corresponding to the first expression data, specifically one or more keywords corresponding to the user's emotion; the obtained third expression data may include a plurality of keywords. That is, the processing unit 503 may perform a keyword search using text in addition to a picture search using the first expression data.
When the emotion information corresponding to the first expression data is used as the third expression data, the processing unit 503 may obtain the emotion information corresponding to the first expression data through an existing emotion recognition technology, for example, a deep learning model, which is not described herein again in this embodiment.
When determining the target expression picture recommended to the user according to the first expression data and the third expression data, the processing unit 503 may adopt an optional implementation manner as follows: determining a first weight value corresponding to the first expression data and a third weight value corresponding to the third expression data; calculating the matching degree between the first expression data and each expression picture in combination with the first weight value; calculating the matching degree between the third expression data and each expression picture in combination with the third weight value; and selecting a plurality of expression pictures as target expression pictures recommended to the user according to the calculated matching degrees.
It is understood that the same expression picture may be obtained by using both the first expression data and the third expression data; to avoid recommending repeated target expression pictures to the user, the processing unit 503 needs to deduplicate the obtained target expression pictures.
When determining the target expression picture recommended to the user according to the first expression data, the processing unit 503 may adopt an optional implementation manner as follows: acquiring an emoticon corresponding to the first expression data as second expression data; acquiring emotion information corresponding to the first expression data as third expression data; and determining a target expression picture recommended to the user according to the first expression data, the second expression data and the third expression data.
By the method, the processing unit 503 simultaneously searches the expression pictures according to the first expression data, the second expression data and the third expression data, and can improve the comprehensiveness of the target expression pictures obtained by searching.
When determining the target expression picture recommended to the user according to the first expression data, the second expression data, and the third expression data, the processing unit 503 may adopt an optional implementation manner as follows: determining a first weight value corresponding to the first expression data, a second weight value corresponding to the second expression data, and a third weight value corresponding to the third expression data; calculating the matching degree between the first expression data and each expression picture in combination with the first weight value; calculating the matching degree between the second expression data and each expression picture in combination with the second weight value; calculating the matching degree between the third expression data and each expression picture in combination with the third weight value; and selecting a plurality of expression pictures as target expression pictures recommended to the user according to the calculated matching degrees.
It is understood that the same expression picture may be obtained by using the first expression data, the second expression data, and the third expression data; to avoid recommending repeated target expression pictures to the user, the processing unit 503 needs to deduplicate the obtained target expression pictures, for example by removing multiple expression pictures whose sources or links are identical and whose contents match completely.
The apparatus for searching for an emoticon in this embodiment may further include a second obtaining unit 504, configured to obtain a search term input by the user, so that the processing unit 503 determines a target emoticon recommended to the user according to the first emoticon data and the search term. That is, the processing unit 503 may also use the search term input by the user as auxiliary information when determining the target expression picture, thereby acquiring a greater number of target expression pictures.
In addition, the apparatus for searching for an emoticon in this embodiment may further include a third obtaining unit 505, configured to extract motion information of the user from the picture to be processed, so that the processing unit 503 determines a target expression picture recommended to the user according to the first expression data and the motion information. That is, when determining the target expression picture, the processing unit 503 may acquire expression pictures matching the user's action in addition to expression pictures matching the user's expression, thereby acquiring richer target expression pictures.
It is understood that the same expression picture may be obtained by using the first expression data together with the search term and/or the motion information; to avoid recommending repeated target expression pictures to the user, the processing unit 503 needs to deduplicate the obtained target expression pictures.
According to an embodiment of the present application, an electronic device and a computer-readable storage medium are also provided.
Fig. 6 is a block diagram of an electronic device for the method of searching for an emoticon according to an embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing devices, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown here, their connections and relationships, and their functions are meant to be examples only and are not meant to limit implementations of the present application described and/or claimed herein.
As shown in fig. 6, the electronic apparatus includes: one or more processors 601, a memory 602, and interfaces for connecting the various components, including a high-speed interface and a low-speed interface. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions for execution within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output apparatus (such as a display device coupled to the interface). In other embodiments, multiple processors and/or multiple buses may be used together with multiple memories, as desired. Also, multiple electronic devices may be connected, with each device providing part of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). In fig. 6, one processor 601 is taken as an example.
The memory 602 is a non-transitory computer readable storage medium as provided herein. The memory stores instructions executable by at least one processor to cause the at least one processor to execute the method for searching emoticons provided by the present application. The non-transitory computer-readable storage medium of the present application stores computer instructions for causing a computer to perform the method of searching for emoticons provided by the present application.
The memory 602 is used as a non-transitory computer readable storage medium and may be used to store non-transitory software programs, non-transitory computer executable programs, and modules, such as program instructions/modules corresponding to the method for searching for an emoticon in the embodiment of the present application (for example, the first acquiring unit 501, the generating unit 502, the processing unit 503, the second acquiring unit 504, and the third acquiring unit 505 shown in fig. 5). The processor 601 executes various functional applications and data processing of the server by running non-transitory software programs, instructions and modules stored in the memory 602, that is, implements the method of searching for emoticons in the above-described method embodiments.
The memory 602 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to the use of the electronic device, and the like. Further, the memory 602 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory 602 may optionally include a memory remotely disposed from the processor 601, and these remote memories may be connected to the electronic device of the method of searching for emoticons through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device of the method of searching for an emoticon may further include: an input device 603 and an output device 604. The processor 601, the memory 602, the input device 603 and the output device 604 may be connected by a bus or other means, and fig. 6 illustrates the connection by a bus as an example.
The input device 603 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device of the method of searching for emoticons, such as a touch screen, a keypad, a mouse, a track pad, a touch pad, a pointing stick, one or more mouse buttons, a track ball, a joystick, etc. The output devices 604 may include a display device, auxiliary lighting devices (e.g., LEDs), and tactile feedback devices (e.g., vibrating motors), among others. The display device may include, but is not limited to, a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) display, and a plasma display. In some implementations, the display device can be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application specific ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
According to the technical scheme of the embodiment of the application, after the first expression data are generated according to the face information in the picture to be processed, the generated first expression data are used for searching the expression picture, so that the target expression picture matched with the expression information of the user is selected and recommended to the user, and the accuracy and convenience of expression picture searching are improved.
It should be understood that steps may be reordered, added, or deleted using the various flows shown above. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present application can be achieved; this is not limited herein.
The above-described embodiments should not be construed as limiting the scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (16)

1. A method for searching an emoticon, comprising:
acquiring a picture to be processed input by a user;
generating first expression data according to the face information in the picture to be processed;
and determining a target expression picture recommended to the user according to the first expression data.
2. The method of claim 1, wherein the generating first expression data according to the face information in the picture to be processed comprises:
extracting a face picture in the picture to be processed;
determining face information of the user according to the extracted face picture;
and removing the first preset type of face information in the face picture to generate the first expression data.
3. The method of claim 1, wherein the determining, according to the first expression data, a target expression picture recommended to a user comprises:
acquiring an emoticon corresponding to the first expression data as second expression data;
and determining a target expression picture recommended to the user according to the first expression data and the second expression data.
4. The method of claim 1, wherein the determining, according to the first expression data, a target expression picture recommended to a user comprises:
acquiring emotion information corresponding to the first expression data as third expression data;
and determining a target expression picture recommended to the user according to the first expression data and the third expression data.
5. The method of claim 1, wherein the determining, according to the first expression data, a target expression picture recommended to a user comprises:
acquiring an emoticon corresponding to the first expression data as second expression data;
acquiring emotion information corresponding to the first expression data as third expression data;
and determining a target expression picture recommended to the user according to the first expression data, the second expression data and the third expression data.
6. The method of claim 1, further comprising,
acquiring a search word input by a user;
and determining a target expression picture recommended to the user according to the first expression data and the search word.
7. The method of claim 1, further comprising,
extracting action information of a user from the picture to be processed;
and determining a target expression picture recommended to the user according to the first expression data and the action information.
8. An apparatus for searching for an emoticon, comprising:
the first acquisition unit is used for acquiring a picture to be processed input by a user;
the generating unit is used for generating first expression data according to the face information in the picture to be processed;
and the processing unit is used for determining a target expression picture recommended to the user according to the first expression data.
9. The apparatus according to claim 8, wherein the generating unit, when generating the first expression data according to the face information in the picture to be processed, specifically executes:
extracting a face picture in the picture to be processed;
determining face information of the user according to the extracted face picture;
and removing the first preset type of face information in the face picture to generate the first expression data.
10. The device of claim 8, wherein when determining the target expression picture recommended to the user according to the first expression data, the processing unit specifically executes:
acquiring an emoticon corresponding to the first expression data as second expression data;
and determining a target expression picture recommended to the user according to the first expression data and the second expression data.
11. The device of claim 8, wherein when determining the target expression picture recommended to the user according to the first expression data, the processing unit specifically executes:
acquiring emotion information corresponding to the first expression data as third expression data;
and determining a target expression picture recommended to the user according to the first expression data and the third expression data.
12. The device of claim 8, wherein when determining the target expression picture recommended to the user according to the first expression data, the processing unit specifically executes:
acquiring an emoticon corresponding to the first expression data as second expression data;
acquiring emotion information corresponding to the first expression data as third expression data;
and determining a target expression picture recommended to the user according to the first expression data, the second expression data and the third expression data.
13. The apparatus of claim 8, further comprising a second acquisition unit,
wherein the second acquisition unit is used for acquiring a search word input by the user, so that the processing unit determines a target expression picture recommended to the user according to the first expression data and the search word.
14. The apparatus of claim 8, further comprising a third acquisition unit,
wherein the third acquisition unit is used for extracting action information of the user from the picture to be processed, so that the processing unit determines a target expression picture recommended to the user according to the first expression data and the action information.
15. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein the content of the first and second substances,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-7.
16. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-7.
CN202010700613.1A 2020-07-20 Method, device, electronic equipment and readable storage medium for searching expression picture Active CN112000828B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010700613.1A CN112000828B (en) 2020-07-20 Method, device, electronic equipment and readable storage medium for searching expression picture

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010700613.1A CN112000828B (en) 2020-07-20 Method, device, electronic equipment and readable storage medium for searching expression picture

Publications (2)

Publication Number Publication Date
CN112000828A true CN112000828A (en) 2020-11-27
CN112000828B CN112000828B (en) 2024-05-24


Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112784700A (en) * 2021-01-04 2021-05-11 北京小米松果电子有限公司 Method, device and storage medium for displaying face image

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019128558A1 (en) * 2017-12-28 2019-07-04 北京达佳互联信息技术有限公司 Analysis method and system of user limb movement and mobile terminal
CN110489578A (en) * 2019-08-12 2019-11-22 腾讯科技(深圳)有限公司 Image processing method, device and computer equipment

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019128558A1 (en) * 2017-12-28 2019-07-04 北京达佳互联信息技术有限公司 Analysis method and system of user limb movement and mobile terminal
CN110489578A (en) * 2019-08-12 2019-11-22 腾讯科技(深圳)有限公司 Image processing method, device and computer equipment

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
MOLLY REDMOND: "A novel approach based on an extended cuckoo search algorithm for the classification of tweets which contain Emoticon and Emoji", IEEE, 11 December 2017 *
QU Jia; HU Zhiyong: "Dynamic management and implementation of a WEB-based emoticon picture module" (基于WEB的表情图片模块的动态管理与实现), Electronic Design Engineering (电子设计工程), no. 09, 5 May 2016 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112784700A (en) * 2021-01-04 2021-05-11 北京小米松果电子有限公司 Method, device and storage medium for displaying face image
CN112784700B (en) * 2021-01-04 2024-05-03 北京小米松果电子有限公司 Method, device and storage medium for displaying face image

Similar Documents

Publication Publication Date Title
CN111695471B (en) Avatar generation method, apparatus, device and storage medium
CN110955764B (en) Scene knowledge graph generation method, man-machine conversation method and related equipment
CN111221984A (en) Multimodal content processing method, device, equipment and storage medium
US20210312172A1 (en) Human body identification method, electronic device and storage medium
CN111860167B (en) Face fusion model acquisition method, face fusion model acquisition device and storage medium
CN111709875B (en) Image processing method, device, electronic equipment and storage medium
CN111507111B (en) Pre-training method and device of semantic representation model, electronic equipment and storage medium
CN111259183B (en) Image recognition method and device, electronic equipment and medium
CN113407850B (en) Method and device for determining and acquiring virtual image and electronic equipment
CN111914629A (en) Method, apparatus, device and storage medium for generating training data for face recognition
CN111563855A (en) Image processing method and device
CN112101552A (en) Method, apparatus, device and storage medium for training a model
CN112508964B (en) Image segmentation method, device, electronic equipment and storage medium
JP2021170391A (en) Commodity guidance method, apparatus, device, storage medium, and program
CN112464009A (en) Method and device for generating pairing image, electronic equipment and storage medium
CN111708477B (en) Key identification method, device, equipment and storage medium
CN112381927A (en) Image generation method, device, equipment and storage medium
CN111861954A (en) Method and device for editing human face, electronic equipment and readable storage medium
CN112116548A (en) Method and device for synthesizing face image
CN112017141A (en) Video data processing method and device
CN111291184A (en) Expression recommendation method, device, equipment and storage medium
CN112000828B (en) Method, device, electronic equipment and readable storage medium for searching expression picture
CN112000828A (en) Method and device for searching emoticons, electronic equipment and readable storage medium
CN112200169B (en) Method, apparatus, device and storage medium for training a model
US20220075952A1 (en) Method and apparatus for determining recommended expressions, device and computer storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant