CN116962563A - Interaction method, device and medium


Info

Publication number
CN116962563A
Authority
CN
China
Prior art keywords
expression
portrait
user
image
application
Prior art date
Legal status
Pending
Application number
CN202210383986.XA
Other languages
Chinese (zh)
Inventor
姚伟淦
张亚运
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Priority to CN202210383986.XA
Priority to PCT/CN2023/085438 (published as WO2023197888A1)
Publication of CN116962563A

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04M: TELEPHONIC COMMUNICATION
    • H04M 1/00: Substation equipment, e.g. for use by subscribers
    • H04M 1/72: Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M 1/724: User interfaces specially adapted for cordless or mobile telephones
    • H04M 1/72403: User interfaces with means for local support of applications that increase the functionality
    • H04M 1/7243: User interfaces with interactive means for internal management of messages
    • H04M 1/72436: Internal message management for text messaging, e.g. SMS or e-mail
    • H04M 1/72439: Internal message management for image or video messaging

Abstract

The present application relates to an interaction method, device and medium. The method includes: displaying a first interface of a first application on an electronic device, where the first interface includes a first session window of a first contact; detecting an expression input operation of a user in the first session window of the first interface; and, in response to determining that a first portrait expression set associated with the first contact exists among a plurality of portrait expression sets stored in the electronic device, displaying at least one portrait expression from the first portrait expression set in the first session window. According to this interaction method, when the user needs to input an expression, the electronic device can display, in response to the user's expression input operation, the portrait expression set of the corresponding contact that the user needs; the user can then operate the screen of the electronic device to select the desired expression from that set. The user does not need to manually make and import expressions of the corresponding contact, so the operation is simple and the experience is good.

Description

Interaction method, device and medium
Technical Field
The present application relates to the technical field of terminal devices, and in particular to an interaction method, a device, and a medium.
Background
With the popularization of terminal devices such as mobile phones, tablets and computers and the development of social software, expressions (including image expressions and text expressions) are used more and more widely. A user can create an expression from a photo of a person they know to liven up the atmosphere of a conversation.
In current social software, after a user makes a portrait expression from a portrait photo of a corresponding contact, the user must, when chatting with that contact, manually add the portrait expression to the collected expression set in the social software, then find the desired expression of the corresponding contact among the many expressions displayed in the collected expression set, and send it to the corresponding contact. This expression interaction scheme requires the user to add and select the portrait expression of the corresponding contact through multiple operations, and when the collected expression set contains many expressions, the user has difficulty finding the portrait expression of the corresponding contact, resulting in a poor experience.
Disclosure of Invention
The embodiments of the present application provide an interaction method, device and medium that reduce the operations a user performs to make and search for expression packages, which helps improve the user experience.
In a first aspect, an embodiment of the present application provides an interaction method applied to an electronic device, including: displaying a first interface of a first application on the electronic device, where the first interface includes a first session window of a first contact; detecting an expression input operation of a user in the first session window of the first interface; and, in response to determining that a first portrait expression set associated with the first contact exists among a plurality of portrait expression sets stored in the electronic device, displaying at least one portrait expression from the first portrait expression set in the first session window.
It can be understood that the expression input operation is an operation by which the user triggers expression input in the first session window. It may be, for example, the user clicking an expression input button in the first session window; it may also be, for example, when the first session window includes a text input box, the user inputting text in the text input box and then clicking the expression input button.
According to the interaction method provided by the present application, the electronic device can display, in response to the user's expression input operation, the portrait expression set of the corresponding contact that the user needs; the user can then operate the screen of the electronic device to select the desired expression from that set. The user does not need to manually make and import expressions of the corresponding contact, so the operation is simple and the experience is good.
In one possible implementation of the first aspect, association information characterizing the correspondence between at least one contact in the first application and the plurality of portrait expression sets is preset in the electronic device.
It can be understood that the association information is created when the electronic device, in response to an association operation of the user, associates the portrait expression sets stored in the electronic device with contacts in the first application. In some embodiments, the association may go through an address book application in the electronic device: each portrait expression set and each contact in the first application is associated with a contact in the address book application, so that the first portrait expression set can be determined by accessing the address book application. In some embodiments, the association information may enable determination of the first portrait expression set by adding identical name tags or other custom tags to each portrait expression set and to the corresponding contact in the first application.
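As a non-limiting illustration of what such association information could look like, the following Python sketch models both approaches just described: resolving a contact through a shared name/custom tag, or through an address book entry. All class, function, and field names here are hypothetical and are not taken from the application.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class PortraitExpressionSet:
    tag: str                                               # name tag or custom tag
    expressions: list[str] = field(default_factory=list)   # paths to expression images

@dataclass
class AddressBookEntry:
    name: str
    phone: str

class AssociationInfo:
    """Maps contacts in the first application to stored portrait expression sets."""

    def __init__(self, expression_sets: list[PortraitExpressionSet],
                 address_book: dict[str, AddressBookEntry]):
        # Index the sets by tag so a contact's tag resolves in one lookup.
        self._by_tag = {s.tag: s for s in expression_sets}
        # Address book keyed by the contact's id in the first application.
        self._address_book = address_book

    def find_first_set(self, contact_id: str,
                       contact_tag: Optional[str] = None) -> Optional[PortraitExpressionSet]:
        # Approach 1: the contact and the set carry the same name/custom tag.
        if contact_tag and contact_tag in self._by_tag:
            return self._by_tag[contact_tag]
        # Approach 2: resolve the contact via the address book application,
        # then look the set up under the address-book name.
        entry = self._address_book.get(contact_id)
        return self._by_tag.get(entry.name) if entry else None
```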
In a possible implementation of the first aspect, the first session window includes a session area and an expression area, and displaying at least one portrait expression from the first portrait expression set in the first session window includes: displaying a portrait expression tag of the first portrait expression set in the expression area, where the portrait expression tag is determined by the electronic device according to the portrait expressions in the first portrait expression set; and storing and displaying at least one portrait expression from the first portrait expression set under the portrait expression tag.
In a possible implementation of the first aspect, the portrait expression tag is a portrait avatar of the first contact.
It can be understood that the portrait avatar may be any image in the first portrait expression set that represents the facial features of the first contact, so that when the electronic device displays the portrait avatars of multiple contacts, the user can quickly tell from an avatar which contact a portrait expression set belongs to.
In a possible implementation of the first aspect, the method further includes: in response to determining that no first portrait expression set associated with the first contact exists, displaying at least one portrait expression from a second portrait expression set in the first session window, where the second portrait expression set is a portrait expression set, among the plurality of portrait expression sets, that satisfies a preset condition.
It can be understood that when no portrait expression of the first contact exists in the electronic device, the electronic device can offer the portrait expressions of other contacts for the user to select.
In some embodiments, even when portrait expressions of the first contact do exist in the electronic device, the electronic device may also display the portrait expressions of other contacts on the display interface in response to a query operation of the user; the user may then, for example, select portrait expressions of mutual friends of the first contact to interact with, making the chat more fun.
In a possible implementation of the first aspect, the second portrait expression set is determined by: acquiring historical usage information of the plurality of portrait expression sets; and determining, as the second portrait expression set, the portrait expression set corresponding to first historical usage information that satisfies the preset condition.
In a possible implementation of the first aspect, the historical usage information includes a historical usage frequency, and determining the second portrait expression set includes: determining, as the second portrait expression set, a portrait expression set whose historical usage frequency is higher than a preset frequency threshold; or determining, as the second portrait expression set, at least one portrait expression set with the highest historical usage frequency.
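The two alternative selection rules in this implementation can be sketched as follows; the threshold value, data layout, and function names are assumptions made for illustration only.

```python
FREQUENCY_THRESHOLD = 5.0  # assumed preset frequency threshold (e.g. uses per week)

def select_second_set(usage_frequency: dict[str, float], top_n: int = 1) -> list[str]:
    """Return ids of portrait expression sets satisfying the preset condition."""
    # Rule 1: every set whose historical use frequency exceeds the threshold.
    above_threshold = [set_id for set_id, freq in usage_frequency.items()
                       if freq > FREQUENCY_THRESHOLD]
    if above_threshold:
        return above_threshold
    # Rule 2: the top-n most frequently used sets.
    ranked = sorted(usage_frequency, key=usage_frequency.get, reverse=True)
    return ranked[:top_n]
```

Here the two rules are chained as a fallback purely for compactness; the claim presents them as alternatives.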
In a possible implementation of the first aspect, the first application includes a plurality of contacts, and the plurality of portrait expression sets are generated by: acquiring a plurality of portrait images stored in the electronic device; dividing the portrait images corresponding to the same contact into the same portrait image set; performing image processing on at least one portrait image in each portrait image set to obtain the portrait expression set corresponding to each portrait image set; and associating each portrait expression set with a contact in the first application.
In one possible implementation of the first aspect, performing image processing on at least one portrait image in each portrait image set to obtain the corresponding portrait expression set includes: generating, using an expression making model, the portrait expression set corresponding to each portrait image set from at least one portrait image in that set.
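Taken together, a high-level sketch of this generation pipeline might look as follows, with face clustering and the expression making model treated as opaque helpers; all names are hypothetical.

```python
def build_portrait_expression_sets(gallery_images: list[str],
                                   contacts: list[str],
                                   cluster_by_person,   # e.g. a face-clustering routine
                                   make_expressions):   # the expression making model
    """Cluster gallery portraits per person, generate one expression set per
    cluster, and associate each set with the matching first-application contact."""
    # Step 1: divide portrait images of the same person into the same set.
    image_sets = cluster_by_person(gallery_images)   # {person_label: [image, ...]}

    # Step 2: image-process each portrait image set into a portrait expression set.
    expression_sets = {label: make_expressions(images)
                       for label, images in image_sets.items()}

    # Step 3: pre-associate each expression set with a contact via matching labels.
    association = {contact: expression_sets[contact]
                   for contact in contacts if contact in expression_sets}
    return expression_sets, association
```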
In a possible implementation of the first aspect, the expression making model includes: a portrait sub-model that determines, for each portrait image, a weight for each preset expression; a text sub-model that determines, for each preset expression text, a weight for each preset expression; and an expression generation sub-model that, according to the output results of the portrait sub-model and the text sub-model, generates the corresponding portrait expression from each portrait image and its matched preset expression text.
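One way to read this three-sub-model structure is sketched below, assuming each sub-model outputs a weight per preset expression category (e.g., "happy", "crying"); the pairing-by-agreement logic and all names are illustrative guesses, not the claimed implementation.

```python
def expression_making_model(portrait_images, preset_texts,
                            portrait_submodel, text_submodel, compose):
    """Pair each portrait image with the preset expression text whose expression
    weights agree best, then compose the pair into a portrait expression."""
    expressions = []
    for image in portrait_images:
        # Portrait sub-model: weight of each preset expression for this image,
        # e.g. {"happy": 0.8, "crying": 0.1, ...}
        image_weights = portrait_submodel(image)

        # Text sub-model: weight of each preset expression for a caption text.
        # Match the caption whose weights agree best with the image's weights.
        def agreement(text):
            text_weights = text_submodel(text)
            return sum(image_weights.get(category, 0.0) * weight
                       for category, weight in text_weights.items())
        matched_text = max(preset_texts, key=agreement)

        # Expression generation sub-model: compose image and matched text.
        expressions.append(compose(image, matched_text))
    return expressions
```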
In a possible implementation of the first aspect, the method further includes: detecting a text input operation of the user in the first session window of the first interface and receiving the text input by the user; determining, in the portrait image set corresponding to the first portrait expression set, at least one target portrait image matching the text; displaying, by the electronic device, a second interface of the first application, where the second interface includes the at least one target portrait image; detecting an image selection operation of the user on at least one portrait image in the second interface, and generating a target portrait expression from the text and the portrait image corresponding to the image selection operation; and displaying the target portrait expression in the first session window.
It can be understood that when the portrait expressions in the electronic device do not meet the user's needs, the electronic device can automatically generate a portrait expression from the text input by the user and the selected portrait image, so no manual production by the user is needed and the operation is simple.
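A sketch of this text-driven flow follows, reusing the hypothetical sub-models above to rank candidate portrait images against the input text; the ranking rule and all names are assumptions for illustration.

```python
def generate_target_expression(input_text, portrait_image_set,
                               portrait_submodel, text_submodel, compose,
                               top_k: int = 4):
    """Rank the first contact's portrait images by how well their expression
    weights match the input text, and compose the user's pick with the text."""
    text_weights = text_submodel(input_text)

    def match_score(image):
        image_weights = portrait_submodel(image)
        return sum(text_weights.get(category, 0.0) * weight
                   for category, weight in image_weights.items())

    # Target portrait images shown on the second interface for selection.
    candidates = sorted(portrait_image_set, key=match_score, reverse=True)[:top_k]

    # Invoked after the user's image selection operation.
    def on_image_selected(chosen_image):
        return compose(chosen_image, input_text)  # the target portrait expression

    return candidates, on_image_selected
```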
In one possible implementation of the first aspect, the first application includes at least one of an instant messaging application, a short message, a conference application, and a social application.
It can be understood that the instant messaging application may be, for example, WeChat™, QQ™, etc., and the social application may be, for example, Douyin™, Weibo™, Xiaohongshu™, etc. In some embodiments, the first application may also be another application that enables users to exchange image-type information with other users, for example Taobao™; any application that implements a chat function between users may be used, and the present application is not limited in this respect.
In a second aspect, an embodiment of the present application provides an electronic device, including: one or more processors; one or more memories; the one or more memories store one or more programs that, when executed by the one or more processors, cause the electronic device to perform the interaction method described above.
In a third aspect, embodiments of the present application provide a computer readable storage medium having instructions stored thereon, which when executed on a computer, cause the computer to perform the above-described interaction method.
In a fourth aspect, embodiments of the present application provide a computer program product comprising a computer program/instruction which, when executed by a processor, implements the above-described interaction method.
Drawings
Fig. 1a to Fig. 1c are schematic diagrams of interface changes during some expression interaction processes according to an embodiment of the present application;
Fig. 2 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application;
Fig. 3a to Fig. 3e are schematic diagrams of interface changes of a mobile phone after an "allow to acquire expression packages generated by gallery" switch is added, according to an embodiment of the present application;
Fig. 4 is a schematic flow chart of an interaction method according to an embodiment of the present application;
Fig. 5a to Fig. 5b are schematic diagrams of interface changes during some expression interaction processes according to an embodiment of the present application;
Fig. 6 is a schematic flow chart of an interaction method according to an embodiment of the present application;
Fig. 7 is a schematic flow chart of an expression generation method according to an embodiment of the present application;
Fig. 8a to Fig. 8e are schematic diagrams of interface changes during some expression generation processes according to an embodiment of the present application;
Fig. 9 is a schematic flow chart of a method for automatically generating portrait expression sets according to an embodiment of the present application;
Fig. 10 is a schematic diagram of a software structure of an electronic device according to an embodiment of the present application.
Detailed Description
The present application provides an interaction method to solve the problems of complex operation and poor experience when a user processes an image into an expression and sends it to a chat person. In the method, the electronic device divides the stored portrait images of the same person into a portrait image set, generates for each portrait image set a portrait expression set usable in a communication application, and, after generating the portrait expression sets, pre-associates each portrait expression set with a chat person in the communication application. Each portrait expression set includes at least one expression. When the electronic device detects an expression input operation of the user in the communication application, it acquires the corresponding contact information for the expression input operation. If the electronic device stores a portrait expression set of the corresponding contact, the acquired portrait expression set of the corresponding contact is displayed in the display interface of the communication application for the user to select from.
It can be understood that pre-association means that the electronic device determines, by acquiring input information of the user, the chat person in the communication application corresponding to each portrait expression set. For example, the name of the corresponding person or another custom label can be added to each portrait expression set while the same name or custom label is noted for the chat person in the communication application, so that the pre-association between each portrait expression set and each chat person is realized through identical names or custom labels. For another example, the portrait expression sets can be mapped to communication contacts in the address book application of the electronic device, and the communication application is allowed to access the address book application, so that the corresponding contact information in the address book is determined from the chat person information in the communication application, thereby realizing the pre-association between the portrait expression sets and the chat people in the communication application.
In some embodiments, the electronic device may turn multiple portrait images into respective portrait expression sets using an expression making model. The expression making model may be a model trained by the electronic device on a large number of portrait images, their corresponding expression labels, and the preset texts corresponding to those expression labels, so as to learn the correspondence between a portrait image and a preset text and to combine a portrait image with its corresponding preset text. A preset text may be a text corresponding to a common expression.
According to the interaction method provided by the present application, the electronic device can automatically make corresponding portrait expressions from the stored portrait images. When the user needs to input an expression, the electronic device can display, in response to the user's expression input operation, the portrait expression set of the corresponding contact that the user needs; the user can then operate the screen of the electronic device to select the desired expression from that set. The user does not need to manually make and import expressions of the corresponding contact, so the operation is simple and the experience is good.
In addition, in some embodiments, if no portrait expression set of the corresponding contact is stored in the electronic device, the electronic device may display, according to the user's usage frequency of each portrait expression set, at least one portrait expression set with the highest usage frequency on the display interface of the electronic device.
In other embodiments, the portrait expression sets in the electronic device may be generated in the cloud: after the electronic device automatically backs up its stored portrait images to a cloud server, the cloud server divides the portrait images of the same person among the backed-up images into a portrait image set, generates for each portrait image set a portrait expression set usable in the communication application, and sends each generated portrait expression set to the electronic device. It can be understood that the production of the portrait expression sets may be performed either by the electronic device or by a cloud server.
The above interaction method is described below, with reference to the accompanying drawings, taking a mobile phone as an example of the electronic device 100.
For example, fig. 1a to 1c are interface change diagrams of an interaction method according to an embodiment of the present application. As shown in fig. 1a, the mobile phone 100 displays a chat interface 101, and the chat interface 101 displays an expression input button 111; the expression input button 111 may be a smiley-face graphic as shown in fig. 1a, or another graphic representing expression input. To select an expression, the user may click the expression input button in the chat interface 101 of the mobile phone 100, i.e., perform operation (1). The mobile phone 100 detects operation (1), i.e., detects the user's expression input operation, and may acquire, in response to operation (1), the corresponding chat person information in the chat interface 101. Here, the corresponding chat person (i.e., the corresponding contact above) acquired by the mobile phone is Alice. If the mobile phone 100 stores the portrait expression set corresponding to the chat person Alice, the mobile phone 100 may acquire that portrait expression set and display the expression selection interface 102 shown in fig. 1b on its display interface.
As shown in fig. 1b, in response to the user's operation (1), the mobile phone 100 displays the expression selection interface 102, which contains an expression selection area 112 and an expression set selection area 113. The expression selection area 112 displays the expressions of the selected expression set, and the expression set selection area 113 displays the selectable expression sets. After the user performs operation (1), the mobile phone 100 may display the expressions of the communication application's default expression set in the expression selection area 112, with the default expression button 114 triggered. The portrait expression set of the corresponding chat person Alice, acquired by the mobile phone 100 in response to operation (1), may be displayed in the expression set selection area 113 in the form of a portrait expression button; the portrait expression button may be made from the cover of the portrait image set corresponding to the portrait expression set. For example, in fig. 1b, the portrait expression set corresponding to chat person Alice is displayed as the chat person expression button 115. The user may also click the expression hiding button 116 in fig. 1b to access the portrait expression sets of other chat people.
In some embodiments, when displaying the portrait expression buttons, the mobile phone 100 may order them in the expression set selection area 113 according to whether each expression set is the portrait expression set of the corresponding chat person and according to the usage frequency of each portrait expression set.
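The ordering just described could be realized by a rule as simple as the following sketch; the names and data layout are assumed for illustration.

```python
def order_expression_buttons(usage_frequency: dict[str, float],
                             corresponding_chat_person: str) -> list[str]:
    """Order portrait expression buttons: the corresponding chat person's set
    first, the remaining sets by descending historical use frequency."""
    others = sorted((person for person in usage_frequency
                     if person != corresponding_chat_person),
                    key=usage_frequency.get, reverse=True)
    head = [corresponding_chat_person] if corresponding_chat_person in usage_frequency else []
    return head + others
```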
When the user wants to send an expression that the mobile phone 100 automatically generated from stored portrait images, for example an expression from the portrait expression set of the corresponding chat person Alice, the user may click the chat person expression button 115, i.e., perform operation (2). In response to the user's operation (2), the mobile phone 100 displays the expression selection interface 103 on its display interface, as shown in fig. 1c.
As shown in fig. 1c, in the expression set selection area 113 of the expression selection interface 103, the chat person expression button 115 is triggered; the expression selection area 112 may then display at least one expression from the portrait expression set of chat person Alice automatically generated by the mobile phone 100.
It can be understood that when the user wants to send an expression of the current chat person to that chat person, the user no longer needs to manually process images of the chat person into an expression package. The mobile phone 100 can automatically generate a portrait expression set for each chat person from the stored portrait images and display it on its display interface, so the user can select an expression of the corresponding chat person simply by clicking the chat person expression button 115. This expression interaction is easy to operate and offers a better experience.
It can be understood that the interaction method provided by the embodiments of the present application is applicable to electronic devices including, but not limited to, mobile phones, portable computers, laptop computers, desktop computers, tablet computers, head-mounted displays, mobile email devices, in-vehicle devices, portable gaming devices, reader devices, televisions with one or more processors embedded or coupled therein, and other electronic devices capable of accessing a network.
Illustratively, fig. 2 shows a hardware architecture diagram of an electronic device.
As shown in fig. 2, the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a sensor module 180, a display screen 190, and the like. The sensor module 180 may include a pressure sensor 180A, an acceleration sensor 180E, a touch sensor 180K, and the like.
It should be understood that the illustrated structure of the embodiment of the present application does not constitute a specific limitation on the electronic device 100. In other embodiments of the application, electronic device 100 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. The different processing units may be separate devices or may be integrated in one or more processors. The controller can generate operation control signals according to the instruction operation codes and timing signals to complete the control of instruction fetching and instruction execution. A memory may also be provided in the processor 110 for storing instructions and data. In the embodiment of the present application, the relevant instructions and data for executing the interaction method of the present application may be stored in this memory for the processor 110 to call, and the processor 110 may control, through the controller, the execution of each step of the interaction method. The specific implementation process is described in detail below and not repeated here.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an integrated circuit (inter-integrated circuit, I2C) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, and/or a universal serial bus (universal serial bus, USB) interface, among others.
The I2C interface is a bidirectional synchronous serial bus comprising a serial data line (SDA) and a serial clock line (SCL). In some embodiments, the processor 110 may contain multiple sets of I2C buses. The processor 110 may be coupled to the touch sensor 180K, etc., through different I2C bus interfaces. For example, the processor 110 may be coupled to the touch sensor 180K through an I2C interface, so that the processor 110 communicates with the touch sensor 180K through the I2C bus interface to implement the touch function of the electronic device 100.
The MIPI interface may be used to connect the processor 110 to peripheral devices such as the display screen 190. The MIPI interfaces include camera serial interfaces (camera serial interface, CSI), display serial interfaces (display serial interface, DSI), and the like. Processor 110 and display screen 190 communicate via a DSI interface to implement the display functionality of electronic device 100.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal or as a data signal. In some embodiments, a GPIO interface may be used to connect the processor 110 to the display screen 190, the sensor module 180, etc. The GPIO interface may also be configured as an I2C interface, a MIPI interface, etc.
It should be understood that the interfacing relationship between the modules illustrated in the embodiments of the present application is only illustrative, and is not meant to limit the structure of the electronic device 100. In other embodiments of the present application, the electronic device 100 may also employ different interfacing manners in the above embodiments, or a combination of multiple interfacing manners.
The electronic device 100 implements display functions through a GPU, a display screen 190, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display screen 190 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change window display information.
The display screen 190 is used to display images, videos, and the like. The display screen 190 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 190, N being a positive integer greater than 1.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to enable expansion of the memory capabilities of the electronic device 100. The external memory card communicates with the processor 110 through an external memory interface 120 to implement data storage functions. For example, files such as music, video, etc. are stored in an external memory card.
The internal memory 121 may be used to store computer executable program code that includes instructions. The internal memory 121 may include a storage program area and a storage data area. The storage program area may store an application program (such as a sound playing function, an image playing function, etc.) required for at least one function of the operating system, etc. The storage data area may store data created during use of the electronic device 100 (e.g., audio data, phonebook, etc.), and so on. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (universal flash storage, UFS), and the like. The processor 110 performs various functional applications of the electronic device 100 and data processing by executing instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor.
In the embodiment of the present application, the execution instructions for implementing the interaction method of the present application may be stored in the internal memory 121 for the processor 110 to invoke, so that, when the user performs an expression input operation, the electronic device 100 automatically obtains the stored portrait expression set of the corresponding contact without manual import by the user, improving the user experience.
The pressure sensor 180A is used to sense a pressure signal and may convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 190. There are various types of pressure sensors 180A, such as resistive, inductive, and capacitive pressure sensors. A capacitive pressure sensor may comprise at least two parallel plates with conductive material; when a force is applied to the pressure sensor 180A, the capacitance between the electrodes changes, and the electronic device 100 determines the strength of the pressure from the change in capacitance. When a touch operation is applied to the display screen 190, the electronic device 100 detects the intensity of the touch operation through the pressure sensor 180A. The electronic device 100 may also calculate the location of the touch based on the detection signal of the pressure sensor 180A.
The acceleration sensor 180E may detect the magnitude of acceleration of the electronic device 100 in various directions (typically three axes), and may detect the magnitude and direction of gravity when the electronic device 100 is stationary. It can also be used to recognize the attitude of the electronic device, and is applied in landscape/portrait switching, pedometers, and other applications.
The touch sensor 180K is also referred to as a "touch device". The touch sensor 180K may be disposed on the display screen 190; together they form a touch screen, also called a "touchscreen". The touch sensor 180K is used to detect a touch operation acting on or near it. The touch sensor may pass the detected touch operation to the application processor to determine the touch event type, and visual output related to the touch operation may be provided through the display screen 190. In the embodiment of the present application, for example, the touch screen formed by the touch sensor 180K and the display screen 190 may detect a click operation of the user and display the corresponding interface change; for example, clicking an expression package acquisition switch pops up a floating window interface on the current display interface, which is described in detail below and not repeated here. In other embodiments, the touch sensor 180K may also be disposed on the surface of the electronic device 100 at a location different from that of the display screen 190.
Based on the structure of the electronic device 100 shown in fig. 2, the implementation process of the interaction method according to the embodiment of the present application will be described in detail below by taking the electronic device 100 as an example of a mobile phone with reference to the accompanying drawings.
It should be noted that, in some embodiments, to facilitate user operation, an expression package acquisition switch may be added to the operation interface of the communication application, and the user may choose whether to turn this switch on, i.e., whether to enable the communication application's function of acquiring the expression packages automatically generated in the mobile phone 100.
For example, as shown in fig. 3a, the communication application is WeChat™. The user operates the mobile phone 100 to open WeChat™, and the display interface of the mobile phone 100 displays WeChat™. The user may then click the settings button 311 to open the WeChat™ settings interface 302, as shown in fig. 3b. The WeChat™ settings interface 302 includes an added expression package acquisition switch, such as the "allow to acquire expression packages generated by gallery" switch 312. The user can click the switch 312 in the WeChat™ settings interface 302 to enable WeChat™'s function of acquiring the expression packages automatically generated in the mobile phone 100.
Further, in some embodiments, after the user clicks the "allow to acquire expression packages generated by gallery" switch 312 in the WeChat™ settings interface 302, a floating window interface 303 pops up over the settings interface 302, as shown in fig. 3c. The floating window interface 303 displays the text "whether the gallery is allowed to automatically generate expression packages" together with an "OK" button 321 and a "Cancel" button 322. The user may click the "OK" button 321 to allow the gallery to automatically generate expression packages, so that WeChat™ can acquire the expression packages automatically generated by the gallery for expression interaction between the user and the corresponding contact. After the user clicks the "OK" button 321, the floating window interface 303 disappears, and only the WeChat™ settings interface is displayed on the screen of the mobile phone 100, as shown in fig. 3d. At this time, in the display interface of the mobile phone 100, the "allow to acquire expression packages generated by gallery" switch 312 is turned on.
The user may instead click the "Cancel" button 322 in the floating window interface 303 to give up enabling the gallery's automatic expression package generation. In this case, the mobile phone 100 assumes by default that the user gives up turning on the "allow to acquire expression packages generated by gallery" switch 312; that is, after the user clicks the "Cancel" button 322, the floating window interface 303 disappears, the display interface of the mobile phone 100 still displays the WeChat™ settings interface, and the switch 312 remains off.
It can be understood that, in the embodiment of the present application, the "allow to acquire expression packages generated by gallery" switch 312 can be turned on only after the gallery has been authorized to automatically generate expression packages. In some embodiments, the function of automatically generating expression packages may be completed by software other than the gallery, and the corresponding expression package acquisition switch may likewise be presented as another type of switch; the present application is not limited in this respect.
In some embodiments, after the user clicks the "allow to acquire expression packages generated by gallery" switch 312 in the WeChat™ settings interface 302 shown in fig. 3b, the display interface of the mobile phone 100 jumps from WeChat™ to the gallery's settings interface 305, as shown in fig. 3e. The user can turn on the "allow to automatically generate expression packages" switch 331 in the gallery settings interface 305 of fig. 3e; once the user turns on switch 331, the display interface of the mobile phone 100 can jump back to WeChat™ with the "allow to acquire expression packages generated by gallery" switch 312 turned on, as shown in fig. 3d.
It will be appreciated that in some embodiments, the method for turning on the expression pack acquisition switch may also be implemented in other ways to interact with the user, which is not limited by the present application.
After the expression package acquisition switch is turned on by the above operations, the interaction method provided by the embodiment of the present application is further described below with reference to fig. 4.
Fig. 4 is a schematic flow chart of an interaction method according to an embodiment of the present application.
As shown in fig. 4, the interaction method provided by the embodiment of the present application is applied to a mobile phone 100, and includes:
401: Detect the user's expression input operation in the communication application and acquire the corresponding contact information for the expression input operation. At this time, the display interface of the mobile phone 100 displays a chat interface of the communication application.
It can be understood that the communication application may be, for example, an instant messaging application such as WeChat™ or QQ™, a social network platform with a chat function such as Douyin™, Weibo™, or Xiaohongshu™, or, for example, a short message or meeting application. The communication application supports sending and receiving image-type files, i.e., it has an expression interaction function. An image-type file may be a static image or a dynamic image.
It can be understood that the user's expression input operation may be the user operating the screen of the mobile phone 100 to generate a trigger event, where the trigger event is related to the location and manner of the user's operation.
For example, the expression input operation may be the user clicking the expression input button 111 as in fig. 1a, i.e., performing operation (1); when the mobile phone 100 detects that the expression input button 111 in the WeChat™ chat interface is triggered, it considers that the user's expression input operation has been detected, and the acquisition of the corresponding chat person information for the expression input operation is triggered.
It can be understood that the mobile phone 100 acquiring the corresponding chat person information of the expression input operation means that the mobile phone 100 acquires the information of the chat person the user is chatting with in the display page where the expression input operation occurs. The acquired corresponding chat person information may include the user name, account number, and remark of the corresponding chat person.
For example, when the communication application is WeChat™, the corresponding chat person information may be the nickname of the corresponding chat person, the user's remark for the corresponding chat person, the mobile phone number, the WeChat ID, etc. It can be understood that the corresponding chat person information obtained by the communication application can be used to match the corresponding chat person to a portrait expression set stored in the mobile phone 100.
402: Determine whether a portrait expression set of the corresponding contact is stored.
It can be understood that the portrait expression sets are generated from the portrait photos stored in the mobile phone 100: the images are automatically clustered, the portrait images of the same person are divided into one portrait image set, image processing is performed on each portrait image set to generate a corresponding portrait expression set, and each generated portrait expression set can be stored in the mobile phone 100 as part of a portrait expression library automatically generated by the mobile phone 100. That is, the mobile phone 100 processes the stored portrait images into a portrait expression library comprising at least one portrait expression set, and each portrait expression set includes at least one expression of the same person generated by the mobile phone 100.
In some embodiments, the clustering of images and the generation of the portrait image sets and portrait expression sets may be implemented by an album-type application, such as the gallery application in the mobile phone 100; after the portrait expression sets are generated, the album-type application needs to pre-associate each portrait expression set with the chat people in the communication application. Accordingly, in step 402, the mobile phone 100 determining whether a portrait expression set of the corresponding contact is stored means that the gallery determines whether its stored data includes a portrait expression set of the corresponding contact.
In some embodiments, after the portrait expression sets are generated, the album application may obtain the communication contact information from the address book application of the mobile phone 100 and, according to the input information of the user, establish the correspondence between each portrait expression set and the communication contact information, for example by using the communication contact information as the tag of the portrait expression set. The establishment of the portrait expression library and the correspondence between portrait expression sets and communication contact information are further described below and not repeated here.
Further, the communication application may also include an address book access switch, and the user may choose whether to turn it on, i.e., whether to enable the communication application's function of obtaining the communication contact information stored in the address book. If the user operates the mobile phone 100 and turns on the address book access switch, then when the mobile phone 100 determines in step 402 whether a portrait expression set of the corresponding contact is stored, the communication application can obtain the communication contact information in the address book, determine the communication contact information corresponding to the corresponding contact, and send the determined communication contact information to the gallery application. The gallery application then determines, according to the received communication contact information, whether a portrait expression set corresponding to that information is stored in the portrait expression library.
In some embodiments, the name of the person corresponding to a portrait expression set may be added as a tag to the portrait expression set automatically generated by the album application; the user may also customize the tag of each portrait expression set and add it to the corresponding set. Then, when the mobile phone 100 obtains the corresponding contact information, it can match that information against the tags of the portrait expression sets to determine the portrait expression set of the corresponding contact.
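For illustration, matching the acquired chat person information against the set tags could be as simple as trying each identifier in turn; the field names below are hypothetical.

```python
from typing import Optional

def match_contact_to_set(contact_info: dict[str, str],
                         tag_to_set_id: dict[str, str]) -> Optional[str]:
    """Try each acquired identifier of the corresponding chat person (remark,
    nickname, phone number, account id) against the stored set tags."""
    for key in ("remark", "nickname", "phone", "account_id"):
        value = contact_info.get(key)
        if value and value in tag_to_set_id:
            return tag_to_set_id[value]  # id of the matching portrait expression set
    return None  # no set stored for this contact; fall back per step 404
```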
If the result of the determination in step 402 is yes, step 403 is executed.
403: Acquire and display the portrait expression set of the corresponding contact.
It can be understood that when the mobile phone 100 displays the portrait expression set of the corresponding contact, a portrait expression button (i.e., the chat person expression button above) for the portrait expression set of the corresponding contact may be added to the expression area of the communication application. When the mobile phone 100 detects that the user clicks the portrait expression button, the expressions in the portrait expression set of the corresponding contact can be displayed in the expression display area of the communication application. When the display interface of the mobile phone 100 displays the expressions of the corresponding contact, the user can select the desired expression from the displayed expressions and send it to the corresponding contact.
In some embodiments, when generating a portrait expression set, the gallery application selects one image from the corresponding portrait image set, or one expression from the portrait expression set, as the cover of the set. Then, when adding the portrait expression button, the mobile phone 100 can use the cover of the corresponding contact's portrait expression set as that contact's portrait expression button.
For example, as shown in fig. 1c, the communication application is WeChat™, and the portrait expression set acquired by the mobile phone 100 is that of the corresponding chat person Alice; the newly added portrait expression button of the corresponding chat person may be, for example, the chat person expression button 115. When the user clicks the chat person expression button 115, at least one portrait expression from the portrait expression set of the corresponding chat person Alice may be displayed in the expression selection area 112. In some embodiments, the expressions in the portrait expression set of the corresponding chat person are displayed in the expression selection area 112 in order of how many times each has been used.
According to the expression interaction method provided by the embodiment of the present application, the portrait expression set of the corresponding contact that the user needs can be displayed in response to the user's expression input operation; the user can then operate the screen of the electronic device to select the desired expression from that set, without manually making and importing expressions of the corresponding contact, so the operation is simple and the experience is good. Meanwhile, using expressions made from portrait images of the corresponding contact as the interactive expressions between the user and that contact can make the chat more fun, and such expressions feel more personal.
With continued reference to fig. 4, in some embodiments, when the determination of step 402 is negative, the mobile phone 100 performs the following steps:
404: Acquire all stored portrait expression sets, and acquire the user's historical usage information for each portrait expression set.
It can be understood that the historical usage information may be, for example, the user's usage frequency of each portrait expression set, the portrait expression sets the user has commonly used recently, and so on. The usage frequency of each portrait expression set can represent how much the user likes that set, and the user's recently used portrait expression sets can represent the sets the user has preferred lately. In some embodiments, the mobile phone 100 may select one of these, such as the usage frequency of each portrait expression set or the user's recently used portrait expression sets, as the historical usage information. In other embodiments, the mobile phone 100 may use a weighted value of at least two of these categories, such as the usage frequency of each portrait expression set and the sets recently commonly used, as the historical usage information.
405: Acquire and display the portrait expression sets satisfying a preset condition according to the historical usage information of each portrait expression set.
It can be understood that the preset condition may be set according to the historical usage information. For example, when the historical usage information is the user's usage frequency of each portrait expression set, the preset condition may be to display the 3 portrait expression sets with the highest usage frequency, or that the usage frequency of a portrait expression set exceeds a preset threshold. The same applies when the historical usage information is another category of information.
Fig. 5a to 5b are interface diagrams of some interaction methods according to embodiments of the present application.
In some embodiments, the communication application is WeChat TM For example, the cell phone 100 may also provide portrait expressions of other chat people (corresponding to contacts above) for the user to interact with the expressions of the corresponding chat people (i.e., corresponding contacts). For example, when the expression of the corresponding chat person cannot meet the expression interaction requirement of the user, or the user prefers to select the expression from the portrait expression set of other chat people to interact with the corresponding chat person, or the user needs to increase the chat interest of the corresponding chat person with the expression made of the portrait image of the common friend of the corresponding chat person.
Specifically, the mobile phone 100 may provide a query button for the portrait expression sets on the display interface. When the mobile phone 100 detects that the user clicks this query button in the communication application, it may display a portrait expression set selection window on the display interface. In the selection window, the portrait expression sets can be displayed in sequence through their covers or their labels.
In some embodiments, when the portrait expression sets are displayed in the selection window through their labels, the sets may be sorted by label and displayed according to the sorting result. For example, if the labels of the portrait expression sets are people's names, the labels can be ordered by the initials of the names.
As shown in fig. 5a, the display interface of the mobile phone 100 displays the chat interface 501 of WeChat™. The expression set selection area 511 of the chat interface 501 includes a chat person expression button 521 and a query button 522. The user can click the query button 522, and in response the mobile phone displays a portrait expression set selection window 502 in floating-window mode above the chat interface 501. As shown in fig. 5a, the selection window 502 may display the labels of the portrait expression sets of chat people other than the corresponding chat person, ordered by initial letter. For example, if the labels include "small swordsman" and "Zwc", "small swordsman" may be arranged first and "Zwc" second.
In some embodiments, when the portrait expression sets are displayed in the selection window through their covers, the sets may be sorted according to their historical usage information and displayed according to the sorting result. The historical usage information is described above and is not repeated here. For example, the covers of the portrait expression sets may be ordered from the highest usage frequency to the lowest.
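The two display orders described above could be sketched as follows, reusing the hypothetical ExpressionSet class from the previous sketch; for Chinese labels, the first character would in practice first be mapped to a pinyin initial, which is omitted here:

```kotlin
// Label mode: order the sets alphabetically by the first character of the label.
fun sortByInitial(sets: List<ExpressionSet>): List<ExpressionSet> =
    sets.sortedBy { it.label.firstOrNull()?.lowercaseChar() }

// Cover mode: order the sets by usage frequency, highest first.
fun sortByFrequency(sets: List<ExpressionSet>): List<ExpressionSet> =
    sets.sortedByDescending { it.useCount }
```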
As shown in fig. 5b, the display interface of the mobile phone 100 displays the chat interface 503 of WeChat™. The expression set selection area 512 of the chat interface 503 includes a chat person expression button 531 and a query button 532. The user can click the query button 532, and in response the mobile phone displays a portrait expression set selection window 504 in floating-window mode above the chat interface 503. As shown in fig. 5b, the selection window 504 may display the covers of the portrait expression sets of chat people other than the corresponding chat person, ordered from the highest usage frequency to the lowest. For example, if the usage frequency of the set corresponding to cover 541 is higher than that of the set corresponding to cover 542, the covers 541 and 542 may be displayed in the selection window 504 as shown in fig. 5b.
In the following, with reference to fig. 6, the interaction method of the embodiments of the application is further introduced, taking WeChat™ as the communication application and the gallery application as the application that automatically generates the expression packages.
Fig. 6 is a flowchart of an interaction method according to an embodiment of the present application.
As shown in fig. 6, in some embodiments, the interaction method includes:
601: weChat TM 611 detects an expression input operation of the user.
It can be understood that the expression input operation is the user clicking an expression input button in WeChat™ 611, such as the expression input button 111 in fig. 1a. When WeChat™ 611 detects the user clicking the expression input button 111, it has detected an expression input operation of the user.
602: weChat TM 611 acquires corresponding chat person information in response to the user's expression input operation.
It will be appreciated that once WeChat™ 611 detects the expression input operation, the user needs to input an expression in WeChat™ 611. Since the expression package acquisition switch of WeChat™ 611 is turned on, such as the "allow to acquire expression packages generated by gallery" switch in fig. 3b, WeChat™ 611 must first obtain the information of the corresponding chat person chatting with the user in order to automatically acquire the required portrait expression set.
603: weChat TM 611 sends the corresponding chat person information to gallery application 612.
It will be appreciated that after obtaining the corresponding chat person information, WeChat™ 611 sends it to the gallery application 612 in order to request, from the portrait expression sets the gallery application 612 has automatically generated, the portrait expression set of the corresponding chat person.
604: gallery application 612 determines whether to store a set of facial expressions for a corresponding chat person.
It will be appreciated that in response to the request from WeChat™ 611, the gallery application 612 matches the corresponding chat person information against the labels of the portrait expression sets it has generated, and determines whether a portrait expression set of the corresponding chat person exists among them. The label of a portrait expression set can be, for example, the name of the person corresponding to the set, a user-defined label, or the contact information of the corresponding contact in the address book application.
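As an illustrative sketch of the matching in step 604, again reusing the hypothetical ExpressionSet class; the ChatPerson type and the exact-match rule are assumptions for the sketch rather than the claimed method:

```kotlin
// Hypothetical chat person record carried in the request from the communication app.
data class ChatPerson(val name: String, val contactId: String?)

// Step 604 sketch: match the chat person information against the set labels,
// which may be a person's name, a user-defined tag, or a contact identifier.
fun findSetFor(person: ChatPerson, sets: List<ExpressionSet>): ExpressionSet? =
    sets.firstOrNull { set ->
        set.label.equals(person.name, ignoreCase = true) ||
            (person.contactId != null && set.label == person.contactId)
    }
```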
If the determination in step 604 is yes, WeChat™ 611 can directly display the portrait expression set of the corresponding chat person, and step 605 is performed.
605: gallery application 612 sends the set of facial expressions of the corresponding chat person to WeChat TM 611。
It will be appreciated that if the gallery application 612 matches a portrait expression set of the corresponding chat person, it may, in response to the request from WeChat™ 611, send that set to WeChat™ 611.
606: weChat TM 611 displays the set of facial expressions of the corresponding chat person.
If the determination in step 604 is no, the gallery application 612 may not have stored any portrait image of the corresponding chat person, and therefore has not processed such images into a portrait expression set for that person. In this case, the gallery application 612 may send other portrait expression sets to WeChat™ 611 according to the user's usage preferences, and steps 607 and 608 are performed.
607: the gallery application 612 obtains historical usage information for each set of human facial expressions.
It will be appreciated that the historical usage information of the portrait expression sets can characterize the user's usage preferences for them. The historical usage information is described in detail above and is not repeated here.
608: gallery shouldAt least one set of facial expressions meeting the preset conditions is sent to WeChat at 612 TM 611。
It will be appreciated that the gallery application 612 sends at least some of the portrait expression sets it generated to WeChat™ 611, so the user does not need to manually make portrait expressions or manually import them into WeChat™ 611. The user operation is simple.
609: weChat TM 611 displays the received at least one set of facial expressions satisfying the preset condition.
In some embodiments, when the user wants an expression package with specified text, the user can input the desired text and select the portrait image that is to be used in making the expression, and the mobile phone 100 can automatically generate the corresponding expression from this input. With reference to figs. 7 to 8e, this process is further described below, taking WeChat™ 611 as the communication application and the making of an expression of the corresponding chat person as an example.
Fig. 7 is a flowchart of a method for generating an expression according to an embodiment of the present application.
Fig. 8a to 8e are diagrams illustrating interface changes in the process of generating some expressions according to an embodiment of the present application.
As shown in fig. 7, the method includes:
701: weChat TM 611 acquires the target emoji character input by the user.
It can be understood that the target expression text is the text the user inputs in WeChat™ 611 for the expression to be made.
For example, the display interface of the mobile phone 100 displays the chat interface 801 of WeChat™ 611, as shown in fig. 8a. The chat interface 801 includes a text input box 811, in which the user may input text content to be sent, or the target expression text for the target expression of the corresponding chat person. The chat interface 801 also includes an expression selection area 810, which displays the expressions in the portrait expression set of the corresponding chat person, and an add button 812.
702: weChat TM 611 detects an expression increasing operation of the user, and acquires corresponding chat person information in response to the expression increasing operation.
It can be appreciated that the user can touch the corresponding position on the screen of the mobile phone 100 to trigger the expression adding button; WeChat™ 611 thereby detects an expression adding operation of the user.
For example, as shown in fig. 8a, the user may click the add button 812 in the chat interface 801 of WeChat™ 611; when the add button 812 is triggered, WeChat™ 611 detects an expression adding operation of the user.
703: weChat TM 611 obtains a set of portrait images for the corresponding chat person from gallery application 612.
It will be appreciated that when WeChat™ 611 detects the expression adding operation, it needs to obtain portrait images of the corresponding chat person from which an expression can be made, so WeChat™ 611 obtains the portrait image set of the corresponding chat person from the gallery application 612.
704: weChat TM 611 displays a set of portrait images corresponding to the chat person.
It can be understood that after the user performs the expression adding operation, WeChat™ 611 pops up the portrait image set of the corresponding chat person.
For example, as shown in fig. 8a, after the user clicks the add button 812 in the chat interface 801 of WeChat™ 611, WeChat™ 611 displays the acquired portrait image set of the corresponding chat person in floating-window mode above the chat interface 801. As shown in fig. 8b, after the user clicks the add button 812, a portrait selection window 802 is displayed above the chat interface 801. The portrait selection window 802 may include a portrait selection area 813 and a "manual designation" button 814.
705: weChat TM 611 detects a person specifying operation by the user, and determines person image information corresponding to the person specifying operation.
It will be appreciated that the portrait designation operation is the user selecting, in WeChat™ 611, a portrait image that matches the target expression text.
For example, as shown in fig. 8b, after the portrait selection window 802 pops up, the user may click, among the portrait images of the corresponding chat person displayed in the portrait selection area 813, the portrait image matching the target expression text, so as to select it. The user may then click the "manual designation" button 814 to complete the portrait designation operation.
It will be appreciated that in some embodiments, the portrait image information determined by WeChat™ 611 for the portrait designation operation may be the portrait image itself. In other embodiments, it may be other information capable of characterizing the portrait image, such as the name of the image or features of the image, which the application does not limit.
706: weChat TM 611 sends the determined portrait image information and the target emoji to the gallery application 612.
707: the gallery application 612 generates a target expression from the received portrait image information and target emoji words.
It will be appreciated that upon receiving the portrait image information, the gallery application 612 determines the portrait image of the corresponding chat person and generates the target expression by combining it with the target expression text.
708: gallery application 612 sends the target expression to WeChat TM 611。
709: weChat TM 611 displays the target expression on the display interface.
It will be appreciated that after receiving the target expression, WeChat™ 611 can add it to the portrait expression set of the corresponding chat person shown in its chat interface.
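The data exchanged in steps 706 to 708 could be sketched as follows. In a real Android system the two applications would communicate across processes (for example through a bound service or a content provider), which is elided here; all type and field names are assumptions for the sketch:

```kotlin
// Hypothetical request from the communication app (step 706).
data class ExpressionRequest(val imageId: String, val text: String)

// Hypothetical result returned by the gallery application (step 708).
data class TargetExpression(val imageId: String, val text: String, val imagePath: String)

// Gallery-side handler (step 707): resolve the portrait image named by the
// received image information and compose the target text onto it.
fun generateTargetExpression(req: ExpressionRequest): TargetExpression {
    // Image lookup and text rendering would happen here; only the
    // bookkeeping is shown, and the output path is a placeholder.
    val outputPath = "expressions/${req.imageId}_${req.text.hashCode()}.png"
    return TargetExpression(req.imageId, req.text, outputPath)
}
```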
In some embodiments, when performing the portrait designation operation, the user may also specify the font, font size, text color, and so on of the target text; in that case the information WeChat™ 611 sends to the gallery application 612 also includes the font, size, color, and so on of the target expression text.
The expression generation method described in steps 701 to 709 above is further illustrated below with a specific embodiment, with reference to figs. 8c to 8e.
As shown in fig. 8c, the display interface of the mobile phone 100 displays the chat interface 803 of WeChat™ 611. The chat interface 803 shown in fig. 8c is identical to the chat interface 801 shown in fig. 8a above, including the text input box 811, in which the user may enter the target expression text "Shocked!!". The chat interface 803 also includes the add button 812.
When WeChat™ 611 detects that the user clicks the add button 812, it obtains the portrait image set of the corresponding chat person from the gallery application 612 and, in response to the user's expression adding operation, displays a portrait selection window 804 in floating-window mode on the chat interface of WeChat™ 611. The portrait selection window 804 may include the portrait selection area 813 and the "manual designation" button 814.
The user may select the portrait image matching the target expression text "Shocked!!"; here the user selects image 815. After selecting image 815, the user may click the "manual designation" button 814 to complete the selection of the portrait image. WeChat™ 611 can then determine the portrait image information of image 815 and send this information together with the target expression text "Shocked!!" to the gallery application 612. The gallery application 612 may determine the portrait image corresponding to the target expression text from the portrait image information, and make the target expression from the target expression text and the determined portrait image.
After automatically generating the target expression, the gallery application 612 may send it to WeChat™ 611, which displays it on its chat interface, as shown in fig. 8e. After obtaining the target expression, WeChat™ 611 displays the chat interface 805, in whose expression selection area the newly created target expression 821 is shown.
In some embodiments, the target expression generated from the user-specified target expression text is added to the portrait expression set of the corresponding chat person, and when WeChat™ 611 displays that set, the target expression is displayed at the front. This can be understood as the target expression generated by the user explicitly specifying the expression text being considered more reliable.
It can be understood that the above embodiment describes the expression generation process by taking adding an expression for the corresponding chat person as an example; expressions can be added to other portrait expression sets in the same way, which the application does not limit.
The following describes a process of automatically generating a portrait expression set in an embodiment of the present application with reference to fig. 9.
Fig. 9 is a flowchart of an automatic generation method of a portrait expression set according to an embodiment of the present application.
As shown in fig. 9, the method includes:
901: the portrait image stored in the mobile phone 100 is acquired.
It can be appreciated that in some embodiments, the method for automatically generating expressions provided in the embodiments of the application may be executed by an album-type application, such as the gallery application, which completes the production of the portrait expression sets. Acquiring the portrait images stored in the mobile phone 100 can then be understood as the gallery application reading the portrait image data stored in the corresponding storage module.
902: the portrait images stored in the mobile phone 100 are clustered according to the characters corresponding to the different portrait images.
It can be understood that clustering the portrait images is to extract features of all portrait images stored in the mobile phone 100 and then divide portrait images with similar features into the same category.
In some embodiments, the clustering algorithm for clustering the portrait images may be, for example, a K-Means clustering algorithm, a mean shift clustering algorithm, or the like, and the matched clustering algorithm may be selected according to the portrait images to be processed, which is not limited in the present application.
903: the same category of portrait images is divided into a portrait image set.
It can be understood that the portrait images of the same category obtained by the clustering algorithm share the same person features; that is, portrait images of the same category are images of the same person. The portrait images belonging to one category can therefore be used as one portrait image set.
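For illustration, a toy K-Means over face-feature vectors, standing in for the clustering of steps 902 and 903, could look as follows. Real feature extraction would come from a face-recognition model; here the features are plain FloatArray inputs, and the number of clusters k is assumed known:

```kotlin
// Toy K-Means: returns a cluster index per feature vector.
fun kMeans(features: List<FloatArray>, k: Int, iterations: Int = 20): List<Int> {
    require(features.size >= k)
    var centroids = features.shuffled().take(k).map { it.copyOf() }
    var labels = IntArray(features.size)
    repeat(iterations) {
        // Assignment step: nearest centroid by squared Euclidean distance.
        labels = IntArray(features.size) { i ->
            centroids.indices.minByOrNull { c -> sqDist(features[i], centroids[c]) }!!
        }
        // Update step: recompute each centroid as the mean of its members.
        centroids = centroids.indices.map { c ->
            val members = features.filterIndexed { i, _ -> labels[i] == c }
            if (members.isEmpty()) centroids[c]
            else FloatArray(members[0].size) { d -> members.map { it[d] }.average().toFloat() }
        }
    }
    return labels.toList()
}

private fun sqDist(a: FloatArray, b: FloatArray): Float {
    var s = 0f
    for (i in a.indices) { val d = a[i] - b[i]; s += d * d }
    return s
}
```

Images sharing a cluster index then form one portrait image set (step 903). In practice the number of persons is not known in advance, which is one reason a density-based method such as mean shift may be preferred.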
904: and (5) making the portrait images in each portrait image set into expressions by using an expression making model to obtain each portrait expression set.
It can be understood that the expression production model is a model, built from a large number of portrait images together with their corresponding expression texts and expression labels, that represents the correspondence between portrait images and expression texts. The expression production model can analyze a portrait image, calculate the target position of the expression text within the image, determine the character size and color of the text according to the size and color of the target position, and set the font according to the emotional tone of the expression text.
In some embodiments, the expression production model includes a portrait sub-model, a text sub-model, and an expression generation sub-model.
The portrait sub-model is a sub-model built with computer vision (CV) technology. For an input portrait image, the portrait sub-model can calculate the weight of the image for each expression. Specifically, expression labels corresponding to different expressions, such as smile, frown, and laugh, can be preset in the portrait sub-model, so that for an input portrait image the sub-model calculates a weight for each of these labels and outputs at least the expression label with the highest weight as its result.
The text sub-model, obtained through model training, gives the weights of the various expression labels for the preset expression texts. Specifically, the same expression labels as used for the portrait expressions, such as smile, frown, and laugh, can be preset in the text sub-model, so that for an input preset expression text the sub-model calculates a weight for each of these labels and outputs at least the expression label with the highest weight as its result.
The expression generation sub-model matches the output results of the portrait sub-model and the text sub-model, determines the preset expression text corresponding to each portrait image, performs image processing on the portrait image by analyzing it, such as cropping or adding a filter, and also determines the target position, character size, font, and color of the preset expression text. Specifically, the matching process may compare, for each portrait image output by the portrait sub-model and each preset expression text output by the text sub-model, the expression labels and corresponding weights of the image with those of the text, and determine that an image and a preset text match when their labels and weights are close. After matching, in the process of generating the portrait expression, face analysis technology can be used to analyze the portrait image, determine the position of the face in the image, and calculate an area around the face with little color-block variation, preferably a blank area, which is used as the position of the preset expression text.
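The weight matching described above could be sketched as follows; both sub-models are assumed to output a weight per expression label, and the L1 distance rule used to compare the two weight profiles is an assumption for this sketch:

```kotlin
import kotlin.math.abs

// Distance between two weight profiles over expression labels; lower is closer.
fun matchScore(imageWeights: Map<String, Float>, textWeights: Map<String, Float>): Float =
    (imageWeights.keys + textWeights.keys).sumOf { label ->
        abs((imageWeights[label] ?: 0f) - (textWeights[label] ?: 0f)).toDouble()
    }.toFloat()

// Pick the preset expression text whose label weights best match the image's.
fun bestTextFor(
    imageWeights: Map<String, Float>,
    presetTexts: Map<String, Map<String, Float>> // text -> label weights
): String? =
    presetTexts.minByOrNull { (_, weights) -> matchScore(imageWeights, weights) }?.key
```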
Further, in some embodiments, the color of the preset expression text may be a color contrasting with the blank area, to make the text stand out.
In some embodiments, based on the determined region of small color-block variation, the shape and size of the region may be used to determine the layout and character size of the preset expression text within it.
In some embodiments, the font of the text can be chosen according to the emotional tone of the expression corresponding to the preset expression text; for example, a round, cheerful font can be used for text corresponding to a happy expression.
In some embodiments, when the preset expression text is long, the cropped area can be appropriately enlarged to accommodate it; that is, the crop size of the portrait within the portrait image can be controlled according to the length of the preset expression text.
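These layout rules could be illustrated as follows; the numeric constants and the one-line-fit rule are invented for the sketch and would in practice come from the model and the rendering engine:

```kotlin
// Hypothetical low-variation region detected near the face.
data class Region(val width: Int, val height: Int)

// Scale the character size so the whole text fits the region on one line,
// with an assumed readable minimum of 12.
fun fontSizeFor(text: String, region: Region): Int {
    val byWidth = region.width / maxOf(text.length, 1)
    return minOf(byWidth, region.height).coerceAtLeast(12)
}

// Widen the crop for longer text, at an assumed width per character.
fun cropWidthFor(text: String, baseWidth: Int, perChar: Int = 14): Int =
    maxOf(baseWidth, text.length * perChar)
```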
In some embodiments, in step 901, a video file containing a face may also be acquired as material for making portrait expressions. Specifically, video frames containing a face may be extracted from the video file, input into the portrait sub-model as portrait images, and the weight of each expression label calculated.
In some embodiments, after a video file containing a face is obtained as material for making expressions, video frames whose expression label weights are similar can be combined to achieve a dynamic portrait expression effect.
In some embodiments, during the target expression production shown in fig. 7, after the target expression text is input and the portrait expressions of the corresponding chat person are obtained, the output of the portrait sub-model for each portrait image can be obtained at the same time, and the target expression text can be input into the text sub-model to output its expression label weights. WeChat™ 611 can then display the portrait images ordered from the highest to the lowest degree of match with the target expression text, so that the user can quickly find the desired image among the many portrait images of the portrait expression set.
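This ordering could reuse the matchScore sketch above: rank the candidate portrait images by how closely their label weights match those of the target text, a lower distance meaning a better match. The map-based representation is an assumption for the sketch:

```kotlin
// Rank image ids from best to worst match against the target text's weights.
fun rankImagesForText(
    imageWeights: Map<String, Map<String, Float>>, // image id -> label weights
    textWeights: Map<String, Float>
): List<String> =
    imageWeights.entries
        .sortedBy { (_, weights) -> matchScore(weights, textWeights) }
        .map { it.key }
```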
In some embodiments, after the mobile phone 100 performs the target expression production shown in fig. 7, it may further optimize the expression production model of step 904 above according to the generated target expression. Specifically, the mobile phone 100 may update the weight parameters of the portrait images and expression labels in the portrait sub-model, and the weight parameters of the preset expression texts and expression labels in the text sub-model. In some embodiments, if the target expression text input by the user is not among the preset expression texts in the text sub-model, it may be added to the text sub-model and its weight for each expression label determined.
Fig. 10 shows a software architecture block diagram of an electronic device 100 according to an embodiment of the application.
The software system of the electronic device 100 may employ a layered architecture, an event driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture. The embodiment of the application takes an android system with a layered architecture as an example, and illustrates a software structure of the electronic device 100.
The layered architecture divides the software into several layers, each with a distinct role and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the Android system is divided into four layers: from top to bottom, the application layer, the application framework layer, the Android runtime and system libraries, and the kernel layer.
The application layer may include a series of application packages.
As shown in fig. 10, the application package may include applications for cameras, gallery, calendar, phone calls, maps, navigation, WLAN, bluetooth, music, video, short messages, etc.
The application framework layer provides an application programming interface (application programming interface, API) and programming framework for application programs of the application layer. The application framework layer includes a number of predefined functions.
As shown in FIG. 10, the application framework layer may include a window manager, a task manager, a telephony manager, a resource manager, a notification manager, a view system, and the like.
The window manager is used for managing window programs. The window manager can acquire the size of the display screen, judge whether there is a status bar, lock the screen, capture the screen, and so on. In the embodiments of the application, the window manager may obtain the touch event corresponding to the user's click operation, including the application information corresponding to the window, the touch position, and the like, match the corresponding display task, and display the corresponding interface, for example displaying the portrait expression sets described in steps 403 and 405, whose detailed descriptions are not repeated here.
The task manager is configured to cooperate with the window manager to invoke the task content corresponding to the user's operation, for example a display task that the window manager needs to execute; the task manager invokes the content of the corresponding display task and sends it to the window manager for execution, thereby implementing the process of the electronic device displaying the corresponding interface.
The content provider is used to store and retrieve data and make such data accessible to applications. Such data may include video, images, audio, calls made and received, browsing history and bookmarks, phonebooks, etc.
The resource manager provides various resources for the application program, such as localization strings, icons, pictures, layout files, video files, and the like.
The notification manager allows the application to display notification information in a status bar, can be used to communicate notification type messages, can automatically disappear after a short dwell, and does not require user interaction. Such as notification manager is used to inform that the download is complete, message alerts, etc. The notification manager may also be a notification in the form of a chart or scroll bar text that appears on the system top status bar, such as a notification of a background running application, or a notification that appears on the screen in the form of a dialog window. For example, a text message is prompted in a status bar, a prompt tone is emitted, the electronic device vibrates, and an indicator light blinks, etc.
The view system includes visual controls, such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, a display interface including a text message notification icon may include a view displaying text and a view displaying a picture.
The Android runtime includes a core library and a virtual machine. The Android runtime is responsible for scheduling and managing the Android system.
The core library consists of two parts: one part contains the functions that the Java language needs to call, and the other part is the core library of Android.
The application layer and the application framework layer run in a virtual machine. The virtual machine executes java files of the application program layer and the application program framework layer as binary files. The virtual machine is used for executing the functions of object life cycle management, stack management, thread management, security and exception management, garbage collection and the like.
The system library may include a plurality of functional modules. For example: surface manager (surface manager), media Libraries (Media Libraries), three-dimensional graphics processing Libraries (e.g., openGL ES), 2D graphics engines (e.g., SGL), etc.
The surface manager is used to manage the display subsystem and provides a fusion of 2D and 3D layers for multiple applications.
Media libraries support a variety of commonly used audio, video format playback and recording, still image files, and the like. The media library may support a variety of audio and video encoding formats, such as MPEG4, h.264, MP3, AAC, AMR, JPG, PNG, etc.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The kernel layer at least comprises a display driver, a touch driver and a sensor driver.
Reference in the specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one example implementation or technique according to the disclosure. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment.
The present disclosure also relates to an apparatus for performing the operations herein. The apparatus may be specially constructed for the required purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer-readable medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, application-specific integrated circuits (ASICs), or any type of media suitable for storing electronic instructions, each of which may be coupled to a computer system bus. Furthermore, the computers referred to in the specification may include a single processor or may be architectures employing multiple processors for increased computing power.
Additionally, the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the disclosed subject matter. Accordingly, the present disclosure is intended to be illustrative, but not limiting, of the scope of the concepts discussed herein.

Claims (15)

1. An interaction method applied to an electronic device, comprising the following steps:
displaying, on the electronic device, a first interface of a first application, wherein the first interface comprises a first session window of a first contact;
detecting an expression input operation of a user in the first session window of the first interface;
and in response to determining that a first portrait expression set related to the first contact exists among a plurality of portrait expression sets stored in the electronic device, displaying at least one portrait expression of the first portrait expression set in the first session window.
2. The interaction method according to claim 1, wherein association information for representing correspondence between at least one contact in the first application and the plurality of portrait expression sets is preset in the electronic device.
3. The interaction method of claim 1, wherein the first session window includes a conversation region and an expression region, and wherein displaying at least one portrait expression of the first portrait expression set in the first session window includes:
displaying a portrait expression tag of the first portrait expression set in the expression region, wherein the portrait expression tag is determined by the electronic device according to the portrait expressions in the first portrait expression set;
and storing and displaying at least one portrait expression of the first portrait expression set under the portrait expression tag.
4. The interaction method of claim 3, wherein the portrait expression tag is a portrait avatar of the first contact.
5. The interaction method of claim 1, further comprising:
and in response to determining that no first portrait expression set related to the first contact exists, displaying at least one portrait expression of a second portrait expression set in the first session window, wherein the second portrait expression set is a set among the plurality of portrait expression sets that meets a preset condition.
6. The interaction method of claim 5, wherein the second portrait expression set is determined by:
acquiring historical use information of the plurality of portrait expression sets;
and determining a portrait expression set corresponding to the first historical use information meeting the preset condition as the second portrait expression set.
7. The interactive method according to claim 6, wherein the history use information includes a history use frequency;
the determining that the portrait expression set corresponding to the first historical usage information meeting the preset condition is the second portrait expression set includes:
determining a portrait expression set corresponding to a first historical use frequency that is higher than a preset frequency threshold as the second portrait expression set; or
determining at least one portrait expression set with the highest historical use frequency as the second portrait expression set.
8. The interaction method of claim 1, wherein the first application includes a plurality of contacts, and wherein the plurality of portrait expression sets are generated by:
acquiring a plurality of portrait images stored in the electronic equipment;
dividing at least one portrait image corresponding to the same contact person into the same portrait image set;
and performing image processing on the at least one portrait image in each portrait image set to obtain a portrait expression set corresponding to each portrait image set, and associating each portrait expression set with each contact in the first application.
9. The interaction method according to claim 8, wherein performing image processing on the at least one portrait image in each portrait image set to obtain a portrait expression set corresponding to each portrait image set includes:
and generating, for the at least one portrait image in each portrait image set, a portrait expression set corresponding to each portrait image set by using an expression production model.
10. The interaction method of claim 9, wherein the expression production model includes:
a portrait sub-model for determining the weight of each portrait image for each preset expression;
a text sub-model for determining the weight of each preset expression text for each preset expression;
and an expression generation sub-model for generating the corresponding portrait expression from each portrait image and its matched preset expression text according to the output results of the portrait sub-model and the text sub-model.
11. The method of interaction of claim 8, further comprising:
detecting a text input operation of the user in the first session window of the first interface, and receiving text input by the user;
determining at least one target portrait image matched with the text in a portrait image set corresponding to the first portrait expression set;
displaying, by the electronic device, a second interface of the first application, wherein the second interface comprises the at least one target portrait image;
detecting an image selection operation of the user on the at least one portrait image in the second interface, and generating a target portrait expression according to the text and the portrait image corresponding to the image selection operation;
and displaying the target portrait expression in the first session window.
12. The method of interaction of claim 1, wherein the first application comprises at least one of an instant messaging application, a short message, a meeting application, a social application.
13. An electronic device, comprising:
a memory for storing instructions for execution by one or more processors of the electronic device, and
a processor for performing the interaction method of any of claims 1 to 12 when the instructions are executed by one or more processors.
14. A computer readable storage medium having stored thereon instructions that, when executed on an electronic device, cause the electronic device to perform the interaction method of any of claims 1 to 12.
15. A computer program product, characterized in that it comprises instructions for implementing the interaction method according to any of claims 1 to 12.
CN202210383986.XA 2022-04-12 2022-04-12 Interaction method, device and medium Pending CN116962563A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210383986.XA CN116962563A (en) 2022-04-12 2022-04-12 Interaction method, device and medium
PCT/CN2023/085438 WO2023197888A1 (en) 2022-04-12 2023-03-31 Interaction method, device and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210383986.XA CN116962563A (en) 2022-04-12 2022-04-12 Interaction method, device and medium

Publications (1)

Publication Number Publication Date
CN116962563A true CN116962563A (en) 2023-10-27

Family

ID=88328888

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210383986.XA Pending CN116962563A (en) 2022-04-12 2022-04-12 Interaction method, device and medium

Country Status (2)

Country Link
CN (1) CN116962563A (en)
WO (1) WO2023197888A1 (en)

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101146056A (en) * 2007-09-24 2008-03-19 腾讯科技(深圳)有限公司 A display method and system for emotion icons
CN110019286B (en) * 2017-07-19 2021-10-29 中国移动通信有限公司研究院 Expression recommendation method and device based on user social relationship
CN110099159A (en) * 2018-01-29 2019-08-06 优酷网络技术(北京)有限公司 A kind of methods of exhibiting and client of chat interface
CN111162993B (en) * 2019-12-26 2022-04-26 上海连尚网络科技有限公司 Information fusion method and device
CN112905791A (en) * 2021-02-20 2021-06-04 北京小米松果电子有限公司 Expression package generation method and device and storage medium

Also Published As

Publication number Publication date
WO2023197888A1 (en) 2023-10-19


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination