WO2023197888A1 - Interaction method, device and medium


Info

Publication number
WO2023197888A1
PCT/CN2023/085438 (CN2023085438W)
Authority
WO
WIPO (PCT)
Prior art keywords
portrait
expression
user
image
expression set
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/CN2023/085438
Other languages
French (fr)
Chinese (zh)
Inventor
姚伟淦
张亚运
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Publication of WO2023197888A1 publication Critical patent/WO2023197888A1/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H04M1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M1/7243 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
    • H04M1/72436 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for text messaging, e.g. short messaging services [SMS] or e-mails
    • H04M1/72439 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for image or video messaging

Definitions

  • This application relates to the technical field of terminal equipment, and specifically to an interaction method, device, and medium.
  • Emoticons include image emoticons and text emoticons.
  • Users can enliven the atmosphere of communication by turning photos of people they know well into emoticons.
  • The embodiments of the present application provide an interaction method, device, and medium, which reduce the user's operations for making and searching for emoticons and help improve the user experience.
  • Embodiments of the present application provide an interaction method applied to an electronic device, including: displaying a first interface of a first application on the electronic device, where the first interface includes a first conversation window of a first contact; detecting the user's expression input operation in the first conversation window of the first interface; and, in response to determining that a first portrait expression set related to the first contact exists among multiple portrait expression sets stored on the electronic device, displaying at least one portrait expression in the first portrait expression set in the first conversation window.
  • The expression input operation is an operation in which the user triggers the input of an expression in the first session window, for example, the user clicking an expression input button in the first session window.
  • the first session window may include a text input box, and the user may enter text in the text input box of the first session window and click an expression input button.
  • In this way, in response to the user's expression input operation, the electronic device can display the portrait expression set corresponding to the contact the user needs. The user can then operate the screen of the electronic device to select the required emoticon from that set, without manually creating and importing emoticons for the contact. The operation is simple and the experience is good.
  • The electronic device is preset with association information representing a correspondence between at least one contact in the first application and the multiple portrait expression sets.
  • The association information is generated when the electronic device, in response to the user's association operation, associates a portrait expression set on the electronic device with a contact in the first application.
  • For example, the association information may link each portrait expression set and each contact in the first application to the communication contacts in the address book application on the electronic device; the first portrait expression set can then be determined by accessing the address book application.
  • Alternatively, the association information can be established by adding corresponding name tags or other custom tags to each portrait expression set and to each contact in the first application, so that the first portrait expression set can be identified by matching tags.
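To illustrate (purely as a hypothetical sketch, not part of the patent disclosure), the tag-based association can be thought of as a lookup that matches a contact's tag against the tag attached to each portrait expression set; all names and data structures below are invented for illustration:

```python
# Hypothetical sketch of tag-based association between contacts and
# portrait expression sets; names and structures are illustrative only.

def find_portrait_expression_set(contact_tag, expression_sets):
    """Return the expression set whose tag matches the contact's tag,
    or None if no associated set exists."""
    for exp_set in expression_sets:
        if exp_set["tag"] == contact_tag:
            return exp_set
    return None

# Each portrait expression set carries a name tag or custom tag.
expression_sets = [
    {"tag": "Alice", "expressions": ["alice_smile.png", "alice_wave.png"]},
    {"tag": "Bob", "expressions": ["bob_laugh.png"]},
]

# The first contact's tag matches the "Alice" set; an unknown tag matches none.
match = find_portrait_expression_set("Alice", expression_sets)
no_match = find_portrait_expression_set("Carol", expression_sets)
```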
  • the first conversation window includes a conversation area and an expression area
  • displaying at least one portrait expression in the first portrait expression set in the first conversation window includes: displaying the first portrait expression in the expression area.
  • In some embodiments, the first portrait expression set is displayed with a portrait expression label, which is the portrait avatar of the first contact.
  • The portrait avatar can be any image in the first portrait expression set that represents the facial features of the first contact. When the electronic device displays portrait expression sets of multiple contacts, the user can quickly determine from the portrait avatars which contact each portrait expression set belongs to.
  • In some embodiments, the above method further includes: in response to determining that no first portrait expression set related to the first contact exists, displaying at least one portrait expression in a second portrait expression set in the first conversation window, where the second portrait expression set is a set among the multiple portrait expression sets that meets a preset condition.
  • That is, when the electronic device does not store a portrait expression set for the first contact, it can provide the user with portrait expressions of other contacts to select from.
  • The electronic device can also display the portrait expressions of other contacts on the display interface in response to the user's query operation, so that, when chatting with the first contact, the user can interact using the facial expressions of mutual friends and make chatting more interesting.
  • The second portrait expression set is determined as follows: obtain historical usage information of the multiple portrait expression sets, and determine the portrait expression set whose historical usage information meets the preset condition as the second portrait expression set.
  • The historical usage information includes historical usage frequency. Determining the second portrait expression set includes: determining a portrait expression set whose historical usage frequency is higher than a preset frequency threshold as the second portrait expression set; or determining at least one portrait expression set with the highest historical usage frequency as the second portrait expression set.
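The two selection rules above (a frequency threshold, or the top-N most used sets) can be sketched as follows; the usage counts and threshold are hypothetical:

```python
# Illustrative sketch of selecting the "second portrait expression set"
# from historical usage frequency; data and threshold are hypothetical.

def select_second_expression_set(usage, freq_threshold=None, top_n=1):
    """usage: dict mapping expression-set name -> historical usage count.
    If freq_threshold is given, return all sets used more often than the
    threshold; otherwise return the top_n most frequently used sets."""
    if freq_threshold is not None:
        return [name for name, count in usage.items() if count > freq_threshold]
    ranked = sorted(usage.items(), key=lambda kv: kv[1], reverse=True)
    return [name for name, _ in ranked[:top_n]]

usage = {"Alice": 42, "Bob": 7, "Carol": 19}
# Threshold variant: every set used more than 10 times qualifies.
above = select_second_expression_set(usage, freq_threshold=10)
# Top-N variant: the single most frequently used set.
top = select_second_expression_set(usage, top_n=1)
```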
  • In some embodiments, the first application includes multiple contacts, and the multiple portrait expression sets are generated as follows: acquire multiple portrait images stored on the electronic device; divide at least one portrait image of the same contact into the same portrait image set; perform image processing on at least one portrait image in each portrait image set to obtain a portrait expression set corresponding to each portrait image set; and associate each portrait expression set with the corresponding contact in the first application.
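The grouping step above might be sketched as follows, where `identify_person` is a hypothetical stand-in for whatever face-recognition step the device actually uses:

```python
# Illustrative sketch of dividing stored portrait images into per-person
# sets; identify_person is purely hypothetical.
from collections import defaultdict

def identify_person(image_path):
    # Stand-in: a real device would run face recognition here. For this
    # sketch, the person id is encoded in the (invented) file name.
    return image_path.split("_")[0]

def group_portrait_images(image_paths):
    """Divide portrait images of the same person into the same set."""
    sets = defaultdict(list)
    for path in image_paths:
        sets[identify_person(path)].append(path)
    return dict(sets)

images = ["alice_01.jpg", "bob_01.jpg", "alice_02.jpg"]
portrait_image_sets = group_portrait_images(images)
```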
  • Performing image processing on at least one portrait image in each portrait image set to obtain the corresponding portrait expression set includes: processing the at least one portrait image with an expression production model to generate the portrait expression set corresponding to each portrait image set.
  • The expression production model includes: a portrait sub-model that determines the weight of each portrait image with respect to each preset expression; a text sub-model that determines the weight of each preset expression text with respect to each preset expression; and an expression generation sub-model that, based on the outputs of the portrait sub-model and the text sub-model, combines each portrait image with the matching preset expression text to generate the corresponding portrait expression.
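One way to picture the cooperation of the three sub-models (a hypothetical sketch with invented weights, not the patent's actual model): the portrait sub-model scores an image against preset expression labels, the text sub-model scores the preset texts, and the generation step pairs the image with the best-matching text:

```python
# Illustrative sketch of pairing a portrait image with a preset expression
# text via per-label weights; all scores and labels are hypothetical.

def match_image_to_text(image_weights, text_weights):
    """image_weights / text_weights: dicts mapping preset-expression
    labels ("happy", "sad", ...) to weights. Pair the image with the
    label whose combined weight is largest."""
    combined = {label: image_weights.get(label, 0.0) * w
                for label, w in text_weights.items()}
    return max(combined, key=combined.get)

# Portrait sub-model output for one portrait image.
image_weights = {"happy": 0.9, "sad": 0.1}
# Text sub-model output for the preset expression texts.
text_weights = {"happy": 0.8, "sad": 0.7}

best_label = match_image_to_text(image_weights, text_weights)
```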
  • In some embodiments, the above method further includes: detecting the user's text input operation in the first conversation window of the first interface and receiving the text input by the user; determining, from a set of portrait images, at least one target portrait image that matches the text; displaying a second interface of the first application, where the second interface includes the at least one target portrait image; detecting the user's image selection operation on a portrait image in the second interface; generating a target portrait expression based on the text and the portrait image corresponding to the image selection operation; and displaying the target portrait expression in the first session window.
  • In this way, the electronic device can automatically generate a facial expression from a selected portrait image according to the text and selection input by the user, without requiring the user to create it manually, and the operation is simple.
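The text-driven flow above can be sketched end to end; every function and file name here is a hypothetical stand-in for the device's actual components:

```python
# Illustrative sketch of the text-driven flow: match the user's text to
# candidate portrait images, let the user pick one, then compose the text
# onto the chosen image. Names and data are invented for illustration.

def match_images_to_text(text, image_labels):
    """Return candidate images whose expression label appears in the text."""
    return [img for img, label in image_labels.items() if label in text]

def generate_target_expression(text, image):
    """Stand-in for compositing the caption onto the chosen image."""
    return {"image": image, "caption": text}

# Hypothetical per-image expression labels from prior image processing.
image_labels = {"alice_smile.jpg": "happy", "alice_cry.jpg": "sad"}
candidates = match_images_to_text("so happy today", image_labels)
# Suppose the user's image selection operation picks the first candidate.
expression = generate_target_expression("so happy today", candidates[0])
```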
  • the first application includes at least one of an instant messaging application, a text message, a conferencing application, and a social networking application.
  • Instant messaging applications can be, for example, WeChat™, QQ™, etc.
  • Social networking applications can be, for example, Douyin™, Weibo™, Xiaohongshu™, etc.
  • The first application can also be another application that can transmit image information between users, such as Taobao™; as long as chat between users can be realized, this application does not limit the type of application.
  • Embodiments of the present application provide an electronic device, including: one or more processors and one or more memories, where the one or more memories store one or more programs. When the one or more programs are executed by the one or more processors, the electronic device is caused to execute the above interaction method.
  • Embodiments of the present application provide a computer-readable storage medium with instructions stored thereon. When the instructions are executed on a computer, they cause the computer to perform the above interaction method.
  • Embodiments of the present application provide a computer program product, which includes a computer program/instructions.
  • When the computer program/instructions are executed by a processor, the above interaction method is implemented.
  • FIGS 1a to 1c show schematic diagrams of interface changes of some expression interaction processes provided by embodiments of the present application
  • Figure 2 shows a schematic diagram of the hardware structure of an electronic device provided by an embodiment of the present application
  • FIGS 3a to 3e show some schematic diagrams of the interface changes of the mobile phone after adding the "Allow obtaining emoticon packages generated by the gallery" switch provided by the embodiment of the present application;
  • Figure 4 shows a schematic flow chart of an interaction method provided by an embodiment of the present application
  • FIGS 5a to 5b are schematic diagrams of interface changes of some expression interaction processes provided by embodiments of the present application.
  • Figure 6 shows a schematic flow chart of an interaction method provided by an embodiment of the present application.
  • Figure 7 shows a schematic flow chart of an expression generation method provided by an embodiment of the present application.
  • FIGS 8a to 8e are schematic diagrams of interface changes in the generation process of some expressions provided by embodiments of the present application.
  • Figure 9 shows a schematic flow chart of a method for automatically generating a portrait expression set provided by an embodiment of the present application.
  • Figure 10 shows a schematic diagram of the software structure of an electronic device provided by an embodiment of the present application.
  • This application provides an interaction method.
  • The method includes: the electronic device divides the portrait images of the same person among the multiple portrait images it stores into a portrait image set, and generates, for each portrait image set, a portrait expression set that can be used in communication applications. After generating the portrait expression sets, the electronic device pre-associates each portrait expression set with each chat contact in the communication application. Each portrait expression set includes at least one expression.
  • When the electronic device detects the user's expression input operation in the communication application, it obtains the contact information corresponding to the operation. If the electronic device stores a portrait expression set corresponding to that contact, it displays the acquired portrait expression set on the display interface of the communication application for the user to select.
  • Pre-association means that the electronic device determines the chat contact corresponding to each portrait expression set in the communication application from the user's input information. For example, the name or another custom tag of the corresponding person can be added to each portrait expression set, while the same name or custom tag is noted for each chat contact in the communication application; matching names or tags then pre-associate each portrait expression set with a chat contact in the communication application.
  • As another example, each portrait expression set can be mapped to a communication contact in the address book application on the electronic device, and the communication application can be allowed to access the address book application to determine, based on the chat contact information in the communication application, the corresponding communication contact, thereby pre-associating each portrait expression set with each chat contact in the communication application.
  • The electronic device can use an expression production model to turn the multiple portrait images into the portrait expression sets.
  • The expression production model may be obtained by training on a large number of portrait images, their corresponding expression tags, and preset text corresponding to each expression tag; the model learns the correspondence between portrait images and preset text, and combines each portrait image with the corresponding preset text to produce the resulting expression.
  • the preset text can be text corresponding to commonly used expressions.
  • the electronic device can automatically create corresponding portrait expressions for the portrait images it stores.
  • the electronic device can display the set of portrait expressions corresponding to the contact that the user needs in response to the user's expression input operation.
  • The user can then operate the screen of the electronic device to select the required emoticon from the set of portrait expressions corresponding to the contact, without manually creating and importing emoticons for the contact. The operation is simple and the experience is good.
  • The electronic device can also display, on its display interface, at least one portrait expression set with the highest frequency of use, according to the user's usage frequency of each portrait expression set.
  • Alternatively, the portrait expression sets may be generated by a cloud server: after the electronic device automatically backs up the multiple portrait images it stores to the cloud server, the server divides the backed-up portrait images of the same person into a portrait image set, generates for each portrait image set a portrait expression set that can be used in communication applications, and sends each generated set to the electronic device. That is, the production of portrait expression sets can be executed by the electronic device or by a cloud server.
  • Figures 1a to 1c show interface change diagrams of the interaction method provided by the embodiments of the present application.
  • the mobile phone 100 displays a chat interface 101, and an expression input button 111 is displayed on the chat interface 101.
  • the expression input button 111 can be a smiley face graphic as shown in Figure 1a, or other graphics representing expression input.
  • the user can click the expression input button in the chat interface 101 of the mobile phone 100, that is, perform operation 1.
  • the mobile phone 100 obtains the user's operation 1, that is, the user's expression input operation.
  • the mobile phone 100 can respond to the user's operation 1 and obtain the corresponding chat person information in the chat interface 101.
  • the corresponding chat person information obtained by the mobile phone is Alice. If the mobile phone 100 stores a portrait expression set corresponding to the chatter Alice, the mobile phone 100 can obtain the portrait expression set corresponding to the chatter Alice, and display the expression selection interface 102 as shown in Figure 1b on the display interface of the mobile phone 100.
  • the mobile phone 100 displays an expression selection interface 102 as shown in Figure 1b on the display interface.
  • the expression selection interface 102 displays an expression selection area 112 and an expression set selection area 113.
  • the expression selection area 112 is used to display the expressions of the selected expression set, and the expression set selection area 113 displays the selectable expression set.
  • the mobile phone 100 can display the expression corresponding to the default expression set of the communication application in the expression selection area 112, and at this time the default expression button 114 is triggered.
  • the portrait expression set corresponding to the chat person Alice obtained by the mobile phone 100 in response to the user's operation 1 can be displayed in the expression set selection area 113 in the form of a portrait expression button.
  • the portrait expression button can be made by using the cover of the portrait image set corresponding to the portrait expression set.
  • the portrait expression set corresponding to the chatter Alice can be displayed as the chatter emoticon button 115.
  • the user can also click the expression hiding button 116 in Figure 1b to obtain the portrait expression sets of other chatters.
  • When the mobile phone 100 displays the portrait expression buttons, they may be arranged in the expression set selection area 113 according to whether the corresponding expression set is the portrait expression set of the current chat contact and according to the frequency of use of each portrait expression set.
  • When the user wants to send an expression automatically generated by the mobile phone 100 from a stored portrait image, for example an expression from the portrait expression set of the chat contact Alice, to Alice, the user can click the chat person expression button 115, that is, perform operation 2.
  • In response to operation 2, the mobile phone 100 displays the expression selection interface 103 on its display interface, as shown in Figure 1c.
  • the chat person's expression button 115 is triggered in the expression set selection area 113 of the expression selection interface 103.
  • the expression selection area 112 may display at least one expression in the portrait expression set corresponding to Alice, the chat person, automatically generated by the mobile phone 100 .
  • the mobile phone 100 can automatically generate a portrait expression set of each chatter based on the stored multiple portrait images, and display the portrait expression set on the display interface of the mobile phone 100.
  • the user can select the expression of the corresponding chatter by clicking the chatter expression button 115.
  • In this way, the user's expression-based interaction is simple to operate and provides a better experience.
  • The interaction methods provided by the embodiments of the present application are applicable to electronic devices including, but not limited to, mobile phones, portable computers, laptop computers, desktop computers, tablet computers, head-mounted displays, mobile email devices, automotive devices, portable game consoles, reader devices, televisions with one or more processors embedded or coupled therein, and other electronic devices capable of accessing a network.
  • FIG. 2 shows a schematic diagram of the hardware structure of an electronic device.
  • the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a sensor module 180, a display screen 190, etc.
  • the sensor module 180 may include a pressure sensor 180A, an acceleration sensor 180E, a touch sensor 180K, etc.
  • the structure illustrated in the embodiment of the present application does not constitute a specific limitation on the electronic device 100 .
  • the electronic device 100 may include more or fewer components than shown in the figures, or some components may be combined, some components may be separated, or some components may be arranged differently.
  • the components illustrated may be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 may include one or more processing units.
  • The processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc.
  • The controller can generate operation control signals based on the instruction operation code and timing signals to control instruction fetching and execution.
  • the processor 110 may also be provided with a memory for storing instructions and data.
  • the relevant instructions and data for executing the interactive method of the present application can be stored in the memory for the processor 110 to call.
  • The processor 110 can control the execution of each step of the interaction method through the controller. The specific implementation process is described in detail below and will not be repeated here.
  • processor 110 may include one or more interfaces.
  • Interfaces may include an inter-integrated circuit (I2C) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
  • The I2C interface is a bidirectional synchronous serial bus, including a serial data line (SDA) and a serial clock line (SCL).
  • processor 110 may include multiple sets of I2C buses.
  • the processor 110 may separately couple the touch sensors 180K and the like through different I2C bus interfaces.
  • the processor 110 can be coupled to the touch sensor 180K through an I2C interface, so that the processor 110 and the touch sensor 180K communicate through the I2C bus interface to implement the touch function of the electronic device 100 .
  • the MIPI interface can be used to connect the processor 110 and peripheral devices such as the display screen 190 .
  • MIPI interfaces include camera serial interface (CSI), display serial interface (DSI), etc.
  • the processor 110 and the display screen 190 communicate through the DSI interface to implement the display function of the electronic device 100 .
  • the GPIO interface can be configured through software.
  • the GPIO interface can be configured as a control signal or as a data signal.
  • The GPIO interface may be used to connect the processor 110 with the display screen 190, the sensor module 180, etc.
  • The GPIO interface can also be configured as an I2C interface, MIPI interface, etc.
  • the interface connection relationships between the modules illustrated in the embodiments of the present application are only schematic illustrations and do not constitute a structural limitation of the electronic device 100 .
  • the electronic device 100 may also adopt different interface connection methods in the above embodiments, or a combination of multiple interface connection methods.
  • the electronic device 100 implements display functions through a GPU, a display screen 190, an application processor, and the like.
  • the GPU is an image processing microprocessor and is connected to the display screen 190 and the application processor. GPUs are used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 110 may include one or more GPUs that execute program instructions to generate or alter window display information.
  • the display screen 190 is used to display images, videos, etc.
  • Display 190 includes a display panel.
  • The display panel can use a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), etc.
  • the electronic device 100 may include 1 or N display screens 190, where N is a positive integer greater than 1.
  • the external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 100.
  • The external memory card communicates with the processor 110 through the external memory interface 120 to implement the data storage function, for example, saving music and video files on the external memory card.
  • Internal memory 121 may be used to store computer executable program code, which includes instructions.
  • the internal memory 121 may include a program storage area and a data storage area.
  • The program storage area can store an operating system and at least one application program required for a function (such as a sound playback function or an image playback function).
  • The data storage area may store data created during use of the electronic device 100 (such as audio data and a phone book).
  • the internal memory 121 may include high-speed random access memory, and may also include non-volatile memory, such as at least one disk storage device, flash memory device, universal flash storage (UFS), etc.
  • the processor 110 executes various functional applications and data processing of the electronic device 100 by executing instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor.
  • The internal memory 121 can store execution instructions for implementing the interaction method of the present application, for the processor 110 to call, so that when the user performs an expression input operation, the electronic device 100 automatically obtains the portrait expression set corresponding to the contact stored on the electronic device, without requiring the user to import it manually, improving the user experience.
  • the pressure sensor 180A is used to sense pressure signals and can convert the pressure signals into electrical signals.
  • the pressure sensor 180A may be disposed on the display screen 190 .
  • There are many types of pressure sensors 180A, such as resistive pressure sensors, inductive pressure sensors, and capacitive pressure sensors.
  • a capacitive pressure sensor may include at least two parallel plates of conductive material.
  • the electronic device 100 determines the intensity of the pressure based on the change in capacitance.
  • the electronic device 100 detects the strength of the touch operation according to the pressure sensor 180A.
  • the electronic device 100 may also calculate the touched position based on the detection signal of the pressure sensor 180A.
  • The acceleration sensor 180E can detect the acceleration of the electronic device 100 in various directions (generally along three axes). When the electronic device 100 is stationary, it can detect the magnitude and direction of gravity. It can also be used to identify the posture of the electronic device, and is used in applications such as landscape/portrait switching and pedometers.
  • the touch sensor 180K is also known as a "touch device".
  • the touch sensor 180K can be disposed on the display screen 190.
  • the touch sensor 180K and the display screen 190 form a touch screen, which is also called a "touch screen".
  • the touch sensor 180K is used to detect a touch operation on or near the touch sensor 180K.
  • the touch sensor can pass the detected touch operation to the application processor to determine the touch event type.
  • Visual output related to the touch operation may be provided through the display screen 190 .
  • the touch screen composed of the touch sensor 180K and the display screen 190 can detect the user's click operation, and in response to the click operation the touch screen can display corresponding interface changes, such as displaying the emoticon the user clicked.
  • the touch sensor 180K may also be disposed on the surface of the electronic device 100 at a location different from that of the display screen 190 .
  • a switch for obtaining emoticons can be added to the operation interface of the communication application.
  • the user can choose whether to turn on the switch for obtaining emoticons, thereby choosing whether to enable the function of the communication application obtaining the emoticon packages automatically generated by the mobile phone 100.
  • the communication application is WeChatTM .
  • the user operates the mobile phone 100 to open WeChat TM , and the WeChat TM interface 301 is displayed on the display interface of the mobile phone 100.
  • the user can click the setting button 311 to open the WeChat™ settings interface 302, as shown in Figure 3b.
  • the settings interface 302 of WeChat™ includes various WeChat™ settings, among which is the added emoticon package acquisition switch, such as the "Allow obtaining emoticon packages generated by the gallery" switch 312 in the settings interface 302.
  • the user can click the "Allow obtaining emoticon packages generated by the gallery" switch 312 in the settings interface 302 of WeChat™ to enable the function of WeChat™ obtaining the emoticon packages automatically generated in the mobile phone 100.
  • a floating window interface 303 will pop up above the settings interface 302 of WeChat™, as shown in Figure 3c.
  • in the floating window interface 303, the words "Whether the gallery is allowed to automatically generate emoticons" will be displayed, as well as the "OK" button 321 and the "Cancel" button 322.
  • the user can click the "OK” button 321 to enable the automatic generation of emoticon packages by the gallery, and then WeChat TM can obtain the emoticon packages automatically generated by the gallery and use them for emoticon interaction between the user and the corresponding contact.
  • the user can also click the "Cancel" button 322 in the floating window interface 303 to abandon the function of allowing the gallery to automatically generate emoticons.
  • when the user of the mobile phone 100 gives up turning on the "Allow obtaining emoticon packages generated by the gallery" switch 312, that is, when the user clicks the "Cancel" button 322 in the floating window interface 303, the floating window interface 303 disappears, the display interface of the mobile phone 100 still displays the settings interface 302 of WeChat™, and the "Allow obtaining emoticon packages generated by the gallery" switch 312 remains off.
  • the "Allow acquisition of emoticon packages generated by the gallery" switch 312 in the embodiment of the present application can be turned on only after the gallery is authorized to automatically generate emoticon packages.
  • other software other than the gallery may be used to automatically generate emoticon packages.
  • the emoticon package acquisition switch may also be displayed through other types of switches. This application does not limit this.
  • the display interface of the mobile phone 100 will jump from the settings interface 302 of WeChat™ to the settings interface 305 of the gallery, as shown in Figure 3e.
  • the user can turn on the "Allow automatic generation of emoticons" switch 331 in the setting interface 305 of the gallery shown in Figure 3e.
  • the method for turning on the emoticon package acquisition switch can also adopt other methods to achieve interaction with the user, and this application does not limit this.
  • Figure 4 shows a schematic flowchart of an interaction method provided by an embodiment of the present application.
  • the interaction method provided by the embodiment of the present application is applied to the mobile phone 100, including:
  • the display interface of the mobile phone 100 displays the chat interface of the communication application.
  • communication applications can include instant messaging applications such as WeChatTM and QQTM , social network platforms with chat functions such as DouyinTM , WeiboTM , and XiaohongshuTM , and text messaging and conferencing applications.
  • the communication application can support the sending and receiving of image type files, that is, the communication application has the expression interaction function.
  • the image type files can be static images or dynamic images.
  • the user's expression input operation can generate a trigger event for the user to operate the screen of the mobile phone 100, and the trigger event is related to the location of the user's operation and the user's operation method.
  • the user's expression input operation can be, for example, the user clicking the expression input button 111 shown in Figure 1a; that is, the user performs operation 1, and when the mobile phone 100 detects that the expression input button 111 in the chat interface of WeChat™ is triggered, it can be considered to have detected the user's expression input operation, which triggers the acquisition of the chat person information corresponding to the expression input operation.
  • the mobile phone 100 obtains the chat person information corresponding to the expression input operation, so that the mobile phone 100 obtains the corresponding chat person information for chatting with the user in the display page where the expression input operation is performed.
  • the obtained corresponding chat person information may include the corresponding chat person's user name, account number, and the user's remarks about the corresponding chat person, etc.
  • the corresponding chat person information may include the corresponding chat person's nickname, the user's notes about the corresponding chat person, mobile phone number, WeChat ID, etc. It can be understood that the corresponding chat person information obtained by the communication application can be used to match the corresponding chat person with the portrait expression set stored in the mobile phone 100 .
  • the portrait expression sets are generated from the portrait photos stored in the mobile phone 100: the mobile phone 100 automatically clusters these images, groups the portrait images of the same person into a portrait image set, and performs image processing on each portrait image set to generate a corresponding portrait expression set. Each generated portrait expression set can be stored in the mobile phone 100, forming a portrait expression library automatically generated by the mobile phone 100. That is, the mobile phone 100 processes the portrait images stored in it and generates a portrait expression library including at least one portrait expression set, where each portrait expression set includes at least one expression of the same person generated by the mobile phone 100.
  • the clustering of images and the generation of portrait image sets and portrait expression sets can be implemented by a photo album application, such as the gallery application in the mobile phone 100; after generating a portrait expression set, the photo album application needs to pre-associate the portrait expression set with the chat person in the communication application. Further, in step 402, the mobile phone 100 determines whether the portrait expression set corresponding to the contact is stored, that is, the gallery determines whether its stored data includes the portrait expression set corresponding to the contact.
  • the photo album application can obtain the communication contact information in the address book application of the mobile phone 100, and establish a relationship between the portrait expression set and the communication contact information based on the user's input information.
  • the communication contact information can be used as the label of the portrait expression set.
  • the communication application may also include an address book access switch.
  • the user can choose whether to enable the function of the communication application to obtain communication contact information stored in the address book by choosing whether to turn on the address book access switch.
  • the user can operate the mobile phone 100 to turn on the address book access switch.
  • in step 402, when the mobile phone 100 determines whether the portrait expression set of the corresponding contact is stored, the communication application can obtain the communication contact information in the address book, determine the communication contact information of the corresponding contact, and send the determined communication contact information to the gallery application.
  • the gallery application determines whether the portrait expression set corresponding to the received communication contact information is stored in the portrait expression library based on the received communication contact information.
  • the user can add the name of the person corresponding to the portrait expression set as the label of the portrait expression set automatically generated by the photo album application to the corresponding portrait expression set.
  • the user can also customize the label of each portrait expression set and add to the corresponding portrait expression set.
  • after obtaining the corresponding contact information, the mobile phone 100 can match it with the tags of each portrait expression set to determine the portrait expression set corresponding to the contact.
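  The tag-matching step described above can be illustrated with a short sketch. All function names, field names, and the tag structure below are assumptions for illustration, not taken from the actual device software; any one field of the chat person information matching a tag is treated as sufficient.

```python
def find_portrait_expression_set(chat_person_info, expression_library):
    """Return the portrait expression set whose tags match the chat person,
    or None when no set corresponds to this contact (step 402 result: no)."""
    # chat_person_info may contain a nickname, a user remark, a phone
    # number, an account ID, etc.; matching on any one field suffices.
    candidates = {value for value in chat_person_info.values() if value}
    for expression_set in expression_library:
        if candidates & set(expression_set["tags"]):
            return expression_set
    return None

# Hypothetical portrait expression library; tags mix names and phone numbers.
library = [
    {"tags": {"Alice", "138xxxx0000"}, "expressions": ["alice_smile.png"]},
    {"tags": {"Peter Pan"}, "expressions": ["peter_laugh.png"]},
]
info = {"nickname": "Alice", "remark": None, "account": "wx_alice"}
match = find_portrait_expression_set(info, library)
```

  When no tag matches, the function returns None, which corresponds to the "determination result is no" branch handled in the later steps.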
  • step 403 is executed.
  • when the mobile phone 100 displays the portrait expression set corresponding to the contact, it can add, in the expression area of the communication application, a portrait expression button (i.e., the chat person expression button mentioned above) for the portrait expression set corresponding to the contact. Furthermore, when the mobile phone 100 detects that the user clicks the portrait expression button, it can display the expressions in the portrait expression set of the contact in the expression display area of the communication application. When the display interface of the mobile phone 100 displays the expressions of the corresponding contact, the user can select the expression he needs from the displayed at least one expression of the corresponding contact and send it to the corresponding contact.
  • the gallery application when generating a portrait expression set, will select a portrait image in the corresponding portrait image set or an expression in the portrait expression set as the cover of the portrait expression set. Further, when the mobile phone 100 adds a portrait expression button, the cover of the portrait expression set corresponding to the contact can be used as the portrait expression button corresponding to the contact.
  • the portrait expression set corresponding to the chatter obtained by the mobile phone 100 is Alice's portrait expression set
  • the newly added portrait expression button corresponding to the chatter can be, for example, the chatter expression button 115
  • the expression selection area 112 displays the expressions in the portrait expression set corresponding to the chat person in descending order of how often each expression in that set has been used.
  • the expression interaction method provided by the embodiment of the present application can display the portrait expression set corresponding to the contact required by the user in response to the user's expression input operation. Furthermore, the user can operate the screen of the electronic device to select from the portrait expression set corresponding to the contact.
  • the mobile phone 100 when the determination result in step 402 is no, the mobile phone 100 performs the following steps:
  • the historical usage information may include, for example, the user's frequency of use of each portrait expression set, the user's recently used portrait expression sets, etc.
  • the frequency of use of each portrait expression set can represent the user's preference for each portrait expression set
  • the user's recently used portrait expression set can represent the user's recently preferred portrait expression set.
  • the mobile phone 100 can select a category from the user's frequency of use of each portrait expression set, the user's most recently used portrait expression set, etc., as historical usage information.
  • the mobile phone 100 may use the user's frequency of use of each portrait expression set and the weighted values of at least two categories in the user's recently used portrait expression sets as historical usage information.
  • the preset conditions can be set based on the historical usage information. For example, when the historical usage information is the user's frequency of use of each portrait expression set, the preset condition can be that the three most frequently used portrait expression sets are displayed, or that a portrait expression set is displayed when its usage frequency exceeds a preset threshold. The same applies when the historical usage information is another category of information.
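  The two example preset conditions above can be sketched directly. Function names, the usage table, and the threshold value are illustrative assumptions, not part of the described device software.

```python
def top_three_sets(usage_by_set):
    """Preset condition 1: the three most frequently used portrait expression sets."""
    ranked = sorted(usage_by_set, key=usage_by_set.get, reverse=True)
    return ranked[:3]

def sets_above_threshold(usage_by_set, threshold):
    """Preset condition 2: sets whose usage frequency exceeds a preset threshold."""
    return [name for name, freq in usage_by_set.items() if freq > threshold]

# Hypothetical usage counts keyed by the portrait expression set's label.
usage = {"Alice": 42, "Peter Pan": 17, "Zwc": 8, "Bob": 3}
```

  Either condition yields the subset of portrait expression sets that the mobile phone 100 would send and display when no set corresponds to the contact.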
  • FIGS 5a to 5b are interface diagrams of some interactive methods provided by embodiments of the present application.
  • the mobile phone 100 can also provide portrait expressions of other chat persons (corresponding to the contacts mentioned above) for the user's expression interaction with the corresponding chat person, for example, when the expressions of the corresponding chat person cannot meet the user's expression interaction needs, when the user prefers to select expressions from the portrait expression set of another chat person to interact with the corresponding chat person, or when the user wants emoticons made from portrait images of friends shared with the corresponding chat person to make the chat more interesting, etc.
  • the mobile phone 100 can provide a query button for the portrait expression set on the display interface.
  • when the mobile phone 100 detects that the user clicks the query button of the portrait expression set while using the communication application, the mobile phone 100 can display a portrait expression set selection window on the display interface.
  • Each portrait expression set can be displayed sequentially in the portrait expression set selection window through its corresponding cover or label of the portrait expression set.
  • when the portrait expression sets are displayed sequentially in the portrait expression set selection window through their labels, the portrait expression sets can be sorted by label and displayed in the selection window according to the sorting result. For example, if the label of a portrait expression set is the name of the person, the labels of the portrait expression sets can be sorted by the first letter of the name.
  • the display interface of the mobile phone 100 displays the chat interface 501 of WeChat TM .
  • the emoticon set selection area 511 of the chat interface 501 of WeChatTM includes a chat person emoticon button 521 and a query button 522. The user can click the query button 522.
  • the mobile phone responds to the user's click operation and displays the portrait expression set selection window 502 in a floating window mode above the chat interface 501 of WeChatTM .
  • the portrait expression set selection window 502 can display the labels of the portrait expression sets of other chatters except the portrait expression set of the corresponding chatter, and the labels of each portrait expression set can be sorted by first letter. For example, if the labels of the portrait expression set include "Peter Pan" and "Zwc", "Peter Pan” can be ranked first and "Zwc" can be ranked second.
  • when the portrait expression sets are displayed sequentially in the portrait expression set selection window through their covers, the portrait expression sets can be sorted according to the historical usage information of each set and displayed in the selection window according to the sorting result.
  • the historical usage information is introduced above and will not be described in detail here.
  • the covers of each portrait expression set are sorted according to the frequency of use of each portrait expression set from high to low.
  • the display interface of the mobile phone 100 displays the chat interface 503 of WeChat TM .
  • the emoticon set selection area 512 of the chat interface 503 of WeChatTM includes a chat person emoticon button 531 and a query button 532. The user can click the query button 532.
  • the mobile phone responds to the user's click operation and displays the portrait expression set selection window 504 in a floating window mode above the chat interface 503 of WeChatTM .
  • the portrait expression set selection window 504 can display the covers of portrait expression sets of other chatters except the portrait expression set of the corresponding chatter, and the cover of each portrait expression set can be based on the frequency of use of each portrait expression set. Sort from high to low.
  • the cover 541 of one portrait expression set and the cover 542 of another portrait expression set can be displayed in the portrait expression set selection window 504 as shown in Figure 5b.
  • Figure 6 shows a flow chart of an interaction method provided by an embodiment of the present application.
  • the interaction method includes:
  • WeChat TM 611 detects the user's expression input operation.
  • the expression input operation is for the user to click an expression input button on the chat interface of WeChat TM 611, such as the expression input button 111 in Figure 1a.
  • WeChatTM 611 detects the user's operation of clicking the expression input button 111, that is, detects the user's expression input operation.
  • WeChat TM 611 obtains the corresponding chat person information in response to the user's expression input operation.
  • when WeChat™ 611 detects the emoticon input operation, it indicates that the user needs to input emoticons in the chat interface of WeChat™ 611.
  • when WeChat™ 611 has turned on the emoticon package acquisition switch, such as the "Allow obtaining emoticon packages generated by the gallery" switch in Figure 3b, in order to automatically obtain the required portrait expression set, WeChat™ 611 needs to first obtain the information of the corresponding chat person chatting with the user.
  • WeChat TM 611 sends the corresponding chat person information to the gallery application 612.
  • WeChat TM 611 obtains the corresponding chat person information, it will send the obtained corresponding chat person information to the gallery application 612, and apply to the gallery application 612 for the portrait expression set of the corresponding chat person automatically generated by the gallery application 612.
  • the gallery application 612 determines whether to store the portrait expression set corresponding to the chat person.
  • the gallery application 612 will match the corresponding chat person information with the tags of each portrait expression set it has generated, and determine whether the portrait expression sets generated by the gallery application 612 include the portrait expression set of the corresponding chat person.
  • the tags of a portrait expression set may be, for example, the name of the person corresponding to the portrait expression set, a user-defined tag, or the communication contact information in the address book application associated with the portrait expression set.
  • when the determination result is yes, WeChat™ 611 can directly display the portrait expression set corresponding to the chat person, and step 605 is executed.
  • the gallery application 612 sends the portrait expression set corresponding to the chat person to the WeChat TM 611.
  • the gallery application 612 can respond to the application of WeChat TM 611 and send the portrait expression set of the corresponding chat person to WeChat TM 611 .
  • WeChat TM 611 displays the portrait expression set corresponding to the chat person.
  • in step 604, the portrait image of the corresponding chat person may not be stored in the gallery application 612, or the portrait image of the chat person may not have been processed into the corresponding portrait expression set; in that case, the gallery application 612 can send the remaining portrait expression sets to WeChat™ 611 for display according to the user's preference, that is, steps 607 and 608 are performed.
  • the gallery application 612 obtains the historical usage information of each portrait expression set.
  • the historical usage information of the portrait expression set can represent the user's usage preference for each portrait expression set. Among them, historical usage information has been introduced in detail previously and will not be described again here.
  • the gallery application 612 sends at least one portrait expression set that meets the preset conditions to WeChat TM 611.
  • the gallery application 612 sends at least part of the portrait expression sets it generated to WeChat™ 611, without the user having to manually create the portrait expressions or manually import them into WeChat™ 611, so the user's operation is simple.
  • WeChatTM 611 displays at least one received portrait expression set that meets the preset conditions.
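  The exchange in steps 601 to 608 can be condensed into a minimal sketch. The two applications are modeled here as plain Python objects; in the real system this would be inter-process communication between WeChat™ 611 and the gallery application 612, and every name and data structure below is an illustrative assumption.

```python
class GalleryApp:
    def __init__(self, expression_sets, usage_history):
        self.expression_sets = expression_sets  # {tag: [expressions]}
        self.usage_history = usage_history      # {tag: use count}

    def request_set(self, chat_person):
        """Steps 604-607: return the chat person's set when it is stored,
        otherwise fall back to the most-used sets (a preset condition)."""
        if chat_person in self.expression_sets:          # step 604: yes
            return {chat_person: self.expression_sets[chat_person]}
        ranked = sorted(self.usage_history, key=self.usage_history.get,
                        reverse=True)                    # steps 606-607
        return {tag: self.expression_sets[tag] for tag in ranked[:2]}

def on_expression_input(gallery, chat_person):
    """Steps 601-603 and 605/608: detect the input operation, obtain the
    chat person information, apply to the gallery, display what comes back."""
    return gallery.request_set(chat_person)

gallery = GalleryApp(
    {"Alice": ["alice_smile.png"], "Peter Pan": ["peter_laugh.png"]},
    {"Alice": 42, "Peter Pan": 17},
)
```

  A request for a stored chat person returns only that person's set; a request for an unknown chat person falls back to the highest-ranked sets, mirroring the branch into steps 606-608.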
  • when the user wants to obtain an emoticon package with specified text, the user can input the text he wants to specify and the portrait image to be used in making the emoticon, and the mobile phone 100 can automatically generate the corresponding expression based on the information input by the user.
  • the process will be further introduced below with reference to Figures 7 to 8d, taking the communication application as WeChat TM 611 and making the expression of the corresponding chat person as an example.
  • Figure 7 shows a schematic flowchart of an expression generation method provided by an embodiment of the present application.
  • FIGS 8a to 8e show interface change diagrams of some expression generation processes provided by embodiments of the present application.
  • the method includes:
  • WeChat TM 611 obtains the target emoticon text input by the user.
  • the target emoticon text is the content input by the user in the text input box of WeChat™ 611.
  • the display interface of the mobile phone 100 displays the chat interface 801 of WeChat TM 611, as shown in Figure 8a.
  • a text input box 811 of WeChat TM 611 is displayed in the chat interface 801 of WeChat TM 611.
  • the user can input the text content he wants to send or the target emoticon text corresponding to the target emoticon of the chatter in the text input box 811.
  • the chat interface 801 of WeChatTM 611 also includes an expression selection area 810. Among them, the expression selection area 810 displays expressions corresponding to the portrait expression set of the chatter, and an add button 812 .
  • WeChat TM 611 detects the user's expression adding operation, and obtains the corresponding chat person information in response to the expression adding operation.
  • the user can operate the corresponding position on the screen of the mobile phone 100 to trigger the expression adding button.
  • the WeChat TM 611 detects the user's expression adding operation.
  • the user can click the add button 812 in the chat interface 801 of WeChat TM 611.
  • the add button 812 is triggered, the WeChat 611 detects the user's expression adding operation.
  • WeChat TM 611 obtains the portrait image set of the corresponding chat person from the gallery application 612.
  • WeChat TM 611 detects the operation of adding an expression, it needs to obtain a portrait image of the corresponding chat person that can be made into an expression. Therefore, WeChat TM 611 will obtain a set of portrait images of the corresponding chat person in the gallery application 612.
  • WeChat TM 611 displays the portrait image set corresponding to the chat person.
  • WeChat™ 611 will display the obtained portrait image set of the corresponding chat person in a floating window above the chat interface 801 of WeChat™ 611, as shown in Figure 8b.
  • a portrait selection window 802 will be displayed above the display interface 801 of the WeChat TM 611.
  • the portrait selection window 802 may include a portrait selection area 813 and a "manual designation" button 814.
  • WeChatTM 611 detects the user's portrait designation operation, and determines the portrait image information corresponding to the portrait designation operation.
  • the portrait designation operation is for the user to select a portrait image that matches the target expression text from the set of portrait images displayed on WeChat TM 611.
  • the user can click, among the multiple portrait images of the corresponding chat person displayed in the portrait selection area 813, the portrait image that matches the target expression text; after selecting the portrait image matched to the target expression text, the user can click the "manual designation" button 814 to complete the portrait designation operation.
  • the portrait image information corresponding to the portrait specifying operation determined by WeChat 611 may be a portrait image.
  • the portrait image information corresponding to the portrait specifying operation determined by WeChatTM 611 may be other types of information that can characterize the portrait image, such as the name of the portrait image, the characteristics of the portrait image, etc. This application does not limit this.
  • WeChat TM 611 sends the determined portrait image information and target emoticon text to the gallery application 612.
  • the gallery application 612 generates a target expression based on the received portrait image information and target expression text.
  • the gallery application 612 will determine the corresponding portrait image of the chatting person, and combine it with the target expression text to generate the target expression.
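  How the gallery application might combine the portrait image with the target expression text can be sketched as follows. Instead of performing real image rendering, the sketch only computes the overlay metadata (text position, font size, color) from the image dimensions; every heuristic, name, and value here is an assumption for illustration, not the actual generation logic.

```python
def compose_target_expression(portrait, text):
    """Place the target expression text centred near the bottom of the portrait."""
    width, height = portrait["width"], portrait["height"]
    font_size = max(12, width // 10)  # scale the text with the image width
    return {
        "image": portrait["name"],
        "text": text,
        "font_size": font_size,
        # centred horizontally, a little above the bottom edge
        "position": (width // 2, height - 2 * font_size),
        "color": "white",
    }

expression = compose_target_expression(
    {"name": "alice_shocked.jpg", "width": 240, "height": 240}, "Shocked!!")
```

  In the described system the target position, font size, and text color are determined by analyzing the image content rather than by fixed rules like these.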
  • the gallery application 612 sends the target expression to WeChat TM 611.
  • WeChatTM 611 displays the target expression on the display interface.
  • WeChat TM 611 can add the target expression to the portrait expression set of the corresponding chat person in the chat interface of WeChat TM 611.
  • when performing the portrait designation operation, the user can also specify the font, font size, text color, etc. of the target expression text; correspondingly, the information sent by WeChat™ 611 to the gallery application 612 also includes the font, font size, text color, etc. of the target emoticon text.
  • the display interface of the mobile phone 100 displays the chat interface 803 of WeChat TM 611.
  • the chat interface 803 of WeChat™ 611 shown in Figure 8c is the same as the chat interface 801 of WeChat™ 611 shown in Figure 8a above, and includes a text input box 811 of WeChat™ 611, in which the user can input the target emoticon text "Shocked!!".
  • the chat interface 803 of WeChatTM 611 also includes an add button 812.
  • a portrait selection window 804 is displayed in a floating window mode.
  • the portrait selection window 804 may include a portrait selection area 813 and a "manual designation" button 814.
  • the user can select a portrait image that matches the target emoticon text "Shocked!!" in the portrait selection area 813.
  • the user selects image 815.
  • the user can click the "manual designation" button 814 to complete the selection of the portrait image.
  • WeChatTM 611 can determine the portrait image information of the image 815, and send the information and the target emoticon text "Shocked!!" to the gallery application 612.
  • the gallery application 612 may determine the portrait image corresponding to the target emoticon text based on the portrait image information, and produce the target emoticon text and the determined portrait image into the target emoticon.
  • the gallery application 612 can send the target expression to WeChat TM 611, and WeChat TM 611 will display it on its chat interface, as shown in Figure 8e.
  • WeChat TM 611 displays the chat interface 805 of WeChat TM 611.
  • the newly created target emoticon 821 is displayed in the emoticon selection area of the chat interface 805 of WeChat™ 611.
  • the target expression generated by the user by specifying the target expression text will be added to the portrait expression set of the corresponding chat person, and when WeChat™ 611 displays the portrait expression set of the corresponding chat person, the target expression will be displayed at the top position in the portrait expression set; it can be understood that the target expression generated by the user by specifying the target expression text better matches the user's current needs.
  • the expression generation process is introduced by taking the expression of the corresponding chat person as an example.
  • the expression of other portrait expression sets can also be added in the same way, and this application does not limit this.
  • FIG. 9 is a schematic flowchart of a method for automatically generating a portrait expression set provided in an embodiment of the present application.
  • the method includes:
  • the expression automatic generation method provided in the embodiments of the present application can be executed by a photo album application, such as the gallery application, to complete the production of the portrait expression sets.
  • obtaining the portrait image stored in the mobile phone 100 can be understood as the gallery application obtaining the portrait image data stored in its corresponding storage module.
  • clustering the portrait images means extracting features from all the portrait images stored in the mobile phone 100 and then classifying portrait images with similar features into the same category.
  • the clustering algorithm for clustering portrait images can be, for example, the K-Means clustering algorithm, the mean shift clustering algorithm, etc., and a suitable clustering algorithm can be selected according to the portrait images to be processed; this application does not limit this.
  • the portrait images of the same category obtained using the clustering algorithm have the same person's characteristics, that is, the portrait images of the same category are portrait images of the same person, and the portrait images belonging to the same category can be used as one portrait image set.
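  A toy K-Means run illustrates how portrait images with similar features end up in the same portrait image set. Real face features are high-dimensional embeddings produced by a feature extractor; the 2-D points, fixed iteration count, and seeding below are simplifications for illustration only.

```python
import math
import random

def kmeans(points, k, iterations=10, seed=0):
    """Minimal K-Means: assign each point to its nearest centroid, then
    recompute centroids as cluster means, for a fixed number of iterations."""
    random.seed(seed)
    centroids = random.sample(points, k)
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: math.dist(p, centroids[i]))
            clusters[nearest].append(p)
        centroids = [
            tuple(sum(dim) / len(cl) for dim in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return clusters

# two tight groups of "face features": one person near (0, 0), another near (5, 5)
features = [(0.0, 0.1), (0.2, 0.0), (5.0, 5.1), (5.2, 4.9)]
image_sets = kmeans(features, k=2)
```

  The two near-origin feature vectors land in one cluster and the two far vectors in the other, so each cluster plays the role of one person's portrait image set.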
  • the expression production model is a model trained on a large number of portrait images, the expressions corresponding to the portrait images, and the expression texts corresponding to the expression labels, and represents the correspondence between portrait images and expression texts. The expression production model can also analyze the portrait image to calculate the target position of the expression text in the portrait image, determine the font size and text color of the expression text based on the size and color at the target position, and set the correspondence between the expression text and the font according to the emotional tone of the expression text.
  • the expression production model includes a portrait sub-model, a text sub-model, and an expression generation sub-model.
  • the portrait sub-model is a sub-model established using computer vision (CV) technology.
  • the portrait sub-model can calculate the weight of various expressions of the portrait image.
  • corresponding expression tags can be set in advance in the portrait sub-model according to different expressions, such as smile, frown, laugh, etc.
  • for an input portrait image, the portrait sub-model can calculate the weight of each expression label, such as the smile weight and the frown weight, and output at least one expression label with the highest weight as the output result of the portrait sub-model.
  • the text sub-model is a sub-model, obtained through model training, that determines for preset emoticon text the weight of each expression label.
  • expression labels identical to those used for portrait expressions, such as smile, frown, and laugh, can be set in advance in the text sub-model; the text sub-model can then calculate, for input preset emoticon text, the weight of each expression label, and output at least one expression label with the highest weight as the output result of the text sub-model.
  • the expression generation sub-model matches the output results of the portrait sub-model and the text sub-model, determines the preset emoticon text corresponding to the portrait image, performs image processing on the portrait image by analyzing it, such as cropping, adding filters, mirroring, etc., and at the same time determines the target position, font size, font, text color, etc. of the preset emoticon text.
  • the matching process can be: comparing the output result of each portrait image from the portrait sub-model, that is, its expression labels and corresponding weights, with the output result of each preset emoticon text from the text sub-model, that is, its expression labels and corresponding weights, and determining that a portrait image and a preset emoticon text match when they have the same expression labels with similar weights.
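As one plausible reading of this matching step, both sub-model outputs can be treated as weight vectors over the shared label set, and a portrait is matched to the preset text whose weight vector is most similar. The sketch below uses cosine similarity as one hedged choice of "similar weights"; the labels, texts, and weights are hypothetical, and the actual sub-models are assumed.

```python
import math

LABELS = ["smile", "frown", "laugh"]  # shared expression labels of both sub-models

def cosine(a, b):
    """Cosine similarity between two label-weight vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def match_text(portrait_weights, text_weights_by_text):
    """Pick the preset emoticon text whose label weights best match the portrait's."""
    return max(text_weights_by_text,
               key=lambda t: cosine(portrait_weights, text_weights_by_text[t]))

# Hypothetical sub-model outputs: weights over [smile, frown, laugh].
portrait = [0.7, 0.1, 0.2]                       # portrait sub-model output for one image
texts = {"so happy": [0.8, 0.05, 0.15],          # text sub-model outputs for preset texts
         "not amused": [0.1, 0.8, 0.1]}
best = match_text(portrait, texts)               # smiling portrait pairs with "so happy"
```

Other similarity measures (e.g. taking only the top-weighted label from each side, as the surrounding text also suggests) would fit the same structure.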
  • face analysis technology can be used to analyze the portrait image, determine the position of the face in the image, and identify areas around the face with small color-block differences; for example, a blank area can be given priority and used as the location of the preset emoticon text.
  • the color of the preset emoticon text can be a color contrasting with the blank area, so as to highlight the emoticon text.
  • the shape and size of the area can be determined in order to determine the layout and font size of the preset emoticon text within the area.
  • the font of the emoticon text can be determined according to the emotional tone of the expression corresponding to the preset emoticon text; for example, for emoticon text corresponding to happiness, a round and cheerful font can be used.
  • the size of the region cropped from the portrait image for the portrait expression can be controlled according to the length of the preset emoticon text.
  • the cropped region can be appropriately enlarged to accommodate the preset emoticon text.
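The layout rules above (contrasting text color, font size fitted to the blank region) can be sketched as simple heuristics. This is an assumption-laden illustration: the blank-region detection is taken as given, the complement color stands in for "a contrasting color", and the one-character-per-cell sizing rule is a hypothetical simplification.

```python
def contrast_color(rgb):
    """Complement of the blank region's average color, so the text stands out."""
    r, g, b = rgb
    return (255 - r, 255 - g, 255 - b)

def layout_text(region_w, region_h, text):
    """Fit the font size (in px) to the region, assuming each character
    occupies roughly one square cell of the region's width."""
    if not text:
        return 0
    size_by_width = region_w // len(text)
    return min(size_by_width, region_h)

color = contrast_color((240, 240, 240))   # light blank area -> dark text
font_px = layout_text(200, 40, "hello")   # 200 // 5 = 40, capped at the region height
```

A real implementation would also account for the chosen font's metrics when measuring text width.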
  • a video file including human faces may also be obtained as material for making portrait expressions.
  • video frames including human faces in the video file can be extracted, and the extracted video frames can be input into the portrait sub-model as the acquired face images, and the weights of each expression label can be calculated.
  • video frames with similar weights of multiple expression tags can be combined to achieve the dynamic expression effect of portrait expressions.
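One way to read "video frames with similar weights of multiple expression tags can be combined" is to group consecutive frames whose label-weight vectors are close, and render each group as one dynamic expression. The sketch below uses L1 distance with a hypothetical tolerance; the weight vectors would come from the portrait sub-model described above.

```python
def group_frames(frame_weights, tol=0.2):
    """Group consecutive frames with similar expression-label weights into segments."""
    groups, current = [], [0]
    for i in range(1, len(frame_weights)):
        prev, cur = frame_weights[i - 1], frame_weights[i]
        dist = sum(abs(a - b) for a, b in zip(prev, cur))  # L1 distance between weight vectors
        if dist <= tol:
            current.append(i)       # same expression continues: extend the segment
        else:
            groups.append(current)  # expression changed: close the segment
            current = [i]
    groups.append(current)
    return groups

# Frames 0-2 hold a steady smile; frame 3 jumps to laughing.
weights = [[0.8, 0.2], [0.75, 0.25], [0.78, 0.22], [0.1, 0.9]]
segments = group_frames(weights)
```

Each multi-frame segment could then be exported as an animated (dynamic) portrait expression, and single-frame segments as static ones.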
  • the expressions in the portrait expression set may include dynamic portrait expressions or static portrait expressions, and this application does not limit this.
  • the output result of each portrait image from the portrait sub-model can be obtained, and at the same time the target emoticon text can be input into the text sub-model to output the weight of each corresponding expression label.
  • when the portrait images in the portrait image set are displayed on the chat interface 611 of WeChat™, the portrait images can be displayed in descending order of the matching degree between the target emoticon text and each portrait image, so that the user can quickly find the required portrait image among the multiple portrait images in the portrait expression set.
  • the mobile phone 100 can also optimize the expression production model in step 904 based on the generated target expression. Specifically, the mobile phone 100 can update the weight parameters of the portrait images and expression labels in the portrait sub-model, and update the weight parameters of the preset emoticon text and expression labels in the text sub-model. In some embodiments, if the target emoticon text input by the user does not belong to the preset emoticon text in the text sub-model, the target emoticon text can be added to the text sub-model, and the weight of each expression label for the target emoticon text can be determined.
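A minimal sketch of this optimization step, under stated assumptions: the text sub-model is reduced to a dictionary from emoticon text to label weights, an unseen target text is added with uniform weights, and the weight of the label of the expression the user actually generated is nudged upward. The update rule and learning rate are illustrative, not the patent's.

```python
def update_text_model(text_weights, target_text, chosen_label, labels, lr=0.1):
    """If the user's target text is unseen, add it with uniform label weights;
    then reinforce the weight of the label the user's generated expression used."""
    if target_text not in text_weights:
        text_weights[target_text] = {lab: 1.0 / len(labels) for lab in labels}
    w = text_weights[target_text]
    w[chosen_label] += lr          # reinforce the observed text-label association
    total = sum(w.values())        # renormalize so the weights stay comparable
    for lab in w:
        w[lab] /= total
    return text_weights

model = {}  # hypothetical text sub-model state: text -> {label: weight}
update_text_model(model, "lol", "laugh", ["smile", "frown", "laugh"])
# "lol" is added to the model, and its "laugh" weight is now the largest.
```

The portrait-side weight update described above would follow the same pattern, keyed by portrait image instead of text.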
  • Figure 10 shows a software structure block diagram of an electronic device 100 according to an embodiment of the present application.
  • the software system of the electronic device 100 may adopt a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture.
  • the embodiment of this application takes the Android system with a layered architecture as an example to illustrate the software structure of the electronic device 100 .
  • the layered architecture divides the software into several layers, and each layer has clear roles and division of labor.
  • the layers communicate through software interfaces.
  • the Android system is divided into four layers, from top to bottom: application layer, application framework layer, Android runtime and system libraries, and kernel layer.
  • the application layer can include a series of application packages.
  • the application package can include camera, gallery, calendar, call, map, navigation, WLAN, Bluetooth, music, video, short message and other applications.
  • the application framework layer provides an application programming interface (API) and programming framework for applications in the application layer.
  • the application framework layer includes some predefined functions.
  • the application framework layer can include window manager, task manager, phone manager, resource manager, notification manager, view system, etc.
  • a window manager is used to manage window programs.
  • the window manager can obtain the display size, determine whether there is a status bar, lock the screen, capture the screen, etc.
  • the window manager can obtain the touch event corresponding to the user's click operation, including the application information corresponding to the window, the touch position, etc., to match the corresponding display task and display the corresponding interface, such as the interfaces displayed in the above steps.
  • the task manager is used to cooperate with the window manager to retrieve the task content corresponding to the user's sliding operation, such as display tasks that need to be controlled by the window manager.
  • the task manager retrieves the content of the corresponding display task and sends it to the window manager for execution, thereby realizing the process of the electronic device 200 displaying the corresponding interface.
  • Content providers are used to store and retrieve data and make this data accessible to applications.
  • the above data can include videos, images, audio, calls made and received, browsing history and bookmarks, phone book, etc.
  • the resource manager provides various resources to applications, such as localized strings, icons, pictures, layout files, video files, etc.
  • the notification manager allows applications to display notification information in the status bar, which can be used to convey notification-type messages and can automatically disappear after a short stay without user interaction.
  • the notification manager is used to notify download completion, message reminders, etc.
  • notifications from the notification manager can also appear in the status bar at the top of the system in the form of charts or scroll-bar text, such as notifications of applications running in the background, or appear on the screen in the form of dialog windows; for example, text information is prompted in the status bar, a beep sounds, the electronic device vibrates, or the indicator light flashes.
  • the view system includes visual controls, such as controls that display text, controls that display pictures, etc.
  • a view system can be used to build applications.
  • the display interface can be composed of one or more views.
  • a display interface including a text message notification icon may include a view for displaying text and a view for displaying pictures.
  • the Android runtime includes core libraries and a virtual machine.
  • the Android runtime is responsible for the scheduling and management of the Android system.
  • the core library contains two parts: one part is the functions that the Java language needs to call, and the other part is the core library of Android.
  • the application layer and application framework layer run in virtual machines.
  • the virtual machine converts the Java files of the application layer and the application framework layer into binary files and executes them.
  • the virtual machine is used to perform object life cycle management, stack management, thread management, security and exception management, and garbage collection and other functions.
  • System libraries can include multiple functional modules. For example: surface manager (surface manager), media libraries (Media Libraries), 3D graphics processing libraries (for example: OpenGL ES), 2D graphics engines (for example: SGL), etc.
  • the surface manager is used to manage the display subsystem and provides the fusion of 2D and 3D layers for multiple applications.
  • the media library supports playback and recording of a variety of commonly used audio and video formats, as well as static image files, etc.
  • the media library can support a variety of audio and video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
  • the 3D graphics processing library is used to implement 3D graphics drawing, image rendering, composition, and layer processing.
  • 2D Graphics Engine is a drawing engine for 2D drawing.
  • the kernel layer is the layer between hardware and software.
  • the kernel layer contains at least display driver, touch driver, and sensor driver.
  • the present disclosure also relates to means for performing the operations described herein.
  • This apparatus may be specially constructed for the required purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer.
  • Such computer programs may be stored on a computer-readable medium such as, but not limited to, any type of disk, including floppy disks, optical disks, CD-ROMs, and magneto-optical disks, read-only memory (ROM), random access memory (RAM), EPROM, EEPROM, magnetic or optical cards, application specific integrated circuits (ASICs), or any type of medium suitable for storing electronic instructions, each of which may be coupled to a computer system bus.
  • the computers referred to in the specification may include a single processor or may employ an architecture involving multiple processors for increased computing power.

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • General Business, Economics & Management (AREA)
  • Human Computer Interaction (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present application relates to an interaction method, a device, and a medium. The method comprises: an electronic device displaying a first interface of a first application, wherein the first interface comprises a first session window of a first contact; detecting an expression input operation of a user on the first session window of the first interface; and, corresponding to determining that a first portrait expression set related to the first contact exists among a plurality of portrait expression sets stored in the electronic device, displaying at least one portrait expression of the first portrait expression set in the first session window. According to the interaction method of the present application, when the user needs to input an expression, the electronic device can display, in response to the user's expression input operation, the portrait expression set of the corresponding contact required by the user; the user can operate the screen of the electronic device to select the required expression from the portrait expression set of the corresponding contact, without manually making and importing expressions of the corresponding contact, so the user operation is simple and the user experience is improved.

Description

Interaction method, device and medium

This application claims priority to the Chinese patent application filed with the China Patent Office on April 12, 2022, with application number 202210383986.X and entitled "Interaction method, device and medium", the entire contents of which are incorporated by reference into this application.

Technical Field

This application relates to the technical field of terminal devices, and in particular to an interaction method, a device, and a medium.

Background Art

With the popularization of terminal devices such as mobile phones, tablets, and computers and the development of social software, expressions (including image expressions and text expressions) are used more and more widely. Users can enliven the atmosphere of communication by turning photos of people they are familiar with into expressions.

In current social software, after a user makes a portrait expression from a portrait photo of a corresponding contact, when chatting with that contact the user needs to manually add the portrait expression to a favorites expression set in the social software, and then search among the many expressions displayed in the favorites expression set to find the needed expression of the corresponding contact and send it to that contact. This expression interaction scheme requires the user to perform multiple operations to add and select the portrait expression of the corresponding contact, and when the favorites expression set contains many expressions it is difficult for the user to find the portrait expression of the corresponding contact, resulting in a poor experience.

Summary of the Invention

Embodiments of the present application provide an interaction method, a device, and a medium, which reduce the user's operations of making and searching for expressions and help improve the user experience.

In a first aspect, embodiments of the present application provide an interaction method applied to an electronic device, including: displaying a first interface of a first application on the electronic device, wherein the first interface includes a first session window of a first contact; detecting an expression input operation of a user on the first session window of the first interface; and, corresponding to determining that a first portrait expression set related to the first contact exists among a plurality of portrait expression sets stored in the electronic device, displaying at least one portrait expression of the first portrait expression set in the first session window.

It can be understood that the expression input operation is an operation by which the user triggers expression input in the first session window, for example an operation of the user tapping an expression input button in the first session window. As another example, the first session window may include a text input box, and the operation may be the user entering text in the text input box of the first session window and tapping the expression input button.

With the interaction method provided by this application, the electronic device can display, in response to the user's expression input operation, the portrait expression set of the corresponding contact required by the user; the user can then operate the screen of the electronic device to select the required expression from the portrait expression set of the corresponding contact, without manually making and importing expressions of the corresponding contact, so the user operation is simple and the experience is good.

In a possible implementation of the above first aspect, association information representing a correspondence between at least one contact in the first application and the plurality of portrait expression sets is preset in the electronic device.

It can be understood that the association information associates the portrait expression sets in the electronic device with the contacts in the first application in response to the user's association operation. In some embodiments, the association can be established through the contacts in the address book application of the electronic device, for example by associating each portrait expression set and each contact in the first application with the corresponding contact in the address book application; the first portrait expression set can then be determined by accessing the address book application. In some embodiments, the first portrait expression set can be determined by adding the same name tag or other custom tag to each portrait expression set and to each contact in the first application.

In a possible implementation of the above first aspect, the first session window includes a session area and an expression area, and displaying at least one portrait expression of the first portrait expression set in the first session window includes: displaying a portrait expression label of the first portrait expression set in the expression area, wherein the portrait expression label is determined by the electronic device according to the portrait expressions in the first portrait expression set; and storing and displaying at least one portrait expression of the first portrait expression set under the portrait expression label.

In a possible implementation of the above first aspect, the portrait expression label is the portrait avatar of the first contact.

It can be understood that the portrait avatar can be any image in the first portrait expression set that represents the facial features of the first contact; thus, when the electronic device displays the portrait avatars of multiple contacts, the user can quickly determine through the portrait avatars which contact a portrait expression set belongs to.

In a possible implementation of the above first aspect, the method further includes: corresponding to determining that no first portrait expression set related to the first contact exists, displaying at least one portrait expression of a second portrait expression set in the first session window, wherein the second portrait expression set is a portrait expression set among the plurality of portrait expression sets that meets a preset condition.

It can be understood that when no portrait expression of the first contact exists in the electronic device, the electronic device can provide the user with portrait expressions of other contacts to choose from.

In some embodiments, when portrait expressions of the first contact exist in the electronic device, the electronic device can also display the portrait expressions of other contacts on the display interface in response to a query operation of the user, and the user can further choose to interact using the portrait expressions of a common friend of the user and the first contact, making chatting more interesting.

In a possible implementation of the above first aspect, the second portrait expression set is determined in the following manner: obtaining historical usage information of the plurality of portrait expression sets; and determining that the portrait expression set corresponding to first historical usage information meeting the preset condition is the second portrait expression set.

In a possible implementation of the above first aspect, the historical usage information includes historical usage frequency, and determining that the portrait expression set corresponding to first historical usage information meeting the preset condition is the second portrait expression set includes: determining that the portrait expression set corresponding to a first historical usage frequency higher than a preset frequency threshold is the second portrait expression set; or determining that at least one portrait expression set with the highest historical usage frequency is the second portrait expression set.

In a possible implementation of the above first aspect, the first application includes multiple contacts, and the plurality of portrait expression sets are generated in the following manner: obtaining multiple portrait images stored in the electronic device; dividing the at least one portrait image corresponding to the same contact into the same portrait image set; performing image processing on the at least one portrait image in each portrait image set to obtain the portrait expression set corresponding to each portrait image set; and associating each portrait expression set with each contact in the first application.

In a possible implementation of the above first aspect, performing image processing on the at least one portrait image in each portrait image set to obtain the portrait expression set corresponding to each portrait image set includes: using an expression production model on the at least one portrait image in each portrait image set to generate the portrait expression set corresponding to each portrait image set.

In a possible implementation of the above first aspect, the expression production model includes: a portrait sub-model that determines the weight of each portrait image for each preset expression; a text sub-model that determines the weight of preset emoticon text for each preset expression; and an expression generation sub-model that, according to the output results of the portrait sub-model and the text sub-model, generates the corresponding portrait expression from each portrait image and the matched preset emoticon text.

In a possible implementation of the above first aspect, the method further includes: detecting a text input operation of the user on the first session window of the first interface, and receiving the text input by the user; determining at least one target portrait image matching the text in the portrait image set corresponding to the first portrait expression set; displaying, by the electronic device, a second interface of the first application, the second interface including the at least one target portrait image; detecting an image selection operation of the user on at least one portrait image in the second interface, and generating a target portrait expression according to the text and the portrait image corresponding to the image selection operation; and displaying the target portrait expression in the first session window.

It can be understood that when the portrait expressions in the electronic device do not meet the user's needs, the electronic device can automatically generate a portrait expression according to the text input by the user and the selected portrait image, without the user making it manually, so the operation is simple.

In a possible implementation of the above first aspect, the first application includes at least one of an instant messaging application, a text messaging application, a conferencing application, and a social application.

It can be understood that instant messaging applications can be, for example, WeChat™, QQ™, etc., and social applications can be, for example, Douyin™, Weibo™, Xiaohongshu™, etc. In some embodiments, the first application can also be another application that enables users to transmit image-type information to other users, such as Taobao™, as long as it can implement a chat function between users; this application does not limit this.

In a second aspect, embodiments of the present application provide an electronic device, including: one or more processors; and one or more memories storing one or more programs which, when executed by the one or more processors, cause the electronic device to perform the above interaction method.

In a third aspect, embodiments of the present application provide a computer-readable storage medium having instructions stored thereon which, when executed on a computer, cause the computer to perform the above interaction method.

In a fourth aspect, embodiments of the present application provide a computer program product, including a computer program/instructions which, when executed by a processor, implement the above interaction method.

Brief Description of the Drawings

FIG. 1a to FIG. 1c are schematic diagrams of interface changes during some expression interaction processes provided by embodiments of the present application;

FIG. 2 is a schematic diagram of the hardware structure of an electronic device provided by an embodiment of the present application;

FIG. 3a to FIG. 3e are schematic diagrams of interface changes of a mobile phone after a switch "Allow access to the gallery to generate emoticon packages" is added, provided by embodiments of the present application;

FIG. 4 is a schematic flowchart of an interaction method provided by an embodiment of the present application;

FIG. 5a to FIG. 5b are schematic diagrams of interface changes during some expression interaction processes provided by embodiments of the present application;

FIG. 6 is a schematic flowchart of an interaction method provided by an embodiment of the present application;

FIG. 7 is a schematic flowchart of an expression generation method provided by an embodiment of the present application;

FIG. 8a to FIG. 8e are schematic diagrams of interface changes during the generation process of some expressions provided by embodiments of the present application;

FIG. 9 is a schematic flowchart of a method for automatically generating a portrait expression set provided by an embodiment of the present application;

FIG. 10 is a schematic diagram of the software structure of an electronic device provided by an embodiment of the present application.

Detailed Description of Embodiments

To solve the above problems of cumbersome operations and a poor experience when a user processes images into expressions and sends them to a chat person, this application provides an interaction method. The method includes: the electronic device divides portrait images of the same person among the multiple portrait images it stores into one portrait image set, and generates, for each portrait image set, a portrait expression set usable in a communication application; after generating the portrait expression sets, the electronic device pre-associates each portrait expression set with each chat person in the communication application, where a portrait expression set includes at least one expression. When the electronic device detects an expression input operation of the user in the communication application, it obtains the information of the contact corresponding to the expression input operation. If a portrait expression set of the corresponding contact is stored in the electronic device, the obtained portrait expression set of the corresponding contact is displayed on the display interface of the communication application for the user to select.

It can be understood that pre-association means that the electronic device determines, from the user's input, which chat contact in the communication application each portrait expression set corresponds to. For example, the name of the corresponding person, or another custom tag, can be added to each portrait expression set, while each chat contact in the communication application is annotated with a name or custom tag; matching on the same name or tag then establishes the pre-association between the portrait expression sets and the chat contacts. As another example, each portrait expression set can be mapped to a contact in the electronic device's address book application, while the communication application is allowed to access the address book application; the communication application can then determine, from its own chat contact information, the corresponding contact in the address book application, thereby pre-associating each portrait expression set with each chat contact in the communication application.
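The first example above, matching on a shared name or custom tag, can be sketched as below. The function name and dictionary shapes are hypothetical; the embodiment does not fix a data model.

```python
# Hypothetical sketch of tag-based pre-association: an expression set
# and a chat contact are linked when they carry the same name or tag.

def associate_by_tag(expression_sets, chat_contacts):
    """Return {contact id: expression set id} for matching tags.

    expression_sets: {set_id: tag}, e.g. the person's name on the set.
    chat_contacts:   {contact_id: tag}, e.g. the remark in the app.
    """
    tag_to_set = {tag: set_id for set_id, tag in expression_sets.items()}
    return {
        contact_id: tag_to_set[tag]
        for contact_id, tag in chat_contacts.items()
        if tag in tag_to_set
    }


sets = {"set-1": "Alice", "set-2": "Bob"}
contacts = {"wx_001": "Alice", "wx_002": "Carol"}
print(associate_by_tag(sets, contacts))  # only the shared tag "Alice" matches
```

The second example, routing through the address book application, would replace the tag table with a lookup against address book contact records, but the matching logic is analogous.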

In some embodiments, the electronic device can use an expression production model to turn multiple portrait images into portrait expression sets. The expression production model may be a model trained on a large number of portrait images, their corresponding expression labels, and the preset text corresponding to each expression label; the model learns the correspondence between portrait images and preset text, and combines a portrait image with its corresponding preset text to produce an expression. The preset text may be the text commonly paired with expressions.
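The production step can be sketched minimally as below. The trained model itself is abstracted into a `predict_label` callable, and the label/text table and all names are illustrative assumptions rather than the embodiment's actual model.

```python
# Minimal sketch of the production step: a portrait image, classified
# with an expression label, is combined with the preset text for that
# label. predict_label stands in for the trained expression production
# model; the table below is an assumed example.

PRESET_TEXT = {          # preset captions commonly paired with expressions
    "laugh": "Hahaha",
    "cry": "So sad...",
    "surprise": "What?!",
}


def make_expression(image_path, predict_label):
    """Compose one expression from a portrait image.

    predict_label: callable mapping an image path to an expression label.
    """
    label = predict_label(image_path)
    caption = PRESET_TEXT.get(label, "")
    # In a real implementation the caption would be rendered onto the
    # image; here we only return the pairing.
    return {"image": image_path, "label": label, "caption": caption}


expr = make_expression("alice_01.jpg", lambda _: "laugh")
print(expr)
```

Rendering the caption onto the image (for example with an image library) is left out, since the embodiment only requires that image and preset text be combined.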

With the interaction method provided by this application, the electronic device can automatically produce corresponding portrait expressions from the portrait images it stores. When the user needs to input an expression, the electronic device can, in response to the user's expression input operation, display the portrait expression set of the corresponding contact; the user can then operate the screen of the electronic device and select the desired expression from that set. The user no longer needs to manually create and import expressions for each contact, so the operation is simple and the experience is good.

In addition, in some embodiments, if no portrait expression set for the corresponding contact is stored on the electronic device, the electronic device can display, on its interface, at least one of the most frequently used portrait expression sets, based on the user's usage frequency of each portrait expression.
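This fallback can be sketched as a simple ranking by usage count. The function name and the shape of the usage data are assumptions for illustration.

```python
# Hedged sketch of the fallback: when no portrait expression set exists
# for the current contact, surface the most frequently used sets.

def most_used_sets(usage_counts, top_n=1):
    """Return the ids of the top_n most frequently used expression sets.

    usage_counts: {set_id: number of times the user sent from that set}.
    """
    ranked = sorted(usage_counts.items(), key=lambda kv: kv[1], reverse=True)
    return [set_id for set_id, _ in ranked[:top_n]]


usage = {"alice_set": 12, "bob_set": 30, "carol_set": 5}
print(most_used_sets(usage, top_n=2))  # the two most-used sets
```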

In some other embodiments, the portrait expression sets on the electronic device may be generated by a cloud server: after the electronic device automatically backs up its stored portrait images to the cloud server, the server groups the backed-up portrait images of the same person into one portrait image set, generates for each portrait image set a portrait expression set usable in communication applications, and sends the generated portrait expression sets to the electronic device. It can be understood that the production of portrait expression sets may be performed either by the electronic device or by a cloud server.
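Whether performed on-device or by the cloud server, the grouping step, putting portrait images of the same person into one portrait image set, can be sketched as below. `identify_person` stands in for a face-recognition component and is an assumption; the embodiment does not specify the recognition method.

```python
# Sketch of grouping stored portraits by person. identify_person is a
# placeholder for whatever face-recognition component assigns a person
# identity to each image.

from collections import defaultdict


def group_by_person(image_paths, identify_person):
    """Return {person id: [image paths]} for the stored portrait images."""
    groups = defaultdict(list)
    for path in image_paths:
        groups[identify_person(path)].append(path)
    return dict(groups)


fake_ids = {"a1.jpg": "alice", "a2.jpg": "alice", "b1.jpg": "bob"}
groups = group_by_person(["a1.jpg", "a2.jpg", "b1.jpg"], fake_ids.get)
print(groups)  # one portrait image set per person
```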

The above interaction method is introduced below with reference to the accompanying drawings, taking a mobile phone as an example of the electronic device 100.

For example, Figure 1 shows the interface changes of the interaction method provided by an embodiment of this application. As shown in Figure 1a, the mobile phone 100 displays a chat interface 101 containing an expression input button 111; the button 111 may be the smiley-face graphic shown in Figure 1a, or another graphic denoting expression input. To select an expression, the user can tap the expression input button in the chat interface 101 of the mobile phone 100, i.e., perform operation ①. In this process the mobile phone 100 receives operation ①, i.e., the user's expression input operation, and can, in response, obtain the information of the chat contact in the chat interface 101. Here, the chat contact obtained by the phone (the "corresponding contact" above) is Alice. If a portrait expression set for the chat contact Alice is stored on the mobile phone 100, the phone can retrieve it and display the expression selection interface 102 shown in Figure 1b.

As shown in Figure 1b, in response to operation ①, the mobile phone 100 displays the expression selection interface 102, which contains an expression selection area 112 and an expression set selection area 113. The expression selection area 112 displays the expressions of the currently selected expression set, and the expression set selection area 113 displays the selectable expression sets. After the user performs operation ①, the mobile phone 100 can show the communication application's default expression set in the expression selection area 112, with the default expression button 114 triggered. The portrait expression set of the chat contact Alice, obtained in response to operation ①, can be displayed in the expression set selection area 113 as a portrait expression button. The portrait expression button may be rendered from the cover of the portrait image set underlying the portrait expression set. In Figure 1b, for example, Alice's portrait expression set is displayed as the chat contact expression button 115. The user can also tap the expression hide button 116 in Figure 1b to access the portrait expression sets of other chat contacts.

In some embodiments, when displaying portrait expression buttons in the expression set selection area 113, the mobile phone 100 may order them according to whether the corresponding expression set is the portrait expression set of the current chat contact, and according to the usage frequency of each portrait expression set.
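This ordering rule can be expressed as a compound sort key: the current contact's own set first, then the remaining sets by usage frequency. The tuple layout is an assumption made for the sketch.

```python
# Sketch of the button ordering described above. Each set is modeled as
# (owner, usage_count); the current contact's set sorts first, the rest
# sort by descending usage frequency.

def order_expression_buttons(sets, current_contact):
    """sets: list of (set_owner, usage_count) tuples."""
    return sorted(
        sets,
        # False < True, so sets owned by current_contact come first;
        # -usage_count puts higher frequencies earlier within a group.
        key=lambda s: (s[0] != current_contact, -s[1]),
    )


buttons = [("bob", 30), ("alice", 12), ("carol", 5)]
print(order_expression_buttons(buttons, "alice"))
```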

When the user wants to send an expression automatically generated by the mobile phone 100 from stored portrait images, for example, to send the chat contact Alice an expression from Alice's own portrait expression set, the user can tap the chat contact expression button 115, i.e., perform operation ②. In response to operation ②, the mobile phone 100 displays the expression selection interface 103 on its display, as shown in Figure 1c.

As shown in Figure 1c, in the expression set selection area 113 of the expression selection interface 103, the chat contact expression button 115 is triggered; at this point the expression selection area 112 can display at least one expression from the automatically generated portrait expression set of the chat contact Alice.

It can be understood that when the user wants to send a chat contact an expression of that contact, there is no longer any need to manually process the contact's image into an emoticon package. The mobile phone 100 can automatically generate each chat contact's portrait expression set from the multiple portrait images it stores and display it on screen; the user simply taps the chat contact expression button 115 to select an expression of the corresponding contact. Expression-based interaction thus becomes simpler to operate and more pleasant to use.

It can be understood that the interaction method provided by the embodiments of this application is applicable to electronic devices including, but not limited to, mobile phones, portable computers, laptop computers, desktop computers, tablet computers, head-mounted displays, mobile e-mail devices, in-vehicle devices, portable game consoles, reader devices, televisions with one or more processors embedded or coupled therein, or other electronic devices capable of accessing a network.

By way of example, Figure 2 shows a schematic diagram of the hardware structure of an electronic device.

As shown in Figure 2, the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a sensor module 180, a display screen 190, and so on. The sensor module 180 may include a pressure sensor 180A, an acceleration sensor 180E, a touch sensor 180K, etc.

It can be understood that the structure illustrated in this embodiment does not constitute a specific limitation on the electronic device 100. In other embodiments of this application, the electronic device 100 may include more or fewer components than shown, combine certain components, split certain components, or arrange the components differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.

The processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), among others. The different processing units may be independent devices or may be integrated into one or more processors. The controller can generate operation control signals based on instruction opcodes and timing signals, completing the control of instruction fetching and execution. The processor 110 may also be provided with a memory for storing instructions and data. In this embodiment, the instructions and data for executing the interaction method of this application can be stored in this memory for the processor 110 to call; through the controller, the processor 110 can control the execution of each step of the interaction method. The specific implementation is described in detail below and is not repeated here.

In some embodiments, the processor 110 may include one or more interfaces, such as an inter-integrated circuit (I2C) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface.

The I2C interface is a bidirectional synchronous serial bus comprising a serial data line (SDA) and a serial clock line (SCL). In some embodiments, the processor 110 may contain multiple sets of I2C buses and may couple the touch sensor 180K and other peripherals through different I2C bus interfaces. For example, the processor 110 can be coupled to the touch sensor 180K through an I2C interface, so that the processor 110 and the touch sensor 180K communicate over the I2C bus interface to implement the touch function of the electronic device 100.

The MIPI interface can be used to connect the processor 110 to peripheral devices such as the display screen 190. MIPI interfaces include the camera serial interface (CSI), the display serial interface (DSI), and so on. The processor 110 and the display screen 190 communicate through the DSI interface to implement the display function of the electronic device 100.

The GPIO interface can be configured in software and can be configured to carry control signals or data signals. In some embodiments, the GPIO interface may be used to connect the processor 110 to the display screen 190, the sensor module 180, and so on. The GPIO interface can also be configured as an I2C interface, a MIPI interface, etc.

It can be understood that the interface connection relationships between the modules illustrated in this embodiment are only schematic and do not constitute a structural limitation on the electronic device 100. In other embodiments of this application, the electronic device 100 may also adopt interface connection methods different from those of the above embodiments, or a combination of multiple interface connection methods.

The electronic device 100 implements its display function through the GPU, the display screen 190, the application processor, and so on. The GPU is a microprocessor for image processing and connects the display screen 190 to the application processor. The GPU performs mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or change window display information.

The display screen 190 is used to display images, videos, and the like, and includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, quantum dot light-emitting diodes (QLED), etc. In some embodiments, the electronic device 100 may include 1 or N display screens 190, where N is a positive integer greater than 1.

The external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, to extend the storage capacity of the electronic device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement data storage, for example, saving music and video files on the external memory card.

The internal memory 121 may be used to store computer-executable program code, which includes instructions. The internal memory 121 may include a program storage area and a data storage area. The program storage area can store the operating system and the application programs required by at least one function (such as a sound playback function or an image playback function). The data storage area can store data created during use of the electronic device 100 (such as audio data or a phone book). In addition, the internal memory 121 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or universal flash storage (UFS). The processor 110 executes the various functional applications and data processing of the electronic device 100 by running the instructions stored in the internal memory 121 and/or the instructions stored in the memory provided in the processor.

In this embodiment, the internal memory 121 can store the execution instructions that implement the interaction method of this application, for the processor 110 to call. By implementing this interaction method, the electronic device 100 automatically retrieves the stored portrait expression set of the corresponding contact when the user performs an expression input operation, without requiring the user to import it manually, improving the user experience.

The pressure sensor 180A is used to sense pressure signals and can convert them into electrical signals. In some embodiments, the pressure sensor 180A may be disposed on the display screen 190. There are many types of pressure sensors 180A, such as resistive, inductive, and capacitive pressure sensors. A capacitive pressure sensor may include at least two parallel plates of conductive material; when a force acts on the pressure sensor 180A, the capacitance between the electrodes changes, and the electronic device 100 determines the intensity of the pressure from the change in capacitance. When a touch operation acts on the display screen 190, the electronic device 100 detects the strength of the touch operation via the pressure sensor 180A. The electronic device 100 may also calculate the touched position from the detection signal of the pressure sensor 180A.

The acceleration sensor 180E can detect the magnitude of the acceleration of the electronic device 100 in various directions (generally along three axes). When the electronic device 100 is stationary, the magnitude and direction of gravity can be detected. It can also be used to identify the device's posture and is used in applications such as landscape/portrait switching and pedometers.

The touch sensor 180K is also called a "touch device". The touch sensor 180K can be disposed on the display screen 190; the touch sensor 180K and the display screen 190 together form a touch screen, also called a "touchscreen". The touch sensor 180K detects touch operations on or near it and can pass the detected touch operation to the application processor to determine the touch event type. Visual output related to the touch operation can be provided through the display screen 190. In this embodiment, for example, the touch screen formed by the touch sensor 180K and the display screen 190 can detect the user's tap operations and display the corresponding interface changes; for example, when the expression acquisition switch is tapped, a floating window pops up over the current display interface, as described in detail below and not repeated here. In other embodiments, the touch sensor 180K may also be disposed on the surface of the electronic device 100, at a location different from that of the display screen 190.

Based on the structure of the electronic device 100 shown in Figure 2, the implementation of the interaction method of the embodiments of this application is described in detail below with reference to the accompanying drawings, taking a mobile phone as an example of the electronic device 100.

First, it should be noted that in some embodiments, for ease of operation, an emoticon package acquisition switch can be added to the operation interface of the communication application. By choosing whether to turn this switch on, the user chooses whether the communication application may obtain the emoticon packages automatically generated on the mobile phone 100.

For example, as shown in Figure 3a, take WeChat™ as the communication application. The user operates the mobile phone 100 to open WeChat™, whose interface 301 is displayed on the phone's screen. The user can then tap the settings button 311 to open the WeChat™ settings interface 302, as shown in Figure 3b. The WeChat™ settings interface 302 contains various settings, among them the added emoticon package acquisition switch, for example the "Allow emoticon packages generated by the Gallery" switch 312 in the settings interface 302. The user can tap the "Allow emoticon packages generated by the Gallery" switch 312 in the WeChat™ settings interface 302 to enable WeChat™ to obtain the emoticon packages automatically generated on the mobile phone 100.

Further, in some embodiments, when the user taps the "Allow emoticon packages generated by the Gallery" switch 312 in the WeChat™ settings interface 302, a floating window 303 pops up over the settings interface 302, as shown in Figure 3c. The floating window 303 displays the text "Allow the Gallery to automatically generate emoticon packages?" together with an "OK" button 321 and a "Cancel" button 322. The user can tap the "OK" button 321 to allow the Gallery to automatically generate emoticon packages, after which WeChat™ can obtain the Gallery's automatically generated emoticon packages and use them for expression interaction between the user and the corresponding contact. After the user taps the "OK" button 321, the floating window 303 disappears and only the WeChat™ settings interface 304 is shown on the screen of the mobile phone 100, as in Figure 3d. At this point, the "Allow emoticon packages generated by the Gallery" switch 312 is turned on in the display interface of the mobile phone 100.

The user can instead tap the "Cancel" button 322 in the floating window 303 to abandon enabling the Gallery's automatic generation of emoticon packages. In that case, the mobile phone 100 assumes by default that the user has given up turning on the "Allow emoticon packages generated by the Gallery" switch 312: after the user taps the "Cancel" button 322, the floating window 303 disappears, the display of the mobile phone 100 still shows the WeChat™ settings interface 302, and the "Allow emoticon packages generated by the Gallery" switch 312 remains off.

It can be understood that, in this embodiment, the "Allow emoticon packages generated by the Gallery" switch 312 can be turned on only after the Gallery has been authorized to automatically generate emoticon packages. In some embodiments, software other than the Gallery may perform the automatic generation of emoticon packages; correspondingly, the emoticon package acquisition switch may also be presented as another type of switch. This application imposes no limitation in this respect.

In some embodiments, after the user taps the "Allow emoticon packages generated by the Gallery" switch 312 in the WeChat™ settings interface 302 shown in Figure 3b, the display of the mobile phone 100 jumps from the WeChat™ settings interface 302 to the Gallery settings interface 305; that is, after the tap, the phone displays the Gallery settings interface 305, as shown in Figure 3e. There, the user can turn on the "Allow automatic generation of emoticon packages" switch 331. Once the user turns on the switch 331, the display jumps back to the WeChat™ settings interface, in which the "Allow emoticon packages generated by the Gallery" switch 312 is now on; the display of the mobile phone 100 is then as shown in Figure 3d.

It can be understood that, in some embodiments, the emoticon package acquisition switch may also be turned on through other forms of interaction with the user; this application imposes no limitation in this respect.

After the emoticon package acquisition switch has been turned on through the above operations, the interaction method provided by the embodiments of this application is further introduced below with reference to Figure 4.

Figure 4 is a schematic flowchart of an interaction method provided by an embodiment of this application.

As shown in Figure 4, the interaction method provided by this embodiment, applied to the mobile phone 100, includes:

401: Detect the user's expression input operation in the communication application, and obtain the contact information corresponding to the expression input operation. Here, the display of the mobile phone 100 shows the chat interface of the communication application.

It can be understood that the communication application may be an instant messaging application such as WeChat™ or QQ™, a social network platform with a chat function such as Douyin™, Weibo™, or Xiaohongshu™, or an SMS or conferencing application, and the communication application can support sending and receiving image files, i.e., it has an expression interaction function. The image files may be static images or animated images.

可以理解,用户的表情输入操作可以为用户操作手机100的屏幕,产生触发事件,触发事件与用户操作的位置、用户的操作方式相关。It can be understood that the user's expression input operation can generate a trigger event for the user to operate the screen of the mobile phone 100, and the trigger event is related to the location of the user's operation and the user's operation method.

例如,用户的表情输入操作可例如用户点击如图1a中的表情输入按钮111,即用户执行操作①,手机100检测到微信TM的聊天界面中表情输入按钮111被触发,即可认为检测到了用户的表情输入操作,进而触发表情输入操作的对应聊天人信息的获取。For example, the user's expression input operation can be, for example, the user clicks the expression input button 111 as shown in Figure 1a, that is, the user performs operation ①, and the mobile phone 100 detects that the expression input button 111 in the chat interface of WeChat TM is triggered, which can be considered as detecting the user. The emoticon input operation triggers the acquisition of the chatter information corresponding to the emoticon input operation.

可以理解,手机100获取表情输入操作的对应聊天人信息,为手机100获取表情输入操作所处的显示页面中,与用户进行聊天的对应聊天人信息。其中获取的对应聊天人信息可以包括对应聊天人的用户名、账号以及用户对对应聊天人的备注等。It can be understood that the mobile phone 100 obtains the chat person information corresponding to the expression input operation, so that the mobile phone 100 obtains the corresponding chat person information for chatting with the user in the display page where the expression input operation is performed. The obtained corresponding chat person information may include the corresponding chat person's user name, account number, and the user's remarks about the corresponding chat person, etc.

例如,通讯应用为微信TM,则对应聊天人信息可以例如对应聊天人的昵称、用户对对应聊天人的备注、手机号、微信号等。可以理解,通讯应用获取到的对应聊天人信息可以用于将对应聊天人与手机100中存储的人像表情集进行匹配。For example, if the communication application is WeChatTM , the corresponding chat person information may include the corresponding chat person's nickname, the user's notes about the corresponding chat person, mobile phone number, WeChat ID, etc. It can be understood that the corresponding chat person information obtained by the communication application can be used to match the corresponding chat person with the portrait expression set stored in the mobile phone 100 .
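The matching described here — comparing the obtained chat partner information against the portrait expression sets stored on the device — can be sketched as follows. This is a minimal illustration, not the patented implementation; the field names (`nickname`, `note_name`, `phone`) and the dictionary-based store are assumptions.

```python
# Hypothetical sketch: match chat-partner info fields against the labels
# of stored portrait expression sets. All names are illustrative.

def find_expression_set(partner_info, expression_sets):
    """Return the stored expression set whose label matches any field of
    the chat partner's info, or None if no stored set matches."""
    candidates = {partner_info.get("nickname"),
                  partner_info.get("note_name"),
                  partner_info.get("phone")}
    candidates.discard(None)
    for label, expressions in expression_sets.items():
        if label in candidates:
            return expressions
    return None

library = {"Alice": ["alice_smile.gif", "alice_wave.gif"]}
print(find_expression_set({"nickname": "Alice"}, library))
# A partner with no stored set yields None (the step-404 fallback case).
print(find_expression_set({"nickname": "Bob", "phone": "123"}, library))
```

In practice several fields (note name, account, phone number) may all be tried, since the label attached to a set may come from any of them.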

402: Determine whether a portrait expression set of the corresponding contact is stored.

It can be understood that the portrait expression sets are generated as follows: the mobile phone 100 automatically clusters the portrait photos stored in it, groups the portrait images of the same person into one portrait image set, performs image processing on each portrait image set, and generates a corresponding portrait expression set. The generated portrait expression sets can be stored in the mobile phone 100 as a portrait expression library automatically generated by the mobile phone 100. That is, the mobile phone 100 processes the portrait images stored in it to generate a portrait expression library including at least one portrait expression set, and each portrait expression set includes at least one expression of the same person generated by the mobile phone 100.
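The grouping step above can be sketched minimally as follows. A real implementation would cluster face embeddings produced by face recognition; here a precomputed `person_id` stands in for that output, and all names are illustrative assumptions.

```python
from collections import defaultdict

# Sketch only: group portrait photos of the same person into one portrait
# image set. "person_id" is a stand-in for a face-recognition result.

def cluster_portraits(photos):
    """Group photos of the same person into one portrait image set."""
    sets = defaultdict(list)
    for photo in photos:
        sets[photo["person_id"]].append(photo["file"])
    return dict(sets)

photos = [
    {"file": "IMG_001.jpg", "person_id": "alice"},
    {"file": "IMG_002.jpg", "person_id": "bob"},
    {"file": "IMG_003.jpg", "person_id": "alice"},
]
print(cluster_portraits(photos))
# {'alice': ['IMG_001.jpg', 'IMG_003.jpg'], 'bob': ['IMG_002.jpg']}
```

Each resulting portrait image set would then be processed into a portrait expression set and stored in the portrait expression library.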

In some embodiments, the clustering of images and the generation of the portrait image sets and portrait expression sets can be implemented by a photo album application, for example, the Gallery application in the mobile phone 100. After generating the portrait expression sets, the album application needs to pre-associate each portrait expression set with chat partners in the communication application. Further, in step 402, the mobile phone 100 determining whether a portrait expression set of the corresponding contact is stored means that Gallery determines whether its stored data includes a portrait expression set of the corresponding contact.

In some embodiments, after generating the portrait expression sets, the album application can obtain contact information from the Contacts application of the mobile phone 100 and, based on the user's input, establish a correspondence between portrait expression sets and contact information; for example, the contact information can be used as the label of a portrait expression set. The establishment of the portrait expression library and the correspondence between portrait expression sets and contact information will be further described below and are not detailed here.

Further, the communication application may also include an address book access switch, and by choosing whether to turn on this switch, the user chooses whether to enable the communication application's function of obtaining the contact information stored in the address book. The user can operate the mobile phone 100 to turn on the address book access switch; then, when the mobile phone 100 determines in step 402 whether a portrait expression set of the corresponding contact is stored, the communication application can obtain the contact information from the address book, determine the contact information of the corresponding contact, and send the confirmed contact information to the Gallery application. Based on the received contact information, the Gallery application determines whether a portrait expression set corresponding to that contact information is stored in the portrait expression library.

In some embodiments, the user can add the name of the person corresponding to a portrait expression set as the label of the automatically generated portrait expression set; the user can also customize the label of each portrait expression set and add it to the corresponding set. Then, after obtaining the corresponding contact information, the mobile phone 100 can match it against the labels of the portrait expression sets to determine the portrait expression set of the corresponding contact.

If the determination result of step 402 is yes, step 403 is executed.

403: Obtain and display the portrait expression set of the corresponding contact.

It can be understood that when displaying the portrait expression set of the corresponding contact, the mobile phone 100 can add, in the expression area of the communication application, a portrait expression button that holds the portrait expression set of the corresponding contact (that is, the chat partner expression button mentioned above). Then, when the mobile phone 100 detects that the user taps the portrait expression button, it can display the expressions in the portrait expression set of the corresponding contact in the expression display area of the communication application. When the display interface of the mobile phone 100 displays the expressions of the corresponding contact, the user can select the needed expression from the displayed at least one expression of the corresponding contact and send it to the corresponding contact.

In some embodiments, when generating a portrait expression set, the Gallery application selects one portrait image from the corresponding portrait image set, or one expression from the portrait expression set, as the cover of that portrait expression set. Further, when adding the portrait expression button, the mobile phone 100 can use the cover of the corresponding contact's portrait expression set as that contact's portrait expression button.

For example, as shown in Figure 1c, the communication application is WeChat™, and the portrait expression set of the corresponding chat partner obtained by the mobile phone 100 is Alice's portrait expression set; the newly added portrait expression button of the corresponding chat partner may be, for example, the chat partner expression button 115. When the user taps the chat partner expression button 115, at least one expression in the portrait expression set of the chat partner Alice can be displayed in the expression selection area 112. In some embodiments, the expression selection area 112 displays the expressions in the portrait expression set of the corresponding chat partner in descending order of their usage counts.

The expression interaction method provided by the embodiments of this application can, in response to the user's expression input operation, display the portrait expression set of the corresponding contact that the user needs. The user can then operate the screen of the electronic device and select the needed expression from the portrait expression set of the corresponding contact, without having to manually create and import expressions of the corresponding contact; the user operation is simple and the experience is good. Meanwhile, using expressions made from portrait images of the corresponding contact as the expressions exchanged between the user and that contact can make the chat more interesting, and expressions made from the corresponding contact's portrait images feel more personal.

Continuing to refer to Figure 4, in some embodiments, when the determination result of step 402 is no, the mobile phone 100 performs the following steps.

404: Obtain all stored portrait expression sets, and obtain the user's historical usage information for each portrait expression set.

It can be understood that the historical usage information may be, for example, the user's usage frequency of each portrait expression set, the portrait expression sets recently used by the user, and the like. The usage frequency of each portrait expression set can represent the user's degree of preference for that set, and the recently used portrait expression sets can represent the sets the user has preferred lately. In some embodiments, the mobile phone 100 can select one of these categories, such as the usage frequency of each set or the recently used sets, as the historical usage information. In other embodiments, the mobile phone 100 can use a weighted value of at least two of these categories as the historical usage information.

405: According to the historical usage information of each portrait expression set, obtain and display the portrait expression sets that meet a preset condition.

It can be understood that the preset condition can be set according to the historical usage information. For example, when the historical usage information is the user's usage frequency of each portrait expression set, the preset condition may be to display the three most frequently used portrait expression sets, or may be that a set's usage frequency exceeds a preset threshold. The same applies when the historical usage information is of another category.
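The two preset conditions mentioned here (top-N by usage frequency, or frequency above a threshold), and the weighted combination of frequency with recency described under step 404, could look like the following sketch. The weights, thresholds, and data shapes are illustrative assumptions, not part of the described method.

```python
# Hypothetical scoring and selection of portrait expression sets.
# Weights and thresholds are assumed values for illustration.

def rank_sets(usage, recency, w_freq=0.7, w_recent=0.3):
    """Score each set by a weighted mix of usage frequency and a recency
    indicator, returning names ordered highest score first."""
    scores = {name: w_freq * usage.get(name, 0) + w_recent * recency.get(name, 0)
              for name in usage}
    return sorted(scores, key=scores.get, reverse=True)

def select_sets(usage, top_n=None, threshold=None):
    """Apply a preset condition: keep the top-N most used sets, or all
    sets whose usage frequency exceeds a threshold."""
    ranked = sorted(usage, key=usage.get, reverse=True)
    if top_n is not None:
        return ranked[:top_n]
    return [name for name in ranked if usage[name] > threshold]

usage = {"Alice": 40, "Bob": 25, "Carol": 3, "Dave": 12}
print(select_sets(usage, top_n=3))        # ['Alice', 'Bob', 'Dave']
print(select_sets(usage, threshold=10))   # ['Alice', 'Bob', 'Dave']
print(rank_sets(usage, {"Bob": 100}))     # ['Bob', 'Alice', 'Dave', 'Carol']
```

The recency term shows how a recently favored set can outrank a set with a higher raw frequency when a weighted combination is used.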

Figures 5a to 5b are interface diagrams of some interaction methods provided by embodiments of this application.

In some embodiments, taking WeChat™ as the communication application as an example, the mobile phone 100 can also provide portrait expressions of other chat partners (corresponding to the contacts mentioned above) for the user's expression interaction with the corresponding chat partner (that is, the corresponding contact). This applies, for example, when the expressions of the corresponding chat partner cannot meet the user's expression interaction needs, when the user prefers to select expressions from other chat partners' portrait expression sets to interact with the corresponding chat partner, or when the user wants expressions made from portrait images of mutual friends of the user and the corresponding chat partner to make the chat more interesting.

Specifically, the mobile phone 100 can provide a query button for the portrait expression sets on the display interface. When the mobile phone 100 detects that the user taps the query button of the portrait expression sets in the communication application, the mobile phone 100 can display a portrait expression set selection window on the display interface. Each portrait expression set can be displayed in turn in the selection window through its cover or its label.

In some embodiments, when the portrait expression sets are displayed in turn in the selection window through their labels, the sets can be sorted according to the labels and displayed in the selection window according to the sorting result. For example, if the label of a portrait expression set is the person's name, the labels can be sorted by the first letter of the name.

As shown in Figure 5a, the display interface of the mobile phone 100 displays the chat interface 501 of WeChat™. The expression set selection area 511 of the chat interface 501 includes a chat partner expression button 521 and a query button 522. The user can tap the query button 522; the mobile phone then responds to the user's tap by displaying the portrait expression set selection window 502 as a floating window above the chat interface 501. As shown in Figure 5a, the selection window 502 can display the labels of the portrait expression sets of chat partners other than the corresponding chat partner, and the labels can be sorted by first letter. For example, if the labels include "小飞侠" ("Peter Pan") and "Zwc", "小飞侠" can be ranked first and "Zwc" second.

In some embodiments, when the portrait expression sets are displayed in turn in the selection window through their covers, the sets can be sorted according to their historical usage information and displayed in the selection window according to the sorting result. The historical usage information has been described above and is not detailed here. For example, the covers of the portrait expression sets can be sorted in descending order of the sets' usage frequency.

As shown in Figure 5b, the display interface of the mobile phone 100 displays the chat interface 503 of WeChat™. The expression set selection area 512 of the chat interface 503 includes a chat partner expression button 531 and a query button 532. The user can tap the query button 532; the mobile phone then responds to the user's tap by displaying the portrait expression set selection window 504 as a floating window above the chat interface 503. As shown in Figure 5b, the selection window 504 can display the covers of the portrait expression sets of chat partners other than the corresponding chat partner, and the covers can be sorted in descending order of the sets' usage frequency. For example, if the portrait expression set corresponding to cover 541 is used more frequently than the one corresponding to cover 542, covers 541 and 542 can be displayed in the selection window 504 as shown in Figure 5b.
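The two display orders just described for the selection window — alphabetical by label (Figure 5a) and descending usage frequency (Figure 5b) — can be sketched as simple sort keys. The data shapes are illustrative assumptions.

```python
# Illustrative sketch of the two selection-window display orders.

def order_by_label(sets):
    """Sort expression-set labels by their first letter, case-insensitively."""
    return sorted(sets, key=lambda label: label.lower())

def order_by_usage(sets, usage):
    """Sort expression-set covers by usage frequency, highest first;
    sets with no recorded usage sort last."""
    return sorted(sets, key=lambda label: usage.get(label, 0), reverse=True)

labels = ["Zwc", "Peter Pan", "bob"]
print(order_by_label(labels))                         # ['bob', 'Peter Pan', 'Zwc']
print(order_by_usage(labels, {"Zwc": 9, "bob": 4}))   # ['Zwc', 'bob', 'Peter Pan']
```

For Chinese labels such as "小飞侠", the first-letter comparison would in practice be made on a romanization (pinyin) of the name rather than on raw code points.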

Next, with reference to Figure 6, the interaction method in the embodiments of this application is further described, taking as an example that the communication application is WeChat™ and the Gallery application automatically generates emoticon packages.

Figure 6 is a flowchart of an interaction method provided by an embodiment of this application.

As shown in Figure 6, in some embodiments, the interaction method includes the following steps.

601: WeChat™ 611 detects the user's expression input operation.

It can be understood that the expression input operation is the user tapping an expression input button, such as the expression input button 111 in Figure 1a, in the chat interface of WeChat™ 611. When WeChat™ 611 detects the user tapping the expression input button 111, it has detected the user's expression input operation.

602: WeChat™ 611 obtains the corresponding chat partner information in response to the user's expression input operation.

It can be understood that when WeChat™ 611 detects the expression input operation, this indicates that the user needs to input an expression in the chat interface of WeChat™ 611. At this time, since the emoticon package acquisition switch of WeChat™ 611 is turned on, for example the "Allow obtaining emoticon packages generated by Gallery" switch in Figure 3b, WeChat™ 611 first needs to obtain the information of the chat partner who is chatting with the user in order to automatically obtain the needed portrait expression set.

603: WeChat™ 611 sends the corresponding chat partner information to the Gallery application 612.

It can be understood that after obtaining the corresponding chat partner information, WeChat™ 611 sends the obtained information to the Gallery application 612 and requests from the Gallery application 612 the portrait expression set of the corresponding chat partner automatically generated by the Gallery application 612.

604: The Gallery application 612 determines whether the portrait expression set of the corresponding chat partner is stored.

It can be understood that, in response to the information from WeChat™ 611, the Gallery application 612 matches the corresponding chat partner information against the labels of the portrait expression sets it has generated, and determines whether a portrait expression set of the corresponding chat partner is among them. The label of a portrait expression set may be, for example, the name of the person in the set, a user-defined label, or the contact information in the Contacts application corresponding to the set.

If the determination result of step 604 is yes, WeChat™ 611 can directly display the portrait expression set of the corresponding chat partner, and step 605 is executed.

605: The Gallery application 612 sends the portrait expression set of the corresponding chat partner to WeChat™ 611.

It can be understood that when the Gallery application 612 matches a portrait expression set of the corresponding chat partner, it can respond to the request of WeChat™ 611 by sending that portrait expression set to WeChat™ 611.

606: WeChat™ 611 displays the portrait expression set of the corresponding chat partner.

If the determination result of step 604 is no, the Gallery application 612 may not store any portrait image of the corresponding chat partner, and thus has not processed such images to obtain a portrait expression set of the corresponding chat partner. In this case, the Gallery application 612 can send the remaining portrait expression sets to WeChat™ 611 for display according to the user's usage preference, that is, steps 607 and 608 are executed.

607: The Gallery application 612 obtains the historical usage information of each portrait expression set.

It can be understood that the historical usage information of the portrait expression sets can represent the user's usage preference for each set. The historical usage information has been described in detail above and is not repeated here.

608: The Gallery application 612 sends at least one portrait expression set that meets the preset condition to WeChat™ 611.

It can be understood that the Gallery application 612 sends at least some of the portrait expression sets it has generated to WeChat™ 611, so the user neither has to manually create portrait expressions nor manually import them into WeChat™ 611. The user operation is simple.

609: WeChat™ 611 displays the received at least one portrait expression set that meets the preset condition.
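The gallery-side branch of steps 604 to 608 — return the matched set if one is stored, otherwise fall back to the most-used sets — can be sketched end to end as follows. The request/response shape and all names are assumptions for illustration, not the actual inter-application interface.

```python
# Hypothetical gallery-side handler for the flow of steps 604-608.

def handle_expression_request(partner, library, usage, top_n=3):
    """Return the partner's own portrait expression set if one is stored
    (steps 604-605); otherwise return the most-used sets as a fallback
    (steps 607-608)."""
    if partner in library:                       # step 604: label match
        return {"matched": True, "sets": [library[partner]]}
    ranked = sorted(library, key=lambda name: usage.get(name, 0), reverse=True)
    return {"matched": False,
            "sets": [library[name] for name in ranked[:top_n]]}

library = {"Alice": ["alice1.gif"], "Bob": ["bob1.gif"], "Carol": ["carol1.gif"]}
usage = {"Bob": 8, "Carol": 2, "Alice": 5}
print(handle_expression_request("Alice", library, usage))
# {'matched': True, 'sets': [['alice1.gif']]}
print(handle_expression_request("Dave", library, usage, top_n=2))
# {'matched': False, 'sets': [['bob1.gif'], ['alice1.gif']]}
```

In either branch the communication application only has to display whatever sets arrive, which is why the user never creates or imports expressions manually.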

In some embodiments, when the user wants to obtain an emoticon with specified text, the user can input the desired text together with the portrait image to participate in making the expression, and the mobile phone 100 can automatically generate the corresponding expression according to the information input by the user. This process is further described below with reference to Figures 7 to 8d, taking as an example that the communication application is WeChat™ 611 and that an expression of the corresponding chat partner is made.

Figure 7 is a schematic flowchart of an expression generation method provided by an embodiment of this application.

Figures 8a to 8e show interface changes during the generation of some expressions provided by embodiments of this application.

As shown in Figure 7, the method includes the following steps.

701: WeChat™ 611 obtains the target expression text input by the user.

It can be understood that the target expression text is the content input by the user in the text input box of WeChat™ 611.

For example, the display interface of the mobile phone 100 displays the chat interface 801 of WeChat™ 611, as shown in Figure 8a. The chat interface 801 of WeChat™ 611 displays the text input box 811 of WeChat™ 611, in which the user can input the text content to be sent, or the target expression text for the target expression of the corresponding chat partner. The chat interface 801 of WeChat™ 611 also includes an expression selection area 810, which displays the expressions in the portrait expression set of the corresponding chat partner as well as an add button 812.

702: WeChat™ 611 detects the user's expression adding operation, and obtains the corresponding chat partner information in response to the expression adding operation.

It can be understood that the user can operate the corresponding position on the screen of the mobile phone 100 to trigger the expression add button; at this point, WeChat™ 611 detects the user's expression adding operation.

For example, as shown in Figure 8a, the user can tap the add button 812 in the chat interface 801 of WeChat™ 611. When the add button 812 is triggered, WeChat™ 611 has detected the user's expression adding operation.

703: WeChat™ 611 obtains the portrait image set of the corresponding chat partner from the Gallery application 612.

It can be understood that when WeChat™ 611 detects the expression adding operation, it needs to obtain portrait images of the corresponding chat partner that can be made into expressions; therefore, WeChat™ 611 obtains the portrait image set of the corresponding chat partner from the Gallery application 612.

704: WeChat™ 611 displays the portrait image set of the corresponding chat partner.

It can be understood that after the user performs the expression adding operation, the portrait image set of the corresponding chat partner pops up above the display interface of WeChat™ 611.

For example, after the user taps the add button 812 in the chat interface 801 of WeChat™ 611 shown in Figure 8a, WeChat™ 611 displays the obtained portrait image set of the corresponding chat partner as a floating window above the chat interface 801, as shown in Figure 8b. After the user taps the add button 812, a portrait selection window 802 is displayed above the chat interface 801 of WeChat™ 611. The portrait selection window 802 may include a portrait selection area 813 and a "manual designation" button 814.

705: WeChat™ 611 detects the user's portrait designation operation, and determines the portrait image information corresponding to the portrait designation operation.

It can be understood that the portrait designation operation is the user selecting, from the portrait image set displayed by WeChat™ 611, a portrait image that matches the target expression text.

For example, as shown in Figure 8b, after the portrait selection window 802 pops up on the mobile phone 100, the user can tap, among the multiple portrait images of the corresponding chat partner displayed in the portrait selection area 813, the portrait image that matches the target expression text, thereby selecting it. The user can then tap the "manual designation" button 814 to complete the portrait designation operation.

It can be understood that, in some embodiments, the portrait image information corresponding to the portrait designation operation determined by WeChat™ 611 may be the portrait image itself. In other embodiments, it may be other types of information that can characterize the portrait image, such as the name of the portrait image or features of the portrait image, which is not limited in this application.

706:微信TM611将确定的人像图像信息以及目标表情文字发送至图库应用612。706: WeChat TM 611 sends the determined portrait image information and target emoticon text to the gallery application 612.

707:图库应用612根据接收到的人像图像信息以及目标表情文字生成目标表情。707: The gallery application 612 generates a target expression based on the received portrait image information and target expression text.

可以理解,图库应用612在接收到人像图像信息后,会确定对应的对应聊天人的人像图像,并结合目标表情文字生成目标表情。It can be understood that after receiving the portrait image information, the gallery application 612 will determine the corresponding portrait image of the chatting person, and combine it with the target expression text to generate the target expression.
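
To make step 707 concrete, a minimal Python sketch of combining the received portrait image information with the target emoticon text is given below. This is an illustrative assumption, not the gallery application's actual implementation: the expression is modeled as a plain record, and the field names (`image_name`, `font`, etc.) are hypothetical.

```python
def generate_target_expression(portrait_info: dict, target_text: str) -> dict:
    """Combine portrait image information received from the chat application
    with the target emoticon text to form a target-expression record.

    `portrait_info` may carry the image itself or other data that identifies
    it (its name, its features, ...); here it is assumed to at least name
    the image.
    """
    if "image_name" not in portrait_info:
        raise ValueError("portrait image information must identify an image")
    return {
        "image": portrait_info["image_name"],   # which portrait to use
        "text": target_text,                    # emoticon caption
        # layout defaults; per the embodiments, the user may also specify
        # font, font size and text color during the designation operation
        "font": portrait_info.get("font", "default"),
        "font_size": portrait_info.get("font_size", 24),
        "text_color": portrait_info.get("text_color", "white"),
    }
```

The chat application would supply this record's inputs in step 706 and display the resulting expression in steps 708 and 709.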

708:图库应用612将目标表情发送至微信TM611。708: The gallery application 612 sends the target expression to WeChat TM 611.

709:微信TM611在显示界面显示目标表情。709: WeChatTM 611 displays the target expression on the display interface.

可以理解,微信TM611在接收到目标表情后,可以将目标表情添加于微信TM611的聊天界面中对应聊天人的人像表情集中。It can be understood that after receiving the target expression, WeChat TM 611 can add the target expression to the portrait expression set of the corresponding chat person in the chat interface of WeChat TM 611.

在一些实施例中，用户在进行人像指定操作时，还可以对目标表情文字的字体、字号、文字颜色等进行指定，进而微信TM611发送至图库应用612的信息还包括目标表情文字的字体、字号、文字颜色等。In some embodiments, when performing the portrait designation operation, the user can also specify the font, font size, text color, etc. of the target emoticon text, and accordingly the information sent by WeChat TM 611 to the gallery application 612 also includes the font, font size, text color, etc. of the target emoticon text.

下面结合图8c至8e,以具体实施例对上述步骤701至709所述的表情的生成方法进行进一步介绍。The expression generation method described in the above steps 701 to 709 will be further introduced with specific embodiments in conjunction with Figures 8c to 8e.

如图8c所示，手机100的显示界面显示微信TM611的聊天界面803。图8c中显示的微信TM611的聊天界面803与上文中图8a显示的微信TM611的聊天界面801相同，包括微信TM611的文字输入框811，用户可以在文字输入框811中输入目标表情文字“震惊!!”。微信TM611的聊天界面803中还包括添加按钮812。As shown in Figure 8c, the display interface of the mobile phone 100 displays the chat interface 803 of WeChat TM 611. The chat interface 803 of WeChat TM 611 shown in Figure 8c is the same as the chat interface 801 of WeChat TM 611 shown in Figure 8a above, and includes a text input box 811 of WeChat TM 611, in which the user can input the target emoticon text "Shocked!!". The chat interface 803 of WeChat TM 611 also includes an add button 812.

当微信TM611检测到用户点击添加按钮812的操作时，微信TM611会从图库应用612获取对应聊天人的人像图像集，此时微信TM611响应于用户的表情增加操作，在微信TM611的聊天界面803上方，以浮窗模式显示人像选择窗口804。人像选择窗口804中可以包括人像选择区域813以及“手动指定”按钮814。When WeChat TM 611 detects the user's operation of clicking the add button 812, WeChat TM 611 obtains the portrait image set of the corresponding chat partner from the gallery application 612. At this time, in response to the user's expression addition operation, WeChat TM 611 displays the portrait selection window 804 in floating-window mode above the chat interface 803 of WeChat TM 611. The portrait selection window 804 may include a portrait selection area 813 and a "manual designation" button 814.

用户可以在人像选择区域813中选中与目标表情文字“震惊!!”匹配的人像图像，此时用户选中图像815。用户在选中图像815后，可以点击“手动指定”按钮814，完成人像图像的选择。此时，微信TM611可以确定图像815的人像图像信息，并将该信息以及目标表情文字“震惊!!”发送给图库应用612。图库应用612可以根据人像图像信息确定目标表情文字对应的人像图像，并将目标表情文字以及确定的人像图像制作成目标表情。The user can select, in the portrait selection area 813, a portrait image that matches the target emoticon text "Shocked!!"; here the user selects image 815. After selecting image 815, the user can click the "manual designation" button 814 to complete the selection of the portrait image. At this time, WeChat TM 611 can determine the portrait image information of image 815 and send this information together with the target emoticon text "Shocked!!" to the gallery application 612. The gallery application 612 can determine, based on the portrait image information, the portrait image corresponding to the target emoticon text, and combine the target emoticon text with the determined portrait image to produce the target expression.

图库应用612在自动生成目标表情后，可以将目标表情发送至微信TM611，微信TM611会在其聊天界面显示，如图8e所示。微信TM611在获取到目标表情后，显示微信TM611的聊天界面805。其中微信TM611的聊天界面805的表情选择区域显示有新制作的目标表情821。After automatically generating the target expression, the gallery application 612 can send the target expression to WeChat TM 611, which displays it on its chat interface, as shown in Figure 8e. After acquiring the target expression, WeChat TM 611 displays the chat interface 805, in whose expression selection area the newly created target expression 821 is displayed.

在一些实施例中，用户通过指定目标表情文字生成的目标表情会添加至对应聊天人的人像表情集中，并且在微信TM611显示对应聊天人的人像表情集时，会将该目标表情显示在人像表情集的前列，可以理解为，用户通过指定目标表情文字生成的目标表情的可靠性更高。In some embodiments, the target expression generated by the user by specifying the target emoticon text is added to the portrait expression set of the corresponding chat partner, and when WeChat TM 611 displays that portrait expression set, the target expression is shown at the front of the set. This can be understood to mean that a target expression generated from user-specified emoticon text is considered more reliable.

可以理解，上述实施例中以对应聊天人的表情增加为例，对表情的生成过程进行了介绍，其他人像表情集的表情的增加也可采用相同的方式进行，本申请对此不作限制。It can be understood that the above embodiment introduces the expression generation process by taking the addition of an expression for the corresponding chat partner as an example; expressions can be added to other portrait expression sets in the same way, and this application does not limit this.

下面结合图9,对本申请实施例中人像表情集的自动生成的过程进行介绍。The following is an introduction to the process of automatically generating a portrait expression set in the embodiment of the present application with reference to Figure 9 .

图9所示为本申请实施例中提供的一种人像表情集的自动生成方法的流程示意图。FIG. 9 is a schematic flowchart of a method for automatically generating a portrait expression set provided in an embodiment of the present application.

如图9所示,该方法包括:As shown in Figure 9, the method includes:

901:获取手机100中存储的人像图像。901: Obtain the portrait image stored in the mobile phone 100.

可以理解，在一些实施例中，本申请实施例中提供的表情的自动生成方法可以由相册类应用执行并完成人像表情集的制作，例如图库应用。进而，获取手机100中存储的人像图像，可以理解为图库应用获取其对应的存储模块中存储的人像图像数据。It can be understood that in some embodiments, the automatic expression generation method provided in the embodiments of this application can be executed by an album-type application, such as the gallery application, to complete the production of portrait expression sets. Accordingly, obtaining the portrait images stored in the mobile phone 100 can be understood as the gallery application obtaining the portrait image data stored in its corresponding storage module.

902:根据不同人像图像对应的人物的不同,将手机100中存储的人像图像进行聚类。902: Cluster the portrait images stored in the mobile phone 100 according to the different characters corresponding to the different portrait images.

可以理解,对人像图像进行聚类为,将手机100中存储的所有人像图像进行特征提取后,将特征相近的人像图像划分至同一类别。It can be understood that clustering the portrait images is to extract features from all the portrait images stored in the mobile phone 100 and then classify the portrait images with similar features into the same category.

在一些实施例中，对人像图像进行聚类的聚类算法可以例如K均值(K-Means)聚类算法、均值偏移聚类算法等，可以根据待处理的人像图像等选择相匹配的聚类算法，本申请对此不作限制。In some embodiments, the clustering algorithm used to cluster the portrait images may be, for example, the K-Means clustering algorithm, the mean-shift clustering algorithm, etc.; a suitable clustering algorithm can be selected according to the portrait images to be processed, and this application does not limit this.
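
The clustering of step 902 can be sketched with a self-contained K-Means implementation over per-image feature vectors. This is a plain-Python illustration under the assumption that features have already been extracted; a real system would use a trained face-embedding model and a library implementation.

```python
import random

def kmeans(features, k, iterations=20, seed=0):
    """Cluster feature vectors (e.g. face features) into k groups.

    Returns, for each feature vector, the index of the cluster it belongs
    to; vectors with similar features end up in the same cluster.
    """
    rng = random.Random(seed)
    centers = rng.sample(features, k)          # initial centers from the data
    assignment = [0] * len(features)
    for _ in range(iterations):
        # assign each feature vector to its nearest center
        for i, f in enumerate(features):
            assignment[i] = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(f, centers[c])),
            )
        # recompute each center as the mean of its members
        for c in range(k):
            members = [features[i] for i in range(len(features)) if assignment[i] == c]
            if members:
                centers[c] = [sum(dim) / len(members) for dim in zip(*members)]
    return assignment
```

For the well-separated feature vectors produced by different people's faces, a few iterations are typically enough to converge, after which each cluster corresponds to one portrait image set (step 903).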

903:将同一类别的人像图像划分至一个人像图像集。903: Divide portrait images of the same category into a portrait image set.

可以理解，利用聚类算法得到的同一类别的人像图像可以具备相同的人物特征，即同一类别的人像图像为同一人物的人像图像，进而属于同一类别的人像图像可以作为一个人像图像集。It can be understood that portrait images of the same category obtained by the clustering algorithm share the same person characteristics, that is, portrait images of the same category are portrait images of the same person, so the portrait images belonging to one category can serve as one portrait image set.

904:利用表情制作模型将各人像图像集中的人像图像制作成表情,得到各人像表情集。904: Use the expression production model to make the portrait images in each portrait image set into expressions to obtain each portrait expression set.

可以理解，表情制作模型是根据大量的人像图像、人像图像对应的表情标签、表情标签对应的表情文字生成的、表征人像图像与表情文字对应关系的模型。表情制作模型还可以对人像图像进行分析，计算出表情文字在人像图像中的目标位置，并根据目标位置的大小和颜色等，确定表情文字的字号和文字颜色；该模型还可以根据表情文字的情感色彩，设置表情文字与字体的对应关系。It can be understood that the expression production model is a model generated from a large number of portrait images, the expression labels corresponding to the portrait images, and the emoticon text corresponding to the expression labels, and it characterizes the correspondence between portrait images and emoticon text. The expression production model can also analyze a portrait image, calculate the target position of the emoticon text within it, and determine the font size and text color of the emoticon text based on the size and color of the target position; the model can further set the correspondence between emoticon text and fonts according to the emotional tone of the emoticon text.

在一些实施例中,表情制作模型包括人像子模型、文字子模型、表情生成子模型。In some embodiments, the expression production model includes a portrait sub-model, a text sub-model, and an expression generation sub-model.

其中，人像子模型为利用计算机视觉(Computer Vision, CV)技术建立的子模型。人像子模型对于输入的人像图像，可以计算人像图像为各种表情的权重。具体地，可以预先在人像子模型中根据不同的表情设置对应的表情标签，例如微笑、皱眉、大笑等，进而人像子模型对于输入的人像图像，可以计算出其分别为微笑的权重、皱眉的权重、大笑的权重等，并输出权重最高的至少一个表情标签作为人像子模型的输出结果。Among them, the portrait sub-model is a sub-model established using computer vision (CV) technology. For an input portrait image, the portrait sub-model can calculate the weight of the portrait image for each expression. Specifically, expression labels corresponding to different expressions, such as smile, frown, laugh, etc., can be set in the portrait sub-model in advance; then, for an input portrait image, the portrait sub-model can calculate its weight for smiling, its weight for frowning, its weight for laughing, etc., and output at least one expression label with the highest weight as the output result of the portrait sub-model.

其中，文字子模型为通过模型训练得到的、计算预置的表情文字为各种表情标签的权重的子模型。具体地，可以预先在文字子模型中设置与人像表情的表情标签相同的多个表情标签，例如微笑、皱眉、大笑等，进而文字子模型对于输入的预置的表情文字，可以计算出表情文字分别为各表情标签的权重，并输出权重最高的至少一个表情标签作为文字子模型的输出结果。Among them, the text sub-model is a sub-model, obtained through model training, that calculates the weights of preset emoticon text for the various expression labels. Specifically, the same set of expression labels as those used for portrait expressions, such as smile, frown, laugh, etc., can be set in the text sub-model in advance; then, for an input preset emoticon text, the text sub-model can calculate the weight of the emoticon text for each expression label, and output at least one expression label with the highest weight as the output result of the text sub-model.

其中，表情生成子模型将人像子模型和文字子模型的输出结果进行匹配，确定人像图像对应的预置的表情文字，并通过对人像图像进行分析，对人像图像进行图像处理，例如裁剪、增加滤镜等，同时确定预置的表情文字的目标位置和字号、字体、文字颜色等。具体地，匹配过程可以为：对于人像子模型输出的各人像图像的输出结果，以及文字子模型输出的各预置的表情文字的输出结果，将人像图像的表情标签以及对应的权重，与预置的表情文字的表情标签以及对应的权重进行匹配，确定表情标签以及权重相近的人像图像与预置的表情文字匹配。在匹配完成后，生成人像表情的过程中，可以利用人脸分析技术，对人像图像进行分析，确定人脸在图像中的位置，计算人脸周边色块差异小的区域，例如可以优先选择空白区域，将该区域作为预置的表情文字的位置。The expression generation sub-model matches the output results of the portrait sub-model and the text sub-model to determine the preset emoticon text corresponding to a portrait image, analyzes the portrait image to perform image processing on it, such as cropping or adding filters, and at the same time determines the target position, font size, font, text color, etc. of the preset emoticon text. Specifically, the matching process may be as follows: for the output results of each portrait image produced by the portrait sub-model and the output results of each preset emoticon text produced by the text sub-model, the expression labels and corresponding weights of the portrait image are matched with the expression labels and corresponding weights of the preset emoticon text, and a portrait image whose expression labels and weights are close to those of a preset emoticon text is determined to match it. After matching is completed, in the process of generating the portrait expression, face analysis technology can be used to analyze the portrait image, determine the position of the face in the image, and find areas around the face with little color-block variation; for example, a blank area can be preferentially selected and used as the position of the preset emoticon text.
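
The matching step described above, pairing a portrait image with the preset emoticon text whose expression labels and weights are closest, can be sketched as follows. The dot-product similarity over label weights is an assumption chosen for illustration; the application does not fix a particular similarity measure.

```python
def match_text_to_portrait(portrait_weights, text_weights_by_phrase):
    """Pick the preset emoticon text whose label weights best match a portrait.

    `portrait_weights` maps expression labels (smile, frown, laugh, ...) to
    the weights the portrait sub-model produced for one image;
    `text_weights_by_phrase` maps each preset phrase to the label weights the
    text sub-model produced for it. Similarity is a simple dot product over
    the shared labels (an illustrative choice).
    """
    def similarity(a, b):
        return sum(a.get(label, 0.0) * b.get(label, 0.0) for label in a)

    return max(
        text_weights_by_phrase,
        key=lambda phrase: similarity(portrait_weights, text_weights_by_phrase[phrase]),
    )
```

A smiling portrait thus pairs with a phrase whose weight also concentrates on the "smile" label rather than on "frown" or "laugh".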

进而,在一些实施例中,预置的表情文字的颜色可以为该空白区域的对比色,以突出表情文字。Furthermore, in some embodiments, the color of the preset emoticon text can be a contrasting color of the blank area to highlight the emoticon text.

在一些实施例中,基于确定出的色块差异小的区域,可以确定该区域的形状和大小,以确定预置的表情文字在该区域内的布局方式以及字号。In some embodiments, based on the determined area with small difference in color blocks, the shape and size of the area can be determined to determine the layout and font size of the preset emoticon text in the area.
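
The preceding paragraphs, finding a low color-variation area for the text, choosing a contrasting text color, and sizing the text to the area, can be sketched over a grayscale pixel grid. The window size and the brightness threshold of 128 are illustrative assumptions.

```python
def pick_text_region(gray, block=2):
    """Scan a grayscale image (2-D list of 0-255 values) in block x block
    windows and return the top-left corner of the flattest window (smallest
    spread of pixel values) together with a contrasting text color for it.
    """
    best, best_spread = (0, 0), float("inf")
    h, w = len(gray), len(gray[0])
    for y in range(h - block + 1):
        for x in range(w - block + 1):
            vals = [gray[y + dy][x + dx] for dy in range(block) for dx in range(block)]
            spread = max(vals) - min(vals)   # small spread = uniform area
            if spread < best_spread:
                best, best_spread = (y, x), spread
    # contrasting text color: white on dark regions, black on light ones
    mean = sum(gray[best[0] + dy][best[1] + dx]
               for dy in range(block) for dx in range(block)) / block ** 2
    color = "white" if mean < 128 else "black"
    return best, color
```

The shape of the chosen window would then bound the layout and font size of the preset emoticon text within it.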

在一些实施例中，可以根据预置的表情文字对应的情感偏向，确定表情文字的字体，例如对于对应开心的表情文字，可以采用圆润、欢快的字体。In some embodiments, the font of the emoticon text can be determined according to the emotional bias of the preset emoticon text; for example, for emoticon text corresponding to happiness, a rounded, cheerful font can be used.

在一些实施例中,可以根据预置的表情文字长度控制人像表情在人像图像上的截取大小,预置的表情文字多时,适当拓展截取区域的大小,以放置预置的表情文字。In some embodiments, the interception size of the portrait expression on the portrait image can be controlled according to the length of the preset emoticon text. When there are many preset emoticon words, the size of the interception area can be appropriately expanded to place the preset emoticon text.

在一些实施例中,步骤901中还可以获取包括人脸的视频文件作为制作人像表情的素材。具体地,可以提取视频文件中包括人脸的视频帧,提取出的视频帧可以作为获取到的人脸图像输入到人像子模型中,计算各表情标签的权重。In some embodiments, in step 901, a video file including human faces may also be obtained as material for making portrait expressions. Specifically, video frames including human faces in the video file can be extracted, and the extracted video frames can be input into the portrait sub-model as the acquired face images, and the weights of each expression label can be calculated.

在一些实施例中,获取包括人脸的视频文件作为制作表情的素材后,可以将多个表情标签的权重相似的视频帧进行组合,达到人像表情的动态表情效果,可以理解,本申请实施例中的表情,可以包括动态的人像表情,也可以包括静态的人像表情,本申请对此不作限制。In some embodiments, after obtaining video files including human faces as materials for making expressions, video frames with similar weights of multiple expression tags can be combined to achieve the dynamic expression effect of portrait expressions. It can be understood that the embodiments of this application The expressions in may include dynamic portrait expressions or static portrait expressions, and this application does not limit this.
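
The combination of video frames with similar expression-label weights into one dynamic expression can be sketched as follows; the greedy grouping of consecutive frames and the L1-distance threshold are assumptions made for illustration.

```python
def group_similar_frames(frame_weights, threshold=0.2):
    """Group consecutive face frames whose expression-label weights are close,
    so each group can be assembled into one dynamic (animated) expression.

    `frame_weights` is a list of {label: weight} dicts, one per frame, as
    produced by the portrait sub-model. A frame joins the current group while
    its weight vector stays within `threshold` (L1 distance) of the group's
    first frame; otherwise a new group starts.
    """
    groups = []
    for weights in frame_weights:
        if groups:
            anchor = groups[-1][0]
            labels = set(anchor) | set(weights)
            dist = sum(abs(anchor.get(l, 0.0) - weights.get(l, 0.0)) for l in labels)
            if dist <= threshold:
                groups[-1].append(weights)
                continue
        groups.append([weights])
    return groups
```

Each resulting group of frames shares one dominant expression and can be rendered as a single animated portrait expression.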

在一些实施例中，图7所示的制作目标表情的过程中，在输入目标表情文字后，获取对应聊天人的人像表情时，可以同时获取每个人像图像在人像子模型的输出结果，并且将目标表情文字输入文字子模型，输出对应的表情标签的权重。进而在微信TM611的聊天界面显示人像图像集中的人像图像时，可以按照目标表情文字与人像图像的匹配程度由高到低的顺序显示人像图像，进而用户可以在人像表情集的多张人像图像中快速找到自己需要的人像图像。In some embodiments, in the process of creating the target expression shown in Figure 7, after the target emoticon text is input, when obtaining the portrait expressions of the corresponding chat partner, the output result of each portrait image from the portrait sub-model can be obtained at the same time, and the target emoticon text can be input into the text sub-model to output the weights of the corresponding expression labels. Then, when the portrait images in the portrait image set are displayed on the chat interface of WeChat TM 611, the portrait images can be displayed in descending order of how well they match the target emoticon text, so that the user can quickly find the needed portrait image among the multiple portrait images of the portrait expression set.
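
Displaying portrait images in descending order of their match with the target emoticon text can be sketched by scoring each image's label weights against the text's label weights; the dot-product score is again an illustrative assumption.

```python
def rank_portraits_for_text(text_weights, portrait_weights_by_image):
    """Order portrait images by how well their expression-label weights match
    the target emoticon text, best match first, so the user finds the wanted
    image quickly in the selection window.

    `text_weights` comes from the text sub-model for the input text;
    `portrait_weights_by_image` maps each image name to its portrait
    sub-model label weights.
    """
    def score(image):
        w = portrait_weights_by_image[image]
        return sum(text_weights.get(l, 0.0) * w.get(l, 0.0) for l in text_weights)

    return sorted(portrait_weights_by_image, key=score, reverse=True)
```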

在一些实施例中，手机100执行图7所示的制作目标表情的过程后，手机100还可以根据生成的目标表情对上述步骤904中的表情制作模型进行优化。具体地，手机100可以更新人像子模型中人像图像与表情标签的各权重参数，更新文字子模型中预置的表情文字与表情标签的各权重参数。在一些实施例中，若用户输入的目标表情文字不属于文字子模型中的预置的表情文字，则可以在文字子模型中添加目标表情文字，并确定目标表情文字为各表情标签的权重。In some embodiments, after the mobile phone 100 executes the process of creating the target expression shown in FIG. 7, the mobile phone 100 can also optimize the expression production model in step 904 above based on the generated target expression. Specifically, the mobile phone 100 can update the weight parameters between portrait images and expression labels in the portrait sub-model, and update the weight parameters between preset emoticon text and expression labels in the text sub-model. In some embodiments, if the target emoticon text input by the user is not among the preset emoticon texts in the text sub-model, the target emoticon text can be added to the text sub-model, and the weights of the target emoticon text for each expression label can be determined.
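
The optimization described above, updating the weight parameters with each newly generated target expression and adding unseen emoticon text to the text sub-model, can be sketched as follows. The running-average update rule is an assumption for illustration; the application does not specify a particular update rule.

```python
def update_text_model(text_model, target_text, label_weights, lr=0.3):
    """Fold a user-generated expression back into the text sub-model.

    If `target_text` is not among the preset phrases, it is added with the
    supplied label weights; otherwise its stored weights are nudged toward
    the new observation (a simple running-average update chosen here for
    illustration).
    """
    if target_text not in text_model:
        text_model[target_text] = dict(label_weights)
        return text_model
    stored = text_model[target_text]
    for label in set(stored) | set(label_weights):
        old = stored.get(label, 0.0)
        new = label_weights.get(label, 0.0)
        stored[label] = old + lr * (new - old)   # move part-way toward new value
    return text_model
```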

图10根据本申请实施例示出了一种电子设备100的软件结构框图。Figure 10 shows a software structure block diagram of an electronic device 100 according to an embodiment of the present application.

电子设备100的软件系统可以采用分层架构,事件驱动架构,微核架构,微服务架构,或云架构。本申请实施例以分层架构的安卓系统为例,示例性说明电子设备100的软件结构。The software system of the electronic device 100 may adopt a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture. The embodiment of this application takes the Android system with a layered architecture as an example to illustrate the software structure of the electronic device 100 .

分层架构将软件分成若干个层,每一层都有清晰的角色和分工。层与层之间通过软件接口通信。在一些实施例中,将安卓系统分为四层,从上至下分别为应用程序层,应用程序框架层,安卓运行时和系统库,以及内核层。The layered architecture divides the software into several layers, and each layer has clear roles and division of labor. The layers communicate through software interfaces. In some embodiments, the Android system is divided into four layers, from top to bottom: application layer, application framework layer, Android runtime and system libraries, and kernel layer.

应用程序层可以包括一系列应用程序包。The application layer can include a series of application packages.

如图10所示,应用程序包可以包括相机,图库,日历,通话,地图,导航,WLAN,蓝牙,音乐,视频,短信息等应用程序。As shown in Figure 10, the application package can include camera, gallery, calendar, call, map, navigation, WLAN, Bluetooth, music, video, short message and other applications.

应用程序框架层为应用程序层的应用程序提供应用编程接口(application programming interface,API)和编程框架。应用程序框架层包括一些预先定义的函数。 The application framework layer provides an application programming interface (API) and programming framework for applications in the application layer. The application framework layer includes some predefined functions.

如图10所示,应用程序框架层可以包括窗口管理器,任务管理器,电话管理器,资源管理器,通知管理器,视图系统等。As shown in Figure 10, the application framework layer can include window manager, task manager, phone manager, resource manager, notification manager, view system, etc.

窗口管理器用于管理窗口程序。窗口管理器可以获取显示屏大小,判断是否有状态栏,锁定屏幕,截取屏幕等。在本申请实施例中,窗口管理器可以获取用户的点击操作对应的触摸事件,包括窗口对应的应用信息、触摸的位置等,来匹配相应的显示任务、并显示相应的界面,例如显示上述步骤403中描述的人像表情集等,具体参考上述步骤403、步骤405中相关描述,在此不再赘述。A window manager is used to manage window programs. The window manager can obtain the display size, determine whether there is a status bar, lock the screen, capture the screen, etc. In the embodiment of this application, the window manager can obtain the touch event corresponding to the user's click operation, including the application information corresponding to the window, the touch position, etc., to match the corresponding display task and display the corresponding interface, such as displaying the above steps For the portrait expression set described in step 403, please refer specifically to the relevant descriptions in step 403 and step 405, which will not be described again here.

任务管理器用于配合窗口管理器，调取对应于用户滑动操作的任务内容，例如需要窗口管理器控制执行的显示任务等，任务管理器调取相应显示任务的内容后发送给窗口管理器进行执行，从而实现电子设备100显示相应界面的过程。The task manager cooperates with the window manager to retrieve the task content corresponding to the user's sliding operation, such as display tasks that need to be executed under the window manager's control; the task manager retrieves the content of the corresponding display task and sends it to the window manager for execution, thereby realizing the process in which the electronic device 100 displays the corresponding interface.

内容提供器用来存放和获取数据,并使这些数据可以被应用程序访问。上述数据可以包括视频,图像,音频,拨打和接听的电话,浏览历史和书签,电话簿等。Content providers are used to store and retrieve data and make this data accessible to applications. The above data can include videos, images, audio, calls made and received, browsing history and bookmarks, phone book, etc.

资源管理器为应用程序提供各种资源,比如本地化字符串,图标,图片,布局文件,视频文件等等。The resource manager provides various resources to applications, such as localized strings, icons, pictures, layout files, video files, etc.

通知管理器使应用程序可以在状态栏中显示通知信息,可以用于传达告知类型的消息,可以短暂停留后自动消失,无需用户交互。比如通知管理器被用于告知下载完成,消息提醒等。通知管理器还可以是以图表或者滚动条文本形式出现在系统顶部状态栏的通知,例如后台运行的应用程序的通知,还可以是以对话窗口形式出现在屏幕上的通知。例如在状态栏提示文本信息,发出提示音,电子设备振动,指示灯闪烁等。The notification manager allows applications to display notification information in the status bar, which can be used to convey notification-type messages and can automatically disappear after a short stay without user interaction. For example, the notification manager is used to notify download completion, message reminders, etc. The notification manager can also be notifications that appear in the status bar at the top of the system in the form of charts or scroll bar text, such as notifications for applications running in the background, or notifications that appear on the screen in the form of conversation windows. For example, text information is prompted in the status bar, a beep sounds, the electronic device vibrates, the indicator light flashes, etc.

视图系统包括可视控件,例如显示文字的控件,显示图片的控件等。视图系统可用于构建应用程序。显示界面可以由一个或多个视图组成的。例如,包括短信通知图标的显示界面,可以包括显示文字的视图以及显示图片的视图。The view system includes visual controls, such as controls that display text, controls that display pictures, etc. A view system can be used to build applications. The display interface can be composed of one or more views. For example, a display interface including a text message notification icon may include a view for displaying text and a view for displaying pictures.

安卓运行时包括核心库和虚拟机。安卓运行时负责安卓系统的调度和管理。The Android runtime includes core libraries and a virtual machine. The Android runtime is responsible for the scheduling and management of the Android system.

核心库包含两部分:一部分是java语言需要调用的功能函数,另一部分是安卓的核心库。The core library contains two parts: one is the functional functions that need to be called by the Java language, and the other is the core library of Android.

应用程序层和应用程序框架层运行在虚拟机中。虚拟机将应用程序层和应用程序框架层的java文件执行为二进制文件。虚拟机用于执行对象生命周期的管理,堆栈管理,线程管理,安全和异常的管理,以及垃圾回收等功能。The application layer and application framework layer run in virtual machines. The virtual machine executes the java files of the application layer and application framework layer into binary files. The virtual machine is used to perform object life cycle management, stack management, thread management, security and exception management, and garbage collection and other functions.

系统库可以包括多个功能模块。例如:表面管理器(surface manager),媒体库(Media Libraries),三维图形处理库(例如:OpenGL ES),2D图形引擎(例如:SGL)等。System libraries can include multiple functional modules. For example: surface manager (surface manager), media libraries (Media Libraries), 3D graphics processing libraries (for example: OpenGL ES), 2D graphics engines (for example: SGL), etc.

表面管理器用于对显示子系统进行管理,并且为多个应用程序提供了2D和3D图层的融合。The surface manager is used to manage the display subsystem and provides the fusion of 2D and 3D layers for multiple applications.

媒体库支持多种常用的音频,视频格式回放和录制,以及静态图像文件等。媒体库可以支持多种音视频编码格式,例如:MPEG4,H.264,MP3,AAC,AMR,JPG,PNG等。The media library supports playback and recording of a variety of commonly used audio and video formats, as well as static image files, etc. The media library can support a variety of audio and video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.

三维图形处理库用于实现三维图形绘图,图像渲染,合成,和图层处理等。The 3D graphics processing library is used to implement 3D graphics drawing, image rendering, composition, and layer processing.

2D图形引擎是2D绘图的绘图引擎。2D Graphics Engine is a drawing engine for 2D drawing.

内核层是硬件和软件之间的层。内核层至少包含显示驱动,触控驱动,传感器驱动。The kernel layer is the layer between hardware and software. The kernel layer contains at least display driver, touch driver, and sensor driver.

在说明书对“一个实施例”或“实施例”的引用意指结合实施例所描述的具体特征、结构或特性被包括在根据本申请公开的至少一个范例实施方案或技术中。说明书中的各个地方的短语“在一个实施例中”的出现不一定全部指代同一个实施例。Reference in the specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one example embodiment or technology disclosed herein. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment.

本申请公开还涉及用于执行本文中的操作的装置。该装置可以专门出于所要求的目的而构造，或者其可以包括被存储在计算机中的计算机程序选择性地激活或者重新配置的通用计算机。这样的计算机程序可以被存储在计算机可读介质中，诸如，但不限于任何类型的盘，包括软盘、光盘、CD-ROM、磁光盘、只读存储器(ROM)、随机存取存储器(RAM)、EPROM、EEPROM、磁或光卡、专用集成电路(ASIC)或者适于存储电子指令的任何类型的介质，并且每个可以被耦合到计算机系统总线。此外，说明书中所提到的计算机可以包括单个处理器或者可以是采用针对增加的计算能力的多个处理器的架构。The present disclosure also relates to apparatus for performing the operations described herein. This apparatus may be specially constructed for the required purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer-readable medium such as, but not limited to, any type of disk, including floppy disks, optical disks, CD-ROMs, magneto-optical disks, read-only memory (ROM), random access memory (RAM), EPROM, EEPROM, magnetic or optical cards, application-specific integrated circuits (ASICs), or any type of medium suitable for storing electronic instructions, and each may be coupled to a computer system bus. Furthermore, the computers referred to in the specification may include a single processor or may employ an architecture involving multiple processors for increased computing power.

另外,在本说明书所使用的语言已经主要被选择用于可读性和指导性的目的并且可能未被选择为描绘或限制所公开的主题。因此,本申请公开旨在说明而非限制本文所讨论的概念的范围。 Additionally, the language used in this specification has been selected primarily for readability and instructional purposes and may not have been selected to delineate or limit the disclosed subject matter. Accordingly, this disclosure is intended to illustrate, but not to limit, the scope of the concepts discussed herein.

Claims (15)

一种交互方法，应用于电子设备，其特征在于，包括：An interactive method, applied to an electronic device, characterized by comprising: 所述电子设备上显示有第一应用的第一界面，其中，所述第一界面包括第一联系人的第一会话窗口；A first interface of a first application is displayed on the electronic device, wherein the first interface includes a first conversation window of a first contact; 检测到用户在所述第一界面的所述第一会话窗口的表情输入操作；Detecting the user's expression input operation in the first conversation window of the first interface; 对应于确定出所述电子设备中存储的多个人像表情集中存在与所述第一联系人相关的第一人像表情集，在所述第一会话窗口中显示所述第一人像表情集中的至少一个人像表情。Corresponding to determining that a first portrait expression set related to the first contact exists among the plurality of portrait expression sets stored in the electronic device, displaying at least one portrait expression in the first portrait expression set in the first conversation window. 根据权利要求1所述的交互方法，其特征在于，所述电子设备中预先设置用于表征所述第一应用中的至少一个联系人与所述多个人像表情集的对应关系的关联信息。The interaction method according to claim 1, characterized in that the electronic device is preset with association information used to characterize the correspondence between at least one contact in the first application and the plurality of portrait expression sets. 根据权利要求1所述的交互方法，其特征在于，所述第一会话窗口包括会话区域和表情区域，所述在所述第一会话窗口中显示所述第一人像表情集中的至少一个人像表情，包括：The interaction method according to claim 1, characterized in that the first conversation window includes a conversation area and an expression area, and the displaying of at least one portrait expression in the first portrait expression set in the first conversation window includes: 在所述表情区域显示所述第一人像表情集的人像表情标签，其中，所述人像表情标签为所述电子设备根据所述第一人像表情集中的人像表情确定的；Displaying a portrait expression tag of the first portrait expression set in the expression area, where the portrait expression tag is determined by the electronic device based on the portrait expressions in the first portrait expression set; 在所述人像表情标签中存放并显示所述第一人像表情集中的至少一个人像表情。Storing and displaying at least one portrait expression in the first portrait expression set under the portrait expression tag.
根据权利要求3所述的交互方法，其特征在于，所述人像表情标签为所述第一联系人的人像头像。The interaction method according to claim 3, wherein the portrait expression tag is the portrait avatar of the first contact. 根据权利要求1所述的交互方法，其特征在于，还包括：The interaction method according to claim 1, further comprising: 对应于确定出不存在与所述第一联系人相关的第一人像表情集，在所述第一会话窗口中显示第二人像表情集中的至少一个人像表情，其中所述第二人像表情集为所述多个人像表情集中满足预设条件的人像表情集。Corresponding to determining that no first portrait expression set related to the first contact exists, displaying at least one portrait expression in a second portrait expression set in the first conversation window, wherein the second portrait expression set is a portrait expression set, among the plurality of portrait expression sets, that satisfies a preset condition. 根据权利要求5所述的交互方法，其特征在于，所述第二人像表情集是通过以下方式确定：The interaction method according to claim 5, characterized in that the second portrait expression set is determined in the following manner: 获取所述多个人像表情集的历史使用信息；Obtaining historical usage information of the plurality of portrait expression sets; 确定满足所述预设条件的第一历史使用信息对应的人像表情集为所述第二人像表情集。Determining that the portrait expression set corresponding to first historical usage information that satisfies the preset condition is the second portrait expression set.
根据权利要求6所述的交互方法，其特征在于，所述历史使用信息包括历史使用频率；The interaction method according to claim 6, wherein the historical usage information includes historical usage frequency; 所述确定满足所述预设条件的第一历史使用信息对应的人像表情集为所述第二人像表情集，包括：The determining that the portrait expression set corresponding to the first historical usage information that satisfies the preset condition is the second portrait expression set includes: 确定历史使用频率高于预设频率阈值的第一历史使用频率对应的人像表情集为所述第二人像表情集；或者Determining that the portrait expression set corresponding to a first historical usage frequency that is higher than a preset frequency threshold is the second portrait expression set; or 确定所述多个历史使用频率最高的至少一个人像表情集为所述第二人像表情集。Determining that at least one portrait expression set with the highest historical usage frequency among the plurality of portrait expression sets is the second portrait expression set. 根据权利要求1所述的交互方法，其特征在于，所述第一应用中包括多个联系人，所述多个人像表情集是通过以下方式生成的：The interaction method according to claim 1, characterized in that the first application includes multiple contacts, and the plurality of portrait expression sets are generated in the following manner: 获取所述电子设备中存储的多张人像图像；Obtaining multiple portrait images stored in the electronic device; 将对应于同一联系人的至少一张人像图像划分至同一人像图像集中；Classifying at least one portrait image corresponding to the same contact into the same portrait image set; 对各所述人像图像集中的所述至少一张人像图像进行图像处理，得到各所述人像图像集对应的人像表情集，并将各所述人像表情集与所述第一应用中的各所述联系人关联。Performing image processing on the at least one portrait image in each portrait image set to obtain a portrait expression set corresponding to each portrait image set, and associating each portrait expression set with each contact in the first application.
根据权利要求8所述的交互方法，其特征在于，对各所述人像图像集中的所述至少一张人像图像进行图像处理，得到各所述人像图像集对应的人像表情集，包括：The interaction method according to claim 8, characterized in that performing image processing on the at least one portrait image in each portrait image set to obtain a portrait expression set corresponding to each portrait image set includes: 对各所述人像图像集中的所述至少一张人像图像，利用表情制作模型生成各所述人像图像集对应的人像表情集。For the at least one portrait image in each of the portrait image sets, using an expression production model to generate a portrait expression set corresponding to each of the portrait image sets. 根据权利要求9所述的交互方法，其特征在于，所述表情制作模型包括：The interaction method according to claim 9, characterized in that the expression production model includes: 确定各所述人像图像为预设表情的权重的人像子模型；a portrait sub-model that determines the weights of each of the portrait images for preset expressions; 确定预置的表情文字为所述预设表情的权重的文字子模型；a text sub-model that determines the weights of preset emoticon text for the preset expressions; 根据所述人像子模型和所述文字子模型的输出结果，将各所述人像图像以及匹配的预置的表情文字生成对应的人像表情的表情生成子模型。an expression generation sub-model that, based on the output results of the portrait sub-model and the text sub-model, generates a corresponding portrait expression from each of the portrait images and the matched preset emoticon text.
11. The interaction method according to claim 8, further comprising:
detecting a text input operation by the user in the first session window of the first interface, and receiving the text input by the user;
determining, in the portrait image set corresponding to the first portrait expression set, at least one target portrait image matching the text;
displaying, by the electronic device, a second interface of the first application, the second interface comprising the at least one target portrait image;
detecting an image selection operation by the user on the at least one portrait image in the second interface, and generating a target portrait expression based on the text and the portrait image corresponding to the image selection operation; and
displaying the target portrait expression in the first session window.
12. The interaction method according to claim 1, wherein the first application comprises at least one of an instant messaging application, a text messaging application, a conferencing application, or a social application.
13. An electronic device, comprising:
a memory configured to store instructions to be executed by one or more processors of the electronic device; and
a processor configured to, when the instructions are executed by the one or more processors, perform the interaction method according to any one of claims 1 to 12.
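A minimal sketch of the text-to-image matching step (again, purely illustrative and not part of the claims): here the match is approximated by token overlap between the user-entered text and a per-image caption, with all names and the scoring rule being assumptions:

```python
from typing import Dict, List

def find_target_images(text: str, captions: Dict[str, str], top_k: int = 3) -> List[str]:
    """Rank stored portrait images by crude token overlap between the
    user-entered text and each image's caption/keywords, returning up
    to top_k candidate target portrait images."""
    scored = []
    for image_id, caption in captions.items():
        overlap = len(set(text.lower().split()) & set(caption.lower().split()))
        if overlap > 0:
            scored.append((overlap, image_id))
    # Highest overlap first; ties broken by image id for determinism.
    scored.sort(key=lambda t: (-t[0], t[1]))
    return [image_id for _, image_id in scored[:top_k]]
```

The returned candidates would populate the second interface, from which the user's image selection operation picks the portrait image used to generate the target portrait expression.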
14. A computer-readable storage medium, having instructions stored thereon which, when executed on an electronic device, cause the electronic device to perform the interaction method according to any one of claims 1 to 12.
15. A computer program product, comprising instructions for implementing the interaction method according to any one of claims 1 to 12.
PCT/CN2023/085438 2022-04-12 2023-03-31 Interaction method, device and medium Ceased WO2023197888A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210383986.X 2022-04-12
CN202210383986.XA CN116962563B (en) 2022-04-12 2022-04-12 Interaction method, device and medium

Publications (1)

Publication Number Publication Date
WO2023197888A1 true WO2023197888A1 (en) 2023-10-19

Family

ID=88328888

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/085438 Ceased WO2023197888A1 (en) 2022-04-12 2023-03-31 Interaction method, device and medium

Country Status (2)

Country Link
CN (1) CN116962563B (en)
WO (1) WO2023197888A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2025084991A1 (en) * 2023-10-20 2025-04-24 脸萌有限公司 Method and apparatus for emoji usage, device, and storage medium

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118605763A (en) * 2024-05-28 2024-09-06 北京达佳互联信息技术有限公司 Synthetic expression generation method, device, electronic device and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101146056A (en) * 2007-09-24 2008-03-19 腾讯科技(深圳)有限公司 A display method and system for emotion icons
CN110019286A (en) * 2017-07-19 2019-07-16 中国移动通信有限公司研究院 A kind of expression recommended method and device based on user social contact relationship
CN110099159A (en) * 2018-01-29 2019-08-06 优酷网络技术(北京)有限公司 A kind of methods of exhibiting and client of chat interface
CN111162993A (en) * 2019-12-26 2020-05-15 上海连尚网络科技有限公司 Information fusion method and device
CN112905791A (en) * 2021-02-20 2021-06-04 北京小米松果电子有限公司 Expression package generation method and device and storage medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
IL226047A (en) * 2013-04-29 2017-12-31 Hershkovitz Reshef May Method and system for providing personal emoticons
CN104267877B (en) * 2014-09-30 2018-03-16 小米科技有限责任公司 The display methods and device of expression picture, electronic equipment
CN105975563B (en) * 2016-04-29 2019-10-11 腾讯科技(深圳)有限公司 Expression recommendation method and device
CN107145270A (en) * 2017-04-25 2017-09-08 北京小米移动软件有限公司 Emotion icons sort method and device
WO2019162842A1 (en) * 2018-02-20 2019-08-29 Hike Private Limited A system and a method for customizing an image based on facial expressions
CN110414404A (en) * 2019-07-22 2019-11-05 腾讯科技(深圳)有限公司 Image data processing method, device and storage medium based on instant messaging
CN111476154B (en) * 2020-04-03 2025-12-30 深圳传音控股股份有限公司 Methods, apparatus, devices and computer-readable storage media for generating emojis



Also Published As

Publication number Publication date
CN116962563A (en) 2023-10-27
CN116962563B (en) 2025-10-03

Similar Documents

Publication Publication Date Title
AU2021201419B2 (en) Device, method, and graphical user interface for adjusting the appearance of a control
US12470501B2 (en) User interfaces for messages
US11175817B2 (en) Device, method, and graphical user interface for displaying application status information
US11048873B2 (en) Emoji and canned responses
US10996917B2 (en) User interfaces for audio media control
US20240029334A1 (en) Techniques for managing an avatar on a lock screen
CN102763079B (en) The application programming interface (API) of keyboard is replaced with self-defined control
JP2019215900A (en) Device, method and graphical user interface for managing folder
WO2021047230A1 (en) Method and apparatus for obtaining screenshot information
US12200161B2 (en) User interfaces for presenting indications of incoming calls
WO2023197888A1 (en) Interaction method, device and medium
CN107924256A (en) Emoji and canned replies
US20240364645A1 (en) User interfaces and techniques for editing, creating, and using stickers
US20230379427A1 (en) User interfaces for managing visual content in a media representation
CN114205447A (en) Rapid setting method and device of electronic equipment, storage medium and electronic equipment
CN116048317B (en) Display method and device
CN114338572B (en) Information processing method, related equipment and storage medium
CN116450066A (en) Split screen display method, electronic device and readable storage medium
CN115877939A (en) Input method, electronic device and storage medium
CN119091000A (en) Wallpaper generation method, readable medium and electronic device
CN120123222A (en) Dirt point detection method, device, electronic device and storage medium
TW201514828A (en) Device, method, and graphical user interface for adjusting the appearance of a control

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23787533

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 23787533

Country of ref document: EP

Kind code of ref document: A1