WO2022156557A1 - Image display method, apparatus, device and medium - Google Patents
Image display method, apparatus, device and medium
- Publication number
- WO2022156557A1 (PCT/CN2022/071150, CN2022071150W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- expression
- target
- user
- display
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/451—Execution arrangements for user interfaces
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/60—Editing figures and text; Combining figures or text
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/16—Constructional details or arrangements
- G06F1/1613—Constructional details or arrangements for portable computers
- G06F1/1633—Constructional details or arrangements of portable computers not specific to the type of enclosures covered by groups G06F1/1615 - G06F1/1626
- G06F1/1684—Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675
- G06F1/1686—Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675 the I/O peripheral being an integrated camera
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/53—Querying
- G06F16/535—Filtering based on additional data, e.g. user or group profiles
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/53—Querying
- G06F16/538—Presentation of query results
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/95—Retrieval from the web
- G06F16/953—Querying, e.g. by the use of web search engines
- G06F16/9536—Search customisation based on social or collaborative filtering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06F3/04817—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04845—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
- G06F3/04886—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures by partitioning the display area of the touch-screen or the surface of the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
- G06F40/274—Converting codes to words; Guess-ahead of partial word inputs
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/30—Semantic analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/10—Office automation; Time management
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/01—Social networking
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L51/00—User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
- H04L51/04—Real-time or near real-time messaging, e.g. instant messaging [IM]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L51/00—User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
- H04L51/07—User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail characterised by the inclusion of specific contents
- H04L51/10—Multimedia information
Definitions
- the present disclosure relates to the technical field of image processing, and in particular, to an image display method, apparatus, device, and medium.
- the present disclosure provides an image display method, apparatus, device and medium.
- the present disclosure provides an image display method, including:
- when the input text displayed in the conversation interface triggers an expression recommendation event, displaying an expression recommendation panel in the conversation interface, the expression recommendation panel displaying a target entry icon, and the target entry icon being used to trigger display of a customized target expression image;
- when a first trigger operation on the target entry icon is detected, stopping display of the expression recommendation panel, and displaying, in the conversation interface, a target expression display panel corresponding to the target entry icon, the target expression display panel displaying a first preview image, the first preview image being a preview image of the target expression image.
- an image display apparatus, comprising:
- a first display unit, configured to display an expression recommendation panel in the conversation interface when the input text displayed in the conversation interface triggers an expression recommendation event, the expression recommendation panel displaying a target entry icon, and the target entry icon being used to trigger display of a customized target expression image;
- a second display unit, configured to stop displaying the expression recommendation panel when a first trigger operation on the target entry icon is detected; and
- a third display unit, configured to display, in the conversation interface, a target expression display panel corresponding to the target entry icon, the target expression display panel displaying a first preview image, the first preview image being a preview image of the target expression image.
- an image display device, comprising a processor and a memory storing executable instructions;
- the processor is configured to read the executable instructions from the memory and execute them to implement the image display method described in the first aspect.
- the present disclosure provides a computer-readable storage medium, the storage medium storing a computer program which, when executed by a processor, enables the processor to implement the image display method described in the first aspect.
- with the image display method, apparatus, device, and medium of the embodiments of the present disclosure, when the input text displayed in the conversation interface triggers an expression recommendation event, an expression recommendation panel can be displayed in the conversation interface;
- the expression recommendation panel may include a target entry icon for triggering display of a customized target expression image; when a first trigger operation on the target entry icon is detected, display of the expression recommendation panel can be stopped, and a target expression display panel corresponding to the target entry icon can be displayed in the conversation interface;
- the target expression display panel can display the preview image of the target expression image, so that when the user's input text triggers the expression recommendation event, the user can quickly enter the target expression display panel showing the preview image of the target expression image directly through the target entry icon displayed in the expression recommendation panel;
- this improves the convenience of searching for custom expression images, simplifies the user's operations for finding user-defined expression images, and thereby improves the user's experience.
- FIG. 1 is an architectural diagram of an image display provided by an embodiment of the present disclosure
- FIG. 2 is an architectural diagram of another image display provided by an embodiment of the present disclosure
- FIG. 3 is a schematic flowchart of an image display method according to an embodiment of the present disclosure.
- FIG. 4 is a schematic diagram of an expression recommendation panel according to an embodiment of the present disclosure.
- FIG. 5 is a schematic diagram of an entry triggering process according to an embodiment of the present disclosure.
- FIG. 6 is a schematic flowchart of a method for generating a first expression image according to an embodiment of the present disclosure
- FIG. 7 is a schematic diagram of invitation information provided by an embodiment of the present disclosure.
- FIG. 8 is a schematic diagram of another entry triggering process provided by an embodiment of the present disclosure.
- FIG. 9 is a schematic diagram of yet another entry triggering process provided by an embodiment of the present disclosure.
- FIG. 10 is a schematic flowchart of a method for generating a third expression image according to an embodiment of the present disclosure
- FIG. 11 is a schematic diagram of an expression display panel according to an embodiment of the present disclosure.
- FIG. 12 is a schematic diagram of another expression display panel provided by an embodiment of the present disclosure.
- FIG. 13 is a schematic diagram of another expression recommendation panel provided by an embodiment of the present disclosure.
- FIG. 14 is a schematic diagram of a display mode of an entry icon provided by an embodiment of the present disclosure.
- FIG. 15 is a schematic structural diagram of an image display apparatus according to an embodiment of the present disclosure.
- FIG. 16 is a schematic structural diagram of an image display device according to an embodiment of the present disclosure.
- the term "including" and variations thereof are open-ended inclusions, i.e., "including but not limited to".
- the term “based on” is “based at least in part on.”
- the term “one embodiment” means “at least one embodiment”; the term “another embodiment” means “at least one additional embodiment”; the term “some embodiments” means “at least some embodiments”. Relevant definitions of other terms will be given in the description below.
- the image display method provided by the present disclosure can be applied to the architecture shown in FIG. 1 and FIG. 2 , which will be described in detail with reference to FIG. 1 and FIG. 2 .
- FIG. 1 shows an architecture diagram of an image display provided by an embodiment of the present disclosure.
- the image display architecture may include at least one first electronic device 110 and at least one second electronic device 120 on the client side.
- the first electronic device 110 and the second electronic device 120 may establish a connection and perform information exchange through a network protocol such as Hyper Text Transfer Protocol over Secure Socket Layer (HTTPS).
- the first electronic device 110 and the second electronic device 120 may each be a mobile phone, a tablet computer, a desktop computer, a notebook computer, a vehicle-mounted terminal, a wearable electronic device, an all-in-one computer, a smart home device, or the like, and may also be a virtual machine or a device simulated by an emulator.
- the designated platform may have an instant messaging function.
- the designated platform may be a designated website or a designated application.
- when the input text displayed in the conversation interface triggers an expression recommendation event, an expression recommendation panel may be displayed in the conversation interface;
- the expression recommendation panel may display a target entry icon for triggering display of the customized target expression image; when a first trigger operation of the first user on the target entry icon is detected, display of the expression recommendation panel may be stopped;
- a target expression display panel corresponding to the target entry icon is then displayed in the conversation interface, and the target expression display panel may display a first preview image, where the first preview image may be a preview image of the target expression image.
- in this way, when the input text triggers an expression recommendation event, the user can enter the target expression display panel showing the preview image of the target expression image directly through the target entry icon displayed in the expression recommendation panel, without performing cumbersome operations to find the self-defined target expression image;
- this improves the convenience of searching for the self-defined target expression image, simplifies the user's operations for finding it, and thereby improves the user's experience.
- the image display method provided by the embodiment of the present disclosure can be applied not only to the above-mentioned architecture composed of multiple electronic devices, but also to the architecture composed of electronic devices and servers, which will be specifically described with reference to FIG. 2 .
- FIG. 2 shows an architecture diagram of another image display provided by an embodiment of the present disclosure.
- the image display architecture may include at least one first electronic device 110 and at least one second electronic device 120 on the client side, and at least one server 130 on the server side.
- the first electronic device 110 , the second electronic device 120 and the server 130 can establish a connection and perform information exchange through a network protocol such as HTTPS, and the first electronic device 110 and the second electronic device 120 can communicate through the server 130 .
- the first electronic device 110 and the second electronic device 120 may each be a mobile phone, a tablet computer, a desktop computer, a notebook computer, a vehicle-mounted terminal, a wearable electronic device, an all-in-one computer, a smart home device, or the like, and may also be a virtual machine or a device simulated by an emulator.
- the server 130 may be a device with storage and computing functions, such as a cloud server or a server cluster.
- the first user using the first electronic device 110 and the second user using the second electronic device 120 can conduct conversational chat.
- the designated platform may have an instant messaging function.
- the designated platform may be a designated website or a designated application.
- the first electronic device 110 may acquire the customized target facial expression image sent by the server 130 .
- when the input text displayed in the conversation interface triggers an expression recommendation event, an expression recommendation panel may be displayed in the conversation interface;
- the expression recommendation panel may display a target entry icon for triggering display of the target expression image; when a first trigger operation of the first user on the target entry icon is detected, display of the expression recommendation panel may be stopped;
- a target expression display panel corresponding to the target entry icon is then displayed in the conversation interface, and the target expression display panel may display a first preview image, where the first preview image may be a preview image of the target expression image.
- in this way, the user can first obtain the target expression image before entering the input text; after the target expression image is obtained, when the input text triggers the expression recommendation event, the user can enter the target expression display panel showing the preview image of the target expression image directly through the target entry icon displayed in the expression recommendation panel;
- the customized target expression image can thus be found without cumbersome operations, which improves the convenience of searching for the user-defined target expression image, simplifies the user's operations for finding it, and thereby improves the user's experience.
- the image display method may be performed by an electronic device.
- the electronic device may be the first electronic device 110 in the client shown in FIGS. 1 and 2 .
- the electronic devices may include devices with communication functions such as mobile phones, tablet computers, desktop computers, notebook computers, vehicle terminals, wearable devices, all-in-one computers, and smart home devices, and may also include devices simulated by virtual machines or simulators.
- FIG. 3 shows a schematic flowchart of an image display method provided by an embodiment of the present disclosure.
- the image display method may include the following steps.
- when the electronic device displays the conversation interface, the user can enter the input text in the input box of the conversation interface.
- the electronic device can detect the input text in real time.
- an emoticon recommendation panel displaying the target entry icon can be displayed in the conversation interface.
- in some embodiments, the electronic device may acquire the input text in real time and input it into a pre-trained emotion classification model to obtain an emotion type corresponding to the input text, where the emotion type is used to indicate the emotion to be expressed by the input text;
- the electronic device can then check the non-customized expression images stored locally, and when it detects that an expression image of the emotion type corresponding to the input text exists, it can determine that the input text triggers the expression recommendation event and display, in the conversation interface, the expression recommendation panel with the target entry icon;
- in other embodiments, the electronic device may acquire the input text in real time and extract emotional keywords from it; the electronic device can then check the image tags of the non-customized expression images stored locally, and when it detects that an expression image whose image tag matches an emotional keyword exists, it can determine that the input text triggers the expression recommendation event and display, in the conversation interface, the expression recommendation panel with the target entry icon.
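- as an illustrative, non-normative sketch of the keyword-based trigger check described above, the following Python snippet assumes a toy keyword lexicon and a simple tag structure for locally stored, non-customized expression images; the names and data model are hypothetical and not part of this disclosure:

```python
# Illustrative sketch of the keyword-based trigger check described above.
# The lexicon, tags, and function names are hypothetical, not from the patent.
from dataclasses import dataclass

EMOTION_KEYWORDS = {"haha": "laugh", "lol": "laugh", "sad": "sad", "angry": "angry"}

@dataclass
class LocalEmoji:
    path: str
    tags: frozenset  # image tags attached to a locally stored, non-customized emoji

def extract_emotion_keywords(input_text: str) -> set:
    """Return the emotion labels whose keywords appear in the input text."""
    text = input_text.lower()
    return {label for kw, label in EMOTION_KEYWORDS.items() if kw in text}

def triggers_recommendation(input_text: str, local_emojis: list) -> bool:
    """True if any local emoji's tag matches an emotion keyword of the text."""
    labels = extract_emotion_keywords(input_text)
    return any(labels & emoji.tags for emoji in local_emojis)

if __name__ == "__main__":
    emojis = [LocalEmoji("laugh_01.png", frozenset({"laugh"}))]
    print(triggers_recommendation("hahahaha so funny", emojis))  # True -> show panel
```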
- the expression recommendation panel may be a container for holding the target entry icon, so that the expression recommendation panel can display the target entry icon.
- the expression recommendation panel may be displayed in the information display area of the conversation interface.
- the information display area may be an area for displaying conversational chat records.
- the conversational chat record may include at least one of instant conversational messages and historical conversational messages.
- the expression recommendation panel may be superimposed and displayed in the information display area aligned to the right, and located at the top of the input box.
- FIG. 4 shows a schematic diagram of an expression recommendation panel provided by an embodiment of the present disclosure.
- the electronic device 401 can display a conversation interface 402 where Xiaohong and Xiaolan conduct conversational chat, and the conversation interface 402 displays an information display area 403 , an input box 404 and a virtual keyboard control 405 .
- after Xiaohong enters the input text "hahahaha" in the input box 404, the electronic device determines that the input text triggers the expression recommendation event, and displays the expression recommendation panel 406 as a right-aligned overlay in the information display area 403 above the input box 404;
- the expression recommendation panel 406 may display a target entry icon 407 corresponding to the custom expression image.
- the expression recommendation panel may also be displayed between the input box and the virtual keyboard controls of the conversation interface.
- the virtual keyboard control can be used for the user to input text into the input box.
- the electronic device may add an expression recommendation panel display area between the input box and the virtual keyboard control, and display the expression recommendation panel in the expression recommendation panel display area.
- the expression recommendation panel may be displayed between the input box and the virtual keyboard control with both ends aligned.
- the target entry icon may be used to trigger the display of a customized target facial expression image. Therefore, the user can make the electronic device directly display the preview image of the target facial expression image by triggering the target entry icon.
- the expression image is an image with a meaning expression function, which can reflect the inner activity, emotion, emotion or specific semantics of the user who sends the expression image.
- the customized target emoticon image can be a customized emoticon image.
- the target expression image may be an expression image that needs to be generated by combining with the user's own facial features.
- the target facial expression image may include at least one of a static facial expression image and a dynamic facial expression image.
- the static expression image can be a frame of static image
- the static expression image can be an image in the Portable Network Graphics (PNG) file format
- the dynamic expression image is an animation image composed of multiple frames of static images.
- the dynamic expression image may be an image in the Graphics Interchange Format (GIF) file format.
- when the expression recommendation panel displaying the target entry icon is displayed in the conversation interface, the user may perform a first trigger operation on the target entry icon;
- the electronic device can detect the user's operations on the conversation interface in real time, and stop displaying the expression recommendation panel when the first trigger operation on the target entry icon is detected.
- the first triggering operation may be operations such as clicking, long-pressing, and double-clicking on the target entry icon, which is not limited herein.
- the electronic device may directly stop displaying the expression recommendation panel.
- the electronic device may cancel the display area of the expression recommendation panel, and then stop displaying the expression recommendation panel.
- the electronic device may display the target expression display panel corresponding to the target entry icon in the conversation interface.
- the target expression display panel can be a container for storing the target expression image, so that the target expression display panel can display the first preview image corresponding to the target expression image.
- the target expression display panel may be superimposed and displayed on the virtual keyboard control, and displayed over the virtual keyboard control.
- the target expression display panel can also be displayed in place of the virtual keyboard control.
- the number of target expression images may be one or multiple, which is not limited herein.
- to sum up, when the input text displayed in the conversation interface triggers an expression recommendation event, an expression recommendation panel is displayed in the conversation interface, and the expression recommendation panel may include a target entry icon for triggering display of a customized target expression image; when a first trigger operation on the target entry icon is detected, display of the expression recommendation panel can be stopped, and a target expression display panel corresponding to the target entry icon can be displayed in the conversation interface, the target expression display panel showing a preview image of the target expression image;
- in this way, when the user's input text triggers the expression recommendation event, the user can enter the target expression display panel showing the preview image of the target expression image directly through the target entry icon displayed in the expression recommendation panel, which improves the convenience of finding the customized target expression image, simplifies the user's operations for finding it, and thereby improves the user's experience.
- in some embodiments, when the electronic device detects that the input text triggers an expression recommendation event, it can determine, according to the expression types of the custom expression images stored locally on the electronic device, which target expression image the user can directly trigger for display through the expression recommendation panel, and then display, in the expression recommendation panel, the target entry icon used to trigger display of that target expression image (a decision sketch follows below).
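- the following minimal sketch illustrates the entry-icon selection just described, assuming the three cases detailed later in this disclosure (a co-shot custom image exists, only a self-portrait custom image exists, or no custom image exists yet); the names are illustrative, not identifiers from the patent:

```python
# Minimal sketch of the entry-icon selection described above. Names are
# illustrative; the actual icon types are described in the embodiments below.
from enum import Enum, auto

class EntryIcon(Enum):
    CO_SHOT = auto()        # first expression entry icon (co-shot custom image exists)
    SELF_PORTRAIT = auto()  # third expression entry icon (only a selfie custom image exists)
    TEMPLATE = auto()       # second expression template entry icon (no custom image yet)

def choose_entry_icon(has_co_shot_image: bool, has_selfie_image: bool) -> EntryIcon:
    if has_co_shot_image:
        return EntryIcon.CO_SHOT
    if has_selfie_image:
        return EntryIcon.SELF_PORTRAIT
    return EntryIcon.TEMPLATE

print(choose_entry_icon(has_co_shot_image=False, has_selfie_image=True))
# EntryIcon.SELF_PORTRAIT
```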
- in some embodiments, the conversation interface may be a conversational chat interface between the first user and the second user.
- the target expression image may include the first expression image.
- the first expression image is generated according to the first face image of the first user and the second face image of the second user, that is, the first expression image may be a co-shot custom expression image of the first face image and the second face image;
- the expression type of the first expression image is the co-shot type;
- correspondingly, the target entry icon may be a first expression entry icon, that is, the co-shot custom expression entry icon for the first face image and the second face image;
- the target expression display panel may be a first expression display panel, that is, an expression display panel used to display the co-shot custom expression image of the first face image and the second face image.
- in some embodiments, the conversation interface may be a conversational chat interface between the first user and a second user.
- the first facial expression image may be a co-shot custom facial expression image generated according to the first facial image of the first user and the second facial image of the second user.
- in other embodiments, the conversation interface may be a conversational chat interface between the first user and a plurality of second users.
- the first facial expression image may be a co-shot custom facial expression image generated according to the first facial image of the first user and the second facial images of all the second users.
- FIG. 5 shows a schematic diagram of an ingress triggering process provided by an embodiment of the present disclosure.
- the electronic device 501 can display a conversation interface 502 where Xiaohong and Xiaolan conduct conversational chat, and display an information display area 503 , an input box 504 and a virtual keyboard control 505 in the conversation interface 502 .
- after Xiaohong enters the input text "Hahahaha" in the input box 504, the electronic device determines that the input text triggers the expression recommendation event, and displays the expression recommendation panel 506 as a right-aligned overlay in the information display area 503 above the input box 504;
- the expression recommendation panel 506 may display a co-shot custom emoticon entry icon 507 corresponding to the co-shot custom emoticon image.
- Xiaohong can click on the co-shot custom emoticon entry icon 507, and the electronic device can replace the virtual keyboard control 505 with an emoji display panel 508 of the co-shot custom emoticon image when detecting a click operation on the co-shot custom emoticon entry icon 507.
- the expression display panel 508 may display a preview image 509 of the custom expression image that matches the shot.
- in this way, when a co-shot custom expression image between the conversation users in the current conversation interface exists on the electronic device, the co-shot custom expression entry icon corresponding to that image can be displayed directly, helping the user quickly enter the expression display panel of the co-shot custom expression images between the conversation users in the current conversation interface;
- custom expression images that the user can actually use are thereby recommended intelligently, further improving the user's experience.
- optionally, before determining that the input text displayed in the conversation interface triggers the expression recommendation event, the electronic device also needs to generate the first expression image.
- FIG. 6 shows a schematic flowchart of a method for generating a first expression image provided by an embodiment of the present disclosure.
- the first expression image generating method may include the following steps.
- the second user may send a target co-shooting request to the first user through the second electronic device 120 shown in FIG. 1 and FIG. 2;
- target invitation information corresponding to the target co-shooting request sent by the second user to the first user may then be displayed, and the target invitation information may be used to prompt the first user that the second user has sent a co-shooting invitation.
- the target invitation information may include any one of the first invitation information and the second invitation information.
- the first invitation information may be invitation information sent by the second user to the first user by triggering the first invitation prompt information displayed in the session interface.
- the first invitation prompt information is used to prompt the second user that a co-shooting invitation can be sent to the first user.
- in some embodiments, the electronic device used by the second user may display the first invitation prompt information in the conversation interface when both the first user and the second user have generated self-portrait custom expression images; if the second user wants to take a co-shot with the first user, the first invitation prompt information can be triggered, so that the electronic device used by the second user sends a first co-shooting request to the first user, and the electronic device used by the first user can then display the first invitation information corresponding to the first co-shooting request.
- in other embodiments, the electronic device used by the second user can display the first invitation prompt information in the conversation interface; if the second user wants to take a co-shot with the first user, the first invitation prompt information can be triggered, so that the electronic device used by the second user sends the first co-shooting request to the first user, and the electronic device used by the first user can then display the first invitation information corresponding to the first co-shooting request.
- the second invitation information may be invitation information sent by the second user to the first user by triggering the second invitation prompt information displayed in the expression display panel for displaying the second expression image.
- the second facial expression image is generated according to the second face image, that is, the second facial expression image is a self-defined facial expression image of the second user.
- the second invitation prompt information is used to prompt the second user that a co-shooting invitation can be sent to other users in the address book.
- the second invitation prompt information may be further used to prompt the second user that he or she can send a co-shot invitation to other users in the address book who have generated self-portrait custom emoticon images.
- the second user can enter the expression display panel for displaying the self-portrait custom expression image of the second user through the used electronic device, and the second invitation prompt information can be displayed in the expression display panel.
- if the second user wants to take a co-shot expression with another user who has generated a self-portrait custom expression image, the second user can trigger the second invitation prompt information, so that the electronic device used by the second user displays the user information of other users who have generated self-portrait custom expression images, such as user avatars and/or user names;
- the second user can select the user information of at least one user from the displayed user information, for example by clicking it; when the at least one user selected by the second user includes the first user, the electronic device used by the second user can send a second co-shooting request to the first user, so that the electronic device used by the first user can display the second invitation information corresponding to the second co-shooting request.
- S610 may specifically include: in the conversation interface, displaying the target invitation information sent by the second user to the first user.
- the electronic device may display the target invitation information in the information display area of the conversation interface.
- FIG. 7 shows a schematic diagram of invitation information provided by an embodiment of the present disclosure.
- the electronic device 701 can display a conversation interface 702 in which Xiaohong and Xiaolan conduct conversational chat, and display an information display area 703, an input box 704 and a virtual keyboard control 705 in the conversation interface 702.
- the information display area 703 may display the invitation information "Xiaolan invites you to generate a co-shot emoji".
- in some embodiments, the electronic device used by the second user may further display multiple co-shot expression template images, and the second user may select one co-shot expression template image from them;
- the electronic device used by the second user can then obtain the target template identifier of the co-shot expression template image selected by the second user, and send to the first user the target invitation information corresponding to the target co-shooting request carrying the target template identifier (an illustrative payload sketch follows).
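- the sketch below shows one way the target invitation information carrying a template identifier could be assembled; the field names and JSON wire format are assumptions, since the disclosure does not specify an encoding:

```python
# Illustrative sketch of target invitation information carrying a template
# identifier, as described above. Field names are hypothetical.
import json

def build_co_shot_invitation(sender_id: str, receiver_id: str, template_id: str) -> str:
    """Build the invitation message the second user's device sends to the first user."""
    invitation = {
        "type": "co_shot_invitation",
        "from_user": sender_id,              # the second user (inviter)
        "to_user": receiver_id,              # the first user (invitee)
        "target_template_id": template_id,   # expression template selected by the inviter
        "prompt": "invites you to generate a co-shot emoji",
    }
    return json.dumps(invitation)

print(build_co_shot_invitation("xiaolan", "xiaohong", "template_007"))
```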
- when the electronic device displays the target invitation information, the user may perform a second trigger operation on the target invitation information;
- the electronic device can detect the user's operations on the conversation interface in real time, and, upon detecting the user's second trigger operation on the target invitation information, send to the server a first generation request carrying the first user identifier of the first user and the second user identifier of the second user;
- the first generation request is a request for generating the first expression image.
- the second trigger operation may be operations such as clicking, long-pressing, and double-clicking on the target invitation information, which is not limited herein.
- Xiaohong can click the text "co-shot emoji" in the invitation information, so that the electronic device sends to the server a co-shot expression generation request carrying Xiaohong's user identifier and Xiaolan's user identifier.
- in some embodiments, the electronic device may directly send to the server the first generation request carrying the first user identifier of the first user and the second user identifier of the second user.
- in other embodiments, the image display method may further include: displaying a face collection interface, and, in the case that the first face image is collected on the face collection interface, sending the first generation request carrying the first face image to the server;
- that is, when detecting the second trigger operation on the target invitation information, the electronic device may first display the face collection interface and collect the first face image on the face collection interface;
- the first generation request carrying the first face image, together with the first user identifier of the first user and the second user identifier of the second user, is then sent to the server.
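- as a minimal sketch of the first generation request described above, the snippet below assumes a JSON payload over HTTPS; the endpoint, field names, and base64 encoding are assumptions and are not specified by this disclosure:

```python
# Minimal sketch of the first generation request described above.
# Field names and encoding are hypothetical.
import base64
import json
from typing import Optional

def build_first_generation_request(first_user_id: str, second_user_id: str,
                                   first_face_image: Optional[bytes] = None,
                                   target_template_id: Optional[str] = None) -> str:
    """Assemble the payload asking the server to generate the co-shot expression image."""
    payload = {
        "request": "generate_first_expression_image",
        "first_user_id": first_user_id,
        "second_user_id": second_user_id,
    }
    if first_face_image is not None:
        # face image collected on the face collection interface, if any
        payload["first_face_image"] = base64.b64encode(first_face_image).decode("ascii")
    if target_template_id is not None:
        # template identifier forwarded from the target invitation information
        payload["target_template_id"] = target_template_id
    return json.dumps(payload)

print(build_first_generation_request("xiaohong", "xiaolan", target_template_id="template_007"))
```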
- in some embodiments, the server may also send an image acquisition request to the electronic device used by the second user, so that the electronic device used by the second user displays image upload prompt information corresponding to the image acquisition request;
- the image upload prompt information prompts the second user to send the second user's second face image to the server, so that the server can obtain the second face image of the second user.
- the first generation request may be used to instruct the server to generate and feed back a first expression image according to the first face image stored in association with the first user identifier, the second face image stored in association with the second user identifier, and the first expression template image.
- specifically, the server may first perform face segmentation on the first face image and the second face image respectively, so as to cut out the first user's face from the first face image and the second user's face from the second face image;
- edge optimization such as blurring and feathering can then be performed on the extracted first user's face and second user's face, after which head-position following and expression migration processing are applied to obtain face animations;
- each face image corresponds to a face region in the first expression template image, and each face region corresponds to a head position and an expression;
- each frame of each first expression template image is combined with the corresponding frame of the face animation to generate the first expression image corresponding to that first expression template image.
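- the following high-level sketch illustrates the data flow of this server-side compositing pipeline; real face segmentation, feathering, and expression migration require vision models, so the placeholder functions below only mark where those steps would run and are not the patent's implementation:

```python
# High-level sketch of the server-side compositing pipeline described above.
# The placeholder functions only illustrate the data flow, not real image processing.
from typing import List, Dict

def segment_face(face_image: str) -> str:
    """Placeholder: cut the user's face out of the uploaded image."""
    return f"face({face_image})"

def refine_edges(face_cutout: str) -> str:
    """Placeholder: blur/feather the cutout edges."""
    return f"feathered({face_cutout})"

def migrate_expression(face_cutout: str, head_pose: str, expression: str) -> str:
    """Placeholder: make the cutout follow the template's head pose and expression."""
    return f"{face_cutout}@{head_pose}/{expression}"

def compose_first_expression_image(template_frames: List[Dict], face_images: List[str]) -> List[str]:
    """Combine each template frame with the animated faces of all participants.
    Each template frame maps a face slot to a (head_pose, expression) pair."""
    cutouts = [refine_edges(segment_face(img)) for img in face_images]
    output_frames = []
    for frame in template_frames:
        rendered = [
            migrate_expression(cutouts[slot], pose, expr)
            for slot, (pose, expr) in frame.items()
        ]
        output_frames.append(" + ".join(rendered))
    return output_frames

frames = [{0: ("tilt_left", "smile"), 1: ("tilt_right", "laugh")}]
print(compose_first_expression_image(frames, ["xiaohong.png", "xiaolan.png"]))
```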
- in the case where the conversation involves a plurality of users, the target invitation information may carry the user identifiers of all the users;
- the first generation request may likewise carry the user identifiers of all the users;
- after receiving the generation requests sent by the electronic devices of the users corresponding to all the user identifiers, the server may generate the first expression image based on the face images of all the users.
- the target invitation information may carry a target template identifier
- the target template identifier may be the template identifier of the expression template selected by the second user, such as a co-shot expression template.
- the first generation request may also carry a target template identifier
- the first expression template image may be an expression template image corresponding to the target template identifier
- the server may generate and feed back the first expression image according to the first face image stored in association with the first user identifier, the second face image stored in association with the second user identifier, and the first expression template image corresponding to the target template identifier; details are not repeated here.
- the server can generate the co-shot custom facial expression image between the first user and the second user according to the facial expression template selected by the second user, thereby improving the flexibility of generating the co-shot custom facial expression image.
- the electronic device may pull the first expression image from the server to receive the first expression image fed back by the server.
- the electronic device may pull the first expression image in real time after sending the first generation request.
- the electronic device may further pull the first expression image after waiting for a preset waiting time period after sending the first generation request.
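- the sketch below illustrates pulling the generated expression image after a preset waiting period, as described above; the fetch callback and timing values are assumptions, since a real client would call the server's actual API:

```python
# Illustrative sketch of pulling the generated expression image after a preset
# waiting period. The fetch function, timings, and stub server are hypothetical.
import time
from typing import Callable, Optional

def pull_expression_image(fetch: Callable[[], Optional[bytes]],
                          initial_wait_s: float = 2.0,
                          retry_interval_s: float = 1.0,
                          max_attempts: int = 10) -> Optional[bytes]:
    """Wait, then poll the server until the expression image is ready or attempts run out."""
    time.sleep(initial_wait_s)          # preset waiting period after sending the request
    for _ in range(max_attempts):
        image = fetch()                 # returns None while the server is still rendering
        if image is not None:
            return image
        time.sleep(retry_interval_s)
    return None

# Example with a stub server that becomes ready on the third poll.
state = {"calls": 0}
def fake_fetch() -> Optional[bytes]:
    state["calls"] += 1
    return b"GIF89a..." if state["calls"] >= 3 else None

print(pull_expression_image(fake_fetch, initial_wait_s=0, retry_interval_s=0))
```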
- the material collection method of the first expression image is relatively simple, and the user does not need to design text, textures, etc. for the expression image, which can reduce the production time of the expression image and improve the user experience.
- in some embodiments, the conversation interface may be a conversational chat interface between the first user and the second user.
- the target expression image may include the third expression image.
- the third expression image is generated according to the first face image, that is, the third expression image may be a self-defined expression image of the first user, and the expression type of the third expression image is a self-portrait type.
- the target entry icon may be a third expression entry icon, that is, a self-portrait custom expression entry icon of the first user.
- the target expression display panel may be a third expression display panel, that is, an expression display panel for displaying a self-portrait custom expression image of the first user.
- FIG. 8 shows a schematic diagram of another ingress triggering process provided by an embodiment of the present disclosure.
- the electronic device 801 can display a conversation interface 802 in which Xiaohong and Xiaolan conduct conversational chat, and display an information display area 803, an input box 804 and a virtual keyboard control 805 in the conversation interface 802.
- Xiaohong enters the input text "Hahahaha” in the input box 804
- the electronic device determines that the input text "Hahahaha" triggers the expression recommendation event, and displays the expression recommendation panel 806 as a right-aligned overlay in the information display area 803 above the input box 804;
- the expression recommendation panel 806 can display a self-portrait custom emoticon entry icon 807 corresponding to the self-portrait custom emoticon image;
- Xiaohong can click the self-portrait custom emoticon entry icon 807, and on detecting the click operation on it, the electronic device can replace the virtual keyboard control 805 with an expression display panel 808 for the self-portrait custom emoticon image.
- the expression display panel 808 can display a preview image 809 of the self-portrait custom expression image.
- in this way, when the electronic device has no co-shot custom expression image between the conversation users in the current conversation interface but does have a self-portrait custom expression image of the first user, the self-portrait custom expression entry icon corresponding to that image can be displayed directly, helping the user quickly enter the expression display panel of the self-portrait custom expression image;
- even when the user's electronic device has no custom expression image co-shot with other users, custom expression images that the user can use are still recommended intelligently, which avoids infringing the portrait rights of others and further improves the user's experience.
- in some embodiments, the conversation interface may be a conversational chat interface between the first user and the second user.
- the target expression image may include the second expression template image.
- the third expression image is generated according to the first face image of the first user and the second expression template image, that is, the third expression image may be a self-portrait custom expression image of the first user generated by using the second expression template image.
- in some embodiments, the second user may, after having generated a self-portrait custom expression image, send a co-shooting invitation to the first user;
- if the electronic device detects that the third expression image does not exist locally, that is, there is no self-portrait custom expression image of the first user locally, it can be determined that the first user has not generated a custom expression image; therefore, the second expression template image can be used as the target expression image, so that the target expression image includes the second expression template image.
- in other embodiments, the second user can send a co-shooting invitation to the first user;
- if the electronic device detects that neither the third expression image nor the first expression image exists locally, that is, there is locally neither a self-portrait custom expression image of the first user nor a co-shot custom expression image of the first user and the second user, it can be determined that the first user does not have a custom expression image; therefore, the second expression template image can be used as the target expression image, so that the target expression image includes the second expression template image.
- correspondingly, the target entry icon may be a second expression template entry icon, that is, the entry icon of the expression template used to generate the first user's self-portrait custom expression.
- the target expression display panel may be a second expression template display panel, that is, an expression display panel for displaying the second expression template image.
- FIG. 9 shows a schematic diagram of yet another entry triggering process provided by an embodiment of the present disclosure.
- the electronic device 901 can display a conversation interface 902 where Xiaohong and Xiaolan conduct conversational chat, and display an information display area 903 , an input box 904 and a virtual keyboard control 905 in the conversation interface 902 .
- when the input text "Hahahaha" is entered in the input box 904, the electronic device determines that the input text triggers the expression recommendation event, and displays the expression recommendation panel 906 as a right-aligned overlay in the information display area 903 above the input box 904;
- the expression recommendation panel 906 may display an expression template entry icon 907 of the expression template used to generate Xiaohong's self-portrait custom expression;
- Xiaohong can click the expression template entry icon 907, and on detecting the click operation, the electronic device can replace the virtual keyboard control 905 with an expression display panel 908 showing the expression template image used to generate Xiaohong's self-portrait custom expression;
- the expression display panel 908 can display a preview image 909 of that expression template image.
- the face areas in both the second expression template image and the preview image of the second expression template image may be displayed as blank, as shown by the preview image 909 in FIG. 9;
- in some embodiments, the target expression display panel may also display an expression generation trigger control, such as the "Generate Now" button 910 in FIG. 9; the expression generation trigger control may be used to trigger generation of the first user's self-portrait custom expression image, so the user can trigger this control to make the electronic device generate the first user's self-defined expression image.
- the electronic device may also generate a third expression image.
- FIG. 10 shows a schematic flowchart of a method for generating a third expression image provided by an embodiment of the present disclosure.
- the third expression image generation method may include the following steps.
- the user can perform a third trigger operation on the expression generation trigger control.
- the electronic device can detect the user's operation on the target expression display panel in real time, and display the face collection interface in the case of detecting the third trigger operation on the expression generation trigger control.
- the third trigger operation may be operations such as clicking, long-pressing, and double-clicking on the expression generation trigger control, which is not limited herein.
- the electronic device may jump from the conversation interface to the face collection interface for display when detecting the third trigger operation on the expression generation trigger control.
- the face collection interface may include a face collection frame.
- the face collection frame may have a specified face collection angle.
- the user may have the first face image collected through the face collection interface; when the first face image is collected on the face collection interface, the electronic device may send a second generation request carrying the collected first face image to the server.
- in some embodiments, the electronic device may directly collect the first face image displayed in the face collection frame, and, when the first face image is collected on the face collection interface, send the second generation request carrying the first face image to the server.
- in other embodiments, when a complete face is displayed in the face collection frame, the camera control in the face collection interface can be lit; the user can click the camera control, and in response to the click the electronic device collects the first face image displayed in the face collection frame and, when it is collected on the face collection interface, sends the second generation request carrying the first face image to the server.
- displaying a complete face in the face collection frame means that the entire face is inside the face collection frame and the height of the face is not less than half the height of the face collection frame.
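- As a minimal illustration of this "complete face" rule (not part of the original disclosure), the sketch below checks a detected face bounding box against the collection frame; the `Box` type, the coordinate convention, and the sample numbers are assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class Box:
    """Axis-aligned rectangle in screen coordinates (y grows downward)."""
    left: float
    top: float
    right: float
    bottom: float

    @property
    def height(self) -> float:
        return self.bottom - self.top

def is_complete_face(face: Box, frame: Box) -> bool:
    """Return True when the detected face satisfies the rule described above:
    the whole face lies inside the collection frame and its height is at
    least half of the frame height."""
    inside = (face.left >= frame.left and face.right <= frame.right
              and face.top >= frame.top and face.bottom <= frame.bottom)
    tall_enough = face.height >= 0.5 * frame.height
    return inside and tall_enough

# Example: a face box filling the middle of a 400-pixel-high frame passes.
frame = Box(0, 0, 300, 400)
print(is_complete_face(Box(60, 80, 240, 320), frame))   # True
print(is_complete_face(Box(60, 150, 240, 300), frame))  # False: less than half the frame height
```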
- the second generation request may be used to instruct the server to generate and feed back the third expression image according to the first face image and the second expression template image.
- the specific process of generating the third facial expression image by the server is similar to the specific process of generating the first facial expression image, and details are not described here.
- the electronic device may pull the third expression image from the server to receive the third expression image fed back by the server.
- the electronic device may pull the third emoticon image in real time after sending the second generation request.
- the electronic device may also pull the third expression image after waiting for a preset waiting period following the sending of the second generation request.
- after pulling the third expression image, the electronic device can replace the first preview image with the second preview image corresponding to the third expression image in the target expression display panel, so that once the third expression image has been produced, the third expression image is directly displayed to the user.
- when the electronic device has jumped from the conversation interface to the face collection interface, after receiving the third expression image fed back by the server, the electronic device can also jump back from the face collection interface to the conversation interface.
- the material collection method of the third expression image is relatively simple, and the user does not need to design text, textures, etc. for the expression image, which can reduce the production time of the expression image and improve the user experience.
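- To make the flow above concrete, the following sketch outlines one possible client-side implementation of the second generation request and the subsequent pull of the third expression image. It is a hedged illustration only: the endpoint URL, the `task_id` handshake, and the `replace_preview` call are assumptions made for the example, and the real transport and message format are not specified in the source.

```python
import time
import requests  # assumed HTTP client; the disclosure does not name a transport

SERVER = "https://example.invalid/emoji"  # placeholder endpoint, not from the source

def generate_third_expression(first_face_image: bytes, template_id: str,
                              wait_seconds: float = 1.0) -> bytes:
    """Send a second generation request carrying the collected face image,
    then pull the generated third expression image after a preset wait."""
    # Second generation request: asks the server to combine the first face
    # image with the second expression template image.
    resp = requests.post(f"{SERVER}/generate",
                         files={"face": first_face_image},
                         data={"template_id": template_id})
    resp.raise_for_status()
    task_id = resp.json()["task_id"]  # assumed response shape

    # Pull the result; the disclosure allows pulling immediately or after a
    # preset waiting period.
    time.sleep(wait_seconds)
    result = requests.get(f"{SERVER}/result/{task_id}")
    result.raise_for_status()
    return result.content  # bytes of the third expression image

def on_third_expression_received(panel, image_bytes: bytes) -> None:
    """Replace the first preview image (blank template) with the second
    preview image of the freshly generated expression."""
    panel.replace_preview(image_bytes)  # hypothetical UI call
```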
- the target expression image may include the first target text displayed in a preset text style
- the first preview image may also include the first target text displayed in the preset text style.
- each target expression image may correspond to a preset text style.
- the preset text style may include at least one of a font style, a color style, a stroke style, a position style, and an angle style, which is not limited herein.
- when the number of characters in the input text is less than or equal to the preset number threshold, the first target text may include the input text; when the number of characters in the input text is greater than the preset number threshold, the first target text may include the preset text.
- the preset number threshold may be any value set as required, which is not limited here.
- the preset number threshold may be 3, 5, 10, 20, and so on.
- the electronic device may first determine whether the number of characters in the input text is less than or equal to the preset number threshold; if so, it adds the input text to the target expression image in the preset text style, so that the target expression image includes the input text in the preset text style; otherwise, it adds the preset text to the target expression image in the preset text style, so that the target expression image includes the preset text in the preset text style.
- each target expression image may correspond to a preset text.
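- A minimal sketch of the caption rule described above is shown below; the threshold value and the sample preset text are assumptions chosen to mirror the FIG. 11 and FIG. 12 examples.

```python
def choose_caption(input_text: str, preset_text: str, threshold: int = 5) -> str:
    """Pick the first target text rendered onto the expression image:
    the input text itself when it is short enough, otherwise the preset
    text associated with that expression image."""
    return input_text if len(input_text) <= threshold else preset_text

# With a threshold of 5, the 4-character input "哈哈哈哈" keeps the input text,
# while a longer sentence falls back to the image's preset text.
print(choose_caption("哈哈哈哈", preset_text="笑死我了", threshold=5))            # -> 哈哈哈哈
print(choose_caption("今天真是太好笑了吧", preset_text="笑死我了", threshold=5))  # -> 笑死我了
```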
- FIG. 11 shows a schematic diagram of an expression display panel provided by an embodiment of the present disclosure.
- the electronic device 1101 can display a conversation interface 1102 in which Xiaohong and Xiaolan conduct a conversational chat, and the conversation interface 1102 displays an information display area 1103, an input box 1104 and an expression display panel 1105 for self-portrait custom expression images.
- an input text "Hahahaha” is displayed in the input box 1104
- the preset number threshold is 5, the number of characters is less than the preset number threshold
- the expression display panel 1105 displays
- the preview image 1106 of the selfie custom emoticon image may include the input text "Hahahaha" displayed in a preset text style.
- the self-portrait custom emoticon image corresponding to the preview image 1106 may also include the input text "Hahahaha" displayed in the preset text style.
- FIG. 12 shows a schematic diagram of another expression display panel provided by an embodiment of the present disclosure.
- the electronic device 1201 can display a conversation interface 1202 in which Xiaohong and Xiaolan conduct a conversational chat, and the conversation interface 1202 displays an information display area 1203, an input box 1204 and an expression display panel 1205 for self-portrait custom expression images.
- the input text "Hahahaha" is displayed in the input box 1204
- the preset number threshold is 3
- the number of characters is greater than the preset number threshold
- the expression display panel 1205 displays
- the preview image 1206 of the self-portrait custom emoticon image may include preset text displayed in a preset text style, and one preview image 1206 corresponds to one preset text.
- the self-portrait custom emoticon image corresponding to the preview image 1206 may also include corresponding preset text displayed in a preset text style.
- the text displayed in the first preview image can be flexibly adjusted based on the input text, which further improves the user's experience.
- in an embodiment of the present disclosure, optionally, after the target expression display panel corresponding to the target entry icon is displayed, the image display method may further include:
- when a fourth trigger operation on a target preview image among the first preview images is detected, displaying, in the information display area of the conversation interface, the target expression image corresponding to the target preview image, where the target expression image corresponding to the target preview image includes the first target text.
- specifically, when the expression recommendation panel showing the target entry icon is displayed in the conversation interface, the user can perform the first trigger operation on the target entry icon; the electronic device can detect the user's operation on the conversation interface in real time, and when the first trigger operation on the target entry icon is detected, it stops displaying the expression recommendation panel and displays the target expression display panel corresponding to the target entry icon in the conversation interface.
- after the target expression display panel is displayed, the user can perform a fourth trigger operation on a target preview image among the first preview images; the electronic device can detect the fourth trigger operation on the target preview image in real time, and when the fourth trigger operation on the target preview image is detected, it sends the target expression image corresponding to the target preview image to the electronic device used by the second user through the server, and displays the target expression image corresponding to the target preview image in the information display area of the conversation interface, where that target expression image may be displayed with the first target text.
- the fourth trigger operation may be operations such as clicking, long-pressing, and double-clicking on the target preview image, which is not limited herein.
- the electronic device can automatically adjust the text displayed in the customized facial expression image according to the input text entered by the user, which improves the flexibility of the user in using the customized target facial expression image.
- the target portal icon may include a target portal image.
- the target portal image may include any one of the first portal image and the second portal image.
- the first portal image is an image randomly selected in the first preview image.
- the electronic device may randomly select an image from the first preview image corresponding to the target facial expression image, and use the selected image as the first entry image.
- the second entry image is a preview image of a target facial expression image of the same emotion type as the input text.
- the electronic device may detect the target facial expression image, and use the preview image of the target facial expression image with the emotion type corresponding to the input text as the second entry image.
- the electronic device may detect the image label of the target facial expression image, and use the preview image of the target facial expression image whose image label is the emotional keyword in the input text as the second entry image .
- the target portal image can be flexibly adjusted based on the input text, which further improves the user's experience.
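- The two entry-image options above are described as alternative embodiments; the sketch below simply tries the emotion-matched and keyword-matched options first and falls back to a random preview. The dictionary structure of a preview and the sample data are assumptions made for illustration.

```python
import random
from typing import Iterable, Optional, Sequence

def select_entry_image(previews: Sequence[dict], input_emotion: Optional[str] = None,
                       input_keywords: Iterable[str] = ()) -> str:
    """Pick the image shown inside the target entry icon.

    A preview whose emotion type matches the input text (the "second entry
    image") is preferred; otherwise a preview whose image label matches an
    emotion keyword of the input text; otherwise a random preview (the
    "first entry image")."""
    keywords = set(input_keywords)
    if input_emotion is not None:
        for p in previews:
            if p["emotion"] == input_emotion:
                return p["image"]
    for p in previews:
        if keywords.intersection(p["labels"]):
            return p["image"]
    return random.choice(previews)["image"]

previews = [
    {"image": "laugh.png", "emotion": "joy", "labels": ["哈哈", "开心"]},
    {"image": "cry.png", "emotion": "sad", "labels": ["呜呜"]},
]
print(select_entry_image(previews, input_emotion="joy"))       # laugh.png
print(select_entry_image(previews, input_keywords=["呜呜"]))    # cry.png
```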
- no text may be displayed within the target portal icon.
- the target entry icon may further include a second target text displayed in a preset text style.
- the second target text may include the first preset number of characters in the input text.
- the preset number may be any value set as required, which is not limited here.
- the preset number may be 1, 2, 3, etc.
- FIG. 13 shows a schematic diagram of another expression recommendation panel provided by an embodiment of the present disclosure.
- the electronic device 1301 can display a conversation interface 1302 where Xiaohong and Xiaolan conduct conversational chat, and display an information display area 1303 , an input box 1304 and a virtual keyboard control 1305 in the conversation interface 1302 .
- the electronic device determines that the input text "Hahahaha" triggers the emoticon recommendation event, and displays the expression recommendation panel 1306 as a right-aligned overlay in the information display area 1303 above the input box 1304.
- the expression recommendation panel 1306 may display the self-portrait custom emoticon entry icon 1307 corresponding to the self-portrait custom emoticon image.
- the self-portrait custom emoticon entry icon 1307 may include a preview image of one emoticon image randomly selected from the self-portrait custom emoticon images and the first character "Ha" of the input text "Hahahaha".
- when the number of characters in the input text is greater than the preset number threshold, no text may be displayed in the target portal icon; when the number of characters in the input text is less than or equal to the preset number threshold, the target portal icon may include the first preset number of characters of the input text.
- in other embodiments, when the number of characters in the input text is greater than the preset number, the second target text may further include an ellipsis, such as "…"; that is, the second target text may be composed of the first preset number of characters of the input text followed by the ellipsis.
- the text displayed in the target portal icon can be flexibly adjusted based on the input text, which further improves the user's experience.
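- A compact sketch of this entry-icon label logic, combining the alternatives above, might look like the following; the parameter names and the decision to append the ellipsis only when characters are cut off are assumptions made for the example.

```python
def entry_icon_label(input_text: str, threshold: int, preset_count: int = 1,
                     add_ellipsis: bool = True) -> str:
    """Build the second target text shown inside the target entry icon:
    no text when the input is longer than the threshold, otherwise the
    first `preset_count` characters, optionally followed by an ellipsis."""
    if len(input_text) > threshold:
        return ""                      # icon shows no text
    label = input_text[:preset_count]
    if add_ellipsis and len(input_text) > preset_count:
        label += "…"
    return label

print(entry_icon_label("哈哈哈哈", threshold=5, preset_count=1))  # -> 哈…
print(entry_icon_label("今天真是太好笑了吧", threshold=5))          # -> "" (no text)
```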
- the expression recommendation panel may display a third preview image, and the third preview image may include the target entry icon and the preview image of the fourth expression image. That is, in addition to displaying the target entry icon, the emoticon recommendation panel can also display the preview image of the fourth emoticon image.
- the fourth expression image may be a non-customized expression image of the same emotion type as the input text.
- the fourth expression image may be a non-customized expression image with an emotion type corresponding to the input text.
- the fourth emoticon image may also be a non-customized emoticon image whose image tag is an emotional keyword in the input text.
- the target portal icon may be displayed before the preview image of the fourth emoticon image.
- the target portal icon 407 may be located on the left side of the preview images 408 of all the fourth emoticon images.
- the target portal icon may be displayed fixedly in the expression recommendation panel, and even if the user performs a sliding operation in the expression recommendation panel, the display position of the target portal icon will not change.
- the target portal icon may be displayed non-fixedly in the expression recommendation panel.
- the display position of the target portal icon may change with the sliding direction of the sliding operation.
- in an embodiment of the present disclosure, optionally, after the expression recommendation panel is displayed, the image display method may further include:
- timing the display duration of the expression recommendation panel; stopping the display of the expression recommendation panel when the display duration reaches a preset duration and the third preview image is not triggered; and displaying the target portal icon in the conversation interface.
- when the electronic device displays the conversation interface, the user can enter the input text in the input box of the conversation interface. While the user is entering the input text, the electronic device can detect the input text in real time, and when the input text triggers the emoticon recommendation event, the expression recommendation panel with the target entry icon can be displayed in the conversation interface. After the expression recommendation panel is displayed, the electronic device may time its display duration; if the display duration reaches the preset duration and the third preview image is not triggered, the display of the expression recommendation panel is stopped, and the target entry icon is displayed in the conversation interface.
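- One way the display-duration timing just described could be realized on the client is sketched below; the widget objects, the 5-second default, and the threading-based timer are assumptions rather than the disclosed implementation.

```python
import threading

class RecommendationPanelTimer:
    """Hide the expression recommendation panel after a preset display
    duration unless its third preview image was triggered, then show the
    standalone target entry icon instead."""

    def __init__(self, panel, entry_icon, preset_duration: float = 5.0):
        self.panel = panel              # hypothetical panel widget
        self.entry_icon = entry_icon    # hypothetical icon widget
        self.triggered = False
        self._timer = threading.Timer(preset_duration, self._on_timeout)

    def start(self) -> None:
        self._timer.start()

    def notify_triggered(self) -> None:
        """Call when the user taps the entry icon or any preview image."""
        self.triggered = True
        self._timer.cancel()

    def _on_timeout(self) -> None:
        if not self.triggered:
            self.panel.hide()
            self.entry_icon.show()      # keep a smaller entry point available

# Minimal stubs so the sketch can run on its own.
class _Stub:
    def __init__(self, name): self.name = name
    def hide(self): print(f"hide {self.name}")
    def show(self): print(f"show {self.name}")

timer = RecommendationPanelTimer(_Stub("panel 406"), _Stub("icon 407"), preset_duration=0.1)
timer.start()  # prints "hide panel 406" then "show icon 407" after 0.1 s
```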
- the target portal icon may be displayed in the information display area of the conversation interface.
- the target portal icon can be superimposed, right-aligned, in the information display area, above the input box.
- in one example, after the display of the expression recommendation panel is stopped, the size of the target portal icon may remain unchanged; in another example, after the display of the expression recommendation panel is stopped, the size of the target portal icon may be reduced by a preset ratio, which is not limited here.
- the preset ratio can be set as required, which is not limited here.
- the expression recommendation panel 406 may display a target entry icon 407 corresponding to a custom expression image and a preview image 408 of a non-custom expression image recommended based on the input text "hahahaha".
- after displaying the expression recommendation panel 406, the electronic device may time the display duration of the expression recommendation panel 406, and if the display duration reaches the preset duration and no user trigger on the target entry icon 407 or on any preview image 408 is detected, it stops displaying the expression recommendation panel 406 and displays the interface as shown in FIG. 14.
- FIG. 14 is a schematic diagram of a display manner of an entry icon provided by an embodiment of the present disclosure.
- the electronic device 401 can display a conversation interface 402 in which Xiaohong and Xiaolan conduct conversational chats.
- the conversation interface 402 displays an information display area 403, an input box 404 and a virtual keyboard control 405; the input box 404 displays the input text "Hahahaha", and a target entry icon 407 is superimposed, right-aligned, in the information display area 403 above the input box 404.
- the target entry icon may also be displayed between the input box and the virtual keyboard control of the conversation interface.
- the electronic device may continue to display the target entry icon in the display area of the expression recommendation panel that is additionally displayed between the input box and the virtual keyboard control.
- the target entry icon may be displayed between the input box and the virtual keyboard control in a right-aligned manner.
- in one example, after the display of the expression recommendation panel is stopped, the size of the target entry icon may remain unchanged; in another example, after the display of the expression recommendation panel is stopped, the size of the target entry icon may be reduced by a preset ratio, in which case the size of the display area of the expression recommendation panel may also be reduced by the preset ratio, which is not limited here.
- the preset ratio can be set as required, which is not limited here.
- even after the display of the expression recommendation panel is stopped, the target entry icon can still be displayed, which further improves the convenience for the user to find the customized target expression image and improves the user experience.
- in an embodiment of the present disclosure, optionally, after the target portal icon is displayed, the image display method may further include: stopping the display of the target portal icon when no input text is displayed in the conversation interface.
- specifically, after displaying the expression recommendation panel, the electronic device may time its display duration; if the display duration reaches the preset duration and the third preview image is not triggered, it stops displaying the expression recommendation panel and displays the target portal icon in the conversation interface.
- after displaying the target portal icon, the electronic device can detect the input text displayed in the input box in real time; if it detects that no input text is displayed in the input box, that is, the user has deleted all the input text in the input box, the display of the target portal icon can be stopped.
- thus, when no input text is displayed in the conversation interface, the display of the target portal icon can be stopped, which avoids the problem of the target portal icon remaining displayed after the user stops editing the conversation content and further improves the user experience.
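- A minimal sketch of this clean-up behaviour, assuming a hypothetical `conversation` object that exposes the entry-icon state, might be:

```python
def on_input_text_changed(conversation, new_text: str) -> None:
    """Keep the standalone target entry icon in sync with the input box:
    once the user has deleted all of the input text, stop displaying it."""
    if new_text == "" and conversation.entry_icon_visible:
        conversation.hide_entry_icon()      # hypothetical UI call
        conversation.entry_icon_visible = False
```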
- further, after timing the display duration of the expression recommendation panel, if the display duration has not reached the preset duration and the first trigger operation on the target entry icon is detected, the display of the expression recommendation panel is stopped, and the target expression display panel corresponding to the target entry icon is displayed in the conversation interface.
- Embodiments of the present disclosure also provide an image display device for implementing the above-mentioned image display method.
- the image display apparatus may be an electronic device.
- the electronic device may be the first electronic device 110 in the client shown in FIGS. 1 and 2 .
- the electronic device may be a mobile phone, a tablet computer, a desktop computer, a notebook computer, a vehicle terminal, a wearable device, an all-in-one computer, a smart home device, or other device with communication functions, or a device simulated by a virtual machine or a simulator.
- the image display device provided by the embodiment of the present disclosure will be described below with reference to FIG. 15 .
- FIG. 15 shows a schematic structural diagram of an image display device provided by an embodiment of the present disclosure.
- the image display apparatus 1500 may include a first display unit 1510 , a second display unit 1520 and a third display unit 1530 .
- the first display unit 1510 can be configured to display an expression recommendation panel in the conversation interface when the input text displayed in the conversation interface triggers an expression recommendation event, where the expression recommendation panel displays a target entry icon, and the target entry icon is used to trigger the display of the customized target expression image.
- the second display unit 1520 may be configured to stop displaying the expression recommendation panel when a first trigger operation on the target portal icon is detected.
- the third display unit 1530 may be configured to display a target expression display panel corresponding to the target entry icon in the conversation interface, the target expression display panel displays a first preview image, and the first preview image is a preview image of the target expression image.
- in the embodiments of the present disclosure, when the input text displayed in the conversation interface triggers the expression recommendation event, an expression recommendation panel is displayed in the conversation interface, and the expression recommendation panel may include a target entry icon for triggering the display of a customized target expression image; then, when the first trigger operation on the target entry icon is detected, the display of the expression recommendation panel can be stopped, and the target expression display panel corresponding to the target entry icon, which displays the preview image of the target expression image, can be displayed in the conversation interface. In this way, when the input text triggers the expression recommendation event, the user can quickly enter the target expression display panel showing the preview image of the target expression image directly through the target entry icon displayed in the expression recommendation panel, which improves the convenience of searching for the user-defined target expression image, simplifies the user's operations for finding it, and thereby improves the user experience.
- the conversation interface may be an interface in which the first user and the second user conduct a conversational chat.
- correspondingly, the target expression image may include a first expression image, and the first expression image may be generated based on the first face image of the first user and the second face image of the second user.
- the image display apparatus 1500 may further include a fourth display unit, a first sending unit and a first receiving unit.
- the fourth display unit may be configured to display target invitation information sent by the second user to the first user.
- the first sending unit may be configured to send a first generation request carrying the first user identifier of the first user and the second user identifier of the second user to the server when a second trigger operation on the target invitation information is detected,
- the first generation request may be used to instruct the server to generate and feed back the first facial expression image according to the first facial image stored in association with the first user identity, the second facial image stored in association with the second user identity, and the first facial expression template image.
- the first receiving unit may be configured to receive the first expression image fed back by the server.
- the target invitation information may include any of the following:
- first invitation information, where the first invitation information may be invitation information sent by the second user to the first user by triggering the first invitation prompt information displayed in the conversation interface;
- second invitation information, where the second invitation information may be invitation information sent by the second user to the first user by triggering the second invitation prompt information displayed in the expression display panel used for displaying the second expression image, and the second expression image may be generated from the second face image.
- the target invitation information may carry a target template identifier
- the target template identifier may be the template identifier of the expression template selected by the second user.
- the first generation request may also carry a target template identifier
- the first expression template image may be an expression template image corresponding to the target template identifier
- the conversation interface may be an interface in which the first user and the second user conduct a conversational chat.
- the target expression image may include a third expression image
- the first expression image may be generated based on the first face image of the first user and the second face image of the second user, and the third expression image may be generated according to the first face image.
- the conversation interface may be an interface in which the first user and the second user conduct a conversational chat.
- the target expression image may include a second expression template image
- the third expression image may be generated according to the first face image of the first user and the second expression template image.
- the target expression display panel may also display an expression generation trigger control.
- the image display apparatus 1500 may further include a fifth display unit, a second sending unit, a second receiving unit and a sixth display unit.
- the fifth display unit may be configured to display a face collection interface when a third trigger operation on the expression generation trigger control is detected.
- the second sending unit may be configured to send a second generation request carrying the first face image to the server when the first face image is collected on the face collection interface, where the second generation request may be used to instruct the server to generate and feed back the third expression image according to the first face image and the second expression template image.
- the second receiving unit may be configured to receive a third emoticon image fed back by the server.
- the sixth display unit may be configured to replace the first preview image with a second preview image for display, and the second preview image may be a preview image of the third facial expression image.
- the first preview image may include the first target text displayed in a preset text style.
- when the number of characters in the input text is less than or equal to the preset number threshold, the first target text may include the input text; when the number of characters in the input text is greater than the preset number threshold, the first target text may include the preset text.
- the target portal icon may include any of the following:
- a first portal image, where the first portal image may be an image randomly selected from the first preview images;
- a second portal image, where the second portal image may be a preview image of a target expression image that has the same emotion type as the input text.
- the target entry icon may include a second target text displayed in a preset text style, and the second target text may include the first preset number of characters in the input text.
- in some embodiments of the present disclosure, the emoticon recommendation panel may display a third preview image, the third preview image may include the target entry icon and a preview image of a fourth emoticon image, and the fourth emoticon image may be a non-customized emoticon image of the same emotion type as the input text.
- the image display apparatus 1500 may further include a display timing unit, a seventh display unit and an eighth display unit.
- the display timing unit may be configured to time the display duration of the expression recommendation panel.
- the seventh display unit may be configured to stop displaying the expression recommendation panel when the display duration reaches a preset duration and the third preview image is not triggered.
- the eighth display unit may be configured to display the target portal icon in the session interface.
- the image display apparatus 1500 may further include a ninth display unit, and the ninth display unit may be configured to stop displaying the target portal icon when the input text is not displayed in the conversation interface.
- the image display apparatus 1500 shown in FIG. 15 can perform the steps in the method embodiments shown in FIGS. 3 to 14 and implement the processes and effects in those method embodiments, which will not be repeated here.
- Embodiments of the present disclosure also provide an image display device, the image display device may include a processor and a memory, and the memory may be used to store executable instructions.
- the processor may be configured to read executable instructions from the memory and execute the executable instructions to implement the image display method in the above-mentioned embodiments.
- FIG. 16 shows a schematic structural diagram of an image display device 1600 suitable for implementing an embodiment of the present disclosure.
- the image display device 1600 in the embodiment of the present disclosure may be an electronic device.
- the electronic device may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and in-vehicle terminals (such as in-vehicle navigation terminals), wearable devices, and the like, as well as fixed terminals such as digital TVs, desktop computers, smart home devices, and the like.
- the electronic device may be the first electronic device 110 in the client shown in FIGS. 1 and 2 .
- the image display device 1600 shown in FIG. 16 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
- the image display apparatus 1600 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 1601, which may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 1602 or a program loaded from a storage device 1608 into a random access memory (RAM) 1603. The RAM 1603 also stores various programs and data necessary for the operation of the image display device 1600.
- the processing device 1601, the ROM 1602, and the RAM 1603 are connected to each other through a bus 1604.
- Input/output (I/O) interface 1605 is also connected to bus 1604 .
- the following devices can be connected to the I/O interface 1605: input devices 1606 including, for example, a touch screen, a touchpad, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope, and the like; output devices 1607 including, for example, a liquid crystal display (LCD), speakers, vibrators, and the like; storage devices 1608 including, for example, a magnetic tape, a hard disk, and the like; and a communication device 1609.
- the communication means 1609 may allow the image display device 1600 to communicate wirelessly or wiredly with other devices to exchange data.
- although FIG. 16 shows the image display apparatus 1600 having various devices, it should be understood that not all of the illustrated devices are required to be implemented or provided; more or fewer devices may alternatively be implemented or provided.
- Embodiments of the present disclosure also provide a computer-readable storage medium storing a computer program which, when executed by a processor, enables the processor to implement the image display method in the foregoing embodiments.
- Embodiments of the present disclosure also provide a computer program product; the computer program product may include a computer program which, when executed by a processor, enables the processor to implement the image display method in the above embodiments.
- embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated in the flowchart.
- the computer program may be downloaded and installed from a network via communication device 1609, or from storage device 1608, or from ROM 1602.
- when the computer program is executed by the processing device 1601, the above-mentioned functions defined in the image display method of the embodiments of the present disclosure are executed.
- the computer-readable medium mentioned above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the above two.
- the computer-readable storage medium can be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
- a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
- a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave with computer-readable program code embodied thereon. Such propagated data signals may take a variety of forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
- a computer-readable signal medium can also be any computer-readable medium other than a computer-readable storage medium that can transmit, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device .
- Program code embodied on a computer readable medium may be transmitted using any suitable medium including, but not limited to, electrical wire, optical fiber cable, RF (radio frequency), etc., or any suitable combination of the foregoing.
- the client and the server can communicate using any currently known or future-developed network protocol, such as HTTP, and can be interconnected with digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include local area networks ("LAN"), wide area networks ("WAN"), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future-developed networks.
- the above-mentioned computer-readable medium may be included in the above-mentioned image display apparatus; or may exist alone without being incorporated into the image display apparatus.
- the above-mentioned computer-readable medium carries one or more programs, and when the above-mentioned one or more programs are executed by the image display device, the image display device is caused to execute:
- when the input text displayed in the conversation interface triggers the expression recommendation event, display the expression recommendation panel in the conversation interface, where the expression recommendation panel displays the target entry icon and the target entry icon is used to trigger the display of the customized target expression image; when the first trigger operation on the target entry icon is detected, stop displaying the expression recommendation panel; and, in the conversation interface, display the target expression display panel corresponding to the target entry icon, where the target expression display panel displays a first preview image, and the first preview image is a preview image of the target expression image.
- computer program code for performing the operations of the present disclosure may be written in one or more programming languages or combinations thereof, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
- the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
- the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (e.g., connected through the Internet using an Internet service provider).
- each block in the flowcharts or block diagrams may represent a module, program segment, or portion of code that contains one or more executable instructions for implementing the specified logical functions.
- the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
- each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented in dedicated hardware-based systems that perform the specified functions or operations, or can be implemented in a combination of dedicated hardware and computer instructions.
- the units involved in the embodiments of the present disclosure may be implemented in software or in hardware. In some cases, the name of a unit does not constitute a limitation on the unit itself.
- exemplary types of hardware logic components include: Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chips (SOCs), Complex Programmable Logical Devices (CPLDs) and more.
- a machine-readable medium may be a tangible medium that may contain or store a program for use by or in connection with the instruction execution system, apparatus or device.
- the machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
- Machine-readable media may include, but are not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, or devices, or any suitable combination of the foregoing.
- machine-readable storage media would include one or more wire-based electrical connections, portable computer disks, hard disks, random access memory (RAM), read only memory (ROM), erasable programmable read only memory (EPROM or flash memory), fiber optics, compact disk read only memory (CD-ROM), optical storage, magnetic storage, or any suitable combination of the foregoing.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Software Systems (AREA)
- Databases & Information Systems (AREA)
- Business, Economics & Management (AREA)
- Data Mining & Analysis (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Human Resources & Organizations (AREA)
- Strategic Management (AREA)
- General Business, Economics & Management (AREA)
- Economics (AREA)
- Marketing (AREA)
- Tourism & Hospitality (AREA)
- Primary Health Care (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Computer Hardware Design (AREA)
- Entrepreneurship & Innovation (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Artificial Intelligence (AREA)
- Computing Systems (AREA)
- Computational Linguistics (AREA)
- Multimedia (AREA)
- Operations Research (AREA)
- Quality & Reliability (AREA)
- User Interface Of Digital Computer (AREA)
- Controls And Circuits For Display Device (AREA)
Abstract
本公开涉及一种图像显示方法、装置、设备及介质。其中,图像显示方法包括:当会话界面内显示的输入文本触发表情推荐事件时,在会话界面内,显示表情推荐面板,表情推荐面板显示有目标入口图标,目标入口图标用于触发显示自定义的目标表情图像;当检测到对目标入口图标的第一触发操作时,停止显示表情推荐面板;在会话界面内,显示目标入口图标对应的目标表情展示面板,目标表情展示面板显示有第一预览图像,第一预览图像为目标表情图像的预览图像。根据本公开实施例,能够使用户在输入文本触发表情推荐事件时,可以直接通过表情推荐面板内的目标入口图标快速地进入用于展示目标表情图像的目标表情展示面板,提高了自定义表情图像的查找便捷性。
Description
本申请要求于2021年01月22日提交国家知识产权局、申请号为202110088297.1、申请名称为“图像显示方法、装置、设备及介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
本公开涉及图像处理技术领域,尤其涉及一种图像显示方法、装置、设备及介质。
随着社交媒体的普及,人们已经不再满足于单纯的文字、语音等交流方式,而需要更有趣的媒介来丰富社交活动,因此各式各样的表情图像应运而生。随之而来的是,用户更加趋向于使用自定义表情图像进行社交活动。
目前,当用户想要发送自定义表情图像时,需要进行一系列繁琐的操作,才能找到自定义表情图像,导致用户查找自定义表情图像的操作较为繁琐,降低了用户的体验。
发明内容
为了解决上述技术问题或者至少部分地解决上述技术问题,本公开提供了一种图像显示方法、装置、设备及介质。
第一方面,本公开提供了一种图像显示方法,包括:
当会话界面内显示的输入文本触发表情推荐事件时,在会话界面内,显示表情推荐面板,表情推荐面板显示有目标入口图标,目标入口图标用于触发显示自定义的目标表情图像;
当检测到对目标入口图标的第一触发操作时,停止显示表情推荐面板;
在会话界面内,显示目标入口图标对应的目标表情展示面板,目标表情展示面板显示有第一预览图像,第一预览图像为目标表情图像的预览图像。
第二方面,本公开提供了一种图像显示装置,包括:
第一显示单元,配置为当会话界面内显示的输入文本触发表情推荐事件时,在会话界面内,显示表情推荐面板,表情推荐面板显示有目标入口图标,目标入口图标用于触发显示自定义的目标表情图像;
第二显示单元,配置为当检测到对目标入口图标的第一触发操作时,停止显示表情推荐面板;
第三显示单元,配置为在会话界面内,显示目标入口图标对应的目标表情展示面板,目标表情展示面板显示有第一预览图像,第一预览图像为目标表情图像的预览图像。
第三方面,本公开提供了一种图像显示设备,包括:
处理器;
存储器,用于存储可执行指令;
其中,处理器用于从存储器中读取可执行指令,并执行可执行指令以实现第一方面所述的图像显示方法。
第四方面,本公开提供了一种计算机可读存储介质,该存储介质存储有计算机程序, 当计算机程序被处理器执行时,使得处理器实现第一方面所述的图像显示方法。
本公开实施例提供的技术方案与现有技术相比具有如下优点:
本公开实施例的图像显示方法、装置、设备及介质,能够在会话界面内显示的输入文本触发表情推荐事件时,在会话界面内,显示表情推荐面板,该表情推荐面板可以包括用于触发显示自定义的目标表情图像的目标入口图标,进而可以在检测到对目标入口图标的第一触发操作的情况下,停止显示表情推荐面板,并且在会话界面内,显示目标入口图标对应的目标表情展示面板,该目标表情展示面板可以显示有目标表情图像的预览图像,使得用户可以在输入文本触发表情推荐事件时,直接通过在表情推荐面板内显示的目标入口图标快速地进入显示有目标表情图像的预览图像的目标表情展示面板,提高了用户对自定义表情图像查找的便捷性,简化了用户查找自定义表情图像的操作,进而提升了用户的体验。
结合附图并参考以下具体实施方式,本公开各实施例的上述和其他特征、优点及方面将变得更加明显。贯穿附图中,相同或相似的附图标记表示相同或相似的元素。应当理解附图是示意性的,原件和元素不一定按照比例绘制。
图1为本公开实施例提供的一种图像显示的架构图;
图2为本公开实施例提供的另一种图像显示的架构图;
图3为本公开实施例提供的一种图像显示方法的流程示意图;
图4为本公开实施例提供的一种表情推荐面板的示意图;
图5为本公开实施例提供的一种入口触发过程的示意图;
图6为本公开实施例提供的一种第一表情图像生成方法的流程示意图;
图7为本公开实施例提供的一种邀请信息的示意图;
图8为本公开实施例提供的另一种入口触发过程的示意图;
图9为本公开实施例提供的又一种入口触发过程的示意图;
图10为本公开实施例提供的一种第三表情图像生成方法的流程示意图;
图11为本公开实施例提供的一种表情展示面板的示意图;
图12为本公开实施例提供的另一种表情展示面板的示意图;
图13为本公开实施例提供的另一种表情推荐面板的示意图;
图14为本公开实施例提供的一种入口图标显示方式的示意图;
图15为本公开实施例提供的一种图像显示装置的结构示意图;
图16为本公开实施例提供的一种图像显示设备的结构示意图。
下面将参照附图更详细地描述本公开的实施例。虽然附图中显示了本公开的某些实施例,然而应当理解的是,本公开可以通过各种形式来实现,而且不应该被解释为限于这里阐述的实施例,相反提供这些实施例是为了更加透彻和完整地理解本公开。应当理解的是,本公开的附图及实施例仅用于示例性作用,并非用于限制本公开的保护范围。
应当理解,本公开的方法实施方式中记载的各个步骤可以按照不同的顺序执行,和/或 并行执行。此外,方法实施方式可以包括附加的步骤和/或省略执行示出的步骤。本公开的范围在此方面不受限制。
本文使用的术语“包括”及其变形是开放性包括,即“包括但不限于”。术语“基于”是“至少部分地基于”。术语“一个实施例”表示“至少一个实施例”;术语“另一实施例”表示“至少一个另外的实施例”;术语“一些实施例”表示“至少一些实施例”。其他术语的相关定义将在下文描述中给出。
需要注意,本公开中提及的“第一”、“第二”等概念仅用于对不同的装置、模块或单元进行区分,并非用于限定这些装置、模块或单元所执行的功能的顺序或者相互依存关系。
需要注意,本公开中提及的“一个”、“多个”的修饰是示意性而非限制性的,本领域技术人员应当理解,除非在上下文另有明确指出,否则应该理解为“一个或多个”。
本公开实施方式中的多个装置之间所交互的消息或者信息的名称仅用于说明性的目的,而并不是用于对这些消息或信息的范围进行限制。
本公开所提供的图像显示方法可以应用于图1和图2所示的架构中,具体结合图1和图2进行详细说明。
图1示出了本公开实施例提供的一种图像显示的架构图。
如图1所示,该图像显示架构中可以包括客户端的至少一个第一电子设备110和至少一个第二电子设备120。第一电子设备110和第二电子设备120可以通过网络协议如超文本传输安全协议(Hyper Text Transfer Protocol over Secure Socket Layer,HTTPS)建立连接并进行信息交互。其中,第一电子设备110和第二电子设备120可以分别包括移动电话、平板电脑、台式计算机、笔记本电脑、车载终端、可穿戴电子设备、一体机、智能家居设备等具有通信功能的设备,也可以是虚拟机或者模拟器模拟的设备。
基于上述架构,在第一电子设备110和至少一个第二电子设备120各自展示的属于指定平台的会话界面内,使用第一电子设备110的第一用户和使用第二电子设备120的第二用户可以实现会话聊天。其中,指定平台可以具有即时通信功能。可选地,指定平台可以为指定网站,也可以为指定应用程序。
在第一用户在会话界面内输入想要发送给第二用户的输入文本的过程中,当会话界面内显示的输入文本触发表情推荐事件时,可以在会话界面内,显示表情推荐面板,该表情推荐面板可以显示有用于触发显示自定义的目标表情图像的目标入口图标,进而可以在检测到第一用户对目标入口图标的第一触发操作的情况下,停止显示表情推荐面板,并且在会话界面内,显示目标入口图标对应的目标表情展示面板,该目标表情展示面板可以显示有第一预览图像,其中,第一预览图像可以为目标表情图像的预览图像。
因此,用户可以在输入文本触发表情推荐事件时,直接通过在表情推荐面板内显示的目标入口图标快速地进入显示有目标表情图像的预览图像的目标表情展示面板,无需用户进行繁琐的操作即可找到自定义的目标表情图像,提高了用户对自定义的目标表情图像查找的便捷性,简化了用户查找自定义的目标表情图像的操作,进而提升了用户的体验。
另外,本公开实施例提供的图像显示方法除了可以应用在上述的多个电子设备组成的架构中,还可以应用在电子设备和服务器组成的架构中,具体结合图2进行说明。
图2示出了本公开实施例提供的另一种图像显示的架构图。
如图2所示,该图像显示架构中可以包括客户端的至少一个第一电子设备110和至少一个第二电子设备120以及服务端的至少一个服务器130。第一电子设备110、第二电子设备120和服务器130可以通过网络协议如HTTPS建立连接并进行信息交互,并且第一电子设备110和第二电子设备120可以通过服务器130实现通信。其中,第一电子设备110和第二电子设备120可以分别包括移动电话、平板电脑、台式计算机、笔记本电脑、车载终端、可穿戴电子设备、一体机、智能家居设备等具有通信功能的设备,也可以是虚拟机或者模拟器模拟的设备。服务器130可以是云服务器或者服务器集群等具有存储及计算功能的设备。
基于上述架构,在第一电子设备110和至少一个第二电子设备120各自展示的属于服务器130提供的指定平台的会话界面内,使用第一电子设备110的第一用户和使用第二电子设备120的第二用户可以实现会话聊天。其中,指定平台可以具有即时通信功能。可选地,指定平台可以为指定网站,也可以为指定应用程序。
在第一用户在会话界面内输入想要发送给第二用户的输入文本之前,第一电子设备110可以获取服务器130发送的自定义的目标表情图像。
在第一用户在会话界面内输入想要发送给第二用户的输入文本的过程中,当会话界面内显示的输入文本触发表情推荐事件时,可以在会话界面内,显示表情推荐面板,该表情推荐面板可以显示有用于触发显示目标表情图像的目标入口图标,进而可以在检测到第一用户对目标入口图标的第一触发操作的情况下,停止显示表情推荐面板,并且在会话界面内,显示目标入口图标对应的目标表情展示面板,该目标表情展示面板可以显示有第一预览图像,其中,第一预览图像可以为目标表情图像的预览图像。
因此,用户可以在输入该输入文本之前,首先获取目标表情图像,并且在获取到目标表情图像之后,在输入文本触发表情推荐事件时,直接通过在表情推荐面板内显示的目标入口图标快速地进入显示有目标表情图像的预览图像的目标表情展示面板,无需用户进行繁琐的操作即可找到自定义的目标表情图像,提高了用户对自定义的目标表情图像的查找便捷性,简化了用户查找自定义的目标表情图像的操作,进而提升了用户的体验。
根据上述架构,下面结合图3-图14对本公开实施例提供的图像显示方法进行说明。
在本公开实施例中,该图像显示方法可以由电子设备执行。在一些实施例中,该电子设备可以为图1和图2中所示的客户端中的第一电子设备110。其中,电子设备可以包括移动电话、平板电脑、台式计算机、笔记本电脑、车载终端、可穿戴设备、一体机、智能家居设备等具有通信功能的设备,也可以包括虚拟机或者模拟器模拟的设备。
图3示出了本公开实施例提供的一种图像显示方法的流程示意图。
如图3所示,该图像显示方法可以包括如下步骤。
S310、当会话界面内显示的输入文本触发表情推荐事件时,在会话界面内,显示表情推荐面板,表情推荐面板显示有目标入口图标。
在本公开实施例中,在电子设备显示会话界面的情况下,用户可以在会话界面的输入 框内录入输入文本,在用户录入输入文本的过程中,电子设备可以实时地对输入文本进行检测,当检测到输入文本触发表情推荐事件时,可以在会话界面内,显示展示有目标入口图标的表情推荐面板。
在一些实施例中,电子设备可以实时地获取输入文本,并且将输入文本输入预先训练得到的情感分类模型,得到该输入文本对应的情感类型,该情感类型用于指示输入文本所要表达的情绪。接着,电子设备可以对本地存储的非自定义表情图像进行检测,在检测到存在具有该输入文本对应的情感类型的表情图像的情况下,可以确定输入文本触发表情推荐事件,并且在会话界面内,显示展示有目标入口图标的表情推荐面板。
在另一些实施例中,电子设备可以实时地获取输入文本,并且提取输入文本中的情感关键词。接着,电子设备可以对本地存储的非自定义表情图像的图像标签进行检测,在检测到存在图像标签为该情感关键词的表情图像的情况下,可以确定输入文本触发表情推荐事件,并且在会话界面内,显示展示有目标入口图标的表情推荐面板。
在本公开实施例中,表情推荐面板可以是用于存放目标入口图标的容器,使得表情推荐面板可以显示有目标入口图标。
在一些实施例中,该表情推荐面板可以显示于会话界面的信息展示区内。其中,信息展示区可以为用于显示会话聊天记录的区域。会话聊天记录可以包括即时会话消息和历史会话消息中的至少一种。
可选地,该表情推荐面板可以向右对齐地叠加显示于信息展示区内,并且位于输入框的顶部。
图4示出了本公开实施例提供的一种表情推荐面板的示意图。
如图4所示,电子设备401可以显示有小红与小兰进行会话聊天的会话界面402,会话界面402内显示有信息展示区403、输入框404和虚拟键盘控件405。当小红在输入框404内录入输入文本“哈哈哈哈”时,电子设备确定输入文本“哈哈哈哈”触发表情推荐事件,并且在输入框404上方的信息展示区403内右对齐地叠加显示表情推荐面板406。其中,表情推荐面板406内可以显示有自定义表情图像对应的目标入口图标407。
在另一些实施例中,该表情推荐面板还可以显示于输入框与会话界面的虚拟键盘控件之间。其中,虚拟键盘控件可以用于用户向输入框录入输入文本。
具体地,电子设备可以在输入框与虚拟键盘控件之间增加显示表情推荐面板显示区域,并且在该表情推荐面板显示区域内显示该表情推荐面板。
可选地,该表情推荐面板可以两端对齐地显示于输入框与虚拟键盘控件之间。
在本公开实施例中,目标入口图标可以用于触发显示自定义的目标表情图像。因此,用户可以通过触发目标入口图标,使电子设备直接显示目标表情图像的预览图像。
其中,表情图像是一种具有意思表达功能的图像,可以反映发送该表情图像的用户的内心活动、情绪、情感或特定语义。
自定义的目标表情图像可以为自定义表情图像。具体地,目标表情图像可以为需要与用户自身的人脸特征结合来生成的表情图像。
可选地,目标表情图像可以包括静态表情图像和动态表情图像中的至少一种。
一般情况下,静态表情图像可以是一帧静态图片,例如静态表情图像可以是便携式网络图形(Portable Network Graphics,PNG)文件格式的图像,而动态表情图像是一个由多帧静态图片合成的动画图片,例如动态表情图像可以是图像互换格式(Graphics Interchange Format,GIF)文件格式的图像。
下面参考图3,继续说明S320。
S320、当检测到对目标入口图标的第一触发操作时,停止显示表情推荐面板。
在本公开实施例中,在会话界面内显示有展示有目标入口图标的表情推荐面板的情况下,用户可以对目标入口图标进行第一触发操作。电子设备可以实时检测用户对会话界面的操作,并且在检测到对目标入口图标的第一触发操作的情况下,停止显示表情推荐面板。
可选地,第一触发操作可以为对目标入口图标的点击、长按、双击等操作,在此不作限制。
在一些实施例中,在该表情推荐面板显示于会话界面的信息展示区内的情况下,电子设备可以直接停止显示表情推荐面板。
在另一些实施例中,在该表情推荐面板显示于输入框与会话界面的虚拟键盘控件之间的情况下,电子设备可以取消显示表情推荐面板显示区域,进而停止显示表情推荐面板。
S330、在会话界面内,显示目标入口图标对应的目标表情展示面板,目标表情展示面板显示有第一预览图像,第一预览图像为目标表情图像的预览图像。
在本公开实施例中,电子设备在停止显示表情推荐面板之后,电子设备可以在会话界面内显示目标入口图标对应的目标表情展示面板。
由于目标入口图标用于触发显示自定义的目标表情图像,因此,目标表情展示面板可以是用于存放目标表情图像的容器,使得目标表情展示面板可以显示有目标表情图像对应的第一预览图像。
在一些实施例中,该目标表情展示面板可以叠加显示于虚拟键盘控件之上,并且覆盖虚拟键盘控件进行显示。
在另一些实施例中,该目标表情展示面板还可以替换虚拟键盘控件进行显示。
需要说明的是,目标表情图像的数量可以为一个,也可以为多个,在此不作限制。
在本公开实施例中,在会话界面内显示的输入文本触发表情推荐事件时,在会话界面内,显示表情推荐面板,该表情推荐面板可以包括用于触发显示自定义的目标表情图像的目标入口图标,进而可以在检测到对目标入口图标的第一触发操作的情况下,停止显示表情推荐面板,并且在会话界面内,显示目标入口图标对应的目标表情展示面板,该目标表情展示面板可以显示有目标表情图像的预览图像,使得用户可以在输入文本触发表情推荐事件时,直接通过在表情推荐面板内显示的目标入口图标快速地进入显示有目标表情图像的预览图像的目标表情展示面板,提高了用户对自定义的目标表情图像的查找便捷性,简化了用户查找自定义的目标表情图像的操作,进而提升了用户的体验。
在本公开一种实施方式中,电子设备在检测到输入文本触发表情推荐事件的情况下,可以根据电子设备本地存储的自定义表情图像的表情类型,确定用户可以通过表情推荐面板直接触发展示的目标表情图像,进而在表情推荐面板中展示用于触发显示该目标表情图 像的目标入口图标。
在本公开一些实施例中,会话界面可以为第一用户与第二用户实现会话聊天的界面。
相应地,在检测到存在第一表情图像的情况下,目标表情图像可以包括第一表情图像。其中,第一表情图像根据第一用户的第一人脸图像和第二用户的第二人脸图像生成,即第一表情图像可以为第一人脸图像与第二人脸图像的合拍自定义表情图像,第一表情图像的表情类型为合拍类型。
进一步地,目标入口图标可以为第一表情入口图标,即第一人脸图像与第二人脸图像的合拍自定义表情入口图标。
进一步地,目标表情展示面板可以为第一表情展示面板,即用于展示第一人脸图像与第二人脸图像的合拍自定义表情图像的表情展示面板。
在一些实施例中,会话界面可以为第一用户与一个第二用户实现会话聊天的界面。
此时,第一表情图像可以根据第一用户的第一人脸图像和该第二用户的第二人脸图像生成的合拍自定义表情图像。
在另一些实施例中,会话界面可以为第一用户与多个第二用户实现会话聊天的界面。
此时,第一表情图像可以根据第一用户的第一人脸图像和全部第二用户的第二人脸图像生成的合拍自定义表情图像。
图5示出了本公开实施例提供的一种入口触发过程的示意图。
如图5所示,电子设备501可以显示有小红与小兰进行会话聊天的会话界面502,在会话界面502内显示有信息展示区503、输入框504和虚拟键盘控件505。当小红在输入框504内录入输入文本“哈哈哈哈”时,电子设备确定输入文本“哈哈哈哈”触发表情推荐事件,并且在输入框504上方的信息展示区503内右对齐地叠加显示表情推荐面板506。其中,如果电子设备501检测到本地存在小红与小兰的合拍自定义表情图像,表情推荐面板506内可以显示有该合拍自定义表情图像对应的合拍自定义表情入口图标507。小红可以点击合拍自定义表情入口图标507,电子设备在检测到对合拍自定义表情入口图标507的点击操作的情况下,可以将虚拟键盘控件505替换为合拍自定义表情图像的表情展示面板508进行显示,该表情展示面板508内可以显示有合拍自定义表情图像的预览图像509。
由此,在本公开实施例中,可以在电子设备存在当前会话界面内的会话用户之间的合拍自定义表情图像时,直接显示该合拍自定义表情图像对应的合拍自定义表情入口图标,以帮助用户快速进入当前会话界面内的会话用户之间的合拍自定义表情图像的表情展示面板,在用户的自定义表情图像较多时,为用户智能推荐可以使用的自定义表情图像,进一步提升用户的体验。
在本公开实施例中,可选地,电子设备在确定会话界面内显示的输入文本触发表情推荐事件之前,还需要先生成第一表情图像。
图6示出了本公开实施例提供的一种第一表情图像生成方法的流程示意图。
如图6所示,第一表情图像生成方法可以包括如下步骤。
S610、显示第二用户向第一用户发送的目标邀请信息。
在本公开实施例中,第二用户可以通过所使用的电子设备如图1和图2中所示的第二 电子设备120向第一用户所使用的电子设备发送的目标合拍请求,使电子设备可以显示第二用户向第一用户发送的目标合拍请求对应的目标邀请信息,该目标邀请信息可以用于向第一用户提示第二用户向其发出合拍邀请。
可选地,目标邀请信息可以包括第一邀请信息和第二邀请信息中的任一项。
在一些实施例中,该第一邀请信息可以为第二用户通过触发会话界面内显示的第一邀请提示信息向第一用户发送的邀请信息。其中,第一邀请提示信息用于向第二用户提示其可以向第一用户发出合拍邀请。
在一个示例中,第二用户所使用的电子设备可以在第一用户和第二用户均已生成自拍自定义表情图像的情况下,在会话界面内显示的第一邀请提示信息。第二用户如果想要与第一用户合拍表情,则可以触发第一邀请提示信息,使第二用户所使用的电子设备向第一用户发送第一合拍请求,进而使第一用户所使用的电子设备显示第一合拍请求对应的第一邀请信息。
在另一个示例中,无论第一用户和第二用户是否已生成自拍自定义表情图像,第二用户所使用的电子设备均可以在会话界面内显示的第一邀请提示信息。第二用户如果想要与第一用户合拍表情,则可以触发第一邀请提示信息,使第二用户所使用的电子设备向第一用户发送第一合拍请求,进而使第一用户所使用的电子设备显示第一合拍请求对应的第一邀请信息。
在另一些实施例中,第二邀请信息可以为第二用户通过触发用于展示第二表情图像的表情展示面板内显示的第二邀请提示信息向第一用户发送的邀请信息。第二表情图像根据第二人脸图像生成,即第二表情图像为第二用户的自拍自定义表情图像。
其中,第二邀请提示信息用于向第二用户提示其可以向通讯录里的其他用户发出合拍邀请。
进一步地,第二邀请提示信息可以进一步用于向第二用户提示其可以向通讯录里的其他已生成自拍自定义表情图像的用户发出合拍邀请。
在上述实施例中,第二用户可以通过所使用的电子设备进入用于展示第二用户的自拍自定义表情图像的表情展示面板,在该表情展示面板内可以显示有第二邀请提示信息。第二用户如果想要与其他已生成自拍自定义表情图像的用户合拍表情,则可以触发第二邀请提示信息,使第二用户所使用的电子设备显示其他已生成自拍自定义表情图像的用户信息如用户头像和/或用户名,第二用户可以在显示的用户信息中,对至少一个用户的用户信息进行选择操作如点击至少一个用户的用户信息,在第二用户选择的至少一个用户包括第一用户的情况下,第二用户所使用的电子设备可以向第一用户发送第二合拍请求,进而使第一用户所使用的电子设备显示第二合拍请求对应的第二邀请信息。
在本公开实施例中,可选地,S610可以具体包括:在会话界面内,显示第二用户向第一用户发送的目标邀请信息。
可选地,电子设备可以在会话界面的信息展示区内显示目标邀请信息。图7示出了本公开实施例提供的一种邀请信息的示意图。
如图7所示,电子设备701可以显示有小红与小兰进行会话聊天的会话界面702,在 会话界面702内显示有信息展示区703、输入框704和虚拟键盘控件705。当电子设备701接收到小兰向小红发送的合拍邀请的情况下,信息展示区703内可以显示有邀请信息“小兰邀请你生成合拍表情”。
在本公开实施例中,可选地,第二用户触发邀请提示信息后,第二用户所使用的电子设备还可以显示多个合拍表情模板图像,第二用户可以在多个合拍表情模板图像中选择至少一个,第二用户所使用的电子设备可以获取第二用户所选择的合拍表情模板图像的目标模板标识,并且向第一用户发送携带有目标模板标识的目标合拍请求对应的目标邀请信息。
参考图6,继续说明S620。
S620、当检测到对目标邀请信息的第二触发操作时,向服务器发送携带有第一用户的第一用户标识和第二用户的第二用户标识的第一生成请求。
在本公开实施例中,电子设备在显示目标邀请信息之后,用户如果接受合拍邀请,可以对目标邀请信息进行第二触发操作。电子设备可以实时地检测用户对会话界面的操作,当检测到用户对目标邀请信息的第二触发操作时,可以服务器发送携带有第一用户的第一用户标识和第二用户的第二用户标识的第一生成请求。
可选地,第二触发操作可以为对目标邀请信息的点击、长按、双击等操作,在此不作限制。
继续参见图7,小红在看到该邀请信息之后,可以点击邀请信息中的文字“合拍表情”,使电子设备向服务器发送携带有小红的用户标识和小兰的用户标识的合拍表情生成请求。
在一些实施例中,在第一用户已生成自拍自定义表情图像的情况下,电子设备可以直接向服务器发送携带有第一用户的第一用户标识和第二用户的第二用户标识的第一生成请求。
在另一些实施例中,在第一用户未生成自拍自定义表情图像的情况下,电子设备在向服务器发送第一生成请求之前,该图像显示方法还可以包括:显示人脸采集界面,并且在人脸采集界面采集到第一人脸图像的情况下,向服务器发送携带有第一人脸图像。
具体地,在第一用户未生成自拍自定义表情图像的情况下,当检测到对目标邀请信息的第二触发操作时,电子设备可以首先显示人脸采集界面,并且在人脸采集界面采集到第一人脸图像的情况下,向服务器发送携带有第一人脸图像,然后再向服务器发送携带有第一用户的第一用户标识和第二用户的第二用户标识的第一生成请求。
在又一些实施例中,在第二用户未生成自拍自定义表情图像的情况下,服务器在接收到第一生成请求之后,还可以向第二用户所使用的电子设备发送图像获取请求,使第二用户所使用的电子设备显示该图像获取请求对应的图像上传提示信息,以通过图像上传提示信息来提示第二用户向服务器发送第二用户的第二人脸图像,进而使服务器可以获取到第二用户的第二人脸图像。
在本公开实施例中,第一生成请求可以用于指示服务器根据与第一用户标识关联存储的第一人脸图像、与第二用户标识关联存储的第二人脸图像和第一表情模板图像生成并反馈第一表情图像。
具体地,服务器首先可以分别对第一人脸图像和第二人脸图像进行人脸分割处理,以 抠出第一人脸图像中的第一用户人脸和第二人脸图像中的第二用户人脸,然后可以对抠出的第一用户人脸和第二用户人脸分别进行边缘优化如模糊、羽化等处理,接着可以对第一人脸图像和第二人脸图像的头部位置进行跟随以及表情迁移处理,得到人脸动图。其中,一个人脸图像对应第一表情模板图像中的一个人脸区域,一个人脸区域对应一个头部位置和一种表情。最后将每个第一表情模板图像的每一帧图片分别与人脸动图的每一帧图片进行图片合成,生成每个第一表情模板图像对应的第一表情图像。
在一些实施例中,当第二用户选择与多个用户共同生成合拍自定义表情图像时,目标邀请信息可以携带有全部用户的用户标识,第一生成请求可以携带有全部用户的用户标识,服务器可以在接收到全部用户标识对应的用户发送的生成请求之后,基于全部用户的人脸图像生成第一表情图像。
在本公开实施例中,可选地,目标邀请信息可以携带有目标模板标识,目标模板标识可以为第二用户选择的表情模板如合拍表情模板的模板标识。
相应地,第一生成请求还可以携带有目标模板标识,第一表情模板图像可以为目标模板标识对应的表情模板图像。
具体地,服务器可以根据与第一用户标识关联存储的第一人脸图像、与第二用户标识关联存储的第二人脸图像和目标模板标识对应的第一表情模板图像生成并反馈第一表情图像,在此不做赘述。
由此,服务器可以根据第二用户选择的表情模板生成第一用户与第二用户之间的合拍自定义表情图像,进而提高了生成合拍自定义表情图像的灵活性。
S630、接收服务器反馈的第一表情图像。
具体地,电子设备可以从服务器拉取第一表情图像,以接收服务器反馈的第一表情图像。
在一些实施例中,电子设备可以在发送第一生成请求之后,实时地拉取第一表情图像。
在另一些实施例中,电子设备还可以在发送第一生成请求之后,在等待预设等待时长之后,再拉取第一表情图像。
由此,在本公开实施例中,第一表情图像的素材采集方式较为简单,也无需用户对表情图像进行文本、贴图等的设计,可以减少表情图像的制作时间,提升用户的体验。
在本公开另一些实施例中,会话界面可以为第一用户与第二用户实现会话聊天的界面。
相应地,在检测到不存在第一表情图像且存在第三表情图像的情况下,目标表情图像可以包括第三表情图像。第三表情图像根据第一人脸图像生成,即第三表情图像可以为第一用户的自拍自定义表情图像,第三表情图像的表情类型为自拍类型。
进一步地,目标入口图标可以为第三表情入口图标,即第一用户的自拍自定义表情入口图标。
进一步地,目标表情展示面板可以为第三表情展示面板,即用于展示第一用户的自拍自定义表情图像的表情展示面板。
图8示出了本公开实施例提供的另一种入口触发过程的示意图。
如图8所示,电子设备801可以显示有小红与小兰进行会话聊天的会话界面802,在 会话界面802内显示有信息展示区803、输入框804和虚拟键盘控件805。当小红在输入框804内录入输入文本“哈哈哈哈”时,电子设备确定输入文本“哈哈哈哈”触发表情推荐事件,并且在输入框804上方的信息展示区803内右对齐地叠加显示表情推荐面板806。其中,如果电子设备801检测到本地不存在小红与小兰的合拍自定义表情图像但存在小红的自拍自定义表情图像,表情推荐面板806内可以显示有该自拍自定义表情图像对应的自拍自定义表情入口图标807。小红可以点击自拍自定义表情入口图标807,电子设备在检测到对自拍自定义表情入口图标807的点击操作的情况下,可以将虚拟键盘控件805替换为自拍自定义表情图像的表情展示面板808进行显示,该表情展示面板808内可以显示有自拍自定义表情图像的预览图像809。
由此,在本公开实施例中,可以在电子设备不存在当前会话界面内的会话用户之间的合拍自定义表情图像但存在第一用户的自拍自定义表情图像时,直接显示自拍自定义表情图像对应的自拍自定义表情入口图标,以帮助用户快速进入自拍自定义表情图像的表情展示面板,在用户的电子设备存在与其他用户之间的合拍自定义表情图像时,可以为用户智能推荐可以使用的自定义表情图像,避免侵犯他人的肖像权,进一步提升用户的体验。
在本公开又一些实施例中,会话界面可以为第一用户与第二用户实现会话聊天的界面。
相应地,在检测到不存在第三表情图像的情况下,目标表情图像可以包括第二表情模板图像。第三表情图像根据第一用户的第一人脸图像和第二表情模板图像生成,即第三表情图像可以为利用第二表情模板图像生成的第一用户的自拍自定义表情图像。
在一些实施例中,第二用户可以在已生成自拍自定义表情图像之后,向已生成自拍自定义表情图像的第一用户发送合拍邀请,此时,只要电子设备检测到本地不存在第三表情图像,即本地不存在第一用户的自拍自定义表情图像,即可以确定第一用户未生成任何的自定义表情图像,因此,可以将第二表情模板图像作为目标表情图像,使目标表情图像可以包括第二表情模板图像。
在另一些实施例中,无论第一用户和第二用户是否已生成自拍自定义表情图像,第二用户均可以向第一用户发送合拍邀请,此时,如果电子设备检测到本地不存在第三表情图像和第一表情图像,即本地不存在第一用户的自拍自定义表情图像,也不存在第一用户和第二用户的合拍自定义表情图像,则可以确定第一用户不具有自定义表情图像,因此,可以将第二表情模板图像作为目标表情图像,使目标表情图像可以包括第二表情模板图像。
进一步地,目标入口图标可以为第二表情模板入口图标,即用于生成第一用户的自拍自定义表情的表情模板的入口图标。
进一步地,目标表情展示面板可以为第二表情模板展示面板,即用于展示第二表情模板图像的表情展示面板。
图9示出了本公开实施例提供的又一种入口触发过程的示意图。
如图9所示,电子设备901可以显示有小红与小兰进行会话聊天的会话界面902,在会话界面902内显示有信息展示区903、输入框904和虚拟键盘控件905。当小红在输入框904内录入输入文本“哈哈哈哈”时,电子设备确定输入文本“哈哈哈哈”触发表情推荐事件,并且在输入框904上方的信息展示区903内右对齐地叠加显示表情推荐面板906。 其中,如果电子设备901检测到本地不存在小红与小兰的合拍自定义表情图像并且也不存在小红的自拍自定义表情图像,表情推荐面板906内可以显示有用于生成小红的自拍自定义表情的表情模板的表情模板入口图标907。小红可以点击表情模板入口图标907,电子设备在检测到对表情模板入口图标907的点击操作的情况下,可以将虚拟键盘控件905替换为用于生成小红的自拍自定义表情的表情模板图像的表情展示面板908进行显示,该表情展示面板908内可以显示有用于生成小红的自拍自定义表情的表情模板图像的预览图像909。
在本公开实施例中,可选地,第二表情模板图像和第二表情模板图像的预览图像中的人脸区域均可以空白显示,如图9中的预览图像909所示。
在本公开实施例中,可选地,目标表情展示面板还可以显示有表情生成触发控件如图9中的“立即生成”按钮910,该表情生成触发控件可以用于触发生成第一用户的自拍自定义表情图像。因此,用户可以通过触发表情生成触发控件,使电子设备生成第一用户的自拍自定义表情图像。
进一步地,电子设备在显示目标入口图标对应的目标表情展示面板之后,还可以生成第三表情图像。
图10示出了本公开实施例提供的一种第三图像生成方法的流程示意图。
如图10所示,该第三表情图像生成方法可以包括如下步骤。
S1010、当检测到对表情生成触发控件的第三触发操作时,显示人脸采集界面。
具体地,在目标表情展示面板内显示有表情生成触发控件的情况下,用户可以对表情生成触发控件进行第三触发操作。电子设备可以实时检测用户对目标表情展示面板的操作,并且在检测到对表情生成触发控件的第三触发操作的情况下,显示人脸采集界面。
可选地,第三触发操作可以为对表情生成触发控件的点击、长按、双击等操作,在此不作限制。
可选地,电子设备可以在检测到对表情生成触发控件的第三触发操作的情况下,由会话界面跳转至人脸采集界面进行显示。
进一步地,人脸采集界面可以包括人脸采集框。可选地,人脸采集框可以具有指定的人脸采集角度。
S1020、在人脸采集界面采集到第一人脸图像的情况下,向服务器发送携带有第一人脸图像的第二生成请求。
具体地,用户可以通过人脸采集界面采集第一人脸图像,电子设备可以在人脸采集界面采集到第一人脸图像的情况下,向服务器发送携带有所采集的第一人脸图像的第二生成请求。
在一些实施例中,电子设备在人脸采集界面内的人脸采集框内显示有完整人脸时,可以直接采集人脸采集框内显示的第一人脸图像,并且在人脸采集界面采集到第一人脸图像的情况下,向服务器发送携带有第一人脸图像的第二生成请求。
在另一些实施例中,电子设备在人脸采集界面内的人脸采集框内显示有完整人脸时,可以点亮人脸采集界面内的拍照控件,用户可以点击拍照控件,使电子设备响应于用户点 击拍照控件,采集人脸采集框内显示的第一人脸图像,并且在人脸采集界面采集到第一人脸图像的情况下,向服务器发送携带有第一人脸图像的第二生成请求。
其中,人脸采集框内显示有完整人脸指的是人脸全部在人脸采集框中,并且人脸的高度不小于人脸采集框高度的一半。
在本公开实施例中,第二生成请求可以用于指示服务器根据第一人脸图像和第二表情模板图像生成并反馈第三表情图像。
其中,服务器生成第三表情图像的具体过程与生成第一表情图像的具体过程相似,在此不做赘述。
S1030、接收服务器反馈的第三表情图像。
具体地,电子设备可以从服务器拉取第三表情图像,以接收服务器反馈的第三表情图像。
在一些实施例中,电子设备可以在发送第二生成请求之后,实时地拉取第三表情图像。
在另一些实施例中,电子设备还可以在发送第二生成请求之后,在等待预设等待时长之后,再拉取第三表情图像。
S1040、将第一预览图像替换为第二预览图像进行显示,第二预览图像为第三表情图像的预览图像。
具体地,电子设备可以在拉取到第三表情图像之后,在目标表情展示面板内,将第一预览图像替换为第三表情图像对应的第二预览图像进行显示,使得电子设备可以在第三表情图像制作完成后,直接将第三表情图像展示给用户。
可选地,在电子设备由会话界面跳转至人脸采集界面的情况下,在电子设备接收到服务器反馈的第三表情图像之后,还可以由人脸采集界面跳转回会话界面。
由此,在本公开实施例中,第三表情图像的素材采集方式较为简单,也无需用户对表情图像进行文本、贴图等的设计,可以减少表情图像的制作时间,提升用户的体验。
在本公开另一种实施方式中,目标表情图像可以包括预设文本样式显示的第一目标文本,第一预览图像也可以包括以预设文本样式显示的第一目标文本。
在本公开实施例中,每个目标表情图像可以对应一个预设文本样式。
在一些实施例中,预设文本样式可以包括字体样式、颜色样式、描边样式、位置样式和角度样式中的至少一种,在此不作限制。
可选地,在输入文本的文字数量小于或等于预设数量阈值的情况下,第一目标文本可以包括输入文本;在输入文本的文字数量大于预设数量阈值的情况下,第一目标文本可以包括预设文本。
其中,预设数量阈值可以为根据需要设置的任意数值,在此不作限制限制。例如,预设数量阈值可以为3、5、10、20等。
具体地,电子设备在确定目标表情图像之后,可以首先判断输入文本的文字数量是否小于或等于预设数量阈值,如果是,则将输入文本以预设文本样式添加至目标表情图像中,使目标表情图像包括预设文本样式的输入文本,否则,将预设文本以预设文本样式添加至目标表情图像中,使目标表情图像包括预设文本样式的预设文本。
可选地,每个目标表情图像可以对应一个预设文本。
图11示出了本公开实施例提供的一种表情展示面板的示意图。
如图11所示,电子设备1101可以显示有小红与小兰进行会话聊天的会话界面1102,在会话界面1102内显示有信息展示区1103、输入框1104和自拍自定义表情图像的表情展示面板1105。当输入框1104内显示有输入文本“哈哈哈哈”时,由于输入文本的文字数量为4,在预设数量阈值为5的情况下,文字数量小于预设数量阈值,该表情展示面板1105内显示的自拍自定义表情图像的预览图像1106可以包括以预设文本样式显示的输入文本“哈哈哈哈”。
可选地,预览图像1106对应的自拍自定义表情图像也可以包括以预设文本样式显示的输入文本“哈哈哈哈”。
图12示出了本公开实施例提供的另一种表情展示面板的示意图。
如图12所示,电子设备1201可以显示有小红与小兰进行会话聊天的会话界面1202,在会话界面1202内显示有信息展示区1203、输入框1204和自拍自定义表情图像的表情展示面板1205。当输入框1204内显示有输入文本“哈哈哈哈”时,由于输入文本的文字数量为4,在预设数量阈值为3的情况下,文字数量大于预设数量阈值,该表情展示面板1205内显示的自拍自定义表情图像的预览图像1206可以包括以预设文本样式显示的预设文本,一个预览图像1206对应一个预设文本。
可选地,预览图像1206对应的自拍自定义表情图像也可以包括以预设文本样式显示的对应预设文本。
由此,在本公开实施例中,第一预览图像内显示的文本可以基于输入文本灵活地进行调整,进一步提升了用户的体验。
在本公开实施例中,可选地,在显示目标入口图标对应的目标表情展示面板之后,该图像显示方法还可以包括:
当检测到对第一预览图像中的目标预览图像的第四触发操作时,在会话界面的信息展示区内,显示目标预览图像对应的目标表情图像,目标预览图像对应的目标表情图像包括第一目标文本。
具体地,在会话界面内显示有展示有目标入口图标的表情推荐面板的情况下,用户可以对目标入口图标进行第一触发操作,电子设备可以实时检测用户对会话界面的操作,并且在检测到对目标入口图标的第一触发操作的情况下,停止显示表情推荐面板,并且在会话界面内显示目标入口图标对应的目标表情展示面板。在显示目标表情展示面板之后,用户可以对第一预览图像中的目标预览图像进行第四触发操作,电子设备可以实时检测用户对目标预览图像的第四触发操作,并且在检测到对目标预览图像的第四触发操作的情况下,向通过服务器向第二用户所使用的电子设备发送目标预览图像对应的目标表情图像,并且在在会话界面的信息展示区内,显示目标预览图像对应的目标表情图像,该目标预览图像对应的目标表情图像可以显示有第一目标文本。
可选地,第四触发操作可以为对目标预览图像的点击、长按、双击等操作,在此不作限制。
由此,在用户聊天的过程中,电子设备能够根据用户录入的输入文本自动调整自定义表情图像中所显示的文本,提高了用户对自定义的目标表情图像使用的灵活性。
在本公开又一种实施方式中,为了进一步提升用户的体验,目标入口图标可以包括目标入口图像。
可选地,目标入口图像可以包括第一入口图像和第二入口图像中的任一项。
在一些实施例中,该第一入口图像为在第一预览图像中随机选择的图像。
具体地,电子设备在确定目标表情图像之后,可以在目标表情图像对应的在第一预览图像中随机选择一个图像,并将所选择的图像作为第一入口图像。
在另一些实施例中,该第二入口图像为与输入文本所属的情感类型相同的目标表情图像的预览图像。
在一个示例中,电子设备在确定目标表情图像之后,可以对目标表情图像进行检测,将具有该输入文本对应的情感类型的目标表情图像的预览图像作为第二入口图像。
在另一个示例中,电子设备在确定目标表情图像之后,可以对目标表情图像的图像标签进行检测,将图像标签为该输入文本中的情感关键词的目标表情图像的预览图像作为第二入口图像。
由此,在本公开实施例中,目标入口图像可以基于输入文本灵活地进行调整,进一步提升了用户的体验。
在本公开一些实施例中,目标入口图标内可以不显示任何文本。
在本公开另一些实施例中,目标入口图标还可以包括以预设文本样式显示的第二目标文本。
可选地,第二目标文本可以包括输入文本中的前预设数量个文字。
其中,预设数量可以为根据需要设置的任意数值,在此不作限制限制。例如,预设数量可以为1、2、3等。
图13示出了本公开实施例提供的另一种表情推荐面板的示意图。
如图13所示,电子设备1301可以显示有小红与小兰进行会话聊天的会话界面1302,在会话界面1302内显示有信息展示区1303、输入框1304和虚拟键盘控件1305。当小红在输入框1304内录入输入文本“哈哈哈哈”时,电子设备确定输入文本“哈哈哈哈”触发表情推荐事件,并且在输入框1304上方的信息展示区1303内右对齐地叠加显示表情推荐面板1306。其中,如果电子设备1301检测到本地不存在小红与小兰的合拍自定义表情图像但存在小红的自拍自定义表情图像,表情推荐面板1306内可以显示有该自拍自定义表情图像对应的自拍自定义表情入口图标1307。自拍自定义表情入口图标1307可以包括从自拍自定义表情图像中随机选择的一个表情图像的预览图像和输入文本“哈哈哈哈”中的第一个文字“哈”。
在一些实施例中,在输入文本的文字数量大于预设数量阈值的情况下,目标入口图标内可以不显示任何文本;在输入文本的文字数量小于或等于预设数量阈值的情况下,目标入口图标可以包括输入文本中的前预设数量个文字。
在另一些实施例中，在输入文本的文字数量大于预设数量的情况下，第二目标文本还可以包括省略符号，例如“…”。
由此,第二目标文本可以为输入文本中的前预设数量个文字和省略符号所组成的文本。
由此,在本公开实施例中,目标入口图标内显示的文本可以基于输入文本灵活地进行调整,进一步提升了用户的体验。
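确定第二目标文本的逻辑可以用下面的示意性草图表示（对应"前预设数量个文字加省略符号"的实施例，预设数量取值仅为演示）：

```python
def resolve_second_target_text(input_text: str, preset_count: int = 2) -> str:
    """取输入文本的前预设数量个文字；当输入文本字数大于预设数量时，
    在截取结果后追加省略符号"…"。preset_count 为示例取值。
    """
    if len(input_text) <= preset_count:
        return input_text
    return input_text[:preset_count] + "…"

print(resolve_second_target_text("哈哈哈哈", preset_count=1))  # 哈…
```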
在本公开再一种实施方式中,表情推荐面板可以显示有第三预览图像,第三预览图像可以包括该目标入口图标和第四表情图像的预览图像。即表情推荐面板除了显示目标入口图标以外,还可以显示第四表情图像的预览图像。
其中,第四表情图像可以为与输入文本所属的情感类型相同的非自定义的表情图像。
在一个示例中,第四表情图像可以为具有该输入文本对应的情感类型的非自定义的表情图像。
在另一个示例中,第四表情图像还可以为图像标签为该输入文本中的情感关键词的非自定义的表情图像。
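下面把上述两个示例合并为一个筛选第四表情图像的示意性草图；其中 emotion_of_text、emotion_keywords 以及候选图像的 emotion、tags 字段均为为便于说明而假设的接口与数据结构：

```python
def select_fourth_images(candidates: list, input_text: str,
                         emotion_of_text, emotion_keywords) -> list:
    """筛选非自定义的第四表情图像：
    - 方式一：保留情感类型与输入文本所属情感类型相同的表情图像；
    - 方式二：保留图像标签命中输入文本中情感关键词的表情图像。
    """
    target_emotion = emotion_of_text(input_text)       # 假设的文本情感分类接口
    keywords = set(emotion_keywords(input_text))       # 假设的情感关键词提取接口
    return [img for img in candidates
            if img.emotion == target_emotion or (keywords & set(img.tags))]
```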
可选地,目标入口图标可以显示于第四表情图像的预览图像之前,如图4所示,目标入口图标407可以位于全部第四表情图像的预览图像408的左侧。
在一些实施例中,目标入口图标可以在表情推荐面板内固定显示,即使用户在表情推荐面板内进行滑动操作,目标入口图标的显示位置也不会改变。
在另一些实施例中,目标入口图标可以在表情推荐面板内非固定显示,当用户在表情推荐面板内进行滑动操作时,目标入口图标的显示位置可以随着滑动操作的滑动方向改变。
在本公开实施例中,可选地,在显示表情推荐面板之后,该图像显示方法还可以包括:
对表情推荐面板的显示时长进行计时;
在显示时长达到预设时长且未触发第三预览图像的情况下,停止显示表情推荐面板;
在会话界面内,显示目标入口图标。
具体地,在电子设备显示会话界面的情况下,用户可以在会话界面的输入框内录入输入文本,在用户录入输入文本的过程中,电子设备可以实时地对输入文本进行检测,当检测到输入文本触发表情推荐事件时,可以在会话界面内,显示展示有目标入口图标的表情推荐面板。在显示表情推荐面板之后,电子设备可以对表情推荐面板的显示时长进行计时,如果显示时长达到预设时长且未触发第三预览图像,则停止显示表情推荐面板,并且在会话界面内,显示目标入口图标。
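表情推荐面板的显示时长计时可以用下面的示意性草图实现（类名、回调与计时方式均为假设，实际实现不限于定时器方案）：

```python
import threading

class RecommendPanelTimer:
    """表情推荐面板显示时长计时的示意实现：
    显示时长达到预设时长且第三预览图像未被触发时，
    停止显示表情推荐面板并在会话界面内显示目标入口图标。
    """

    def __init__(self, preset_seconds: float, hide_panel, show_entry_icon):
        self.triggered = False
        self.hide_panel = hide_panel            # 停止显示表情推荐面板的回调
        self.show_entry_icon = show_entry_icon  # 在会话界面内显示目标入口图标的回调
        self._timer = threading.Timer(preset_seconds, self._on_timeout)

    def start(self):
        self._timer.start()                     # 显示面板的同时开始计时

    def on_preview_triggered(self):
        self.triggered = True                   # 第三预览图像被触发，取消超时处理
        self._timer.cancel()

    def _on_timeout(self):
        if not self.triggered:                  # 达到预设时长且未触发第三预览图像
            self.hide_panel()
            self.show_entry_icon()
```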
在一些实施例中,在停止显示表情推荐面板之后,目标入口图标可以显示于会话界面的信息展示区内。
可选地,该目标入口图标可以向右对齐地叠加显示于信息展示区内,并且位于输入框的顶部。
在一个示例中,在停止显示表情推荐面板之后,目标入口图标的尺寸可以保持不变;在另一个示例中,在停止显示表情推荐面板之后,目标入口图标的尺寸可以缩小预设比例,在此不作限制。
其中,预设比例可以根据需要设置,在此不作限制。
继续参见图4，表情推荐面板406内可以显示有自定义表情图像对应的目标入口图标407和基于输入文本“哈哈哈哈”推荐的非自定义的表情图像的预览图像408。电子设备可以在显示表情推荐面板406之后，对表情推荐面板406的显示时长进行计时，如果显示时长达到预设时长且未检测到用户触发目标入口图标407或者任一个预览图像408，则停止显示表情推荐面板406，并且按照图14进行显示。
图14示出了本公开实施例提供的一种入口图标显示方式的示意图。
如图14所示,电子设备401可以显示有小红与小兰进行会话聊天的会话界面402,会话界面402内显示有信息展示区403、输入框404和虚拟键盘控件405,输入框404内显示有输入文本“哈哈哈哈”,并且在输入框404上方的信息展示区403内右对齐地叠加显示目标入口图标407。
在另一些实施例中,在停止显示表情推荐面板之后,目标入口图标还可以显示于输入框与会话界面的虚拟键盘控件之间。
具体地，电子设备可以在输入框与虚拟键盘控件之间新增的表情推荐面板显示区域内，继续显示该目标入口图标。
可选地,该目标入口图标可以向右对齐地显示于输入框与虚拟键盘控件之间。
在一个示例中,在停止显示表情推荐面板之后,目标入口图标的尺寸可以保持不变;在另一个示例中,在停止显示表情推荐面板之后,目标入口图标的尺寸可以缩小预设比例,此时,表情推荐面板显示区域的尺寸也可以缩小预设比例,在此不作限制。
其中,预设比例可以根据需要设置,在此不作限制。
由此,在本公开实施例中,即使停止显示表情推荐面板,仍然可以保持显示目标入口图标,进一步提升了用户查找自定义的目标表情图像的便利性,提升了用户的体验。
在本公开实施例中,可选地,在显示目标入口图标之后,该图像显示方法还可以包括:
在会话界面内未显示输入文本的情况下,停止显示目标入口图标。
具体地，在显示表情推荐面板之后，电子设备可以对表情推荐面板的显示时长进行计时，如果显示时长达到预设时长且未触发第三预览图像，则停止显示表情推荐面板，并且在会话界面内，显示目标入口图标。在显示目标入口图标之后，电子设备可以实时检测输入框内显示的输入文本，如果检测到输入框内未显示输入文本，即用户删除了输入框内的全部输入文本，则可以停止显示目标入口图标。
由此，在本公开实施例中，在会话界面内未显示输入文本时，可以停止显示目标入口图标，避免在用户停止编辑会话内容时仍然持续显示目标入口图标的问题，进一步提升了用户的体验。
进一步地,在对表情推荐面板的显示时长进行计时之后,在显示时长未达到预设时长且检测到对目标入口图标的第一触发操作时,停止显示表情推荐面板,并且在会话界面内,显示目标入口图标对应的目标表情展示面板。
本公开实施例还提供了一种用于实现上述的图像显示方法的图像显示装置。
在本公开实施例中，该图像显示装置可以为电子设备。在一些实施例中，该电子设备可以为图1和图2中所示的客户端中的第一电子设备110。其中，电子设备可以是移动电话、平板电脑、台式计算机、笔记本电脑、车载终端、可穿戴设备、一体机、智能家居设备等具有通信功能的设备，也可以是虚拟机或者模拟器模拟的设备。
下面将参照图15对本公开实施例提供的图像显示装置进行说明。
图15示出了本公开实施例提供的一种图像显示装置的结构示意图。
如图15所示,该图像显示装置1500可以包括第一显示单元1510、第二显示单元1520和第三显示单元1530。
该第一显示单元1510可以配置为当会话界面内显示的输入文本触发表情推荐事件时,在会话界面内,显示表情推荐面板,表情推荐面板显示有目标入口图标,目标入口图标用于触发显示自定义的目标表情图像。
该第二显示单元1520可以配置为当检测到对目标入口图标的第一触发操作时,停止显示表情推荐面板。
该第三显示单元1530可以配置为在会话界面内,显示目标入口图标对应的目标表情展示面板,目标表情展示面板显示有第一预览图像,第一预览图像为目标表情图像的预览图像。
在本公开实施例中,能够在会话界面内显示的输入文本触发表情推荐事件时,在会话界面内,显示表情推荐面板,该表情推荐面板可以包括用于触发显示自定义的目标表情图像的目标入口图标,进而可以在检测到对目标入口图标的第一触发操作的情况下,停止显示表情推荐面板,并且在会话界面内,显示目标入口图标对应的目标表情展示面板,该目标表情展示面板可以显示有目标表情图像的预览图像,使得用户可以在输入文本触发表情推荐事件时,直接通过在表情推荐面板内显示的目标入口图标快速地进入显示有目标表情图像的预览图像的目标表情展示面板,提高了用户对自定义的目标表情图像查找的便捷性,简化了用户查找自定义的目标表情图像的操作,进而提升了用户的体验。
在本公开一些实施例中,该会话界面可以为第一用户与第二用户实现会话聊天的界面。
相应地,在检测到存在第一表情图像的情况下,目标表情图像可以包括第一表情图像,第一表情图像可以根据第一用户的第一人脸图像和第二用户的第二人脸图像生成。
在本公开一些实施例中,该图像显示装置1500还可以包括第四显示单元、第一发送单元和第一接收单元。
该第四显示单元可以配置为显示第二用户向第一用户发送的目标邀请信息。
该第一发送单元可以配置为当检测到对目标邀请信息的第二触发操作时,向服务器发送携带有第一用户的第一用户标识和第二用户的第二用户标识的第一生成请求,第一生成请求可以用于指示服务器根据与第一用户标识关联存储的第一人脸图像、与第二用户标识关联存储的第二人脸图像和第一表情模板图像生成并反馈第一表情图像。
该第一接收单元可以配置为接收服务器反馈的第一表情图像。
在本公开一些实施例中,该目标邀请信息可以包括下列中的任一项:
第一邀请信息,该第一邀请信息可以为第二用户通过触发会话界面内显示的第一邀请提示信息向第一用户发送的邀请信息;
第二邀请信息，该第二邀请信息可以为第二用户通过触发用于展示第二表情图像的表情展示面板内显示的第二邀请提示信息向第一用户发送的邀请信息，第二表情图像可以根据第二人脸图像生成。
在本公开一些实施例中,该目标邀请信息可以携带有目标模板标识,该目标模板标识可以为第二用户选择的表情模板的模板标识。
相应地,该第一生成请求还可以携带有目标模板标识,该第一表情模板图像可以为目标模板标识对应的表情模板图像。
在本公开一些实施例中,该会话界面可以为第一用户与第二用户实现会话聊天的界面。
相应地,在检测到不存在第一表情图像且存在第三表情图像的情况下,目标表情图像可以包括第三表情图像,第一表情图像可以根据第一用户的第一人脸图像和第二用户的第二人脸图像生成,第三表情图像可以根据第一人脸图像生成。
在本公开一些实施例中,该会话界面可以为第一用户与第二用户实现会话聊天的界面。
相应地,在检测到不存在第三表情图像的情况下,目标表情图像可以包括第二表情模板图像,第三表情图像可以根据第一用户的第一人脸图像和第二表情模板图像生成。
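综合上述各实施方式，目标表情图像的确定顺序可以用下面的示意性草图归纳（仅为便于理解的假设实现，并非对本公开实施例或权利要求的限定）：

```python
def resolve_target_images(first_images: list, third_images: list,
                          template_images: list) -> list:
    """确定目标表情图像的示意优先级：
    1) 检测到存在第一表情图像（根据双方人脸图像生成）时，目标表情图像包括第一表情图像；
    2) 否则，检测到存在第三表情图像（根据第一人脸图像生成）时，包括第三表情图像；
    3) 否则，包括第二表情模板图像，供后续采集第一人脸图像生成第三表情图像。
    """
    if first_images:
        return first_images
    if third_images:
        return third_images
    return template_images
```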
在本公开一些实施例中,该目标表情展示面板还可以显示有表情生成触发控件。
相应地,该图像显示装置1500还可以包括第五显示单元、第二发送单元、第二接收单元和第六显示单元。
该第五显示单元可以配置为当检测到对表情生成触发控件的第三触发操作时,显示人脸采集界面。
该第二发送单元可以配置为在人脸采集界面采集到第一人脸图像的情况下,向服务器发送携带有第一人脸图像的第二生成请求,第二生成请求可以用于指示服务器根据第一人脸图像和第二表情模板图像生成并反馈第三表情图像。
该第二接收单元可以配置为接收服务器反馈的第三表情图像。
该第六显示单元可以配置为将第一预览图像替换为第二预览图像进行显示,第二预览图像可以为第三表情图像的预览图像。
在本公开一些实施例中,该第一预览图像可以包括以预设文本样式显示的第一目标文本。
相应地，在输入文本的文字数量小于或等于预设数量阈值的情况下，第一目标文本可以包括输入文本；在输入文本的文字数量大于预设数量阈值的情况下，第一目标文本可以包括预设文本。
在本公开一些实施例中,该目标入口图标可以包括下列中的任一项:
第一入口图像,该第一入口图像可以为在第一预览图像中随机选择的图像。
第二入口图像,该第二入口图像可以为与输入文本所属的情感类型相同的目标表情图像的预览图像。
在本公开一些实施例中,该目标入口图标可以包括以预设文本样式显示的第二目标文本,该第二目标文本可以包括输入文本中的前预设数量个文字。
在本公开一些实施例中，该表情推荐面板可以显示有第三预览图像，该第三预览图像可以包括目标入口图标和第四表情图像的预览图像，该第四表情图像可以为与输入文本所属的情感类型相同的非自定义的表情图像。
相应地,该图像显示装置1500还可以包括显示计时单元、第七显示单元和第八显示单元。
该显示计时单元可以配置为对表情推荐面板的显示时长进行计时。
该第七显示单元可以配置为在显示时长达到预设时长且未触发第三预览图像的情况下,停止显示表情推荐面板。
该第八显示单元可以配置为在会话界面内,显示目标入口图标。
在本公开一些实施例中,该图像显示装置1500还可以包括第九显示单元,该第九显示单元可以配置为在会话界面内未显示输入文本的情况下,停止显示目标入口图标。
需要说明的是,图15所示的图像显示装置1500可以执行图3至图14所示的方法实施例中的各个步骤,并且实现图3至图14所示的方法实施例中的各个过程和效果,在此不做赘述。
本公开实施例还提供了一种图像显示设备,该图像显示设备可以包括处理器和存储器,存储器可以用于存储可执行指令。其中,处理器可以用于从存储器中读取可执行指令,并执行可执行指令以实现上述实施例中的图像显示方法。
图16示出了本公开实施例提供的一种图像显示设备的结构示意图。下面参考图16，对适于用来实现本公开实施例的图像显示设备1600的结构进行具体说明。
本公开实施例中的图像显示设备1600可以为电子设备。其中，电子设备可以包括但不限于诸如移动电话、笔记本电脑、数字广播接收器、PDA（个人数字助理）、PAD（平板电脑）、PMP（便携式多媒体播放器）、车载终端（例如车载导航终端）、可穿戴设备等等的移动终端，以及诸如数字TV、台式计算机、智能家居设备等等的固定终端。
在一些实施例中,该电子设备可以为图1和图2中所示的客户端中的第一电子设备110。
需要说明的是,图16示出的图像显示设备1600仅仅是一个示例,不应对本公开实施例的功能和使用范围带来任何限制。
如图16所示,该图像显示设备1600可以包括处理装置(例如中央处理器、图形处理器等)1601,其可以根据存储在只读存储器(ROM)1602中的程序或者从存储装置1608加载到随机访问存储器(RAM)1603中的程序而执行各种适当的动作和处理。在RAM 1603中,还存储有图像显示设备1600操作所需的各种程序和数据。处理装置1601、ROM 1602以及RAM 1603通过总线1604彼此相连。输入/输出(I/O)接口1605也连接至总线1604。
通常,以下装置可以连接至I/O接口1605:包括例如触摸屏、触摸板、键盘、鼠标、摄像头、麦克风、加速度计、陀螺仪等的输入装置1606;包括例如液晶显示器(LCD)、扬声器、振动器等的输出装置1607;包括例如磁带、硬盘等的存储装置1608;以及通信装置1609。通信装置1609可以允许图像显示设备1600与其他设备进行无线或有线通信以交换数据。虽然图16示出了具有各种装置的图像显示设备1600,但是应理解的是,并不要求实施或具备所有示出的装置。可以替代地实施或具备更多或更少的装置。
本公开实施例还提供了一种计算机可读存储介质，该存储介质存储有计算机程序，当计算机程序被处理器执行时，使得处理器实现上述实施例中的图像显示方法。
特别地,根据本公开的实施例,上文参考流程图描述的过程可以被实现为计算机软件程序。
本公开实施例还提供了一种计算机程序产品,该计算机程序产品可以包括计算机程序,当计算机程序被处理器执行时,使得处理器实现上述实施例中的图像显示方法。
例如,本公开的实施例包括一种计算机程序产品,其包括承载在非暂态计算机可读介质上的计算机程序,该计算机程序包含用于执行流程图所示的方法的程序代码。在这样的实施例中,该计算机程序可以通过通信装置1609从网络上被下载和安装,或者从存储装置1608被安装,或者从ROM 1602被安装。在该计算机程序被处理装置1601执行时,执行本公开实施例的图像显示方法中限定的上述功能。
需要说明的是,本公开上述的计算机可读介质可以是计算机可读信号介质或者计算机可读存储介质或者是上述两者的任意组合。计算机可读存储介质例如可以是——但不限于——电、磁、光、电磁、红外线、或半导体的系统、装置或器件,或者任意以上的组合。计算机可读存储介质的更具体的例子可以包括但不限于:具有一个或多个导线的电连接、便携式计算机磁盘、硬盘、随机访问存储器(RAM)、只读存储器(ROM)、可擦式可编程只读存储器(EPROM或闪存)、光纤、便携式紧凑磁盘只读存储器(CD-ROM)、光存储器件、磁存储器件、或者上述的任意合适的组合。在本公开中,计算机可读存储介质可以是任何包含或存储程序的有形介质,该程序可以被指令执行系统、装置或者器件使用或者与其结合使用。而在本公开中,计算机可读信号介质可以包括在基带中或者作为载波一部分传播的数据信号,其中承载了计算机可读的程序代码。这种传播的数据信号可以采用多种形式,包括但不限于电磁信号、光信号或上述的任意合适的组合。计算机可读信号介质还可以是计算机可读存储介质以外的任何计算机可读介质,该计算机可读信号介质可以发送、传播或者传输用于由指令执行系统、装置或者器件使用或者与其结合使用的程序。计算机可读介质上包含的程序代码可以用任何适当的介质传输,包括但不限于:电线、光缆、RF(射频)等等,或者上述的任意合适的组合。
在一些实施方式中,客户端、服务器可以利用诸如HTTP之类的任何当前已知或未来研发的网络协议进行通信,并且可以与任意形式或介质的数字数据通信(例如,通信网络)互连。通信网络的示例包括局域网(“LAN”),广域网(“WAN”),网际网(例如,互联网)以及端对端网络(例如,ad hoc端对端网络),以及任何当前已知或未来研发的网络。
上述计算机可读介质可以是上述图像显示设备中所包含的;也可以是单独存在,而未装配入该图像显示设备中。
上述计算机可读介质承载有一个或者多个程序,当上述一个或者多个程序被该图像显示设备执行时,使得该图像显示设备执行:
当会话界面内显示的输入文本触发表情推荐事件时，在会话界面内，显示表情推荐面板，表情推荐面板显示有目标入口图标，目标入口图标用于触发显示自定义的目标表情图像；当检测到对目标入口图标的第一触发操作时，停止显示表情推荐面板；在会话界面内，显示目标入口图标对应的目标表情展示面板，目标表情展示面板显示有第一预览图像，第一预览图像为目标表情图像的预览图像。
在本公开实施例中,可以以一种或多种程序设计语言或其组合来编写用于执行本公开的操作的计算机程序代码,上述程序设计语言包括但不限于面向对象的程序设计语言—诸如Java、Smalltalk、C++,还包括常规的过程式程序设计语言—诸如“C”语言或类似的程序设计语言。程序代码可以完全地在用户计算机上执行、部分地在用户计算机上执行、作为一个独立的软件包执行、部分在用户计算机上部分在远程计算机上执行、或者完全在远程计算机或服务器上执行。在涉及远程计算机的情形中,远程计算机可以通过任意种类的网络——包括局域网(LAN)或广域网(WAN)—连接到用户计算机,或者,可以连接到外部计算机(例如利用因特网服务提供商来通过因特网连接)。
附图中的流程图和框图,图示了按照本公开各种实施例的系统、方法和计算机程序产品的可能实现的体系架构、功能和操作。在这点上,流程图或框图中的每个方框可以代表一个模块、程序段、或代码的一部分,该模块、程序段、或代码的一部分包含一个或多个用于实现规定的逻辑功能的可执行指令。也应当注意,在有些作为替换的实现中,方框中所标注的功能也可以以不同于附图中所标注的顺序发生。例如,两个接连地表示的方框实际上可以基本并行地执行,它们有时也可以按相反的顺序执行,这依所涉及的功能而定。也要注意的是,框图和/或流程图中的每个方框、以及框图和/或流程图中的方框的组合,可以用执行规定的功能或操作的专用的基于硬件的系统来实现,或者可以用专用硬件与计算机指令的组合来实现。
描述于本公开实施例中所涉及到的单元可以通过软件的方式实现,也可以通过硬件的方式来实现。其中,单元的名称在某种情况下并不构成对该单元本身的限定。
本文中以上描述的功能可以至少部分地由一个或多个硬件逻辑部件来执行。例如,非限制性地,可以使用的示范类型的硬件逻辑部件包括:现场可编程门阵列(FPGA)、专用集成电路(ASIC)、专用标准产品(ASSP)、片上系统(SOC)、复杂可编程逻辑设备(CPLD)等等。
在本公开的上下文中,机器可读介质可以是有形的介质,其可以包含或存储以供指令执行系统、装置或设备使用或与指令执行系统、装置或设备结合地使用的程序。机器可读介质可以是机器可读信号介质或机器可读储存介质。机器可读介质可以包括但不限于电子的、磁性的、光学的、电磁的、红外的、或半导体系统、装置或设备,或者上述内容的任何合适组合。机器可读存储介质的更具体示例会包括基于一个或多个线的电气连接、便携式计算机盘、硬盘、随机存取存储器(RAM)、只读存储器(ROM)、可擦除可编程只读存储器(EPROM或快闪存储器)、光纤、便捷式紧凑盘只读存储器(CD-ROM)、光学储存设备、磁储存设备、或上述内容的任何合适组合。
以上描述仅为本公开的较佳实施例以及对所运用技术原理的说明。本领域技术人员应当理解，本公开中所涉及的公开范围，并不限于上述技术特征的特定组合而成的技术方案，同时也应涵盖在不脱离上述公开构思的情况下，由上述技术特征或其等同特征进行任意组合而形成的其它技术方案。例如上述特征与本公开中公开的（但不限于）具有类似功能的技术特征进行互相替换而形成的技术方案。
此外,虽然采用特定次序描绘了各操作,但是这不应当理解为要求这些操作以所示出的特定次序或以顺序次序执行来执行。在一定环境下,多任务和并行处理可能是有利的。同样地,虽然在上面论述中包含了若干具体实现细节,但是这些不应当被解释为对本公开的范围的限制。在单独的实施例的上下文中描述的某些特征还可以组合地实现在单个实施例中。相反地,在单个实施例的上下文中描述的各种特征也可以单独地或以任何合适的子组合的方式实现在多个实施例中。
尽管已经采用特定于结构特征和/或方法逻辑动作的语言描述了本主题,但是应当理解所附权利要求书中所限定的主题未必局限于上面描述的特定特征或动作。相反,上面所描述的特定特征和动作仅仅是实现权利要求书的示例形式。
Claims (16)
- 一种图像显示方法,其特征在于,包括:当会话界面内显示的输入文本触发表情推荐事件时,在所述会话界面内,显示表情推荐面板,所述表情推荐面板显示有目标入口图标,所述目标入口图标用于触发显示自定义的目标表情图像;当检测到对所述目标入口图标的第一触发操作时,停止显示所述表情推荐面板;在所述会话界面内,显示所述目标入口图标对应的目标表情展示面板,所述目标表情展示面板显示有第一预览图像,所述第一预览图像为所述目标表情图像的预览图像。
- 根据权利要求1所述的方法,其特征在于,所述会话界面为第一用户与第二用户实现会话聊天的界面;所述目标表情图像包括第一表情图像,所述第一表情图像根据所述第一用户的第一人脸图像和所述第二用户的第二人脸图像生成。
- 根据权利要求2所述的方法,其特征在于,在所述会话界面内显示的输入文本触发表情推荐事件之前,所述方法还包括:显示所述第二用户向所述第一用户发送的目标邀请信息;当检测到对所述目标邀请信息的第二触发操作时,向服务器发送携带有所述第一用户的第一用户标识和所述第二用户的第二用户标识的第一生成请求,所述第一生成请求用于指示所述服务器根据与所述第一用户标识关联存储的所述第一人脸图像、与所述第二用户标识关联存储的所述第二人脸图像和第一表情模板图像生成所述第一表情图像;接收所述服务器反馈的所述第一表情图像。
- 根据权利要求3所述的方法,其特征在于,所述目标邀请信息包括下列中的任一项:第一邀请信息,所述第一邀请信息为所述第二用户通过触发所述会话界面内显示的第一邀请提示信息向所述第一用户发送的邀请信息;第二邀请信息,所述第二邀请信息为所述第二用户通过触发用于展示第二表情图像的表情展示面板内显示的第二邀请提示信息向所述第一用户发送的邀请信息,所述第二表情图像根据所述第二人脸图像生成。
- 根据权利要求3所述的方法,其特征在于,所述目标邀请信息携带有目标模板标识,所述目标模板标识为所述第二用户选择的表情模板的模板标识;所述第一生成请求还携带有所述目标模板标识,所述第一表情模板图像为所述目标模板标识对应的表情模板图像。
- 根据权利要求1所述的方法,其特征在于,所述会话界面为第一用户与第二用户实现会话聊天的界面;在检测到不存在第一表情图像且存在第三表情图像的情况下,所述目标表情图像包括所述第三表情图像,所述第一表情图像根据所述第一用户的第一人脸图像和所述第二用户的第二人脸图像生成,所述第三表情图像根据所述第一人脸图像生成。
- 根据权利要求1所述的方法,其特征在于,所述会话界面为第一用户与第二用户实现会话聊天的界面;其中,在检测到不存在第三表情图像的情况下,所述目标表情图像包括第二表情模板图像,所述第三表情图像根据所述第一用户的第一人脸图像和所述第二表情模板图像生成。
- 根据权利要求7所述的方法,其特征在于,所述目标表情展示面板还显示有表情生成触发控件;在显示所述目标入口图标对应的目标表情展示面板之后,所述方法还包括:当检测到对所述表情生成触发控件的第三触发操作时,显示人脸采集界面;在所述人脸采集界面采集到所述第一人脸图像的情况下,向服务器发送携带有所述第一人脸图像的第二生成请求,所述第二生成请求用于指示所述服务器根据所述第一人脸图像和所述第二表情模板图像生成所述第三表情图像;接收所述服务器反馈的所述第三表情图像;将所述第一预览图像替换为第二预览图像进行显示,所述第二预览图像为所述第三表情图像的预览图像。
- 根据权利要求1所述的方法,其特征在于,所述第一预览图像包括以预设文本样式显示的第一目标文本;在所述输入文本的文字数量小于或等于预设数量阈值的情况下,所述第一目标文本包括所述输入文本;在所述输入文本的文字数量大于所述预设数量阈值的情况下,所述第一目标文本包括预设文本。
- 根据权利要求1所述的方法,其特征在于,所述目标入口图标包括下列中的任一项:第一入口图像,所述第一入口图像为在所述第一预览图像中随机选择的图像;第二入口图像,所述第二入口图像为与所述输入文本所属的情感类型相同的目标表情图像的预览图像。
- 根据权利要求1所述的方法,其特征在于,所述目标入口图标包括以预设文本样式显示的第二目标文本,所述第二目标文本包括所述输入文本中的前预设数量个文字。
- 根据权利要求1所述的方法,其特征在于,所述表情推荐面板显示有第三预览图像,所述第三预览图像包括所述目标入口图标和第四表情图像的预览图像,所述第四表情图像为与所述输入文本所属的情感类型相同的非自定义的表情图像;在所述显示表情推荐面板之后,所述方法还包括:对所述表情推荐面板的显示时长进行计时;在所述显示时长达到预设时长且未触发所述第三预览图像的情况下,停止显示所述表情推荐面板;在所述会话界面内,显示所述目标入口图标。
- 根据权利要求12所述的方法,其特征在于,在所述显示所述目标入口图标之后,所述方法还包括:在所述会话界面内未显示所述输入文本的情况下,停止显示所述目标入口图标。
- 一种图像显示装置,其特征在于,包括:第一显示单元,配置为当会话界面内显示的输入文本触发表情推荐事件时,在所述会话界面内,显示表情推荐面板,所述表情推荐面板显示有目标入口图标,所述目标入口图标用于触发显示自定义的目标表情图像;第二显示单元,配置为当检测到对所述目标入口图标的第一触发操作时,停止显示所述表情推荐面板;第三显示单元,配置为在所述会话界面内,显示所述目标入口图标对应的目标表情展示面板,所述目标表情展示面板显示有第一预览图像,所述第一预览图像为所述目标表情图像的预览图像。
- 一种图像显示设备,其特征在于,包括:处理器;存储器,用于存储可执行指令;其中,所述处理器用于从所述存储器中读取所述可执行指令,并执行所述可执行指令以实现上述权利要求1-13中任一项所述的图像显示方法。
- 一种计算机可读存储介质,其特征在于,所述存储介质存储有计算机程序,当所述计算机程序被处理器执行时,使得处理器实现上述权利要求1-13中任一项所述的图像显示方法。
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP22742036.1A EP4270186A4 (en) | 2021-01-22 | 2022-01-11 | IMAGE DISPLAY METHOD AND APPARATUS, DEVICE AND MEDIUM |
JP2023544244A JP2024506497A (ja) | 2021-01-22 | 2022-01-11 | 画像表示方法、装置、デバイス及び記憶媒体 |
US18/355,873 US12106410B2 (en) | 2021-01-22 | 2023-07-20 | Customizing emojis for users in chat applications |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110088297.1A CN114816599B (zh) | 2021-01-22 | 2021-01-22 | 图像显示方法、装置、设备及介质 |
CN202110088297.1 | 2021-01-22 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US18/355,873 Continuation US12106410B2 (en) | 2021-01-22 | 2023-07-20 | Customizing emojis for users in chat applications |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022156557A1 (zh) | 2022-07-28 |
Family
ID=82523874
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2022/071150 WO2022156557A1 (zh) | 2021-01-22 | 2022-01-11 | 图像显示方法、装置、设备及介质 |
Country Status (5)
Country | Link |
---|---|
US (1) | US12106410B2 (zh) |
EP (1) | EP4270186A4 (zh) |
JP (1) | JP2024506497A (zh) |
CN (1) | CN114816599B (zh) |
WO (1) | WO2022156557A1 (zh) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US12111977B2 (en) * | 2022-07-06 | 2024-10-08 | Bonggeun Kim | Device and method for inputting characters |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150286371A1 (en) * | 2012-10-31 | 2015-10-08 | Aniways Advertising Solutions Ltd. | Custom emoticon generation |
CN108227956A (zh) * | 2018-01-10 | 2018-06-29 | 厦门快商通信息技术有限公司 | 一种聊天工具表情推荐方法及系统 |
CN109120866A (zh) * | 2018-09-27 | 2019-01-01 | 腾讯科技(深圳)有限公司 | 动态表情生成方法、装置、计算机可读存储介质和计算机设备 |
CN109215007A (zh) * | 2018-09-21 | 2019-01-15 | 维沃移动通信有限公司 | 一种图像生成方法及终端设备 |
CN109948093A (zh) * | 2017-07-18 | 2019-06-28 | 腾讯科技(深圳)有限公司 | 表情图片生成方法、装置及电子设备 |
CN111541950A (zh) * | 2020-05-07 | 2020-08-14 | 腾讯科技(深圳)有限公司 | 表情的生成方法、装置、电子设备及存储介质 |
CN112532507A (zh) * | 2019-09-17 | 2021-03-19 | 上海掌门科技有限公司 | 用于呈现表情图像、用于发送表情图像的方法和设备 |
CN113342435A (zh) * | 2021-05-27 | 2021-09-03 | 网易(杭州)网络有限公司 | 一种表情处理方法、装置、计算机设备及存储介质 |
Family Cites Families (30)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN100570545C (zh) * | 2007-12-17 | 2009-12-16 | 腾讯科技(深圳)有限公司 | 表情输入方法及装置 |
US20130159919A1 (en) * | 2011-12-19 | 2013-06-20 | Gabriel Leydon | Systems and Methods for Identifying and Suggesting Emoticons |
US10303746B1 (en) * | 2012-12-21 | 2019-05-28 | CRLK, Inc. | Method for coding a vanity message for display |
IL226047A (en) * | 2013-04-29 | 2017-12-31 | Hershkovitz Reshef May | A method and system for giving personal expressions |
WO2016018111A1 (en) * | 2014-07-31 | 2016-02-04 | Samsung Electronics Co., Ltd. | Message service providing device and method of providing content via the same |
CN104298429B (zh) * | 2014-09-25 | 2018-05-04 | 北京搜狗科技发展有限公司 | 一种基于输入的信息展示方法和输入法系统 |
US20170046065A1 (en) * | 2015-04-07 | 2017-02-16 | Intel Corporation | Avatar keyboard |
US20170018289A1 (en) * | 2015-07-15 | 2017-01-19 | String Theory, Inc. | Emoji as facetracking video masks |
US10445425B2 (en) * | 2015-09-15 | 2019-10-15 | Apple Inc. | Emoji and canned responses |
US10025972B2 (en) * | 2015-11-16 | 2018-07-17 | Facebook, Inc. | Systems and methods for dynamically generating emojis based on image analysis of facial features |
CN105608715B (zh) * | 2015-12-17 | 2019-12-10 | 广州华多网络科技有限公司 | 一种在线合影方法及系统 |
CA3009758A1 (en) * | 2015-12-29 | 2017-07-06 | Mz Ip Holdings, Llc | Systems and methods for suggesting emoji |
CN105700703A (zh) * | 2016-02-24 | 2016-06-22 | 北京小牛互联科技有限公司 | 一种在键盘的文字输入界面嵌入表情并支持自定义表情的方法和装置 |
WO2018057541A1 (en) * | 2016-09-20 | 2018-03-29 | Google Llc | Suggested responses based on message stickers |
CN106331529A (zh) * | 2016-10-27 | 2017-01-11 | 广东小天才科技有限公司 | 一种图像拍摄方法及装置 |
CN106875460A (zh) * | 2016-12-27 | 2017-06-20 | 深圳市金立通信设备有限公司 | 一种图片表情合成方法和终端 |
US10951562B2 (en) * | 2017-01-18 | 2021-03-16 | Snap. Inc. | Customized contextual media content item generation |
JP6360227B2 (ja) | 2017-04-13 | 2018-07-18 | 株式会社L is B | メッセージシステム |
US10348658B2 (en) * | 2017-06-15 | 2019-07-09 | Google Llc | Suggested items for use with embedded applications in chat conversations |
CN108038102B (zh) * | 2017-12-08 | 2021-05-04 | 北京小米移动软件有限公司 | 表情图像的推荐方法、装置、终端及存储介质 |
US11088983B2 (en) * | 2017-12-29 | 2021-08-10 | Titus Deac | Messaging system with prefabricated icons and methods of use |
CN108388557A (zh) * | 2018-02-06 | 2018-08-10 | 腾讯科技(深圳)有限公司 | 消息处理方法、装置、计算机设备和存储介质 |
US10834026B2 (en) * | 2019-01-24 | 2020-11-10 | Jiseki Health, Inc. | Artificial intelligence assisted service provisioning and modification for delivering message-based services |
CN111756917B (zh) * | 2019-03-29 | 2021-10-12 | 上海连尚网络科技有限公司 | 信息交互方法、电子设备和计算机可读介质 |
KR102186794B1 (ko) * | 2019-05-07 | 2020-12-04 | 임주은 | 커스텀 이모티콘을 생성하고 전송하는 장치 및 방법 |
CN110458916A (zh) * | 2019-07-05 | 2019-11-15 | 深圳壹账通智能科技有限公司 | 表情包自动生成方法、装置、计算机设备及存储介质 |
CN111726536B (zh) * | 2020-07-03 | 2024-01-05 | 腾讯科技(深圳)有限公司 | 视频生成方法、装置、存储介质及计算机设备 |
CN111966804A (zh) * | 2020-08-11 | 2020-11-20 | 深圳传音控股股份有限公司 | 一种表情处理方法、终端及存储介质 |
CN112199032A (zh) * | 2020-09-30 | 2021-01-08 | 北京搜狗科技发展有限公司 | 一种表情推荐方法、装置和电子设备 |
CN112131422A (zh) * | 2020-10-23 | 2020-12-25 | 腾讯科技(深圳)有限公司 | 表情图片生成方法、装置、设备及介质 |
- 2021-01-22 CN: CN202110088297.1A (CN114816599B), active
- 2022-01-11 WO: PCT/CN2022/071150 (WO2022156557A1), application filing
- 2022-01-11 EP: EP22742036.1A (EP4270186A4), pending
- 2022-01-11 JP: JP2023544244A (JP2024506497A), pending
- 2023-07-20 US: US18/355,873 (US12106410B2), active
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150286371A1 (en) * | 2012-10-31 | 2015-10-08 | Aniways Advertising Solutions Ltd. | Custom emoticon generation |
CN109948093A (zh) * | 2017-07-18 | 2019-06-28 | 腾讯科技(深圳)有限公司 | 表情图片生成方法、装置及电子设备 |
CN108227956A (zh) * | 2018-01-10 | 2018-06-29 | 厦门快商通信息技术有限公司 | 一种聊天工具表情推荐方法及系统 |
CN109215007A (zh) * | 2018-09-21 | 2019-01-15 | 维沃移动通信有限公司 | 一种图像生成方法及终端设备 |
CN109120866A (zh) * | 2018-09-27 | 2019-01-01 | 腾讯科技(深圳)有限公司 | 动态表情生成方法、装置、计算机可读存储介质和计算机设备 |
CN112532507A (zh) * | 2019-09-17 | 2021-03-19 | 上海掌门科技有限公司 | 用于呈现表情图像、用于发送表情图像的方法和设备 |
CN111541950A (zh) * | 2020-05-07 | 2020-08-14 | 腾讯科技(深圳)有限公司 | 表情的生成方法、装置、电子设备及存储介质 |
CN113342435A (zh) * | 2021-05-27 | 2021-09-03 | 网易(杭州)网络有限公司 | 一种表情处理方法、装置、计算机设备及存储介质 |
Non-Patent Citations (1)
Title |
---|
See also references of EP4270186A4 |
Also Published As
Publication number | Publication date |
---|---|
CN114816599A (zh) | 2022-07-29 |
US12106410B2 (en) | 2024-10-01 |
CN114816599B (zh) | 2024-02-27 |
JP2024506497A (ja) | 2024-02-14 |
EP4270186A4 (en) | 2024-05-15 |
US20230410394A1 (en) | 2023-12-21 |
EP4270186A1 (en) | 2023-11-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP7529236B2 (ja) | インタラクティブ情報処理方法、装置、機器、及び媒体 | |
KR102173536B1 (ko) | 공유된 관심사를 갖는 메시지들의 갤러리 | |
WO2022105862A1 (zh) | 视频生成及显示方法、装置、设备、介质 | |
WO2018010682A1 (zh) | 直播方法、直播数据流展示方法和终端 | |
WO2022121626A1 (zh) | 视频显示及处理方法、装置、系统、设备、介质 | |
US20180293088A1 (en) | Interactive comment interaction method and apparatus | |
WO2022105710A1 (zh) | 一种会议纪要的交互方法、装置、设备及介质 | |
JP7463519B2 (ja) | ビデオに基づくインタラクション実現方法、装置、機器および媒体 | |
CN113014854B (zh) | 互动记录的生成方法、装置、设备及介质 | |
WO2020221103A1 (zh) | 显示用户情绪的方法及设备 | |
CN115079884B (zh) | 会话消息的显示方法、装置、设备及存储介质 | |
US20220092071A1 (en) | Integrated Dynamic Interface for Expression-Based Retrieval of Expressive Media Content | |
CN115379136B (zh) | 特效道具处理方法、装置、电子设备及存储介质 | |
WO2024037491A1 (zh) | 媒体内容处理方法、装置、设备及存储介质 | |
CN113949901A (zh) | 评论分享方法、装置和电子设备 | |
US12106410B2 (en) | Customizing emojis for users in chat applications | |
CN115097984A (zh) | 交互方法、装置、电子设备和存储介质 | |
WO2023134558A1 (zh) | 交互方法、装置、电子设备、存储介质和程序产品 | |
CN111581554A (zh) | 一种信息推荐方法及装置 | |
CN110704151A (zh) | 一种信息处理方法、装置和电子设备 | |
CN116170681A (zh) | 媒体内容发送方法、装置、设备及存储介质 | |
CN115499672B (zh) | 图像显示方法、装置、设备及存储介质 | |
CN114780190B (zh) | 消息处理方法、装置、电子设备及存储介质 | |
EP4418089A1 (en) | Data processing method and apparatus, electronic device, and storage medium | |
US20240357197A1 (en) | Sharing of content collections |
Legal Events
Date | Code | Title | Description
---|---|---|---
 | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 22742036; Country of ref document: EP; Kind code of ref document: A1
 | WWE | Wipo information: entry into national phase | Ref document number: 2023544244; Country of ref document: JP
 | ENP | Entry into the national phase | Ref document number: 2022742036; Country of ref document: EP; Effective date: 20230724
 | NENP | Non-entry into the national phase | Ref country code: DE