CN110780955A - Method and equipment for processing emoticon message


Info

Publication number
CN110780955A
CN110780955A
Authority
CN
China
Prior art keywords
message
picture information
information
character string
emotion
Prior art date
Legal status (assumed; not a legal conclusion)
Granted
Application number
CN201910838924.1A
Other languages
Chinese (zh)
Other versions
CN110780955B (en)
Inventor
梁文昭
Current Assignee (listed assignee may be inaccurate)
Lianshang Xinchang Network Technology Co Ltd
Original Assignee
Lianshang Xinchang Network Technology Co Ltd
Priority date (assumed; not a legal conclusion)
Filing date
Publication date
Application filed by Lianshang Xinchang Network Technology Co Ltd
Priority to CN201910838924.1A
Publication of CN110780955A
Application granted
Publication of CN110780955B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/451 Execution arrangements for user interfaces
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q 50/01 Social networking

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • User Interface Of Digital Computer (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The application aims to provide a method for processing emoticon messages, which comprises the following steps: in response to a selection trigger operation by a user on a first emoticon message in a current session page of an application, presenting an editing window related to the first emoticon message in the current session page; in response to an information input operation by the user in the editing window, generating a corresponding second emoticon message by adding the character string input by the user through the information input operation to the picture information of the first emoticon message; and storing the second emoticon message in an emoticon library, or sending the second emoticon message to the session corresponding to the current session page or to other sessions. The method and device improve the user's experience of using emoticon packs.

Description

Method and equipment for processing emoticon message
Technical Field
The present application relates to the field of communications, and in particular, to a technique for processing emoticon messages.
Background
An emoticon is a network expression symbol that takes many forms, including plain symbols, static pictures, and animated GIFs. In recent years, driven by the rapid development of technology and new media, emoticons have changed and been updated very frequently, gradually growing into a diverse emoticon culture and becoming a new carrier of expression. Emoticons are not only widely used in social software as a way for users to communicate with one another, but have also become an effective channel for close communication between brands and consumers, and more and more enterprise brands are adopting emoticon-based marketing.
Disclosure of Invention
An object of the present application is to provide a method and a device for processing emoticon messages.
According to one aspect of the present application, there is provided a method for processing emoticon messages, the method comprising:
in response to a selection trigger operation by a user on a first emoticon message in a current session page of an application, presenting an editing window related to the first emoticon message in the current session page;
in response to an information input operation by the user in the editing window, generating a corresponding second emoticon message by adding the character string input by the user through the information input operation to the picture information of the first emoticon message;
and storing the second emoticon message in an emoticon library, or sending the second emoticon message to the session corresponding to the current session page or to other sessions.
According to another aspect of the present application, there is provided a method for processing emoticon messages, the method comprising:
in response to a processing trigger operation by a user on a first emoticon message in a current session page of an application, if a character string exists in the input box of the current session page, generating a corresponding second emoticon message by adding that character string to the picture information of the first emoticon message;
and storing the second emoticon message in an emoticon library, or sending the second emoticon message to the session corresponding to the current session page or to other sessions.
According to one aspect of the present application, there is provided a user equipment for processing emoticon messages, the equipment comprising:
a one-one module, configured to, in response to a selection trigger operation by a user on a first emoticon message in a current session page of an application, present an editing window related to the first emoticon message in the current session page;
a one-two module, configured to, in response to an information input operation by the user in the editing window, generate a corresponding second emoticon message by adding the character string input by the user through the information input operation to the picture information of the first emoticon message;
and a one-three module, configured to store the second emoticon message in an emoticon library, or send the second emoticon message to the session corresponding to the current session page or to other sessions.
According to still another aspect of the present application, there is provided a user equipment for processing emoticon messages, the equipment comprising:
a two-one module, configured to, in response to a processing trigger operation by a user on a first emoticon message in a current session page of an application, if a character string exists in the input box of the current session page, generate a corresponding second emoticon message by adding that character string to the picture information of the first emoticon message;
and a two-two module, configured to store the second emoticon message in an emoticon library, or send the second emoticon message to the session corresponding to the current session page or to other sessions.
According to one aspect of the present application, there is provided a network device for processing emoticon messages, wherein the device comprises:
a processor; and
a memory arranged to store computer-executable instructions that, when executed, cause the processor to:
in response to a selection trigger operation by a user on a first emoticon message in a current session page of an application, present an editing window related to the first emoticon message in the current session page;
in response to an information input operation by the user in the editing window, generate a corresponding second emoticon message by adding the character string input by the user through the information input operation to the picture information of the first emoticon message;
and store the second emoticon message in an emoticon library, or send the second emoticon message to the session corresponding to the current session page or to other sessions.
According to still another aspect of the present application, there is provided a network device for processing emoticon messages, wherein the device comprises:
a processor; and
a memory arranged to store computer-executable instructions that, when executed, cause the processor to:
in response to a processing trigger operation by a user on a first emoticon message in a current session page of an application, if a character string exists in the input box of the current session page, generate a corresponding second emoticon message by adding that character string to the picture information of the first emoticon message;
and store the second emoticon message in an emoticon library, or send the second emoticon message to the session corresponding to the current session page or to other sessions.
According to one aspect of the present application, there is provided a computer-readable medium storing instructions that, when executed, cause a system to:
in response to a selection trigger operation by a user on a first emoticon message in a current session page of an application, present an editing window related to the first emoticon message in the current session page;
in response to an information input operation by the user in the editing window, generate a corresponding second emoticon message by adding the character string input by the user through the information input operation to the picture information of the first emoticon message;
and store the second emoticon message in an emoticon library, or send the second emoticon message to the session corresponding to the current session page or to other sessions.
According to yet another aspect of the present application, there is provided a computer-readable medium storing instructions that, when executed, cause a system to:
in response to a processing trigger operation by a user on a first emoticon message in a current session page of an application, if a character string exists in the input box of the current session page, generate a corresponding second emoticon message by adding that character string to the picture information of the first emoticon message;
and store the second emoticon message in an emoticon library, or send the second emoticon message to the session corresponding to the current session page or to other sessions.
Compared with the prior art, in the present application, in response to a selection trigger operation by a user on a first emoticon message in a current session page of an application, the user equipment presents an editing window related to the first emoticon message in the current session page, and then generates a corresponding second emoticon message by adding the character string input by the user to the picture information of the first emoticon message.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1a shows a schematic diagram of processing emoticon messages according to one embodiment of the present application;
FIG. 1b shows a schematic diagram of processing emoticon messages according to yet another embodiment of the present application;
FIG. 2 illustrates a flow diagram of a method for processing emoticon messages according to yet another embodiment of the present application;
FIG. 3 shows a flow diagram of a method for processing emoticon messages according to another embodiment of the present application;
FIG. 4 shows a device diagram of a user equipment for processing emoticon messages according to one embodiment of the present application;
FIG. 5 shows a device diagram of a user equipment for processing emoticon messages according to another embodiment of the present application;
FIG. 6 illustrates an exemplary system that can be used to implement the various embodiments described in this application.
The same or similar reference numbers in the drawings identify the same or similar elements.
Detailed Description
The present application is described in further detail below with reference to the attached figures.
In a typical configuration of the present application, the terminal, the device serving the network, and the trusted party each include one or more processors (e.g., Central Processing Units (CPUs)), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as Random Access Memory (RAM), and/or non-volatile memory, such as Read-Only Memory (ROM) or flash memory. Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, Phase-Change Memory (PCM), Programmable Random Access Memory (PRAM), Static Random-Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory or other memory technology, Compact Disc Read-Only Memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
The device referred to in this application includes, but is not limited to, a user equipment, a network device, or a device formed by integrating a user equipment and a network device through a network. The user equipment includes, but is not limited to, any mobile electronic product capable of human-computer interaction with a user (e.g., through a touch panel), such as a smart phone or a tablet computer; the mobile electronic product may employ any operating system, such as the Android operating system or the iOS operating system. The network device includes an electronic device capable of automatically performing numerical calculation and information processing according to preset or stored instructions, and its hardware includes, but is not limited to, a microprocessor, an Application Specific Integrated Circuit (ASIC), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a Digital Signal Processor (DSP), an embedded device, and the like. The network device includes, but is not limited to, a computer, a network host, a single network server, a set of multiple network servers, or a cloud of multiple servers; here, the cloud is composed of a large number of computers or network servers based on Cloud Computing, a kind of distributed computing in which one virtual supercomputer consists of a collection of loosely coupled computers. The network includes, but is not limited to, the internet, a wide area network, a metropolitan area network, a local area network, a VPN, a wireless ad hoc network, and the like. Preferably, the device may also be a program running on the user equipment, the network device, or a device formed by integrating the user equipment with the network device, the touch terminal, or the network device with the touch terminal through a network.
Of course, those skilled in the art will appreciate that the foregoing devices are merely examples, and that other existing or future devices, where applicable to the present application, are also intended to fall within the scope of the present application and are hereby incorporated by reference.
In the description of the present application, "a plurality" means two or more unless specifically limited otherwise.
Fig. 1a is a schematic diagram of processing emoticon messages according to an embodiment of the application. A user holds a user equipment in which a social application is installed and which presents a session page of the application. In response to the user's selection trigger operation on a first emoticon message in the current session page of the application (for example, an emoticon the user has picked at random), where the first emoticon message includes the character information "hello", the user equipment presents an editing window related to the first emoticon message in the current session page. In response to an information input operation by the user in the editing window (for example, the user inputs "hello"), the user equipment generates a corresponding second emoticon message by adding the character string input through that operation to the picture information of the first emoticon message; for example, it recognizes the existing characters through OCR technology and adds the "hello" character string to the picture information of the first emoticon message to generate the corresponding second emoticon message shown in Fig. 1b. The user equipment includes, but is not limited to, a mobile phone, a tablet, a computer, or another device having a touch screen.
Fig. 2 illustrates a method for processing emoticon messages according to an embodiment of the present application; the method includes step S101, step S102, and step S103. In step S101, in response to a selection trigger operation by a user on a first emoticon message in the current session page of an application, the user equipment presents an editing window related to the first emoticon message in the current session page; in step S102, in response to an information input operation by the user in the editing window, the user equipment generates a corresponding second emoticon message by adding the character string input by the user through the information input operation to the picture information of the first emoticon message; in step S103, the user equipment stores the second emoticon message in an emoticon library, or sends it to the session corresponding to the current session page or to other sessions.
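The three-step flow of Fig. 2 can be sketched as a minimal handler. All class and method names below are hypothetical; the patent does not prescribe any particular API.

```python
# Minimal sketch of steps S101-S103; all names are hypothetical.
class EmoticonEditor:
    def __init__(self, emoticon_library, session):
        self.emoticon_library = emoticon_library  # stored emoticons
        self.session = session                    # target conversation
        self.editing = None

    def on_select(self, first_emoticon):
        """S101: the user selects an emoticon; open an editing window for it."""
        self.editing = first_emoticon
        return {"window": "edit", "target": first_emoticon}

    def on_input(self, user_string):
        """S102: add the input string to the picture to form a second emoticon."""
        return {"picture": self.editing["picture"], "text": user_string}

    def on_confirm(self, second_emoticon, save=False):
        """S103: store in the emoticon library, or send to the session."""
        if save:
            self.emoticon_library.append(second_emoticon)
        else:
            self.session.append(second_emoticon)
        return second_emoticon
```

A caller would wire `on_select`, `on_input`, and `on_confirm` to the corresponding UI events of the session page.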
Specifically, in step S101, in response to a selection trigger operation by the user on a first emoticon message in the current session page of an application, the user equipment presents an editing window related to the first emoticon message in the current session page. For example, the selection trigger operation includes, but is not limited to, manually clicking on the first emoticon message to select it, or selecting it by sliding. Based on the user's selection trigger operation on the first emoticon message, the user equipment presents, in the current session page, an editing window related to the first emoticon message, in which the user can perform input operations. In some embodiments, in step S101, in response to the selection trigger operation by the user on the first emoticon message in the current session page of the application, the user equipment obtains the picture information of the first emoticon message and presents an editing window related to the first emoticon message in the current session page. Once the user equipment has obtained the picture information of the first emoticon message, there is a basis for the subsequent operation of adding a character string to that picture information. In some embodiments, obtaining the picture information of the first emoticon message includes: requesting, from the server corresponding to the application, access rights to the picture information of the first emoticon message; and, if the application is allowed to access the picture information of the first emoticon message, obtaining that picture information. For example, if the picture information of the first emoticon message may be accessed by the user (e.g., saved or edited), the user equipment reads it locally or requests it from the application's server.
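The access-rights branch described above can be sketched as follows. The server interface here is a stub and purely illustrative; the patent only requires that the application first request access rights from the server.

```python
# Sketch of the access-rights branch (hypothetical server interface).
class StubServer:
    """Stand-in for the application's server."""
    def __init__(self, allowed, pictures):
        self.allowed = allowed      # emoticon ids the app may access
        self.pictures = pictures    # id -> picture data

    def request_access(self, emoticon_id):
        return emoticon_id in self.allowed

    def fetch_picture(self, emoticon_id):
        return self.pictures[emoticon_id]

def get_picture_info(emoticon_id, server, screenshot_fallback):
    """Return picture data if access is granted; otherwise fall back to
    a screen capture of the current session page."""
    if server.request_access(emoticon_id):
        return server.fetch_picture(emoticon_id)
    return screenshot_fallback(emoticon_id)
```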
Here, unlike the prior-art case in which an emoticon cannot be accessed at all, if the user equipment obtains the access rights related to the first emoticon message from the server corresponding to the application, the picture information of the first emoticon message can be obtained more conveniently. In some embodiments, obtaining the picture information of the first emoticon message includes: if the application is forbidden to access the picture information of the first emoticon message, obtaining the picture information by performing a screen-capture operation on the current session page. For example, when the picture information of the first emoticon message cannot be accessed by the user (e.g., cannot be saved), then, upon selection of the first emoticon message, the user equipment presents the first emoticon message on the current session page, performs a screen-capture operation on that page, and identifies the picture information corresponding to the first emoticon message from the captured picture: it first extracts one or more sub-pictures from the captured picture using an algorithm such as edge detection, and then, according to the matching degree between the first emoticon message and each sub-picture, determines the sub-picture with the highest matching degree as the picture information corresponding to the first emoticon message. When the first emoticon message is matched against each sub-picture, it may be scaled to the same size as the sub-picture being matched.
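The sub-picture selection step can be illustrated with a toy grayscale model. A real implementation would extract candidate sub-pictures via edge detection in an image library and use a proper similarity measure; here, purely for illustration, images are plain 2D lists and the matching degree is a negative mean pixel difference.

```python
# Toy sketch of picking the best-matching sub-picture from a screenshot.
def scale(img, rows, cols):
    """Nearest-neighbour rescale, so the emoticon matches each candidate's size."""
    r0, c0 = len(img), len(img[0])
    return [[img[i * r0 // rows][j * c0 // cols] for j in range(cols)]
            for i in range(rows)]

def matching_degree(a, b):
    """Higher is better: negative mean absolute pixel difference."""
    diff = sum(abs(x - y) for ra, rb in zip(a, b) for x, y in zip(ra, rb))
    return -diff / (len(a) * len(a[0]))

def best_sub_picture(emoticon, candidates):
    """Return the candidate sub-picture with the highest matching degree."""
    return max(candidates,
               key=lambda c: matching_degree(scale(emoticon, len(c), len(c[0])), c))
```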
When the picture information of the first emoticon message is acquired through a screen capture, it can be obtained efficiently without being limited by the application's access rights; at the same time, once the user equipment has obtained the picture information of the first emoticon message, there is a basis for the subsequent operation of adding a character string to that picture information.
In step S102, in response to an information input operation by the user in the editing window, the user equipment generates a corresponding second emoticon message by adding the character string input by the user through the information input operation to the picture information of the first emoticon message. For example, the user equipment presents an editing window related to the first emoticon message in the current session page and, in response to an information input operation by the user in that window (for example, entering text information), generates the corresponding second emoticon message by adding the input character string to the picture information of the first emoticon message. In the simplest case, the character string is added directly to the picture information of the first emoticon message. Alternatively, in some embodiments, generating the corresponding second emoticon message by adding the character string input by the user through the information input operation to the picture information of the first emoticon message includes: obtaining the original picture information corresponding to the picture information of the first emoticon message; and generating the corresponding second emoticon message by adding the character string input by the user through the information input operation to that original picture information. For example, the user equipment acquires the first emoticon message, sends it to the corresponding server, obtains the original picture information corresponding to the picture information of the first emoticon message through an image-matching query, and then generates the corresponding second emoticon message by adding the character string input by the user to the original picture information.
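The server-side original-picture lookup can be sketched with a toy signature index. The signature function below is a stand-in for real image matching (for example, perceptual hashing or feature matching), and the exact-match lookup is a deliberate simplification.

```python
# Toy sketch of the server-side original-picture lookup.
def signature(img):
    """Toy image signature: per-row means, rounded. A real service would
    use perceptual hashing or feature matching instead."""
    return tuple(round(sum(row) / len(row)) for row in img)

class OriginalPictureIndex:
    """Maps image signatures to original (caption-free) pictures."""
    def __init__(self):
        self.index = {}

    def register(self, original_img):
        self.index[signature(original_img)] = original_img

    def lookup(self, query_img):
        # Simplification: exact signature match only; a real matcher
        # would tolerate the perturbation introduced by a caption.
        return self.index.get(signature(query_img))
```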
When the character string is added by first obtaining the original picture, the work on the user-equipment side is simplified, because the user equipment does not need to perform a complex operation to strip the original character string out of the first emoticon message. In some embodiments, generating the corresponding second emoticon message by adding the character string input by the user through the information input operation to the original picture information includes: determining a predetermined position for that character string in the original picture information; and generating the corresponding second emoticon message by adding the character string at the predetermined position in the original picture information. Once the predetermined position is confirmed, the character string sits better in the second emoticon message, presenting the second emoticon message to the user accurately. In some embodiments, determining the predetermined position of the character string in the original picture information comprises: adding the character string at a candidate position in the original picture information, and obtaining, through a classifier, score information for the character string added at that candidate position; and, if the score information is greater than or equal to a predetermined first score threshold, determining the candidate position as the predetermined position. The classifier works as in the conventional task: it learns classification rules from known, labeled training data and then classifies (or predicts) unknown data.
For example, the user equipment first adds the character string at a candidate position in the original picture information and, after the character string has been added, obtains through the classifier the score information for that candidate position (for example, the classifier derives the score from the positions of the existing character strings in the original picture); if the score information is greater than or equal to the predetermined first score threshold, the candidate position is determined as the predetermined position. In this way, the character string input by the user can be placed at a suitable position in the original picture, providing a basis for the subsequent presentation of the second emoticon message. In some embodiments, determining the predetermined position of the character string in the original picture information comprises: adding the character string at a candidate position in the original picture information, and obtaining, through a classifier, first score information for the character string added at that candidate position; if the first score information is smaller than a predetermined second score threshold, adjusting the parameter information of the character string to obtain second score information; and, if the second score information is greater than or equal to the predetermined second score threshold, determining the candidate position as the predetermined position.
For example, the user equipment first adds the character string at a candidate position in the original picture information and, after the character string has been added, obtains through the classifier the first score information for that candidate position. If the first score information is smaller than the predetermined second score threshold, the user equipment adjusts the parameter information of the character string, such as its font, size, and placement angle, to obtain second score information, and keeps adjusting until the second score information is greater than or equal to the predetermined second score threshold, at which point the candidate position is determined as the predetermined position. In this way, the character string input by the user can be placed at a suitable position in the original picture, providing a basis for the subsequent presentation of the second emoticon message.
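Both classifier-based variants (score once against a first threshold, or adjust the string's font, size, and angle until a second threshold is met) reduce to the same search loop. In this sketch the classifier is passed in as a plain scoring function and all names are hypothetical.

```python
# Sketch of choosing a predetermined position via classifier scores.
def choose_position(picture, text, candidates, classifier, threshold, variants):
    """Try each candidate position; if the initial score falls short,
    re-score with adjusted string parameters (font, size, angle)."""
    for pos in candidates:
        params = {"font": "default", "size": 12, "angle": 0}
        if classifier(picture, text, pos, params) >= threshold:
            return pos, params
        for params in variants:  # adjusted parameter sets, tried in order
            if classifier(picture, text, pos, params) >= threshold:
                return pos, params
    return None, None  # no candidate position scored high enough
```

Any trained model that maps (picture, text, position, parameters) to a score can stand in for `classifier`.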
In some embodiments, generating the corresponding second emoticon message by adding the character string input by the user through the information input operation to the picture information of the first emoticon message includes: removing a first character string from the first emoticon message to obtain its picture information; and generating the corresponding second emoticon message by adding the character string input by the user through the information input operation to that picture information. When the user equipment removes the first character string from the first emoticon message on its own to obtain the picture information, no server-side resources are occupied, which saves resources. In some embodiments, removing the first character string from the first emoticon message to obtain its picture information includes: recognizing the first character string in the first emoticon message; and removing that first character string to obtain the picture information of the first emoticon message. For example, the user equipment recognizes the first character string in the first emoticon message through OCR (optical character recognition) technology and removes it through image-processing technology to obtain the picture information of the first emoticon message; in some embodiments, the user equipment restores the picture information of the first emoticon message through image-processing technology. Once the first character string in the first emoticon message has been recognized, there is a basis for subsequently adding the character string input by the user.
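The recognize-and-remove step can be illustrated with a toy inpainting routine. A real system would obtain the text mask from an OCR engine and fill it with a dedicated inpainting algorithm; here, as an assumption for illustration, the mask is given and masked pixels are filled with the mean of their unmasked neighbours.

```python
# Toy sketch of removing a recognized caption from a grayscale image.
def remove_text(img, mask):
    """Replace masked (text) pixels with the mean of their unmasked
    4-neighbours; img is a 2D list, mask a set of (row, col) pairs."""
    out = [row[:] for row in img]  # leave the input image untouched
    for (i, j) in mask:
        neigh = [img[a][b]
                 for a, b in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                 if 0 <= a < len(img) and 0 <= b < len(img[0])
                 and (a, b) not in mask]
        if neigh:
            out[i][j] = sum(neigh) // len(neigh)
    return out
```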
In step S103, the user equipment stores the second emoticon message in an emoticon library or sends it to the session corresponding to the current session page or to other sessions. Once the second emoticon message is generated, the user equipment sends it to the session corresponding to the session page or to other sessions, improving the user's emoticon pack experience.
For example, a user holds user equipment on which a social application is installed, and a session page of the application is presented. In response to the user's selection trigger operation on a first emoticon message in the current session page of the application (for example, one emoticon randomly selected by the user), wherein the first emoticon message includes the character information "byebye", the user equipment presents an editing window related to the first emoticon message in the current session page. In response to an information input operation of the user in the editing window (for example, the user inputs "byebye"), the user equipment generates a corresponding second emoticon message by adding the character string input through the information input operation to the picture information of the first emoticon message; for example, the user equipment identifies and removes the "byebye" character information through OCR technology, then adds the "byebye" character string to the picture information of the first emoticon message to generate the corresponding second emoticon message, and sends the second emoticon message including the "byebye" characters to the session corresponding to the session page or to other sessions.
Fig. 3 illustrates a method for processing an emoticon message according to an embodiment of the present application, which includes step S201 and step S202. In step S201, in response to a processing trigger operation of a user on a first emoticon message in a current session page of an application, if a character string exists in the input box of the current session page, the user equipment generates a corresponding second emoticon message by adding the character string to the picture information of the first emoticon message; in step S202, the user equipment stores the second emoticon message in an emoticon library or sends it to the session corresponding to the session page or to other sessions.
Specifically, in step S201, in response to a processing trigger operation of a user on a first emoticon message in a current session page of an application, if a character string exists in the input box of the current session page, the user equipment generates a corresponding second emoticon message by adding the character string to the picture information of the first emoticon message. For example, in response to a character string input operation of a user in the input box of a current session page of an application, the user equipment presents one or more emoticon messages corresponding to the character string and responds to a processing trigger operation of the user on a first emoticon message, wherein the processing trigger operation includes, without limitation, a sending trigger operation, an editing trigger operation and a saving trigger operation. For example, the user selects a first emoticon message from the one or more emoticon messages to send, and the user equipment adds the character string to the picture information of the first emoticon message to generate a corresponding second emoticon message. In some embodiments, the generating a corresponding second emoticon message by adding the character string to the picture information of the first emoticon message includes: acquiring the picture information of the first emoticon message; and generating a corresponding second emoticon message by adding the character string to the picture information of the first emoticon message. Once the user equipment has acquired the picture information of the first emoticon message, a basis is provided for performing the character string adding operation on that picture information. In some embodiments, the acquiring the picture information of the first emoticon message includes: requesting, from a server corresponding to the application, access authority for the picture information of the first emoticon message; and if the application is allowed to access the picture information of the first emoticon message, acquiring the picture information of the first emoticon message.
For example, if the picture information of the first emoticon message may be accessed (e.g., saved) by the user, the user equipment reads the picture information locally or requests it from the server of the application. In this case, unlike prior-art situations in which the emoticon cannot be accessed, once the user equipment acquires the access authority for the first emoticon message from the server corresponding to the application, the picture information of the first emoticon message can be acquired conveniently. In some embodiments, the acquiring the picture information of the first emoticon message includes: if the application is forbidden from accessing the picture information of the first emoticon message, acquiring the picture information of the first emoticon message through screen capture. For example, if the picture information of the first emoticon message may not be accessed (e.g., saved) by the user, then when the first emoticon message is selected, the user equipment presents the first emoticon message on the current session page, performs a screen capture operation on the current session page, and identifies the picture information corresponding to the first emoticon message from the captured picture. For example, the user equipment first obtains one or more sub-pictures from the captured picture through an algorithm such as edge detection, and then determines the sub-picture with the maximum matching degree as the picture information corresponding to the first emoticon message, according to the matching degree between the first emoticon message and each sub-picture. When matching the first emoticon message against each sub-picture, the first emoticon message may be scaled to the same size as the sub-picture being matched.
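The select-by-maximum-matching-degree step can be sketched as follows. Edge detection and true image scaling are out of scope here; this toy version defines the matching degree as the fraction of equal pixels between the emoticon and each equally sized candidate sub-picture, and all names are illustrative only.

```python
# Illustrative sketch: score each sub-picture extracted from the
# screen capture against the first emoticon message, and keep the
# sub-picture with the maximum matching degree.

def matching_degree(emoticon, candidate):
    """Fraction of positions where two equally sized pixel grids agree."""
    total = same = 0
    for row_a, row_b in zip(emoticon, candidate):
        for a, b in zip(row_a, row_b):
            total += 1
            same += (a == b)
    return same / total if total else 0.0

def best_sub_picture(emoticon, sub_pictures):
    """Return the sub-picture with the maximum matching degree."""
    return max(sub_pictures, key=lambda s: matching_degree(emoticon, s))
```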
Once the user equipment has acquired the picture information of the first emoticon message, a basis is provided for performing the character string adding operation on that picture information. In some embodiments, the generating a corresponding second emoticon message by adding the character string to the picture information of the first emoticon message includes: acquiring original picture information corresponding to the picture information of the first emoticon message; and generating a corresponding second emoticon message by adding the character string to the original picture information. For example, the user equipment acquires the first emoticon message, sends it to the corresponding server, acquires the original picture information corresponding to the picture information of the first emoticon message through an image matching query, and then generates a corresponding second emoticon message by adding the character string input by the user through the information input operation to the original picture information. Adding the character string to the acquired original picture simplifies operation on the user equipment side, since the user equipment does not need to perform a complex removal of the original character string in the first emoticon message.
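The flow above can be sketched end to end. This is a hypothetical simplification: the server's "image matching query" is reduced to a naive pixel-agreement similarity over a small list of stored originals, where a production system would use perceptual hashing or feature matching; `query_original` and `make_second_message` are illustrative names, not the patent's API.

```python
# Illustrative sketch: the server returns the stored caption-free
# original most similar to the received picture, and the second
# emoticon message is composed from that original plus the user's
# string.

def similarity(a, b):
    """Naive matching score: fraction of equal pixels."""
    total = same = 0
    for row_a, row_b in zip(a, b):
        for x, y in zip(row_a, row_b):
            total += 1
            same += (x == y)
    return same / total if total else 0.0

def query_original(received, originals):
    """Server-side stand-in for the image matching query."""
    return max(originals, key=lambda o: similarity(received, o))

def make_second_message(received, originals, user_string):
    """Compose the second emoticon message from the matched original."""
    original = query_original(received, originals)
    return {"picture": original, "text": user_string}
```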
In step S202, the user equipment stores the second emoticon message in an emoticon library or sends it to the session corresponding to the session page or to other sessions. Once the second emoticon message is generated, the user equipment sends it to the session corresponding to the session page or to other sessions, improving the user's emoticon pack experience.
In some embodiments, the method further includes step S203 (not shown). In step S203, in response to the processing trigger operation, if the input box of the current session page is empty, the user equipment sends the first emoticon message to the session corresponding to the session page or to other sessions. When the input box of the current session page is empty, the user equipment sends the first emoticon message directly, which improves message sending efficiency.
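The branch between steps S201 and S203 amounts to a simple dispatch on the input box's contents, sketched below. The `compose` helper and the dictionary message representation are illustrative stand-ins, not the patent's data model.

```python
# Illustrative sketch of the S201/S203 branch: a non-empty input box
# yields a second emoticon message built from the string; an empty box
# forwards the first emoticon message unchanged.

def compose(picture, text):
    """Stand-in for adding the string to the picture information."""
    return {"picture": picture, "text": text}

def handle_trigger(input_box, first_message):
    """Return the message that should be sent to the session."""
    text = input_box.strip()
    if text:                      # step S201: string present -> new message
        return compose(first_message["picture"], text)
    return first_message          # step S203: empty box -> send as-is
```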
For example, a user holds user equipment on which a social application is installed, and a session page of the application is presented. In response to an input operation of the user in the input box of the session page (for example, the user inputs the character string "hello"), the user equipment presents one or more emoticon messages corresponding to the "hello" character string. In response to a processing trigger operation of the user on a first emoticon message among the one or more emoticon messages (for example, one emoticon message randomly selected by the user serves as the first emoticon message), wherein the first emoticon message includes the "hello" character information, the user equipment adds the character string "hello" to the picture information of the first emoticon message to generate a corresponding second emoticon message; for example, the user equipment identifies and removes the "hello" character information through OCR technology, then adds the "hello" character string to the picture information of the first emoticon message to generate the corresponding second emoticon message, and sends the second emoticon message including the "hello" characters to the session corresponding to the session page or to other sessions.
Fig. 4 shows a user equipment for processing emoticon messages according to an embodiment of the present application, which includes a one-one module 101, a one-two module 102 and a one-three module 103: the one-one module 101 is configured to, in response to a selection trigger operation of a user on a first emoticon message in a current session page of an application, present an editing window related to the first emoticon message in the current session page; the one-two module 102 is configured to, in response to an information input operation of the user in the editing window, generate a corresponding second emoticon message by adding a character string input by the user through the information input operation to the picture information of the first emoticon message; the one-three module 103 is configured to store the second emoticon message in an emoticon library or send it to the session corresponding to the current session page or to other sessions.
Specifically, the one-one module 101 is configured to, in response to a selection trigger operation of a user on a first emoticon message in a current session page of an application, present an editing window related to the first emoticon message in the current session page. For example, the selection trigger operation includes, but is not limited to, a manual click selection or a slide selection of the first emoticon message. Based on the user's selection trigger operation on the first emoticon message, the user equipment presents an editing window related to the first emoticon message in the current session page, where the editing window can be used by the user for input operations. In some embodiments, the one-one module 101 is configured to, in response to a selection trigger operation of a user on a first emoticon message in a current session page of an application, acquire the picture information of the first emoticon message and present an editing window related to the first emoticon message in the current session page. Here, the operation of acquiring the picture information of the first emoticon message is the same as or similar to that of the embodiment shown in fig. 1, and is therefore not repeated and is incorporated herein by reference. In some embodiments, the acquiring the picture information of the first emoticon message includes: requesting, from a server corresponding to the application, access authority for the picture information of the first emoticon message; and if the application is allowed to access the picture information of the first emoticon message, acquiring the picture information of the first emoticon message. Here, the operation of acquiring the picture information of the first emoticon message is the same as or similar to that of the embodiment shown in fig. 1, and is therefore not repeated and is incorporated herein by reference.
In some embodiments, the acquiring the picture information of the first emoticon message includes: if the application is forbidden from accessing the picture information of the first emoticon message, acquiring the picture information of the first emoticon message by performing a screen capture operation on the current session page. Here, the operation of acquiring the picture information of the first emoticon message is the same as or similar to that of the embodiment shown in fig. 1, and is therefore not repeated and is incorporated herein by reference.
The one-two module 102 is configured to, in response to an information input operation of the user in the editing window, generate a corresponding second emoticon message by adding a character string input by the user through the information input operation to the picture information of the first emoticon message. For example, the user equipment presents an editing window related to the first emoticon message in the current session page, and in response to an information input operation of the user in the editing window (for example, an operation of inputting text information), the user equipment generates a corresponding second emoticon message by adding the character string input through the information input operation to the picture information of the first emoticon message. For example, the user equipment directly adds the character string to the picture information of the first emoticon message; or, in some embodiments, the generating a corresponding second emoticon message by adding the character string input by the user through the information input operation to the picture information of the first emoticon message includes: acquiring original picture information corresponding to the picture information of the first emoticon message; and generating a corresponding second emoticon message by adding the character string input by the user through the information input operation to the original picture information. Here, the operation of generating the corresponding second emoticon message by adding the character string input by the user through the information input operation to the picture information of the first emoticon message is the same as or similar to that of the embodiment shown in fig. 1, and is therefore not repeated and is incorporated herein by reference.
In some embodiments, the generating a corresponding second emoticon message by adding the character string input by the user through the information input operation to the original picture information includes: determining a predetermined position for the character string input by the user through the information input operation in the original picture information; and generating a corresponding second emoticon message by adding the character string to the predetermined position in the original picture information. Here, this operation is the same as or similar to that of the embodiment shown in fig. 1, and is therefore not repeated and is incorporated herein by reference. In some embodiments, the determining the predetermined position of the character string in the original picture information includes: adding the character string to a candidate position of the original picture information and obtaining, through a classifier, score information for the character string added at that candidate position; and if the score information is greater than or equal to a predetermined first score threshold, determining the candidate position as the predetermined position. Here, the classifier performs the conventional task of learning a classification rule from training data of known classes and then using it to classify (or predict) unknown data. The operation of determining the predetermined position of the character string in the original picture information is the same as or similar to that of the embodiment shown in fig. 1, and is therefore not repeated and is incorporated herein by reference.
In some embodiments, the determining the predetermined position of the character string in the original picture information includes: adding the character string to a candidate position of the original picture information and obtaining, through a classifier, first score information for the character string added at that candidate position; if the first score information is smaller than a predetermined second score threshold, adjusting the parameter information of the character string to obtain second score information; and if the second score information is greater than or equal to the predetermined second score threshold, determining the candidate position as the predetermined position. Here, the operation of determining the predetermined position of the character string in the original picture information is the same as or similar to that of the embodiment shown in fig. 1, and is therefore not repeated and is incorporated herein by reference.
In some embodiments, the generating a corresponding second emoticon message by adding the character string input by the user through the information input operation to the picture information of the first emoticon message includes: removing a first character string from the first emoticon message to acquire the picture information of the first emoticon message; and generating a corresponding second emoticon message by adding the character string input by the user through the information input operation to the picture information of the first emoticon message. Here, this operation is the same as or similar to that of the embodiment shown in fig. 1, and is therefore not repeated and is incorporated herein by reference. In some embodiments, the removing a first character string from the first emoticon message to acquire the picture information of the first emoticon message includes: identifying and acquiring the first character string in the first emoticon message; and removing the first character string from the first emoticon message to acquire the picture information of the first emoticon message. Here, the operation of removing the first character string from the first emoticon message to acquire the picture information of the first emoticon message is the same as or similar to that of the embodiment shown in fig. 1, and is therefore not repeated and is incorporated herein by reference.
The one-three module 103 is configured to store the second emoticon message in an emoticon library or send it to the session corresponding to the current session page or to other sessions. Once the second emoticon message is generated, the user equipment sends it to the session corresponding to the current session page or to other sessions, improving the user's emoticon pack experience.
Here, the specific implementation of the one-one module 101, the one-two module 102 and the one-three module 103 is the same as or similar to the embodiments of steps S101, S102 and S103 in fig. 1, and is therefore not repeated and is incorporated herein by reference.
Fig. 5 shows a user equipment for processing emoticon messages according to an embodiment of the present application, which includes a two-one module 201 and a two-two module 202: the two-one module 201 is configured to, in response to a processing trigger operation of a user on a first emoticon message in a current session page of an application, if a character string exists in the input box of the current session page, generate a corresponding second emoticon message by adding the character string to the picture information of the first emoticon message; the two-two module 202 is configured to store the second emoticon message in an emoticon library or send it to the session corresponding to the current session page or to other sessions.
Specifically, the two-one module 201 is configured to, in response to a processing trigger operation of a user on a first emoticon message in a current session page of an application, if a character string exists in the input box of the current session page, generate a corresponding second emoticon message by adding the character string to the picture information of the first emoticon message. For example, in response to a character string input operation of a user in the input box of a current session page of an application, the user equipment presents one or more emoticon messages corresponding to the character string, and in response to a processing trigger operation of the user on a first emoticon message (for example, the user selects the first emoticon message from the one or more emoticon messages), the user equipment adds the character string to the picture information of the first emoticon message to generate a corresponding second emoticon message. In some embodiments, the generating a corresponding second emoticon message by adding the character string to the picture information of the first emoticon message includes: acquiring the picture information of the first emoticon message; and generating a corresponding second emoticon message by adding the character string to the picture information of the first emoticon message. This operation is the same as or similar to that of the embodiment shown in fig. 2, and is therefore not repeated and is incorporated herein by reference. In some embodiments, the acquiring the picture information of the first emoticon message includes: requesting, from a server corresponding to the application, access authority for the picture information of the first emoticon message; and if the application is allowed to access the picture information of the first emoticon message, acquiring the picture information of the first emoticon message.
The operation of acquiring the picture information of the first emoticon message is the same as or similar to that of the embodiment shown in fig. 2, and is therefore not repeated and is incorporated herein by reference. In some embodiments, the acquiring the picture information of the first emoticon message includes: if the application is forbidden from accessing the picture information of the first emoticon message, acquiring the picture information of the first emoticon message through screen capture. This operation is the same as or similar to that of the embodiment shown in fig. 2, and is therefore not repeated and is incorporated herein by reference. In some embodiments, the generating a corresponding second emoticon message by adding the character string to the picture information of the first emoticon message includes: acquiring original picture information corresponding to the picture information of the first emoticon message; and generating a corresponding second emoticon message by adding the character string to the original picture information. This operation is the same as or similar to that of the embodiment shown in fig. 2, and is therefore not repeated and is incorporated herein by reference.
The two-two module 202 is configured to store the second emoticon message in an emoticon library or send it to the session corresponding to the current session page or to other sessions. Once the second emoticon message is generated, the user equipment sends it to the session corresponding to the session page or to other sessions, improving the user's emoticon pack experience.
In some embodiments, the user equipment further includes a two-three module 203 (not shown). The two-three module 203 is configured to, in response to the processing trigger operation, if the input box of the current session page is empty, send the first emoticon message to the session corresponding to the session page or to other sessions. The specific implementation of the two-three module 203 is the same as or similar to the embodiment of step S203, and is therefore not repeated and is incorporated herein by reference.
Here, the specific implementation of the two-one module 201 and the two-two module 202 is the same as or similar to the embodiments of steps S201 and S202 in fig. 2, and is therefore not repeated and is incorporated herein by reference.
In addition to the methods and apparatus described in the embodiments above, the present application also provides a computer-readable storage medium storing computer code which, when executed, performs the method described in any of the foregoing embodiments.
The present application also provides a computer program product which, when executed by a computer device, performs the method described in any of the foregoing embodiments.
The present application further provides a computer device, comprising:
one or more processors;
a memory for storing one or more computer programs;
wherein the one or more computer programs, when executed by the one or more processors, cause the one or more processors to implement the method described in any of the foregoing embodiments.
FIG. 6 illustrates an exemplary system that can be used to implement the various embodiments described herein;
in some embodiments, as shown in FIG. 6, the system 300 can be implemented as any of the devices in the various embodiments described. In some embodiments, system 300 may include one or more computer-readable media (e.g., system memory or NVM/storage 320) having instructions and one or more processors (e.g., processor(s) 305) coupled with the one or more computer-readable media and configured to execute the instructions to implement modules to perform the actions described herein.
For one embodiment, system control module 310 may include any suitable interface controllers to provide any suitable interface to at least one of processor(s) 305 and/or any suitable device or component in communication with system control module 310.
The system control module 310 may include a memory controller module 330 to provide an interface to the system memory 315. Memory controller module 330 may be a hardware module, a software module, and/or a firmware module.
System memory 315 may be used, for example, to load and store data and/or instructions for system 300. For one embodiment, system memory 315 may include any suitable volatile memory, such as suitable DRAM. In some embodiments, the system memory 315 may include a double data rate type four synchronous dynamic random access memory (DDR4 SDRAM).
For one embodiment, system control module 310 may include one or more input/output (I/O) controllers to provide an interface to NVM/storage 320 and communication interface(s) 325.
For example, NVM/storage 320 may be used to store data and/or instructions. NVM/storage 320 may include any suitable non-volatile memory (e.g., flash memory) and/or may include any suitable non-volatile storage device(s) (e.g., one or more Hard Disk Drives (HDDs), one or more Compact Disc (CD) drives, and/or one or more Digital Versatile Disc (DVD) drives).
NVM/storage 320 may include storage resources that are physically part of the device on which system 300 is installed or may be accessed by the device and not necessarily part of the device. For example, NVM/storage 320 may be accessible over a network via communication interface(s) 325.
Communication interface(s) 325 may provide an interface for system 300 to communicate over one or more networks and/or with any other suitable device. System 300 may wirelessly communicate with one or more components of a wireless network according to any of one or more wireless network standards and/or protocols.
For one embodiment, at least one of the processor(s) 305 may be packaged together with logic for one or more controller(s) (e.g., memory controller module 330) of the system control module 310. For one embodiment, at least one of the processor(s) 305 may be packaged together with logic for one or more controller(s) of the system control module 310 to form a System In Package (SiP). For one embodiment, at least one of the processor(s) 305 may be integrated on the same die with logic for one or more controller(s) of the system control module 310. For one embodiment, at least one of the processor(s) 305 may be integrated on the same die with logic for one or more controller(s) of the system control module 310 to form a system on a chip (SoC).
In various embodiments, system 300 may be, but is not limited to being: a server, a workstation, a desktop computing device, or a mobile computing device (e.g., a laptop computing device, a handheld computing device, a tablet, a netbook, etc.). In various embodiments, system 300 may have more or fewer components and/or different architectures. For example, in some embodiments, system 300 includes one or more cameras, a keyboard, a Liquid Crystal Display (LCD) screen (including a touch screen display), a non-volatile memory port, multiple antennas, a graphics chip, an Application Specific Integrated Circuit (ASIC), and speakers.
It should be noted that the present application may be implemented in software and/or a combination of software and hardware, for example, implemented using Application Specific Integrated Circuits (ASICs), general purpose computers or any other similar hardware devices. In one embodiment, the software programs of the present application may be executed by a processor to implement the steps or functions described above. Likewise, the software programs (including associated data structures) of the present application may be stored in a computer readable recording medium, such as RAM memory, magnetic or optical drive or diskette and the like. Additionally, some of the steps or functions of the present application may be implemented in hardware, for example, as circuitry that cooperates with the processor to perform various steps or functions.
In addition, some of the present application may be implemented as a computer program product, such as computer program instructions, which when executed by a computer, may invoke or provide methods and/or techniques in accordance with the present application through the operation of the computer. Those skilled in the art will appreciate that the form in which the computer program instructions reside on a computer-readable medium includes, but is not limited to, source files, executable files, installation package files, and the like, and that the manner in which the computer program instructions are executed by a computer includes, but is not limited to: the computer directly executes the instruction, or the computer compiles the instruction and then executes the corresponding compiled program, or the computer reads and executes the instruction, or the computer reads and installs the instruction and then executes the corresponding installed program. Computer-readable media herein can be any available computer-readable storage media or communication media that can be accessed by a computer.
Communication media include media by which communication signals containing, for example, computer readable instructions, data structures, program modules, or other data are transmitted from one system to another. Communication media may include conductive transmission media, such as cables and wires (e.g., fiber optic or coaxial), and wireless (non-conductive) media capable of propagating energy waves, such as acoustic, electromagnetic, RF, microwave, and infrared waves. Computer readable instructions, data structures, program modules, or other data may be embodied in a modulated data signal, for example in a wireless medium, such as a carrier wave or a similar mechanism, such as one employed in spread-spectrum techniques. The term "modulated data signal" means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. The modulation may use analog, digital, or hybrid techniques.
By way of example, and not limitation, computer-readable storage media may include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for the storage of information such as computer-readable instructions, data structures, program modules, or other data. For example, computer-readable storage media include, but are not limited to: volatile memory, such as random access memory (RAM, DRAM, SRAM); non-volatile memory, such as flash memory, the various read-only memories (ROM, PROM, EPROM, EEPROM), and magnetic and ferromagnetic/ferroelectric memories (MRAM, FeRAM); magnetic and optical storage devices (hard disk, tape, CD, DVD); and any other medium, now known or later developed, that can store computer-readable information/data for use by a computer system.
An embodiment according to the present application comprises an apparatus comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein the computer program instructions, when executed by the processor, trigger the apparatus to perform a method and/or a solution according to the aforementioned embodiments of the present application.
It will be evident to those skilled in the art that the present application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the apparatus claims may also be implemented by one unit or means in software or hardware. The terms first, second, etc. are used to denote names, but not any particular order.

Claims (18)

1. A method for processing emoticon messages, wherein the method comprises:
in response to a user's selection trigger operation on a first emoticon message in a current session page of an application, presenting an editing window related to the first emoticon message in the current session page;
in response to an information input operation of the user in the editing window, generating a corresponding second emoticon message by adding a character string input by the user through the information input operation to picture information of the first emoticon message;
and storing the second emoticon message to an emoticon library, or sending the second emoticon message to the session corresponding to the current session page or to other sessions.
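The flow of claim 1 can be sketched schematically in Python. This is an illustrative model only, not part of the patent: `EmoticonMessage`, `compose_second_message`, and `dispatch` are hypothetical names, and the picture information is represented by a plain string instead of real image data.

```python
from dataclasses import dataclass

@dataclass
class EmoticonMessage:
    picture: str       # stands in for the picture information (image payload)
    caption: str = ""  # character string rendered onto the picture

def compose_second_message(first: EmoticonMessage, user_input: str) -> EmoticonMessage:
    """Generate the second emoticon message by adding the user's
    character string to the first message's picture information."""
    return EmoticonMessage(picture=first.picture, caption=user_input)

def dispatch(msg: EmoticonMessage, target: str, library: list, sessions: dict) -> None:
    """Store the message to the emoticon library, or send it to a session."""
    if target == "library":
        library.append(msg)
    else:
        sessions.setdefault(target, []).append(msg)
```

In a real client the caption would be rasterized onto the image (e.g., with a drawing API) rather than stored alongside it; the separation here merely makes the claimed steps explicit.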
2. The method of claim 1, wherein the generating of the corresponding second emoticon message by adding the character string input by the user through the information input operation to the picture information of the first emoticon message comprises:
acquiring original picture information corresponding to the picture information of the first emoticon message;
and generating a corresponding second emoticon message by adding the character string input by the user through the information input operation to the original picture information.
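Claim 2 retrieves the original (caption-free) picture corresponding to the picture carried by the first message. A minimal sketch, assuming a hypothetical registry keyed by a sticker identifier (the registry name and functions are invented for illustration):

```python
# Hypothetical registry mapping a sticker identifier to its original,
# caption-free picture information.
ORIGINALS = {}

def register_original(sticker_id, original_picture):
    """Record the original picture for a sticker template."""
    ORIGINALS[sticker_id] = original_picture

def get_original_picture(sticker_id, fallback_picture):
    """Return the original picture if one is registered for the sticker;
    otherwise fall back to the picture carried by the message itself."""
    return ORIGINALS.get(sticker_id, fallback_picture)
```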
3. The method of claim 2, wherein the generating of the corresponding second emoticon message by adding the character string input by the user through the information input operation to the original picture information comprises:
determining a predetermined position, in the original picture information, of the character string input by the user through the information input operation;
and generating a corresponding second emoticon message by adding the character string to the predetermined position in the original picture information.
4. The method of claim 3, wherein the determining of the predetermined position of the character string in the original picture information comprises:
adding the character string to a candidate position of the original picture information, and obtaining, through a classifier, score information for the character string added to the candidate position of the original picture information;
and if the score information is greater than or equal to a preset first score threshold, determining the candidate position as the predetermined position.
5. The method of claim 3, wherein the determining of the predetermined position of the character string in the original picture information comprises:
adding the character string to a candidate position of the original picture information, and acquiring, through a classifier, first score information for the character string added to the candidate position of the original picture information;
if the first score information is smaller than a preset second score threshold, adjusting parameter information of the character string to obtain second score information;
and if the second score information is greater than or equal to the preset second score threshold, determining the candidate position as the predetermined position.
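The position-selection logic of claims 4 and 5 can be sketched as follows. This is a schematic, not the patented implementation: `score_fn` stands in for the classifier, the threshold values are illustrative, and the single "smaller font" parameter adjustment is one arbitrary example of adjusting the string's parameter information.

```python
def choose_position(text, candidates, score_fn,
                    first_threshold=0.8, second_threshold=0.6):
    """Pick a position for the text among candidate positions.

    Claim 4: accept a candidate whose classifier score meets the first
    threshold. Claim 5: otherwise adjust the string's parameters (here,
    a smaller font) and accept if the new score meets the second threshold.
    Returns (position, parameter adjustments), or (None, None) on failure.
    """
    for pos in candidates:
        if score_fn(text, pos, {}) >= first_threshold:
            return pos, {}
    adjusted = {"font_size": "small"}
    for pos in candidates:
        if score_fn(text, pos, adjusted) >= second_threshold:
            return pos, adjusted
    return None, None
```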
6. The method of claim 1, wherein the generating of the corresponding second emoticon message by adding the character string input by the user through the information input operation to the picture information of the first emoticon message comprises:
removing a first character string in the first emoticon message to acquire the picture information of the first emoticon message;
and generating a corresponding second emoticon message by adding the character string input by the user through the information input operation to the picture information of the first emoticon message.
7. The method of claim 6, wherein the removing of the first character string in the first emoticon message to acquire the picture information of the first emoticon message comprises:
identifying and acquiring the first character string in the first emoticon message;
and removing the first character string from the first emoticon message to acquire the picture information of the first emoticon message.
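The identify-remove-add steps of claims 6 and 7 can be modeled schematically. This is a toy model: the message is a plain dict, and the `caption` key stands in for the text a real implementation would detect in the image (e.g., via OCR) and inpaint away.

```python
def recaption(first_message, new_string):
    """Replace the string embedded in an emoticon message.

    Claim 7: identify the first character string in the message (the
    'caption' key stands in for OCR-style text detection), then remove
    it to recover the bare picture information. Claim 6: add the user's
    string to that picture to generate the second emoticon message.
    """
    detected = first_message.get("caption", "")        # identification step
    picture = {k: v for k, v in first_message.items()  # removal step
               if k != "caption"}
    return {**picture, "caption": new_string}          # addition step
```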
8. The method of claim 1, wherein the presenting of an editing window related to a first emoticon message in a current session page, in response to a user's selection trigger operation on the first emoticon message in the current session page of an application, comprises:
in response to a user's selection trigger operation on a first emoticon message in a current session page of an application, acquiring picture information of the first emoticon message, and presenting an editing window related to the first emoticon message in the current session page.
9. The method of claim 8, wherein the acquiring of the picture information of the first emoticon message comprises:
requesting, from a server corresponding to the application, access rights to the picture information of the first emoticon message;
and if the application is allowed to access the picture information of the first emoticon message, acquiring the picture information of the first emoticon message.
10. The method of claim 8 or 9, wherein the acquiring of the picture information of the first emoticon message comprises:
if the application is denied access to the picture information of the first emoticon message, acquiring the picture information of the first emoticon message by performing a screen capture operation on the current session page.
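The access-rights path of claim 9 and the screen-capture fallback of claim 10 can be expressed as a single acquisition routine. A sketch with hypothetical callables (`request_access`, `fetch_picture`, `capture_screen` are invented names standing in for the server request, the picture download, and the screenshot operation):

```python
def acquire_picture(message_id, request_access, fetch_picture, capture_screen):
    """Obtain the picture information of an emoticon message.

    Claim 9: ask the server for access rights and fetch the picture if
    access is allowed. Claim 10: if access is denied, fall back to a
    screen capture of the current session page.
    """
    if request_access(message_id):
        return fetch_picture(message_id)
    return capture_screen()
```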
11. A method for processing emoticon messages, wherein the method comprises:
in response to a user's processing trigger operation on a first emoticon message in a current session page of an application, if a character string is present in an input box of the current session page, generating a corresponding second emoticon message by adding the character string to picture information of the first emoticon message;
and storing the second emoticon message to an emoticon library, or sending the second emoticon message to the session corresponding to the current session page or to other sessions.
12. The method of claim 11, wherein the method further comprises:
in response to the processing trigger operation, if the input box of the current session page is empty, sending the first emoticon message to the session corresponding to the session page or to other sessions.
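The two branches of claims 11 and 12 amount to one dispatch decision on the input box's contents. A schematic sketch with hypothetical `compose` and `send` callables (not names from the patent):

```python
def handle_trigger(first_message, input_box_text, compose, send):
    """Process a trigger operation on an emoticon message.

    Claim 11: if the session page's input box holds a character string,
    compose a second message from it and send that. Claim 12: if the
    input box is empty, send the first message unchanged.
    """
    if input_box_text:
        second = compose(first_message, input_box_text)
        send(second)
        return second
    send(first_message)
    return first_message
```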
13. The method of claim 11, wherein the generating of a corresponding second emoticon message by adding the character string to the picture information of the first emoticon message comprises:
acquiring the picture information of the first emoticon message;
and generating a corresponding second emoticon message by adding the character string to the picture information of the first emoticon message.
14. The method of claim 13, wherein the acquiring of the picture information of the first emoticon message comprises:
requesting, from a server corresponding to the application, access rights to the picture information of the first emoticon message;
and if the application is allowed to access the picture information of the first emoticon message, acquiring the picture information of the first emoticon message.
15. The method of claim 13 or 14, wherein the acquiring of the picture information of the first emoticon message comprises:
if the application is denied access to the picture information of the first emoticon message, acquiring the picture information of the first emoticon message through a screen capture.
16. The method of any of claims 11-15, wherein the generating of a corresponding second emoticon message by adding the character string to the picture information of the first emoticon message comprises:
acquiring original picture information corresponding to the picture information of the first emoticon message;
and generating a corresponding second emoticon message by adding the character string to the original picture information.
17. An apparatus for processing emoticon messages, the apparatus comprising:
a processor; and
a memory arranged to store computer executable instructions that, when executed, cause the processor to perform the method of any of claims 1 to 16.
18. A computer-readable medium storing instructions that, when executed, cause a system to perform the operations of any of the methods of claims 1 to 16.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910838924.1A CN110780955B (en) 2019-09-05 2019-09-05 Method and equipment for processing expression message


Publications (2)

Publication Number Publication Date
CN110780955A true CN110780955A (en) 2020-02-11
CN110780955B CN110780955B (en) 2023-08-22

Family

ID=69384019

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910838924.1A Active CN110780955B (en) 2019-09-05 2019-09-05 Method and equipment for processing expression message

Country Status (1)

Country Link
CN (1) CN110780955B (en)



Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090282111A1 (en) * 2008-05-12 2009-11-12 Qualcomm Incorporated Methods and Apparatus for Referring Media Content
CN104780093A (en) * 2014-01-15 2015-07-15 阿里巴巴集团控股有限公司 Method and device for processing expression information in instant messaging process
US20150200881A1 (en) * 2014-01-15 2015-07-16 Alibaba Group Holding Limited Method and apparatus of processing expression information in instant communication
CN105160033A (en) * 2015-09-30 2015-12-16 北京奇虎科技有限公司 Expression character string processing method and device
CN106657650A (en) * 2016-12-26 2017-05-10 努比亚技术有限公司 System expression recommendation method and device, and terminal
CN108628911A (en) * 2017-03-24 2018-10-09 微软技术许可有限责任公司 It is predicted for expression input by user
CN107566243A (en) * 2017-07-11 2018-01-09 阿里巴巴集团控股有限公司 A kind of picture sending method and equipment based on instant messaging
CN109472849A (en) * 2017-09-07 2019-03-15 腾讯科技(深圳)有限公司 Method, apparatus, terminal device and the storage medium of image in processing application
CN110140106A (en) * 2017-11-20 2019-08-16 华为技术有限公司 According to the method and device of background image Dynamically Announce icon
CN108038102A (en) * 2017-12-08 2018-05-15 北京小米移动软件有限公司 Recommendation method, apparatus, terminal and the storage medium of facial expression image
CN110061900A (en) * 2018-01-18 2019-07-26 腾讯科技(深圳)有限公司 Message display method, device, terminal and computer readable storage medium
CN110174980A (en) * 2019-05-24 2019-08-27 上海掌门科技有限公司 A kind of method and apparatus that information being presented in session window

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113342435A (en) * 2021-05-27 2021-09-03 网易(杭州)网络有限公司 Expression processing method and device, computer equipment and storage medium
CN114693827A (en) * 2022-04-07 2022-07-01 深圳云之家网络有限公司 Expression generation method and device, computer equipment and storage medium
CN114880062A (en) * 2022-05-30 2022-08-09 网易(杭州)网络有限公司 Chat expression display method and device, electronic device and storage medium
CN114880062B (en) * 2022-05-30 2023-11-14 网易(杭州)网络有限公司 Chat expression display method, device, electronic device and storage medium


Similar Documents

Publication Publication Date Title
CN110336735B (en) Method and equipment for sending reminding message
CN110417641B (en) Method and equipment for sending session message
CN110765395B (en) Method and equipment for providing novel information
CN110266505B (en) Method and equipment for managing session group
CN109669657B (en) Method and equipment for conducting remote document collaboration
CN110336733B (en) Method and equipment for presenting emoticon
CN110780955B (en) Method and equipment for processing expression message
CN110333919B (en) Method and equipment for presenting social object information
CN108984234B (en) Calling prompt method for mobile terminal and camera device
CN110321189B (en) Method and equipment for presenting hosted program in hosted program
CN110827061A (en) Method and equipment for providing presentation information in novel reading process
CN110430253B (en) Method and equipment for providing novel update notification information
CN111162990B (en) Method and equipment for presenting message notification
CN110768894B (en) Method and equipment for deleting session message
CN111506232A (en) Method and equipment for controlling menu display in reading application
CN112818719B (en) Method and equipment for identifying two-dimensional code
CN112822430B (en) Conference group merging method and device
CN111817945B (en) Method and equipment for replying communication information in instant communication application
CN109636922B (en) Method and device for presenting augmented reality content
CN113157162B (en) Method, apparatus, medium and program product for revoking session messages
CN115719053A (en) Method and equipment for presenting reader labeling information
CN112684961B (en) Method and equipment for processing session information
CN109657514B (en) Method and equipment for generating and identifying two-dimensional code
CN110311945B (en) Method and equipment for presenting resource pushing information in real-time video stream
CN110308833B (en) Method and equipment for controlling resource allocation in application

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant