CN113342435A - Expression processing method and device, computer equipment and storage medium


Info

Publication number: CN113342435A
Application number: CN202110587687.3A
Authority: CN (China)
Prior art keywords: expression, information, target, editing, user
Legal status: Pending (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Other languages: Chinese (zh)
Inventor: 李将
Current Assignee: Netease Hangzhou Network Co Ltd (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Original Assignee: Netease Hangzhou Network Co Ltd
Application filed by Netease Hangzhou Network Co Ltd; priority to CN202110587687.3A; publication of CN113342435A

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/451Execution arrangements for user interfaces
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04845Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiment of the application discloses an expression processing method and apparatus, a computer device and a storage medium. In this scheme, an expression editing function is added to the information interaction interface: when a user operation on an expression in the information interaction interface is detected, the selected expression is edited according to the interaction information displayed on the interface and the information interaction object with which the user is interacting, and the edited expression is sent to that object. The expression does not need to be edited through a third-party application, and the operation is convenient, so the processing efficiency of expressions can be improved.

Description

Expression processing method and device, computer equipment and storage medium
Technical Field
The application relates to the technical field of computers, in particular to an expression processing method and device, computer equipment and a storage medium.
Background
With the development of computer technology, smart terminals are used ever more widely, and chatting and socializing through them has become an essential part of people's lives. In online chat, expressions (stickers) are an extremely important social medium: vivid, humorous, varied in form, and often more concise and forceful than written language, they are deeply loved by users and play an important role in the chat process.
In research and practice of the related art, the inventor of the present application found that in the prior art, when a user wants to modify an expression, the expression has to be modified in a third-party application; the operation is therefore cumbersome, which affects the processing efficiency of expressions.
Disclosure of Invention
The embodiment of the application provides an expression processing method and device, computer equipment and a storage medium, and can improve the expression processing efficiency.
The embodiment of the application provides an expression processing method, which comprises the following steps:
when the editing operation of a target expression in an information interaction interface is detected, an expression editing page is displayed on the information interaction interface, and the expression editing page at least comprises: the target expression and an information editing area;
editing the content of the target expression through the information editing area, and displaying the edited target expression on the expression editing page;
and when the confirmation operation of the edited target expression is detected, displaying the edited target expression on the information interaction interface.
Correspondingly, an embodiment of the present application further provides an expression processing apparatus, including:
the display unit is used for displaying an expression editing page on the information interaction interface when the editing operation of the target expression in the information interaction interface is detected, wherein the expression editing page at least comprises: the target expression and an information editing area;
the first editing unit is used for editing the content of the target expression through the information editing area and displaying the edited target expression on the expression editing page;
and the confirming unit is used for displaying the edited target expression on the information interaction interface when the confirming operation of the edited target expression is detected.
In some embodiments, the first editing unit includes:
the first determining subunit is used for determining a target information selection control from a plurality of information selection controls based on the touch operation of the user on the information editing area;
the first obtaining subunit is configured to obtain target editing information corresponding to the target information selection control;
and the first synthesis subunit is used for synthesizing the target editing information and the target expression to obtain an edited target expression.
In some embodiments, the first obtaining subunit may be specifically configured to:
determining the functional attribute of the target information selection control;
and acquiring editing information corresponding to the functional attribute to obtain the target editing information.
In some embodiments, the first synthesis subunit may be specifically configured to:
determining the content type of the target editing information;
determining a target position from the target expression according to the content type, wherein the target position is used for placing the target editing information;
and synthesizing the target editing information and the target expression based on the target position to obtain the edited target expression.
In some embodiments, the first synthesis subunit may be further specifically configured to:
if the content type is a text type, determining the target position from the blank area of the target expression;
and if the content type is an image type, determining the target position from the image.
In some embodiments, the first synthesis subunit is further specifically configured to:
if the content type is a text type, acquiring a color value of each pixel point in the target expression; scanning pixel points in the target expression based on the color values, and determining a blank area in the target expression; calculating the maximum blank area in the blank areas according to a specified algorithm to obtain the target position;
and if the content type is an image type, determining the target position from the image.
In some embodiments, the display unit includes:
the second acquisition subunit is used for acquiring the number of the information interaction objects;
the second determining subunit is used for determining the display mode of the information editing area according to the number;
and the display subunit is used for displaying an expression editing page on the information interaction interface based on the target expression and the display mode.
In some embodiments, the second determining subunit may be specifically configured to:
acquiring user information of the information interaction object;
if the number does not reach the preset number, determining that the display mode of the information editing area is the first display mode;
and if the number reaches a preset number, determining that the display mode of the information editing area is the second display mode.
In some embodiments, the display unit comprises:
And the first display subunit is used for displaying an expression editing page on the information interaction interface when the editing operation on the target expression in the interaction information display area is detected or when the editing operation on the target expression in the user operation area is detected.
In some embodiments, the display unit further comprises:
the recognition subunit is used for carrying out semantic recognition on the text information in the target expression to obtain a semantic recognition result;
the third obtaining subunit is configured to obtain, according to the semantic recognition result, a user image of the local user or of an information interaction object, where the information interaction object is associated with the current information interaction interface;
and the second display subunit is used for displaying the expression editing page on the information interaction interface at least based on the user image.
In some embodiments, the apparatus further comprises:
the first acquisition unit is used for acquiring currently displayed interactive information of the information interactive interface when touch operation on the expression control is detected;
and the first screening unit is used for screening a target expression from an expression graph library according to the interaction information and displaying the target expression on the information interaction interface.
In some embodiments, the first screening unit comprises:
the fourth acquiring subunit is configured to acquire a plurality of expression labels in the expression gallery and an expression corresponding to each expression label;
the matching subunit is used for matching the interaction information with the plurality of expression labels and determining a target expression label matched with the interaction information;
and the fifth acquiring subunit is configured to acquire an expression corresponding to the target expression label, so as to obtain the target expression.
In some embodiments, the apparatus further comprises:
the second acquisition unit is used for acquiring input information currently displayed on the information interaction interface when touch operation on the expression control is detected;
and the second screening unit is used for screening the target expression from the expression gallery according to the input information and displaying the target expression on the information interaction interface.
In some embodiments, the apparatus further comprises:
the first analysis unit is used for parsing the target expression to obtain multiple frames of sub-images if the target expression is a dynamic image;
the third editing unit is used for editing any target sub-image among the multiple frames of sub-images through the information editing area to obtain an edited target sub-image;
a third obtaining unit, configured to obtain the image content of the edited target sub-image and the image content layout information of the other sub-images among the multiple frames of sub-images;
the first processing unit is used for correspondingly editing the other sub-images based on the image content of the edited target sub-image and the image content layout information of the other sub-images, so as to obtain edited versions of the other sub-images;
and the first synthesis unit is used for synthesizing all the edited sub-images to obtain the edited target expression.
In some embodiments, the apparatus further comprises:
the second analysis unit is used for parsing the target expression to obtain multiple frames of sub-images if the target expression is a dynamic image;
the second processing unit is used for editing each frame of sub-image through the information editing area;
and the second synthesis unit is used for synthesizing all the edited sub-images to obtain the edited target expression.
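A minimal sketch of this dynamic-expression flow follows. It is not part of the patent text: Pillow is used only as an example library, and edit_frame() is a hypothetical stand-in for whatever edit the information editing area applies to each frame.

    # Parse a dynamic expression (GIF) into sub-images, edit each frame,
    # and synthesize the edited frames back into a dynamic expression.
    from PIL import Image, ImageSequence

    def edit_dynamic_expression(gif_path, out_path, edit_frame):
        gif = Image.open(gif_path)
        # Parsing: obtain the multiple frames of sub-images and edit each one.
        frames = [edit_frame(frame.convert("RGBA"))
                  for frame in ImageSequence.Iterator(gif)]
        # Synthesis: combine all edited sub-images into the edited expression.
        frames[0].save(out_path, save_all=True, append_images=frames[1:],
                       duration=gif.info.get("duration", 100), loop=0)

    # Example: edit_dynamic_expression("in.gif", "out.gif", lambda f: f)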
In some embodiments, the apparatus further comprises:
and the sending unit is used for sending the edited target expression to one or more information interaction objects when sending operation aiming at the information interaction interface is detected.
Correspondingly, an embodiment of the present application further provides a computer device, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor, when executing the computer program, implements the expression processing method provided in any one of the embodiments of the present application.
Correspondingly, an embodiment of the present application further provides a storage medium storing a plurality of instructions, the instructions being adapted to be loaded by a processor to execute the expression processing method described above.
According to the embodiments of the application, an expression editing function is added to the information interaction interface: when a user operation on an expression in the information interaction interface is detected, the selected expression is edited according to the interaction information displayed on the interface and the information interaction object with which the user is interacting. The expression does not need to be edited through a third-party application, and the operation is convenient, so the processing efficiency of expressions can be improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings used in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flow chart of an expression processing method according to an embodiment of the present application.
Fig. 2 is a schematic view of an information interaction interface according to an embodiment of the present disclosure.
Fig. 3 is a schematic view of another information interaction interface provided in the embodiment of the present application.
Fig. 4 is a schematic view of a display mode of an information editing area according to an embodiment of the present application.
Fig. 5 is a schematic diagram of sending an expression according to an embodiment of the present application.
Fig. 6 is another expression sending schematic diagram provided in the embodiment of the present application.
Fig. 7 is another expression sending schematic diagram provided in the embodiment of the present application.
Fig. 8 is a schematic diagram of an expression editing page provided in an embodiment of the present application.
Fig. 9 is a schematic diagram of an edited expression according to an embodiment of the present application.
Fig. 10 is a schematic diagram illustrating sending of an expression according to an embodiment of the present application.
Fig. 11 is a flowchart illustrating another expression processing method according to an embodiment of the present application.
Fig. 12 is a schematic flowchart of another expression processing method according to an embodiment of the present application.
Fig. 13 is a schematic display view of another information interaction interface provided in the embodiment of the present application.
Fig. 14 is a schematic display view of another information interaction interface provided in the embodiment of the present application.
Fig. 15 is a schematic display view of another information interaction interface provided in the embodiment of the present application.
Fig. 16 is a block diagram of an expression processing apparatus according to an embodiment of the present application.
Fig. 17 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The embodiment of the application provides an expression processing method and apparatus, a storage medium and a computer device. Specifically, the expression processing method of the embodiment of the present application may be executed by a computer device, where the computer device may be a terminal or another device. The terminal may be a mobile terminal such as a mobile phone, a tablet computer, a notebook computer, a palmtop computer or a portable media player, or a fixed terminal such as a desktop computer.
For example, the computer device may be a terminal, and the terminal may display an expression editing page on the information interaction interface when an editing operation on a target expression in the information interaction interface is detected, where the expression editing page at least includes a target expression and an information editing area; edit the content of the target expression through the information editing area and display the edited target expression on the expression editing page; and, when a confirmation operation on the edited target expression is detected, display the edited target expression on the information interaction interface.
In view of the above problems, embodiments of the present application provide an expression processing method, apparatus, computer device and storage medium, which can improve the processing efficiency of expressions.
The following are detailed below. It should be noted that the following description of the embodiments is not intended to limit the preferred order of the embodiments.
The embodiment of the present application provides an expression processing method, which may be executed by a terminal or a server; below, the method is described taking execution by the terminal as an example.
As shown in fig. 1, fig. 1 is a schematic flow chart of an expression processing method according to an embodiment of the present application. The specific flow of the expression processing method can be as follows:
101. When an editing operation on a target expression in the information interaction interface is detected, display an expression editing page on the information interaction interface.
In the embodiment of the present application, the information interaction interface is an interface for performing information interaction between a current user and an information interaction object, for example, the information interaction interface may be a user interface provided by an application program for performing communication between the user and the user through media such as a network. The information interaction object can comprise various objects, such as other users or intelligent question-answering robots. Wherein, the information interaction can include: text interaction, image interaction, voice interaction, and the like.
The information interaction interface includes an interactive information display area and a user operation area. The interactive information display area can display the interaction information between the current user and the information interaction object; the user operation area may be used for the user's input operations, for example, inputting text, pictures, voice and other contents.
For example, please refer to fig. 2, which is a schematic diagram of an information interaction interface according to an embodiment of the present application. In fig. 2, the information interaction interface may include an interactive information display area and a user operation area. The interactive information display area displays the avatars of the current user and of the information interaction object, together with their interaction messages, e.g. "What are you doing?", "Working.", "Have you eaten?", "Not yet, so hungry." The user operation area can be displayed as a keyboard input area, through which the user can input information to communicate with the information interaction object.
The target expression refers to the expression (sticker) that needs to be edited. It can be selected by the user, for example from an expression library, i.e., a sticker gallery; the expressions in the library may be locally stored expressions.
For example, please refer to fig. 3, which is a schematic view of another information interaction interface provided in the embodiment of the present application. In fig. 3, expressions may be displayed in the user operation area; their display can be triggered by the user clicking the expression control in the user operation area. The user can select an expression from the displayed expressions to send. If the user performs a touch operation on the first expression in the user operation area, the first expression can be determined to be the target expression, which is then processed according to the user's needs.
In the embodiment of the application, in order to improve user experience, the content of the expression can be edited in the information interaction interface. In order to distinguish from the sending operation of the expression, the sending operation may be set to a click operation, and the editing operation may be set to a long press operation. The editing operation and the sending operation may be operations in a touch manner, a keyboard button, a mouse click manner, and the like, and are not limited to being applied to a touch screen device.
For example, if the user's touch operation on an expression is detected to be a click, the touched expression may be sent; if the touch operation is detected to be a long press, editing of the content of the touched expression may be triggered.
Specifically, when expressions are displayed in the user operation area, the user may perform an editing operation, here a long press, on the selected target expression. For example, the user operation area may display a first expression, a second expression and a third expression; if the user wants to edit the first expression, long-pressing it triggers the expression editing page, as in the sketch below.
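As a minimal sketch of this dispatch, assuming a simple event model (the handler names and the 0.5-second long-press threshold are illustrative, not part of the patent):

    LONG_PRESS_SECONDS = 0.5  # assumed threshold separating a tap from a long press

    def send_expression(expression: str) -> None:
        print(f"sending {expression}")  # stand-in for the sending operation

    def open_expression_editing_page(expression: str) -> None:
        print(f"editing {expression}")  # stand-in for showing the editing page

    def on_expression_touch(expression: str, press_duration: float) -> None:
        # A long press triggers the editing operation; a short tap sends.
        if press_duration >= LONG_PRESS_SECONDS:
            open_expression_editing_page(expression)
        else:
            send_expression(expression)

    on_expression_touch("first expression", 0.8)  # -> editing
    on_expression_touch("first expression", 0.1)  # -> sending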
The expression editing page may include a target expression and an information editing area, and the content of the target expression can be edited through the information editing area.
In some embodiments, the user may want to edit and send an expression sent by the information interaction object, but that expression is not stored in the expression library, or it is stored but the user would need to spend time searching for it. In such cases, to save terminal storage space and the user's search time, the step "when the editing operation on the target expression in the information interaction interface is detected, displaying the expression editing page on the information interaction interface" may include the following operations:
if the target expression corresponding to the editing operation is in the interactive information display area, displaying an expression editing page on the information interactive interface;
and if the target expression corresponding to the editing operation is in the user operation area, displaying an expression editing page on the information interaction interface.
For example, when a user edits an expression in the interactive information display area, the user can trigger the information interactive interface to display an expression editing page, and can edit the expression in the interactive information display area.
For another example, when the user performs an editing operation on the expression in the user operation area, the expression editing page may be triggered to be displayed on the information interaction interface, and the expression in the user operation area may be edited.
By the method, the user can edit the local expression or the expression sent by the information interaction object, and the user experience is improved.
In some embodiments, in order to ensure that the complete editing information is displayed, the step "displaying an emoticon editing page on the information interaction interface" may include the following operations:
acquiring the number of information interaction objects;
determining the display mode of the information editing area according to the quantity;
and displaying an expression editing page on the information interaction interface based on the target expression and the display mode.
The number of the information interaction objects refers to the number of people who perform information interaction with a local user in the information interaction interface, and the number of the information interaction objects can be single or multiple.
The display modes may include a first display mode and a second display mode. In the first display mode, user information is displayed in the information editing area; in the second display mode, a classification control is displayed in the information editing area, the classification control being used to trigger display of an expression editing sub-page, which displays the user information corresponding to the classification control.
Referring to fig. 4, fig. 4 is a schematic diagram illustrating display modes of an information editing area according to an embodiment of the present application. On the information interaction interface on the left side of fig. 4, the information editing area is in the first display mode: the left interface carries information interaction between the local user and a single information interaction object, and the expression editing page displays the target expression the local user has selected for editing, together with a plurality of function controls including a first control, a second control, a third control and a fourth control. The first control displays "Zhang San"; the second control displays an image of Zhang San; the third control displays "custom text"; the fourth control displays "custom picture". The user can operate these function controls as needed to select editing information.
On the information interaction interface on the right side of fig. 4, the information editing area is in the second display mode: the right interface carries information interaction between the local user and a plurality of information interaction objects, and the expression editing page displays the target expression the local user has selected for editing, together with a plurality of function controls including a first control, a second control, a third control and a fourth control. The first control displays a user name; the second control displays a user image; the third control displays "custom text"; the fourth control displays "custom picture". Different function controls of the expression editing page correspond to different expression editing sub-pages. For example, the expression editing sub-page displayed on the right side of fig. 4 corresponds to the "user name" first control and displays the user names of all information interaction objects in the current information interaction interface. Displaying the user information of all the information interaction objects through the expression editing sub-page makes it convenient for the user to view and select editing information.
In some embodiments, the step "determining the presentation mode of the information editing region according to the number" may include the following processes:
if the quantity does not reach the preset quantity, determining that the display mode of the information editing area is a first display mode;
and if the number reaches the preset number, determining that the display mode of the information editing area is the second display mode.
The preset number is used to judge whether the current information interaction interface carries information interaction between the user and a single information interaction object or between the user and a plurality of information interaction objects. For example, the preset number may be 2: when the number of information interaction objects is less than 2, i.e., the number does not reach the preset number, it is determined that the current information interaction interface is interaction between the user and a single information interaction object; when the number of information interaction objects is greater than or equal to 2, i.e., the number reaches the preset number, it can be determined that the current information interaction interface is interaction between the user and a plurality of information interaction objects.
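A minimal sketch of this decision, with the preset number of 2 taken from the example above (the function and mode names are illustrative assumptions):

    PRESET_NUMBER = 2  # threshold from the example above

    def choose_display_mode(num_interaction_objects: int) -> str:
        # Below the preset number: single-object chat, first display mode.
        if num_interaction_objects < PRESET_NUMBER:
            return "first"   # show user information directly
        # At or above the preset number: group chat, second display mode.
        return "second"      # show classification controls and sub-pages

    assert choose_display_mode(1) == "first"
    assert choose_display_mode(5) == "second"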
Furthermore, when the number does not reach the preset number, the user name and the user image of each information interaction object can be obtained; a first information editing sub-region is generated from all the user names, the first information editing sub-region including a plurality of name selection sub-controls for selecting a user name; and a second information editing sub-region is generated from the user images, the second information editing sub-region including a plurality of image selection sub-controls for selecting a user image.
The information editing area further includes a function attribute control for selecting the type of information. When the content of the target expression is edited through the information editing area, a target information editing sub-region is determined from the first information editing sub-region and the second information editing sub-region according to the user's second selection operation on the function attribute control, and the target information editing sub-region is displayed on the information interaction interface.
In some embodiments, to improve the user experience, before the step "detecting an editing operation on a target expression in the information interaction interface", the following steps may be further included:
and when the editing operation of the target expression in the interactive information display area is detected, or when the editing operation of the target expression in the user operation area is detected, displaying an expression editing page on the information interactive interface.
The information interaction interface comprises an expression control, and the expression control is used for triggering the expression to be displayed on the information interaction interface.
Referring to fig. 5, fig. 5 is a schematic diagram of sending an expression according to an embodiment of the present application. In the information interaction interface on the left side of fig. 5, when a touch operation of the user on the expression control is detected, the user's intention can be recognized as sending an expression. At this time, the terminal can acquire the currently displayed interaction message whose sending time is closest to the current time, recognize its text content, and then screen out from the expression gallery the expression most similar to that text content, thereby obtaining the target expression.
For example, in the information interaction interface on the left side of fig. 5, the text content of the interaction message whose sending time is closest to the current time is obtained, the matching target expression is screened out accordingly, and the target expression is finally sent upon the user's instruction, i.e., displayed in the interactive information display area of the information interaction interface.
In some embodiments, when it is detected that the target expression contains text information, in order to save time for the user to perform an editing operation, the step "displaying an expression editing page on the information interaction interface" may include the following operations:
performing semantic recognition on text information in the target expression to obtain a semantic recognition result;
acquiring a user image of the local user or of an information interaction object according to the semantic recognition result;
and displaying an expression editing page on the information interaction interface at least based on the user image.
Semantic recognition is one of the important components of natural language processing (NLP) technology. Its core is to understand not only the meanings of individual words but also what each word expresses within a sentence or a passage. Technically, semantic recognition requires semantic analysis and disambiguation at the levels of text, vocabulary, syntax, morphology and discourse (paragraph), together with recombination of the corresponding meanings, so as to identify the intended meaning of the text.
For example, firstly, the text content in the target expression may be acquired, and the text content is subjected to semantic recognition processing by the NLP technology, so as to obtain the meaning of the text content, that is, the semantic recognition result.
The information interaction interface can be an interface for information interaction between a user and a single information interaction object, or an interface for information interaction between the user and a plurality of information interaction objects, and the user and the plurality of information interaction objects can be in the same information discussion group or the same group chat group.
And the information interaction object is associated with the current information interaction interface. Then, the information interaction object associated with the current information interaction interface includes: the information interaction objects are information interaction objects which are independent from the local user in the current information interaction interface, or the information interaction objects which are in the same information discussion group or the same group chat group with the local user in the current information interaction interface.
In some embodiments, to increase the interest of the expression, the user image may be a photograph containing a human face.
Furthermore, after the meaning of the text content in the target expression is determined, semantic recognition can be performed on the text information in the interactive information display area in the information interactive interface, the text information in the interactive information display area most similar to the meaning of the text content in the target expression is found out, then an information interactive object corresponding to the text information in the interactive information display area is obtained, and a user picture of the information interactive object is used as a target user picture.
Editing the target expression based on the target user image may mean superimposing the target user image on a suitable area of the target expression to obtain a new expression, namely the edited target expression.
For example, please refer to fig. 6, which is another expression-sending schematic diagram provided in the embodiment of the present application. In fig. 6, the left side is an information interaction interface showing an expression. After the user's editing operation on the expression is detected, the text information in the expression can be obtained as "poor, helpless and obese", and semantic recognition is performed on it, giving the recognition result: poor, helpless and obese. Then, semantic recognition is performed on the interaction messages displayed on the information interaction interface, and the message semantically most similar to the expression's text can be found. Furthermore, the sender of that message can be determined to be the current user; an image containing the current user's face is then acquired as the target user image, that face image is superimposed on a suitable area of the target expression to obtain the edited target expression, and the edited target expression can then be sent.
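A minimal sketch of this matching step, assuming the messages and their senders are available as pairs; a production system would use an NLP semantic model, while difflib's string similarity stands in here purely for illustration:

    from difflib import SequenceMatcher

    def most_similar_message(expression_text, messages):
        """messages: list of (sender, text) pairs from the display area."""
        return max(messages, key=lambda m:
                   SequenceMatcher(None, expression_text, m[1]).ratio())

    messages = [("Zhang San", "what are you doing"),
                ("current user", "feeling so poor and helpless")]
    sender, _ = most_similar_message("poor, helpless and obese", messages)
    # Next: fetch sender's face image and superimpose it on the target expression.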
In some embodiments, in order to improve user experience, semantic recognition may be performed on the interaction messages in the interactive information display area, the user name of the local user or of an information interaction object may be obtained according to the semantic recognition result, and that user name may then be superimposed on the target expression to obtain the edited target expression.
For example, please refer to fig. 7, which is another expression-sending schematic diagram provided in the embodiment of the present application. In fig. 7, the left side is an information interaction interface displaying expressions. After the user's editing operation on an expression is detected, the interaction message in the display area whose sending time is closest to the current time can be obtained, e.g. "you are so funny", and semantic recognition of it gives the result: "you" are funny, where "you" refers to the information interaction object interacting with the local user. It can thus be determined that the user wants to place the user name of the information interaction object on the target expression. The user name of the information interaction object can then be obtained, e.g. "Zhang San", and superimposed on a suitable area of the target expression to obtain the edited target expression, which can then be sent. In this way the method can edit expressions intelligently: the expression the user wants is generated without manual operation, improving user experience.
In some embodiments, to implement the intelligent emoticon recommendation, before the step "detecting an editing operation on a target emoticon in the information interaction interface", the following steps may be further included:
when touch operation on the expression control is detected, acquiring currently displayed interactive information of an information interactive interface;
and screening out the target expression from the expression library according to the interaction information, and displaying the target expression on an information interaction interface.
The information interaction interface comprises an expression control, and the expression control is used for triggering the expression to be displayed on the information interaction interface.
The interaction information is information that the users have already sent in the interactive information display area; for example, in fig. 7 the display area contains the messages "I don't know" and "you are so funny".
Further, the expression corresponding to the interaction information is screened out from the expression graph library according to the interaction information, so that a target expression can be obtained, and then the target expression is displayed on an information interaction interface to recommend an expression package for a user.
In some embodiments, in order to quickly determine the emoticon that the user needs to edit, the step "screen out the target emoticon from the emoticon library according to the interaction information" may include the following operations:
acquiring a plurality of expression labels in an expression graph library and an expression corresponding to each expression label;
matching the interaction information with the plurality of expression labels, and determining a target expression label matched with the interaction information;
and obtaining the expression corresponding to the target expression label to obtain the target expression.
Expression labels are the categories of expressions. A user can set labels for the expressions in the expression gallery, grouping expressions of the same type under the same label. For example, the expression gallery may include a first expression, a second expression and a third expression, where the first and second expressions contain a person and the third expression contains an animal; the user can then set the expression label of the first and second expressions to "person", set the label of the third expression to "animal", and so on.
In some embodiments, the system may also automatically set labels for expressions in the expression gallery, for example by recognizing the content of each expression and setting its label according to the recognition result.
Specifically, the text content of the interaction information can be acquired and matched against the expression labels, so that the expression label best matching the interaction information can be determined from the plurality of expression labels, i.e., the target expression label. Further, the expression corresponding to the target expression label is obtained from the expression gallery, giving the target expression, that is, the expression on which the user wants to perform the editing operation.
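A minimal sketch of this tag matching under stated assumptions (the gallery structure and the string-similarity scoring are illustrative, not the patent's prescribed method):

    from difflib import SequenceMatcher

    expression_gallery = {
        "person": ["first expression", "second expression"],
        "animal": ["third expression"],
    }

    def screen_target_expression(interaction_text: str) -> str:
        # Match the interaction text against every expression label and
        # take an expression filed under the best-matching label.
        best_label = max(expression_gallery, key=lambda label:
                         SequenceMatcher(None, interaction_text, label).ratio())
        return expression_gallery[best_label][0]

    print(screen_target_expression("that person is so funny"))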
In some embodiments, to implement the intelligent emoticon recommendation, before the step "detecting an editing operation on a target emoticon in the information interaction interface", the following steps may be further included:
when touch operation on the expression control is detected, acquiring input information currently displayed on an information interaction interface;
and screening out the target expression from the expression library according to the input information, and displaying the target expression on an information interaction interface.
The information interaction interface comprises an expression control, and the expression control is used for triggering the expression to be displayed on the information interaction interface.
The input information is the content input by the user in an input box in the information interaction interface.
Further, the expression corresponding to the input information is screened out from the expression gallery according to the input information, so that the target expression can be obtained; the target expression is then displayed on the information interaction interface to recommend a sticker to the user. For how the expression corresponding to the input information is screened out from the gallery, reference may be made to the implementation described above for screening the expression corresponding to the interaction information.
102. Edit the content of the target expression through the information editing area, and display the edited target expression on the expression editing page.
The information editing area may include a plurality of information selection controls, and different selection controls may correspond to different editing information.
in some embodiments, in order to improve the expression editing efficiency, the step "edit the content of the target expression through the information editing area" may include the following process:
determining a target information selection control from a plurality of information selection controls based on touch operation of a user on the information editing area;
acquiring target editing information corresponding to the target information selection control;
and synthesizing the target editing information and the target expression to obtain the edited target expression.
Referring to fig. 8, fig. 8 is a schematic view of an expression editing page according to an embodiment of the present application. In fig. 8, after the editing operation of the user on the target expression in the user operation area is detected, the expression editing page is triggered to be displayed on the information interaction interface, and the position of the expression editing page on the information interaction interface may be beside the target expression, so that the user can compare the edited target expression with the original target expression and observe the edited target expression conveniently.
The enlarged target expression and the information selection controls are displayed on the expression editing page, including: a first control, a second control, a third control, and a fourth control.
For example, if the user performs a touch operation on a first control of the information editing area, it may be determined that the first control is a target information selection control.
In some embodiments, in order to facilitate a user to quickly edit an expression, the step "obtaining target editing information corresponding to the target information selection control" may include the following operations:
determining the functional attribute of the target information selection control;
and acquiring editing information corresponding to the functional attributes to obtain target editing information.
The functional attribute may include adding text or adding an image, and the target editing information may be the user name or the user image of an information interaction object associated with the information interaction interface. For example, the current information interaction interface may be an interface where the current user interacts with a first user, in which case the target editing information may be the user name or user image of the current user, or of the first user. For another example, the current information interaction interface may be an interface where the current user, a first user and a second user interact, in which case the target editing information may be the user name or user image of the current user, of the first user, or of the second user.
For example, in fig. 8, "Zhang San" is displayed on the first control, indicating that the functional attribute of the first control is adding text, and the text added via the first control is the name of the information interaction object; an image of Zhang San is displayed on the second control, indicating that its functional attribute is adding an image, and the editing information corresponding to the second control is an image of the information interaction object; "custom text" is displayed on the third control, indicating that its functional attribute is adding text, and the editing information corresponding to the third control is text input by the user; "custom picture" is displayed on the fourth control, indicating that its functional attribute is adding an image, and the editing information corresponding to the fourth control is a picture selected locally by the user.
For example, if it is detected that the information selection control touched by the user on the expression editing page is the second control, the second control can be determined to be the target information selection control; the editing information corresponding to the second control, i.e., the image of the information interaction object, is then obtained as the target editing information; finally, the image of the interaction object and the target expression are synthesized to obtain the edited expression.
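A minimal sketch of this control-to-editing-information flow (the control model and names are illustrative assumptions, not the patent's API):

    from dataclasses import dataclass

    @dataclass
    class InfoSelectionControl:
        functional_attribute: str  # "add_text" or "add_image"
        editing_info: str          # a user name, custom text, or an image path

    def get_target_editing_info(control: InfoSelectionControl):
        # Determine the functional attribute, then fetch the matching info.
        content_type = "text" if control.functional_attribute == "add_text" else "image"
        return content_type, control.editing_info

    second_control = InfoSelectionControl("add_image", "zhang_san_avatar.png")
    content_type, info = get_target_editing_info(second_control)
    # Next step: synthesize `info` with the target expression (see below).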
Please refer to fig. 9, and fig. 9 is a schematic diagram of an edited expression according to an embodiment of the present application. In fig. 9, after the target expression is edited, the edited target expression, the reset control and the sending control are displayed on the expression editing page, where the reset control may be used to edit the target expression again, and the sending control may be used to send the edited target expression. If the user is satisfied with the edited target expression, the user can click the sending control to send the edited target expression; if the user is not satisfied with the edited target expression, the user can click the reset control to edit the target expression again; or the user does not want to send the target expression, the user can click the closing control at the upper right corner of the expression editing page to close the expression editing page. Therefore, diversified requirements of users can be met.
In some embodiments, in order to improve expression processing efficiency, the step "synthesizing the target editing information and the target expression to obtain an edited target expression" may include the following operations:
determining the content type of the target editing information;
determining a target position from the target expression according to the content type;
and synthesizing the target editing information and the target expression based on the target position to obtain the edited target expression.
The content type of the editing information may include a text type, an image type and so on; the target position is where the target editing information is placed.
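A minimal synthesis sketch using Pillow (an illustrative choice; the file names and the position argument are assumptions, and the position could come from the blank-area scan described later in this text):

    from PIL import Image, ImageDraw

    def synthesize(expression_path, content_type, info, position):
        base = Image.open(expression_path).convert("RGBA")
        if content_type == "text":
            # Text type: draw the editing text at the target position.
            ImageDraw.Draw(base).text(position, info, fill=(0, 0, 0, 255))
        else:
            # Image type: paste the user image at the target position.
            overlay = Image.open(info).convert("RGBA")
            base.alpha_composite(overlay, dest=position)
        return base  # the edited target expression

    # edited = synthesize("target_expression.png", "text", "Zhang San", (10, 10))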
In some embodiments, the expression may include an image. To handle editing information of different content types, different areas may be selected in the target expression for adding the editing information, making expression editing more intelligent. The step of determining the target position from the target expression according to the content type may include the following operations:
if the content type is a text type, determining a target position from a blank area of the target expression;
and if the content type is the image type, determining the target position from the image.
The pixels in the blank region may be white, and the differences between the color values of the pixels in the blank region are within a preset difference range, which ensures that the blank region does not contain important content of the expression.
In some embodiments, in order to quickly determine the target location, the step "determining the target location from a blank area of the target expression" may include the following operations:
obtaining a color value of each pixel point in the target expression;
scanning pixel points in the target expression based on the color values, and determining a blank area in the target expression;
and calculating the maximum blank area in the blank areas according to a specified algorithm to obtain the target position.
To select a blank area from the target expression, the color value of each pixel in the target expression can be obtained first. Every image is formed by an arrangement of pixels, and each pixel is represented by an RGB value. For example, a picture whose width and height are both 100 contains 100 × 100 = 10000 pixels.
The color value may be represented in RGB. The RGB color scheme is an industry color standard in which various colors are obtained by varying and superimposing the three color channels of red (Red), green (Green) and blue (Blue); RGB denotes the colors of the red, green and blue channels, and all colors on a terminal screen can be mixed from red, green and blue light in different proportions. Any color on the screen can be recorded and expressed by a set of RGB values, which may be expressed as integers. Typically, each of R, G and B has 256 levels, numerically represented as 0, 1, 2, ..., 255. For example, white is represented by RGB (255, 255, 255).
Further, based on the characteristic values of white pixels, the image can be scanned in a loop from the initial coordinate point (0, 0) to the image end point (100, 100), each pixel in the image is compared with a white pixel, and the maximum blank area in this scheme is then obtained using a greedy algorithm.
The greedy algorithm is a relatively simple and fast design technique for solving certain optimization problems. It proceeds step by step, making at each step the choice that is optimal according to some optimization measure given the current situation, without considering all possible global configurations, which saves the large amount of time that exhaustively searching all possible solutions would consume. The greedy algorithm works top-down, making successive greedy choices iteratively; each greedy choice reduces the problem to a smaller subproblem, and each choice yields a locally optimal solution to that subproblem. Although a locally optimal solution is guaranteed at every step, the resulting global solution is not always optimal. A greedy algorithm generally proceeds as follows: establish a mathematical model describing the problem; divide the problem into several subproblems; solve each subproblem to obtain its locally optimal solution; and combine the locally optimal solutions of the subproblems into a solution of the original problem.
Specifically, in the embodiment of the application, a plurality of blank areas are determined according to the color values of the pixels, and the blank area with the largest area is then selected from them through a greedy algorithm, yielding the target position. Selecting the maximum blank area allocates a larger display area to the target editing information within the target expression, highlighting the difference between the edited expression and the expression before editing.
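As a concrete illustration, the following Python sketch implements the pixel scan and blank-area search described above. It is a minimal sketch only: the function name find_largest_blank_area and the tolerance parameter (standing in for the preset difference range) are illustrative assumptions rather than part of the embodiment, and the greedy row-by-row expansion is one simple way to approximate the maximum blank rectangle.

    from PIL import Image

    def find_largest_blank_area(path, tolerance=10):
        # Treat pixels whose R, G and B values are all within `tolerance`
        # of 255 (white) as blank, per the preset difference range, and
        # greedily grow rectangles of blank pixels to approximate the
        # largest blank area.
        img = Image.open(path).convert("RGB")
        w, h = img.size
        px = img.load()

        def is_blank(x, y):
            return all(255 - c <= tolerance for c in px[x, y])

        best_area, best_box = 0, None
        for top in range(h):
            for left in range(w):
                if not is_blank(left, top):
                    continue
                # Grow the rectangle greedily to the right, then down.
                right = left
                while right + 1 < w and is_blank(right + 1, top):
                    right += 1
                bottom = top
                while bottom + 1 < h and all(
                    is_blank(x, bottom + 1) for x in range(left, right + 1)
                ):
                    bottom += 1
                area = (right - left + 1) * (bottom - top + 1)
                if area > best_area:
                    best_area, best_box = area, (left, top, right, bottom)
        return best_box  # target position for the editing information

The returned box plays the role of the target position; a production implementation would likely use a faster maximal-rectangle algorithm, but the greedy expansion mirrors the step-by-step selection described above.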
The object type refers to the category of an object; objects may be classified into different categories according to different classification rules, for example into humans, animals, plants, and the like, and the application is not limited in this respect.
Determining the target position from the target expression based on the object type means selecting, from the target expression, the area in which an object of the same object type as an object in the target editing information is located; this area is the target position.
Object recognition is fundamental research in the field of computer vision; its task is to identify what objects are in an image and to report the position and orientation of those objects in the scene the image represents. Object recognition comprises the following steps: image preprocessing, feature extraction, feature selection, modeling, matching and positioning; finally, the object types and their positions in the image are obtained.
For example, the object types of the objects included in the target editing information are first identified; the object types in the target editing information may include a first object type. The object types of the objects included in the target expression are then identified; these may include the first object type, a second object type and a third object type. It can thus be determined that the object type shared by the target editing information and the target expression is the first object type, and the area in which the object of the first object type is located in the target expression is obtained as the target position.
In this way, an appropriate area for adding the target editing information can be quickly identified from the target expression. For example, if the target expression includes an object of the animal type and the target editing information also includes an animal, the animal in the target editing information may replace the animal at its position in the target expression.
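The matching of object types can be sketched as follows. This is a minimal illustration assuming the recognition step has already produced lists of (object type, bounding box) pairs; the recognition itself (preprocessing, feature extraction, matching, positioning) is not shown, and the function and parameter names are hypothetical.

    def target_position_by_object_type(expression_objects, editing_objects):
        # `expression_objects` / `editing_objects`: lists of
        # (object_type, bounding_box) pairs produced by a prior,
        # hypothetical object-recognition step.
        editing_types = {obj_type for obj_type, _ in editing_objects}
        for obj_type, box in expression_objects:
            if obj_type in editing_types:
                return box  # region of the first shared object type
        return None  # fall back to, e.g., the blank-area strategy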
After the target position is determined, the target editing information and the target expression can be synthesized at the target position to obtain the edited target expression.
For example, if the content type of the target editing information is the text type, a blank area is selected from the target expression, and the target editing information is superimposed on the blank area to obtain the edited target expression.
For another example, if the content type of the target editing information is the image type, the object types in the target editing information are identified, the area of the matching object type in the target expression is determined to obtain the target position, and the target editing information is fused with the target expression at the target position to obtain the edited target expression.
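A minimal Python sketch of this synthesis step is given below, assuming Pillow and the (left, top, right, bottom) box returned by the blank-area search above; the function name and the choice of default font are illustrative, not prescribed by the embodiment.

    from PIL import Image, ImageDraw, ImageFont

    def synthesize(expression, editing_info, target_pos):
        # `expression`: PIL image of the target expression (RGB mode).
        # `editing_info`: a string (text type) or a PIL image (image type).
        # `target_pos`: (left, top, right, bottom) placement box.
        left, top, right, bottom = target_pos
        if isinstance(editing_info, str):  # text type: draw into blank area
            draw = ImageDraw.Draw(expression)
            draw.text((left, top), editing_info,
                      fill=(0, 0, 0), font=ImageFont.load_default())
        else:  # image type: paste over the matched object region
            patch = editing_info.resize((right - left + 1, bottom - top + 1))
            expression.paste(patch, (left, top))
        return expression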
In some embodiments, the target expression may be a dynamic image, and in order to improve the display effect of the edited target expression, the method may further include the following steps:
if the target expression is a dynamic image, analyzing the target expression to obtain multiple frames of sub-images;
editing any one frame, namely the target sub-image, of the multiple frames of sub-images through the information editing area to obtain an edited target sub-image;
acquiring the image content of the edited target sub-image and the image content layout information of the other sub-images among the multiple frames of sub-images;
performing corresponding editing processing on the other sub-images based on the image content of the edited target sub-image and the image content layout information of the other sub-images, to obtain edited versions of the other sub-images;
and synthesizing all the edited sub-images to obtain the edited target expression.
A dynamic image is formed by compressing and combining a series of pictures, and its playing principle is that the series of pictures are displayed one after another.
Since the picture content may differ across the frames of a dynamic image, once the target editing information is determined, the position for placing it may not be the same in every picture. A position may therefore be selected separately for each picture in the dynamic image before the target editing information is added to that picture.
The target sub-image may be any one of the multiple frames of sub-images; for example, it may be the first frame of the dynamic image.
Furthermore, the user can edit the target sub-image in the information editing area to obtain the edited target sub-image.
The image content of the edited target sub-image may include the editing information that the user applied to the target sub-image, for example an added user name or user image.
The other sub-images, namely the sub-images other than the target sub-image among the multiple frames, are then processed according to this editing information, which was obtained from the user's edit of the target sub-image. The image content layout information is the position information of the different contents within the other sub-images; the position at which the editing information is added in each of the other sub-images can be determined from this layout information.
Further, each of the other sub-images is edited using the editing information and the determined position, yielding an edited sub-image for each of them; in this way all edited sub-images corresponding to the multiple frames of the dynamic image are obtained, and all edited sub-images are then synthesized into the edited target expression.
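For illustration, the following Python sketch parses a dynamic expression into frames, applies an edit to each frame, and recombines the frames. It assumes Pillow; apply_edit is a hypothetical callable (for example, a wrapper around the synthesize sketch above with a per-frame target position), and the function name is illustrative.

    from PIL import Image, ImageSequence

    def edit_dynamic_expression(gif_path, out_path, apply_edit):
        # Parse the dynamic image into its frames, edit each frame, and
        # synthesize the edited frames back into a dynamic image,
        # carrying over the source frame duration and loop count.
        gif = Image.open(gif_path)
        frames = [apply_edit(frame.convert("RGB"))
                  for frame in ImageSequence.Iterator(gif)]
        frames[0].save(
            out_path,
            save_all=True,
            append_images=frames[1:],
            duration=gif.info.get("duration", 100),  # ms per frame
            loop=gif.info.get("loop", 0),
        )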
In some embodiments, in order to improve the display effect of the edited target expression, the method may further include the following steps:
if the target expression is a dynamic image, analyzing the target expression to obtain a plurality of sub-images;
editing each frame of sub-image through the information editing area;
and synthesizing all the edited sub-images to obtain the edited target expression.
Specifically, the dynamic image may be analyzed, that is, the static pictures in the dynamic image are extracted to obtain multiple frames of sub-images; a target position in each frame of sub-image is then determined, the target editing information is obtained from the editing information selected by the user in the information editing area, and the target editing information is added at the target position in each frame of sub-image, yielding all the edited sub-images; finally, all the edited sub-images are synthesized to obtain the edited target expression.
In some embodiments, in order to improve the editing efficiency for dynamic images, after the target editing information is determined, the target editing information may be drawn on a transparent picture, and the transparent picture bearing the target editing information is then superimposed on the dynamic image. Because the transparent picture is always overlaid on the dynamic image while it plays, the target editing information is always present, and no editing operation needs to be performed on each static picture of the dynamic image, which improves processing efficiency.
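A minimal sketch of this transparent-layer optimization, again assuming Pillow, is shown below. In an actual client the layer might simply be rendered on top of the playing image; here, for a self-contained illustration, the composite is baked into the output file, and the function and parameter names are assumptions.

    from PIL import Image, ImageDraw, ImageFont, ImageSequence

    def overlay_transparent_layer(gif_path, out_path, text, pos):
        # Draw the editing information once on a fully transparent
        # layer, then composite that single layer over every frame.
        gif = Image.open(gif_path)
        layer = Image.new("RGBA", gif.size, (0, 0, 0, 0))
        draw = ImageDraw.Draw(layer)
        draw.text(pos, text, fill=(0, 0, 0, 255),
                  font=ImageFont.load_default())
        frames = [
            Image.alpha_composite(frame.convert("RGBA"), layer).convert("RGB")
            for frame in ImageSequence.Iterator(gif)
        ]
        frames[0].save(out_path, save_all=True, append_images=frames[1:],
                       duration=gif.info.get("duration", 100),
                       loop=gif.info.get("loop", 0))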
103. And when the confirmation operation of the edited target expression is detected, displaying the edited target expression on the information interaction interface.
In fig. 9, the expression editing page shows the edited target expression, the reset control and the sending control. If the user's click operation on the sending control is detected, that is, the confirmation operation is detected, the function corresponding to the sending control can be executed: sending the edited target expression. After the target expression is sent, the edited target expression can be displayed in the interactive information display area of the information interaction interface to indicate to the user that the expression has been sent.
Referring to fig. 10, fig. 10 is a schematic diagram illustrating sending an expression according to an embodiment of the present application. After the user operates on the information interaction interface shown in fig. 9, the transmission of the edited target expression can be completed.
In some embodiments, in order to facilitate quick transmission of the target expression by the user, the method further comprises the steps of:
and when the sending operation aiming at the information interaction interface is detected, sending the edited target expression to one or more information interaction objects.
After the user completes the editing operation on the target expression on the information interaction interface, the sending operation for the edited target expression can be completed immediately through the information interaction interface, so that the edited target expression is sent to one or more information interaction objects currently interacting with the user.
The embodiment of the application discloses an expression processing method, which comprises the following steps: when an editing operation on a target expression in an information interaction interface is detected, an expression editing page is displayed on the information interaction interface, the expression editing page at least comprising the target expression and an information editing area; the content of the target expression is edited through the information editing area, and the edited target expression is displayed on the expression editing page; and when a confirmation operation on the edited target expression is detected, the edited target expression is displayed on the information interaction interface. In the embodiment of the application, an expression editing function is added to the information interaction interface: when a user's operation on an expression in the information interaction interface is detected, the selected expression is edited according to the interaction information displayed on the information interaction interface and the information interaction object interacting with the user, and the edited expression is sent to the information interaction object. The expression image does not need to be edited through a third-party application; the operation is convenient, and the processing efficiency of expressions can be improved.
Based on the above description, the expression processing method of the present application is further described below by way of example. Referring to fig. 11, fig. 11 is a schematic flowchart of another expression processing method provided in the embodiment of the present application. Taking a chat conversation scenario as the application scenario of the expression processing method as an example, a specific flow may be as follows:
201. When the terminal detects a user's touch operation on an expression package in the conversation interface, the target expression package corresponding to the touch operation is determined, and an expression editing page is displayed on the conversation interface.
The conversation interface refers to an interface on which users conduct chat conversations in a chat application; the chat application may be any of various applications installed on the terminal for conducting chat conversations between users.
The conversation interface may comprise a conversation area and an input area; the conversation area displays the conversation content between the user and the chat object, and the input area is used for the user's input operations. When the user inputs text, an input keyboard is displayed in the input area; when the user views expressions, expression packages are displayed in the input area.
For example, expression packages may be displayed in the input area of the current conversation interface. If the user's touch operation on an expression package in the input area is detected, the target expression package selected by the user can be determined from the expression package corresponding to the touch operation, and an expression editing page is displayed on the conversation interface; reference may be made to the above embodiments, and details are not repeated here.
202. When the terminal detects the user's selection operation on the editing information in the expression editing page, the target editing information corresponding to the selection operation is determined.
The expression editing page displays the expression package selected by the user as well as editing information; the editing information may be the personal information of the chat object currently chatting with the user in the conversation interface, for example the chat object's user name or user avatar. The expression editing page may also include a custom information control, through which the user can input custom text or select custom pictures.
Furthermore, the target editing information, namely the content the user wants to add to the target expression package, can be determined from the user's selection operation on the expression editing page.
203. The terminal acquires the file attribute of the target expression package and determines, according to the file attribute, whether the type of the target expression package is a preset expression package type.
The file attribute refers to the picture format of the target expression package; a picture format is the format in which a computer stores a picture. Common storage formats include JPEG (Joint Photographic Experts Group) and GIF (Graphics Interchange Format). One feature of the GIF format is that multiple color images can be stored in one GIF file; if the multiple images stored in one file are read out and displayed on the screen one by one, a simple animation is formed.
For example, the preset expression package type may be the static expression package. If the acquired file attribute of the target expression package is the JPEG format, that is, the target expression package is a single picture, it can be determined that the target expression package is a static expression package, and step 204 is then executed.
For another example, if the acquired file attribute of the target expression package is the GIF format, that is, the target expression package consists of multiple pictures, it can be determined that the target expression package is a dynamic image package, and step 205 is then executed.
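This file-attribute check can be sketched in Python as follows, assuming Pillow; the function name is illustrative, and treating any multi-frame image as dynamic is an assumption that matches the JPEG/GIF distinction above.

    from PIL import Image

    def is_static_expression(path):
        # A single-frame image (e.g. JPEG) is a static expression
        # package; a multi-frame image (e.g. an animated GIF) is a
        # dynamic image package. Pillow exposes `is_animated` on
        # multi-frame formats.
        img = Image.open(path)
        return not getattr(img, "is_animated", False)

If is_static_expression(path) holds, the flow proceeds to step 204; otherwise, it proceeds to step 205.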
204. The terminal superimposes the target editing information on the target expression package to obtain the processed target expression package.
After the target expression package is determined to be a static expression package, the target editing information may be directly superimposed on the target expression package; for the specific superimposing manner, reference may be made to the above embodiments. After the superimposing operation is completed, the processed expression package is obtained.
205. The terminal analyzes the target expression package to obtain a plurality of expression package pictures.
After the target expression package is determined to be a dynamic image package, since a dynamic image package is composed of multiple pictures, the dynamic image package is first analyzed to obtain the pictures it contains, so that the display effect of the dynamic image package can be guaranteed.
206. The terminal superimposes the target editing information on each expression package picture to obtain a plurality of processed expression package pictures.
After the dynamic image package is parsed into multiple pictures, each picture may be processed as in step 204 to obtain a corresponding processed picture.
207. The terminal synthesizes the plurality of processed expression package pictures to obtain the processed target expression package.
Specifically, the terminal synthesizes the processed pictures according to the playing sequence of the dynamic image package to obtain the processed dynamic image package, namely the processed target expression package.
208. When the terminal receives an expression package sending instruction, the processed target expression package is sent.
After the terminal generates the processed target expression package, the processed target expression package can be displayed on the conversation interface, and the user can then send it to the current chat object to complete the expression package sending operation.
The embodiment of the application discloses an expression package processing method, which comprises the following steps: when the terminal detects a user's touch operation on an expression package in the conversation interface, the target expression package corresponding to the touch operation is determined, and an expression editing page is displayed on the conversation interface; when the terminal detects the user's selection operation on the editing information in the expression editing page, the target editing information corresponding to the selection operation is determined; the terminal acquires the file attribute of the target expression package and determines, according to the file attribute, whether the type of the target expression package is a preset expression package type; if so, the terminal superimposes the target editing information on the target expression package to obtain the processed target expression package; if not, the terminal analyzes the target expression package to obtain a plurality of expression package pictures, superimposes the target editing information on each expression package picture to obtain a plurality of processed expression package pictures, and synthesizes them into the processed target expression package; and when the terminal receives an expression package sending instruction, the processed target expression package is sent. In this way, the processing efficiency of expression packages can be improved.
In some embodiments, referring to fig. 12, fig. 12 is a schematic flowchart of another expression package processing method provided in the embodiment of the present application. Taking a scenario in which a user publishes social dynamics as the application scenario of the expression package processing method as an example, a specific flow may be as follows:
301. When the terminal detects a user's touch operation on an expression package in the social dynamic publishing interface, the target expression package corresponding to the touch operation is determined, and an expression editing page is displayed on the social dynamic publishing interface.
The social dynamic publishing interface refers to an interface on which users publish personal dynamics in a social application. A personal dynamic is a statement published by a user and may include text, pictures and/or other content; it can be seen by other users on the same social platform as the user, who may perform operations such as commenting and liking on the personal dynamic.
For example, please refer to fig. 13, which is a schematic view of another information interaction interface provided in the embodiment of the present application. In fig. 13, the social dynamic publishing interface shows dynamic information published by users: the user named Zhang San has published the dynamic information "Today is really happy!"; the user named Xiao Li has published the dynamic information "Any plans for the weekend?"; and the user named Xiao Hua has published the dynamic information "Good morning.". Two controls are included under each piece of dynamic information: a comment control and a like control. The comment control can be used to comment on the dynamic, and the like control can be used to like the dynamic.
Further, the user may trigger the social dynamic publishing interface to display an input area through a touch operation on a comment control in fig. 13. Referring to fig. 14, fig. 14 is a schematic view of another information interaction interface according to an embodiment of the present disclosure. The social dynamic publishing interface in fig. 14 shows an input area for the user's input operations. When the user inputs text, an input keyboard is displayed in the input area; when the user views expressions, expression packages are displayed in the input area.
For example, expression packages may be displayed in the current input area. If the user's touch operation on an expression package in the input area is detected, the target expression package selected by the user can be determined from the expression package corresponding to the touch operation, and an expression editing page is displayed on the social dynamic publishing interface; reference may be made to the above embodiments, and details are not repeated here.
302. When the terminal detects the user's selection operation on the editing information in the expression editing page, the target editing information corresponding to the selection operation is determined.
The expression editing page displays the expression package selected by the user as well as editing information; the editing information may be the personal information of the user corresponding to the dynamic information in the current social dynamic publishing interface, for example a user name or a user avatar. The expression editing page may also include a custom information control, through which the user can input custom text or select custom pictures.
Furthermore, the target editing information, namely the content the user wants to add to the target expression package, can be determined from the user's selection operation on the expression editing page.
303. The terminal superimposes the target editing information on the target expression package to obtain the processed target expression package.
After the target expression package is determined to be a static expression package, the target editing information may be directly superimposed on the target expression package; for the specific superimposing manner, reference may be made to the above embodiments. After the superimposing operation is completed, the processed expression package is obtained.
304. When the terminal receives a comment instruction, the dynamic information on the social dynamic publishing interface is commented on with the processed target expression package.
After the terminal generates the processed target expression package, the processed target expression package can be displayed on the social dynamic publishing interface, completing the user's comment operation on the dynamic in the social dynamic publishing interface.
Referring to fig. 15, fig. 15 is a schematic view of another information interaction interface according to an embodiment of the present disclosure. In fig. 15, the user Zhang San has published a personal dynamic whose content is "Today is really happy!". The user Xiao Ming comments on this personal dynamic by replying with an expression package, and the expression package is displayed in the comment area of the personal dynamic in the social dynamic publishing interface. In this way, users can comment with expression packages on the personal dynamics published by other users.
The embodiment of the application discloses an expression package processing method, which comprises the following steps: when the terminal detects a user's touch operation on an expression package in the social dynamic publishing interface, the target expression package corresponding to the touch operation is determined, and an expression editing page is displayed on the social dynamic publishing interface; when the terminal detects the user's selection operation on the editing information in the expression editing page, the target editing information corresponding to the selection operation is determined; the terminal superimposes the target editing information on the target expression package to obtain the processed target expression package; and when the terminal receives a comment instruction, the dynamic information on the social dynamic publishing interface is commented on with the processed target expression package. In this way, the processing efficiency of expression packages can be improved.
In order to better implement the expression processing method provided in the embodiments of the present application, an embodiment of the present application further provides an expression processing apparatus based on the expression processing method. The meanings of the terms are the same as those in the expression processing method above; for specific implementation details, refer to the description in the method embodiments.
Referring to fig. 16, fig. 16 is a block diagram of an expression processing apparatus according to an embodiment of the present application, where the apparatus includes:
the display unit 401 is configured to display an expression editing page on an information interaction interface when an editing operation on a target expression in the information interaction interface is detected, where the expression editing page at least includes: the target expression and information editing area;
a first editing unit 402, configured to edit the content of the target expression through the information editing area, and display the edited target expression on the expression editing page;
a confirming unit 403, configured to display the edited target expression on the information interaction interface when a confirming operation on the edited target expression is detected.
In some embodiments, the first editing unit 402 may include:
the first determining subunit is used for determining a target information selection control from a plurality of information selection controls based on the touch operation of the user on the information editing area;
the first obtaining subunit is configured to obtain target editing information corresponding to the target information selection control;
and the first synthesis subunit is used for synthesizing the target editing information and the target expression to obtain an edited target expression.
In some embodiments, the first obtaining subunit may be specifically configured to:
determining the functional attribute of the target information selection control;
and acquiring editing information corresponding to the functional attribute to obtain the target editing information.
In some embodiments, the first synthesis subunit may be specifically configured to:
determining the content type of the target editing information;
determining a target position from the target expression according to the content type, wherein the target position is used for placing the target editing information;
and synthesizing the target editing information and the target expression based on the target position to obtain the edited target expression.
In some embodiments, the first synthesis subunit may be further specifically configured to:
if the content type is a text type, determining the target position from the blank area of the target expression;
and if the content type is an image type, determining the target position from the image.
In some embodiments, the first synthesis subunit may be further specifically configured to:
if the content type is a text type, acquiring a color value of each pixel point in the target expression; scanning pixel points in the target expression based on the color values, and determining a blank area in the target expression; calculating the maximum blank area in the blank areas according to a specified algorithm to obtain the target position;
and if the content type is an image type, determining the target position from the image.
In some embodiments, the display unit 401 may include:
the second acquisition subunit is used for acquiring the number of the information interaction objects;
the second determining subunit is used for determining the display mode of the information editing area according to the number;
and displaying an expression editing page on the information interaction interface based on the target expression and the display mode.
In some embodiments, the second determining subunit may be specifically configured to:
acquiring user information of the information interaction object;
if the number does not reach the preset number, determining that the display mode of the information editing area is the first display mode;
and if the number reaches a preset number, determining that the display mode of the information editing area is the second display mode.
In some embodiments, the display unit 401 may include:
and the first display subunit is used for displaying an expression editing page on the information interaction interface when the editing operation on the target expression in the interaction information display area is detected or when the editing operation on the target expression in the user operation area is detected.
In some embodiments, the display unit 401 may further include:
the recognition subunit is used for carrying out semantic recognition on the text information in the target expression to obtain a semantic recognition result;
the third obtaining subunit is configured to obtain, according to the semantic recognition result, a user image of the local user or of an information interaction object, where the information interaction object is associated with the current information interaction interface;
and the second display subunit is used for displaying the expression editing page on the information interaction interface at least based on the user image.
In some embodiments, the apparatus may further comprise:
the first acquisition unit is used for acquiring currently displayed interactive information of the information interactive interface when touch operation on the expression control is detected;
and the first screening unit is used for screening a target expression out of the expression gallery according to the interaction information and displaying the target expression on the information interaction interface.
In some embodiments, the first screening unit may include:
the fourth acquiring subunit is configured to acquire a plurality of expression labels in the expression gallery and an expression corresponding to each expression label;
the matching subunit is used for matching the interaction information with the plurality of expression labels and determining a target expression label matched with the interaction information;
and the fifth acquiring subunit is configured to acquire an expression corresponding to the target expression label, so as to obtain the target expression.
In some embodiments, the apparatus may further comprise:
the second acquisition unit is used for acquiring the input information currently displayed on the information interaction interface when a touch operation on the expression control is detected;
and the second screening unit is used for screening a target expression out of the expression gallery according to the input information and displaying the target expression on the information interaction interface.
In some embodiments, the apparatus may further comprise:
the first analysis unit is used for analyzing the target expression to obtain a plurality of sub-images if the target expression is a dynamic image;
the third editing unit is used for editing any frame of target sub-image in the multi-frame sub-images through the information editing area to obtain an edited target sub-image;
a third obtaining unit, configured to obtain image content of the edited target sub-image and image content layout information of other sub-images in the multiple frames of sub-images;
the processing unit is used for correspondingly editing the other sub-images based on the image content of the edited target sub-image and the image content layout information of the other sub-images to obtain the edited other sub-images of the other sub-images;
and the first synthesis unit is used for synthesizing all the edited sub-images to obtain the edited target expression.
In some embodiments, the apparatus further comprises:
the second analysis unit is used for analyzing the target expression to obtain a plurality of sub-images if the target expression is a dynamic image;
the second processing unit is used for editing each frame of target sub-image through the information editing area;
and the second synthesis unit is used for synthesizing all the edited sub-images to obtain the edited target expression.
In some embodiments, the apparatus may further comprise:
and the sending unit is used for sending the edited target expression to one or more information interaction objects when sending operation aiming at the information interaction interface is detected.
The embodiment of the application discloses an expression processing apparatus. When the display unit 401 detects an editing operation on a target expression in the information interaction interface, an expression editing page is displayed on the information interaction interface, the expression editing page at least comprising the target expression and an information editing area; the first editing unit 402 edits the content of the target expression through the information editing area and displays the edited target expression on the expression editing page; and the confirming unit 403 displays the edited target expression on the information interaction interface when a confirmation operation on the edited target expression is detected. In this way, the processing efficiency of expressions can be improved.
Correspondingly, the embodiment of the application also provides a computer device, which may be a terminal. As shown in fig. 17, fig. 17 is a schematic structural diagram of a computer device according to an embodiment of the present application. The computer device 500 includes a processor 501 with one or more processing cores, a memory 502 with one or more computer-readable storage media, and a computer program stored in the memory 502 and executable on the processor. The processor 501 is electrically connected to the memory 502. Those skilled in the art will appreciate that the computer device structure illustrated in the figure does not constitute a limitation of the computer device, and the computer device may include more or fewer components than illustrated, combine certain components, or have a different arrangement of components.
The processor 501 is a control center of the computer device 500, connects various parts of the entire computer device 500 using various interfaces and lines, performs various functions of the computer device 500 and processes data by running or loading software programs and/or modules stored in the memory 502, and calling data stored in the memory 502, thereby monitoring the computer device 500 as a whole.
In this embodiment of the application, the processor 501 in the computer device 500 loads instructions corresponding to processes of one or more applications into the memory 502, and the processor 501 runs the applications stored in the memory 502, so as to implement various functions as follows:
when the editing operation of the target expression in the information interaction interface is detected, an expression editing page is displayed on the information interaction interface, and the expression editing page at least comprises: a target expression and information editing area; editing the content of the target expression through the information editing area, and displaying the edited target expression on an expression editing page; and when the confirmation operation of the edited target expression is detected, displaying the edited target expression on the information interaction interface.
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
Optionally, as shown in fig. 17, the computer device 500 further includes: touch-sensitive display screen 503, radio frequency circuit 504, audio circuit 505, input unit 506 and power 507. The processor 501 is electrically connected to the touch display screen 503, the radio frequency circuit 504, the audio circuit 505, the input unit 506, and the power supply 507, respectively. Those skilled in the art will appreciate that the computer device configuration illustrated in FIG. 17 does not constitute a limitation of computer devices, and may include more or fewer components than those illustrated, or some components may be combined, or a different arrangement of components.
The touch display screen 503 may be used to display a graphical user interface and receive operation instructions generated by the user acting on the graphical user interface. The touch display screen 503 may include a display panel and a touch panel. The display panel may be used to display information entered by or provided to the user, as well as the various graphical user interfaces of the computer device, which may be composed of graphics, text, icons, video, and any combination thereof. Alternatively, the display panel may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like. The touch panel may be used to collect the user's touch operations on or near it (for example, operations performed by the user on or near the touch panel with a finger, a stylus, or any other suitable object or accessory) and to generate corresponding operation instructions, which execute the corresponding programs. Alternatively, the touch panel may include two parts: a touch detection device and a touch controller. The touch detection device detects the position of the user's touch, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, and sends these to the processor 501; it can also receive and execute commands sent by the processor 501. The touch panel may overlay the display panel; when the touch panel detects a touch operation on or near it, it transmits the operation to the processor 501 to determine the type of the touch event, and the processor 501 then provides a corresponding visual output on the display panel according to the type of the touch event. In the embodiment of the present application, the touch panel and the display panel may be integrated into the touch display screen 503 to implement the input and output functions. In some embodiments, however, the touch panel and the display panel may be implemented as two separate components to perform the input and output functions. That is, the touch display screen 503 can also serve as part of the input unit 506 to implement an input function.
The radio frequency circuit 504 may be used to transmit and receive radio frequency signals, so as to establish wireless communication with a network device or another computer device and to exchange signals with that network device or other computer device.
The audio circuit 505 may be used to provide an audio interface between the user and the computer device through a speaker and a microphone. The audio circuit 505 may transmit the electrical signal converted from received audio data to the speaker, which converts it into a sound signal for output; conversely, the microphone converts a collected sound signal into an electrical signal, which is received by the audio circuit 505 and converted into audio data; the audio data is then output to the processor 501 for processing, after which it may be transmitted to, for example, another computer device via the radio frequency circuit 504, or output to the memory 502 for further processing. The audio circuit 505 may also include an earbud jack to provide communication between a peripheral headset and the computer device.
The input unit 506 may be used to receive input numbers, character information, or user characteristic information (e.g., fingerprint, iris, facial information, etc.), and generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control.
The power supply 507 is used to power the various components of the computer device 500. Optionally, the power supply 507 may be logically connected to the processor 501 through a power management system, so as to implement functions of managing charging, discharging, power consumption management, and the like through the power management system. The power supply 507 may also include any component including one or more dc or ac power sources, recharging systems, power failure detection circuitry, power converters or inverters, power status indicators, and the like.
Although not shown in fig. 17, the computer device 500 may further include a camera, a sensor, a wireless fidelity module, a bluetooth module, etc., which are not described in detail herein.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
As can be seen from the above, when the computer device provided in this embodiment detects an editing operation on a target expression in an information interaction interface, an expression editing page is displayed on the information interaction interface, where the expression editing page at least includes: a target expression and information editing area; editing the content of the target expression through the information editing area, and displaying the edited target expression on an expression editing page; and when the confirmation operation of the edited target expression is detected, displaying the edited target expression on the information interaction interface.
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by instructions or by associated hardware controlled by the instructions, which may be stored in a computer readable storage medium and loaded and executed by a processor.
To this end, embodiments of the present application provide a computer-readable storage medium, in which a plurality of computer programs are stored, where the computer programs can be loaded by a processor to execute the steps in any expression processing method provided by the embodiments of the present application. For example, the computer program may perform the steps of:
when the editing operation of the target expression in the information interaction interface is detected, an expression editing page is displayed on the information interaction interface, and the expression editing page at least comprises: a target expression and information editing area;
editing the content of the target expression through the information editing area, and displaying the edited target expression on an expression editing page;
and when the confirmation operation of the edited target expression is detected, displaying the edited target expression on the information interaction interface.
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
Wherein the storage medium may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
Since the computer program stored in the storage medium can execute the steps in any expression processing method provided in the embodiments of the present application, beneficial effects that can be achieved by any expression processing method provided in the embodiments of the present application can be achieved, which are detailed in the foregoing embodiments and will not be described herein again.
The expression processing method, apparatus, storage medium and computer device provided in the embodiments of the present application are described in detail above. Specific examples are used herein to explain the principles and implementations of the present application, and the description of the above embodiments is only intended to help understand the method and core idea of the present application. Meanwhile, for those skilled in the art, there may be variations in specific implementations and application scope according to the idea of the present application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (20)

1. An expression processing method, comprising:
when the editing operation of a target expression in an information interaction interface is detected, an expression editing page is displayed on the information interaction interface, and the expression editing page at least comprises: the target expression and information editing area;
editing the content of the target expression through the information editing area, and displaying the edited target expression on the expression editing page;
and when the confirmation operation of the edited target expression is detected, displaying the edited target expression on the information interaction interface.
2. The method of claim 1, wherein the information editing region comprises a plurality of information selection controls;
the editing the content of the target expression through the information editing area comprises the following steps:
determining a target information selection control from a plurality of information selection controls based on the touch operation of the user on the information editing area;
acquiring target editing information corresponding to the target information selection control;
and synthesizing the target editing information and the target expression to obtain the edited target expression.
3. The method according to claim 2, wherein the obtaining of the target editing information corresponding to the target information selection control includes:
determining the functional attribute of the target information selection control;
and acquiring editing information corresponding to the functional attribute to obtain the target editing information.
4. The method of claim 3, wherein the target editing information is a user name or a user image associated with the information interaction interface.
5. The method of claim 2, wherein the synthesizing the target editing information and the target expression to obtain an edited target expression comprises:
determining the content type of the target editing information;
determining a target position from the target expression according to the content type, wherein the target position is used for placing the target editing information;
and synthesizing the target editing information and the target expression based on the target position to obtain the edited target expression.
6. The method of claim 5, wherein the target expression comprises an image; determining a target position from the target expression according to the content type includes:
if the content type is a text type, determining the target position from the blank area of the target expression;
and if the content type is an image type, determining the target position from the image.
7. The method of claim 6, wherein the determining the target location from the blank area of the target expression comprises:
obtaining the color value of each pixel point in the target expression;
scanning pixel points in the target expression based on the color values, and determining a blank area in the target expression;
and calculating the maximum blank area in the blank areas according to a specified algorithm to obtain the target position.
8. The method of claim 1, wherein the information interaction interface comprises: the interactive information display area displays interactive information between a user and an information interactive object;
the displaying of the expression editing page on the information interaction interface further comprises:
acquiring the number of the information interaction objects;
determining the display mode of the information editing area according to the number;
and displaying an expression editing page on the information interaction interface based on the target expression and the display mode.
9. The method of claim 8, wherein the presentation mode comprises: the method comprises a first display mode and a second display mode, wherein the first display mode is to display user information in an information editing area, the second display mode is to display a classification control in the information editing area, the classification control is used for triggering and displaying an expression editing sub-page, and the expression editing sub-page displays the user information corresponding to the classification control;
the determining the display mode of the information editing area according to the number comprises:
acquiring user information of the information interaction object;
if the number does not reach the preset number, determining that the display mode of the information editing area is the first display mode;
and if the number reaches a preset number, determining that the display mode of the information editing area is the second display mode.
10. The method of claim 1, wherein the information interaction interface comprises: an interactive information display area and a user operation area;
when the editing operation of the target expression in the information interaction interface is detected, an expression editing page is displayed on the information interaction interface, and the method comprises the following steps:
and when the editing operation of the target expression in the interactive information display area is detected, or when the editing operation of the target expression in the user operation area is detected, displaying an expression editing page on the information interactive interface.
11. The method of claim 1, wherein the target expression contains at least text information;
displaying an expression editing page on the information interaction interface, wherein the expression editing page comprises:
performing semantic recognition on the text information in the target expression to obtain a semantic recognition result;
acquiring a user image of a local user or an information interaction object according to the semantic recognition result, wherein the information interaction object is associated with the current information interaction interface;
and displaying the expression editing page on the information interaction interface at least based on the user image.
12. The method according to any one of claims 1-11, wherein the information interaction interface comprises an emoticon, and the emoticon is used for triggering the information interaction interface to display emotions;
before the detecting the editing operation on the target expression in the information interaction interface, the method further comprises the following steps:
when touch operation on the expression control is detected, acquiring currently displayed interactive information of the information interactive interface;
and screening out a target expression from an expression library according to the interaction information, and displaying the target expression on the information interaction interface.
13. The method of claim 12, wherein the screening out the target expression from the expression gallery according to the interaction information comprises:
acquiring a plurality of expression labels in the expression graph library and an expression corresponding to each expression label;
matching the interaction information with the plurality of expression labels, and determining a target expression label matched with the interaction information;
and obtaining the expression corresponding to the target expression label to obtain the target expression.
14. The method according to any one of claims 1-12, wherein before the detecting the editing operation on the target expression in the information interaction interface, the method further comprises:
when a touch operation on the expression control is detected, acquiring input information currently displayed on the information interaction interface;
and screening out a target expression from the expression library according to the input information, and displaying the target expression on the information interaction interface.
15. The method according to any one of claims 1-11, further comprising:
if the target expression is a dynamic image, parsing the target expression to obtain a plurality of frames of sub-images;
editing any one target sub-image among the plurality of frames of sub-images through the information editing area to obtain an edited target sub-image;
acquiring the image content of the edited target sub-image and the image content layout information of the other sub-images among the plurality of frames of sub-images;
performing corresponding editing processing on the other sub-images based on the image content of the edited target sub-image and the image content layout information of the other sub-images to obtain edited other sub-images;
and synthesizing all the edited sub-images to obtain the edited target expression.
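For illustration only: a Pillow-based sketch of the flow in claim 15 — parse the animated image into its frames, edit one target frame, replay the same edit at the recorded layout on the remaining frames, and re-synthesize. Drawing a caption stands in for the edit; all names are hypothetical. The per-frame variant in claim 16 below reduces to the same loop applied to every frame.

from PIL import Image, ImageDraw, ImageSequence

def edit_dynamic_expression(path: str, caption: str, out_path: str) -> None:
    gif = Image.open(path)
    # Parse the dynamic image into its frames of sub-images.
    frames = [frame.convert("RGBA") for frame in ImageSequence.Iterator(gif)]

    # Edit one target sub-image and record the layout (position) of the edit.
    layout = (10, frames[0].height - 30)
    ImageDraw.Draw(frames[0]).text(layout, caption, fill="white")

    # Perform the corresponding edit on the other sub-images at the same layout.
    for frame in frames[1:]:
        ImageDraw.Draw(frame).text(layout, caption, fill="white")

    # Synthesize all edited sub-images into the edited target expression.
    frames[0].save(out_path, save_all=True, append_images=frames[1:],
                   duration=gif.info.get("duration", 100), loop=0)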
16. The method according to any one of claims 1-11, further comprising:
if the target expression is a dynamic image, parsing the target expression to obtain a plurality of frames of sub-images;
editing each frame of sub-image among the plurality of frames of sub-images through the information editing area;
and synthesizing all the edited sub-images to obtain the edited target expression.
17. The method of claim 1, further comprising:
and when a sending operation on the information interaction interface is detected, sending the edited target expression to one or more information interaction objects.
18. An expression processing apparatus, characterized in that the apparatus comprises:
a display unit, used for displaying an expression editing page on an information interaction interface when an editing operation on a target expression in the information interaction interface is detected, wherein the expression editing page at least comprises: the target expression and an information editing area;
a first editing unit, used for editing the content of the target expression through the information editing area and displaying the edited target expression on the expression editing page;
and a confirming unit, used for displaying the edited target expression on the information interaction interface when a confirming operation on the edited target expression is detected.
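For illustration only: the unit decomposition of the apparatus in claim 18, mirrored as a minimal Python sketch; the class and method names are hypothetical stand-ins for whatever concrete modules an implementation would use.

class DisplayUnit:
    def show_editing_page(self, target_expression: str) -> dict:
        """Present the expression editing page: the target expression plus an editing area."""
        return {"expression": target_expression, "editing_area": {}}

class FirstEditingUnit:
    def edit(self, page: dict, new_content: str) -> dict:
        """Edit the target expression's content through the information editing area."""
        page["expression"] = new_content
        return page

class ConfirmingUnit:
    def confirm(self, page: dict) -> str:
        """On a confirming operation, surface the edited expression on the interface."""
        return page["expression"]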
19. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the expression processing method according to any one of claims 1 to 17 when executing the program.
20. A storage medium storing a plurality of instructions adapted to be loaded by a processor to perform the expression processing method according to any one of claims 1 to 17.
CN202110587687.3A 2021-05-27 2021-05-27 Expression processing method and device, computer equipment and storage medium Pending CN113342435A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110587687.3A CN113342435A (en) 2021-05-27 2021-05-27 Expression processing method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113342435A 2021-09-03

Family

ID=77472263

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110587687.3A Pending CN113342435A (en) 2021-05-27 2021-05-27 Expression processing method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113342435A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107368199A (en) * 2017-07-01 2017-11-21 北京奇虎科技有限公司 The expression management method and device of social software based on mobile terminal
CN107610206A (en) * 2017-09-29 2018-01-19 北京金山安全软件有限公司 Dynamic picture processing method and device, storage medium and electronic equipment
CN109166164A (en) * 2018-07-25 2019-01-08 维沃移动通信有限公司 A kind of generation method and terminal of expression picture
CN110780955A (en) * 2019-09-05 2020-02-11 连尚(新昌)网络科技有限公司 Method and equipment for processing emoticon message
CN111966804A (en) * 2020-08-11 2020-11-20 深圳传音控股股份有限公司 Expression processing method, terminal and storage medium
CN112800365A (en) * 2020-09-01 2021-05-14 腾讯科技(深圳)有限公司 Expression package processing method and device and intelligent device
CN112807688A (en) * 2021-02-08 2021-05-18 网易(杭州)网络有限公司 Method and device for setting expression in game, processor and electronic device

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022156557A1 (en) * 2021-01-22 2022-07-28 北京字跳网络技术有限公司 Image display method and apparatus, device, and medium
CN114092608A (en) * 2021-11-17 2022-02-25 广州博冠信息科技有限公司 Expression processing method and device, computer readable storage medium and electronic equipment
CN114092608B (en) * 2021-11-17 2023-06-13 广州博冠信息科技有限公司 Expression processing method and device, computer readable storage medium and electronic equipment
CN114553810A (en) * 2022-02-22 2022-05-27 广州博冠信息科技有限公司 Expression picture synthesis method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210903