CN112035032B - Expression adding method and device - Google Patents

Expression adding method and device

Info

Publication number
CN112035032B
CN112035032B
Authority
CN
China
Prior art keywords
expression
input
marker
target
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010615161.7A
Other languages
Chinese (zh)
Other versions
CN112035032A (en)
Inventor
严超 (Yan Chao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202010615161.7A priority Critical patent/CN112035032B/en
Publication of CN112035032A publication Critical patent/CN112035032A/en
Application granted granted Critical
Publication of CN112035032B publication Critical patent/CN112035032B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04842Selection of displayed objects or displayed text elements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04847Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses an expression adding method and device, belonging to the technical field of expression processing. The method comprises the following steps: receiving a first input of a user while first text information is displayed; displaying an expression preview interface in response to the first input; receiving a second input of the user for the expression preview interface; and, in response to the second input, adding a target expression to the position corresponding to a target expression marker in the first text information. The method and device prevent the user from having to switch back and forth between the input method and the expression package input page while entering information, which reduces the user's operation steps, saves the user's time, and improves the user experience.

Description

Expression adding method and device
Technical Field
The application belongs to the technical field of expression processing, and particularly relates to an expression adding method and device.
Background
The rapid development of smartphones and the internet has changed how people communicate and chat every day, and more and more people send information and chat through mobile electronic devices (such as mobile phones). During a chat, many users insert expressions, which makes the conversation more engaging.
In daily chatting, expressions are usually embedded within text. After entering a segment of text, the user must open the expression package page, select and insert the desired expression, and then reopen the input method page to continue editing the text. In other words, the user has to switch back and forth between the text input method page and the expression package input page, which adds operation steps, wastes the user's time, and results in a poor user experience.
Summary of the application
The embodiments of the present application aim to provide an expression adding method and device that solve the problems of the prior art, in which the existing way of adding expressions increases the user's operation steps, wastes the user's time, and leads to a poor user experience.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides an expression adding method, including:
receiving a first input of a user under the condition that the first text information is displayed;
responding to the first input, and displaying an expression preview interface;
receiving a second input of the user aiming at the expression preview interface;
and responding to the second input, and adding a target expression to the position corresponding to a target expression marker in the first text information.
In a second aspect, an embodiment of the present application provides an expression adding device, including:
the first input receiving module is used for receiving first input of a user under the condition of displaying the first text information;
the preview interface display module is used for responding to the first input and displaying an expression preview interface;
the second input receiving module is used for receiving second input of the user for the expression preview interface;
and the target expression adding module is used for responding to the second input and adding the target expression to the position corresponding to the target expression marker in the first text information.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored in the memory and executable on the processor, where the program or instructions, when executed by the processor, implement the expression adding method according to the first aspect.
In a fourth aspect, an embodiment of the present application provides a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the expression adding method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the expression adding method according to the first aspect.
In the embodiments of the present application, a first input of a user is received while first text information is displayed, an expression preview interface is displayed in response to the first input, a second input of the user for the expression preview interface is received, and a target expression is added to the position corresponding to a target expression marker in the first text information in response to the second input. Because the expression marker is added to the first text information in advance and is later replaced with an expression, the user no longer has to switch back and forth between the input method and the expression package input page while entering information, which reduces the user's operation steps, saves the user's time, and improves the user experience.
Drawings
Fig. 1 is a flowchart illustrating steps of an expression adding method according to an embodiment of the present disclosure;
fig. 2 is a schematic diagram of an expression replacement page provided in an embodiment of the present application;
fig. 3 is a schematic diagram of another expression replacement page provided in the embodiment of the present application;
fig. 4 is a schematic structural diagram of an expression adding device according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of another electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely with reference to the drawings in the embodiments of the present application, and it should be understood that the described embodiments are some, but not all embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any inventive effort, shall fall within the scope of protection of the present application.
The terms "first", "second" and the like in the description and in the claims of the present application are used to distinguish between similar elements and not necessarily to describe a particular sequential or chronological order. It is to be understood that the data so used are interchangeable under appropriate circumstances, so that the embodiments of the application can operate in sequences other than those illustrated or described herein. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates that the related objects before and after it are in an "or" relationship.
The expression adding scheme provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
Referring to fig. 1, a flowchart illustrating steps of an expression adding method provided in an embodiment of the present application is shown, and as shown in fig. 1, the expression adding method may specifically include the following steps:
step 101: in the case of displaying the first text information, a first input by a user is received.
The embodiment of the application can be applied to scenes for performing expression replacement according to the expression markers added in the text information input process.
The first text information refers to the text that the user has input and needs to send.
The first input refers to an input performed by the user to trigger display of the expression preview interface.
The first input may be an input performed by the user on the first text information; for example, the user may double-click a text character in the first text information to form the first input.
Of course, the first input is not limited to this. In a specific implementation, the first input corresponding to the first text information may also be generated in other ways. For example, a "confirm" button for completing input may be set in the session interface in advance; after the user inputs a section of first text information to be sent, the user can click the "confirm" button to generate the first input for the currently input first text information.
While the first text information is displayed, the first input of the user may be received, and step 102 is then executed.
Step 102: and responding to the first input, and displaying an expression preview interface.
The expression preview interface is a preview interface that displays expressions, such as the expression display area shown in the lower part of the interfaces illustrated in fig. 2 and fig. 3.
The expression marker is a marker that the user adds to the first text information to indicate where an expression should be inserted. That is, while inputting the first text information, when the user wants to add an expression at a certain position, the user can add an expression marker at that position instead of frequently switching between the input method and the expression package page. In this embodiment, the expression marker may be a marker preset by the user or a marker set by the system, such as "#"; the specific form may be determined according to the actual situation, which is not limited in this embodiment.
After the first input of the user is received, the expression preview interface may be displayed. Depending on whether the first text information contains an expression marker, there are two cases:
1. After the first input of the user is received, if the first text information contains at least one expression marker, the expression preview interface is displayed.
In this embodiment, when the first text information includes at least one expression marker, the expression preview interface may be displayed after the first input of the user is received, so that the user can select a desired expression from the expression preview interface to replace the expression marker.
2. After the first input of the user is received, if the first text information does not contain an expression marker, the first text information is sent.
When the first text information does not contain an expression marker, the first text information can be sent directly in response to the first input of the user.
It can be understood that this embodiment focuses on the case where the first text information includes an expression marker; when the first text information does not include an expression marker, the first text information may simply be sent directly, which is not described in further detail in this embodiment.
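As an illustration of the two cases above, the following minimal Kotlin sketch branches on whether the draft text contains at least one expression marker. It is not taken from the patent: the marker convention ("#" or "#<digits>#"), the ChatUi interface, and all function names are assumptions for illustration only.

```kotlin
// Hypothetical marker convention: a bare "#" or a numbered "#<digits>#" token.
val markerToken = Regex("""#\d+#|#""")

// Hypothetical UI abstraction; not part of the patent.
interface ChatUi {
    fun showExpressionPreview()
    fun sendMessage(text: String)
}

// Case 1: at least one marker is present, so show the expression preview interface.
// Case 2: no marker is present, so send the first text information directly.
fun onFirstInput(draft: String, ui: ChatUi) {
    if (markerToken.containsMatchIn(draft)) {
        ui.showExpressionPreview()
    } else {
        ui.sendMessage(draft)
    }
}
```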
After the expression preview interface is displayed, step 103 is executed.
Step 103: and receiving a second input of the user aiming at the expression preview interface.
The second input refers to an input performed by the user on the expression preview interface to select a target expression that will replace an expression marker in the first text information.
The second input may be an input formed by the user clicking an expression in the expression preview interface, or an input formed by the user dragging an expression; the specific operation form of the second input may be set according to business requirements, which is not limited in this embodiment.
After the expression preview interface is displayed, the second input performed by the user on the expression preview interface can be received, and step 104 is then executed.
Step 104: and responding to the second input, and adding the target expression to the position corresponding to the target expression marker in the first text message.
The target expression refers to an expression selected by the user from the expression preview interface for replacing the expression marker.
The target expression marker is an expression marker that needs to be replaced with a target expression in the first text message, and for example, the expression marker included in the first text message includes: the marker 1, the marker 2, and the marker 3, when the marker 1 needs to be replaced with a target expression, the marker 1 is used as a target expression marker. And when the marker 2 and the marker 3 need to be replaced by the target expression, the marker 2 and the marker 3 are used as the target expression markers.
It should be understood that the above examples are only examples listed for better understanding of the technical solutions of the embodiments of the present application, and are not to be taken as the only limitation to the embodiments.
After the second input performed by the user on the expression preview interface is received, the target expression selected by the user can be obtained, and the target expression marker in the first text information is replaced with the target expression; that is, the target expression is added at the position corresponding to the target expression marker. This is described in detail in conjunction with the specific implementations below.
In a specific implementation manner of the present application, the first text information includes at least one expression marker, and the step 104 may include:
substep S1: in response to the second input, determining the user-selected emoji marker.
In this embodiment, after receiving a second input of the user on the expression preview interface, the expression marker selected by the user may be determined according to the second input performed by the user on the expression preview interface.
In this embodiment, the expression markers selected by the user may be sequentially used as the expression markers selected by the user according to the arrangement order of the expression markers added in the first text information. The expression markers may also be selected according to the sequence of the expression markers in the first text message set by the user, and specifically, the sequence may be determined according to business requirements, which is not limited in this embodiment.
After receiving a second input from the user to the emoji preview interface, the emoji tag selected by the user may be determined according to the second input, and then, sub-step S2 is performed.
Substep S2: and determining a first expression marker belonging to the target type in the at least one expression marker according to the target type corresponding to the expression marker selected by the user, and determining the first expression marker as the target expression marker.
The target type is the type of the expression marker selected by the user. In this embodiment, expression markers of the same type may be preset; that is, when the user wants to add the same expression at two or more positions in the information to be sent, two identical markers can be added to the information in advance. As shown in fig. 2, the markers added to the information to be sent include "#", "#" and "#1#", where the two "#" markers are markers of the same type.
The first expression marker is an expression marker belonging to the target type among the at least one expression marker in the first text information.
After the expression marker selected by the user is obtained, the target type corresponding to that marker can be obtained, and according to the target type, the first expression marker belonging to the target type among the at least one expression marker included in the first text information is determined.
The first expression marker belonging to the target type, determined according to the target type corresponding to the expression marker selected by the user, may then be determined as the target expression marker.
After determining the target expression marker, sub-step S3 is performed.
Sub-step S3: and adding the target expression to the position corresponding to the target expression marker.
After the target expression marker is determined, the target expression can be added to the position corresponding to the target expression marker, so that expression replacement of the expression marker in the first text information is achieved.
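The type-based replacement described in sub-steps S2 and S3 can be sketched as follows. This is only an illustration under the assumption that markers are literal tokens such as "#" or "#1#" and that an expression is represented here by a placeholder string; it is not the patented implementation, and the function names are hypothetical.

```kotlin
// Hypothetical marker convention: a bare "#" or a numbered "#<digits>#" token.
// The alternation tries the numbered form first, so "#1#" is never split apart.
val markerToken = Regex("""#\d+#|#""")

// Replace every marker of the selected type with the target expression in one
// pass; markers of other types are left in place for later replacement.
fun replaceMarkersOfType(text: String, selectedType: String, expression: String): String =
    markerToken.replace(text) { m -> if (m.value == selectedType) expression else m.value }

fun main() {
    val draft = "Happy birthday# here is a #1# for you#"
    // The user selected the bare "#" type; both "#" positions receive the
    // expression, while the "#1#" marker of a different type is preserved.
    println(replaceMarkersOfType(draft, "#", "[smile]"))
    // Prints: Happy birthday[smile] here is a #1# for you[smile]
}
```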
In this embodiment, the expression markers in the first text information may also be replaced one by one in sequence, as described in detail in the following specific implementation.
In another specific implementation manner of the present application, after adding the target expression to the position corresponding to the target expression marker, the method may further include:
step M1: and determining a second expression marker in the expression markers according to a preset sequence.
In this embodiment, the second expression marker is the expression marker selected according to a preset sequence after the target expression marker in the first text information has been replaced with the target expression.
The preset sequence may be the order in which the expression markers appear in the first text information, that is, the order in which they are arranged in the first text information.
The preset sequence may also be an order of the expression markers in the first text information set by the user.
After the target expression has been added to the position corresponding to the target expression marker, the second expression marker that needs expression replacement is selected from the expression markers of the first text information according to the preset sequence.
After the second one of the expression markers is determined, step M2 is performed.
Step M2: and receiving a third input of the user aiming at the expression preview interface.
The third input refers to an input performed by the user on the expression preview interface to replace the second expression marker with a selected expression.
In this embodiment, the third input may be an input in which the user clicks an expression in the expression preview interface, or an input in which the user drags an expression, and the like.
After determining the second expression marker of the expression markers, a third input from the user to the expression preview interface may be received, and step M3 is performed.
Step M3: in response to the third input, adding a second target expression to a location corresponding to the second expression marker.
The second target expression refers to an expression selected by the user from the expression preview interface for replacing the second expression marker.
After the third input of the user for the expression preview interface is received, the second target expression is added to the position corresponding to the second expression marker according to the third input; that is, the second expression marker in the first text information is replaced with the second target expression.
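A sketch of this sequential flow, under the same illustrative marker convention used above, might look like the following; taking the next marker in its order of appearance is just one possible "preset sequence", and the names are assumptions rather than the patent's code.

```kotlin
// Hypothetical marker convention, as in the earlier sketches.
val markerToken = Regex("""#\d+#|#""")

// Determine the next (second) expression marker according to a preset order
// (here, simply its order of appearance in the text) and replace only that
// marker with the expression chosen through the third input.
fun replaceNextMarker(text: String, expression: String): String {
    val match = markerToken.find(text) ?: return text   // no marker left to replace
    return text.replaceRange(match.range, expression)   // swap just this one marker
}
```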
In this embodiment, when the user needs to send the first text information, any expression marker in the first text information that has not been replaced may be deleted before the text is sent, as described in detail below in conjunction with the following specific implementation.
In a specific implementation manner of the present application, after the step 104, the method may further include:
step N1: and receiving fourth input of the user aiming at the first text information.
In this embodiment, the fourth input refers to an input by which the user sends the first text information.
In some examples, the fourth input may be generated by the user clicking a button. For example, after the user finishes replacing the expression markers in the first text information, the user may click the "send" button; if an unreplaced expression marker still exists in the text, a prompt message and a "confirm" button may be displayed so that the user can choose whether to send the message, and the operation of clicking the confirm button may be regarded as the fourth input.
In some examples, the fourth input may be formed by the user double-clicking a character in the first text information; for example, when the user needs to send the first text information, the user may double-click a character in the first text information to form the fourth input.
It should be understood that the above examples are only examples listed for better understanding of the technical solutions of the embodiments of the present application, and are not to be taken as the only limitation to the embodiments.
When the user wants to send the first text information, the fourth input of the user for the first text information may be received, and step N2 is performed.
Step N2: and responding to the fourth input, deleting the unreplaced expression markers in the first text message, and sending the first text message.
The expression marker which is not replaced refers to the expression marker which is not replaced in the first text message.
After receiving a fourth input of the user for the first text message, it may be determined whether an expression marker that is not replaced exists in the first text message, and when the expression marker that is not replaced exists, the expression marker that is not replaced in the first text message may be acquired, the expression marker that is not replaced in the first text message may be deleted, and the first text message may be sent.
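As a final illustration of this send path (again a sketch under the assumed marker convention, not the patent's code, with hypothetical names), any marker that was never replaced is stripped before the text is handed to a send routine:

```kotlin
// Hypothetical marker convention, as in the earlier sketches.
val markerToken = Regex("""#\d+#|#""")

// On the fourth input: delete every unreplaced expression marker from the
// first text information, then send the cleaned text.
fun cleanAndSend(draft: String, send: (String) -> Unit) {
    val cleaned = markerToken.replace(draft, "")
    send(cleaned)
}
```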
In addition to the beneficial effects described above, the expression adding method provided by the embodiments of the present application can replace expression markers in the text information in batches, which further saves the user's time and improves the user experience.
According to the expression adding method provided by the embodiments of the present application, a first input of a user is received while first text information is displayed, an expression preview interface is displayed in response to the first input, a second input of the user for the expression preview interface is received, and a target expression is added to the position corresponding to a target expression marker in the first text information in response to the second input. Because the expression marker is added to the first text information in advance and is later replaced with an expression, the user no longer has to switch back and forth between the input method and the expression package input page while entering information, which reduces the user's operation steps, saves the user's time, and improves the user experience.
It should be noted that the execution subject of the expression adding method provided in the embodiments of the present application may be an expression adding device, or a control module in the expression adding device for executing the expression adding method. The embodiments of the present application describe the expression adding method by taking an expression adding device that executes the method as an example.
Referring to fig. 4, a schematic structural diagram of an expression adding device provided in an embodiment of the present application is shown, and as shown in fig. 4, the expression adding device may specifically include the following modules:
a first input receiving module 410, configured to receive a first input of a user when the first text information is displayed;
a preview interface display module 420, configured to display an expression preview interface in response to the first input;
the second input receiving module 430 is configured to receive a second input of the user for the emotion preview interface;
and the target expression adding module 440 is configured to add a target expression to a position corresponding to the target expression marker in the first text information in response to the second input.
Optionally, the preview interface display module 420 includes:
the preview interface display unit is used for responding to the first input and displaying an expression preview interface if the first text information comprises at least one expression marker;
the device further comprises:
and the first text information sending module is used for responding to the first input and sending the first text information if the first text information does not include the expression marker.
Optionally, the first text information includes at least one expression marker, and the target expression adding module 440 includes:
an expression marker determination unit for determining the expression marker selected by the user in response to the second input;
a target marker determining unit, configured to determine, according to a target type corresponding to the expression marker selected by the user, a first expression marker belonging to the target type in the at least one expression marker, and determine the first expression marker as the target expression marker;
and the target expression adding unit is used for adding the target expression to the position corresponding to the target expression marker.
Optionally, the apparatus further comprises:
the second marker determining module is used for determining a second expression marker in the expression markers according to a preset sequence;
the third input receiving module is used for receiving a third input of the user for the expression preview interface;
and the second expression adding module is used for responding to the third input and adding a second target expression to the position corresponding to the second expression marker.
Optionally, the preset sequence includes at least one of: the order in which the expression markers appear in the first text information, and an order of the expression markers in the first text information set by the user.
Optionally, the apparatus further comprises:
the fourth input receiving module is used for receiving fourth input of the first text information by the user;
and the expression marker deleting module is used for responding to the fourth input, deleting the unreplaced expression markers in the first text information, and sending the first text information.
According to the expression adding device provided by the embodiments of the present application, a first input of a user is received while first text information is displayed, an expression preview interface is displayed in response to the first input, a second input of the user for the expression preview interface is received, and a target expression is added to the position corresponding to a target expression marker in the first text information in response to the second input. Because the expression marker is added to the first text information in advance and is later replaced with an expression, the user no longer has to switch back and forth between the input method and the expression package input page while entering information, which reduces the user's operation steps, saves the user's time, and improves the user experience.
The expression adding device in the embodiment of the present application may be a device, and may also be a component, an integrated circuit, or a chip in a terminal. The device can be mobile electronic equipment or non-mobile electronic equipment. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine, a self-service machine, and the like, and the embodiments of the present application are not particularly limited.
The expression adding device in the embodiments of the present application may be a device having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
The expression adding device provided in the embodiment of the present application can implement each process implemented in the method embodiment of fig. 1, and is not described here again to avoid repetition.
Optionally, an electronic device is further provided in the embodiments of the present application. As shown in fig. 5, the electronic device 500 may include a processor 510, a memory 509, and a program or instructions stored in the memory 509 and executable on the processor 510. When executed by the processor 510, the program or instructions implement each process of the expression adding method embodiment described above and can achieve the same technical effect, which is not repeated here.
It should be noted that the electronic devices in the embodiments of the present application include the mobile electronic devices and the non-mobile electronic devices described above.
As shown in fig. 6, the electronic device 600 includes, but is not limited to: a radio frequency unit 601, a network module 602, an audio output unit 603, an input unit 604, a sensor 605, a display unit 606, a user input unit 607, an interface unit 608, a memory 609, a processor 610, and the like.
Those skilled in the art will appreciate that the electronic device 600 may further include a power source (e.g., a battery) for supplying power to the various components; the power source may be logically connected to the processor 610 through a power management system, so that charging, discharging, and power consumption management are implemented through the power management system. The electronic device structure shown in fig. 6 does not constitute a limitation of the electronic device, and the electronic device may include more or fewer components than those shown, combine some components, or arrange components differently, which is not described further here.
The user input unit 607 is configured to receive a first input from a user when the first text information is displayed;
responding to the first input, and displaying an expression preview interface;
receiving a second input of the user aiming at the expression preview interface;
and responding to the second input, and adding the target expression to a position corresponding to the target expression marker in the first text information.
In the embodiments of the present application, a first input of a user is received while first text information is displayed, an expression preview interface is displayed in response to the first input, a second input of the user for the expression preview interface is received, and a target expression is added to the position corresponding to a target expression marker in the first text information in response to the second input. Because the expression marker is added to the first text information in advance and is later replaced with an expression, the user no longer has to switch back and forth between the input method and the expression package input page while entering information, which reduces the user's operation steps, saves the user's time, and improves the user experience.
Optionally, the display unit 606 is configured to, in response to the first input, display an expression preview interface if the first text information includes at least one expression marker;
the radio frequency unit 601 is further configured to:
responding to the first input, and if the first text information does not include the expression marker, sending the first text information.
Optionally, the first text information includes at least one expression marker, and the adding the target expression to a position corresponding to the target expression marker in the first text information in response to the second input includes:
in response to the second input, determining the expression marker selected by the user;
determining a first expression marker belonging to the target type in the at least one expression marker according to the target type corresponding to the expression marker selected by the user, and determining the first expression marker as the target expression marker;
and adding the target expression to the position corresponding to the target expression marker.
Optionally, after the adding the target expression to the position corresponding to the target expression marker, the method further includes:
determining a second expression marker in the expression markers according to a preset sequence;
receiving a third input of the user for the expression preview interface;
in response to the third input, adding a second target expression to a location corresponding to the second expression marker.
Optionally, the preset sequence includes at least one of: the order in which the expression markers appear in the first text information, and an order of the expression markers in the first text information set by the user.
Optionally, after the adding, in response to the second input, a target expression to a position corresponding to a target expression marker in the first text information, the method further includes:
receiving a fourth input of the user aiming at the first text information;
and responding to the fourth input, deleting the unreplaced expression markers in the first text information, and sending the first text information.
The embodiment of the application can realize batch replacement of the expression markers in the text information, further save the time of a user and improve the experience of the user.
It is to be understood that, in the embodiment of the present application, the input Unit 604 may include a Graphics Processing Unit (GPU) 6041 and a microphone 6042, and the Graphics Processing Unit 6041 processes image data of a still picture or a video obtained by an image capturing apparatus (such as a camera) in a video capturing mode or an image capturing mode. The display unit 606 may include a display panel 6061, and the display panel 6061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 607 includes a touch panel 6071 and other input devices 6072. A touch panel 6071, also referred to as a touch screen. The touch panel 6071 may include two portions of a touch detection device and a touch controller. Other input devices 6072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein. The memory 609 may be used to store software programs as well as various data including, but not limited to, application programs and an operating system. The processor 610 may integrate an application processor, which primarily handles operating systems, user interfaces, applications, etc., and a modem processor, which primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 610.
An embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements the processes of the expression adding method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and the like.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the expression addition method embodiment, and can achieve the same technical effect, and in order to avoid repetition, the description is omitted here.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as a system-on-chip, or a system-on-chip.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order, depending on the functionality involved, e.g., the methods described may be performed in an order different than that described, and various steps may be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the above embodiment method can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better embodiment. Based on such understanding, the technical solutions of the present application may be substantially or partially embodied in the form of a software product stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk), and including instructions for enabling a terminal (e.g., a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the scope of the invention as defined by the appended claims.

Claims (8)

1. An expression adding method, comprising:
receiving a first input of a user under the condition that the first text information is displayed;
responding to the first input, and displaying an expression preview interface;
receiving a second input of the user aiming at the expression preview interface;
responding to the second input, and adding a target expression to a position corresponding to a target expression marker in the first text information;
the first text information includes at least one expression marker, and adding a target expression to a position corresponding to the target expression marker in the first text information in response to the second input includes:
in response to the second input, determining the expression marker selected by the user;
determining a first expression marker belonging to the target type in the at least one expression marker according to the target type corresponding to the expression marker selected by the user, and determining the first expression marker as the target expression marker;
adding the target expression to a position corresponding to the target expression marker; and when the number of the target expression markers in the first text information is at least two, adding the target expression to the positions corresponding to the at least two target expression markers.
2. The method of claim 1, wherein displaying an emoji preview interface in response to the first input comprises:
responding to the first input, and if the first text information comprises at least one expression marker, displaying an expression preview interface;
the method further comprises the following steps:
responding to the first input, and if the first text information does not include the expression marker, sending the first text information.
3. The method of claim 1, wherein after the adding the target expression to the position corresponding to the target expression marker, further comprising:
determining a second expression marker in the expression markers according to a preset sequence;
receiving a third input of the user for the expression preview interface;
in response to the third input, adding a second target expression to a location corresponding to the second expression marker.
4. The method of claim 3, wherein the preset sequence comprises at least one of: the order in which the expression markers appear in the first text information, and an order of the expression markers in the first text information set by the user.
5. The method of claim 1, wherein after the adding a target expression to a position corresponding to the target expression marker in the first text information in response to the second input, further comprising:
receiving a fourth input of the user aiming at the first text information;
and responding to the fourth input, deleting the unreplaced expression markers in the first text information, and sending the first text information.
6. An expression adding device, comprising:
the first input receiving module is used for receiving first input of a user under the condition of displaying the first text information;
the preview interface display module is used for responding to the first input and displaying an expression preview interface;
the second input receiving module is used for receiving second input of the user for the expression preview interface;
the target expression adding module is used for responding to the second input and adding a target expression to a position corresponding to a target expression marker in the first text information;
the first text information comprises at least one expression marker, and the target expression adding module comprises:
an expression marker determination unit for determining the expression marker selected by the user in response to the second input;
a target marker determining unit, configured to determine, according to a target type corresponding to the expression marker selected by the user, a first expression marker belonging to the target type in the at least one expression marker, and determine the first expression marker as the target expression marker;
the target expression adding unit is used for adding the target expression to the position corresponding to the target expression marker; and when the number of the target expression markers in the first text information is at least two, adding the target expression to the positions corresponding to the at least two target expression markers.
7. The apparatus of claim 6, wherein the preview interface display module comprises:
the preview interface display unit is used for responding to the first input and displaying an expression preview interface if the first text information comprises at least one expression marker;
the device further comprises:
and the first text information sending module is used for responding to the first input and sending the first text information if the first text information does not include the expression marker.
8. The apparatus of claim 6, further comprising:
the second marker determining module is used for determining a second expression marker in the expression markers according to a preset sequence;
the third input receiving module is used for receiving a third input of the user for the expression preview interface;
and the second expression adding module is used for responding to the third input and adding a second target expression to the position corresponding to the second expression marker.
CN202010615161.7A 2020-06-30 2020-06-30 Expression adding method and device Active CN112035032B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010615161.7A CN112035032B (en) 2020-06-30 2020-06-30 Expression adding method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010615161.7A CN112035032B (en) 2020-06-30 2020-06-30 Expression adding method and device

Publications (2)

Publication Number Publication Date
CN112035032A CN112035032A (en) 2020-12-04
CN112035032B true CN112035032B (en) 2022-07-12

Family

ID=73579769

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010615161.7A Active CN112035032B (en) 2020-06-30 2020-06-30 Expression adding method and device

Country Status (1)

Country Link
CN (1) CN112035032B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103294220A (en) * 2012-02-28 2013-09-11 联想(北京)有限公司 Input method and device
CN103809766A (en) * 2012-11-06 2014-05-21 夏普株式会社 Method and electronic device for converting characters into emotion icons
CN104076944A (en) * 2014-06-06 2014-10-01 北京搜狗科技发展有限公司 Chat emoticon input method and device
CN104298429A (en) * 2014-09-25 2015-01-21 北京搜狗科技发展有限公司 Information presentation method based on input and input method system
CN110058776A (en) * 2019-02-13 2019-07-26 阿里巴巴集团控股有限公司 The message issuance method and device and electronic equipment of Web page
CN111258434A (en) * 2020-01-14 2020-06-09 上海米哈游天命科技有限公司 Method, device, equipment and storage medium for inserting pictures into chat interface

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104063427A (en) * 2014-06-06 2014-09-24 北京搜狗科技发展有限公司 Expression input method and device based on semantic understanding
EP3326051B1 (en) * 2015-09-09 2020-10-21 Apple Inc. Emoji and canned responses
CN109445614A (en) * 2018-10-02 2019-03-08 彭红文 A kind of expression input method, device, server and storage medium

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103294220A (en) * 2012-02-28 2013-09-11 联想(北京)有限公司 Input method and device
CN103809766A (en) * 2012-11-06 2014-05-21 夏普株式会社 Method and electronic device for converting characters into emotion icons
CN104076944A (en) * 2014-06-06 2014-10-01 北京搜狗科技发展有限公司 Chat emoticon input method and device
CN104298429A (en) * 2014-09-25 2015-01-21 北京搜狗科技发展有限公司 Information presentation method based on input and input method system
CN110058776A (en) * 2019-02-13 2019-07-26 阿里巴巴集团控股有限公司 The message issuance method and device and electronic equipment of Web page
CN111258434A (en) * 2020-01-14 2020-06-09 上海米哈游天命科技有限公司 Method, device, equipment and storage medium for inserting pictures into chat interface

Also Published As

Publication number Publication date
CN112035032A (en) 2020-12-04

Similar Documents

Publication Publication Date Title
CN113300938B (en) Message sending method and device and electronic equipment
CN111984115A (en) Message sending method and device and electronic equipment
CN112486444B (en) Screen projection method, device, equipment and readable storage medium
CN113285866B (en) Information sending method and device and electronic equipment
CN112817676A (en) Information processing method and electronic device
CN112099714B (en) Screenshot method and device, electronic equipment and readable storage medium
CN114327088A (en) Message sending method, device, electronic equipment and medium
CN112286611B (en) Icon display method and device and electronic equipment
CN113590008A (en) Chat message display method and device and electronic equipment
CN112637407A (en) Voice input method and device and electronic equipment
CN112818094A (en) Chat content processing method and device and electronic equipment
CN112286615A (en) Information display method and device of application program
EP4351117A1 (en) Information display method and apparatus, and electronic device
CN115718581A (en) Information display method and device, electronic equipment and storage medium
CN113239212B (en) Information processing method and device and electronic equipment
CN112269510B (en) Information processing method and device and electronic equipment
CN112035032B (en) Expression adding method and device
CN112230817B (en) Link page display method and device and electronic equipment
CN112099715B (en) Information processing method and device
CN114327706A (en) Information sharing method and device, electronic equipment and readable storage medium
CN114625296A (en) Application processing method and device
CN113852540A (en) Information sending method, information sending device and electronic equipment
CN113342241A (en) Target character selection method and device, electronic equipment and storage medium
CN113141296A (en) Message display method and device and electronic equipment
CN113037618B (en) Image sharing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant