CN111158817A - Information processing method and electronic equipment - Google Patents


Info

Publication number
CN111158817A
Authority
CN
China
Prior art keywords
input
image
interface
user
text information
Prior art date
Legal status (an assumption, not a legal conclusion)
Pending
Application number
CN201911345613.8A
Other languages
Chinese (zh)
Inventor
黄金宝
Current Assignee (as listed; may be inaccurate)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN201911345613.8A
Publication of CN111158817A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/451 Execution arrangements for user interfaces
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/60 Editing figures and text; Combining figures or text
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 51/00 User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L 51/04 Real-time or near real-time messaging, e.g. instant messaging [IM]
    • H04L 51/046 Interoperability with other network applications or services


Abstract

An embodiment of the invention provides an information processing method and an electronic device, relates to the field of communication technology, and aims to solve the problem that existing electronic devices display messages poorly in a chat interface. The method comprises the following steps: while a conversation interface is displayed, receiving a first input in which a user enters a first image and target text information; in response to the first input, displaying a first interface that includes the first image and the target text information; and, upon receiving a second input from the user, displaying a second image in the conversation interface, where the second image is synthesized from the first image and the target text information. The method can be applied to scenarios in which an electronic device displays combined image-and-text information in a conversation interface.

Description

Information processing method and electronic equipment
Technical Field
The embodiment of the invention relates to the technical field of communication, in particular to an information processing method and electronic equipment.
Background
With the development of electronic and communication technology, the functions of electronic devices have become increasingly diverse and their applications increasingly rich; people use electronic devices to study, work, socialize, and so on. For example, multiple users may converse (i.e., chat) with one another through an instant messaging application on an electronic device.
Currently, when a user chats with other users in a conversation interface (e.g., a group chat interface), the electronic device displays both the messages the user sends to others and the messages received from them. When a user sends multiple messages (e.g., text and pictures) in a short time, the electronic device displays them in the chat interface one by one.
However, during a multi-user conversation, messages refresh so quickly that other users' messages may be interleaved among the user's own messages, sometimes creating a misleading context, so that the users cannot communicate smoothly. The electronic device is therefore less effective at displaying messages in the chat interface.
Disclosure of Invention
An embodiment of the invention provides an information processing method and an electronic device, aiming to solve the problem that existing electronic devices display messages poorly in a chat interface.
To solve the above technical problem, the embodiment of the present invention is implemented as follows:
In a first aspect, an embodiment of the present invention provides an information processing method applied to an electronic device. The method includes: while a conversation interface is displayed, receiving a first input in which a user enters a first image and target text information; in response to the first input, displaying a first interface that includes the first image and the target text information; and, upon receiving a second input from the user, displaying a second image in the conversation interface, where the second image is synthesized from the first image and the target text information.
In a second aspect, an embodiment of the present invention provides an electronic device that includes a receiving module and a display module. The receiving module receives a first input in which a user enters a first image and target text information while the display module displays a conversation interface. In response to the first input, the display module displays a first interface that includes the first image and the target text information. The display module further displays a second image in the conversation interface when the receiving module receives a second input from the user, where the second image is synthesized from the first image and the target text information.
In a third aspect, an embodiment of the present invention provides an electronic device, which includes a processor, a memory, and a computer program stored on the memory and executable on the processor, and when the computer program is executed by the processor, the steps of the information processing method in the first aspect are implemented.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the information processing method in the first aspect.
In the embodiment of the invention, while a conversation interface is displayed, a first input in which a user enters a first image and target text information can be received; in response to the first input, a first interface including the first image and the target text information is displayed; and, upon receiving a second input from the user, a second image synthesized from the first image and the target text information is displayed in the conversation interface. Applied to sending images and text during a multi-user conversation, the embodiment combines the image and the text the user wants to send into a single whole before sending, which prevents the message from being misread because its image and text parts were separated in transit. Since the image and the text are synthesized and sent together, the combined message is displayed as one continuous, intuitive unit, which better conveys what the user wants to express, makes the message easier to understand, and improves the user experience. The embodiment of the invention can therefore improve how effectively the electronic device displays messages.
Drawings
Fig. 1 is a schematic diagram of the architecture of a possible Android operating system according to an embodiment of the present invention;
Fig. 2 is a diagram illustrating an information processing method according to an embodiment of the present invention;
Fig. 3 is a first schematic interface diagram of an application of the information processing method according to an embodiment of the present invention;
Fig. 4 is a second schematic interface diagram of an application of the information processing method according to an embodiment of the present invention;
Fig. 5 is a third schematic interface diagram of an application of the information processing method according to an embodiment of the present invention;
Fig. 6 is a fourth schematic interface diagram of an application of the information processing method according to an embodiment of the present invention;
Fig. 7 is a fifth schematic interface diagram of an application of the information processing method according to an embodiment of the present invention;
Fig. 8 is a sixth schematic interface diagram of an application of the information processing method according to an embodiment of the present invention;
Fig. 9 is a first schematic structural diagram of an electronic device according to an embodiment of the present invention;
Fig. 10 is a second schematic structural diagram of an electronic device according to an embodiment of the present invention;
Fig. 11 is a hardware schematic diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are some, not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art from these embodiments without creative effort shall fall within the protection scope of the present invention.
The term "and/or" herein describes an association between objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A alone, both A and B, or B alone. The symbol "/" herein denotes an "or" relationship between the associated objects; for example, A/B denotes A or B.
The terms "first" and "second," and the like, in the description and in the claims herein are used for distinguishing between different objects and not for describing a particular order of the objects. For example, the first image and the second image, etc. are for distinguishing different images, rather than for describing a particular order of the images.
In the embodiments of the present invention, words such as "exemplary" or "for example" are used to indicate an example, illustration, or description. Any embodiment or design described as "exemplary" or "for example" is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, these words are intended to present related concepts in a concrete fashion.
In the description of the embodiments of the present invention, unless otherwise specified, "a plurality of" means two or more; for example, a plurality of processing units means two or more processing units, and a plurality of elements means two or more elements.
An embodiment of the invention provides an information processing method and an electronic device that can, while a conversation interface is displayed, receive a first input in which a user enters a first image and target text information; in response to the first input, display a first interface including the first image and the target text information; and, upon receiving a second input from the user, display in the conversation interface a second image synthesized from the first image and the target text information. Applied to sending images and text during a multi-user conversation, the embodiment combines the image and the text the user wants to send into a single whole before sending, which prevents the message from being misread because its image and text parts were separated in transit. Since the image and the text are synthesized and sent together, the combined message is displayed as one continuous, intuitive unit, which better conveys what the user wants to express, makes the message easier to understand, and improves the user experience. The embodiment of the invention can therefore improve how effectively the electronic device displays messages.
The electronic device in the embodiment of the present invention may be an electronic device having an operating system. The operating system may be an Android (Android) operating system, an ios operating system, or other possible operating systems, and embodiments of the present invention are not limited in particular.
The following describes a software environment to which the information processing method provided by the embodiment of the present invention is applied, by taking an android operating system as an example.
Fig. 1 is a schematic diagram of an architecture of a possible android operating system according to an embodiment of the present invention. In fig. 1, the architecture of the android operating system includes 4 layers, which are respectively: an application layer, an application framework layer, a system runtime layer, and a kernel layer (specifically, a Linux kernel layer).
The application program layer comprises various application programs (including system application programs and third-party application programs) in an android operating system.
The application framework layer is a framework of the application, and a developer can develop some applications based on the application framework layer under the condition of complying with the development principle of the framework of the application.
The system runtime layer includes libraries (also called system libraries) and android operating system runtime environments. The library mainly provides various resources required by the android operating system. The android operating system running environment is used for providing a software environment for the android operating system.
The kernel layer is an operating system layer of an android operating system and belongs to the bottommost layer of an android operating system software layer. The kernel layer provides kernel system services and hardware-related drivers for the android operating system based on the Linux kernel.
Taking an android operating system as an example, in the embodiment of the present invention, a developer may develop a software program for implementing the information processing method provided in the embodiment of the present invention based on the system architecture of the android operating system shown in fig. 1, so that the information processing method may operate based on the android operating system shown in fig. 1. That is, the processor or the electronic device may implement the information processing method provided by the embodiment of the present invention by running the software program in the android operating system.
The electronic device in the embodiment of the invention may be a mobile terminal or a non-mobile terminal. For example, the mobile terminal may be a mobile phone, tablet computer, notebook computer, palmtop computer, vehicle-mounted terminal, wearable device, ultra-mobile personal computer (UMPC), netbook, or personal digital assistant (PDA); the non-mobile terminal may be a personal computer (PC), television (TV), teller machine, self-service machine, and the like. The embodiment of the present invention is not limited in this respect.
The execution subject of the information processing method provided in the embodiment of the present invention may be the electronic device itself, or a functional module and/or functional entity in the electronic device capable of implementing the method; this may be determined according to actual use requirements, and the embodiment of the invention is not limited. The following exemplarily describes the information processing method using an electronic device as the execution subject.
An information processing method provided by an embodiment of the present invention is exemplarily described below with reference to the drawings.
As shown in fig. 2, an embodiment of the present invention provides an information processing method, which may include steps 201 to 203 described below.
Step 201, the electronic device receives a first input of a first image and target text information input by a user under the condition of displaying a session interface.
The first input may be a selection operation on an image combined with character (text) input.
In the embodiment of the present invention, when a user converses with other users through an instant messaging application and wants to share a first image together with target text information related to it, the user may enter the first image and the target text information (i.e., the first input) to trigger the electronic device to synthesize them (image and text information together may be referred to simply as image-text information); the electronic device then combines the image and the text into a single whole and sends the result to the other users' electronic devices.
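As an illustration only (the patent does not specify an implementation, so the names below are hypothetical), the user's mixed image-and-text input can be modeled as an ordered sequence of parts:

```python
from dataclasses import dataclass

@dataclass
class MessagePart:
    kind: str      # "image" or "text"
    content: str   # a file path for an image, literal text otherwise

# Hypothetical first input, captured in the order the user entered it:
# a sentence, then a picture, then another sentence.
first_input = [
    MessagePart("text", "this is what we took when small"),
    MessagePart("image", "photo.jpg"),
    MessagePart("text", "the right boy is me"),
]
```

Preserving input order matters here because, as described later in the first implementation, it determines where each piece of text lands relative to the image in the synthesized result.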
In the embodiment of the present invention, the session interface may be an interface for two or more users to perform a session (i.e., chat) in an instant messaging application.
Optionally, in the embodiment of the present invention, the first image may be a picture, a video, or any other possible image, which may be determined specifically according to an actual use requirement, and the embodiment of the present invention is not limited.
It should be noted that the number of first images is not limited to one and may be two or more (for example, the first image may be several pictures); similarly, the target text information is not limited to one piece and may be two or more pieces. This can be determined according to actual use requirements, and the embodiment of the invention is not limited.
Step 202, the electronic device responds to the first input and displays a first interface, and the first interface comprises a first image and target text information.
In the embodiment of the present invention, in the first interface, the first image and the target text information may be different contents of one image (i.e., a second image described below). Alternatively, in the first interface, the first image and the target text information may be two independent types of information.
And step 203, the electronic equipment displays a second image in the conversation interface under the condition that a second input of the user is received, wherein the second image is an image synthesized by the first image and the target text information.
Optionally, in this embodiment of the present invention, the second input may be an input to a sending control (used to trigger sending of the second image), for example, a click input (a single or double click) or a long-press input (an input whose press duration exceeds a preset duration), or any other input meeting actual use requirements; this may be determined according to actual use requirements, and the embodiment of the present invention is not limited.
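The distinction just described between click and long-press inputs can be sketched as follows; the 0.5-second threshold is an assumed, typical value, not one stated in the patent:

```python
LONG_PRESS_THRESHOLD = 0.5  # seconds; assumed "preset duration"

def classify_press(duration_s, taps=1):
    """Map a touch on the send control to one of the second-input
    types mentioned above: single click, double click, or long press."""
    if duration_s >= LONG_PRESS_THRESHOLD:
        return "long_press"
    return "double_click" if taps == 2 else "single_click"
```

Any of the three results would count as a valid second input; the point is only that press duration and tap count are enough to tell them apart.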
The information processing method provided by the embodiment of the invention can receive the first input of the first image and the target text information input by the user under the condition of displaying the session interface; and in response to the first input, displaying a first interface, the first interface including a first image and target text information; and displaying a second image in the conversation interface under the condition that a second input of the user is received, wherein the second image is an image synthesized by the first image and the target text information. The embodiment of the invention is applied to the scene of sending the image and the text information in the multi-user conversation process, and the embodiment of the invention can send the image and the text information which are needed to be sent by the user in the multi-user conversation process by combining the image and the text information into a whole, thereby avoiding the situation that the content of the message is misinterpreted due to the separation of the image and text information in the sending process. Because the image and the text information are synthesized and sent, the image-text message is displayed integrally, and the image-text message is continuous and visual, so that the content required to be expressed by the user can be displayed better, the user can understand the message content conveniently, and the user experience is improved. Therefore, the embodiment of the invention can improve the effect of displaying the message by the electronic equipment.
A possible implementation of the information processing method provided by the embodiment of the present invention is exemplarily described below by a first implementation and a second implementation described below.
First implementation
In a first implementation manner, the session interface (shown as 30 in fig. 3) includes a first sending control (shown as 31 in fig. 3), which sends user input based on a trigger operation. The first input may include a first sub-input and a second sub-input: the first sub-input is the user entering the first image and the target text information in an input box (also referred to as an edit box, shown as 32 in fig. 3) of the session interface, and the second sub-input is the user's input on the first sending control.
Illustratively, as shown in fig. 3, the user may enter a first image and target text information in an input box 32 of the conversation interface 30 and then click on a first send control 31.
In this case, the step 202 described above can be specifically realized by the steps 202A to 202C described below.
Step 202A, the electronic device responds to the first sub-input and the second sub-input, and detects whether the information to be sent comprises image and text information.
In this embodiment of the present invention, if the electronic device detects that the information to be sent includes both image and text information, it continues with steps 202B and 202C described below. If not (for example, the information to be sent includes only one or more images, or only one or more pieces of text information), the electronic device sends the message in the conventional way (i.e., one message at a time).
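Step 202A's check, synthesizing only when both kinds of information are pending, can be sketched as follows (hypothetical representation: each part is a `(kind, content)` tuple):

```python
def needs_synthesis(parts):
    """Return True only when the pending message contains at least one
    image part and at least one text part; otherwise the device falls
    back to sending the parts one by one (the conventional way)."""
    kinds = {kind for kind, _ in parts}
    return "image" in kinds and "text" in kinds
```

For example, `needs_synthesis([("text", "hi"), ("image", "a.jpg")])` is true, while a purely textual or purely pictorial message would be sent conventionally.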
Step 202B, the electronic equipment synthesizes the first image and the target text information into a second image.
In the embodiment of the invention, under the condition that the electronic equipment detects that the information to be sent comprises the image and the text information, the electronic equipment synthesizes the first image and the target text information into the second image.
Optionally, in the embodiment of the present invention, the step 202B may be specifically implemented by the following step 202B 1.
Step 202B1, the electronic device displays the target text information in the target area in the first interface to obtain a second image.
In the embodiment of the invention, the electronic device can determine the target area according to the input sequence of the first image and the target text information. The target area may be a blank area in the first image.
Exemplarily, taking the first image as a picture, if the text information is input first and then the picture is input, the text information is placed above the picture; if the picture is input first and then the text information is input, the text information is placed below the picture.
As another example, assuming that the target text information includes first text information and second text information, as shown in fig. 3, if the user first inputs the first text information "this is what we took when small", then inputs the first image "(picture)", and then inputs the second text information "the right boy is me", the electronic device may determine the display order of the image and the text information according to the input order of the image and the text information. As shown in fig. 4, the electronic device displays the first text information 34, then displays the first image 35, and then displays the second text information 36 in the first interface 33.
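The ordering rule in the example above (text entered before the picture goes above it, text entered after goes below) can be sketched as follows, again using the hypothetical `(kind, content)` tuples:

```python
def text_positions(parts):
    """parts: (kind, content) tuples in input order, with exactly one
    image part. For each text part, report whether it is placed above
    or below the image, based purely on input order."""
    img_idx = next(i for i, (kind, _) in enumerate(parts) if kind == "image")
    return [
        ("above" if i < img_idx else "below", content)
        for i, (kind, content) in enumerate(parts)
        if kind == "text"
    ]
```

With the fig. 3 example as input, the first sentence lands above the picture and the second below it, matching the layout shown in fig. 4.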
Step 202C, the electronic device displays the second image in the first interface.
Optionally, in the embodiment of the present invention, formats of the second image and the first image may be the same or different, and may be determined specifically according to an actual use requirement, which is not limited in the embodiment of the present invention.
In the embodiment of the present invention, the electronic device may combine the first image and the target text information input by the user into the second image according to the input sequence of the first image and the target text information input by the user, with the format of the image (i.e., the first image) that the user needs to send as a reference.
In the first implementation manner, if a user enters a first image and target text information in the input box of the session interface, the electronic device detects whether the content to be sent includes both image and text information; if it does, the electronic device synthesizes the first image and the target text information into a second image (in effect packaging them together) and displays the second image on the first interface.
Further optionally, in the first implementation manner, after the step 202B, the information processing method provided in the embodiment of the present invention may further include the following step 204. Accordingly, the step 203 can be specifically realized by the step 203A described below.
Step 204, the electronic device displays target prompt information, where the prompt information includes a sending option.
Wherein the target prompt message is used for prompting the user to send the second image.
As shown in fig. 4, the electronic device displays a target prompt message 37 in the first interface 33, and the prompt message includes a sending option 38 for sending the second image to the target electronic device based on the trigger operation.
And step 203A, under the condition that second input of the sending option by the user is received, the electronic equipment displays a second image in the conversation interface.
The following describes, by taking the first image as an example, a specific implementation process of the first implementation manner described above with reference to fig. 3 and 4.
As shown in fig. 3, if a user wants to share a first image and related target text information with other users, the user may enter or edit the first image and the target text information to be sent (i.e., the first input mentioned above) in the input box 32 of the conversation interface 30 and then click the first sending control 31.
Accordingly, in response to the user's input, the electronic device may detect whether the content entered in the input box 32 contains both image and text information; if it does, the electronic device synthesizes the first image and the target text information entered by the user.
Further, as shown in fig. 4, after combining the image and the text information, the electronic device displays the second image (comprising the first image 35 and the target text information 34 and 36) on the first interface 33. If the user clicks the send option 38 in the prompt message 37, the electronic device displays the second image 39 in the conversation interface 30, as shown in fig. 5; the image and text information are thus packaged into the second image and displayed in the conversation interface of the chat group.
In the first implementation manner of the invention, the image and text information entered by the user can be merged into a single image that is then shown in the conversation interface; that is, image-text information can be displayed as one unit during a multi-user conversation, so that what the user sends remains continuous and complete and the message content is easier to understand. The embodiment of the invention can therefore improve how effectively the electronic device displays messages.
Second implementation
In the second implementation, while the conversation interface is displayed (41 in fig. 6 (a)), the user can browse a plurality of images displayed by the electronic device through an album entry in the conversation interface (42 in fig. 6 (a)), and can then select a first image from the plurality of images.
The first input includes a third sub-input, a fourth sub-input and a fifth sub-input. The third sub-input is an input by the user on the first image (the selection operation shown as 43 in fig. 6 (b)); the fourth sub-input is an input by the user on an editing control (the selection operation shown as 44 in fig. 6 (b)), where the editing control is a control that, based on a trigger operation, enables editing of the image selected by the user; and the fifth sub-input is an input by the user to enter the target text information in the first interface.
In this case, the step 202 can be specifically realized by the step 202D and the step 202E described below.
Step 202D, the electronic device responds to the third sub-input and the fourth sub-input and displays a first interface, and the first interface comprises a first image.
The first interface may be an interface capable of editing image and text information.
In the embodiment of the present invention, the electronic device may start an editing function in response to the third sub-input and the fourth sub-input, add a frame around the periphery of the selected first image (similar to a photo frame; the synthesized image may include this frame), and allow the user to edit text inside the frame.
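The photo-frame border described above amounts to enlarging the editing canvas around the image. A minimal geometry sketch, assuming a uniform margin (the margin value and function name are illustrative, not from the patent):

```python
# Illustrative geometry for the "photo frame" of step 202D: the editable
# canvas is the first image plus a uniform margin on each side.
def framed_canvas(img_w: int, img_h: int, margin: int = 80):
    """Return (canvas_w, canvas_h, image_origin) for an image centered
    inside a uniform frame that the user can write text into."""
    canvas_w = img_w + 2 * margin
    canvas_h = img_h + 2 * margin
    image_origin = (margin, margin)   # top-left corner of the pasted image
    return canvas_w, canvas_h, image_origin

w, h, origin = framed_canvas(1080, 720)   # e.g. a 1080x720 first image
```

Because the synthesized image may include the frame, the second image would be rendered at the canvas size rather than the original image size.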
Step 202E, the electronic device responds to the fifth sub-input, the first interface is updated, and the updated first interface displays the first image and the target text information.
The display position of the target text information on the first interface is the area of the first interface corresponding to the fifth sub-input. That is, the text information corresponding to the fifth sub-input may be displayed in the area where the fifth sub-input occurs.
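Anchoring the text where the fifth sub-input occurs implies a hit-test: the tap should land in the blank frame area rather than on the image itself (as in the shaded areas of fig. 7 (b)). A sketch under that assumption — coordinates, names, and the exclusion of the image region are all illustrative choices, not stated in the patent:

```python
# Illustrative hit-test for step 202E: decide whether a tap position can
# anchor the target text (blank frame area) or not (on the image / outside).
def in_frame_area(tap, image_origin, image_size, canvas_size):
    """Return True if the tap lands in the blank frame area where text may
    be placed; False if it lands on the image or outside the canvas."""
    x, y = tap
    cw, ch = canvas_size
    if not (0 <= x < cw and 0 <= y < ch):
        return False                       # outside the first interface
    ox, oy = image_origin
    iw, ih = image_size
    on_image = ox <= x < ox + iw and oy <= y < oy + ih
    return not on_image
```

With a 1080x720 image at origin (80, 80) on a 1240x880 canvas, a tap in the top margin qualifies while a tap on the photo does not.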
In a second implementation manner, a user may select a first image from a plurality of images displayed by the electronic device through an album entry in the session interface and click on the edit control, and accordingly, the electronic device responds to display of a first interface, where the first interface includes the first image. Further, the user may input the target text information for the first image in the first interface, and accordingly, the electronic device may update the first interface, where the updated first interface displays the first image and the target text information.
Further optionally, in a second implementation manner, the first interface further includes a second sending control, where the second sending control is a control that sends the edited image based on a triggering operation. In this case, the step 203 can be specifically realized by the step 203B and the step 203C described below.
Step 203B is that the electronic device synthesizes the first image and the target text information into a second image when receiving a second input of the user to the second sending control.
And step 203C, the electronic equipment displays the second image in the conversation interface.
In the embodiment of the present invention, after the user edits the text information of the first image in the first interface, if the user performs a second input (for example, a click input) on the second sending control, the electronic device synthesizes the first image and the target text information into a second image (equivalent to packaging), and displays the second image on the first interface.
Through the second implementation manner, the text information can be edited more flexibly, while complicated operations such as jumping to another application are avoided. A flexible editing function and simultaneous sending of the image-text information are both achieved, which improves the user experience.
The following describes, by taking the first image as an example, a specific implementation process of the second implementation manner described above by referring to fig. 6, fig. 7, and fig. 8.
As shown in fig. 6 (a) and (b), the user may select a first image 43 from the plurality of images displayed by the electronic device through the album entry 42 in the session interface 41 (i.e., the third sub-input described above) and click the edit control 44 (i.e., the fourth sub-input described above). As shown in fig. 7 (a), the electronic device may then display a first interface 45 in response to the third sub-input and the fourth sub-input, the first interface 45 containing a first image 46, on which the user may edit text information.
In the case where the electronic apparatus displays the first interface 45 including the first image 46 as shown in fig. 7 (a), if the user inputs the target text information in a blank area (a shaded area as shown in fig. 7 (b)) in the first interface 45, the electronic apparatus updates the first interface 45 as shown in fig. 7 (b), and the updated first interface 45 includes the first image 46 and the target text information 47 and 48.
As shown in fig. 7 (b), the first interface 45 further includes a second sending control 49, and the second sending control 49 is a control for sending the edited image based on the trigger operation. If the user clicks the second sending control, as shown in fig. 8, the electronic device synthesizes the first image and the target text information into a second image, and displays the second image 50 in the conversation interface 41, so that the image and the text information are packaged and synthesized into the second image, and the second image is displayed in the conversation interface of the chat group.
In the second implementation manner of the invention, the user can select the image and edit the text information aiming at the image, then the image and the edited text information are synthesized into a new image after the user confirms, and the new image is displayed to the user in the conversation interface, namely the image-text information can be integrally displayed in the conversation process of multiple users, so that the image-text information sent by the user in the conversation process is continuous and complete, and the user can more clearly understand the message content. Therefore, the embodiment of the invention can improve the effect of displaying the message by the electronic equipment.
Optionally, the information processing method provided in the embodiment of the present invention may further include step 205 described below.
Step 205, in the case of receiving the second input, the electronic device sends a first message to the target electronic device, where the first message includes the second image;
the conversation interface is an interface for a conversation between a user of the electronic device and a user of the target electronic device. Upon receiving the first message, the target electronic device displays the second image in its conversation interface.
With reference to the first implementation manner, if the user clicks the sending option in the prompt message in the first interface, the electronic device displays the second image in the session interface, and sends the first message to the target electronic device to instruct the target electronic device to display the second image in the session interface. Therefore, in the multi-user conversation process, the electronic equipment of each user can display the second image in real time, and the effect of displaying the message by the electronic equipment can be improved.
With reference to the second implementation manner, if the user clicks the second sending control in the first interface, the electronic device synthesizes the first image and the target text information into a second image, displays the second image in the session interface, and sends the first message to the target electronic device, so as to instruct the target electronic device to display the second image in the session interface, thereby improving the effect of displaying messages by the electronic device.
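Both implementations end the same way: on the second input, the second image is shown locally and a first message carrying it is sent to the target device(s) (step 205). The following sketch models that fan-out; the `FirstMessage` structure and all names are assumptions for illustration, not a wire format defined by the patent.

```python
# Hypothetical model of step 205: display the second image locally and queue
# one first message per target device in the session.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class FirstMessage:
    sender: str
    second_image: bytes    # the synthesized image payload

def on_second_input(second_image: bytes,
                    peers: List[str]) -> Tuple[bytes, List[FirstMessage]]:
    """Return the image to display in the local conversation interface and
    the outbox of first messages, one per target electronic device."""
    outbox = [FirstMessage(sender="me", second_image=second_image)
              for _ in peers]
    return second_image, outbox

shown, outbox = on_second_input(b"\x89PNG...", ["alice-phone", "bob-phone"])
```

Each receiving device would then display the payload of the first message in its own conversation interface, so every participant sees the same composed image.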
It should be noted that, in the embodiment of the present invention, for a single image or multiple images, either of the two implementation manners described above can be adopted to ensure that the image-text information is transmitted continuously and as a whole. The same applies to dynamic images or videos. For a dynamic image or video, the dynamic image or video and the text information can be synthesized and then transmitted in a frame-pause manner.
Specifically, while displaying the session interface, the electronic device may, in response to an input by which the user selects a plurality of images and enters text information, superimpose the plurality of images into a dynamic image and then add the text information input by the user to that dynamic image, obtaining a dynamic image with combined image-text information (i.e., a second image). Alternatively, the electronic device may first add the text information input by the user to one frame among the plurality of images and then superimpose the plurality of images, likewise obtaining a dynamic image with combined image-text information (i.e., a second image). The display effect is similar to that of a presentation (PowerPoint, PPT) slide show.
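The paragraph above describes two composition orders for dynamic images. A minimal sketch of both variants, modeling frames and text as strings (a real implementation would draw onto image buffers; the function names and the bracket notation are illustrative assumptions):

```python
# Two composition orders for a dynamic second image, as described above.
def compose_then_annotate(frames, text):
    """Variant 1: build the animation first, then attach the target text
    to the dynamic image as a whole."""
    return {"frames": list(frames), "caption": text}

def annotate_then_compose(frames, text, frame_index):
    """Variant 2: write the target text onto one chosen frame, then build
    the animation, giving a slide-show-like pause on that frame."""
    annotated = [f + f" [{text}]" if i == frame_index else f
                 for i, f in enumerate(frames)]
    return {"frames": annotated, "caption": None}
```

In variant 1 the caption travels with the whole animation; in variant 2 only the selected frame carries the text, which matches the frame-pause (PPT-like) display effect.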
The embodiment of the invention is applied to the scene of sending the image and the text information in the multi-user conversation process, and can send the image and the text information which are needed to be sent by the user in the multi-user conversation process by combining the image and the text information into a whole, thereby avoiding the situation that the content of the message is misinterpreted due to the separation of the image and text information when sending. Because the image and the text information are synthesized and sent, the image-text message is displayed integrally, and the image-text message is continuous and visual, so that the content required to be expressed by the user can be displayed better, the user can understand the message content conveniently, and the user experience is improved.
As shown in fig. 9, an embodiment of the present invention provides an electronic device 700, where the electronic device 700 may include a receiving module 701 and a display module 702;
the receiving module 701 is configured to receive a first input by which a user inputs a first image and target text information while the display module 702 displays a session interface;
the display module 702 is configured to display a first interface in response to the first input, the first interface including a first image and target text information;
the display module 702 is further configured to display a second image in the conversation interface in a case where a second input of the user is received by the receiving module 701, where the second image is an image synthesized from the first image and the target text information.
Optionally, in the embodiment of the present invention, the session interface includes a first sending control, where the first sending control is a control that sends user input information based on a trigger operation; the first input comprises a first sub-input and a second sub-input, the first sub-input is input by a user for inputting a first image and target text information in an input box of the conversation interface, and the second sub-input is input by the user for the first sending control;
the display module 702 is specifically configured to, in response to the first sub-input and the second sub-input, and when it is detected that the information to be sent includes both an image and text information, synthesize the first image and the target text information into a second image and display the second image in the first interface.
Optionally, in this embodiment of the present invention, the display module 702 is specifically configured to display the target text information in the target area in the first interface, so as to obtain the second image. The target area is an area in the first interface except the first image, and the target area is an area determined according to the input sequence of the first image and the target text information.
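The target-area rule — place the text in the region outside the first image, determined by the input order — can be sketched as a simple layout decision. A minimal illustration, assuming text typed before the image goes above it and text typed after goes below (the patent states the order dependence but not this exact placement):

```python
# Illustrative target-area layout: position each piece of target text
# relative to the first image according to the order it was entered.
def layout_second_image(items):
    """items: list of ("text", value) / ("image", value) pairs in input
    order. Return the vertical layout of the synthesized second image."""
    above, below = [], []
    seen_image = False
    for kind, value in items:
        if kind == "image":
            seen_image = True
        elif not seen_image:
            above.append(value)    # typed before the image
        else:
            below.append(value)    # typed after the image
    return {"above": above, "image": True, "below": below}
```

This reproduces the fig. 4 arrangement, where target text information 34 and 36 surround the first image 35 in the order they were entered.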
Optionally, in this embodiment of the present invention, the display module 702 is further configured to display target prompt information after the first image and the target text information are synthesized into a second image, where the target prompt information is used to prompt a user to send the second image, and the prompt information includes a sending option;
the display module 702 is further configured to display a second image in the session interface if the receiving module 701 receives a second input of the sending option from the user.
Optionally, in this embodiment of the present invention, the first input includes a third sub-input, a fourth sub-input, and a fifth sub-input, where the third sub-input is input by a user to the first image, the fourth sub-input is input by the user to an editing control, the editing control is a control for editing an image input by the user based on a trigger operation, and the fifth sub-input is input by the user to input target text information in the first interface;
the display module 702 is specifically configured to respond to the third sub-input and the fourth sub-input, and display a first interface, where the first interface includes a first image;
the display module 702 is further specifically configured to update the first interface in response to the fifth sub-input, where the updated first interface displays the first image and the target text information;
and the display position of the target text information on the first interface is an area corresponding to the fifth sub-input in the first interface.
Optionally, in this embodiment of the present invention, the first interface further includes a second sending control. In this case, the display module 702 is specifically configured to, when the receiving module 701 receives a second input to the second sending control from the user, combine the first image and the target text information into a second image, and display the second image in the conversation interface.
Optionally, in this embodiment of the present invention, with reference to fig. 9, as shown in fig. 10, the electronic device according to this embodiment of the present invention further includes a sending module 703, where the sending module 703 is configured to send a first message to the target electronic device when the receiving module 701 receives a second input, where the first message includes a second image.
The conversation interface is an interface for a conversation between a user of the electronic device and a user of the target electronic device. Upon receiving the first message, the target electronic device displays the second image in its conversation interface.
The electronic device provided by the embodiment of the present invention can implement each process implemented by the electronic device in the above method embodiments, and is not described herein again to avoid repetition.
The electronic equipment provided by the embodiment of the invention can receive the first input of the first image and the target text information input by the user under the condition of displaying the conversation interface; and in response to the first input, displaying a first interface, the first interface including a first image and target text information; and displaying a second image in the conversation interface under the condition that a second input of the user is received, wherein the second image is an image synthesized by the first image and the target text information. The electronic equipment is applied to a scene of sending image and text information in the multi-user conversation process, and the electronic equipment can be used for sending the image and the text information which are needed to be sent by a user in the multi-user conversation process by combining the image and the text information into a whole, so that the condition that the content of the message is misinterpreted due to separation in the process of sending the image-text information is avoided. Because the image and the text information are synthesized and sent, the image-text message is displayed integrally, and the image-text message is continuous and visual, so that the content required to be expressed by the user can be displayed better, the user can understand the message content conveniently, and the user experience is improved. Therefore, the electronic equipment can improve the effect of displaying the message.
Fig. 11 is a schematic diagram of a hardware structure of an electronic device implementing various embodiments of the present invention. As shown in fig. 11, the electronic device 800 includes, but is not limited to: a radio frequency unit 801, a network module 802, an audio output unit 803, an input unit 804, a sensor 805, a display unit 806, a user input unit 807, an interface unit 808, a memory 809, a processor 810, and a power supply 811. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 11 does not constitute a limitation of electronic devices, which may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the electronic device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
A user input unit 807 is configured to receive a first input of a first image and target text information by a user while the session interface is displayed. A display unit 806 is configured to display, in response to the first input received by the user input unit 807, a first interface including the first image and the target text information, and to display a second image in the conversation interface in a case where a second input of the user is received, where the second image is an image synthesized from the first image and the target text information.
The embodiment of the invention provides electronic equipment, which is applied to a scene of sending image and text information in a multi-user conversation process. Because the image and the text information are synthesized and sent, the image-text message is displayed integrally, and the image-text message is continuous and visual, so that the content required to be expressed by the user can be displayed better, the user can understand the message content conveniently, and the user experience is improved. Therefore, the electronic equipment can improve the effect of displaying the message.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 801 may be used for receiving and sending signals during a message sending and receiving process or a call process. Specifically, it receives downlink data from a base station and forwards the data to the processor 810 for processing, and transmits uplink data to the base station. In general, the radio frequency unit 801 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. Further, the radio frequency unit 801 can also communicate with a network and other devices through a wireless communication system.
The electronic device 800 provides wireless broadband internet access to the user via the network module 802, such as to assist the user in sending and receiving e-mail, browsing web pages, and accessing streaming media.
The audio output unit 803 may convert audio data received by the radio frequency unit 801 or the network module 802 or stored in the memory 809 into an audio signal and output as sound. Also, the audio output unit 803 may also provide audio output related to a specific function performed by the electronic apparatus 800 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 803 includes a speaker, a buzzer, a receiver, and the like.
The input unit 804 is used for receiving an audio or video signal. The input unit 804 may include an image capture device (e.g., a camera) 8040, a graphics processing unit (GPU) 8041, and a microphone 8042. The image capture device 8040 captures image data for still pictures or video. The graphics processor 8041 processes image data of still pictures or video obtained by the image capture device in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 806, stored in the memory 809 (or another storage medium), or transmitted via the radio frequency unit 801 or the network module 802. The microphone 8042 can receive sound and process it into audio data. In a phone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 801.
The electronic device 800 also includes at least one sensor 805, such as light sensors, motion sensors, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 8061 according to the brightness of ambient light and a proximity sensor that can turn off the display panel 8061 and/or the backlight when the electronic device 800 is moved to the ear. As one type of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of an electronic device (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 805 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 806 is used to display information input by the user or information provided to the user. The display unit 806 may include a display panel 8061, and the display panel 8061 may be configured in the form of a Liquid Crystal Display (LCD), an organic light-emitting diode (OLED), or the like.
The user input unit 807 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic apparatus. Specifically, the user input unit 807 includes a touch panel 8071 and other input devices 8072. The touch panel 8071, also referred to as a touch screen, may collect touch operations by a user on or near the touch panel 8071 (e.g., operations by a user on or near the touch panel 8071 using a finger, a stylus, or any other suitable object or accessory). The touch panel 8071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the position touched by the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 810, and receives and executes commands from the processor 810. In addition, the touch panel 8071 can be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 8071, the user input unit 807 can include other input devices 8072. In particular, other input devices 8072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
Further, the touch panel 8071 can be overlaid on the display panel 8061, and when the touch panel 8071 detects a touch operation on or near the touch panel 8071, the touch operation is transmitted to the processor 810 to determine the type of the touch event, and then the processor 810 provides a corresponding visual output on the display panel 8061 according to the type of the touch event. Although the touch panel 8071 and the display panel 8061 are shown in fig. 11 as two separate components to implement the input and output functions of the electronic device, in some embodiments, the touch panel 8071 and the display panel 8061 may be integrated to implement the input and output functions of the electronic device, and the implementation is not limited herein.
The interface unit 808 is an interface for connecting an external device to the electronic apparatus 800. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 808 may be used to receive input (e.g., data information, power, etc.) from external devices and transmit the received input to one or more elements within the electronic device 800 or may be used to transmit data between the electronic device 800 and external devices.
The memory 809 may be used to store software programs as well as various data. The memory 809 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 809 can include high speed random access memory, and can also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 810 is a control center of the electronic device, connects various parts of the whole electronic device by using various interfaces and lines, performs various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory 809 and calling data stored in the memory 809, thereby monitoring the whole electronic device. Processor 810 may include one or more processing units; optionally, the processor 810 may integrate an application processor and a modem processor, wherein the application processor mainly handles operating systems, user interfaces, application programs, and the like, and the modem processor mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 810.
The electronic device 800 may also include a power supply 811 (e.g., a battery) for powering the various components, and optionally, the power supply 811 may be logically coupled to the processor 810 via a power management system to manage charging, discharging, and power consumption management via the power management system.
In addition, the electronic device 800 includes some functional modules that are not shown, and are not described in detail herein.
Optionally, an embodiment of the present invention further provides an electronic device, which includes the processor 810 shown in fig. 11, a memory 809, and a computer program stored in the memory 809 and capable of running on the processor 810, where the computer program, when executed by the processor 810, implements each process of the information processing method embodiment, and can achieve the same technical effect, and details are not described here to avoid repetition.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the information processing method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may include a read-only memory (ROM), a Random Access Memory (RAM), a magnetic or optical disk, and the like.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling an electronic device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method disclosed in the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (14)

1. An information processing method applied to an electronic device, the method comprising:
receiving a first input of a first image and target text information input by a user under the condition that a conversation interface is displayed;
in response to the first input, displaying a first interface, the first interface including the first image and the target text information;
and under the condition that a second input of the user is received, displaying a second image in the conversation interface, wherein the second image is an image synthesized by the first image and the target text information.
2. The method according to claim 1, wherein a first sending control is included in the session interface, and the first sending control is a control for sending user input information based on a triggering operation; the first input comprises a first sub-input and a second sub-input, the first sub-input is input of a user for inputting a first image and target text information in an input box of the conversation interface, and the second sub-input is input of the user for the first sending control;
the displaying, in response to the first input, a first interface, comprising:
in response to the first sub-input and the second sub-input, under the condition that the information to be sent includes both an image and text information, synthesizing the first image and the target text information into the second image, and displaying the second image in the first interface.
3. The method of claim 2, wherein the synthesizing the first image and the target text information into the second image comprises:
displaying the target text information in a target area in the first interface to obtain the second image;
the target area is an area of the first interface except the first image, and the target area is an area determined according to the input sequence of the first image and the target text information.
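The patent does not specify an implementation of the synthesis step; as a non-authoritative illustration of claims 1 to 3, the following minimal sketch (assuming Python with the Pillow library; the function name `synthesize` and the fixed text-area height are hypothetical) composes a second image by placing the target text in a target area outside the first image, with the area chosen according to the input order:

```python
from PIL import Image, ImageDraw

def synthesize(first_image: Image.Image, target_text: str,
               text_first: bool = False, text_height: int = 60) -> Image.Image:
    """Compose a second image from the first image and the target text.

    The text is rendered in a separate target area (above the image if the
    text was input first, otherwise below it), so the target area excludes
    the first image, as described in claim 3.
    """
    w, h = first_image.size
    canvas = Image.new("RGB", (w, h + text_height), "white")
    image_y = text_height if text_first else 0   # image below or above the text area
    text_y = 0 if text_first else h              # text area never overlaps the image
    canvas.paste(first_image, (0, image_y))
    ImageDraw.Draw(canvas).text((10, text_y + 10), target_text, fill="black")
    return canvas
```

A single sending operation could then transmit the returned composite instead of a separate image message and text message.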
4. The method according to claim 2 or 3, characterized in that, after the synthesizing of the first image and the target text information into the second image, the method further comprises:
displaying target prompt information, wherein the target prompt information is used for prompting a user to send the second image, and the target prompt information comprises a sending option;
the displaying a second image in the conversation interface in case of receiving a second input of the user includes:
displaying the second image in the conversation interface in case of receiving a second input of the sending option by the user.
5. The method according to claim 1, wherein the first input comprises a third sub-input, a fourth sub-input and a fifth sub-input, the third sub-input is input by a user to the first image, the fourth sub-input is input by the user to an editing control, the editing control is a control for editing the image input by the user based on a trigger operation, and the fifth sub-input is input by the user to input target text information in the first interface;
the displaying, in response to the first input, a first interface, comprising:
displaying the first interface in response to the third sub-input and the fourth sub-input, the first interface including the first image;
responding to the fifth sub-input, updating the first interface, wherein the updated first interface displays the first image and the target text information;
and the display position of the target text information on the first interface is an area corresponding to the fifth sub-input in the first interface.
6. The method according to claim 5, wherein the first interface further comprises a second sending control;
the displaying a second image in the conversation interface in case of receiving a second input of the user includes:
and under the condition that a second input of the user to the second sending control is received, synthesizing the first image and the target text information into the second image, and displaying the second image in the conversation interface.
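For the editing-control path of claims 5 and 6, the target text is instead drawn directly at the area of the first interface corresponding to the user's fifth sub-input. A hypothetical sketch of that variant (again assuming Pillow; `synthesize_at` and its parameters are illustrative, not from the patent):

```python
from PIL import Image, ImageDraw

def synthesize_at(first_image: Image.Image, target_text: str,
                  position: tuple) -> Image.Image:
    """Draw the target text onto a copy of the first image at the position
    selected by the user, producing the second image of claims 5 and 6."""
    second_image = first_image.copy()   # leave the user's original image untouched
    ImageDraw.Draw(second_image).text(position, target_text, fill="white")
    return second_image
```

Triggering the second sending control would then call such a routine and post the result into the conversation interface as one message.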
7. An electronic device, comprising a receiving module and a display module;
the receiving module is used for receiving a first input of a first image and target text information input by a user under the condition that the display module displays a conversation interface;
the display module is used for responding to the first input and displaying a first interface, and the first interface comprises the first image and the target text information;
the display module is further configured to display a second image in the session interface when the receiving module receives a second input from the user, where the second image is an image synthesized by the first image and the target text information.
8. The electronic device of claim 7, wherein a first sending control is included in the session interface, and the first sending control is a control for sending user input information based on a triggering operation; the first input comprises a first sub-input and a second sub-input, the first sub-input is input of a user for inputting a first image and target text information in an input box of the conversation interface, and the second sub-input is input of the user for the first sending control;
the display module is specifically configured to, in response to the first sub-input and the second sub-input, synthesize the first image and the target text information into the second image and display the second image in the first interface when it is detected that the information to be sent includes an image and text information.
9. The electronic device according to claim 8, wherein the display module is specifically configured to display the target text information in a target area in the first interface, so as to obtain the second image;
the target area is an area of the first interface except the first image, and the target area is an area determined according to the input sequence of the first image and the target text information.
10. The electronic device according to claim 8 or 9, wherein the display module is further configured to display target prompt information after the first image and the target text information are synthesized into the second image, the target prompt information being used to prompt a user to send the second image and comprising a sending option;
the display module is further configured to display the second image in the session interface when the receiving module receives a second input of the sending option from the user.
11. The electronic device of claim 7, wherein the first input comprises a third sub-input, a fourth sub-input and a fifth sub-input, the third sub-input is input by a user to the first image, the fourth sub-input is input by the user to an editing control, the editing control is a control for editing the image input by the user based on a trigger operation, and the fifth sub-input is input by the user to input target text information in the first interface;
the display module is specifically configured to display the first interface in response to the third sub-input and the fourth sub-input, where the first interface includes the first image;
the display module is specifically further configured to update the first interface in response to the fifth sub-input, where the updated first interface displays the first image and the target text information;
and the display position of the target text information on the first interface is an area corresponding to the fifth sub-input in the first interface.
12. The electronic device according to claim 11, wherein the first interface further comprises a second sending control;
the display module is specifically configured to, when the receiving module receives a second input to the second sending control from the user, combine the first image and the target text information into the second image, and display the second image in the session interface.
13. An electronic device, characterized in that the electronic device comprises a processor, a memory and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the information processing method according to any one of claims 1 to 6.
14. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, which computer program, when being executed by a processor, carries out the steps of the information processing method according to any one of claims 1 to 6.
CN201911345613.8A 2019-12-24 2019-12-24 Information processing method and electronic equipment Pending CN111158817A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911345613.8A CN111158817A (en) 2019-12-24 2019-12-24 Information processing method and electronic equipment

Publications (1)

Publication Number Publication Date
CN111158817A true CN111158817A (en) 2020-05-15

Family

ID=70557898

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911345613.8A Pending CN111158817A (en) 2019-12-24 2019-12-24 Information processing method and electronic equipment

Country Status (1)

Country Link
CN (1) CN111158817A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112748844A (en) * 2020-12-31 2021-05-04 维沃移动通信有限公司 Message processing method and device and electronic equipment
CN114374761A (en) * 2022-01-10 2022-04-19 维沃移动通信有限公司 Information interaction method and device, electronic equipment and medium
CN115079889A (en) * 2022-06-10 2022-09-20 北京字跳网络技术有限公司 Information processing method, device, equipment, medium and product
WO2023046105A1 (en) * 2021-09-24 2023-03-30 维沃移动通信有限公司 Message sending method and apparatus and electronic device
WO2023051384A1 (en) * 2021-09-29 2023-04-06 维沃移动通信有限公司 Display method, information sending method, and electronic device

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6253231B1 (en) * 1998-10-07 2001-06-26 Sony Corporation System and method for incorporating image data into electronic mail documents
CN105426103A (en) * 2015-11-10 2016-03-23 网易(杭州)网络有限公司 Message editing method and device on mobile terminal
CN106155508A (en) * 2015-04-01 2016-11-23 腾讯科技(上海)有限公司 A kind of information processing method and client
CN106874249A (en) * 2017-02-09 2017-06-20 北京金山安全软件有限公司 Information display method and device and terminal equipment
CN107835117A (en) * 2017-10-19 2018-03-23 上海爱优威软件开发有限公司 A kind of instant communicating method and system
US20180137119A1 (en) * 2016-11-16 2018-05-17 Samsung Electronics Co., Ltd. Image management method and apparatus thereof
CN108055587A (en) * 2017-11-30 2018-05-18 星潮闪耀移动网络科技(中国)有限公司 Sharing method, device, mobile terminal and the storage medium of image file
CN108182041A (en) * 2017-12-18 2018-06-19 维沃移动通信有限公司 A kind of data display method and mobile terminal
CN108712322A (en) * 2018-04-27 2018-10-26 北京奇安信科技有限公司 Message treatment method and device
CN109656657A (en) * 2018-12-10 2019-04-19 珠海豹趣科技有限公司 A kind of image display method and apparatus
CN109815462A (en) * 2018-12-10 2019-05-28 维沃移动通信有限公司 A kind of document creation method and terminal device
CN110020411A (en) * 2019-03-29 2019-07-16 上海掌门科技有限公司 Graph-text content generation method and equipment
CN110580730A (en) * 2018-06-11 2019-12-17 北京搜狗科技发展有限公司 picture processing method and device

Similar Documents

Publication Publication Date Title
WO2021098678A1 (en) Screencast control method and electronic device
US20210034223A1 (en) Method for display control and mobile terminal
CN108540655B (en) Caller identification processing method and mobile terminal
CN109525874B (en) Screen capturing method and terminal equipment
CN109408168B (en) Remote interaction method and terminal equipment
CN111158817A (en) Information processing method and electronic equipment
CN109874038B (en) Terminal display method and terminal
CN108712577B (en) Call mode switching method and terminal equipment
US11658932B2 (en) Message sending method and terminal device
WO2019196691A1 (en) Keyboard interface display method and mobile terminal
CN111124245B (en) Control method and electronic equipment
CN108874352B (en) Information display method and mobile terminal
CN109032486B (en) Display control method and terminal equipment
WO2020186964A1 (en) Audio signal outputting method and terminal device
CN109710349B (en) Screen capturing method and mobile terminal
CN111782115B (en) Application program control method and device and electronic equipment
CN109412932B (en) Screen capturing method and terminal
CN110865745A (en) Screen capturing method and terminal equipment
WO2021175143A1 (en) Picture acquisition method and electronic device
CN111147919A (en) Play adjustment method, electronic equipment and computer readable storage medium
CN108600079B (en) Chat record display method and mobile terminal
CN109491634B (en) Screen display control method and terminal equipment
CN110855549A (en) Message display method and terminal equipment
CN110990172A (en) Application sharing method, first electronic device and computer-readable storage medium
WO2021082772A1 (en) Screenshot method and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination