CN113099033A - Information sending method, information sending device and electronic equipment - Google Patents

Information sending method, information sending device and electronic equipment

Info

Publication number
CN113099033A
CN113099033A
Authority
CN
China
Prior art keywords
input, target, image, keyword, information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110335661.XA
Other languages
Chinese (zh)
Inventor
张孝东
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202110335661.XA priority Critical patent/CN113099033A/en
Publication of CN113099033A publication Critical patent/CN113099033A/en
Priority to PCT/CN2022/082710 priority patent/WO2022206538A1/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/72Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M1/7243User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
    • H04M1/72439User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for image or video messaging

Abstract

The application discloses an information sending method, an information sending device and electronic equipment, belonging to the technical field of communication. The information sending method comprises the following steps: determining at least one keyword based on first information displayed in an information editing box; receiving a first input of a user; determining a first image in response to the first input; and displaying a target image, wherein the target image is an image obtained by adding a target mark at a target position of the first image, and the target position and the target mark are determined based on the at least one keyword. By determining keywords, the information sending method can automatically mark key information in an image, saving a large amount of manual operation and optimizing the user's experience of sending images.

Description

Information sending method, information sending device and electronic equipment
Technical Field
The application belongs to the technical field of communication, and particularly relates to an information sending method, an information sending device and electronic equipment.
Background
Sending images is a high-frequency scenario in information transmission. When sending an image, the following need often arises: marking a partial area of the original image to remind the receiving end to pay attention to the content of that area. In the prior art, the user must call up the image's editing interface and manually draw the mark before sending, which is a cumbersome operation.
Disclosure of Invention
The embodiments of the application aim to provide an information sending method, an information sending device and electronic equipment, which can solve the problem of cumbersome image annotation and sharing operations.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides an information sending method, where the method includes:
determining at least one keyword based on the first information displayed in the information editing box;
receiving a first input of a user;
determining a first image in response to the first input;
displaying a target image, wherein the target image is an image obtained by adding a target mark to a target position of the first image, and the target position and the target mark are determined based on the at least one keyword.
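As a non-authoritative sketch, the four steps above can be outlined as follows; all names are hypothetical, and the placeholder keyword rule merely stands in for the extraction methods described in the embodiments below:

```python
def send_information_flow(first_information: str, image_text: str):
    # Step 110: determine at least one keyword from the text in the
    # information editing box (placeholder rule: uppercase words).
    keywords = [w for w in first_information.split() if w.isupper()]
    # Steps 120/130: the first input that selects the first image is a UI
    # event; here the chosen image is represented by its recognized text.
    # Step 140: the target image adds a target mark at each position where
    # a keyword appears; (keyword, position) pairs stand in for the marks.
    return [(kw, image_text.find(kw)) for kw in keywords if kw in image_text]
```

The sketch shows only the data flow between the steps; the actual method operates on images and UI inputs rather than strings.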
In a second aspect, an embodiment of the present application provides an information sending apparatus, including:
the first determining module is used for determining at least one keyword based on the first information displayed in the information editing box;
the first receiving module is used for receiving a first input of a user;
a second determination module to determine a first image in response to the first input;
the first processing module is used for displaying a target image, wherein the target image is an image obtained by adding a target mark to a target position of the first image, and the target position and the target mark are determined based on the at least one keyword.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, and when executed by the processor, the program or instructions implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In the embodiments of the application, key information in the image can be automatically marked by determining keywords, saving a large amount of manual operation and optimizing the user's experience of sending images.
Drawings
Fig. 1 is a flowchart of an information sending method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of an interface for sending information according to an embodiment of the present disclosure;
fig. 3 is a second schematic view of an interface for sending information according to an embodiment of the present application;
fig. 4 is a third schematic view of an interface for sending information according to an embodiment of the present application;
FIG. 5 is a fourth schematic view of an interface for sending information provided by an embodiment of the present application;
FIG. 6 is a fifth schematic view of an interface for sending messages according to an embodiment of the present disclosure;
FIG. 7 is a sixth schematic view of an interface for sending messages according to an embodiment of the present disclosure;
FIG. 8 is a seventh schematic view of an interface for sending messages according to an embodiment of the present disclosure;
FIG. 9 is an eighth schematic view of an interface for sending messages according to an embodiment of the present application;
FIG. 10 is a ninth illustration of an interface for sending messages according to an embodiment of the present application;
FIG. 11 is a tenth schematic view of an interface for sending messages provided by an embodiment of the present application;
fig. 12 is a structural diagram of an information transmitting apparatus according to an embodiment of the present application;
fig. 13 is a schematic structural diagram of an electronic device provided in an embodiment of the present application;
fig. 14 is a second hardware schematic diagram of the electronic device according to the embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second" and the like in the description and claims of the present application are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the terms so used are interchangeable under appropriate circumstances, such that the embodiments of the application are capable of operation in sequences other than those illustrated or described herein. In addition, "and/or" in the specification and claims means at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the preceding and succeeding objects.
The information sending method, the information sending apparatus, the electronic device, and the readable storage medium provided in the embodiments of the present application are described in detail below with reference to the accompanying drawings by specific embodiments and application scenarios thereof.
The information sending method can be applied to the terminal, and can be specifically, but not limited to, executed by hardware or software in the terminal. The execution subject of the information transmission method may be a terminal, or a control device of the terminal, or the like.
Terminals include, but are not limited to, mobile phones or other portable communication devices such as tablets having touch sensitive surfaces (e.g., touch screen displays and/or touch pads). It should also be understood that in some embodiments, the terminal may not be a portable communication device, but rather a desktop computer having a touch-sensitive surface (e.g., a touch screen display and/or touchpad).
In the following various embodiments, a terminal including a display and a touch-sensitive surface is described. However, it should be understood that the terminal may include one or more other physical user interface devices such as a physical keyboard, mouse, and/or joystick.
The embodiment of the present application provides an information sending method, where an execution subject of the information sending method may be a terminal, including but not limited to a mobile terminal, a fixed terminal, or a control device of the terminal.
As shown in fig. 1, the information transmitting method includes: step 110, step 120, step 130 and step 140.
Step 110, determining at least one keyword based on the first information displayed in the information editing box;
it can be understood that, when the terminal works normally, the user can open a chat interactive interface of a target social Application (APP).
The target social application may be an APP with an instant messaging function, such as an instant messaging APP. The target social application may support one-to-one chat or multi-user group chat, which is not limited herein.
The chat interactive interface is distinguished according to the contact persons.
The information sending method of this embodiment can be applied to a one-to-one chat scenario. In this scenario, the chat interactive interface is a one-to-one chat interface between the user and a single contact, and the information sending method of this embodiment is used to send information to that single contact.
The information sending method of this embodiment can also be applied to a group chat scenario. In this scenario, the chat interactive interface is a group chat interface between the user and a plurality of contacts, and the information sending method of this embodiment is used to send information to the plurality of contacts.
As shown in fig. 2, the chat interactive interface may include a chat log display area and an information editing box; the chat log display area is used for displaying chat logs and contact identifiers and the like, or for displaying chat logs and group chat identifiers and the like.
The chat log may include text information, picture information, video information, audio information, document information, and the like. Contact identifications include, but are not limited to, contact avatars and contact nicknames, among others. Group chat identifiers include, but are not limited to, group chat names, graphical symbols identifying the group chat interface, contact avatars for individual contacts within the chat group, and the like.
It should be noted that the terminal may display the chat logs in the chat log display area in chronological order.
The information editing box is used for the user to input chat information. The information editing box can support the user in inputting chat information such as text, pictures, videos, voice, documents and the like. After the user inputs chat information in the information editing box and triggers the sending option of the information editing box, the terminal displays the chat information in the chat log display area for the other contacts and the user to view.
As shown in fig. 2, the information edit box may include a keyboard control area and an information display area to be sent, and the first information may be displayed in the information display area to be sent in the information edit box.
The first information may be input in a variety of ways:
first, the first information may be generated by a user through an input to the keyboard control area, where the input is mainly a touch input.
In this embodiment, a user realizes input of first information by clicking a control on a keyboard control area, and the first information is displayed in an information display area to be sent.
The input method can be five-stroke input, pinyin input (9 keys or 26 keys and the like), handwriting input and the like, and Chinese, English or other characters can be input.
Second, the first information may be generated by a user through voice input.
In this embodiment, the user speaks the information to be input, and the terminal picks up the audio through a microphone and performs semantic recognition to obtain the first information, which is then displayed in the information display area to be sent.
The first information is text information, such as "XXX" in fig. 2, where "XXX" may be a word, a phrase or a sentence, such as "location", "time" or "flower".
Keywords may be determined from the first information, and the keywords may be one or more, and the plurality includes two or more.
The keywords are used for searching corresponding target content from the first image.
When the first image includes text information, the keyword may be used to find associated text information in the first image, and the association may be an identical relationship, a superordinate-subordinate relationship or another correspondence. For example, if the first image includes text information such as "Meeting starts at 10:30 in the second-floor conference room of XX Hotel" and the keywords are "location" and "time", the corresponding target content in the first image may be "second-floor conference room of XX Hotel" and "10:30".
When the first image includes image content, the keyword may be used to find associated image content in the first image. For example, if the first image contains an apple, a banana and a person, and the keyword is "apple", the corresponding target content in the first image may be the "apple".
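The association lookup between a keyword and image text can be sketched as follows; the category patterns, the function name, and treating "time" as a superordinate keyword are illustrative assumptions, not part of the disclosed method:

```python
import re

# Hypothetical category patterns: a keyword such as "time" relates to image
# text through a superordinate relationship rather than a verbatim match.
CATEGORY_PATTERNS = {
    "time": r"\b\d{1,2}:\d{2}\b",
}

def find_target_content(keyword: str, image_text: str) -> list:
    pattern = CATEGORY_PATTERNS.get(keyword.lower())
    if pattern:                      # superordinate (category) relationship
        return re.findall(pattern, image_text)
    if keyword in image_text:        # identical (verbatim) relationship
        return [keyword]
    return []
```

A production implementation would run this over text recognized from the image (e.g. by OCR) rather than a plain string.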
The method for determining the keyword from the first information may be various, including manual selection, automatic extraction, and the like, and the method for determining the keyword is specifically described below from three different implementation angles with reference to the accompanying drawings.
One, manual selection
In this embodiment, the step 110 of determining at least one keyword based on the first information displayed in the information editing box may include:
receiving a sixth input of the first information by the user;
responding to the sixth input to obtain a candidate word;
in the above step, the sixth input is used to select a candidate word from the first information.
Wherein the sixth input may be expressed in at least one of the following ways:
first, the sixth input may be represented as a touch input, including but not limited to a click input, a slide input, a press input, and the like.
In this embodiment, the receiving of the sixth input by the user may be represented by receiving a touch operation of the user on the first information in a display area of the terminal display screen.
In the embodiment shown in fig. 2 and 3, the sixth input may include: the touch control operation of pressing the first information and the touch control operation of dragging the cursor.
In actual implementation, a user presses the first information and drags a cursor to obtain a candidate word, where "XXX" in fig. 3 is the candidate word.
The display area may also display a first target control corresponding to the candidate word.
In the embodiment shown in fig. 3, the display area may further display four controls, "select all," "copy," "cut," and "mark," where the mark is the first target control.
This operation follows the same interaction logic as conventional operations such as copy and cut, which makes it convenient and familiar for users.
Second, the sixth input may be represented as a physical key input.
In this embodiment, the body of the terminal is provided with a physical key corresponding to selection, and receiving the sixth input of the user may be expressed as receiving the user pressing the corresponding physical key; the sixth input may also be a combined operation of pressing multiple physical keys simultaneously.
Third, the sixth input may appear as a voice input.
In this embodiment, the terminal may directly determine "XXX" as a candidate word upon receiving speech such as "XXX as a candidate word".
Further, the terminal may directly determine "XXX" as a keyword upon receiving speech such as "XXX as a keyword".
Of course, in other embodiments, the sixth input may also be expressed in other forms, which may be determined according to actual needs, and this is not limited in this application.
Receiving a seventh input of the user;
in response to a seventh input, the candidate word is determined to be a keyword.
In this step, the seventh input is used to determine the candidate word as the keyword.
Wherein the seventh input may be expressed in at least one of the following ways:
first, the seventh input may be represented as a touch input, including but not limited to a click input, a slide input, a press input, and the like.
In this embodiment, the receiving of the seventh input by the user may be represented by receiving a touch operation of the user on a display area of a display screen of the terminal.
As shown in fig. 3, the seventh input may be a touch operation in which the user clicks a "mark".
After the candidate word is determined as a keyword, the keyword may be displayed in a manner different from the other content of the first information, such as a different font color or a different gray level of its region. As shown in fig. 4, the font color of "XXX" changes to gray after it is determined as a keyword.
This operation follows the same interaction logic as conventional operations such as copy and cut, which makes it convenient for users, and displaying the keyword differently serves as an indication.
Second, the seventh input may be represented as a physical key input.
In this embodiment, the body of the terminal is provided with a physical key corresponding to confirmation, and receiving the seventh input of the user may be expressed as receiving the user pressing the corresponding physical key; the seventh input may also be a combined operation of pressing multiple physical keys simultaneously.
Third, the seventh input may be represented as a voice input.
In this embodiment, the terminal may determine "XXX" as a keyword when receiving speech such as "XXX as a keyword".
Of course, in other embodiments, the seventh input may also be represented in other forms, including but not limited to character input, and the like, which may be determined according to actual needs, and this is not limited in this application.
After one keyword is determined through the steps, the actions can be repeated, and a second keyword is continuously determined.
If a target keyword needs to be cancelled, this can be achieved in a similar manner. Taking cancelling a target keyword through touch input as an example: the user clicks or presses the target keyword, four controls "select all", "copy", "cut" and "cancel mark" are displayed, and clicking "cancel mark" cancels the target keyword.
Two, automatic extraction by the terminal from first information input in a target format
In this embodiment, the step 110 of determining at least one keyword based on the first information displayed in the information editing box may include:
extracting a keyword from a target position of first information, wherein the first information is input according to a target format;
in this embodiment, the first information itself is input in the target format, so that the terminal can automatically extract the keywords.
As shown in fig. 5, if the target format is "Keyword: ___. Content: ___" and the first information is "Keyword: XXX. Content: this sentence is well written", the terminal can automatically extract the keyword "XXX".
This embodiment can also be used in scenarios with multiple keywords. For example, if the target format is "Keyword: ___, ___. Content: ___" and the first information is "Keyword: XXX, YYY. Content: this sentence is well written", the terminal can automatically extract the keywords "XXX" and "YYY".
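A minimal parser for such a target format might look like the sketch below; the English labels "Keyword"/"Content" and the separator conventions are assumptions, since the actual format is preset by the system or configured by the user:

```python
import re

# Assumed target format: "Keyword: <k1>, <k2>. Content: <text>".
FORMAT = re.compile(r"Keyword:\s*(?P<kws>.+?)\.\s*Content:\s*(?P<content>.+)",
                    re.S)

def extract_from_format(first_information: str):
    m = FORMAT.match(first_information)
    if not m:
        return [], first_information          # not in the target format
    keywords = [k.strip() for k in m.group("kws").split(",") if k.strip()]
    return keywords, m.group("content").strip()
```

Input that does not match the target format falls through unchanged, which leaves room for the other extraction embodiments.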
Correspondingly, the position where the keyword is displayed in the first image is the target position. For example, if the first image includes the text information "AAAXXXBBBYYYCCC" and the keywords are "XXX" and "YYY", the target positions are the positions where "XXX" and "YYY" are displayed, and each target position may be slightly larger than the actual display position of "XXX" or "YYY".
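The slightly enlarged target position can be sketched as a padded character span; the padding width and all names here are hypothetical, and a real implementation would pad pixel bounding boxes rather than character indices:

```python
def target_positions(image_text: str, keywords, padding: int = 1):
    """Return one (start, end) span per found keyword, enlarged by `padding`
    characters on each side and clamped to the text bounds."""
    spans = []
    for kw in keywords:
        i = image_text.find(kw)
        if i >= 0:
            spans.append((max(0, i - padding),
                          min(len(image_text), i + len(kw) + padding)))
    return spans
```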
The target format can be preset for the system or set for the user according to personal habits.
In this way, the user does not need to manually select the keywords, the keywords can be accurately located, the operation complexity for the user is greatly simplified, and the use experience is better.
Three, automatic extraction by the terminal according to a preset target rule
In this embodiment, the step 110 of determining at least one keyword based on the first information displayed in the information editing box may include:
and determining at least one keyword from the first information based on a target rule, wherein the target rule is preset.
In this embodiment, the first information has no fixed format, and the terminal determines the keyword from the first information according to the target rule.
The target rule can be set by the user as required. For example, in an administrative work scenario, the target rule may be to extract places and times as keywords; for a fruit vendor, the target rule may be to extract fruit names as keywords.
Therefore, the information sending method can automatically and accurately extract the keywords according to the scene of information sending or the identity information of the user, and flexibly realize intelligent marking.
In this embodiment, identifying keywords from the first information requires the use of semantic identification techniques.
In this way, the user can input any information as the first information, the flexibility of use is higher, the user does not need to manually select keywords, the operation complexity for the user can be greatly simplified, and the use experience is better.
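A preset target rule could be realized as a set of patterns keyed by scenario; the scenario names and regular expressions below are illustrative assumptions, and a production system would rely on semantic recognition rather than simple patterns:

```python
import re

# Illustrative preset rules keyed by scenario (all names are assumptions).
TARGET_RULES = {
    "administrative": [r"\b\d{1,2}:\d{2}\b",                 # times
                       r"\b\w+ (?:Hotel|Building|Room)\b"],  # simple places
    "fruit_vendor":   [r"\b(?:apple|banana|orange|pear)s?\b"],
}

def keywords_by_rule(first_information: str, scenario: str):
    found = []
    for pattern in TARGET_RULES.get(scenario, []):
        found.extend(re.findall(pattern, first_information, flags=re.I))
    return found
```

Keying the rules by scenario lets the same terminal extract different keywords depending on the information-sending context or the user's identity, as described above.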
Step 120, receiving a first input of a user;
step 130, responding to a first input, determining a first image;
in step 120, the first input is used to determine a first image, i.e. an image to be annotated and transmitted.
Wherein the first input may be expressed in at least one of the following ways:
first, the first input may be represented as a touch input, including but not limited to a click input, a slide input, a press input, and the like.
In this embodiment, receiving the first input of the user may be represented by receiving a touch operation of the user on a display area of a display screen of the terminal.
In order to reduce the misoperation rate of the user, the action area of the first input can be limited to a specific area, such as the upper middle area of the current display interface; or displaying a second target control on the current interface, and touching the second target control to realize the first input; or setting the first input as a continuous multi-tap operation on the display area within the target time interval.
Second, the first input may be represented as a physical key input.
In this embodiment, the body of the terminal is provided with a physical key for calling up the album, and receiving the first input of the user may be expressed as receiving the user pressing the corresponding physical key; the first input may also be a combined operation of pressing multiple physical keys simultaneously.
Third, the first input may be represented as a voice input.
In the embodiment, the terminal can trigger the display of the album preview interface when receiving voice such as "open album".
Of course, in other embodiments, the first input may also represent a combination of the above-mentioned multiple inputs, or take other forms, including but not limited to character input, etc., which may be determined according to actual needs, and this is not limited in this application.
The following describes a method for determining the first image from four different implementation angles, respectively, with reference to the accompanying drawings.
Firstly, calling out a control pointing to a target album through a keyword.
In this embodiment, step 120, receiving a first input from a user, includes: receiving a first input of a keyword by a user;
the first input may be any of the ways described in the above embodiments.
For example, for the embodiment shown in fig. 4, from the first information "XXX," which says good, "it is determined that" XXX "is the keyword, and" XXX "is displayed in gray," which says good "is displayed in black. The first input may be the area where the long press "XXX" is located.
The first input is for displaying a second control pointing to a target album.
Step 130, in response to a first input, determining a first image, comprising: displaying a second control pointing to the target album in response to the first input; receiving an eighth input of the second control by the user; in response to an eighth input, a first image is determined from the target album.
In the embodiment shown in FIG. 6, in the current chat interactive interface, a second control "album" is displayed, which may point to the target album.
There may be multiple situations for the album to which the second control may point:
firstly, pointing to the photo album where all images are located;
secondly, pointing to a preset target photo album, such as a photo album corresponding to the current APP;
third, the target album is associated with keywords.
Through semantic analysis of the keyword, the displayed second control points to the target album corresponding to the associated word.
For example, if the keyword is person-related information, the target album is the portrait sub-album.
The eighth input is used to identify the first image from the target album, and may be presented in a plurality of forms with reference to other inputs.
The following description takes the eighth input as a touch operation as an example.
The eighth input includes a touch operation of clicking the second control and a touch operation of clicking the first image in the target album.
In actual execution, the user clicks the second control, and the interface displays a target album display interface, which may include thumbnails of images; the user clicks a target thumbnail to confirm it as the first image, or clicks a confirmation control after clicking the target thumbnail to confirm it as the first image.
The mode for determining the first image is intuitive, the operability is strong, the operation freedom of the user is high, and the first image can be determined at any time according to personal needs after the keyword is determined.
Secondly, determining the first image through a control in the chat interactive interface pointing to the target album.
Step 120, receiving a first input of a user, including: receiving a first input of a user to a third control pointing to the target album;
in this embodiment, a third control, such as a "+" control, is displayed in the chat interactive interface itself that points to the target album.
The first input may be any of the ways described in the above embodiments.
For example, from the first information "XXX", which says good ", it is determined that" XXX "is a keyword, and" XXX "is displayed in gray, and" written good "is displayed in black. The first input may be clicking on the area where "+" is located.
The first input is used for displaying a target album display interface and determining a first image.
Step 130, in response to a first input, determining a first image, comprising: in response to a first input, a first image is determined from the target album.
In actual execution, the user clicks the third control, the interface displays a target album display interface, the target album display interface may include a thumbnail of an image, and the user clicks the target thumbnail to confirm the target thumbnail as the first image, or clicks the confirmation control after clicking the target thumbnail to confirm the target thumbnail as the first image.
The mode for determining the first image is intuitive, the operability is strong, the operation freedom of the user is high, and the first image can be determined at any time according to personal needs after the keyword is determined.
Third, determining the first image from an image in the chat interactive interface.
In this embodiment, step 120, receiving a first input from a user, includes: receiving a first input of a user to a second image on a current display interface;
step 130, in response to a first input, determining a first image, comprising: in response to the first input, the second image is treated as the first image.
It is understood that, if the chat log of the chat interactive interface contains an image, that image can be directly determined as the first image.
The first input may be any of the ways described in the above embodiments.
The following description will take the first input as a touch operation as an example.
The first input includes a touch operation of a user pressing and dragging an image of the chat interaction interface.
As shown in fig. 7, in actual implementation, after locating the second image in the chat log by scrolling the chat interactive interface, the user long-presses the second image and drags it into the information editing box. "Dragging into the information editing box" may mean that at least a portion of the second image overlaps at least a portion of the information editing box; the second image can then be determined as the first image.
This way of determining the first image is intuitive and easy to operate, and in a chat scene the image in the chat log does not need to be downloaded first.
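The overlap condition above ("at least a portion of the second image overlaps at least a portion of the information editing box") can be sketched as follows. This is an illustrative Python sketch only, not part of the disclosure; the rectangle representation and function names are assumptions.

```python
def rects_overlap(a, b):
    """True if two axis-aligned rectangles share any area.

    Each rectangle is (left, top, right, bottom) in screen coordinates.
    """
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]


def dropped_into_edit_box(image_rect, edit_box_rect):
    # The drag counts as "into the information editing box" as soon as
    # at least part of the image overlaps at least part of the box.
    return rects_overlap(image_rect, edit_box_rect)
```

On drag release, a check of this kind would decide whether the dragged second image becomes the first image or the drag is simply cancelled.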
Fourth, automatically displaying a fourth control pointing to the target album according to the time interval of receiving input.
In this embodiment, the method further comprises: displaying a fourth control pointing to the target album under the condition that the input of the keyword determined again by the user is not received in the target time period after the last keyword is determined;
it can be understood that, in the case that the first information is displayed in the information editing box, the user selects keywords by operation. For example, if the user does not determine a keyword again within the target time period after determining the nth keyword, this indicates that keyword selection is complete, and the current chat interactive interface automatically displays the fourth control.
The target time period may be preset by the system or by the user; for example, it may be 1 s to 3 s.
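The timeout behavior above can be sketched with plain timestamps. This is an illustrative, non-limiting Python sketch; the default value and the function name are assumptions (the document only states a 1 s to 3 s range).

```python
TARGET_PERIOD_S = 2.0  # assumed default, chosen within the stated 1 s-3 s range


def should_show_album_control(last_keyword_time, now, target_period=TARGET_PERIOD_S):
    """Decide whether to display the fourth control pointing to the target album.

    The control is shown once no new keyword has been determined for a full
    target time period after the last keyword was determined.
    """
    return (now - last_keyword_time) >= target_period
```

In practice the interface would re-evaluate this condition on a timer tick and reset `last_keyword_time` each time the user determines another keyword.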
Step 120, receiving a first input of a user, including: receiving a first input of a user to a fourth control pointing to the target album;
in this embodiment, the first input may be any of the ways described in the above embodiments.
The first input is used for displaying a target album display interface and determining a first image.
Step 130, in response to a first input, determining a first image, comprising: in response to a first input, a first image is determined from the target album.
In actual execution, the user clicks the fourth control and the interface displays a target album display interface, which may include thumbnails of images. The user clicks a target thumbnail to confirm it as the first image, or clicks a confirmation control after clicking the target thumbnail to confirm it as the first image.
Of course, after the fourth control pointing to the target album is displayed, the user can continue operating if a keyword still needs to be determined manually.
This way of determining the first image is intuitive, and the control pointing to the target album is invoked automatically.
And step 140, displaying the target image.
The target image is an image obtained by adding a target mark to the target position of the first image, and the target position and the target mark are determined based on at least one keyword.
In actual execution, the terminal searches the first image for matches with the keyword and marks the region of the found target content. Marking modes include underlining beneath the target content or enclosing it in a closed outline.
The position at which the keyword is displayed in the first image is the target position. For example, if the first image contains the text information "aaaaaaxxxbbbbbbyyyccccc" and the keyword is "XXX", the target position is the position displaying "XXX". The target position may be slightly larger than the actual display area of "XXX", so that the keyword can be marked completely without affecting the display of the keyword itself.
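The lookup of the target position can be sketched as follows, assuming the text in the first image has already been recognized into regions (for example by an OCR pass). This is an illustrative Python sketch; the data layout, the padding value, and the function name are assumptions, not part of the disclosure.

```python
def find_target_positions(regions, keyword, pad=4):
    """Locate the keyword among recognized text regions of the first image.

    regions: list of (text, (left, top, right, bottom)) pairs.
    Returns one padded box per region whose text contains the keyword; the
    padding makes the target position slightly larger than the keyword's
    actual display area, so the mark does not cover the keyword itself.
    """
    hits = []
    for text, (left, top, right, bottom) in regions:
        if keyword in text:
            hits.append((left - pad, top - pad, right + pad, bottom + pad))
    return hits
```

Each returned box is then a candidate target position at which the target mark (underline or closed outline) is drawn.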
Fig. 8 and fig. 10 show the target image; fig. 9 and fig. 11 show the chat interactive interface after the target image has been sent to the opposite terminal.
When the target image is not sent, the target image can be displayed in a plurality of display modes:
one, floating display on the chat interactive interface.
This is the case, for example, in fig. 8 and 10.
This way the overall view of the chat interactive interface by the user is not affected.
Second, display in the information editing box.
In this manner, the target image can be displayed in a thumbnail manner.
This mode does not affect the user's overall view of the chat interactive interface, facilitates operations such as deletion, and helps prevent mistaken sending.
Third, display in an independent image editing interface.
In this manner, the current interface displays an editing interface for the target image.
In this way, the target image can be displayed in a larger area as much as possible, and further editing can be conveniently realized.
Of course, the target image can be displayed in the chat interactive interface after being sent to the opposite terminal.
It should be noted that the mark may take various forms, and parameters such as its color, shape, and line width may be made adjustable. These parameters can be set independently for different marks.
There are various ways to adjust the mark parameters; two different implementation perspectives are described below.
Firstly, determining the mark characteristics corresponding to a keyword at the time the keyword is determined.
Optionally, in this embodiment, in step 110, after determining at least one keyword, the method may further include:
receiving a second input of the user to the first control corresponding to the keyword;
in response to the second input, determining a target mark associated with the keyword;
after determining the first image at step 130, and before displaying the target image at step 140, the method may further comprise:
and adding a target mark at the target position of the first image to generate a target image.
In this embodiment, the second input is used to determine the target mark, so that once the target content is located in the first image based on the keyword, it can be marked with the target mark.
The parameters of the target mark may include the color, shape, line width, etc. of the mark.
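The per-keyword, independently adjustable parameters can be sketched as a small style record. This is an illustrative Python sketch only; the defaults, field names, and function name are assumptions, not part of the disclosure.

```python
from dataclasses import dataclass


@dataclass
class MarkStyle:
    color: str = "#FF0000"   # assumed default: red
    shape: str = "box"       # e.g. "box", "underline", "outline"
    line_width: int = 2      # in pixels


# One style per keyword, so each mark's parameters adjust independently.
mark_styles = {}


def set_mark_style(keyword, **params):
    """Create or update the target-mark style associated with a keyword."""
    style = mark_styles.setdefault(keyword, MarkStyle())
    for name, value in params.items():
        setattr(style, name, value)
    return style
```

A target mark editing menu of the kind described below would simply call `set_mark_style` with whichever parameters the user changes, leaving the styles of other keywords untouched.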
In actual implementation, after a keyword is determined, the first control is displayed in the vicinity of the keyword.
The second input, like the first input, may take many forms. Taking the second input as a touch operation on the first control as an example: clicking the first control may display a target mark editing menu, in which the parameters of the target mark associated with the keyword can be determined.
Of course, in other embodiments, it may be configured that once a keyword is determined, the first control is displayed in the vicinity of the keyword, and if the second input is not received within the target time period, the first control disappears.
In this way, the first control does not interfere with reading or other operations on other information, and the keyword is then associated with the default target mark.
Secondly, adjusting the target mark after the target image is displayed.
In this embodiment, after displaying the target image, the method may further include:
receiving a third input of the target image by the user;
in response to a third input, parameters of the markers in the target image are adjusted or at least some of the markers in the target image are deleted.
The third input, like the first input, may take many forms. Taking the third input as a touch operation as an example, the user may long-press a target mark in the target image to display an editing menu, thereby editing or deleting the target mark.
In this way, the user can also correct the mark before sending the target image.
The following specifically describes the embodiments of the present application from two different implementation perspectives.
First, at least part of the first information is also displayed in the target image.
In a first embodiment, the target image comprises at least part of the first information.
After displaying the target image at step 140, the method may further include:
receiving a fourth input from the user;
and responding to the fourth input, and sending the target image to the opposite terminal.
It is understood that the target image includes: the original first image, the target mark, and at least part of the first information.
The fourth input, like the first input, may take many forms. Taking the fourth input as a touch operation as an example, clicking the send control sends the target image to the opposite terminal.
As shown in fig. 10, the first information is "XXX, this sentence is written well" and the keyword is "XXX". In the target image, "XXX" is marked with a frame and "this sentence is written well" is displayed, so at least part of the first information also serves as part of the target mark. Clicking the "send" control sends out one piece of information, the target image, as shown in fig. 11.
In a second embodiment, after determining the first image in step 130, and before displaying the target image in step 140, the method may further comprise:
under the condition that the keywords comprise a first keyword and a second keyword, determining a target position in the first image according to the first keyword, and determining a target mark according to the second keyword;
adding a target mark at a target position of the first image, wherein the target mark comprises an indicating mark for indicating the target position and a content mark containing a second keyword;
the first keywords meet the first format condition, and the second keywords meet the second format condition.
In this embodiment, the first information itself is input in the target format, so that the terminal can automatically extract the first keyword and the second keyword.
As shown in fig. 5, the target format is "keyword: …. content: …", where the first format condition is "keyword:" and the second format condition is "content:".
Then, when the first information is "keyword: XXX. content: this sentence is written well", the terminal can automatically extract the first keyword "XXX" and the second keyword "this sentence is written well".
The first keyword "XXX" is used to determine the target position in the first image, i.e., "XXX" is looked up in the first image to obtain the target position. The second keyword "this sentence is written well" determines the target mark; this may include determining attributes of the target mark, which represent its content and form.
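The automatic extraction of the two keywords from first information entered in the target format can be sketched with a simple pattern match. This is an illustrative Python sketch only; the English field labels "Keyword:" and "Content:" are assumed stand-ins for the first and second format conditions, and the function name is not part of the disclosure.

```python
import re

# Assumed English rendering of the target format: "Keyword: <first>. Content: <second>"
TARGET_FORMAT = re.compile(r"Keyword:\s*(?P<first>.+?)\.\s*Content:\s*(?P<second>.+)")


def parse_first_info(text):
    """Split first information entered in the target format into the first
    keyword (which locates the target position) and the second keyword
    (which supplies the content mark). Returns None if the format does
    not match, i.e. the first information is ordinary free-form text.
    """
    match = TARGET_FORMAT.match(text)
    if match is None:
        return None
    return match.group("first"), match.group("second")
```

If the parse succeeds, the first keyword drives the position lookup in the first image and the second keyword becomes the text of the content mark; otherwise the terminal falls back to the other keyword-determination modes.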
Taking fig. 11 as an example, the target mark includes an indication mark (the mark box around "XXX") and a content mark ("this sentence is written well").
Both of these sending modes fuse image and text, so the resulting message has stronger directivity.
Second, at least part of the first information and the target image are sent to the opposite terminal separately.
In this embodiment, the target image does not include at least part of the first information;
after displaying the target image, the method further comprises:
receiving a fifth input of the user;
in response to a fifth input, the target image and at least part of the first information are sent to the peer.
The fifth input, like the first input, may take many forms. Taking the fifth input as a touch operation as an example, clicking the "send" control sends the target image and at least part of the first information to the opposite terminal.
As shown in fig. 8, the first information is "XXX, this sentence is written well", the keyword is "XXX", and "XXX" is marked in the target image. Clicking the "send" control sends out two pieces of information, the target image and "XXX", as shown in fig. 9.
Of course, information in the first information other than the keyword may also be transmitted; for example, the two pieces of information may be the target image and "this sentence is written well".
The sending mode can clearly show the target image and the text information which the user wants to send to the opposite terminal.
Third, sending only the target image.
In this embodiment, the target image comprises only the original first image and the indicative target mark.
According to the information sending method, the key information in the image can be automatically marked by determining the key words, a large amount of manual operation is saved, and the operation experience of sending the image by a user can be optimized.
It should be noted that, in the information sending method provided in the embodiments of the present application, the execution subject may be an information sending apparatus, or a control module in the information sending apparatus for executing the information sending method. In the embodiments of the present application, the information sending method is described by taking an information sending apparatus that executes the method as an example.
The embodiment of the application also provides an information sending device.
As shown in fig. 12, the information transmitting apparatus includes: a first determination module 210, a first receiving module 220, a second determination module 230, and a first display module 240.
A first determining module 210, configured to determine at least one keyword based on the first information displayed in the information editing box;
a first receiving module 220, configured to receive a first input of a user;
a second determining module 230 for determining the first image in response to the first input;
and a first display module 240, configured to display a target image, where the target image is an image obtained by adding a target mark to a target position of the first image, and the target position and the target mark are determined based on at least one keyword.
According to the information sending device provided by the embodiment of the application, the key information in the image can be automatically marked by determining the key words, a large amount of manual operation is saved, and the operation experience of sending the image by a user can be optimized.
In some embodiments, the apparatus may further comprise:
the second receiving module is used for receiving a second input of the user to the first control corresponding to the keyword after the at least one keyword is determined;
a third determination module for determining a target mark associated with the keyword in response to the second input;
and the first processing module is used for adding the target mark at the target position of the first image after the first image is determined and before the target image is displayed, and generating the target image.
In some embodiments, the apparatus may further comprise:
a fourth determining module, configured to determine, when the keywords include a first keyword and a second keyword, a target position in the first image according to the first keyword, and determine the target mark according to the second keyword;
the first processing module is further used for adding the target mark at the target position of the first image, wherein the target mark comprises an indicating mark for indicating the target position and a content mark containing the second keyword;
the first keywords meet a first format condition, and the second keywords meet a second format condition.
In some embodiments, the apparatus may further comprise:
the third receiving module is used for receiving a third input of the target image from the user after the target image is displayed;
and the second processing module is used for responding to a third input and adjusting the parameters of the marks in the target image or deleting at least part of the marks in the target image.
In some embodiments, the target image includes first information; the apparatus may further include:
a fourth receiving module, configured to receive a fourth input of the user after the target image is displayed;
and the third processing module is used for responding to the fourth input and sending the target image to the opposite terminal.
Alternatively,
in some embodiments, the target image does not include at least part of the first information, the apparatus may further include:
a fifth receiving module, configured to receive a fifth input of the user after the target image is displayed;
a sixth processing module, configured to send the target image and at least part of the first information to the peer in response to the fifth input.
In some embodiments, the first determining module is further configured to receive a sixth input of the first information by the user; responding to the sixth input to obtain a candidate word; receiving a seventh input of the user; in response to a seventh input, determining the candidate word as a keyword;
alternatively,
the first determining module is also used for extracting keywords from the target position of the first information, and the first information is input according to a target format;
alternatively,
the first determining module is further used for determining at least one keyword from the first information based on a target rule, and the target rule is preset.
In some embodiments, the first receiving module is further configured to receive a first input of a keyword by a user;
the second determining module is also used for responding to the first input and displaying a second control pointing to the target album; receiving an eighth input of the second control by the user; in response to an eighth input, determining a first image from the target album;
alternatively,
the first receiving module is also used for receiving first input of a user to a third control pointing to the target album;
the second determining module is also used for responding to the first input and determining the first image from the target album;
alternatively,
the first receiving module is also used for receiving a first input of a user on a second image on the current display interface;
a second determining module, further configured to take the second image as the first image in response to the first input;
alternatively,
the first receiving module is further used for receiving a first input of the user to a fourth control pointing to the target album under the condition that the input of the keyword determined again by the user is not received in the target time period after the last keyword is determined;
and the second determining module is also used for responding to the first input and determining the first image from the target album.
The information transmitting apparatus in the embodiment of the present application may be an apparatus, or may be a component, an integrated circuit, or a chip in a terminal. The device can be mobile electronic equipment or non-mobile electronic equipment. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine or a self-service machine, and the like, and the embodiments of the present application are not particularly limited.
The information transmission device in the embodiment of the present application may be a device having an operating system. The operating system may be an Android (Android) operating system, an ios operating system, or other possible operating systems, and embodiments of the present application are not limited specifically.
The information sending apparatus provided in the embodiment of the present application can implement each process implemented by the information sending apparatus in the method embodiments of fig. 1 to fig. 11, and is not described here again to avoid repetition.
As shown in fig. 13, an electronic device according to an embodiment of the present application is further provided, which includes a processor 320, a memory 310, and a program or an instruction stored in the memory 310 and executable on the processor 320, where the program or the instruction is executed by the processor 320 to implement the processes of the information sending method embodiment, and can achieve the same technical effect, and no further description is provided herein to avoid repetition.
It should be noted that the electronic devices in the embodiments of the present application include the mobile electronic devices and the non-mobile electronic devices described above.
Fig. 14 is a schematic hardware structure diagram of an electronic device implementing an embodiment of the present application.
The electronic device 400 includes, but is not limited to: radio frequency unit 401, network module 402, audio output unit 403, input unit 404, sensor 405, display unit 406, user input unit 407, interface unit 408, memory 409, processor 410, and the like.
Those skilled in the art will appreciate that the electronic device 400 may further include a power source (e.g., a battery) for supplying power to various components, and the power source may be logically connected to the processor 410 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system. The electronic device structure shown in fig. 14 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than those shown, or combine some components, or arrange different components, and thus, the description is not repeated here.
The processor 410 is configured to determine at least one keyword based on the first information displayed in the information edit box;
an input unit 404 for receiving a first input of a user;
a processor 410 for determining a first image in response to a first input;
a display unit 406, configured to display a target image, where the target image is an image obtained by adding a target mark to a target position of the first image, and the target position and the target mark are determined based on at least one keyword.
According to the electronic equipment provided by the embodiment of the application, the key information in the image can be automatically marked by determining the key words, so that a large amount of manual operation is saved, and the operation experience of a user for sending the image can be optimized.
Optionally, the input unit 404 is further configured to receive, after determining at least one keyword, a second input of the user to the first control corresponding to the keyword;
a processor 410, further for determining a target mark associated with the keyword in response to a second input;
the processor 410 is further configured to, after determining the first image and before displaying the target image, add a target mark to the target position of the first image to generate the target image.
Optionally, the processor 410 is further configured to, in a case that the keywords include a first keyword and a second keyword, determine a target position in the first image according to the first keyword, and determine a target mark according to the second keyword; adding the target mark at a target position of the first image, wherein the target mark comprises an indicating mark for indicating the target position and a content mark containing the second keyword; the first keywords meet a first format condition, and the second keywords meet a second format condition.
Optionally, the input unit 404 is further configured to receive a third input of the target image from the user after the target image is displayed;
the processor 410 is further configured to adjust a parameter of the marker in the target image or delete at least a portion of the target marker in the target image in response to a third input.
Optionally, the target image comprises at least part of the first information; an input unit 404, further configured to receive a fourth input by the user after the target image is displayed;
the processor 410 is further configured to send the target image to the opposite terminal in response to the fourth input.
Optionally, the target image does not include at least part of the first information; an input unit 404, further configured to receive a fifth input from the user after the target image is displayed;
the processor 410 is further configured to send the target image and at least part of the first information to the peer in response to a fifth input.
Optionally, the processor 410 is further configured to receive a sixth input of the first information from the user; responding to the sixth input to obtain a candidate word; receiving a seventh input of the user; in response to a seventh input, determining the candidate word as a keyword;
optionally, the processor 410 is further configured to extract a keyword from a target position of the first information, where the first information is input according to a target format;
optionally, the processor 410 is further configured to determine at least one keyword from the first information based on a target rule, where the target rule is preset.
Optionally, the input unit 404 is further configured to receive a first input of a keyword by a user;
a processor 410, further configured to display, in response to the first input, a second control pointing to the target album; receiving an eighth input of the second control by the user; in response to an eighth input, determining a first image from the target album;
optionally, the input unit 404 is further configured to receive a first input of a third control pointing to the target album from the user;
a processor 410, further configured to determine a first image from the target album in response to a first input;
optionally, the input unit 404 is further configured to receive a first input of the second image on the current display interface by the user;
a processor 410, further configured to treat the second image as the first image in response to a first input;
optionally, the display unit 406 is configured to display a fourth control pointing to the target album when an input that the user determines the keyword again is not received within the target time period after the last keyword is determined;
an input unit 404, further configured to receive a first input to the fourth control by the user;
the processor 410 is further configured to determine a first image from the target album in response to the first input.
It should be noted that, in this embodiment, the electronic device 400 may implement each process in the method embodiment in this embodiment and achieve the same beneficial effects, and for avoiding repetition, details are not described here.
It should be understood that in the embodiment of the present application, the input Unit 404 may include a Graphics Processing Unit (GPU) 4041 and a microphone 4042, and the Graphics processor 4041 processes image data of a still picture or a video obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 406 may include a display panel 4061, and the display panel 4061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 407 includes a touch panel 4071 and other input devices 4072. A touch panel 4071, also referred to as a touch screen. The touch panel 4071 may include two parts, a touch detection device and a touch controller. Other input devices 4072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein. The memory 409 may be used to store software programs as well as various data including, but not limited to, application programs and an operating system. The processor 410 may integrate an application processor, which primarily handles operating systems, user interfaces, applications, etc., and a modem processor, which primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 410.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the process of the embodiment of the information sending method is implemented, and the same technical effect can be achieved, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement each process of the above embodiment of the information sending method, and can achieve the same technical effect, and in order to avoid repetition, the details are not repeated here.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved, e.g., the methods described may be performed in an order different than that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. An information transmission method, comprising:
determining at least one keyword based on the first information displayed in the information editing box;
receiving a first input of a user;
determining a first image in response to the first input;
displaying a target image, wherein the target image is an image obtained by adding a target mark to a target position of the first image, and the target position and the target mark are determined based on the at least one keyword.
2. The method according to claim 1, wherein after said determining at least one keyword, the method further comprises:
receiving a second input of the user to the first control corresponding to the keyword;
determining a target mark associated with the keyword in response to the second input;
after the determining the first image and before the displaying the target image, the method further comprises:
adding the target mark at the target position of the first image to generate the target image.
3. The information transmission method according to claim 1, wherein after the determining of the first image and before the displaying of the target image, the method further comprises:
under the condition that the keywords comprise a first keyword and a second keyword, determining a target position in the first image according to the first keyword, and determining the target mark according to the second keyword;
adding the target mark at a target position of the first image, wherein the target mark comprises an indicating mark for indicating the target position and a content mark containing the second keyword;
the first keyword meets a first format condition, and the second keyword meets a second format condition.
4. The information transmission method according to claim 1, wherein after the displaying the target image, the method further comprises:
receiving a third input of the target image by the user;
in response to the third input, adjusting a parameter of a target mark in the target image or deleting at least a portion of the target mark in the target image.
5. The information transmission method according to claim 1,
the target image comprises at least part of the first information;
after the displaying the target image, the method further comprises:
receiving a fourth input from the user;
in response to the fourth input, sending the target image to an opposite end;
alternatively,
the target image does not include at least part of the first information;
after the displaying the target image, the method further comprises:
receiving a fifth input of the user;
in response to the fifth input, sending the target image and at least part of the first information to an opposite end.
6. The information transmission method according to any one of claims 1 to 5,
the determining at least one keyword based on the first information displayed in the information editing box comprises:
receiving a sixth input of the first information by the user;
obtaining a candidate word in response to the sixth input;
receiving a seventh input of the user;
in response to the seventh input, determining the candidate word as the keyword;
alternatively,
extracting the keyword from a target position of the first information, wherein the first information is input according to a target format;
alternatively,
determining at least one keyword from the first information based on a target rule, wherein the target rule is preset.
7. The information transmission method according to any one of claims 1 to 5,
the receiving a first input of a user comprises: receiving a first input of the keyword by a user;
the determining, in response to the first input, a first image comprises: displaying a second control pointing to a target album in response to the first input; receiving an eighth input of the second control by the user; determining a first image from the target album in response to the eighth input;
alternatively,
the receiving a first input of a user comprises: receiving a first input of a user to a third control pointing to the target album;
the determining, in response to the first input, a first image comprises: determining a first image from the target album in response to the first input;
alternatively,
the receiving a first input of a user comprises: receiving a first input of a user to a second image on a current display interface;
the determining, in response to the first input, a first image comprises: in response to the first input, treating the second image as a first image;
alternatively,
the method further comprises: displaying a fourth control pointing to the target album in a case that no input of the user re-determining a keyword is received within a target time period after a last keyword is determined;
the receiving a first input of a user comprises: receiving a first input of the fourth control by a user;
the determining, in response to the first input, a first image comprises: in response to the first input, a first image is determined from the target album.
8. An information transmission apparatus, comprising:
the first determining module is used for determining at least one keyword based on the first information displayed in the information editing box;
the first receiving module is used for receiving a first input of a user;
a second determination module to determine a first image in response to the first input;
the first display module is used for displaying a target image, wherein the target image is an image obtained by adding a target mark to a target position of the first image, and the target position and the target mark are determined based on the at least one keyword.
9. An electronic device comprising a processor, a memory, and a program or instructions stored on the memory and executable on the processor, the program or instructions, when executed by the processor, implementing the steps of the information transmission method according to any one of claims 1-7.
10. A readable storage medium, on which a program or instructions are stored, which when executed by a processor, implement the steps of the information transmission method according to any one of claims 1 to 7.
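As a rough, non-authoritative illustration of the flow of claims 1 and 3, the sketch below extracts a position keyword and a content keyword from the first information and describes the resulting target image as data. The bracket/brace format conditions, the mark fields, and all names here are assumptions made for the example; the claims do not specify them.

```python
import re

def extract_keywords(first_information):
    # Assumed format conditions (the claims leave them unspecified):
    # the first keyword (target position) is wrapped in square brackets,
    # the second keyword (mark content) is wrapped in curly braces.
    position = re.search(r"\[([^\]]+)\]", first_information)
    content = re.search(r"\{([^}]+)\}", first_information)
    return (position.group(1) if position else None,
            content.group(1) if content else None)

def make_target_image(first_image, first_information):
    # Describe the target image: the first image plus one target mark,
    # i.e. an indicating mark at the target position and a content mark
    # carrying the second keyword (cf. claim 3).
    first_keyword, second_keyword = extract_keywords(first_information)
    mark = {
        "indicator": "arrow",       # indicating mark (assumed shape)
        "position": first_keyword,  # target position from the first keyword
        "text": second_keyword,     # content mark from the second keyword
    }
    return {"base_image": first_image, "marks": [mark]}

target = make_target_image("photo_001.jpg",
                           "See you at the [entrance] {meet here at 6pm}")
print(target["marks"][0]["position"])  # prints "entrance"
```

A real implementation would render the mark onto the image pixels (for example with a drawing library) rather than returning a description, but the keyword-to-mark mapping would follow the same shape.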

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110335661.XA CN113099033A (en) 2021-03-29 2021-03-29 Information sending method, information sending device and electronic equipment
PCT/CN2022/082710 WO2022206538A1 (en) 2021-03-29 2022-03-24 Information sending method, information sending apparatus, and electronic device


Publications (1)

Publication Number Publication Date
CN113099033A 2021-07-09

Family

ID=76670553

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110335661.XA Pending CN113099033A (en) 2021-03-29 2021-03-29 Information sending method, information sending device and electronic equipment

Country Status (2)

Country Link
CN (1) CN113099033A (en)
WO (1) WO2022206538A1 (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108345839A (en) * 2018-01-22 2018-07-31 维沃移动通信有限公司 A kind of method and mobile terminal of keyword positioning
CN110109607A (en) * 2019-05-10 2019-08-09 网易(杭州)网络有限公司 Information processing method and device, electronic equipment and storage medium
KR20200017263A (en) * 2018-08-08 2020-02-18 삼성전자주식회사 Electronic device for providing keywords regarding product information included in the image
CN111656438A (en) * 2018-01-26 2020-09-11 三星电子株式会社 Electronic device and control method thereof
CN112383666A (en) * 2020-11-09 2021-02-19 维沃移动通信有限公司 Content sending method and device and electronic equipment

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120151398A1 (en) * 2010-12-09 2012-06-14 Motorola Mobility, Inc. Image Tagging
JP2018170653A (en) * 2017-03-30 2018-11-01 京セラドキュメントソリューションズ株式会社 Image forming apparatus and program
CN113099033A (en) * 2021-03-29 2021-07-09 维沃移动通信有限公司 Information sending method, information sending device and electronic equipment


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022206538A1 (en) * 2021-03-29 2022-10-06 维沃移动通信有限公司 Information sending method, information sending apparatus, and electronic device
CN113593614A (en) * 2021-07-28 2021-11-02 维沃移动通信(杭州)有限公司 Image processing method and device
CN113593614B (en) * 2021-07-28 2023-12-22 维沃移动通信(杭州)有限公司 Image processing method and device

Also Published As

Publication number Publication date
WO2022206538A1 (en) 2022-10-06

Similar Documents

Publication Publication Date Title
EP3528140A1 (en) Picture processing method, device, electronic device and graphic user interface
CN113518026B (en) Message processing method and device and electronic equipment
WO2022206538A1 (en) Information sending method, information sending apparatus, and electronic device
CN112954046B (en) Information transmission method, information transmission device and electronic equipment
CN113300938A (en) Message sending method and device and electronic equipment
CN113467660A (en) Information sharing method and electronic equipment
CN114827068A (en) Message sending method and device, electronic equipment and readable storage medium
CN113849092A (en) Content sharing method and device and electronic equipment
CN112181351A (en) Voice input method and device and electronic equipment
CN113114845A (en) Notification message display method and device
CN107784037B (en) Information processing method and device, and device for information processing
WO2023284640A1 (en) Picture processing method and electronic device
CN112163432A (en) Translation method, translation device and electronic equipment
CN111428001A (en) Short message information retrieval method, device and storage medium
CN113852540B (en) Information transmission method, information transmission device and electronic equipment
WO2022228433A1 (en) Information processing method and apparatus, and electronic device
CN113315691B (en) Video processing method and device and electronic equipment
CN113593614B (en) Image processing method and device
CN113238686B (en) Document processing method and device and electronic equipment
CN115361354A (en) Message processing method and device, electronic equipment and readable storage medium
CN113783770A (en) Image sharing method, image sharing device and electronic equipment
CN113010072A (en) Searching method and device, electronic equipment and readable storage medium
CN112866469A (en) Method and device for recording call content
CN113127653A (en) Information display method and device
CN112764603A (en) Message display method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210709