CN112764601B - Information display method and device and electronic equipment - Google Patents

Information display method and device and electronic equipment

Info

Publication number
CN112764601B
CN112764601B (application number CN202011622758.0A)
Authority
CN
China
Prior art keywords
information
target
image
type
display
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011622758.0A
Other languages
Chinese (zh)
Other versions
CN112764601A (en)
Inventor
周广胜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN202011622758.0A
Publication of CN112764601A
Application granted
Publication of CN112764601B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser

Abstract

The application discloses an information display method and device and electronic equipment, and belongs to the technical field of electronics. The method comprises the following steps: receiving first information input by a user under the condition that a display interface comprises an input box, wherein the first information comprises target information; and under the condition that the first information is of a first type, displaying second information based on the target information, wherein the second information is obtained by updating the first information into a second type according to the target information. The method and the device can improve the efficiency of information conversion.

Description

Information display method and device and electronic equipment
Technical Field
The application belongs to the technical field of electronics, and particularly relates to an information display method and device and electronic equipment.
Background
With the development of electronic technology, electronic devices can implement an increasing variety of functions; for example, current electronic devices can generally perform information conversion. However, in the prior art, an electronic device can usually only convert voice information into text information and cannot convert information in other forms, so the user has to perform such conversions manually, and the efficiency of information conversion is low.
Disclosure of Invention
An embodiment of the present application provides an information display method, an information display apparatus, and an electronic device, which can solve the problem of low information conversion efficiency.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides an information display method, including:
receiving first information input by a user under the condition that a display interface comprises an input box, wherein the first information comprises target information;
and under the condition that the first information is of a first type, displaying second information based on the target information, wherein the second information is obtained by updating the first information into a second type according to the target information.
In a second aspect, an embodiment of the present application provides an information display apparatus, including:
the display device comprises a receiving module, a display module and a display module, wherein the receiving module is used for receiving first information input by a user under the condition that a display interface comprises an input box, and the first information comprises target information;
and the display module is used for displaying second information based on the target information under the condition that the first information is of a first type, wherein the second information is obtained by updating the first information into a second type according to the target information.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, and when executed by the processor, the program or instructions implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In the embodiment of the application, when the first information is of the first type, the second information is displayed based on the target information, wherein the second information is obtained by updating the first information into the second type according to the target information, so that conversion between two different types of information, namely the first information and the second information, can be realized without manual input of a user, and the information conversion efficiency is improved.
Drawings
Fig. 1 is a flowchart of an information display method according to an embodiment of the present application;
FIG. 2 is a flow chart of another information display method provided by an embodiment of the present application;
FIG. 3 is a flow chart of another information display method provided by an embodiment of the application;
FIG. 4 is a flow chart of another information display method provided by an embodiment of the present application;
FIG. 5 is a schematic diagram of different information transformations in another information display method provided by an embodiment of the present application;
fig. 6 is a schematic structural diagram of an information display device according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of another electronic device provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, of the embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second" and the like in the description and in the claims of the present application are used for distinguishing between similar objects and are not necessarily for describing a particular sequential or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances, so that the embodiments of the application may be practiced in sequences other than those illustrated or described herein. Moreover, the terms "first", "second" and the like are generally used herein in a generic sense and do not limit the number of objects; for example, the first object may be one object or more than one object. In addition, "and/or" in the specification and claims means at least one of the connected objects, and the character "/" generally indicates that the associated objects before and after it are in an "or" relationship.
The information display method, the information display device, and the electronic device provided by the embodiments of the present application are described in detail below with reference to the accompanying drawings.
Referring to fig. 1, fig. 1 is a flowchart of an information display method provided in an embodiment of the present application, and as shown in fig. 1, the method includes the following steps:
step 101, receiving first information input by a user under the condition that a display interface comprises an input box, wherein the first information comprises target information.
The specific type of the display interface is not limited herein. For example, the display interface may be a group chat interface or a single chat interface (that is, a dialogue interface with a certain interlocutor); in addition, the display interface may also be an information input interface. The input box is the location at which the first information is input.
The type of the first information is not limited herein, and for example: the first information may be text information, audio information, image information, or video information. That is to say: the type of the first information may be text, audio, image, or video.
The target information corresponds to the type (which may also be referred to as the form) of the first information; that is, when the type of the first information differs, the content of the target information also differs. For example: when the first information is text information, the target information may be keyword information or key-sentence information in the text information. When the first information is image information, the target information may be content highlighted in a preset color or at a preset size in the image information; of course, the target information may also be recognizable content in the image, such as a building or a portrait of a person.
When the first information is video information, the target information may be watermark information appearing in the video, and the watermark information may include information such as the source of the video and the ID number of the platform on which it was published. When the first information is audio information, the target information may be a key audio segment in the audio information, that is, an audio segment that summarizes the main content of the audio information.
It should be noted that the first information may include target information, that is, the target information may be a part of the first information; of course, the target information and the first information may be packaged into a whole data packet; in addition, the target information can also be read from the first information, for example: the first information is video information, and the target information can be determined by identifying watermark information on an image frame picture in the video information (i.e. the target information is determined as the watermark information).
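The dependence of the target information on the type of the first information can be pictured with a short sketch. This is illustrative only and not part of the patent disclosure; the data layout, the field names and the simple extraction rules are assumptions.

```python
# Illustrative sketch: deriving target information from first information
# according to its type. All field names and extraction rules are assumed.

def extract_target_info(first_info: dict) -> dict:
    kind = first_info["type"]  # "text" | "image" | "video" | "audio"
    if kind == "text":
        # keyword / key-sentence information in the text (crude keyword pick)
        return {"keywords": [w for w in first_info["content"].split() if len(w) > 3]}
    if kind == "image":
        # content highlighted in a preset colour or size, buildings, portraits
        return {"salient_regions": first_info.get("highlighted_regions", [])}
    if kind == "video":
        # watermark information (source of the video, platform ID number)
        return {"watermark": first_info.get("watermark", "")}
    if kind == "audio":
        # a key audio segment summarising the main content
        return {"key_segment": first_info.get("segments", [None])[0]}
    raise ValueError(f"unsupported first-information type: {kind}")
```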
And 102, under the condition that the first information is of a first type, displaying second information based on the target information, wherein the second information is obtained by updating the first information into a second type according to the target information.
It should be noted that the first type and the second type are two different types, and when the first information is converted into the second type, the information of the second type is the second information, and the first information may refer to the information of the first type.
It should be noted that, referring to fig. 5, fig. 5 may show a schematic diagram of conversion between different types of first information and second information.
In addition, the content represented by the first information and the second information may match, but the types of the two are different, for example: the first information may be text information, the second information may be image information, the first information may include text information of facial features of a certain user, and the second information may present the facial features of the user in the form of an image.
The specific manner of converting the first information into the second information of the second type based on the target information is not limited herein, for example: the second information may be retrieved from a local preset database or a server, or may be synthesized according to the first information.
As an optional implementation manner, in a case where the first type is text and the second type is image or video, the displaying the second information based on the target information includes:
receiving N first target objects which are sent by a server and are obtained by searching the target information, wherein the first target objects are first target images or first target videos;
under the condition that the matching degrees of the N first target objects and the target information are lower than a first threshold value, extracting M second target objects corresponding to the target information from a local preset database;
and performing object synthesis based on the M second target objects to obtain and display second information, wherein N and M are positive integers.
The first target image and the first target video may be results retrieved from the server according to the target information.
Here, the first target object may be a first target image or a first target video; the case in which the first target object is a first target image and the case in which it is a first target video are described separately in the corresponding implementations below.
In this embodiment, when the matching degrees of the N first target objects and the target information are all lower than the first threshold, it is described that the N first target objects do not completely express the content of the first information, and therefore, the second information may be determined by image synthesis. In this way, the integrity of the finally obtained second information can be improved.
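The retrieve-then-synthesize logic of this implementation can be summarized in the following sketch. It is a minimal illustration under assumptions: the first threshold value, the server and local-database interfaces, and the helper functions are placeholders and are not defined by the patent.

```python
# Sketch of the retrieve-then-synthesize flow for text -> image/video.
# `search_server`, `local_db`, `matching_degree`, `synthesize` and `display`
# are hypothetical callables; FIRST_THRESHOLD is an assumed value.

FIRST_THRESHOLD = 0.8

def display_second_info(target_info, search_server, local_db,
                        matching_degree, synthesize, display):
    candidates = search_server(target_info)            # N first target objects
    if any(matching_degree(c, target_info) >= FIRST_THRESHOLD for c in candidates):
        # at least one retrieved object is good enough: show the best one
        best = max(candidates, key=lambda c: matching_degree(c, target_info))
        display(best)
        return
    # every matching degree is below the first threshold: synthesize locally
    local_objects = local_db.extract(target_info)      # M second target objects
    display(synthesize(local_objects))
```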
As an optional implementation manner, in a case that the first type is a text and the second type is an image, the displaying the second information based on the target information includes:
receiving N first target images which are sent by a server and are obtained by searching the target information;
under the condition that the matching degrees of the N first target images and the target information are lower than a first threshold value, extracting M second target images corresponding to the target information from a local preset database;
and performing image synthesis based on the M second target images to obtain and display second information, wherein N and M are positive integers.
That is to say: the first target object and the second target object in the above embodiments may be the first target image and the second target image, respectively, in the present embodiment.
When the first type is text and the second type is an image, the target information may be keyword information or key-sentence information in the text information, and a search may be performed on the server based on the target information to determine a first target image matching the target information.
The matching degree between the first target image and the target information may refer to the degree of coincidence between the content included in the first target image and the content of the target information. For example: if the target information includes the two texts "a" and "b" but the content of the first target image includes only "a", the matching degree of the first target image with the target information may be 50%.
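Read this way, the matching degree is simply the fraction of target-information items covered by a candidate. A toy illustration (not from the patent; the set-based representation is an assumption):

```python
def matching_degree(candidate_contents: set, target_items: set) -> float:
    """Fraction of target-information items covered by the candidate."""
    if not target_items:
        return 0.0
    return len(candidate_contents & target_items) / len(target_items)

# Worked example from the text: target information {"a", "b"}, image shows only "a"
assert matching_degree({"a"}, {"a", "b"}) == 0.5   # 50 %
```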
When the matching degree of the N first target images and the target information is lower than the first threshold, it is indicated that the N first target images cannot completely express the content of the first information, and therefore, the second information can be determined in an image synthesis manner.
And M second target images may be understood as images necessary for synthesizing the second information, for example: the content of the second information may include "a building" and "B user", two second target images may be obtained, where one second target image includes "a building" and the other second target image includes "B user", respectively, so that the two second target images are synthesized to obtain second information that includes both "a building" and "B user", so that the second information is relatively complete.
In this embodiment, the content of the first information cannot be completely expressed in the N first target images, and image synthesis may be performed based on the M second target images to obtain the second information, so that the second information can more completely express the content of the first information, and the integrity of information conversion is improved.
In addition, the local preset database may refer to a local database included in the electronic device, and may store a plurality of second target images, and certainly, the local preset database may also include other types of images, and at the same time, may also include other types of information, which is not limited herein. The local preset database may also be referred to as a local stories library.
In addition, as an optional implementation manner, when the matching degree between at least a part of the N first target images and the target information is higher than the first threshold, the first target image with the highest matching degree with the target information in the at least a part of the first target images may be determined as the second information.
As an optional implementation manner, the performing object synthesis based on the M second target objects, and obtaining and displaying second information includes:
synthesizing according to the M second target objects to obtain first synthesis information;
determining the first synthetic information as the second information and displaying the second information when the conversion rate of the first synthetic information is greater than or equal to a second threshold, wherein the conversion rate is the ratio of the content of the first synthetic information to the content of the first information;
and displaying first prompt information under the condition that the conversion rate of the first synthesis information is smaller than a second threshold, wherein the first prompt information is used for prompting that the object synthesis cannot be carried out according to the M second target objects.
When the second target object is a second target image and a second target video, reference may be made to corresponding embodiments hereinafter.
In this embodiment, whether the conversion rate of the first synthesized information is greater than or equal to the second threshold is determined, and the first synthesized information greater than or equal to the second threshold is determined as the second information, so that the second information can more completely express the content of the first information, and omission of the content of the first information is reduced.
As an optional implementation manner, in a case that the second target object is a second target image, the performing object synthesis based on the M second target objects to obtain and display second information includes:
synthesizing according to the M second target images to obtain first synthesis information;
determining the first synthetic information as the second information and displaying the second information when the conversion rate of the first synthetic information is greater than or equal to a second threshold, wherein the conversion rate is the ratio of the content of the first synthetic information to the content of the first information;
and displaying first prompt information under the condition that the conversion rate of the first synthesis information is smaller than a second threshold, wherein the first prompt information is used for prompting that image synthesis cannot be performed according to the M second target images.
Wherein the conversion is illustrated by an example, such as: the first information is text information, the first synthetic information is image information, the first information comprises two texts of 'A building' and 'B user', and the image shown by the first synthetic information only comprises 'A building', so that the conversion rate of the first synthetic information is 50%.
When the conversion rate of the first synthetic information is greater than or equal to the second threshold, it indicates that the conversion rate of the first synthetic information is higher, the content of the first information can be expressed more completely, and the omission of the content in the first information by the first synthetic information is less, and at this time, the first synthetic information can be determined as the second information.
In contrast, when the conversion rate of the first synthesis information is lower than the second threshold, the first synthesis information cannot completely express the content of the first information and omits a large part of it. In this case, the first prompt information may be displayed to prompt the user that image synthesis cannot be performed according to the M second target images; that is to say, the first prompt information can prompt the user that the conversion has failed or cannot be performed, so that the user can adjust the content of the first information or of the target information to improve the conversion rate of the final second information. Of course, the first prompt information may also prompt the user to abandon the conversion.
In this embodiment, whether the conversion rate of the first synthesized information is greater than or equal to the second threshold is determined, and the first synthesized information greater than or equal to the second threshold is determined as the second information, so that the second information can more completely express the content of the first information, and omission of the content of the first information is reduced. Meanwhile, when the conversion rate of the first synthesized information is less than the second threshold, the first prompt information may be displayed so that the user may adjust the content of the first information or the target information, or the user may be caused to abandon the conversion.
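The conversion-rate check described in this implementation could look roughly as follows. The second threshold value and the content-counting helper are assumptions introduced for illustration.

```python
# Sketch of the conversion-rate decision. `content_items` is a hypothetical
# helper returning the set of content elements carried by a piece of
# information; SECOND_THRESHOLD is an assumed value.

SECOND_THRESHOLD = 0.7

def check_first_synthesis(first_synth, first_info, content_items, display, prompt):
    covered = content_items(first_synth) & content_items(first_info)
    conversion_rate = len(covered) / max(1, len(content_items(first_info)))
    if conversion_rate >= SECOND_THRESHOLD:
        display(first_synth, conversion_rate)   # first synthesis info becomes the second info
    else:
        # first prompt information: synthesis is not possible from the local objects
        prompt("Cannot synthesize from the local target objects; "
               "adjust the input or abandon the conversion.")
```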
As an optional implementation manner, when the first type is a text and the second type is a video, in the case that the first information is the first type, the displaying the second information based on the target information includes:
receiving N first target videos which are sent by a server and are obtained by searching the target information;
under the condition that the matching degrees of the N first target videos and the target information are lower than a third threshold value, extracting M second target videos corresponding to the target information from a local preset database;
and performing video synthesis based on the M second target videos to obtain second information and displaying the second information, wherein N and M are positive integers.
That is to say: the first target object in the above embodiment may be the first target video in the present embodiment, and the second target object may be the second target video in the present embodiment.
As an optional implementation manner, in a case that the second target object is a second target video, the performing object composition based on the M second target objects to obtain and display second information includes:
synthesizing according to the M second target videos to obtain second synthesis information;
determining the second synthetic information as the second information and displaying the second information when the conversion rate of the second synthetic information is greater than or equal to a fourth threshold, wherein the conversion rate is the ratio of the content of the second synthetic information to the content of the first information;
and displaying second prompt information under the condition that the conversion rate of the second synthesis information is smaller than a fourth threshold, wherein the second prompt information is used for prompting that video synthesis cannot be performed according to the M second target videos.
It should be noted that the two implementations in which the first type is text and the second type is video correspond, respectively, to the two implementations in which the first type is text and the second type is an image. For their details, reference may be made to the corresponding descriptions of the text-to-image case; they have the same beneficial technical effects, which are not repeated here.
The only difference between the two implementations in which the second type is video and the two implementations in which the second type is an image lies in the second type itself: one is an image and the other is a video. Therefore, the images in the two text-to-image implementations can be replaced correspondingly by videos to obtain the two implementations in which the first type is text and the second type is video.
In addition, as an optional implementation, the method further includes:
in the case that N first target videos are obtained through retrieval and the matching degree of at least part of the N first target videos with the target information is higher than or equal to a third threshold, a retrieval may be performed in the application store server;
determining the at least part of the first target video as second information, and determining a target application program which can provide the at least part of the first target video;
the displaying the second information based on the target information includes:
and displaying the second information based on the target information, and displaying the information of the target application program.
Therefore, the user can determine the second information and can also quickly determine which application programs can provide the second information, and the target application program can be selected to be downloaded under the condition that the target application program is not downloaded, so that the second information can be acquired.
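A possible sketch of this behavior: when at least one retrieved video matches well enough, it is shown as the second information together with the applications that can provide it. The application-store interface and the third threshold value are assumptions, not APIs defined by the patent.

```python
# Sketch: display the best-matching video plus the target applications that can
# provide it. `app_store.search_providers` is a hypothetical interface.

THIRD_THRESHOLD = 0.8

def show_video_with_providers(candidates, target_info, matching_degree, app_store, display):
    good = [v for v in candidates
            if matching_degree(v, target_info) >= THIRD_THRESHOLD]
    if not good:
        return False                                      # fall back to synthesis elsewhere
    second_info = max(good, key=lambda v: matching_degree(v, target_info))
    providers = app_store.search_providers(second_info)   # target application programs
    display(second_info, providers)
    return True
```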
It should be noted that, in each of the above implementations, both the matching degree and the conversion rate may be displayed on the electronic device, so that the user can observe the matching degree and the conversion rate of each piece of information more intuitively.
As an optional implementation manner, when the first type is a text and the second type is an image, the displaying the second information based on the target information includes:
receiving N first target images which are sent by a server and are obtained by searching the target information; acquiring K third target images obtained by local retrieval in the electronic equipment;
and determining the image with the highest matching degree with the target information in the N first target images and the K third target images as second information, and displaying the second information, wherein N and K are positive integers.
Therefore, the second information can be made to express the content of the first information more completely.
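A minimal sketch of this selection, assuming both result sets expose the same matching-degree measure; all names are illustrative.

```python
def pick_second_info(server_images, local_images, target_info, matching_degree):
    """Choose, among the N server images and the K locally retrieved images,
    the one with the highest matching degree with the target information."""
    pool = list(server_images) + list(local_images)   # N + K candidates
    return max(pool, key=lambda img: matching_degree(img, target_info), default=None)
```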
Note that the present embodiment can also be applied to a case where the second type is a video, that is, only the image in the above embodiment needs to be replaced with a video.
It should be noted that, in the case that the first type may be text, the second type may be video, audio or image; in the case where the first type is video, audio, or image, the second type may be text.
The following three embodiments illustrate the processing flow when the first type is text and the second type is image, video, or audio, respectively.
In the first embodiment, when the first type is text and the second type is an image, the keyword information (namely, the target information) extracted from the text information is searched for on the network, and the search results are displayed for multiple selection. When the retrieval matching degree is low, an automatic tool is used to synthesize the image information implied by the text information, such as a flow chart, a landscape painting, or a portrait. If the conversion rate is below the threshold, the user is prompted that the conversion has failed.
In the second embodiment, when the first type is text and the second type is video, the text is first interpreted by artificial intelligence; then, according to the keyword information (namely, the target information) extracted from the text, a secondary search is performed in the application store for video sources with a high matching degree, and the information of the applications in the application store that can provide the video is obtained (the video can be sent together with this application information). When the retrieval matching degree is low, a video effect diagram is synthesized instead; for example, for the operation of binding a mobile phone number in a mailbox, the operation of binding a bank card in the wallet of an application program, or the recent change in weather conditions, the relevant operation flows are searched after the content of the text has been interpreted by artificial intelligence, and a video effect diagram is synthesized. If the conversion rate is below the threshold, the user is prompted that the conversion has failed.
In the third embodiment, when the first type is text and the second type is audio, a matching information base (which may be local to the electronic device or on a server) is searched according to the first information. If a characteristic value (i.e., the target information, which may include a song title, lyrics, etc.) is matched, the audio information is obtained through a feasible channel such as the local song library, a local application program, or the internet; if the matching rate is not high, the first information is directly converted into robot-read audio corresponding to its content.
As an optional implementation manner, in the case that the first type is an image and the second type is a video, the displaying the second information based on the target information includes:
under the condition that the image is a first type image, recognizing and displaying text information of the first type image; at this time, the second information may be understood as: text information of the first type of image.
Identifying the film and television information corresponding to a second type image in the case that the image is the second type image, and displaying a download path corresponding to the film and television information based on that information; at this time, the second information may be understood as: the download path corresponding to the film and television information.
And in the case that the image is a third type image, extracting the editing mode of the third type image and displaying the application program corresponding to the editing mode; at this time, the second information may be understood as: the application program corresponding to the editing mode.
Wherein, the first type image, the second type image and the third type image can be different type images respectively, for example: the first type of image may be a screenshot of the electronic device, the second type of image may be a movie video frame image, and the third type of image may be a composite image.
In this way, different operations can be performed according to different specific types of images (namely, the first type image, the second type image or the third type image), so as to determine different types of second information, thereby improving the diversity and flexibility of the second information determination mode, and simultaneously improving the diversity of the types of the second information.
In addition, the text information of the first type image may be at least part of the text information included in the first type image; the download path corresponding to the film and television information may refer to a path through which that content can be obtained; and the application program corresponding to the editing mode may refer to an editing application program capable of providing the corresponding editing mode.
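The three-way handling of image subtypes can be pictured with the sketch below. It is illustrative only; the subtype labels, recognizers, and store lookups are assumed placeholders.

```python
# Sketch: dispatch on the image subtype (screenshot / film-TV frame / composite).
# `ocr`, `recognise_movie`, `extract_edit_style` and `app_store` are hypothetical.

def handle_image(image, ocr, recognise_movie, extract_edit_style, app_store, display):
    subtype = image["subtype"]
    if subtype == "screenshot":                    # first type image
        display(ocr(image))                        # recognised text information
    elif subtype == "film_frame":                  # second type image
        movie = recognise_movie(image)             # film and television information
        display(app_store.download_path(movie))    # download path for that content
    elif subtype == "composite":                   # third type image
        style = extract_edit_style(image)          # filter / special effect / crop, etc.
        display(app_store.apps_providing(style))   # editing apps offering that mode
```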
As an optional implementation manner, in the case that the first type is an image and the second type is a text;
in the case that the display interface includes an input box, receiving first information input by a user, including:
receiving first information input by a user, and determining the type of the first information;
and determining the target information according to the type of the first information.
In this way, determining the target information according to the type of the first information (image information in this case) can improve the diversity of the target information determination manner and the accuracy of the determination result.
The first information can be divided into a screenshot of the electronic device, a video frame image, a composite image and an image shot by the electronic device according to different types of the first information. It should be noted that, if the types of the first information are different, the target information corresponding to the first information may also be different.
For example: as an optional implementation, in the case that the type of the first information is a screenshot of the electronic device, the target information may be first target content in the screenshot, and the display manner of the first target content may include at least one of the following: display in a preset color, display at a preset size, and display at a preset position.
The specific content of the first target content is not limited herein, for example: the first target content may be text information included in the screenshot, such as a phrase or a sentence.
That is to say: the target information may be a text with a high conversion requirement in the screenshot, such as a text marked by a red line and located in the middle of the screenshot, or a phrase or a sentence with keyword information.
In addition, after the target information is identified, the first information may be converted into the second type based on the target information to obtain the second information; that is, the text with a high conversion requirement in the screenshot, or all of the text in the screenshot, is recognized, directly converted from the screenshot (which belongs to image information) into text information, and displayed.
That is to say: at least part of the first information may be converted into the second information. When only partial information of the first information is converted, it may be understood that the target information is converted into the second information, and when all information of the first information is converted, it may be understood that the first information is entirely converted into the second information.
The present embodiment can be applied to an embodiment in which the first type is an image and the second type is a video. Details are not repeated.
As another alternative, in the case where the type of the first information is a video frame image, the target information may be a feature image (for example, a building, a portrait of a person, or a landscape) in the video frame image.
After the feature image is recognized, an image having a high degree of matching with the feature image may be retrieved from a server (which may also be understood as the internet), a path that can provide the image may be retrieved from the server corresponding to an application store, and text information corresponding to the path may be displayed.
As another alternative, when the type of the first information is a composite image, the target information may be a processing mode adopted by the composite image, for example: and at least one of the types of filters added to the synthesized image, special effect processing modes, splicing modes, cutting modes and the like.
After the processing method is determined, an acquisition route of an application program that can implement the processing method may be searched for in the application store server, and text information corresponding to the route may be displayed.
As another alternative, when the type of the first information is an image captured by the electronic device, the target information may be a portrait of a person and a landmark building included in the image, and of course, the target information may also be a capturing location of the image.
In addition, after the target information is determined, an image including information such as a person portrait, a landmark building, and a shooting location may be retrieved in association with the target information, and an acquisition path of the image including the information such as the person portrait, the landmark building, and the shooting location may be displayed.
It should be noted that the above specific embodiments may also be applied to the embodiments in which the first type is an image and the second type is a video. Details are not repeated.
As an optional implementation manner, in a case where the first type is a video and the second type is a text, the target information includes identification information for marking a provenance of the video information.
This embodiment can be applied to extracting identification information from the video information; the identification information may be watermark information indicating, for example, the source of the video information or the ID number of the playing platform. A search is then performed in the application store server according to the identification information, and text information describing the path through which the video information can be provided is displayed.
Thus, the embodiment can convert the video information into the text information, thereby further improving the diversity of different types of information conversion modes.
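A compact sketch of this video-to-text case, with the watermark reader and the store lookup as assumed helpers:

```python
def video_to_text(video, read_watermark, app_store, display):
    # identification information: e.g. the source of the video or the platform ID number
    identification = read_watermark(video)
    # paths through which the video can be obtained, found in the app store server
    paths = app_store.search(identification)
    display("\n".join(paths))        # shown to the user as text information
```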
As an optional implementation, in the case that the first type is audio and the second type is text; the method further comprises the following steps:
converting the first information into target text information, and determining the target information of the target text information;
the displaying of the second information based on the target information includes:
retrieving the target information;
and determining the detected retrieval information and the source information corresponding to the retrieval information as the second information, and displaying the second information.
The first information is audio information, and the target text information is obtained by converting the first information, so that the target text information can be understood as text information corresponding to the content of the audio information, and the target information can be part of information in the target text information, for example: the target text information is "i like to play badminton", and the target information may include "badminton".
The source information corresponding to the retrieval information may include at least one of a network link and a hyperlink.
Thus, the embodiment can realize the conversion of the audio information into the text information, thereby further improving the diversity of conversion modes between different types of information.
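The audio-to-text flow can be sketched as follows; the speech-recognition, keyword-extraction, and retrieval calls are placeholders rather than interfaces defined by the patent.

```python
def audio_to_text(audio, transcribe, pick_keywords, retrieve, display):
    target_text = transcribe(audio)           # e.g. "I like to play badminton"
    target_info = pick_keywords(target_text)  # e.g. ["badminton"]
    results = retrieve(target_info)           # detected retrieval information
    # second information: each result together with its source (network link / hyperlink)
    display([(r["text"], r["source_link"]) for r in results])
```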
In the embodiment of the present application, the electronic Device may be a Mobile phone, a Tablet Personal Computer (Tablet Personal Computer), a Laptop Computer (Laptop Computer), a Personal Digital Assistant (PDA), a Mobile Internet Device (MID), a Wearable Device (Wearable Device), or the like.
In the embodiment of the present application, through steps 101 to 102, when the first information is of the first type, the second information is displayed based on the target information, where the second information is obtained by updating the first information to the second type according to the target information, and thus, conversion between two different types of information, namely the first information and the second information, can be achieved without manual input by a user, and thus, the efficiency of information conversion is improved.
This embodiment is illustrated below in three embodiments:
example one
The first type in this embodiment is text, and the second type is image.
Step 201, extracting keyword information (namely target information) through artificial intelligence paraphrasing text information;
step 202, searching in an internet database (which can be understood as a server) according to the keyword information, and selecting image information (which can be understood as a first target image) with the highest matching degree, wherein the matching degree is recorded as M1;
step 203, continuously searching keyword information in a local database (local electronic equipment), selecting picture information with the highest matching degree, and recording the matching degree as M2;
step 204, taking M = Max(M1, M2); if M (which can be understood as the matching degree between the first target image and the target information) is smaller than the matching success degree M0 (which can be understood as the first threshold), step 205 is executed; if M is greater than M0, step 206 is executed;
step 205, trying to synthesize image information corresponding to the text information through a material base (which can be understood as a local preset database) for the keyword information of the text information, determining the synthesis conversion degree T (which can be understood as a conversion rate) according to the ratio of the processed text to the total text after synthesis is completed, executing step 206 if the conversion degree is higher than T0 (which can be understood as a second threshold), and executing step 207 if the conversion degree is lower than T0.
And step 206, the conversion is successful, and the final conversion result and M or T are displayed to the input party for selection processing.
And step 207, prompting that the conversion fails, and informing the user to adjust the input content to continue to repeat the conversion operation or abandon the conversion.
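Steps 201 to 207 can be condensed into the following sketch. The threshold values M0 and T0 and every helper function are assumptions introduced for illustration, not values or interfaces given by the patent.

```python
# Sketch of Example one (text -> image). All helpers are hypothetical.

def text_to_image(text, extract_keywords, search_internet, search_local,
                  synthesize_from_materials, coverage, display, prompt,
                  M0=0.8, T0=0.7):
    keywords = extract_keywords(text)                     # step 201: target information
    img1, M1 = search_internet(keywords)                  # step 202: best server result
    img2, M2 = search_local(keywords)                     # step 203: best local result
    best_img, M = max([(img1, M1), (img2, M2)], key=lambda p: p[1])  # step 204
    if M >= M0:
        display(best_img, score=M)                        # step 206
        return
    synth = synthesize_from_materials(keywords)           # step 205: material-library synthesis
    T = coverage(synth, text)                             # processed text / total text
    if T >= T0:
        display(synth, score=T)                           # step 206
    else:
        prompt("Conversion failed: adjust the input or abandon the conversion.")  # step 207
```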
Example two
In this embodiment, the first type is text, and the second type is video.
step 301, extracting keyword information (namely, the target information) by paraphrasing the text information through artificial intelligence;
step 302, searching in an internet database (which can be understood as a server) according to the keyword information, and selecting video information (which can be understood as a first target video) with the highest matching degree, wherein the matching degree is recorded as M1;
step 303, continuing to search the video in the local machine (which can be understood as the local electronic device), selecting the video information with the highest matching degree, and recording the matching degree as M2;
step 304, taking M = Max(M1, M2); if M is less than M0, executing step 305, otherwise executing step 306, where M may be understood as the matching degree between the first target video and the target information, and M0 may be understood as the third threshold;
step 305, attempting to perform animation synthesis on the extracted keyword information by using material resources (which can be understood as a second target video), calculating a conversion rate T according to the proportion of the processed text to the total text after the synthesis processing is finished, if T > T0 (which can be understood as a fourth threshold), executing step 307, otherwise, executing step 308;
step 306, providing the video which is successfully searched and path information (namely, second information) of the relevant video to the input party for browsing or processing;
step 307, providing the successfully converted video file to an input party for browsing or processing;
and step 308, the operation of converting the text into the video fails, and the user is reminded to modify the text or abandon the conversion.
Example three
In this embodiment, the first type is an image, and the second type is a text.
step 401, judging the type of the image (which can be understood as the first information) according to a preset criterion;
step 402, if the image is a terminal screenshot, identifying key character information (namely target information), converting the key character information into text information (namely second information), and carrying out the next step; if the image is not the terminal screenshot, directly executing step 403;
step 403, if the image is identified as a video image frame type, retrieving a network database (i.e., a server), acquiring the provider source, acquiring the corresponding download path (i.e., the second information) from an application store, and entering the next step; if not, go directly to step 404;
step 404, if the image is identified as a composite image type, analyzing and extracting the image processing mode, searching the application store server for a path to an application program that can provide the corresponding image processing mode, and entering the next step; if not, go directly to step 405;
step 405, if the image is identified as the photographing type, executing step 406; if not, go to step 408;
step 406, acquiring detailed image information (i.e., target information, which may be time and place information), and converting the recognized person portrait, landmark building, famous scenery spot and the like into text information (i.e., second information) according to the database;
step 407, after the conversion is successful, comprehensively arranging the converted text information, and displaying the text information to an input party for processing;
step 408, judging whether the conversion information is generated or not, and entering step 409 if the conversion information is not generated;
in step 409, the operation of converting the image into the text fails, and the user is reminded to modify the image or abandon the conversion.
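Steps 401 to 409 amount to a chain of type judgements. A compact sketch follows; every recognizer and lookup is an assumed placeholder.

```python
# Sketch of Example three (image -> text). Requires Python 3.8+ for the walrus operator.

def image_to_text(image, is_screenshot, ocr, match_video_frame, app_store,
                  detect_composite, extract_edit_style, is_photo, describe_scene,
                  display, prompt):
    if is_screenshot(image):                               # step 402: terminal screenshot
        display(ocr(image))                                # key characters as text
    elif (movie := match_video_frame(image)) is not None:  # step 403: video image frame
        display(app_store.download_path(movie))
    elif detect_composite(image):                          # step 404: composite image
        display(app_store.apps_providing(extract_edit_style(image)))
    elif is_photo(image):                                  # steps 405-407: photographed image
        display(describe_scene(image))                     # portraits, landmarks, time and place
    else:                                                  # steps 408-409: no conversion produced
        prompt("Conversion of the image into text failed: modify the image or abandon the conversion.")
```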
It should be noted that, in the information display method provided in the embodiments of the present application, the execution subject may be an information display apparatus, or a control module in the information display apparatus for executing the information display method. In the embodiments of the present application, the information display apparatus provided in the embodiments of the present application is described by taking an information display apparatus executing the information display method as an example.
Referring to fig. 6, fig. 6 is a schematic structural diagram of an information display device according to an embodiment of the present application, and as shown in fig. 6, an information display device 600 includes:
the receiving module 601 is configured to receive first information input by a user under the condition that a display interface includes an input box, where the first information includes target information;
a displaying module 602, configured to display second information based on the target information when the first information is of a first type, where the second information is obtained by updating the first information to a second type according to the target information.
Optionally, in a case where the first type is text and the second type is image or video, the display module 602 includes:
the receiving submodule is used for receiving N first target objects which are sent by a server and are obtained by searching the target information, and the first target objects are first target images or first target videos;
the extraction sub-module is used for extracting M second target objects corresponding to the target information from a local preset database under the condition that the matching degrees of the N first target objects and the target information are lower than a first threshold;
and the synthesis submodule is used for carrying out object synthesis on the basis of the M second target objects to obtain and display second information, wherein N and M are positive integers.
Optionally, the synthesis submodule comprises:
the synthesizing unit is used for synthesizing according to the M second target objects to obtain first synthesis information;
a determination unit configured to determine the first synthesis information as the second information and display the second information if a conversion rate of the first synthesis information, which is a ratio of content of the first synthesis information to content of the first information, is greater than or equal to a second threshold;
and a display unit, configured to display first prompt information when a conversion rate of the first synthesis information is smaller than a second threshold, where the first prompt information is a prompt that object synthesis cannot be performed according to the M second target objects.
Optionally, in a case where the first type is an image and the second type is a video, the display module 602 includes:
the first display sub-module is used for identifying and displaying text information of the first type of image under the condition that the image is the first type of image;
the second display submodule is used for identifying the film and television information corresponding to the second type image under the condition that the image is the second type image; displaying a downloading path corresponding to the video information based on the video information;
and the third display sub-module is used for extracting the editing mode of the third type image and displaying the application program corresponding to the editing mode under the condition that the image is the third type image.
In the embodiment of the application, the conversion between the first information and the second information of different types can be realized without manual input of a user, so that the information conversion efficiency is improved.
The information display device in the embodiment of the present application may be a device, or may be a component, an integrated circuit, or a chip in a terminal. The device can be mobile electronic equipment or non-mobile electronic equipment. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine or a self-service machine, and the like, and the embodiments of the present application are not particularly limited.
The information display device in the embodiment of the present application may be a device having an operating system. The operating system may be an Android (Android) operating system, an ios operating system, or other possible operating systems, and embodiments of the present application are not limited specifically.
The information display device provided in the embodiment of the present application can implement each process implemented by the method embodiments of fig. 1 to 5, and is not described here again to avoid repetition.
Optionally, as shown in fig. 7, an electronic device 700 is further provided in this embodiment of the present application, and includes a processor 701, a memory 702, and a program or an instruction stored in the memory 702 and executable on the processor 701, where the program or the instruction is executed by the processor 701 to implement each process of the above-mentioned embodiment of the information display method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 8 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 800 includes, but is not limited to: a radio frequency unit 801, a network module 802, an audio output unit 803, an input unit 804, a sensor 805, a display unit 806, a user input unit 807, an interface unit 808, a memory 809, and a processor 810.
Those skilled in the art will appreciate that the electronic device 800 may further comprise a power source (e.g., a battery) for supplying power to the various components, and the power source may be logically connected to the processor 810 via a power management system, so as to manage charging, discharging, and power consumption via the power management system. The electronic device structure shown in fig. 8 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than those shown, combine some components, or arrange the components differently, which is not described in detail here.
A user input unit 807 for receiving first information input by a user in a case that the display interface includes an input box, wherein the first information includes target information;
a display unit 806, configured to display second information based on the target information when the first information is of a first type, where the second information is obtained by updating the first information to a second type according to the target information.
Optionally, in the case that the first type is text and the second type is image or video;
a display unit 806, further configured to receive N first target objects that are sent by a server and obtained by retrieval based on the target information, where each first target object is a first target image or a first target video;
the processor 810 is configured to extract M second target objects corresponding to the target information from a local preset database when the matching degrees between the N first target objects and the target information are all lower than a first threshold;
the display unit 806 is further configured to perform object synthesis based on the M second target objects to obtain and display the second information, where N and M are positive integers.
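The retrieval-with-fallback logic above can be sketched as follows; it assumes each of the N first target objects arrives with a matching degree in [0, 1], uses a dictionary as a stand-in for the local preset database, and treats the threshold value and the synthesis step as placeholders rather than anything prescribed by the patent.

    # Sketch of the server-retrieval / local-fallback step; threshold and scoring are assumptions.
    FIRST_THRESHOLD = 0.6  # hypothetical value of the "first threshold"

    def choose_objects(server_results, local_db, target_info):
        """server_results: list of (object, matching_degree) pairs retrieved for the target information."""
        if any(score >= FIRST_THRESHOLD for _, score in server_results):
            # at least one of the N first target objects matches well enough
            return [obj for obj, _ in server_results]
        # all matching degrees are below the first threshold:
        # fall back to the M second target objects in the local preset database
        return local_db.get(target_info, [])

    def synthesize(objects):
        """Placeholder for object synthesis: combine the selected objects into one result."""
        return " + ".join(objects)

    local_db = {"sunset over the sea": ["sunset.png", "sea.png"]}
    server_results = [("beach.mp4", 0.3), ("sky.png", 0.4)]  # every matching degree below 0.6
    print(synthesize(choose_objects(server_results, local_db, "sunset over the sea")))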
Optionally, the processor 810 is further configured to perform synthesis according to the M second target objects to obtain first synthesis information, and to determine the first synthesis information as the second information in a case where a conversion rate of the first synthesis information is greater than or equal to a second threshold;
a display unit 806, configured to display the second information, where the conversion rate is a ratio of content of the first synthesis information to content of the first information; and to display first prompt information in a case where the conversion rate of the first synthesis information is smaller than the second threshold, where the first prompt information is used to prompt that object synthesis cannot be performed according to the M second target objects.
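The conversion rate is defined only as a ratio between the content of the first synthesis information and the content of the first information; the sketch below reads it as keyword coverage, which is one possible interpretation chosen purely for illustration, and the second threshold value is likewise an assumption.

    # Conversion-rate check; the coverage metric and the threshold value are assumptions.
    SECOND_THRESHOLD = 0.5  # hypothetical value of the "second threshold"

    def conversion_rate(first_info_text: str, synthesized_tags: set) -> float:
        """Fraction of the words in the first information covered by the synthesized result."""
        words = set(first_info_text.lower().split())
        return len(words & synthesized_tags) / len(words) if words else 0.0

    def decide(first_info_text: str, synthesized_tags: set) -> str:
        if conversion_rate(first_info_text, synthesized_tags) >= SECOND_THRESHOLD:
            return "display the second information"
        return "display first prompt: cannot synthesize from the M second target objects"

    print(decide("sunset over the sea", {"sunset", "sea"}))  # rate 0.5 -> display the second information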
Optionally, in a case where the first type is an image and the second type is text:
the display unit 806 is further configured to identify and display text information of a first type image in a case where the image is the first type image; identify film and television information corresponding to a second type image in a case where the image is the second type image, and display, based on the film and television information, a download path corresponding to the film and television information; and, in a case where the image is a third type image, extract an editing mode of the third type image and display an application program corresponding to the editing mode.
In this way, the conversion from the first type information to the second type information can be achieved without requiring the user to perform the conversion manually, which improves the efficiency of information conversion.
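The three image branches described above (text image, film/television still, edited image) can be sketched as a simple dispatch; the image classification is assumed to have happened already, and the recognizers, download path, and editing-mode-to-application mapping below are placeholders, not the patented implementation.

    # Three-way image handling sketch; classification and recognition are stubbed out.
    EDIT_MODE_TO_APP = {"beauty-filter": "SomePhotoEditor"}  # hypothetical mapping

    def handle_image(image_kind: str, image_name: str) -> str:
        if image_kind == "first":   # image containing text: recognize and display the text
            return f"text recognized in {image_name}"
        if image_kind == "second":  # film/TV still: identify the work and show a download path
            return f"film info for {image_name}; download path: https://example.com/download"
        if image_kind == "third":   # edited image: extract the editing mode and suggest an app
            edit_mode = "beauty-filter"
            return f"{image_name} edited with {edit_mode}; open {EDIT_MODE_TO_APP[edit_mode]}"
        raise ValueError(f"unknown image kind: {image_kind}")

    print(handle_image("second", "frame_001.png"))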
It should be understood that, in the embodiment of the present application, the input unit 804 may include a Graphics Processing Unit (GPU) 8041 and a microphone 8042, and the graphics processing unit 8041 processes image data of a still picture or a video obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 806 may include a display panel 8061, and the display panel 8061 may be configured in the form of a liquid crystal display, an organic light-emitting diode display, or the like. The user input unit 807 includes a touch panel 8071 and other input devices 8072. The touch panel 8071, also referred to as a touch screen, may include two parts: a touch detection device and a touch controller. Other input devices 8072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described in detail here. The memory 809 may be used to store software programs and various data, including but not limited to application programs and an operating system. The processor 810 may integrate an application processor, which mainly handles the operating system, user interfaces, and application programs, and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor may not be integrated into the processor 810.
An embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium. When executed by a processor, the program or the instruction implements each process of the above embodiment of the information display method and can achieve the same technical effect; details are not repeated here to avoid repetition.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
An embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the above embodiment of the information display method and achieve the same technical effect; details are not repeated here to avoid repetition.
It should be understood that the chip mentioned in the embodiments of the present application may also be referred to as a system-on-chip, a system chip, a chip system, or the like.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not only include those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element identified by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order depending on the functions involved; for example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (8)

1. An information display method, comprising:
receiving first information input by a user under the condition that a display interface comprises an input box, wherein the first information comprises target information;
under the condition that the first information is of a first type, displaying second information based on the target information, wherein the second information is obtained by updating the first information into a second type according to the target information;
in a case where the first type is an image and the second type is text, the displaying second information based on the target information includes:
under the condition that the image is a first type image, recognizing and displaying text information of the first type image, wherein the target information is text information included in the image;
under the condition that the image is a second type image, identifying film and television information corresponding to the second type image, and displaying, based on the film and television information, a download path corresponding to the film and television information, wherein the target information is a characteristic image in the image;
under the condition that the image is a third type image, extracting an editing mode of the third type image, and displaying an application program corresponding to the editing mode, wherein the target information is a processing mode adopted by the image.
2. The method according to claim 1, wherein in the case that the first type is text and the second type is image or video, the displaying second information based on the target information comprises:
receiving N first target objects that are sent by a server and obtained by searching based on the target information, wherein each first target object is a first target image or a first target video;
under the condition that the matching degrees between the N first target objects and the target information are all lower than a first threshold, extracting M second target objects corresponding to the target information from a local preset database;
and performing object synthesis based on the M second target objects to obtain and display second information, wherein N and M are positive integers.
3. The method according to claim 2, wherein the performing object synthesis based on the M second target objects, obtaining and displaying second information comprises:
synthesizing according to the M second target objects to obtain first synthesis information;
determining the first synthesis information as the second information and displaying the second information under the condition that the conversion rate of the first synthesis information is greater than or equal to a second threshold, wherein the conversion rate is the ratio of the content of the first synthesis information to the content of the first information;
and displaying first prompt information under the condition that the conversion rate of the first synthesis information is smaller than a second threshold, wherein the first prompt information is used for prompting that the object synthesis cannot be carried out according to the M second target objects.
4. An information display device, comprising:
the display device comprises a receiving module, a display module and a display module, wherein the receiving module is used for receiving first information input by a user under the condition that a display interface comprises an input box, and the first information comprises target information;
the display module is used for displaying second information based on the target information under the condition that the first information is of a first type, wherein the second information is obtained by updating the first information into a second type according to the target information;
in a case where the first type is an image and the second type is a text, the display module includes:
the first display sub-module is used for identifying and displaying text information of the first type of image under the condition that the image is the first type of image; wherein the target information is text information included in the image;
the second display submodule is used for identifying film and television information corresponding to a second type image under the condition that the image is the second type image, and displaying, based on the film and television information, a download path corresponding to the film and television information, wherein the target information is a characteristic image in the image;
the third display sub-module is used for, under the condition that the image is a third type image, extracting an editing mode of the third type image and displaying an application program corresponding to the editing mode, wherein the target information is a processing mode adopted by the image.
5. The information display device according to claim 4, wherein in a case where the first type is text and the second type is image or video, the display module includes:
the receiving submodule is used for receiving N first target objects that are sent by a server and obtained by searching based on the target information, where each first target object is a first target image or a first target video;
the extraction sub-module is used for extracting M second target objects corresponding to the target information from a local preset database under the condition that the matching degrees between the N first target objects and the target information are all lower than a first threshold;
and the synthesis submodule is used for carrying out object synthesis on the basis of the M second target objects to obtain and display second information, wherein N and M are positive integers.
6. The information display device according to claim 5, wherein the synthesis sub-module includes:
the synthesizing unit is used for synthesizing according to the M second target objects to obtain first synthesis information;
a determination unit configured to determine the first synthesis information as the second information and display the second information if a conversion rate of the first synthesis information, which is a ratio of content of the first synthesis information to content of the first information, is greater than or equal to a second threshold;
a display unit, configured to display first prompt information when a conversion rate of the first synthesis information is smaller than a second threshold, where the first prompt information is used to prompt that object synthesis cannot be performed according to the M second target objects.
7. An electronic device comprising a processor, a memory, and a program or instructions stored on the memory and executable on the processor, the program or instructions when executed by the processor implementing the steps of the information display method according to any one of claims 1-3.
8. A readable storage medium, characterized in that it stores thereon a program or instructions which, when executed by a processor, implement the steps of the information display method according to any one of claims 1-3.
CN202011622758.0A 2020-12-31 2020-12-31 Information display method and device and electronic equipment Active CN112764601B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011622758.0A CN112764601B (en) 2020-12-31 2020-12-31 Information display method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011622758.0A CN112764601B (en) 2020-12-31 2020-12-31 Information display method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN112764601A CN112764601A (en) 2021-05-07
CN112764601B true CN112764601B (en) 2022-07-01

Family

ID=75698592

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011622758.0A Active CN112764601B (en) 2020-12-31 2020-12-31 Information display method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN112764601B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109684497A (en) * 2018-12-19 2019-04-26 北京金山安全软件有限公司 Image-text matching information sending method and device and electronic equipment
CN111324760A (en) * 2020-02-19 2020-06-23 名创优品(横琴)企业管理有限公司 Image retrieval method and device
CN111639282A (en) * 2020-05-29 2020-09-08 维沃移动通信有限公司 Information display method, display device and electronic equipment

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110307255A1 (en) * 2010-06-10 2011-12-15 Logoscope LLC System and Method for Conversion of Speech to Displayed Media Data

Also Published As

Publication number Publication date
CN112764601A (en) 2021-05-07

Similar Documents

Publication Publication Date Title
CN109688463B (en) Clip video generation method and device, terminal equipment and storage medium
US20090161963A1 (en) Method. apparatus and computer program product for utilizing real-world affordances of objects in audio-visual media data to determine interactions with the annotations to the objects
US8938153B2 (en) Representative image or representative image group display system, representative image or representative image group display method, and program therefor
US20080215548A1 (en) Information search method and system
CN110347866B (en) Information processing method, information processing device, storage medium and electronic equipment
CN113014801B (en) Video recording method, video recording device, electronic equipment and medium
CN112052784B (en) Method, device, equipment and computer readable storage medium for searching articles
CN106407358A (en) Image searching method and device and mobile terminal
CN113177419B (en) Text rewriting method and device, storage medium and electronic equipment
CN113194256B (en) Shooting method, shooting device, electronic equipment and storage medium
CN113869063A (en) Data recommendation method and device, electronic equipment and storage medium
CN111542817A (en) Information processing device, video search method, generation method, and program
CN108133209A (en) Target area searching method and its device in a kind of text identification
CN112860921A (en) Information searching method and device
CN111967367A (en) Image content extraction method and device and electronic equipment
CN112764601B (en) Information display method and device and electronic equipment
CN108052506B (en) Natural language processing method, device, storage medium and electronic equipment
CN115061580A (en) Input method, input device, electronic equipment and readable storage medium
CN113362426B (en) Image editing method and image editing device
CN112261321B (en) Subtitle processing method and device and electronic equipment
CN114416664A (en) Information display method, information display device, electronic apparatus, and readable storage medium
CN113593614A (en) Image processing method and device
CN113268961A (en) Travel note generation method and device
CN112261483A (en) Video output method and device
CN113163256A (en) Method and device for generating operation flow file based on video

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant