CN112596656A - Content identification method, device and storage medium - Google Patents

Content identification method, device and storage medium

Info

Publication number
CN112596656A
CN112596656A
Authority
CN
China
Prior art keywords
recognition
text
content
floating window
picture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011597027.5A
Other languages
Chinese (zh)
Inventor
莫志伟
朱英涛
刘曼烨
涂权蓉
蔡文
钱庄
潘琼
仲晨
王家星
范馨文
高贺
郑健鹏
罗泓婷
柳亦婷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd filed Critical Beijing Xiaomi Mobile Software Co Ltd
Priority to CN202011597027.5A
Publication of CN112596656A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04842 Selection of displayed objects or displayed text elements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/10 Image acquisition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V 30/10 Character recognition

Abstract

The disclosure relates to a content recognition method, a content recognition apparatus, and a storage medium. The method includes: when a first trigger operation for display content on a current interface is detected, displaying a text recognition floating window and/or a picture recognition floating window on the current interface; when the text recognition floating window is displayed, performing text recognition on the display content based on a second trigger operation received by the text recognition floating window; and in the case that the text recognition fails or the text recognition floating window is not displayed, performing picture recognition on the display content based on a third trigger operation received by the picture recognition floating window. In this way, in the first aspect, full-scene recognition of the portal function can be realized; in the second aspect, text recognition is used preferentially, which saves network traffic; in the third aspect, picture recognition serves as a fallback scheme, which reduces the power consumption generated by content recognition and improves the stability of content recognition.

Description

Content identification method, device and storage medium
Technical Field
The present disclosure relates to the field of computer communications, and in particular, to a content identification method, device and storage medium.
Background
In the related art, the portal function can capture only text within an application, based on a preset function. For example, text capture may be performed within an application by a content-catcher-like method. However, when only the text within the application is captured, not all content can be captured; the capture scenarios are therefore limited, and the captured text may be inaccurate or may fail to be captured at all. As a result, the portal function performs poorly, which reduces the user experience.
Disclosure of Invention
The disclosure provides a content recognition method, a content recognition device and a storage medium.
According to a first aspect of the embodiments of the present disclosure, there is provided a content identification method applied to an electronic device, including:
when a first trigger operation aiming at display content on a current interface is detected, displaying a text recognition floating window and/or a picture recognition floating window on the current interface;
when the text recognition floating window is displayed, performing text recognition on the display content based on a second trigger operation received by the text recognition floating window;
and under the condition that the text recognition fails or the text recognition floating window is not displayed, carrying out picture recognition on the display content based on a third trigger operation received by the picture recognition floating window.
Optionally, the method further includes:
when the text recognition floating window receives the second trigger operation, displaying a text recognition control on the current interface, and displaying a picture recognition control within a first set range of the text recognition control;
when the picture recognition floating window receives the third trigger operation, displaying the picture recognition control on the current interface, and displaying the text recognition control within a second set range of the picture recognition control;
the text recognition control and the picture recognition control are used for switching the recognition mode of the display content.
Optionally, the performing picture recognition on the display content based on the third trigger operation received by the picture recognition floating window includes:
when the picture recognition floating window receives the third trigger operation, displaying the display content on the current interface in picture form, and detecting a fourth trigger operation acting on the text recognition control;
and when the fourth trigger operation is detected, recognizing text content from the display content by using an optical character recognition technology.
Optionally, the method further includes:
sending the text content identified from the display content to a server, wherein the server is used for performing word segmentation processing and intention analysis on the text content;
and receiving a word segmentation result and an intention recognition result returned by the server, and displaying the word segmentation result and the intention recognition result on the current interface.
Optionally, the word segmentation result includes at least one character, and the method further includes:
displaying at least one of the characters on the current interface;
and updating the intention recognition result according to a detected selection operation for at least one of the characters.
Optionally, the displaying, when the first trigger operation for the display content on the current interface is detected, a text recognition floating window and/or a picture recognition floating window on the current interface includes:
when the first trigger operation is detected, performing content capture on the area where the display content is located, and displaying the text recognition floating window and/or the picture recognition floating window on the current interface according to the capture result.
Optionally, the displaying the text recognition floating window and/or the picture recognition floating window on the current interface according to the capture result includes:
if text content is captured, displaying the text recognition floating window on the current interface;
if picture content is captured, displaying the picture recognition floating window on the current interface;
and if neither text content nor picture content is captured, displaying the text recognition floating window and the picture recognition floating window on the current interface.
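This display rule can be sketched as a small decision function. The function name and the priority given to text when both content types are captured are assumptions for illustration, not part of the patent:

```python
def floating_windows_for(text_captured: bool, picture_captured: bool) -> list[str]:
    """Decide which recognition floating window(s) to display on the
    current interface, based on the content-grab result."""
    if text_captured:
        return ["text"]              # text content captured: text recognition window
    if picture_captured:
        return ["picture"]           # picture content captured: picture recognition window
    return ["text", "picture"]       # neither captured: display both windows
```

When nothing is captured, both entries are offered so the user can still choose a recognition path.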
Optionally, the method further includes:
when a fifth trigger operation for the current interface is detected, displaying a screenshot frame on the current interface;
and performing text recognition on the screenshot content located within the screenshot frame.
Optionally, the method further includes:
adjusting the size and/or position of the screenshot frame based on a detected adjustment operation;
the performing text recognition on the screenshot content located within the screenshot frame includes:
performing text recognition on the screenshot content within the adjusted screenshot frame.
According to a second aspect of the embodiments of the present disclosure, there is provided a content identification apparatus applied to an electronic device, including:
the first display module is configured to display a text recognition floating window and/or a picture recognition floating window on a current interface when a first trigger operation aiming at display content on the current interface is detected;
the first identification module is configured to perform text identification on the display content based on a second trigger operation received by the text identification floating window when the text identification floating window is displayed;
and the second identification module is configured to perform picture identification on the display content based on a third trigger operation received by the picture identification floating window under the condition that text identification fails or the text identification floating window is not displayed.
Optionally, the apparatus further comprises:
the second display module is configured to display a text recognition control on the current interface and display the picture recognition control in a first set range of the text recognition control when the text recognition floating window receives the second trigger operation;
the third display module is configured to display the picture identification control on the current interface and display the text identification control in a second set range of the picture identification control when the picture identification floating window receives the third trigger operation;
the text recognition control and the picture recognition control are used for switching recognition modes of the display content.
Optionally, the second identification module is further configured to:
when the picture identification floating window receives the third trigger operation, displaying the display content on the current interface in a picture mode, and detecting a fourth trigger operation acting on the text identification control;
and when the fourth trigger operation is detected, recognizing text content from the display content by using an optical character recognition technology.
Optionally, the apparatus further comprises:
the sending module is configured to send the text content identified from the display content to a server, wherein the server is used for performing word segmentation processing and intention analysis on the text content;
and the receiving module is configured to receive the word segmentation result and the intention recognition result returned by the server and display the word segmentation result and the intention recognition result on the current interface.
Optionally, the word segmentation result includes at least one character, and the apparatus further includes:
a third display module configured to display at least one of the characters on the current interface;
an updating module configured to update the intention recognition result according to the detected selected operation for at least one of the characters.
Optionally, the first display module is further configured to:
and when the first trigger operation is detected, perform content capture on the area where the display content is located, and display the text recognition floating window and/or the picture recognition floating window on the current interface according to the capture result.
Optionally, the first display module is further configured to:
if text content is captured, display the text recognition floating window on the current interface;
if picture content is captured, display the picture recognition floating window on the current interface;
and if neither text content nor picture content is captured, display the text recognition floating window and the picture recognition floating window on the current interface.
Optionally, the apparatus further comprises:
the fourth display module is configured to display a screenshot frame on the current interface when a fifth trigger operation for the current interface is detected;
and the third recognition module is configured to perform text recognition on the screenshot content located within the screenshot frame.
Optionally, the apparatus further comprises:
an adjustment module configured to adjust the size and/or position of the screenshot frame based on a detected adjustment operation;
the third recognition module is further configured to:
perform text recognition on the screenshot content within the adjusted screenshot frame.
According to a third aspect of the embodiments of the present disclosure, there is provided a content recognition apparatus including:
a processor;
a memory configured to store processor-executable instructions;
wherein the processor is configured to execute the instructions to perform the steps of any one of the methods of the first aspect.
According to a fourth aspect of embodiments of the present disclosure, there is provided a non-transitory computer readable storage medium, wherein instructions, when executed by a processor of a content recognition apparatus, enable the content recognition apparatus to perform the steps of any one of the above-mentioned methods of the first aspect.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
as can be seen from the foregoing embodiments, according to the present disclosure, when a first trigger operation for display content on a current interface is detected, a text recognition floating window and/or a picture recognition floating window is displayed on the current interface, when the text recognition floating window is displayed, text recognition is performed on the display content based on a second trigger operation received by the text recognition floating window, and when the text recognition fails or the text recognition floating window is not displayed, picture recognition is performed on the display content based on a third trigger operation received by the picture recognition floating window.
In this way, in the first aspect, the text recognition floating window and/or the picture recognition floating window can be displayed on the current interface based on the first trigger operation, providing multiple different entries for recognizing the display content; when the content recognition method is applied to a portal, full-scene recognition of the portal function can be realized. In the second aspect, text recognition is used first rather than applying picture recognition directly to all content, which saves network traffic and reduces the power consumption generated by content recognition. In the third aspect, picture recognition is performed only when text recognition fails or the text recognition floating window is not displayed; used as a fallback scheme, it improves the stability of content recognition.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a flowchart illustrating a content recognition method according to an example embodiment.
FIG. 2 is a first display interface diagram shown in accordance with an exemplary embodiment.
FIG. 3 is a second display interface diagram shown in accordance with an exemplary embodiment.
FIG. 4 is a third display interface diagram shown in accordance with an exemplary embodiment.
FIG. 5 is a fourth display interface diagram shown in accordance with an exemplary embodiment.
Fig. 6 is a flow diagram illustrating another content identification method according to an example embodiment.
Fig. 7 is a block diagram illustrating a content recognition apparatus according to an example embodiment.
Fig. 8 is a block diagram illustrating a content recognition apparatus 700 according to an example embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
A content recognition method is provided in the embodiments of the present disclosure. Fig. 1 is a schematic flowchart of a content recognition method according to an exemplary embodiment. As shown in Fig. 1, the method may be applied to an electronic device and mainly includes the following steps:
in step 101, when a first trigger operation for display content on a current interface is detected, displaying a text recognition floating window and/or a picture recognition floating window on the current interface;
in step 102, when the text recognition floating window is displayed, performing text recognition on the display content based on a second trigger operation received by the text recognition floating window;
in step 103, in the case that the text recognition fails or the text recognition floating window is not displayed, performing picture recognition on the display content based on a third trigger operation received by the picture recognition floating window.
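Steps 101 to 103 amount to a text-first pipeline with picture recognition as the fallback. The sketch below illustrates this ordering; the recognizer callables are hypothetical stand-ins, not APIs named by the patent:

```python
from typing import Callable, Optional

def recognize_content(display_content: str,
                      text_recognizer: Callable[[str], Optional[str]],
                      picture_recognizer: Callable[[str], str],
                      text_window_shown: bool) -> tuple[str, str]:
    """Try text recognition first (step 102); fall back to picture
    recognition when text recognition fails or the text recognition
    floating window is not displayed (step 103)."""
    if text_window_shown:
        try:
            result = text_recognizer(display_content)
            if result is not None:
                return ("text", result)
        except Exception:
            pass  # text recognition failed: fall through to picture recognition
    return ("picture", picture_recognizer(display_content))
```

The try/fallback structure is what lets picture recognition act as the stability-improving fallback path while text recognition handles the common case cheaply.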
Here, the electronic device includes a mobile terminal and a fixed terminal, wherein the mobile terminal includes: mobile phones, tablet computers, notebook computers, and the like; the fixed terminal includes: personal computers, and the like.
The first trigger operation may be an operation acting on the display content on the current interface of the electronic device, for example, a selection operation, a click operation, a long-press operation, or the like. In the embodiment of the disclosure, when the first trigger operation for the display content is detected, a text recognition floating window and/or a picture recognition floating window can be displayed on the current interface.
Taking the example that the display content is a text document, the first trigger operation may be a long-press operation for a certain page of the text document, or a long-press operation for a certain segment of characters in the text document.
Fig. 2 is a first schematic diagram of a display interface according to an exemplary embodiment. As shown in Fig. 2, when only the text recognition floating window 201 is displayed, it may be displayed at an edge of the current interface, for example at the rightmost side, which makes it convenient to input the second trigger operation. In other embodiments, the text recognition floating window may also be displayed at another position of the current interface, for example at the bottom, as long as the position is convenient for inputting the second trigger operation; this is not limited herein.
In the embodiment of the disclosure, when the text recognition floating window is displayed, a second trigger operation may be received based on the text recognition floating window, and when the second trigger operation is detected, text recognition may be performed on the display content. The second trigger operation may be an operation acting on the text recognition floating window, and may be, for example, a click operation, a long-press operation, or the like.
For example, when the second trigger operation is detected, text recognition is performed on the display content to obtain text content. After the text content is obtained, it may be sent to the server, so that the server analyzes the text content to obtain an analysis result and returns the analysis result to the electronic device. The analysis result includes a word segmentation result and an intention analysis result.
In the embodiment of the disclosure, in the case that the text recognition fails or the text recognition floating window is not displayed, picture recognition may be performed on the display content based on the third trigger operation received by the picture recognition floating window. Since the electronic device cannot recognize the text content in the display content in this case, the third trigger operation may be received via the picture recognition floating window, and picture recognition may be performed on the display content based on it. Here, picture recognition of the display content includes: recognizing the display content in picture form based on Optical Character Recognition (OCR) and acquiring the text content in the display content.
In the embodiment of the disclosure, when a first trigger operation for display content on a current interface is detected, a text recognition floating window and/or a picture recognition floating window are/is displayed on the current interface, when the text recognition floating window is displayed, text recognition is performed on the display content based on a second trigger operation received by the text recognition floating window, and when the text recognition fails or the text recognition floating window is not displayed, picture recognition is performed on the display content based on a third trigger operation received by the picture recognition floating window.
In this way, in the first aspect, the text recognition floating window and/or the picture recognition floating window can be displayed on the current interface based on the first trigger operation, providing multiple different entries for recognizing the display content; when the content recognition method is applied to a portal, full-scene recognition of the portal function can be realized. In the second aspect, text recognition is used first rather than applying picture recognition directly to all content, which saves network traffic and reduces the power consumption generated by content recognition. In the third aspect, picture recognition is performed only when text recognition fails or the text recognition floating window is not displayed; used as a fallback scheme, it improves the stability of content recognition.
In some embodiments, the picture recognizing the display content based on the third trigger operation received by the picture recognition floating window includes:
when the picture identification floating window receives the third trigger operation, displaying the display content on the current interface in a picture mode, and detecting a fourth trigger operation acting on the text identification control;
and when the fourth trigger operation is detected, recognizing text content from the display content by using an optical character recognition technology.
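The two-trigger sequence above (third trigger shows the content in picture form, fourth trigger runs OCR on it) can be sketched as a small state holder. The class and the `ocr` callable are hypothetical stand-ins for the device's OCR engine, which the patent does not name:

```python
from typing import Callable, Optional

class PictureRecognition:
    """Sketch: the third trigger displays the content as a picture;
    the fourth trigger recognizes text content from that picture."""
    def __init__(self, ocr: Callable[[bytes], str]):
        self.ocr = ocr                 # stand-in for an OCR engine
        self.picture: Optional[bytes] = None

    def on_third_trigger(self, display_content: bytes) -> None:
        # Display the content on the current interface in picture form.
        self.picture = display_content

    def on_fourth_trigger(self) -> str:
        # Recognize text content from the displayed picture via OCR.
        if self.picture is None:
            raise RuntimeError("no picture displayed yet")
        return self.ocr(self.picture)
```

Keeping the picture as state between the two triggers is what lets the user inspect the content before committing to OCR.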
In the embodiment of the disclosure, since the electronic device cannot recognize the text content in the display content when the text recognition fails or the text recognition floating window is not displayed, a third trigger operation may be received via the picture recognition floating window at this time, and picture recognition may be performed on the display content based on the third trigger operation. To let the user clearly understand the content to be recognized, the display content to be recognized can be displayed on the current interface in picture form, so as to reduce the possibility of errors in the recognized content.
Since the content to be recognized is in picture form, the picture recognition control is selected automatically. In the implementation process, the text recognition control then needs to be triggered by the user; that is, a fourth trigger operation is input on the text recognition control, so that picture recognition is switched to text recognition and the text content in the display content is obtained.
In the embodiment of the disclosure, text recognition is used preferentially rather than applying picture recognition directly to all content, which saves network traffic and reduces the power consumption generated by content recognition; picture recognition is performed only when text recognition fails or the text recognition floating window is not displayed, and, used as a fallback scheme, it improves the stability of content recognition.
In some embodiments, the method further comprises:
when the text recognition floating window receives the second trigger operation, displaying a text recognition control on the current interface, and displaying a picture recognition control in a first set range of the text recognition control;
when the picture identification floating window receives the third trigger operation, displaying the picture identification control on the current interface, and displaying the text identification control in a second set range of the picture identification control;
the text recognition control and the picture recognition control are used for switching recognition modes of the display content.
In the embodiment of the disclosure, when the text recognition floating window receives the second trigger operation, the text recognition control is displayed on the current interface, and the picture recognition control is displayed within a first setting range of the text recognition control. The first setting range can be a range which is a first distance away from the text recognition control, and can be set as required, so long as the text recognition control and the picture recognition control can be displayed simultaneously and the switching is convenient.
And when the picture recognition floating window receives the third trigger operation, the picture recognition control is displayed on the current interface, and the text recognition control is displayed within a second set range of the picture recognition control. The second set range may be a range at a second distance from the picture recognition control, and may be set as required, as long as the text recognition control and the picture recognition control can be displayed simultaneously and switched conveniently.
In some embodiments, the first setting range may be the same as the second setting range, and of course, the first setting range may be different from the second setting range as long as the respective functions can be realized.
In some embodiments, the text recognition control and the picture recognition control may be label-type controls. Setting the two controls to the label type makes them convenient for the user to use and find, further improving the convenience of content recognition.
Fig. 3 is a diagram illustrating a second display interface according to an example embodiment, where as shown in fig. 3, a picture recognition control 302 may be displayed to the right of a text recognition control 301. In other embodiments, the picture recognition control may also be displayed on the left side of the text recognition control, which is not limited in this respect.
In the embodiment of the disclosure, both the text recognition control and the picture recognition control can be displayed on the current interface, when a user needs to switch the recognition modes, the user only needs to input corresponding trigger operation through the text recognition control or the picture recognition control, and through the mode, the text recognition and the picture recognition can be quickly switched.
In some embodiments, the method further comprises:
sending the text content identified from the display content to a server, wherein the server is used for performing word segmentation processing and intention analysis on the text content;
and receiving a word segmentation result and an intention recognition result returned by the server, and displaying the word segmentation result and the intention recognition result on the current interface.
In the embodiment of the disclosure, after the text content in the display content is identified, the text content may be sent to the server, and after the server receives the text content, the server may perform word segmentation processing and intention analysis on the text content, so as to obtain a word segmentation result and an intention identification result. For example, a tokenizer may be utilized to tokenize text content according to its semantics.
In some embodiments, the intent recognition result may be determined according to semantics of respective words in the text content. For example, if the text content includes a person name, the obtained intention recognition result may be a person introduction corresponding to the person name. If the text content includes a place name, the obtained intention recognition result may be the position of the place corresponding to the place name in the map.
In the embodiment of the disclosure, by performing word segmentation processing and intention analysis on the text content, the corresponding word segmentation result and intention recognition result can be determined accurately; displaying them on the current interface of the electronic device makes them convenient for the user to view and operate on.
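To make the server-side flow above concrete, the following is a minimal sketch, not the disclosed implementation: a stand-in whitespace tokenizer segments the text, and the intention recognition result is derived from the semantics of the segmented words, as in the person-name and place-name examples. The entity tables and function names are illustrative assumptions.

```python
# Sketch of the server-side flow: segment the text, then derive an intention
# recognition result from the semantics of the segmented words.
# The entity tables and the whitespace tokenizer are illustrative assumptions.

PERSON_NAMES = {"Michael Lewis"}
PLACE_NAMES = {"Beijing"}

def segment(text):
    # Stand-in tokenizer; a real server would use a semantic word segmenter.
    return text.replace(".", "").split()

def analyze_intent(tokens):
    phrase = " ".join(tokens)
    if phrase in PERSON_NAMES:
        # Person name -> person introduction, per the example above.
        return {"type": "person_introduction", "subject": phrase}
    if phrase in PLACE_NAMES:
        # Place name -> position of the place in a map.
        return {"type": "map_location", "subject": phrase}
    # Default intent for other text: translation.
    return {"type": "translation", "subject": phrase}

def handle_request(text):
    tokens = segment(text)
    return {"segmentation": tokens, "intent": analyze_intent(tokens)}
```

The segmentation result would then be returned to the electronic device together with the intent, matching the two return values described above.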
In some embodiments, the word segmentation result comprises at least one character, the method further comprising:
displaying at least one of the characters on the current interface;
and updating the intention recognition result according to the detected selected operation aiming at least one character.
Fig. 4 is a schematic diagram illustrating a third display interface according to an exemplary embodiment. As shown in fig. 4, a word segmentation result 401 may be displayed below the current interface, and an intention recognition result 402 may be displayed above the word segmentation result. In the embodiment of the disclosure, since the word segmentation result includes a plurality of characters and each character is displayed independently, a selection operation can be input for each character respectively, and the intention recognition result is updated according to the selection operation.
For example, if the selection operation is input for "The", "New", and "going", respectively, the correspondingly displayed intention recognition result is a translation of "The New going". If a selection operation is input for "Michael Lewis", the correspondingly displayed intention recognition result is an introduction to the person Michael Lewis. That is, if the word segmentation objects acted on by the selection operation are different, the corresponding intention recognition results are also different.
In the embodiment of the disclosure, each character in the word segmentation result can be selected respectively, and the intention recognition result is updated according to the selection. This saves the user the step of clicking search and jumping to a search page, reducing the user's operation cost and improving efficiency.
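This selection-driven update could be sketched as follows; the intent rules and the token list are illustrative assumptions echoing the "The New going" and "Michael Lewis" examples above:

```python
# Sketch of updating the intention recognition result when the user selects
# a different subset of the displayed segmentation characters.
# The intent rules and token list are illustrative assumptions.

def intent_for_selection(tokens, selected_indices):
    selected = [tokens[i] for i in sorted(selected_indices)]
    phrase = " ".join(selected)
    if phrase == "Michael Lewis":
        # A person name yields a person-introduction intent.
        return ("person_introduction", phrase)
    # Any other selection defaults to a translation intent.
    return ("translation", phrase)

tokens = ["The", "New", "going", "Michael", "Lewis"]
```

Each time the set of selected characters changes, the intent is re-derived for exactly that subset, so the displayed result can switch in real time without a search-page jump.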
In some embodiments, the displaying a text recognition floating window and/or a picture recognition floating window on the current interface when detecting the first trigger operation for the display content on the current interface includes:
and when the first trigger operation is detected, content grabbing is carried out on the area where the display content is located, and a text recognition floating window and/or a picture recognition floating window are/is displayed on the current interface according to a grabbing result.
In the embodiment of the disclosure, when the first trigger operation is detected, content capture may be performed on an area where the display content is located, and the text recognition floating window and/or the picture recognition floating window may be displayed on the current interface according to a capture result. For example, content grabbing may be performed on an area where the content is displayed based on a preset function, where the preset function may be a content catcher type method.
Since content capture does not always succeed in practice, in the embodiment of the present disclosure, which type of floating window to display on the current interface can be determined based on the capture result, which provides convenience for the user to input a corresponding trigger operation.
In some embodiments, the displaying a text recognition floating window and/or a picture recognition floating window on the current interface according to the grabbing result includes:
if the text content is captured, displaying the text recognition floating window on the current interface;
if the picture content is captured, displaying the picture identification floating window on the current interface;
and if the text content and the picture content are not captured, displaying the text recognition floating window and the picture recognition floating window on the current interface.
In the embodiment of the disclosure, when text content is captured, the text recognition floating window may be displayed on the current interface. As shown in fig. 2, when only the text recognition floating window is displayed, it may be displayed at an edge of the current interface, for example on the right side, which provides convenience for inputting the second trigger operation. In other embodiments, the text recognition floating window may also be displayed at another position of the current interface, for example at the bottom, as long as the position is convenient for inputting the second trigger operation; this is not limited herein.
Here, when text content is captured, it indicates that the current display content contains text that the electronic device can identify. When such text content exists, text recognition is preferentially used instead of directly applying picture recognition to all the content, which saves network traffic and reduces the power consumption generated by content recognition.
Here, when picture content is captured, the picture recognition floating window is displayed on the current interface. When text recognition fails or the text recognition floating window is not displayed, picture recognition is performed instead; using picture recognition as a fallback scheme can improve the stability of content recognition.
Here, when neither the text content nor the picture content is captured, both the text recognition floating window and the picture recognition floating window are displayed on the current interface.
Fig. 5 is a schematic diagram of a display interface according to an exemplary embodiment. As shown in fig. 5, a text recognition floating window 501 and a picture recognition floating window 502 may be displayed on the current interface in a vertical arrangement, and both may be displayed on the rightmost side of the current interface, which provides convenience for inputting a trigger operation.
In the embodiment of the disclosure, different floating windows can be displayed on the current interface according to different capture results, providing the user with different entry points so that the user can enter different recognition modes as needed, further improving the flexibility of content recognition.
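The three display branches above can be sketched as a simple dispatch; the window identifiers are illustrative names, not part of the disclosure:

```python
# Sketch of choosing which floating window(s) to display from the capture
# result, mirroring the three branches described above.

def windows_for_capture(got_text: bool, got_picture: bool):
    if got_text:
        # Text present: prefer text recognition (saves traffic and power).
        return ["text_recognition_window"]
    if got_picture:
        # Picture only: fall back to picture recognition.
        return ["picture_recognition_window"]
    # Nothing captured: show both windows and let the user choose a mode.
    return ["text_recognition_window", "picture_recognition_window"]
```

Because text is checked first, the dispatch encodes the preference for text recognition described above even when both kinds of content are present.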
In some embodiments, the method further comprises:
when a fifth trigger operation aiming at the current interface is detected, displaying a screen capture frame on the current interface;
and performing text recognition on the screen capture content positioned in the screen capture frame.
In some embodiments, when the word segmentation result and the intention recognition result are displayed on the current interface, the method further comprises: detecting whether the surrounding area except the area where the word segmentation result and the intention recognition result are located receives the fifth trigger operation.
In the embodiment of the disclosure, since the recognition result obtained from the content automatically captured by the electronic device is not necessarily what the user requires, whether a fifth trigger operation is performed on the current interface may be detected. When the fifth trigger operation is detected, the screen capture frame may be displayed on the current interface, and the screen capture content within the frame may be acquired for text recognition. In this way, when text recognition and picture recognition do not succeed or the recognition result is poor, the display content can be acquired by taking a screenshot and then recognized, making the content recognition scheme in the disclosure more stable and reliable.
In some embodiments, the method further comprises:
adjusting the size and/or position of the screen capture frame based on a detected adjustment operation;
the text recognition of the screenshot content located in the screenshot box comprises the following steps:
and performing text recognition on the screenshot content in the adjusted screenshot box.
Here, when the content within the screen capture frame is inaccurate or not desired by the user, the user may input an adjustment operation to the screen capture frame and adjust the size and/or position of the screen capture frame based on the adjustment operation. Wherein the adjusting operation comprises: drag operations, and the like. For example, the size of the screen capture frame can be increased or decreased by pulling the border of the screen capture frame, the position of the screen capture frame can be adjusted by dragging the screen capture frame, and the like.
In the embodiment of the disclosure, when text recognition and picture recognition do not succeed or the recognition result is poor, the display content can be acquired by taking a screenshot and then recognized, making the content recognition scheme in the disclosure more stable and reliable. When the screen capture frame does not meet the user's requirements, it can be adjusted based on an adjustment operation so that the final result meets those requirements, which improves the user experience.
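One way such drag-based adjustment might be implemented is sketched below; the field names, pixel coordinates, and clamping policy are assumptions for illustration (the frame is kept fully within the screen bounds):

```python
# Sketch of adjusting the screen capture frame via drag operations, with the
# frame clamped to the screen bounds. Field names are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class CaptureFrame:
    x: int
    y: int
    width: int
    height: int

def move_frame(frame, dx, dy, screen_w, screen_h):
    # Dragging the frame body moves it; keep it fully on screen.
    frame.x = max(0, min(frame.x + dx, screen_w - frame.width))
    frame.y = max(0, min(frame.y + dy, screen_h - frame.height))
    return frame

def resize_frame(frame, dw, dh, screen_w, screen_h, min_size=32):
    # Pulling a border grows or shrinks the frame within screen limits,
    # never below a minimum usable size.
    frame.width = max(min_size, min(frame.width + dw, screen_w - frame.x))
    frame.height = max(min_size, min(frame.height + dh, screen_h - frame.y))
    return frame
```

After the adjustment gestures end, the content inside the adjusted frame would be re-captured and passed to text recognition, as described above.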
Fig. 6 is a flow chart illustrating another content identification method according to an exemplary embodiment, as shown in fig. 6, the method mainly includes the following steps:
in step 601, the electronic device triggers a transfer gate.
Here, the transfer gate is triggered when a long press operation (first trigger operation) for the display content on the current interface of the application program is detected.
In step 602, the electronic device performs text recognition.
Here, when the display content is text content recognizable by the electronic device, the display content is subjected to text recognition.
In step 603, a "load data" floating window is displayed.
Here, when the display content is text content that the electronic device can recognize, the "load data" floating window is displayed at the side of the current interface. The "load data" floating window may include: a text recognition floating window.
In step 604, the server requests a participle interface.
Here, when the electronic device displays the "load data" floating window, the recognized text content may be uploaded to the server; the server may invoke the word segmentation interface, perform word segmentation processing on the text content to obtain a word segmentation result, and return the word segmentation result to the electronic device.
In step 605, the electronic device presents a text recognition floating window.
In step 606, the electronic device presents the segmentation results.
Here, after receiving the word segmentation result returned by the server, the electronic device may display the word segmentation result on the current interface.
In step 607, a selection operation for the segmentation result is detected.
Here, when a selection operation for the word segmentation result is detected, the intention recognition interface is called in real time according to the characters acted on by the selection operation, and the intention recognition result corresponding to the selected segments is switched in real time above the word segmentation frame, thereby realizing intention recognition of a single character, multiple characters, or a whole sentence.
In step 608, the server requests the intent recognition interface.
Here, when the electronic device displays the "load data" floating window, the recognized text content may be uploaded to the server, and the server may invoke the intention recognition interface, perform intention analysis on the text content, obtain an intention recognition result, and return the intention recognition result to the electronic device.
In step 609, the electronic device presents the intent recognition result.
Here, the electronic device may display the intention recognition result on the current interface after receiving the intention recognition result returned by the server. Here, the side of the current interface may exhibit three intention recognition results, such as: people encyclopedia, plants encyclopedia, location positioning, etc. In other embodiments, two intention recognition results, four intention recognition results, and the like may be presented, and the number of intention recognition results is not limited herein.
In some embodiments, after clicking the text recognition floating window, a word segmentation result appears below the current interface, and a plurality of intention recognition results are above the word segmentation result. In some embodiments, each intent recognition result may be underlined.
In some embodiments, both the text recognition control and the picture recognition control may also be displayed on the current interface to enable switching between text recognition and picture recognition.
In step 610, when no characters are recognized, picture recognition is performed.
In step 611, the electronic device presents the picture identification floating window.
In step 612, the electronic device presents the screenshot.
In step 613, the electronic device presents a screenshot of the display content.
In step 614, the electronic device detects a click operation for the text recognition control.
In step 615, the electronic device invokes the optical character recognition interface.
In step 616, the electronic device presents the word segmentation result and the intent recognition result.
In step 617, the electronic device presents the picture recognition floating window and the text recognition floating window.
Here, when the electronic device does not recognize the text content or the picture content, the text recognition floating window and the picture recognition floating window may pop up at the same time at the side of the current interface, so as to enable the user to select whether to acquire the text content based on the text recognition or the picture recognition.
In step 618, the electronic device detects a click operation for the text recognition floating window.
In step 619, the electronic device performs text recognition on the display content.
In step 620, the electronic device detects a click operation for the picture identification floating window.
In step 621, the electronic device performs picture recognition on the display content.
In step 622, the electronic device detects a click operation on the peripheral region and displays a screen capture frame.
Here, the peripheral region may refer to a region other than the word segmentation result and the intention recognition result.
In step 623, the text recognition window is pulled down to the bottom of the current interface.
Here, when a click operation for the peripheral region is detected, the word segmentation result is pulled down to the bottom edge of the screen, and a regional screen capture frame appears on the screen.
In step 624, the screen capture frame is adjusted.
Here, when the content within the screen capture frame is inaccurate or not desired by the user, the user may input an adjustment operation to the screen capture frame and adjust the size and/or position of the screen capture frame based on the adjustment operation. Wherein the adjusting operation comprises: drag operations, and the like. For example, the size of the screen capture frame can be increased or decreased by pulling the border of the screen capture frame, the position of the screen capture frame can be adjusted by dragging the screen capture frame, and the like.
In step 625, the electronic device generates a new screenshot content.
In step 626, the electronic device presents the new screenshot content.
In step 627, the electronic device invokes the optical character recognition interface.
In step 628, the electronic device presents the segmentation result and the intention recognition result.
According to the technical scheme, by providing the text recognition control and the picture recognition control, text recognition and picture recognition can be switched quickly. Text capture is preferentially used for text recognition, which saves network traffic compared with relying entirely on picture capture. By clicking on the word segmentation result, the clicked content can be quickly identified on the word segmentation page and the corresponding intention recognition result displayed, removing one click-to-search step and giving the user the desired result more quickly. When text recognition cannot capture data, picture capture technology is used for recognition; with picture capture as a fallback scheme, the content recognition scheme can be more stable.
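The overall fallback chain summarized above (text capture first, then picture recognition, then a user-framed screenshot) can be sketched as follows; the recognizer callables are illustrative stand-ins, not the disclosed interfaces:

```python
# Sketch of the overall recognition pipeline: try text capture first (cheap,
# saves network traffic), fall back to picture recognition, and finally to a
# user-adjustable screenshot. The recognizer callables are illustrative.

def recognize(content, text_ocr, picture_ocr, screenshot_ocr):
    text = text_ocr(content)
    if text:
        # Preferred path: text was captured directly.
        return ("text", text)
    picture = picture_ocr(content)
    if picture:
        # First fallback: picture recognition of the display content.
        return ("picture", picture)
    # Last resort: a user-framed screenshot followed by OCR.
    return ("screenshot", screenshot_ocr(content))
```

Each stage only runs when the previous one produced nothing, which is what makes the later stages a stabilizing fallback rather than the default path.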
Fig. 7 is a block diagram illustrating a content recognition apparatus according to an example embodiment. As shown in fig. 7, the content recognition apparatus 70 is applied to an electronic device, and mainly includes:
the first display module 71 is configured to display a text recognition floating window and/or a picture recognition floating window on a current interface when a first trigger operation for display content on the current interface is detected;
the first identification module 72 is configured to perform text identification on the display content based on a second trigger operation received by the text identification floating window when the text identification floating window is displayed;
and the second identification module 73 is configured to perform picture identification on the display content based on a third trigger operation received by the picture identification floating window under the condition that text identification fails or the text identification floating window is not displayed.
In some embodiments, the apparatus 70 further comprises:
the second display module is configured to display a text recognition control on the current interface and display the picture recognition control in a first set range of the text recognition control when the text recognition floating window receives the second trigger operation;
the third display module is configured to display the picture identification control on the current interface and display the text identification control in a second set range of the picture identification control when the picture identification floating window receives the third trigger operation;
the text recognition control and the picture recognition control are used for switching recognition modes of the display content.
In some embodiments, the second identifying module 73 is further configured to:
when the picture identification floating window receives the third trigger operation, displaying the display content on the current interface in a picture mode, and detecting a fourth trigger operation acting on the text identification control;
and when the fourth trigger operation is detected, recognizing text content from the display content by using an optical character recognition technology.
In some embodiments, the apparatus 70 further comprises:
the sending module is configured to send the text content identified from the display content to a server, wherein the server is used for performing word segmentation processing and intention analysis on the text content;
and the receiving module is configured to receive the word segmentation result and the intention recognition result returned by the server and display the word segmentation result and the intention recognition result on the current interface.
In some embodiments, the word segmentation result includes at least one character, and the apparatus 70 further includes:
a third display module configured to display at least one of the characters on the current interface;
an updating module configured to update the intention recognition result according to the detected selected operation for at least one of the characters.
In some embodiments, the first display module 71 is further configured to:
and when the first trigger operation is detected, content grabbing is carried out on the area where the display content is located, and a text recognition floating window and/or a picture recognition floating window are/is displayed on the current interface according to a grabbing result.
In some embodiments, the first display module 71 is further configured to:
if the text content is captured, displaying the text recognition floating window on the current interface;
if the picture content is captured, displaying the picture identification floating window on the current interface;
and if the text content and the picture content are not captured, displaying the text recognition floating window and the picture recognition floating window on the current interface.
In some embodiments, the apparatus 70 further comprises:
the fourth display module is configured to display a screen capture frame on the current interface when a fifth trigger operation for the current interface is detected;
and the third identification module is configured to perform text identification on the screen capture content positioned in the screen capture frame.
In some embodiments, the apparatus 70 further comprises:
an adjustment module configured to adjust the size and/or position of the screen capture frame based on a detected adjustment operation;
the third identification module is further configured to:
and performing text recognition on the screenshot content in the adjusted screenshot box.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 8 is a block diagram illustrating a content recognition apparatus 700 according to an exemplary embodiment. For example, the apparatus 700 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 8, apparatus 700 may include one or more of the following components: a processing component 702, a memory 704, a power component 706, a multimedia component 708, an audio component 710, an input/output (I/O) interface 712, a sensor component 714, and a communication component 716.
The processing component 702 generally controls overall operation of the device 700, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 702 may include one or more processors 720 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 702 may include one or more modules that facilitate interaction between the processing component 702 and other components. For example, the processing component 702 may include a multimedia module to facilitate interaction between the multimedia component 708 and the processing component 702.
The memory 704 is configured to store various types of data to support operations at the apparatus 700. Examples of such data include instructions for any application or method operating on device 700, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 704 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power component 706 provides power to the various components of the device 700. The power components 706 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the apparatus 700.
The multimedia component 708 includes a display screen that provides an output interface between the device 700 and a user. In some embodiments, the display screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the display screen includes a touch panel, the display screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 708 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the device 700 is in an operation mode, such as a photographing mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 710 is configured to output and/or input audio signals. For example, audio component 710 includes a Microphone (MIC) configured to receive external audio signals when apparatus 700 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signal may further be stored in the memory 704 or transmitted via the communication component 716. In some embodiments, audio component 710 also includes a speaker for outputting audio signals.
The I/O interface 712 provides an interface between the processing component 702 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 714 includes one or more sensors for providing status assessment of various aspects of the apparatus 700. For example, sensor assembly 714 may detect an open/closed state of device 700, the relative positioning of components, such as a display and keypad of device 700, sensor assembly 714 may also detect a change in position of device 700 or a component of device 700, the presence or absence of user contact with device 700, orientation or acceleration/deceleration of device 700, and a change in temperature of device 700. The sensor assembly 714 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 714 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 714 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 716 is configured to facilitate wired or wireless communication between the apparatus 700 and other devices. The apparatus 700 may access a wireless network based on a communication standard, such as Wi-Fi, 2G, or 3G, or a combination thereof. In an exemplary embodiment, the communication component 716 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 716 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 700 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer readable storage medium comprising instructions, such as the memory 704 comprising instructions, executable by the processor 720 of the device 700 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
A non-transitory computer readable storage medium in which instructions, when executed by a processor of a content recognition apparatus, enable the content recognition apparatus to perform a content recognition method, the method applied to an electronic device, comprising:
when a first trigger operation aiming at display content on a current interface is detected, displaying a text recognition floating window and/or a picture recognition floating window on the current interface;
when the text recognition floating window is displayed, performing text recognition on the display content based on a second trigger operation received by the text recognition floating window;
and under the condition that the text recognition fails or the text recognition floating window is not displayed, carrying out picture recognition on the display content based on a third trigger operation received by the picture recognition floating window.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (20)

1. A content identification method, applied to an electronic device, the method comprising:
when a first trigger operation on display content of a current interface is detected, displaying a text recognition floating window and/or a picture recognition floating window on the current interface;
when the text recognition floating window is displayed, performing text recognition on the display content based on a second trigger operation received by the text recognition floating window;
and when the text recognition fails or the text recognition floating window is not displayed, performing picture recognition on the display content based on a third trigger operation received by the picture recognition floating window.
2. The method of claim 1, further comprising:
when the text recognition floating window receives the second trigger operation, displaying a text recognition control on the current interface, and displaying a picture recognition control within a first set range of the text recognition control;
when the picture recognition floating window receives the third trigger operation, displaying the picture recognition control on the current interface, and displaying the text recognition control within a second set range of the picture recognition control;
wherein the text recognition control and the picture recognition control are used to switch the recognition mode of the display content.
3. The method according to claim 1, wherein the performing picture recognition on the display content based on the third trigger operation received by the picture recognition floating window comprises:
when the picture recognition floating window receives the third trigger operation, displaying the display content on the current interface in picture form, and detecting a fourth trigger operation acting on a text recognition control;
and when the fourth trigger operation is detected, recognizing text content from the display content using optical character recognition.
4. The method of claim 1, further comprising:
sending text content recognized from the display content to a server, wherein the server is configured to perform word segmentation and intention analysis on the text content;
and receiving a word segmentation result and an intention recognition result returned by the server, and displaying the word segmentation result and the intention recognition result on the current interface.
5. The method of claim 4, wherein the word segmentation result comprises at least one character, and the method further comprises:
displaying the at least one character on the current interface;
and updating the intention recognition result according to a detected selection operation on the at least one character.
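Claims 4 and 5 describe sending recognized text to a server for word segmentation and intention analysis, then updating the intention as the user selects segmented characters. Below is a minimal client-side sketch of that update loop; the class name, the token representation, and the toy intent rule standing in for the server-side analysis are all assumptions for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class SegmentationView:
    """Sketch of claims 4-5: hold a server's word-segmentation result,
    track which tokens the user has selected, and recompute the intention
    whenever the selection changes. The intent rule below is a toy
    stand-in for the server-side intention analysis."""
    tokens: list
    selected: set = field(default_factory=set)

    def toggle(self, index: int):
        # claim 5: a selection operation on a character updates the intent
        self.selected.symmetric_difference_update({index})
        return self.intent()

    def intent(self):
        text = "".join(self.tokens[i] for i in sorted(self.selected))
        # toy rule: any digits in the selection look like a phone number
        if any(ch.isdigit() for ch in text):
            return "dial"
        return "search" if text else "none"

view = SegmentationView(tokens=["call", "10086", "now"])
print(view.toggle(1))  # select "10086": digits present
print(view.toggle(1))  # deselect: nothing selected
```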
6. The method according to claim 1, wherein the displaying a text recognition floating window and/or a picture recognition floating window on the current interface when a first trigger operation on display content of the current interface is detected comprises:
when the first trigger operation is detected, performing content capture on an area where the display content is located, and displaying the text recognition floating window and/or the picture recognition floating window on the current interface according to a capture result.
7. The method according to claim 6, wherein the displaying the text recognition floating window and/or the picture recognition floating window on the current interface according to the capture result comprises:
if text content is captured, displaying the text recognition floating window on the current interface;
if picture content is captured, displaying the picture recognition floating window on the current interface;
and if neither text content nor picture content is captured, displaying both the text recognition floating window and the picture recognition floating window on the current interface.
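The mapping in claim 7 from the capture result to the floating window(s) shown can be expressed as a small pure function. The case where both text and picture content are captured is not specified by the claim; this illustrative sketch arbitrarily gives the text window priority.

```python
def windows_to_show(got_text: bool, got_picture: bool) -> set:
    """Claim 7 mapping from the content-capture result to the set of
    floating windows to display. Window names are illustrative.
    When both kinds of content are captured, the claim is silent;
    this sketch prefers the text window."""
    if got_text:
        return {"text"}
    if got_picture:
        return {"picture"}
    # neither text nor picture captured: offer both windows
    return {"text", "picture"}

print(windows_to_show(True, False))
print(windows_to_show(False, True))
print(windows_to_show(False, False))
```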
8. The method according to any one of claims 1 to 7, further comprising:
when a fifth trigger operation on the current interface is detected, displaying a screenshot box on the current interface;
and performing text recognition on screenshot content located within the screenshot box.
9. The method of claim 8, further comprising:
adjusting the size and/or position of the screenshot box based on a detected adjustment operation;
wherein the performing text recognition on the screenshot content located within the screenshot box comprises:
performing text recognition on the screenshot content within the adjusted screenshot box.
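Claims 8 and 9 describe a screenshot box whose size and position the user adjusts before text recognition runs on the boxed region. A minimal sketch of the adjustment step, clamping the box to the screen bounds; the (x, y, w, h) box representation and function name are assumptions, not from the patent.

```python
def clamp_box(box, screen_w, screen_h):
    """Keep an adjusted screenshot box (x, y, w, h) fully on screen,
    so that text recognition (claim 9) runs only on a valid cropped
    region. Representation and names are illustrative."""
    x, y, w, h = box
    w = max(1, min(w, screen_w))          # box cannot exceed the screen
    h = max(1, min(h, screen_h))
    x = max(0, min(x, screen_w - w))      # shift the box back on screen
    y = max(0, min(y, screen_h - h))
    return (x, y, w, h)

# A box dragged partly off the left edge of a 1080x2340 screen:
print(clamp_box((-10, 5, 200, 100), 1080, 2340))  # (0, 5, 200, 100)
```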
10. A content identification apparatus, applied to an electronic device, the apparatus comprising:
a first display module configured to display a text recognition floating window and/or a picture recognition floating window on a current interface when a first trigger operation on display content of the current interface is detected;
a first identification module configured to perform text recognition on the display content based on a second trigger operation received by the text recognition floating window when the text recognition floating window is displayed;
and a second identification module configured to perform picture recognition on the display content based on a third trigger operation received by the picture recognition floating window when the text recognition fails or the text recognition floating window is not displayed.
11. The apparatus of claim 10, further comprising:
a second display module configured to display a text recognition control on the current interface and display a picture recognition control within a first set range of the text recognition control when the text recognition floating window receives the second trigger operation;
a third display module configured to display the picture recognition control on the current interface and display the text recognition control within a second set range of the picture recognition control when the picture recognition floating window receives the third trigger operation;
wherein the text recognition control and the picture recognition control are used to switch the recognition mode of the display content.
12. The apparatus of claim 10, wherein the second identification module is further configured to:
display the display content on the current interface in picture form, and detect a fourth trigger operation acting on a text recognition control, when the picture recognition floating window receives the third trigger operation;
and recognize text content from the display content using optical character recognition when the fourth trigger operation is detected.
13. The apparatus of claim 10, further comprising:
a sending module configured to send text content recognized from the display content to a server, wherein the server is configured to perform word segmentation and intention analysis on the text content;
and a receiving module configured to receive a word segmentation result and an intention recognition result returned by the server, and display the word segmentation result and the intention recognition result on the current interface.
14. The apparatus of claim 13, wherein the word segmentation result comprises at least one character, the apparatus further comprising:
a third display module configured to display the at least one character on the current interface;
and an updating module configured to update the intention recognition result according to a detected selection operation on the at least one character.
15. The apparatus of claim 10, wherein the first display module is further configured to:
perform content capture on an area where the display content is located when the first trigger operation is detected, and display the text recognition floating window and/or the picture recognition floating window on the current interface according to a capture result.
16. The apparatus of claim 15, wherein the first display module is further configured to:
display the text recognition floating window on the current interface if text content is captured;
display the picture recognition floating window on the current interface if picture content is captured;
and display both the text recognition floating window and the picture recognition floating window on the current interface if neither text content nor picture content is captured.
17. The apparatus of any one of claims 10 to 16, further comprising:
a fourth display module configured to display a screenshot box on the current interface when a fifth trigger operation on the current interface is detected;
and a third identification module configured to perform text recognition on screenshot content located within the screenshot box.
18. The apparatus of claim 17, further comprising:
an adjustment module configured to adjust the size and/or position of the screenshot box based on a detected adjustment operation;
wherein the third identification module is further configured to:
perform text recognition on the screenshot content within the adjusted screenshot box.
19. A content identification apparatus, comprising:
a processor; and
a memory configured to store instructions executable by the processor;
wherein the processor is configured to perform the steps of the method of any one of claims 1 to 9.
20. A non-transitory computer-readable storage medium having stored thereon instructions that, when executed by a processor of a content identification apparatus, enable the content identification apparatus to perform the steps of the method of any one of claims 1 to 9.
CN202011597027.5A 2020-12-28 2020-12-28 Content identification method, device and storage medium Pending CN112596656A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011597027.5A CN112596656A (en) 2020-12-28 2020-12-28 Content identification method, device and storage medium

Publications (1)

Publication Number Publication Date
CN112596656A (en) 2021-04-02

Family

ID=75203479

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011597027.5A Pending CN112596656A (en) 2020-12-28 2020-12-28 Content identification method, device and storage medium

Country Status (1)

Country Link
CN (1) CN112596656A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114445818A * 2022-01-29 2022-05-06 Beijing Baidu Netcom Science and Technology Co., Ltd. Article identification method, article identification device, electronic device and computer-readable storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106484266A * 2016-10-18 2017-03-08 Beijing Smartisan Digital Technology Co., Ltd. Text processing method and device
CN108958576A * 2018-06-08 2018-12-07 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Content identification method, device and mobile terminal
CN109002759A * 2018-06-07 2018-12-14 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Text recognition method, device, mobile terminal and storage medium
CN109085982A * 2018-06-08 2018-12-25 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Content identification method, device and mobile terminal
CN109933275A * 2019-02-12 2019-06-25 Nubia Technology Co., Ltd. Screen recognition method, terminal and computer-readable storage medium
US20200210048A1 * 2018-12-28 2020-07-02 Beijing Xiaomi Mobile Software Co., Ltd. Multimedia resource management method and apparatus, and storage medium
CN112115947A * 2020-09-27 2020-12-22 Beijing Xiaomi Mobile Software Co., Ltd. Text processing method and device, electronic device and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination