CN109739416B - Text extraction method and device - Google Patents


Info

Publication number
CN109739416B
CN109739416B (application CN201810355599.9A)
Authority
CN
China
Prior art keywords
text
user
extracting
target
control
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810355599.9A
Other languages
Chinese (zh)
Other versions
CN109739416A (en)
Inventor
蒋立轩
李梓淳
曾昱景
陈子扬
Current Assignee
Beijing ByteDance Network Technology Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd
Priority to CN201810355599.9A
Publication of CN109739416A
Application granted
Publication of CN109739416B
Active legal status
Anticipated expiration

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The embodiments of the present application disclose a text extraction method and apparatus. With the text extraction method provided by the application, a user can extract the text of multiple controls in one window, or of multiple controls across multiple windows, in a single operation. The method therefore reduces the repeated operations a user must perform to extract text from the contents of multiple windows or of multiple controls in the same window, simplifies the user's operation process, and improves extraction efficiency.

Description

Text extraction method and device
Technical Field
The present application relates to the field of text recognition technologies, and in particular, to a text extraction method and apparatus.
Background
When a user needs to enter text information currently displayed on the screen of an electronic device into a target interface, relying on short-term memory or copying the text by hand costs considerable time and effort and is prone to error.
To enter the text information currently displayed on the screen of an electronic device into a target interface quickly, text extraction software has been developed in the industry. Such software can recognize the characters in an area selected by the user and input them into the target interface through copy-and-paste operations.
However, when a user extracts text from the screen of the electronic device with such software, a text extraction operation can be performed on the content of only one control of one window at a time. If text must be extracted from multiple windows, or from multiple controls of the same window, the user has to switch repeatedly among the windows, or repeat the operation many times among the controls of one window. The extraction operation is therefore cumbersome, and the extraction efficiency is low.
For example, suppose a "WeChat" chat interface window and a "QQ" chat interface window are displayed on the electronic device, and the user wants to enter certain text information from both windows into "Baidu Dictionary". The user then has to perform a text extraction operation on the "WeChat" window and the "QQ" window separately. Thus, with existing text extraction software, extracting text information from multiple windows, or from multiple controls of the same window, requires many repeated operations.
Disclosure of Invention
In view of this, the present application provides a text extraction method and apparatus that reduce the repeated operations a user must perform to extract text from the contents of multiple windows or of multiple controls in the same window, simplify the user's operation process, and improve extraction efficiency.
To solve this technical problem, the present application adopts the following technical solutions:
a text extraction method, comprising:
responding to a text information extraction triggering instruction input by a user, and identifying a control capable of extracting a text; the control capable of extracting the text is positioned in a window displayed on a screen;
responding to a selection operation of a target control input by a user, and extracting text information in the target control; the target control comprises at least one control capable of extracting text.
Optionally, the extracting of the text information in the target control in response to a selection operation of a target control input by a user specifically includes:
extracting the text information in the selected target controls in response to the selection operation of the target controls input by the user and a text extraction confirmation instruction input by the user.
Optionally, the identifying of a control capable of extracting text displayed in a window on a screen specifically includes:
drawing a border around the periphery of the control capable of extracting text.
Optionally, the method further comprises:
loading a semi-transparent mark layer for covering the screen in response to a text information extraction triggering instruction input by a user.
Optionally, after the semi-transparent mark layer for covering the screen is loaded, the method further includes:
determining a selected target area in response to a selection operation of the user on the screen area based on the semi-transparent mark layer;
identifying and extracting the text in the target area through optical character recognition.
Optionally, the determining of the selected target area on the screen in response to the selection operation of the user on the screen area based on the semi-transparent mark layer specifically includes:
determining a plurality of selected target areas on the screen in response to a plurality of selection operations of the user on the screen area based on the semi-transparent mark layer and a text extraction confirmation instruction input by the user.
Optionally, the semi-transparent mark layer has a hollowed-out structure, and the hollowed-out portions are aligned with the controls capable of extracting text;
the identifying of the control capable of extracting text displayed in a window on the screen specifically includes:
highlighting the control capable of extracting text through the hollowed-out portion of the semi-transparent mark layer.
Optionally, the method further comprises:
inputting the extracted text information into a target interface, so that the text information can be edited in the target interface.
A text extraction apparatus comprising:
the identification unit is used for responding to a text information extraction triggering instruction input by a user and identifying a control capable of extracting a text; the control capable of extracting the text is positioned in a window displayed on a screen;
the extraction unit is used for responding to the selection operation of a target control input by a user and extracting the text information in the target control; the target control comprises at least one control capable of extracting text.
Optionally, the apparatus further comprises:
and the loading unit is configured to load a semi-transparent mark layer for covering the screen in response to a text information extraction triggering instruction input by a user.
Optionally, the apparatus further comprises:
a determining unit, configured to determine a selected target area, after the semi-transparent mark layer for covering the screen is loaded, in response to a selection operation of the user on the screen area based on the semi-transparent mark layer;
and a recognition and extraction unit, configured to recognize and extract the text in the target area through optical character recognition.
Compared with the prior art, the present application has the following beneficial effects:
In the text extraction method provided by the embodiments of the present application, after a user inputs a text information extraction triggering instruction, the electronic device identifies the controls, displayed in windows on the screen, from which text can be extracted. The identified controls may be multiple controls in one window or multiple controls in multiple windows, so the user can select, from the identified controls, all the target controls whose text information needs to be extracted. Through this method, a user can extract the text of multiple controls in one window, or of multiple controls in multiple windows, in a single operation. This reduces repeated operations, simplifies the user's operation process, and improves extraction efficiency.
Drawings
To make the detailed description of the present application clearly understood, the drawings used in the detailed description are briefly introduced below. These drawings illustrate only some of the embodiments of the application.
Fig. 1 is a schematic diagram of a screen display interface corresponding to an application scenario provided in an embodiment of the present application;
Fig. 2 is a schematic flowchart of a text extraction method provided in an embodiment of the present application;
Fig. 3 is a schematic diagram of the screen display interface after S201 is executed in an embodiment of the present application;
Fig. 4 is a schematic diagram of a screen display interface corresponding to another application scenario provided in an embodiment of the present application;
Fig. 5 is a schematic flowchart of another text extraction method provided in an embodiment of the present application;
Fig. 6 is a schematic diagram of the screen display interface after S501 is executed in an embodiment of the present application;
Fig. 7 is a schematic diagram of the screen display interface after S503 is executed in an embodiment of the present application;
Fig. 8 is a schematic diagram of a screen display interface corresponding to another application scenario provided in an embodiment of the present application;
Fig. 9 is a schematic flowchart of another text extraction method provided in an embodiment of the present application;
Fig. 10 is a schematic diagram of the screen display interface after S901 is executed in an embodiment of the present application;
Fig. 11 is a schematic diagram of the screen display interface after S902 is executed in an embodiment of the present application;
Fig. 12 is a schematic structural diagram of a text extraction apparatus provided in an embodiment of the present application.
Detailed Description
As noted in the Background section, when a user extracts text information from multiple windows, or from multiple controls in the same window, with existing text extraction software, many repeated operations are required. The extraction operation is therefore cumbersome, and the extraction efficiency is low.
To solve this technical problem, the present application provides a text extraction method. After a user inputs a text information extraction triggering instruction, the electronic device identifies the controls, displayed in windows on the screen, from which text can be extracted; the identified controls may be multiple controls in one window or multiple controls in multiple windows. The user can therefore select, at one time and as needed, all the target controls whose text information is to be extracted. In this way a user can extract the text of multiple controls in one window, or of multiple controls in multiple windows, in a single operation, which reduces repeated operations, simplifies the operation process, and improves extraction efficiency.
The following describes in detail a specific embodiment of the text extraction method provided in the present application with reference to the drawings.
It should be noted that the text extraction method provided in the embodiment of the present application is applicable to the following scenario: as shown in Fig. 1, a screen 10 of the electronic device is tiled with a plurality of windows 11-13, which together contain the controls A-F, two per window, from which text can be extracted. In addition, a text extraction trigger key 14 is provided on the screen 10.
referring to fig. 2, a text extraction method provided in the embodiment of the present application includes the following steps:
s201: and responding to a text information extraction triggering instruction input by a user, and identifying a control capable of extracting the text, wherein the control capable of extracting the text is positioned in a window displayed on a screen.
As a specific example of the present application, when the user presses the text extraction trigger key 14, the electronic device receives a text information extraction trigger instruction input by the user, and identifies a control capable of extracting text in a window displayed on the screen in response to the text information extraction trigger instruction input by the user.
As another specific example, in order to make it easier for a user to recognize a control that can extract a text, the electronic device identifies the control that can extract the text, which may specifically be:
drawing a border around the periphery of the control from which text can be extracted.
Specifically, for the screen shown in fig. 1, after S201 is executed, the screen display interface is shown in fig. 3, where in fig. 3, the borders 30 are drawn on the peripheries of the controls a-F.
It should be noted that, in the embodiment of the present application, an implementation manner of identifying a control from which text can be extracted is not limited to the above-described manner of drawing a border. As an extended embodiment of the present application, a preset mark may be added around the control capable of extracting the text, or the control capable of extracting the text may be highlighted in a display manner different from the control incapable of extracting the text.
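The identification step of S201 can be pictured as a walk over the window's control tree. The sketch below is a minimal, hypothetical model: the Control class, its fields, and the highlighted flag standing in for border drawing are all illustrative assumptions, not the patent's actual implementation.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Control:
    name: str
    text: str = ""
    extractable: bool = False      # whether text can be extracted from this control
    highlighted: bool = False      # identification marker (stands in for a drawn border)
    children: List["Control"] = field(default_factory=list)

def identify_extractable(root: Control) -> List[Control]:
    """Walk the control tree and mark every control from which text
    can be extracted, mimicking the border-drawing step of S201."""
    found = []
    stack = [root]
    while stack:
        node = stack.pop()
        if node.extractable:
            node.highlighted = True    # stands in for drawing a border around the control
            found.append(node)
        stack.extend(node.children)
    return found
```

A real system would instead traverse the platform's view hierarchy (hence the framework-layer modification mentioned below); the traversal-and-mark shape, however, is the same.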
It should be noted that, at the system level, in order for the electronic device to automatically draw a border around the icons of controls capable of extracting text after receiving the text information extraction triggering instruction input by the user, the interface of the framework bottom layer needs to be modified.
S202: responding to the selection operation of a target control input by a user, and extracting text information in the target control; the target control comprises at least one control capable of extracting text.
After the controls capable of extracting text on the screen have been identified, the user can select one or more of them as required. After receiving the user's selection operation on the target controls, the electronic device extracts the text information in the target controls in response to that selection operation.
It should be noted that, in the embodiment of the present application, a user may select a target control in a mouse click manner. When the screen is a touch screen, a user can select the target control in a touch click mode.
In the embodiment of the present application, there may be one or more target controls. When a plurality of target controls are provided, a user can select the plurality of target controls in a multi-selection mode.
To allow a plurality of target controls to be selected in multi-selection mode before their text information is extracted, a text extraction confirmation key may further be disposed on the screen interface shown in Fig. 1 or Fig. 3. The user inputs a text extraction confirmation instruction through this key, and the electronic device starts the text extraction operation after receiving it.
Therefore, after the user selects the plurality of target controls, the user can press the text extraction confirmation key to trigger the electronic equipment to perform text extraction operation. Thus, when there are a plurality of target controls, S202 may specifically be:
extracting the text information in the selected target controls in response to the selection operation on the target controls input by the user and the text extraction confirmation instruction input by the user.
It should be noted that, in the embodiment of the present application, the user inputs the text extraction confirmation instruction, which is not limited to the key confirmation manner shown in the embodiment of the present application, and may also be a preset touch action on the screen.
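The select-then-confirm behavior of S202 can be sketched as a small session object. Everything here, including the class name and the dict fields, is a hypothetical illustration of the described flow, not an actual implementation:

```python
class TextExtractionSession:
    """Accumulates the target controls a user selects in multi-selection
    mode; the text is extracted only when the confirmation instruction
    (e.g. pressing the confirmation key) arrives."""

    def __init__(self):
        self._selected = []

    def select(self, control):
        # control is a dict such as {"name": "favorite", "text": "...", "extractable": True};
        # controls that cannot extract text, or duplicates, are ignored
        if control.get("extractable") and control not in self._selected:
            self._selected.append(control)

    def confirm(self):
        # the single confirmation extracts the text of every selected control
        return [c["text"] for c in self._selected]
```

Note that however many controls were selected, one confirm() call yields all the text, which is the "one operation" claim of the method.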
S203: and inputting the extracted text information into the target interface so as to edit the text information in the target interface.
In order to edit the extracted text information, after the text information is extracted, the embodiment of the application can also input the extracted text information into a target interface so as to edit the text information in the target interface.
When text information is extracted from a plurality of target controls, the extracted text comprises a plurality of pieces; to avoid confusion among them, the pieces of text information are listed one by one in the target interface.
It should be noted that, when text information extracted from a plurality of target controls is input into the target interface, the effect of integrating text across multiple windows and multiple controls is achieved.
In addition, in the embodiment of the application, the number of the target interfaces can be one or more, so that the user can display the extracted text information in one or more target interfaces according to the needs of the user.
The above is a specific implementation of the text extraction method provided in the embodiment of the present application. In this implementation, after the user inputs the text information extraction triggering instruction, the electronic device identifies the controls, displayed in windows on the screen, from which text can be extracted; these may be multiple controls in one window or multiple controls in multiple windows, so the user can select, at one time, all the target controls whose text information needs to be extracted. The user can thus extract the text of multiple controls in one window, or of multiple controls in multiple windows, in a single operation, which reduces repeated operations, simplifies the operation process, and improves extraction efficiency.
It should be noted that the text extraction method provided by the embodiment of the present application may also be applied to a large screen connected to a mobile terminal such as a mobile phone, where the user operates the mobile terminal through operations on the large screen.
To make the specific implementation of the text extraction method in the foregoing embodiment clearly understood, it is described below with reference to an example application scenario.
Assume the application scenario is as follows: as shown in Fig. 4, a "WeChat" application window 41, a "QQ" application window 42, and a "263" application window 43 are tiled on the screen, and a text extraction trigger key 44 and a text extraction confirmation key 45 are provided on the screen. Controls such as "wallet", "favorite", "album", "card pack", and "emoticon" are provided in the "WeChat" application window 41, and controls such as "friend dynamic", "game", "watch", "read", and "music" are provided in the "QQ" application window 42. The user wants to extract the text information in the controls "favorite" and "read" and enter the extracted text information into the "263" application window 43.
Based on the application scenario, as shown in fig. 5, the text extraction method provided in the embodiment of the present application includes the following steps:
s501: in response to a pressing operation of the text extraction trigger button 44 by the user, control peripheral borders from which text can be extracted are drawn in the "WeChat" application window 41, "QQ" application window 42, and "263" application window 43 on the screen.
As shown in Fig. 6, when the user presses the text extraction trigger key 44 on the screen, borders are drawn around the control "favorite" contained in the "WeChat" application window 41 and the controls "friend dynamic", "watch point", and "read" contained in the "QQ" application window 42.
S502: in response to the click selection operation of the controls "favorite", "friend dynamic", and "read" and the pressing operation of the text extraction confirm button 45 by the user, text information in "favorite", "friend dynamic", and "read" is extracted.
In Fig. 6, after the user click-selects the controls "favorite", "friend dynamic", and "read" in multi-selection mode and then presses the text extraction confirmation key 45, the electronic device extracts the text information in those controls in response to the click selections and the key press.
To help the user distinguish selected controls from unselected ones, after the user selects a control its display state changes to a selected state, for example a highlighted state.
S503: the extracted text information is input to the "263" application window 43.
Suppose the text information extracted from "favorite", "friend dynamic", and "read" is AAAAA, BBBBB, and XXXXX respectively. The electronic device lists the extracted texts AAAAA, BBBBB, and XXXXX line by line in the "263" application window 43, as shown in Fig. 7.
Further, non-editable content may be displayed on the screen, for example pictures or other non-editable files such as PDF files. To extract text information from such non-editable display content, the embodiment of the present application further provides another implementation of the text extraction method.
It should be noted that this other implementation of the text extraction method provided in the embodiment of the present application may be applied to the following specific scenario: as shown in Fig. 8, a window 81 and non-editable display content 82 are tiled on the screen 80, and a plurality of controls are arranged in the window 81. In addition, a text extraction trigger key 83 is provided on the screen 80.
Referring to fig. 9, another implementation manner of the text extraction method provided in the embodiment of the present application includes the following steps:
s901: responding to a text information extraction instruction input by a user, loading a semitransparent marking layer for covering a screen, and identifying a control capable of extracting a text; the control capable of extracting the text is positioned in a window displayed on a screen.
As shown in fig. 10, when the user presses the text extraction trigger key 83 on the screen 80, the electronic device receives a text information extraction trigger instruction input by the user, and in response to the text information extraction trigger instruction input by the user, loads a semi-transparent mark layer 101 for covering the screen, and identifies a control capable of extracting text in a window displayed on the screen.
It should be noted that the mark layer 101 is semi-transparent, for example a semi-transparent mask, and is located at the upper layer; the user can still clearly read the information displayed on the screen 80 through it, and its color can be set flexibly according to actual requirements. The semi-transparent mark layer 101 makes it convenient for the user to select the range of the desired text information and avoids selecting unnecessary text.
It should be noted that, in a specific implementation manner of the text extraction method, a specific implementation manner of identifying a control capable of extracting a text displayed in a window on a screen may be the same as the specific implementation manner of the identification in S201.
In addition, in the embodiment of the present application, identification of the controls capable of extracting text may also be implemented by means of the loaded semi-transparent mark layer 101 itself: the loaded layer simply does not cover those controls. That is, the semi-transparent mark layer 101 is hollowed out above each control from which text can be extracted, and the controls are highlighted through the hollowed-out portions. This implementation presents the extractable controls to the user directly.
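The hollowed-out mark layer can be modeled as full-screen coverage minus the extractable controls' rectangles. A minimal sketch, under the assumption that rectangles are (left, top, right, bottom) tuples in screen coordinates:

```python
def overlay_covers(point, extractable_rects):
    """Return True if the semi-transparent mark layer covers this point.
    The layer covers the whole screen except the hollowed-out areas
    directly over the controls from which text can be extracted."""
    x, y = point
    for left, top, right, bottom in extractable_rects:
        if left <= x < right and top <= y < bottom:
            return False   # hollow portion: the control shows through, highlighted
    return True
```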
S902: and determining the selected target area in response to the selection operation of the screen area based on the semi-transparent mark layer by the user.
It should be noted that, in the embodiment of the present application, an event delivery mechanism at the bottom layer of the framework is modified, so that an operation event of a user at a semi-transparent mark layer can be delivered to a screen at a lower layer, thereby realizing selection of a target area.
Specifically, the user can select the target area on the semi-transparent mark layer 101 by a mouse drag operation; when the screen is a touch screen, the user can select the target area through the sliding track of a touch operation.
More specifically, the user may scribble a line on the semi-transparent mark layer 101, and the rectangular area covering the start point and the end point of the scribble is the selected target area. The scribble track may be a horizontal line, a vertical line, or a diagonal line.
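The rule that the target area is the rectangle covering the scribble's start and end points, regardless of the stroke's direction, reduces to a min/max computation. A hedged sketch, with points assumed to be (x, y) tuples:

```python
def target_area_from_scribble(start, end):
    """Return the (left, top, right, bottom) rectangle covering the start
    and end points of the user's scribble, whether the stroke is
    horizontal, vertical, or diagonal."""
    (x0, y0), (x1, y1) = start, end
    return (min(x0, x1), min(y0, y1), max(x0, x1), max(y0, y1))
```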
To distinguish the target area from non-target areas, in the embodiment of the present application the semi-transparent mark layer 101 over the target area is removed, so that the screen content beneath it is no longer obscured and is fully presented to the user. Fig. 11 shows the screen interface after the target area has been determined.
It should be noted that the user may select one target region at a time, or may select a plurality of target regions at a time. The user can select multiple target areas at one time in a multi-selection mode. As an example, in order to realize the selection of multiple target areas at a time, a text extraction confirmation key may be provided on the screen, through which the user inputs a text extraction confirmation instruction, and the electronic device triggers the text extraction operation after receiving the text extraction confirmation instruction.
In this way, in practical application, after the user has selected all the target areas, the user clicks the text extraction confirmation key, thereby triggering text extraction operation. Thus, when there are a plurality of target areas, S902 may specifically be:
determining a plurality of selected target areas on the screen in response to a plurality of selection operations of the user on the screen area based on the semi-transparent mark layer and a text extraction confirmation instruction input by the user.
S903: text within the target region is recognized and extracted by Optical Character Recognition (OCR).
It should be noted that, in the embodiment of the present application, a screenshot operation may be performed on the target area, and the text in the screenshot may then be recognized by optical character recognition.
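The screenshot-then-OCR step of S903 can be sketched with a pluggable recognizer. Here the screen is modeled as a list of equal-length strings standing in for pixel rows, and recognize stands in for a real OCR engine such as Tesseract; both are illustrative assumptions rather than the patent's implementation.

```python
def extract_text_from_areas(screen, areas, recognize):
    """For each selected target area, take a 'screenshot' (crop the
    region) and run the OCR engine on it, as in step S903."""
    texts = []
    for left, top, right, bottom in areas:
        # crop the target rectangle out of the screen, row by row
        crop = [row[left:right] for row in screen[top:bottom]]
        texts.append(recognize(crop))
    return texts
```

With a real OCR backend, recognize would rasterize the crop and call the engine; the per-area crop-and-recognize loop is the part the method itself specifies.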
S904: and inputting the extracted text information into a target interface so as to edit the text information in the target interface.
This step is the same as S203 in the foregoing embodiment and, for brevity, is not described in detail here.
It should be noted that, in the text information extraction method shown in Fig. 9, while the user is determining a target area through a selection operation on the screen area based on the semi-transparent mark layer, the identification previously added to the controls capable of extracting text disappears; once the user finishes the selection, that identification reappears. The two selection modes can be switched at any time according to the touch action or mouse action.
The foregoing is a specific implementation manner of the text extraction method provided in the embodiment of the present application, and based on the specific implementation manner, the embodiment of the present application further provides a text extraction device.
Referring to fig. 12, a text extraction apparatus provided in an embodiment of the present application includes:
the identification unit 121 is configured to identify a control capable of extracting a text in response to a text information extraction triggering instruction input by a user; the control capable of extracting the text is positioned in a window displayed on a screen;
the extracting unit 122 is configured to, in response to a selection operation of a target control input by a user, extract text information in the target control; the target control comprises at least one control capable of extracting text.
As an optional embodiment, the text extraction apparatus may further include:
and the loading unit 123 is configured to load a semi-transparent mark layer for covering the screen in response to the text information extraction triggering instruction input by the user.
As an optional embodiment, in order to implement extraction of non-editable text content, the text extraction apparatus may further include:
a determining unit 124, configured to determine, after loading a semi-transparent mark layer for covering a screen, a selected target area in response to a selection operation of a screen area by a user based on the semi-transparent mark layer;
and the recognition and extraction unit 125 is configured to recognize and extract the text in the target area through an optical character recognition technology.
The above apparatus embodiment corresponds to the embodiment of the text extraction method. For the specific implementation manner and the technical effects achieved, reference may be made to the description of the method embodiment, which is not repeated here.
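The units of fig. 12 can be illustrated as plain Python objects. This is a minimal sketch under the assumption that controls expose their text directly; the unit numerals follow the description above, while every other name is an illustrative assumption.

```python
class Control:
    """A window control; text is None when the control has no extractable text."""

    def __init__(self, text=None):
        self.text = text
        self.identified = False


class TextExtractionApparatus:
    def identify(self, controls):
        """Identification unit 121: mark the controls from which text can be extracted."""
        extractable = [c for c in controls if c.text is not None]
        for c in extractable:
            c.identified = True  # e.g. draw a frame around the control
        return extractable

    def extract(self, target_controls):
        """Extraction unit 122: collect the text of the user-selected target controls."""
        return [c.text for c in target_controls]


apparatus = TextExtractionApparatus()
controls = [Control("News headline"), Control(), Control("Caption")]
marked = apparatus.identify(controls)       # two controls carry text
texts = apparatus.extract(marked)           # the user selects both of them
```

The optional loading unit 123, determining unit 124, and recognition and extraction unit 125 would plug in alongside these two, following the region-selection flow sketched earlier.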
The foregoing is only a preferred embodiment of the present application. It should be noted that those skilled in the art can make several improvements and modifications without departing from the principle of the present application, and such improvements and modifications shall also fall within the protection scope of the present application.

Claims (14)

1. A text extraction method, comprising:
responding to a text information extraction triggering instruction input by a user, and identifying a control capable of extracting text; the control capable of extracting the text is located in a window displayed on a screen, and the identified control capable of extracting the text comprises more than one control in one window or more than one control in more than one window;
responding to a selection operation of a target control input by a user, and extracting text information in the target control; the target control comprises at least one control capable of extracting text.
2. The method according to claim 1, wherein the extracting text information in the target control in response to a selection operation of the target control input by a user specifically comprises:
and extracting the text information in the selected target controls in response to the selection operation of the target controls input by the user and the text extraction confirmation instruction input by the user.
3. The method according to claim 1 or 2, wherein the identifying a control that can extract text in a window displayed on a screen specifically includes:
and drawing a frame on the periphery of the control capable of extracting the text.
4. The method of claim 1, further comprising:
and loading a semi-transparent mark layer for covering the screen in response to a text information extraction triggering instruction input by a user.
5. The method of claim 4, wherein after loading the semi-transparent marking layer for covering the screen, further comprising:
responding to the selection operation of a user on the screen area based on the semi-transparent mark layer, and determining a selected target area;
and identifying and extracting the text in the target area by an optical character recognition technology.
6. The method according to claim 5, wherein the determining the selected target area on the screen in response to the user's selection operation of the screen area based on the semi-transparent marking layer comprises:
and determining a plurality of selected target areas on the screen in response to a plurality of selection operations of the screen area by the user based on the semi-transparent mark layer and a text extraction confirmation instruction input by the user.
7. The method according to any one of claims 4 to 6, wherein the semi-transparent mark layer has a hollow-out structure, and the hollow-out portion is opposite to the control capable of extracting the text;
the identifying of the control which is displayed in the window on the screen and can extract the text specifically comprises the following steps:
and highlighting the control capable of extracting the text at the hollow-out portion of the semi-transparent mark layer.
8. The method according to any one of claims 1-2, further comprising:
and inputting the extracted text information into a target interface so as to edit the text information in the target interface.
9. The method of claim 3, further comprising:
and inputting the extracted text information into a target interface so as to edit the text information in the target interface.
10. The method according to any one of claims 4-6, further comprising:
and inputting the extracted text information into a target interface so as to edit the text information in the target interface.
11. The method of claim 7, further comprising:
and inputting the extracted text information into a target interface so as to edit the text information in the target interface.
12. A text extraction device characterized by comprising:
the identification unit is configured to identify a control capable of extracting text in response to a text information extraction triggering instruction input by a user; the control capable of extracting the text is located in a window displayed on a screen, and the identified control capable of extracting the text comprises more than one control in one window or more than one control in more than one window;
the extraction unit is used for responding to the selection operation of a target control input by a user and extracting the text information in the target control; the target control comprises at least one control capable of extracting text.
13. The apparatus of claim 12, further comprising:
the loading unit is configured to load a semi-transparent mark layer for covering the screen in response to a text information extraction triggering instruction input by a user.
14. The apparatus of claim 13, further comprising:
the device comprises a determining unit, a judging unit and a display unit, wherein the determining unit is used for responding to the selection operation of a user on the screen area based on a semi-transparent mark layer after the semi-transparent mark layer for covering the screen is loaded, and determining the selected target area;
and the recognition and extraction unit is configured to recognize and extract the text in the target area through an optical character recognition technology.
CN201810355599.9A 2018-04-19 2018-04-19 Text extraction method and device Active CN109739416B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810355599.9A CN109739416B (en) 2018-04-19 2018-04-19 Text extraction method and device

Publications (2)

Publication Number Publication Date
CN109739416A CN109739416A (en) 2019-05-10
CN109739416B CN109739416B (en) 2020-07-03

Family

ID=66354292

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810355599.9A Active CN109739416B (en) 2018-04-19 2018-04-19 Text extraction method and device

Country Status (1)

Country Link
CN (1) CN109739416B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110196646A (en) * 2019-05-29 2019-09-03 维沃移动通信有限公司 A kind of data inputting method and mobile terminal
CN110297681B (en) * 2019-06-24 2024-06-11 腾讯科技(深圳)有限公司 Image processing method, device, terminal and storage medium
CN110795310B (en) * 2019-10-30 2024-03-26 维沃移动通信有限公司 Information reminding method and electronic equipment
CN111176540A (en) * 2019-11-27 2020-05-19 云知声智能科技股份有限公司 Character extraction method and device
CN112181255B (en) * 2020-10-12 2024-08-02 深圳市欢太科技有限公司 Control identification method and device, terminal equipment and storage medium
CN114564141A (en) * 2020-11-27 2022-05-31 华为技术有限公司 Text extraction method and device
CN112558954A (en) * 2020-12-29 2021-03-26 北京来也网络科技有限公司 Information extraction method, device, medium and electronic equipment combining RPA and AI
CN112817514B (en) * 2021-01-25 2022-06-24 维沃移动通信(杭州)有限公司 Content extraction method and device

Citations (3)

Publication number Priority date Publication date Assignee Title
CN105760153A (en) * 2016-01-27 2016-07-13 努比亚技术有限公司 Text extracting device and method
CN106168905A (en) * 2016-07-21 2016-11-30 北京奇虎科技有限公司 Text handling method, device and mobile terminal in a kind of mobile terminal
CN107943390A (en) * 2017-11-15 2018-04-20 维沃移动通信有限公司 A kind of word clone method and mobile terminal

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
US8494280B2 (en) * 2006-04-27 2013-07-23 Xerox Corporation Automated method for extracting highlighted regions in scanned source
CN101631341A (en) * 2009-08-12 2010-01-20 深圳华为通信技术有限公司 Information identification method and mobile terminal
US9582913B1 (en) * 2013-09-25 2017-02-28 A9.Com, Inc. Automated highlighting of identified text
CN106951893A (en) * 2017-05-08 2017-07-14 奇酷互联网络科技(深圳)有限公司 Text information acquisition methods, device and mobile terminal

Also Published As

Publication number Publication date
CN109739416A (en) 2019-05-10

Similar Documents

Publication Publication Date Title
CN109739416B (en) Text extraction method and device
JP6431120B2 (en) System and method for input assist control by sliding operation in portable terminal equipment
US9703462B2 (en) Display-independent recognition of graphical user interface control
CN104090648B (en) Data entry method and terminal
CN104360808A (en) Method and device for editing documents by using symbolic gesture instructions
CN106484266A (en) A kind of text handling method and device
CN103718149B (en) The processing method and touch-screen equipment of a kind of text
JP6427559B6 (en) Permanent synchronization system for handwriting input
US20120110459A1 (en) Automated adjustment of input configuration
KR20060114287A (en) Boxed and lined input panel
US8952897B2 (en) Single page soft input panels for larger character sets
CN101227669A (en) Mobile terminal with touch screen
CN109343757A (en) Operation control method of electronic equipment and electronic equipment
US20140123036A1 (en) Touch screen display process
KR20140039517A (en) Device and method implementing for particular function based on writing
CN104571866A (en) Screen capture method
CN113194024B (en) Information display method and device and electronic equipment
CN106598409B (en) Text copying method and device and intelligent terminal
WO2023045920A1 (en) Text display method and text display apparatus
JP3292752B2 (en) Gesture processing device and gesture processing method
JP2024064941A (en) Display method, apparatus, pen type electronic dictionary, electronic equipment, and recording medium
WO2015194814A1 (en) Method for simply inputting emoticon or sticker and apparatus for implementing method
JP7496699B2 (en) Display device
CN113099033A (en) Information sending method, information sending device and electronic equipment
JP2000099223A (en) Data processor with handwritten character input interface and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant