CN113239660A - Text display method and device and electronic equipment - Google Patents

Text display method and device and electronic equipment

Info

Publication number
CN113239660A
Authority
CN
China
Prior art keywords: text, text content, feature information, contents, content
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110476778.XA
Other languages
Chinese (zh)
Inventor
任静
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Hangzhou Co Ltd
Original Assignee
Vivo Mobile Communication Hangzhou Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Hangzhou Co Ltd filed Critical Vivo Mobile Communication Hangzhou Co Ltd
Priority to CN202110476778.XA priority Critical patent/CN113239660A/en
Publication of CN113239660A publication Critical patent/CN113239660A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/10Text processing
    • G06F40/103Formatting, i.e. changing of presentation of documents
    • G06F40/106Display of layout of documents; Previewing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/10Text processing
    • G06F40/166Editing, e.g. inserting or deleting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/768Arrangements for image or video recognition or understanding using pattern recognition or machine learning using context analysis, e.g. recognition aided by known co-occurring patterns
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/62Text, e.g. of license plates, overlay texts or captions on TV images
    • G06V20/635Overlay text, e.g. embedded captions in a TV program
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computing Systems (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Databases & Information Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a text display method and device and electronic equipment, and belongs to the technical field of electronics. The method can solve the problem that the overall process of acquiring text from a picture involves cumbersome steps and offers little flexibility. The method comprises: acquiring M text contents in M image regions, wherein each image region corresponds to one text content and M is a positive integer; determining a first text content and a second text content from the M text contents based on M pieces of feature information corresponding to the M text contents, wherein each text content corresponds to one piece of feature information; and displaying the second text content, the second text content being the text content, among the M text contents, other than the first text content. Each piece of feature information comprises at least one of: feature information of the image region where the corresponding text content is located, and feature information of the corresponding text content. The method and the device are applicable to scenarios in which text content in a picture is displayed.

Description

Text display method and device and electronic equipment
Technical Field
The application belongs to the technical field of electronics, and particularly relates to a text display method and device and electronic equipment.
Background
With the rapid development of electronic technology, the picture text recognition technology has been widely applied to electronic devices. Currently, a user can extract text in a picture through an intelligent picture recognition tool.
In the related art, when text is extracted from a picture, the picture is usually subjected to global recognition and text extraction so that all of the text in the picture is obtained. Where the overall image region of the picture includes both a text region and a picture region, the text in the picture region is usually not needed, so the text extracted in this way may contain text the user does not want. In that case, the user has to edit the extracted text to remove the unneeded content. For example, after text is extracted from a screenshot of a news page, yielding both the text of the news page and the text in its illustrations, the user needs to delete the illustration text from the result to remove the unneeded content.
Thus, the overall process of acquiring text from a picture involves cumbersome steps and offers little flexibility.
Disclosure of Invention
Embodiments of the application aim to provide a text display method that can solve the problem of cumbersome steps and low flexibility in the overall process of acquiring text from a picture.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides a text display method, the method comprising: acquiring M text contents in M image regions, wherein each image region corresponds to one text content and M is a positive integer; determining a first text content and a second text content from the M text contents based on M pieces of feature information corresponding to the M text contents, wherein each text content corresponds to one piece of feature information; and displaying the second text content, the second text content being the text content, among the M text contents, other than the first text content; wherein each piece of feature information comprises at least one of: feature information of the image region where the corresponding text content is located, and feature information of the corresponding text content.
In a second aspect, an embodiment of the present application provides a text display apparatus comprising an acquisition module, a determination module, and a display module, wherein: the acquisition module is configured to acquire M text contents in M image regions, each image region corresponding to one text content, M being a positive integer; the determination module is configured to determine a first text content from the M text contents based on M pieces of feature information corresponding to the M text contents acquired by the acquisition module, each text content corresponding to one piece of feature information; and the display module is configured to display the second text content determined by the determination module, the second text content being the text content, among the M text contents, other than the first text content; wherein each piece of feature information comprises at least one of: feature information of the image region where the corresponding text content is located, and feature information of the corresponding text content.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, and when executed by the processor, the program or instructions implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In a sixth aspect, the present application provides a computer program product stored in a non-volatile storage medium, the program product being executed by at least one processor to implement the method according to the first aspect.
In this embodiment, the text display device may acquire M text contents in M image regions, each image region corresponding to one text content; determine a first text content from the M text contents based on the feature information corresponding to the M text contents, each text content corresponding to one piece of feature information; and finally display the second text content, i.e., the text content among the M text contents other than the first text content. In this way, the text display device can screen out, based on the M pieces of feature information corresponding to the M text contents, the text content that does not need to be displayed (i.e., the first text content) and display only the text content that does need to be displayed. A plurality of text contents in an image are thereby extracted and displayed in a targeted manner, the steps for acquiring specific text content are simplified, and the efficiency and flexibility of acquiring and displaying text content in an image are improved.
Drawings
Fig. 1 is a flowchart of a text display method according to an embodiment of the present application;
fig. 2 is a first schematic diagram of an interface to which a text display method according to an embodiment of the present application is applied;
fig. 3 is a second schematic diagram of an interface to which a text display method according to an embodiment of the present application is applied;
fig. 4 is a third schematic diagram of an interface to which a text display method according to an embodiment of the present application is applied;
fig. 5 is a schematic structural diagram of a text display device according to an embodiment of the present application;
fig. 6 is a first schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application;
fig. 7 is a second schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second", and the like in the description and claims of the present application are used to distinguish between similar elements and are not necessarily used to describe a particular order or sequence. It should be understood that the data so used are interchangeable where appropriate, so that the embodiments of the application can be practiced in orders other than those illustrated or described herein. Objects distinguished by "first", "second", and the like are generally of one type, and the number of such objects is not limited; for example, the first object may be one object or a plurality of objects. In addition, "and/or" in the description and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The text display method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
The text display method provided by the embodiment of the application can be applied to scenes for displaying text contents in pictures.
Take displaying the text content in a screenshot of a news page as an example. Assume that the image region of the screenshot includes a text region and a picture region, where the text region is the region in which the text of the news page is displayed and the picture region is the region in which an illustration of the news page is displayed. If a user wants to extract only the text content of the text region in the screenshot, all of the text content in the screenshot has to be obtained first, and the text content of the illustration then has to be deleted from it, so that only the text content of the text region is kept and displayed. The overall process of obtaining text from the image therefore involves cumbersome steps and offers little flexibility.
In this embodiment of the application, the text display device may obtain the text contents of the text region and the picture region in the screenshot of the news page, determine the text content of the picture region from among those text contents based on the feature information (e.g., line spacing, font, etc.) of the text contents, and display the text contents other than the text content of the picture region, thereby obtaining the text content of the text region in the screenshot of the news page.
The embodiment of the application provides a text display method, which can be applied to electronic equipment, and fig. 1 shows a flowchart of the text display method provided by the embodiment of the application. As shown in fig. 1, the text display method provided in the embodiment of the present application may include the following steps 101 to 103:
step 101: m text contents in the M image areas are obtained.
Wherein, each image area corresponds to a text content, and M is a positive integer.
In this embodiment of the application, the M image areas are image areas in a first image, and the first image may be referred to as the image to be recognized. For example, the M image areas may be all or some of the image areas in the first image. Illustratively, each of the M image areas includes text content.
Illustratively, the first image may be a screenshot, for example, a screenshot of a page.
Illustratively, the region types of the M image regions may include at least one of: text area, picture area. For example, the first image is taken as a screenshot of a news page. The news page screenshot may include a plurality of image regions, where an image region where the news text content is located is a text region, and an image region where the news illustration is located is a picture region, as shown in fig. 2, in the image 20, the image region 21 and the image region 23 are picture regions in the news page screenshot, and the image region 22 and the image region 24 are text regions in the news page screenshot.
In this embodiment of the application, the above-mentioned M text contents may include any one of: words, symbols, links, and the like, which are not limited in this embodiment of the present application.
Optionally, in this embodiment of the application, the text display device may recognize the M image regions and extract the recognized M text contents from the M image regions.
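By way of illustration, step 101 may be sketched in Python as follows. The helpers detect_text_regions() and recognize_text() are hypothetical stand-ins for whatever region-detection and OCR engine the electronic device actually uses, which this embodiment does not prescribe.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class ImageRegion:
    bbox: tuple   # (left, top, width, height) within the first image
    kind: str     # "text" or "picture" when known, otherwise "unknown"

def acquire_text_contents(image,
                          detect_text_regions: Callable,
                          recognize_text: Callable) -> List[dict]:
    """Step 101: obtain M text contents, one per image region."""
    regions = detect_text_regions(image)           # -> List[ImageRegion]
    contents = []
    for region in regions:
        text = recognize_text(image, region.bbox)  # OCR restricted to this region
        if text:                                   # keep only regions that contain text
            contents.append({"region": region, "text": text})
    return contents                                # len(contents) == M
```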
Step 102: determining a first text content and a second text content from the M text contents based on the M pieces of feature information corresponding to the M text contents.
Each of the M text contents corresponds to one piece of feature information.
Optionally, in an embodiment of the present application, each of the M pieces of feature information includes at least one of: the feature information of the image area where the corresponding text content is located, and the feature information of the corresponding text content, where the feature information may include a text style.
For example, the feature information of the image area where the text content is located may include at least one of the following: the color of the image region (i.e., the background color of the text content), the color saturation of the image region, and the like, which are not limited in this embodiment.
Illustratively, the feature information of the text content may include at least one of the following: the text style (e.g., the font), the paragraph spacing, and the line spacing of the text content; the feature information may also be other feature information of the text, which is not limited in this embodiment of the present application.
For example, assume that the M image regions include 4 image regions, where region 1 and region 3 are text regions and region 2 and region 4 are picture regions. The feature information of region 1 and region 3 may be: Song (SimSun) typeface, 12 px line spacing, black text; and the feature information of region 2 and region 4 may be: Hei (bold) typeface, 10 px line spacing, grey text.
It should be noted that, for an image to be recognized whose image region includes both a text region and a picture region, the text styles of the text in the two kinds of region generally differ, so the corresponding text content can be determined in a targeted manner based on the differences in text style between the text contents in the image to be recognized.
Optionally, in this embodiment of the application, the text display device may identify the M image areas, and acquire feature information of text contents of the M image areas.
Optionally, in this embodiment of the application, the first text content is a text content of a picture region in the M image regions, and the second text content is a text content of a text region in the M image regions.
Optionally, the corresponding feature information of the first text content is different from the corresponding feature information of the second text content.
Alternatively, in this embodiment of the application, the text display device may determine the first text content and the second text content from the M text contents based on the M pieces of feature information, or the text display device may identify a picture region and a text region in the M image regions, and then determine the first text content and the second text content from the M text contents based on the image region where the text content is located.
In one example, the text display device may determine, from the M pieces of feature information corresponding to the M image regions, the region type corresponding to each piece of feature information; then determine, based on the feature information corresponding to the picture regions, the text content corresponding to that feature information from the M text contents, i.e., the first text content; and determine, based on the feature information corresponding to the text regions, the text content corresponding to that feature information from the M text contents, i.e., the second text content.
Further, the feature information of text in text regions and of text in picture regions of web pages may be collected in advance through big data, and the region types and their feature information may be stored in correspondence; the M pieces of feature information can then be matched against the stored correspondence to determine the region type (i.e., text region or picture region) to which each of the M pieces of feature information corresponds.
For example, assuming that the M image regions include a region 1, a region 2, and a region 3, the regions sequentially correspond to feature information a, feature information b, and feature information c, the feature information a and the feature information c correspond to text regions, and the feature information b corresponds to picture regions, it may be determined that the first text content is the text content in the region (i.e., the region 2) corresponding to the feature information b, and the second text content is the text content in the image regions (i.e., the regions 1 and 3) corresponding to the feature information a and the feature information c.
In another example, the text display apparatus may recognize a picture region and a text region among the M image regions by an image recognition technique, and regard, as the first text content, a content in the picture region and regard, as the second text content, a content in the text region among the M text contents.
For example, assuming that M image regions include 4 image regions, where region 1 and region 3 are text regions and region 2 and region 4 are picture regions, the text display apparatus may identify the M image regions, determine the text regions (i.e., region 1 and region 3) and the picture regions (i.e., region 2 and region 4) therein, and then determine the text contents in region 1 and region 3 as the second text contents and the text contents in region 2 and region 4 as the first text contents.
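A minimal sketch of this determination, assuming a helper region_type_of() that maps feature information to a region type (for example, by looking it up in feature information collected in advance, as described above); the exact matching rule is not fixed by this embodiment.

```python
def split_contents(contents, feature_of, region_type_of):
    """Step 102: split the M text contents into the first (picture-region) and
    the second (text-region) text content.

    feature_of(item)     -> feature information of one text content (assumed helper)
    region_type_of(feat) -> "text" or "picture" (assumed helper)
    """
    first_text, second_text = [], []
    for item in contents:
        if region_type_of(feature_of(item)) == "picture":
            first_text.append(item)    # will not be displayed
        else:
            second_text.append(item)   # will be displayed in step 103
    return first_text, second_text
```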
Step 103: the second text content is displayed.
Wherein, the second text content is: among the M text contents, a text content other than the first text content.
Optionally, in this embodiment of the application, the electronic device may display the second text content on a first page. Further, the text display device may display the second text content in a text box and display the text box on, or float it above, a first interface, where the first interface may be any application interface or the main interface of the electronic device.
Optionally, in this embodiment of the application, the electronic device may further display a function menu in a case that the second text content is displayed. Illustratively, the function menu includes at least one of the following function options: copy options, search options, translate options, note options, and share options, among others. The user can control the text display device to perform corresponding processing on the second text content through the at least one function option according to the requirement. Thus, display flexibility can be further improved.
Optionally, in this embodiment of the application, when the second text content is displayed on the first page, a content selection control may be floated on the first page, and the user may select target content from the first page by using the content selection control.
For example, referring to fig. 2, as shown in fig. 3, the text display device may display the texts in the image area 22 and the image area 24 on the page 31 according to the feature information corresponding to the text content in the image area, and display the function options corresponding to the "copy", "search", "translate", "memo", "share", and "save" functions on the page 31, so that the user can operate the texts.
In the text display method provided by the embodiment of the application, the text display device may acquire M text contents in M image regions, each image region corresponding to one text content; determine a first text content from the M text contents based on the feature information corresponding to the M text contents, each text content corresponding to one piece of feature information; and finally display the second text content, i.e., the text content among the M text contents other than the first text content. In this way, the text display device can screen out, based on the M pieces of feature information corresponding to the M text contents, the text content that does not need to be displayed (i.e., the first text content) and display only the text content that does need to be displayed. A plurality of text contents in an image are thereby extracted and displayed in a targeted manner, the steps for acquiring specific text content are simplified, and the efficiency and flexibility of acquiring and displaying text content in an image are improved.
Optionally, in this embodiment of the application, before the first text content is determined from the M text contents in step 102, the text display method provided in this embodiment of the application further includes the following step B1:
step B1: and determining the target feature information based on the M feature information.
The target feature information is the feature information, among the M pieces of feature information, that satisfies a preset condition.
Optionally, the preset condition includes: being distributed most widely among the M text contents of the M image regions.
For example, the text display device may identify the text features (i.e., feature information) of all of the text in the image to be recognized to obtain a plurality of text features, count the distribution of these text features across all of the text content, and take the text feature distributed most widely in the text content as the target feature information.
Illustratively, the process of determining the target feature information by the text display device specifically includes the following steps:
s1: characters in the M image areas are identified, and text characteristics of the characters, namely character styles, character paragraph intervals, line intervals and character ground colors (namely colors of the image areas where the characters are located), are obtained.
S2: and judging whether the acquired character style is the character style with the most occurrence, if so, executing S3, and if not, ending the current process.
S3: and judging whether the obtained paragraph spacing is the paragraph spacing with the largest occurrence, if so, executing S4, and if not, ending the current process.
S4: and judging whether the acquired character ground color is the most appeared character ground color or not, if so, executing S5, and if not, ending the current process.
S5: save the text features of the text content of the M image regions (i.e., the target feature information described above).

Optionally, in this embodiment of the present application, the process of step 102 may include the following steps 102a and 102b:
step 102 a: and respectively determining first characteristic information which is not matched with the target characteristic information from the M pieces of characteristic information.
Step 102b: determining, from the M text contents, the first text content corresponding to the first feature information.
Optionally, the target feature information is the feature information distributed most widely among the M pieces of feature information.
In one example, the target feature information may be feature information corresponding to text content in a text region of the M image regions.
In another example, the target feature information may be feature information corresponding to text content in a picture area of the M image areas.
For example, the text display device may count the distribution of the M pieces of feature information across the M text contents of the M image regions and extract the feature information distributed most widely among the M text contents as the target feature information; that is, the feature information that occurs most frequently, or accounts for the largest proportion of the text content, is taken as the target feature information.
For example, suppose the feature information of the text content in image region 1 is: bold typeface, 12 px; the feature information of the text content in image region 2 is: Song typeface, 10 px; and the feature information of the text content in image region 3 is: bold typeface, 12 px. Since "bold typeface, 12 px" is distributed most widely across the whole text content, it is taken as the target feature information.
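The selection of the target feature information can be sketched as follows, assuming each piece of feature information is encoded as a hashable tuple such as (typeface, line spacing) and that "matching" means plain equality; both are simplifying assumptions, since this embodiment fixes neither the encoding nor the matching rule.

```python
from collections import Counter

def target_and_mismatched(features):
    """Steps B1 and 102a: pick the most widely distributed feature information
    (the target feature information) and the pieces that do not match it
    (the first feature information)."""
    counts = Counter(features)
    target, _ = counts.most_common(1)[0]
    mismatched = [f for f in features if f != target]
    return target, mismatched

# With the three image regions of the example above:
# target_and_mismatched([("bold", "12px"), ("Song", "10px"), ("bold", "12px")])
# -> (("bold", "12px"), [("Song", "10px")])
```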
It should be noted that, for a page screenshot, the proportion of the whole page content accounted for by the text content of the text region is generally much larger than the proportion accounted for by the text content of the picture region. The feature information corresponding to the text content of the text region therefore also accounts for a larger share than the feature information corresponding to the text content of the picture region, so the text content of the text region can be deduced by determining the feature information that occurs most frequently in the whole page content.
Optionally, the text display device may determine, from the M pieces of feature information and based on the target feature information described above, the feature information that does not match the target feature information (i.e., the first feature information).
For example, the target feature information may be feature information corresponding to text content in the text region, and the first feature information may be feature information corresponding to text content in the picture region.
Optionally, after determining the first feature information, the text display apparatus may determine, according to the first feature information, feature information matched with the first feature information from among feature information corresponding to the M text contents, and determine, according to the feature information, the corresponding first text content from among the M text contents.
In one example, where the target feature information is the feature information corresponding to the text content of the text region, the first feature information that does not match the target feature information is the feature information of the picture region, and the text display device may determine the text content in the picture region (i.e., the first text content) based on the first feature information.
In another example, where the target feature information is the feature information corresponding to the text content of the picture region, the first feature information that does not match the target feature information is the feature information of the text region, and the text display device may determine the text content in the text region (i.e., the first text content) based on the first feature information.
Further optionally, in this embodiment of the application, after the process of step 102b, the text display method provided in this embodiment of the application further includes the following step A1:
step A1: and deleting the first text content according to the position information corresponding to the first text content.
Illustratively, the position information is the position, within the image region, of the text of the first text content, and the position may be characterized by the row and column of the text.
For example, the text display device may recognize and scan the text contents in the M image regions and determine the line position corresponding to each piece of text. Further, the text display device may establish and store a correspondence between each piece of text and its line and column positions.
For example, after determining the first text content, the text display apparatus may obtain the line positions of the text of the first text content based on the stored correspondence between the text and its line positions, and delete the first text content from the M text contents based on those line positions.
For example, after determining the text contents of the picture region and the text region in the screenshot of the news page, the text display device may identify the text content to be deleted from the whole screenshot by combining the overall image-and-text content with the positions of the text contents, retain the remaining text content, and output the final text.
In this way, the text display device can delete a specified piece of text from the whole text content according to its line position, so that only the content the user needs is retained, which improves display flexibility.
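A sketch of step A1, assuming the correspondence between text and line positions has been stored as a set of line indices belonging to the first text content; the concrete storage form is an assumption.

```python
def delete_first_text_content(all_lines, first_content_lines):
    """Step A1: remove the first text content by its recorded line positions.

    all_lines           : recognized text lines of the whole image, in reading order
    first_content_lines : set of line indices recorded for the first text content
    """
    return [line for idx, line in enumerate(all_lines)
            if idx not in first_content_lines]   # only the remaining text is kept
```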
Optionally, in this embodiment of the present application, after the process of the step 102b, the text display method provided in this embodiment of the present application further includes the following step C1:
step C1: and displaying the first text content and storing the first text content.
For example, the text display device may display the first text content on a second page. Further, the text display device may display the first text content in a text box and display the text box on, or float it above, a second interface, where the second interface may be any application interface or the main interface of the electronic device.
Optionally, the electronic device may further display a function menu in a case where the first text content is displayed. Illustratively, the function menu includes at least one of the following function options: copy options, search options, translate options, note options, and share options, among others. The user can control the text display device to perform corresponding processing on the first text content through the at least one function option according to the requirement. Thus, display flexibility can be further improved.
Optionally, when the first text content is displayed on the second page, a content selection control may be floated on the second page, and the user may select target content from the second page by using the content selection control.
For example, in conjunction with fig. 2 described above, as shown in fig. 4, the text display apparatus may display the text content in the image area 21 and the image area 23 of the screenshot of the news page on the text display page 41.
Alternatively, the text display device may generate a text file (e.g., a document) corresponding to the first text content, and then store the first text content in the electronic device or the server through the text file, so as to facilitate subsequent use by the user.
Optionally, in this embodiment of the application, the text display apparatus may display the first text content of the M text contents on a first page and display the second text content of the M text contents on a second page, so that the user can view the corresponding text contents on different pages. Illustratively, the user can switch between the pages to view the text content to be viewed.
Optionally, in this embodiment of the application, for the text content determined in a plurality of picture regions, the text display device may divide the text content based on the display positions of the picture regions in the image to be recognized. For example, the text display device may number the text content of each picture region according to the front-to-back order of the display positions of the picture regions in the image to be recognized.
For example, when the user needs to save the text contents in the plurality of picture areas, the text display apparatus may automatically combine the text contents in the plurality of picture areas according to the number of each text content and save the combined text contents.
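A sketch of this numbering-and-combining behaviour, reusing the content dictionaries of the earlier sketches; reading "front to back" as top-to-bottom and then left-to-right is an assumption, since this embodiment only speaks of the order of display positions.

```python
def combine_picture_texts(picture_contents):
    """Number picture-region text contents by display position and merge them
    for saving."""
    ordered = sorted(picture_contents,
                     key=lambda c: (c["region"].bbox[1], c["region"].bbox[0]))
    numbered = [f"[{i + 1}] {c['text']}" for i, c in enumerate(ordered)]
    return "\n".join(numbered)   # combined text, ready to be saved as one file
```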
Optionally, in this embodiment of the application, the text display device may identify text content in the picture area based on a preset keyword.
For example, the text display device may identify a keyword in the text content in the entire image to be identified, and use the identified keyword as a preset keyword.
For example, based on the preset keyword, the text display device may determine, from among the plurality of picture regions of the image to be recognized, a target picture region whose text content matches the preset keyword, and then acquire the text content in the target picture region.
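A sketch of the keyword-based selection, again over the same content dictionaries; plain substring matching stands in for whatever matching the device actually performs, which this embodiment does not specify.

```python
def target_picture_regions(picture_contents, preset_keywords):
    """Select the picture regions whose text content matches a preset keyword."""
    return [c for c in picture_contents
            if any(keyword in c["text"] for keyword in preset_keywords)]
```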
It should be noted that, in the text display method provided in the embodiments of the present application, the execution subject may be a text display device, or a control module in the text display device for executing the text display method. In the embodiments of the present application, the text display device provided herein is described by taking, as an example, a text display device executing the text display method.
An embodiment of the present application provides a text display apparatus. As shown in fig. 5, the text display apparatus 500 includes an obtaining module 501, a determining module 502 and a display module 503, wherein:
the obtaining module 501 is configured to obtain M text contents in M image regions, where each image region corresponds to one text content, and M is a positive integer; the determining module 502 is configured to determine, based on M pieces of feature information corresponding to the M pieces of text content acquired by the acquiring module 501, a first text content and a second text content from the M pieces of text content, where each text content corresponds to one piece of feature information; the display module 503 is configured to display the second text content determined by the determining module 502, where the second text content is: among the M text contents, the text contents except the first text content; wherein each feature information comprises at least one of: the characteristic information of the image area where the corresponding text content is located and the characteristic information of the corresponding text content.
Optionally, in this embodiment of the application, the determining module 502 is further configured to determine target feature information based on the M pieces of feature information, wherein the target feature information is the feature information, among the M pieces of feature information, that satisfies a preset condition.

Optionally, in this embodiment of the application, the determining module 502 is specifically configured to determine, from the M pieces of feature information, first feature information that does not match the target feature information, and to determine, from the M text contents, the first text content corresponding to the first feature information.
Optionally, in an embodiment of the present application, the apparatus further includes: a deletion module 504;
the deleting module 504 is configured to delete the first text content according to the position information corresponding to the first text content determined by the determining module 502.
Optionally, in an embodiment of the present application, the apparatus further includes: a storage module 505;
the display module 503 is further configured to display the first text content determined by the determining module 502, and the storage module 505 is configured to store the first text content determined by the determining module 502.
In the text display device provided in the embodiment of the present application, the text display device may acquire M text contents in M image regions, each image region corresponding to one text content; determine a first text content from the M text contents based on the feature information corresponding to the M text contents, each text content corresponding to one piece of feature information; and finally display the second text content, i.e., the text content among the M text contents other than the first text content. In this way, the text display device can screen out, based on the M pieces of feature information corresponding to the M text contents, the text content that does not need to be displayed (i.e., the first text content) and display only the text content that does need to be displayed. A plurality of text contents in an image are thereby extracted and displayed in a targeted manner, the steps for acquiring specific text content are simplified, and the efficiency and flexibility of acquiring and displaying text content in an image are improved.
The text display device in the embodiment of the present application may be a device, or may be a component, an integrated circuit, or a chip in a terminal. The device may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA); the non-mobile electronic device may be a server, a network attached storage (NAS), a personal computer (PC), a television (TV), a teller machine, a self-service machine, or the like. The embodiments of the present application are not specifically limited in this respect.
The text display device in the embodiment of the present application may be a device having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
The text display device provided in the embodiment of the present application can implement each process implemented by the method embodiments of fig. 1 to fig. 4, and is not described here again to avoid repetition.
Optionally, as shown in fig. 6, an electronic device 700 is further provided in this embodiment of the present application, and includes a processor 701, a memory 702, and a program or an instruction stored in the memory 702 and executable on the processor 701, where the program or the instruction is executed by the processor 701 to implement each process of the text display method embodiment, and can achieve the same technical effect, and no further description is provided here to avoid repetition.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 7 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 100 includes, but is not limited to: a radio frequency unit 101, a network module 102, an audio output unit 103, an input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, and a processor 110.
Those skilled in the art will appreciate that the electronic device 100 may further comprise a power source (e.g., a battery) for supplying power to various components, and the power source may be logically connected to the processor 110 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system. The electronic device structure shown in fig. 7 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than those shown, or combine some components, or arrange different components, and thus, the description is omitted here.
The processor 110 is configured to obtain M text contents in M image regions, where each image region corresponds to one text content, and M is a positive integer; the processor 110 is configured to determine, based on the M pieces of feature information corresponding to the M pieces of text content, a first text content and a second text content from the M pieces of text content, where each text content corresponds to one piece of feature information; the display unit 106 is configured to display the second text content determined by the processor 110, where the second text content is: among the M text contents, the text contents except the first text content; wherein each feature information comprises at least one of: the characteristic information of the image area where the corresponding text content is located and the characteristic information of the corresponding text content.
Optionally, in this embodiment of the application, the processor 110 is further configured to determine the target feature information based on the M pieces of feature information; wherein, the target characteristic information is: among the M pieces of feature information, feature information satisfying a preset condition.
Optionally, in this embodiment of the application, the processor 110 is specifically configured to determine, from the M pieces of feature information, first feature information that does not match with the target feature information respectively; the processor 110 is specifically configured to determine a first text content corresponding to the first feature information from the M text contents.
Optionally, in this embodiment of the application, the processor 110 is configured to delete the first text content according to the determined position information corresponding to the first text content.
Optionally, in this embodiment of the application, the display unit 106 is further configured to display the first text content determined by the processor 110, and the memory 109 is configured to store the first text content determined by the processor 110.
In the electronic device provided in the embodiment of the present application, the electronic device may acquire M text contents in M image regions, each image region corresponding to one text content; determine a first text content from the M text contents based on the feature information corresponding to the M text contents, each text content corresponding to one piece of feature information; and finally display the second text content, i.e., the text content among the M text contents other than the first text content. In this way, the electronic device can screen out, based on the M pieces of feature information corresponding to the M text contents, the text content that does not need to be displayed (i.e., the first text content) and display only the text content that does need to be displayed. A plurality of text contents in an image are thereby extracted and displayed in a targeted manner, the steps for acquiring specific text content are simplified, and the efficiency and flexibility of acquiring and displaying text content in an image are improved.
It should be understood that, in the embodiment of the present application, the input Unit 104 may include a Graphics Processing Unit (GPU) 1041 and a microphone 1042, and the Graphics Processing Unit 1041 processes image data of a still picture or a video obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 106 may include a display panel 1061, and the display panel 1061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 107 includes a touch panel 1071 and other input devices 1072. The touch panel 1071 is also referred to as a touch screen. The touch panel 1071 may include two parts of a touch detection device and a touch controller. Other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein. The memory 109 may be used to store software programs as well as various data including, but not limited to, application programs and an operating system. The processor 110 may integrate an application processor, which primarily handles operating systems, user interfaces, applications, etc., and a modem processor, which primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the text display method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement each process of the text display method embodiment, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
The present application provides a computer program product, which is stored in a non-volatile storage medium and executed by at least one processor to implement the processes of the foregoing method embodiments, and achieve the same technical effects.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved, e.g., the methods described may be performed in an order different than that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (12)

1. A method of displaying text, the method comprising:
acquiring M text contents in M image areas, wherein each image area corresponds to one text content, and M is a positive integer;
determining a first text content and a second text content from the M text contents based on M feature information corresponding to the M text contents, wherein each text content corresponds to one feature information;
displaying the second text content, wherein the second text content is: the text content, among the M text contents, other than the first text content;
wherein each feature information comprises at least one of: the characteristic information of the image area where the corresponding text content is located and the characteristic information of the corresponding text content.
2. The method of claim 1, wherein prior to determining the first textual content from the M textual contents, the method further comprises:
determining target feature information based on the M feature information;
wherein the target feature information is: the feature information, among the M pieces of feature information, that satisfies a preset condition.
3. The method according to claim 2, wherein the determining a first text content from the M text contents based on the M pieces of feature information corresponding to the M text contents comprises:
respectively determining first feature information which is not matched with the target feature information from the M feature information;
and determining the first text content corresponding to the first characteristic information from the M text contents.
4. The method according to claim 3, wherein after the determining the first text content corresponding to the first feature information from the M text contents, the method further comprises:
and deleting the first text content according to the position information corresponding to the first text content.
5. The method according to claim 3, wherein after the determining the first text content corresponding to the first feature information from the M text contents, the method further comprises: and displaying the first text content and storing the first text content.
6. A text display apparatus, characterized in that the apparatus comprises: the device comprises an acquisition module, a determination module and a display module, wherein:
the acquisition module is used for acquiring M text contents in M image areas, each image area corresponds to one text content, and M is a positive integer;
the determining module is configured to determine, based on the M pieces of feature information corresponding to the M pieces of text content acquired by the acquiring module, a first text content from the M pieces of text content, where each text content corresponds to one piece of feature information;
the display module is configured to display the second text content determined by the determining module, wherein the second text content is: the text content, among the M text contents, other than the first text content;
wherein each feature information comprises at least one of: the characteristic information of the image area where the corresponding text content is located and the characteristic information of the corresponding text content.
7. The apparatus of claim 6,
the determining module is further configured to determine the target feature information based on the M feature information;
wherein the target feature information is: the feature information, among the M pieces of feature information, that satisfies a preset condition.
8. The apparatus of claim 7,
the determining module is specifically configured to determine, from the M pieces of feature information, first feature information that does not match the target feature information, respectively;
the determining module is specifically configured to determine, from the M text contents, the first text content corresponding to the first feature information.
9. The apparatus according to claim 8, further comprising a deletion module, wherein
the deletion module is configured to delete the first text content according to position information corresponding to the first text content determined by the determination module.
10. The apparatus according to claim 8, further comprising a storage module, wherein
the display module is further configured to display the first text content determined by the determination module, and the storage module is configured to store the first text content determined by the determination module.
11. An electronic device, comprising a processor, a memory, and a program or instructions stored in the memory and executable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the text display method according to any one of claims 1 to 5.
12. A readable storage medium, having a program or instructions stored thereon, wherein the program or instructions, when executed by a processor, implement the steps of the text display method according to any one of claims 1 to 5.
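For illustration only, the screening flow recited in claims 1 to 5 might be sketched roughly as follows in Python. The TextContent class, the font_height feature and the "most frequent feature value" preset condition are assumptions introduced for this example; the claims leave the concrete feature information and preset condition open, so this is a sketch rather than the claimed implementation.

    from collections import Counter
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class TextContent:
        text: str          # recognized text content of one image area
        font_height: int   # example feature information for that area

    def display_filtered(contents: List[TextContent]) -> List[str]:
        # Target feature information: here, the feature value shared by the
        # largest number of the M text contents (an assumed preset condition).
        target_height, _ = Counter(c.font_height for c in contents).most_common(1)[0]

        # First text content: contents whose feature information does not match
        # the target (e.g. a watermark); second text content: the remainder.
        second = [c.text for c in contents if c.font_height == target_height]

        # Display only the second text content.
        for line in second:
            print(line)
        return second

    if __name__ == "__main__":
        sample = [
            TextContent("Chapter 1: Introduction", 24),
            TextContent("Body paragraph of the page.", 24),
            TextContent("www.example-watermark.com", 10),  # mismatching feature
        ]
        display_filtered(sample)

In this sketch the mismatching entry plays the role of the first text content and is simply not displayed; the variant of claim 4 would instead delete it using its position information, and the variant of claim 5 would display and store it.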
CN202110476778.XA 2021-04-29 2021-04-29 Text display method and device and electronic equipment Pending CN113239660A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110476778.XA CN113239660A (en) 2021-04-29 2021-04-29 Text display method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN113239660A true CN113239660A (en) 2021-08-10

Family

ID=77131649

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110476778.XA Pending CN113239660A (en) 2021-04-29 2021-04-29 Text display method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN113239660A (en)


Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109635683A (en) * 2018-11-27 2019-04-16 维沃移动通信有限公司 Method for extracting content and terminal device in a kind of image
CN109815462A (en) * 2018-12-10 2019-05-28 维沃移动通信有限公司 A kind of document creation method and terminal device
CN110045897A (en) * 2019-03-12 2019-07-23 维沃移动通信有限公司 A kind of information display method and terminal device
CN111695381A (en) * 2019-03-13 2020-09-22 杭州海康威视数字技术股份有限公司 Text feature extraction method and device, electronic equipment and readable storage medium
CN111062389A (en) * 2019-12-10 2020-04-24 腾讯科技(深圳)有限公司 Character recognition method and device, computer readable medium and electronic equipment
CN111444922A (en) * 2020-03-27 2020-07-24 Oppo广东移动通信有限公司 Picture processing method and device, storage medium and electronic equipment
CN111488826A (en) * 2020-04-10 2020-08-04 腾讯科技(深圳)有限公司 Text recognition method and device, electronic equipment and storage medium
CN111523286A (en) * 2020-04-16 2020-08-11 维沃移动通信有限公司 Picture display method and electronic equipment
CN111626383A (en) * 2020-05-29 2020-09-04 Oppo广东移动通信有限公司 Font identification method and device, electronic equipment and storage medium
CN112036395A (en) * 2020-09-04 2020-12-04 联想(北京)有限公司 Text classification identification method and device based on target detection
CN112364679A (en) * 2020-09-04 2021-02-12 联想(北京)有限公司 Image area identification method and electronic equipment

Similar Documents

Publication Title
CN106484266B (en) Text processing method and device
CN112540740A (en) Split screen display method and device, electronic equipment and readable storage medium
CN112099684A (en) Search display method and device and electronic equipment
CN112929494B (en) Information processing method, information processing apparatus, information processing medium, and electronic device
CN112256179B (en) Text processing method and device
CN113253883A (en) Application interface display method and device and electronic equipment
CN112083854A (en) Application program running method and device
CN113849092A (en) Content sharing method and device and electronic equipment
CN112836086A (en) Video processing method and device and electronic equipment
CN112181253A (en) Information display method and device and electronic equipment
CN113220393A (en) Display method and device and electronic equipment
CN112399010B (en) Page display method and device and electronic equipment
CN111857466B (en) Message display method and device
CN113590008A (en) Chat message display method and device and electronic equipment
CN113239302A (en) Page display method and device and electronic equipment
CN112764639A (en) Screen capturing method and device and electronic equipment
CN112764606A (en) Identification display method and device and electronic equipment
CN111724455A (en) Image processing method and electronic device
CN113794943B (en) Video cover setting method and device, electronic equipment and storage medium
CN113239212B (en) Information processing method and device and electronic equipment
CN112765946B (en) Chart display method and device and electronic equipment
CN114625296A (en) Application processing method and device
CN114385562A (en) Text information deleting method and device and electronic equipment
CN111813303B (en) Text processing method and device, electronic equipment and readable storage medium
CN113010072A (en) Searching method and device, electronic equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination