CN108647097B - Text image processing method and device, storage medium and terminal - Google Patents

Text image processing method and device, storage medium and terminal

Info

Publication number
CN108647097B
CN108647097B (application CN201810468616.XA; published earlier as CN108647097A)
Authority
CN
China
Prior art keywords
images
image
overlapping
text
sharpening
Prior art date
Legal status
Active
Application number
CN201810468616.XA
Other languages
Chinese (zh)
Other versions
CN108647097A
Inventor
王宇鹭
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201810468616.XA
Publication of CN108647097A
Application granted
Publication of CN108647097B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request, the resource being a machine, e.g. CPUs, servers, terminals
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The embodiment of the application discloses a text image processing method, a text image processing device, a storage medium and a terminal, wherein the method comprises the following steps: when a sharpening processing instruction facing a plurality of first images is received, searching a plurality of second images corresponding to continuous text contents from the plurality of first images; respectively acquiring an operation step sequence corresponding to each second image according to the sharpening processing instruction; determining an overlapping step according to the operation step sequences corresponding to the plurality of second images, wherein the overlapping step is an operation step that is the same in the operation step sequences corresponding to different second images; and performing sharpening processing on the plurality of second images in an asynchronous execution mode according to the overlapping step, which can reduce the power consumption of text image processing.

Description

Text image processing method and device, storage medium and terminal
Technical Field
The embodiment of the application relates to the technical field of mobile terminals, in particular to a text image processing method, a text image processing device, a storage medium and a terminal.
Background
With the continuous development of the mobile terminal photographing function, after a user uses the mobile terminal to photograph continuous text images, the photographed text images can be directly processed through the mobile terminal.
However, when processing text images of continuous text contents, a user needs to perform similar processing operations for each captured text image, which is a complicated procedure, resulting in an increase in power consumption for processing text images.
Disclosure of Invention
An object of the embodiments of the present application is to provide a text image processing method, apparatus, storage medium, and terminal, which can reduce processing power consumption of a text image.
In a first aspect, an embodiment of the present application provides a text image processing method, including:
when a sharpening processing instruction facing a plurality of first images is received, searching a plurality of second images corresponding to continuous text contents from the plurality of first images;
respectively acquiring an operation step sequence corresponding to each second image according to the sharpening processing instruction;
determining an overlapping step according to the operation step sequences corresponding to the plurality of second images, wherein the overlapping step is the same operation step in the operation step sequences corresponding to different second images;
sharpening the plurality of second images in an asynchronous manner according to the overlapping step.
In a second aspect, an embodiment of the present application provides a text image processing apparatus, including:
the receiving module is used for receiving a sharpening processing instruction facing to a plurality of first images;
the searching module is used for searching a plurality of second images corresponding to continuous text contents from the plurality of first images when the receiving module receives the sharpening processing instruction facing the plurality of first images;
the acquisition module is used for respectively acquiring, according to the sharpening processing instruction received by the receiving module, the operation step sequence corresponding to each second image found by the searching module;
a determining module, configured to determine an overlapping step according to the operation step sequence corresponding to the plurality of second images acquired by the acquiring module, where the overlapping step is a same operation step in operation step sequences corresponding to different second images;
and the asynchronous execution module is used for performing sharpening processing on the plurality of second images in an asynchronous execution mode according to the overlapping step determined by the determination module.
In a third aspect, the present application provides a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the text image processing method as shown in the first aspect.
In a fourth aspect, an embodiment of the present application provides a terminal, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor implements the text image processing method according to the first aspect when executing the computer program.
According to the text image processing scheme provided by the embodiment of the application, firstly, when a sharpening processing instruction facing a plurality of first images is received, a plurality of second images corresponding to continuous text contents are searched from the plurality of first images; secondly, an operation step sequence corresponding to each second image is respectively acquired according to the sharpening processing instruction; thirdly, an overlapping step is determined according to the operation step sequences corresponding to the plurality of second images, wherein the overlapping step is the same operation step in the operation step sequences corresponding to different second images; finally, the plurality of second images are subjected to sharpening processing in an asynchronous execution mode according to the overlapping step, so that the processing power consumption of the text image can be reduced.
Drawings
Fig. 1 is a schematic flowchart of a text image processing method according to an embodiment of the present application;
fig. 2 is a schematic flowchart of another text image processing method according to an embodiment of the present application;
fig. 3 is a schematic flowchart of another text image processing method according to an embodiment of the present application;
fig. 4 is a schematic flowchart of another text image processing method according to an embodiment of the present application;
fig. 5 is a schematic flowchart of another text image processing method according to an embodiment of the present application;
fig. 6 is a schematic flowchart of another text image processing method according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a text image processing apparatus according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
The technical solutions of the application are further explained below through specific embodiments in combination with the accompanying drawings. It is to be understood that the specific embodiments described herein merely illustrate the application and do not limit it. It should be further noted that, for convenience of description, only some of the structures related to the present application, rather than all of them, are shown in the drawings.
Before discussing exemplary embodiments in more detail, it should be noted that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart may describe the steps as a sequential process, many of the steps can be performed in parallel, concurrently or simultaneously. In addition, the order of the steps may be rearranged. The process may be terminated when its operations are completed, but may have additional steps not included in the figure. The processes may correspond to methods, functions, procedures, subroutines, and the like.
With the continuous development of the photographing function of mobile terminals, after a user uses a mobile terminal to photograph continuous text images, the photographed text images can be processed directly on the mobile terminal. However, when processing text images of continuous text content, the user needs to process every captured text image. Since text images of continuous text content differ little in capturing effect, the processing operations for the individual text images are largely the same, so similar steps are repeatedly executed many times; the operation is cumbersome and the processing power consumption of the text images is increased.
The embodiment of the application provides a text image processing method which, when a sharpening processing instruction facing a plurality of text images to be processed is received, can determine the steps repeated among the plurality of text images to be processed and sharpen the plurality of text images to be processed in an asynchronous execution mode. This avoids the situation in which one sharpening processing instruction is received to process only one text image to be processed, simplifies the processing operations on the text images, and can reduce the processing power consumption of the text images. The specific scheme is as follows:
fig. 1 is a flowchart of a text image processing method according to an embodiment of the present application, where the method is used in a situation where a terminal performs text image processing on continuous text content, and the method may be executed by a mobile terminal with an image processing function or a mobile terminal installed with an image processing application (such as a beauty camera, a beauty show, and the like), where the mobile terminal may be a smart phone, a tablet computer, a wearable device, a notebook computer, and the like, and the method specifically includes the following steps:
step 110, when a sharpening processing instruction facing to the plurality of first images is received, searching a plurality of second images corresponding to the continuous text content from the plurality of first images.
The first image may be an image to be subjected to sharpening processing that the user selects from a gallery by ticking or box selection. Optionally, when selecting the first images, the user generally selects text images having a series of common features; for example, the text images may correspond to a series of continuous text content shot of the text in a book. The user may trigger a sharpening processing instruction after selecting the plurality of first images. The sharpening processing instruction may be an instruction triggered after the user clicks a sharpening processing key, and the instruction may include one or more sharpening processing operations. For example, the sharpening processing instruction triggered after the user clicks a "one-key sharpening" key on an image processing interface on a display screen may include one or more of exposure control, white balance control, multi-frame noise reduction, histogram adjustment, contrast adjustment, smooth denoising, detail sharpening, and the like, where the processing operations included in the instruction may be system defaults or may be preset by the user according to the user's own needs. The sharpening processing instruction may also be an instruction generated by one or more sub-sharpening processing keys triggered by the user among a plurality of selectable sharpening processing operations. For example, a user may take multiple pictures of continuous text content, but the captured images are not clear because the shooting environment is dark; at this time, the user may trigger a sharpening processing instruction by selecting, from the multiple optional sharpening processing operations, sub-sharpening processing operations such as white balance control, multi-frame noise reduction and contrast adjustment according to the cause of the unclear images.
For example, the user may take multiple pictures of one piece of text content when photographing, so that repeated pictures appear, or the user may select a cover picture of non-text content by mistake. At this time, if the sharpening processing instruction is executed on the repeated images or on the cover image of non-text content, unnecessary processing operations are added. Therefore, when the sharpening processing instruction for the plurality of first images is received, the plurality of second images (i.e., the images to be finally sharpened) corresponding to the continuous text content can be found from the plurality of first images. For example, the 10 first images selected by the user include 1 book cover image, 1 image of another non-text type selected by the user by mistake, and 8 images corresponding to continuous text content of the book (of which 3 are images of the same text content). At this time, when a sharpening processing instruction for the 10 first images sent by the user is received, the 8 images corresponding to the continuous text content are searched from the 10 first images, 2 repeated images are then removed from the 3 images with the same text content, and finally 6 second images corresponding to non-repeated continuous text content are obtained.
Optionally, if there is no repeated image or image of discontinuous text content in the plurality of first images selected by the user, the first image is the same as the second image.
Step 120, respectively acquiring an operation step sequence corresponding to each second image according to the sharpening processing.
The operation steps performed by the sharpening processing may be multiple, such as exposure control, white balance control, multi-frame noise reduction, histogram adjustment, contrast adjustment, smooth noise reduction, detail sharpening, etc., but not all of these operations are necessarily performed on each second image. For example, if there are only a small number of noise points in an image and no other problems, only the multi-frame noise reduction or smooth noise reduction operation of the sharpening processing needs to be performed, and exposure, white balance, etc. do not need to be controlled. According to the operation steps executed by the sharpening processing and in combination with the processing requirements of each second image, the operation steps corresponding to each second image are determined; for each second image, a different order of the sharpening operation steps may influence the final processing result, so an ordered operation step sequence is determined for each second image.
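As an illustration only, the choice of operation steps for each second image might be driven by simple image statistics. The sketch below is a hypothetical example; the thresholds and step names are assumptions made for illustration and are not taken from the patent.

```python
import numpy as np

def plan_operation_steps(gray: np.ndarray) -> list:
    """Hypothetical sketch: derive an ordered sequence of sharpening
    operation steps for one image from basic statistics."""
    steps = []
    mean_brightness = float(gray.mean())

    # Rough noise estimate: residual after a 3x3 box blur.
    h, w = gray.shape
    pad = np.pad(gray.astype(np.float32), 1, mode="edge")
    blurred = sum(pad[dy:dy + h, dx:dx + w]
                  for dy in range(3) for dx in range(3)) / 9.0
    noise = float(np.abs(gray.astype(np.float32) - blurred).std())

    if mean_brightness < 60 or mean_brightness > 200:
        steps.append("exposure_control")      # too dark or too bright
    if noise > 8.0:
        steps.append("smooth_denoise")        # visible noise points
    steps.append("contrast_adjust")           # always-on steps in this sketch
    steps.append("detail_sharpen")
    return steps
```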
Step 130, determining an overlapping step according to the operation step sequence corresponding to the plurality of second images.
Wherein, the overlapping step is the same operation step in the operation step sequence corresponding to different second images.
The plurality of second images correspond to continuous text content and their shooting environments are generally consistent, so the problems present in the second images are somewhat correlated; the sharpening processing operations performed on them may differ slightly, but most of the operations are the same. Thus, the overlapping steps of the plurality of second images may be determined according to the operation step sequence corresponding to each second image.
Optionally, when the overlapping step is determined based on the operation step sequences corresponding to the plurality of second images, an overlapping step common to all the second images may be determined based on the operation step sequence corresponding to each second image. If no operation step is common to all the second images, an operation step common to most of the second images may be selected as the overlapping step, and the individual second images that do not share it may be processed separately.
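The determination of the overlapping step can be pictured with a small sketch. The snippet below is only an illustration of the rule described above; in particular, falling back to steps shared by a majority of images is an assumption about how the "common to most images" case could be handled.

```python
from collections import Counter

def find_overlapping_steps(sequences):
    """Return steps that appear in every sequence, in the order of the
    first sequence; fall back to steps shared by a majority when nothing
    is common to all (a simplification of the rule described above)."""
    if not sequences:
        return []
    counts = Counter(step for seq in sequences for step in set(seq))
    n = len(sequences)
    shared_by_all = [s for s in sequences[0] if counts[s] == n]
    if shared_by_all:
        return shared_by_all
    return [s for s in sequences[0] if counts[s] > n // 2]

# Example: three second images with slightly different step sequences.
seqs = [
    ["white_balance", "denoise", "contrast", "sharpen"],
    ["denoise", "contrast", "sharpen"],
    ["exposure", "denoise", "contrast", "sharpen"],
]
print(find_overlapping_steps(seqs))  # ['denoise', 'contrast', 'sharpen']
```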
Step 140, performing sharpening on the plurality of second images in an asynchronous execution manner according to the overlapping step.
There may be one overlapping step or several, and each overlapping step has a fixed thread. Sharpening the plurality of second images in a synchronous mode would require establishing multiple threads for each step in advance, so the power consumption of text image processing is high and resources are wasted: when each step is provided with a plurality of threads, many of those threads sit idle during the sharpening process, which wastes resources. Therefore, in the embodiment of the application, one thread is set for each sharpening processing operation, and the thread corresponding to each repeated step is controlled to process each second image in turn in an asynchronous execution mode. This can reduce the power consumption of text image processing while ensuring the effect of the sharpening processing, and avoids the resource waste caused by idle threads.
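A minimal sketch of this "one thread per overlapping step" idea is given below. It is an illustrative assumption of how such an arrangement could be wired up (queue-connected worker threads), not the implementation claimed by the patent; step_fns stands in for the concrete sharpening operations.

```python
import queue
import threading

def run_overlapping_steps_async(images, step_fns):
    """Sketch: one worker thread per overlapping step; each image flows
    through the steps one after another, so no per-image threads are
    created and no thread sits idle waiting for its own image."""
    stages = [queue.Queue() for _ in range(len(step_fns) + 1)]

    def worker(fn, q_in, q_out):
        while True:
            item = q_in.get()
            if item is None:            # sentinel: pass it on and stop
                q_out.put(None)
                return
            q_out.put(fn(item))

    threads = [threading.Thread(target=worker,
                                args=(fn, stages[i], stages[i + 1]))
               for i, fn in enumerate(step_fns)]
    for t in threads:
        t.start()

    for img in images:                  # feed the second images in turn
        stages[0].put(img)
    stages[0].put(None)

    results = []
    while True:
        item = stages[-1].get()
        if item is None:
            break
        results.append(item)
    for t in threads:
        t.join()
    return results

# Example with trivial placeholder steps.
out = run_overlapping_steps_async([1, 2, 3],
                                  [lambda x: x * 2, lambda x: x + 1])
print(out)  # [3, 5, 7]
```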
According to the text image processing method provided by the embodiment of the application, firstly, when a sharpening processing instruction facing a plurality of first images is received, a plurality of second images corresponding to continuous text contents are searched from the plurality of first images; secondly, an operation step sequence corresponding to each second image is respectively acquired according to the sharpening processing instruction; thirdly, an overlapping step is determined according to the operation step sequences corresponding to the plurality of second images, wherein the overlapping step is the same operation step in the operation step sequences corresponding to different second images; finally, the plurality of second images are sharpened in an asynchronous execution mode according to the overlapping step. In this way, the corresponding second images are searched for among the first images, the overlapping step of the second images is determined, and the overlapping step is executed asynchronously to sharpen the second images, which simplifies the processing operations on the text images and can reduce the processing power consumption of the text images.
Fig. 2 is a schematic flowchart of another text image processing method provided in an embodiment of the present application, which is used to further explain the foregoing embodiment, and includes:
step 210, when a sharpening processing instruction for a plurality of first images is received, acquiring time information and scene information of the first images.
The time information of a first image may be the photographing time of the image or the interval between two adjacent frames of images corresponding to continuous text content. The scene information may be the scene corresponding to the captured content; for example, if the captured text content is text in a book, the scene information is a book shooting scene; if the captured text content is text on a television, the scene information is a television shooting scene; if the captured content is a person, the scene information is a portrait shooting scene.
When a sharpening processing instruction for multiple first images triggered by the user is received, the time information and scene information corresponding to each first image are obtained. Optionally, the time information and scene information of an image are usually stored in the attribute information of the image, and corresponding threads can be called to obtain the time information and scene information from the attribute information of the image.
Step 220, a plurality of candidate images with the same scene information are searched.
Searching for multiple candidate images with the same scene information may mean that, among the multiple first images and according to the scene information of each first image, the first images whose scene information is the same are selected as candidate images, and the to-be-processed pictures whose scene is inconsistent are filtered out.
Optionally, if the plurality of first images correspond to more than one kind of scene information, the scene information with the larger number of corresponding images may be chosen, and the plurality of candidate images of that scene information selected. For example, if the scene information of 2 images among 10 first images is a television shooting scene and the remaining 8 images are book shooting scenes, the first images of the 8 book shooting scenes are selected as candidate images.
And step 230, determining a second image from the plurality of candidate images according to the time information and a preset time threshold.
The preset time threshold may be set by the system by default, or may be preset by the user according to the user's own needs. The size of the preset time threshold may vary with the information of the processed images. For example, if the candidate images were shot from a slide show played by a teacher during a lesson, and the interval between two adjacent slides is long, the preset time threshold may be set larger. If the candidate images were shot from a book, the page-turning speed is high when the user shoots the text in the book and the shooting frequency is relatively high, so the preset time threshold may be set smaller. Alternatively, the preset time threshold may be a shooting time range of the images, for example 9:00-9:30, or a frame interval between two adjacent images, for example 5 minutes.
Determining the second images from the plurality of candidate images according to the time information and the preset time threshold may be done by judging whether the time information of each candidate image exceeds the preset time threshold, and if not, determining that candidate image as a second image.
Optionally, repeated images may still exist among the second images selected by the time information and the preset time threshold. For example, when taking pictures the user may shoot two images of one piece of text content; both images certainly match the scene information and their time information also meets the preset time threshold, but if both are kept as candidate images the processing power consumption is certainly increased. In this case, similarity determination may be performed on the multiple candidate images whose time information meets the preset time threshold, and the repeated images corresponding to the same text content are filtered out.
Optionally, since the repeated images are usually adjacent images, similarity determination (e.g., calculating correlation between images) may be performed on two adjacent images in the multiple candidate images, and if the similarity is greater than a similarity threshold, it is determined that the two images belong to the repeated images and one image needs to be deleted.
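The scene filtering, time-threshold check and adjacent-image similarity check described in steps 210 to 230 can be sketched as follows. The dictionary fields (scene, timestamp, gray), the thresholds, and the use of normalized correlation as the similarity measure are all assumptions made for illustration, not details taken from the patent.

```python
import numpy as np

def select_second_images(first_images, time_threshold_s=300, sim_threshold=0.95):
    """Illustrative sketch: keep the dominant scene, drop images whose
    interval to the previous kept frame exceeds the time threshold, then
    drop adjacent near-duplicates (same text content shot twice).
    Assumes all grayscale images have the same resolution."""
    if not first_images:
        return []

    # 1. Keep only the scene that most images belong to.
    scenes = [img["scene"] for img in first_images]
    dominant = max(set(scenes), key=scenes.count)
    candidates = sorted((img for img in first_images if img["scene"] == dominant),
                        key=lambda img: img["timestamp"])

    # 2. Filter by the interval between adjacent shots.
    kept = [candidates[0]]
    for img in candidates[1:]:
        if img["timestamp"] - kept[-1]["timestamp"] <= time_threshold_s:
            kept.append(img)

    # 3. Remove adjacent near-duplicates via normalized correlation.
    def similarity(a, b):
        a = a.astype(np.float32).ravel() - a.mean()
        b = b.astype(np.float32).ravel() - b.mean()
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return float(a @ b / denom) if denom else 0.0

    second = [kept[0]]
    for img in kept[1:]:
        if similarity(second[-1]["gray"], img["gray"]) < sim_threshold:
            second.append(img)
    return second
```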
Step 240, respectively acquiring the operation step sequence corresponding to each second image according to the sharpening processing.
Step 250, determining the overlapping step according to the operation step sequence corresponding to the plurality of second images.
Step 260, performing sharpening processing on the plurality of second images in an asynchronous execution mode according to the overlapping step.
According to the text image processing method provided by the embodiment of the application, a plurality of second images can be determined in the plurality of first images according to the time information and the scene information of the first images, and after the operation step sequence corresponding to each second image is obtained, the overlapping step is determined to carry out sharpening processing on the plurality of second images in an asynchronous execution mode. The accuracy of the second image selection can be improved, and the processing power consumption of the text image is further reduced.
Fig. 3 is a schematic flowchart of another text image processing method provided in an embodiment of the present application, which is used to further describe the foregoing embodiment, and includes:
step 301, when a sharpening processing instruction facing to a plurality of first images is received, searching a plurality of second images corresponding to continuous text contents from the plurality of first images.
And step 302, respectively acquiring an operation step sequence corresponding to each second image according to the sharpening processing.
Step 303, determining an overlapping step according to the operation step sequence corresponding to the plurality of second images.
And step 304, judging whether the overlapping step is a continuous step.
For each second image there is a corresponding operation step sequence, and continuous steps are steps in which the respective overlapping steps are arranged consecutively in that operation step sequence. Whether the overlapping step is a continuous step can be judged from the operation step sequences of the respective second images, i.e., by judging whether the overlapping steps determined in the respective second images are arranged consecutively. If so, step 305 is executed; if not, step 310 is executed.
Step 305, if the overlapping step is a consecutive step, acquiring a previous step and/or a subsequent step of the consecutive step.
If the overlapping step is a consecutive step, it is indicated that no further steps are present between consecutive steps, but that a preceding step and/or a subsequent step of sharpening the second image may also be present before or after the consecutive step. When the sharpening process operation is performed on the second image, not only the successive steps but also the preceding and/or subsequent steps thereof are performed, and therefore, if the overlapping step is a successive step, the preceding and/or subsequent steps of the successive steps, that is, the operation steps other than the successive step in the sharpening process operation for the second image are acquired. For example, if the sharpening process operation of the second image is operation 1 to operation 5, if operation 2 to operation 4 are consecutive overlapping steps, a preceding step of operation 2 to operation 4 (i.e., operation 1) and a subsequent step of operation 2 to operation 4 (i.e., operation 5) are acquired.
Step 306, judging whether the obtained step is a previous step or a subsequent step.
It is judged whether the step obtained in step 305 is a previous step or a subsequent step. If it is a previous step, step 307 is performed, that is, the previous step is performed first and then the overlapping step; if it is a subsequent step, step 309 is performed, i.e., the overlapping step is performed first and then the subsequent step; if both previous steps and subsequent steps exist, steps 307 to 308 are executed first, and step 309 is executed after step 308 is completed.
Step 307, if the previous step exists, executing the previous step in a parallel execution mode, and caching the execution result of the previous step.
The sharpening processing steps other than the overlapping step are usually specific operations, different from those of the other second images, that are selected for problems specific to each second image, so the previous steps are usually different for each second image. In order to reduce the time consumed by text image processing, a parallel execution mode may be adopted, in which the threads of the previous steps corresponding to the respective second images are called simultaneously to perform the corresponding previous-step processing operations on the second images. The execution result of the previous step is cached, so that it can be called when the overlapping step is executed subsequently and the processing operation of the overlapping step can be executed on the basis of that result.
Optionally, if the same preceding operation still exists in the preceding steps of each second image, the same preceding operation may be executed on the second images having the same preceding operation in an asynchronous execution manner, and other different preceding operations may be executed in a parallel execution manner.
Step 308, reading the cached execution result of the previous step in an asynchronous execution mode, and executing the overlapping step in the asynchronous execution mode.
The execution of the overlapping step is based on the result of the previous step, so the cached execution result of the previous step may be read before the repeated operation is performed. Optionally, since the overlapping step is executed on each second image in an asynchronous execution mode, it is not necessary to set up multiple threads to read the previous execution results of all second images in parallel; similar to the overlapping operation, the previous execution result of each second image may be read on demand in an asynchronous execution mode. For example, a thread may be set up for reading the cached execution results of the previous steps. Before the first overlapping step, say contrast adjustment, is performed, this thread is invoked to read the previous-step result of the second image to be contrast-adjusted; after the reading is completed, the contrast adjustment operation is performed on the basis of that result; and after the contrast adjustment of this second image is completed, the reading thread continues to read the previous-step result of the next second image to be processed, so that the contrast adjustment thread can perform the contrast adjustment operation on the next second image.
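The division of labour in steps 307 and 308 (image-specific previous steps in parallel, cached results, then the shared overlapping steps one image at a time) can be sketched as follows. This is a minimal illustration under the assumption that previous_step_fns and overlapping_fns are plain Python callables; it is not the patented implementation.

```python
from concurrent.futures import ThreadPoolExecutor

def sharpen_with_previous_steps(images, previous_step_fns, overlapping_fns):
    """Sketch only: previous_step_fns maps image id -> its image-specific
    previous-step function; overlapping_fns is the ordered list of shared
    steps.  Previous steps run in parallel and their results are cached;
    the overlapping steps then read the cache and run one image at a time."""
    cache = {}

    # Execute the previous steps in parallel and cache their results.
    with ThreadPoolExecutor() as pool:
        futures = {
            img_id: pool.submit(previous_step_fns[img_id], img)
            for img_id, img in images.items()
        }
        for img_id, fut in futures.items():
            cache[img_id] = fut.result()

    # Overlapping steps: read each cached result, then apply the shared
    # steps to one image at a time.
    results = {}
    for img_id in images:
        out = cache[img_id]
        for fn in overlapping_fns:
            out = fn(out)
        results[img_id] = out
    return results
```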
Step 309, if there is a subsequent step, executing the subsequent step when the overlapping step is executed.
If a subsequent operation exists after the continuous steps, the subsequent step corresponding to each second image needs to be performed after the overlapping step has been performed on that second image.
Optionally, when the subsequent steps are executed, if there is no overlapping step in the subsequent steps of each second image, a parallel execution mode may be adopted, and the threads corresponding to different subsequent operations are simultaneously called to execute the corresponding subsequent steps on the plurality of second images; if the overlapping step still exists in the subsequent operations of the second images, the overlapping operation can be executed on the second images with the overlapping subsequent operations in an asynchronous execution mode, and other non-overlapping subsequent operations can be executed in a parallel execution mode.
Step 310, if the overlapping step is a non-continuous step, determining whether the distinguishing step between the overlapping steps can be used as a preprocessing step.
The distinguishing steps are, within the operation step sequence corresponding to each second image, the operation steps other than the overlapping steps, which differ from those of the other second images. For example, if the operation step sequence corresponding to a certain second image is step 1 to step 5 and the overlapping steps are step 1 and steps 3 to 4, the distinguishing steps are step 2 and step 5. The preprocessing step may be a preparation step performed before a specific sharpening operation to improve the efficiency of text image sharpening. For example, the text content in a text image may include a table, and the borders of the table are likely to interfere with the sharpening operation (e.g., a border may be mistaken for unclear text content during processing, adding extra processing steps); in this case, cutting non-text graphics (such as lines and boxes) out of the text image may serve as the preprocessing step.
If the overlapping step is a non-continuous step, it is judged whether the distinguishing steps between the overlapping steps can be used as a preprocessing step. If so, step 311 is executed, in which the preprocessing step is executed first and then the overlapping step; if not, step 313 is executed, in which the overlapping step is executed directly.
Step 311, if the distinguishing step can be used as a preprocessing step, executing the preprocessing step, and caching the execution result of the preprocessing step.
If the distinguishing step of the second image can be used as a preprocessing step, the preprocessing step is executed firstly, and the execution result of the preprocessing step is cached, so that the execution result of the preprocessing step is called when the overlapping step is executed subsequently, and the next overlapping step is executed on the basis of the result.
The embodiment of the present application does not limit the manner of executing the preprocessing step, and may adopt a synchronous execution manner or an asynchronous execution manner, for example, if the preprocessing operations of the plurality of second images are different, different threads of the preprocessing step may be simultaneously called in a parallel execution manner to execute the preprocessing operation on the second image that needs to be preprocessed. If the same preprocessing operation exists in the preprocessing operations of the plurality of second images, the same preprocessing operation can be executed on the second images with the same preprocessing operation in an asynchronous execution mode, and other different preprocessing operations can be executed in a parallel execution mode.
Step 312, reading the cached execution result of the preprocessing step in an asynchronous execution mode, and executing the overlapping step in an asynchronous execution mode.
In this step the overlapping steps are discontinuous and the preprocessing operation lies between them. The cached execution result of the preprocessing step may be read in an asynchronous execution mode and the overlapping step executed in an asynchronous execution mode; alternatively, the overlapping step before the preprocessing operation is executed asynchronously first, the cached execution result of the preprocessing step is then read asynchronously and superposed on the execution result of the previous overlapping step, and finally the overlapping step after the preprocessing step is executed asynchronously on the superposed result.
Step 313, if the distinguishing step cannot be taken as a preprocessing step, splitting the overlapping step into at least one group of continuous sub-overlapping steps, and executing the sub-overlapping steps in an asynchronous execution mode.
In this step the overlapping steps are discontinuous, and the discontinuous overlapping steps may be split into at least one group of continuous sub-overlapping steps according to where the discontinuities lie. For example, if the operation step sequence corresponding to a certain second image is step 1 to step 5 and the overlapping steps are the discontinuous steps 1, 4 and 5, then step 1 may be regarded as one sub-overlapping step and steps 4 and 5 as a second sub-overlapping step. It should be noted that splitting into at least one group of continuous sub-overlapping steps does not require every resulting group to contain multiple consecutive steps; it may also happen that each split sub-overlapping step is a single step. For example, if the operation step sequence corresponding to a certain second image is step 1 to step 5 and the overlapping steps are the non-consecutive steps 1, 3 and 5, then splitting according to the discontinuities yields the three independent steps 1, 3 and 5 as three sub-overlapping steps.
After the non-continuous overlapping steps are split, the sub-overlapping steps are performed in an asynchronous execution mode.
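The splitting rule can be illustrated with a few lines of Python. This sketch only groups the overlapping steps into runs that are consecutive in one image's own operation sequence; the step names mirror the examples above and are otherwise arbitrary.

```python
def split_into_sub_overlaps(sequence, overlapping):
    """Group the overlapping steps into runs that are consecutive in the
    image's own operation sequence (a sketch of the splitting rule above)."""
    positions = [i for i, step in enumerate(sequence) if step in overlapping]
    groups, current = [], [positions[0]] if positions else []
    for p in positions[1:]:
        if p == current[-1] + 1:
            current.append(p)       # still consecutive, extend the run
        else:
            groups.append(current)  # gap found, start a new run
            current = [p]
    if current:
        groups.append(current)
    return [[sequence[i] for i in g] for g in groups]

# Example from the text: steps 1-5 with non-continuous overlap {1, 4, 5}.
seq = ["step1", "step2", "step3", "step4", "step5"]
print(split_into_sub_overlaps(seq, {"step1", "step4", "step5"}))
# [['step1'], ['step4', 'step5']]
```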
It should be noted that, if the overlapping step is a continuous step, and the continuous step does not have a preceding step and/or a subsequent step, the overlapping step may be directly executed in an asynchronous execution manner, that is, the sharpening processing operation on the plurality of second images may be completed.
According to the text image processing method provided by the embodiment of the application, different overlapping operation execution flows can be further determined according to whether the determined overlapping steps of the second images are continuous, the processing operation on a plurality of text images is simplified, and the processing power consumption of the text images can be reduced.
Fig. 4 is a schematic flowchart of another text image processing method provided in an embodiment of the present application, which is used to further explain the foregoing embodiment, and includes:
step 410, when a sharpening processing instruction facing to the plurality of first images is received, searching a plurality of second images corresponding to the continuous text content from the plurality of first images.
Step 420, respectively acquiring the operation step sequence corresponding to each second image according to the sharpening processing.
And step 430, determining an overlapping step according to the operation step sequence corresponding to the plurality of second images.
Step 440, starting a first thread to execute the first overlapping step of the third image.
And the third image is any one of the second images.
The first thread may be a thread established for executing the first overlapping step, and it is called whenever the first overlapping step is executed on a third image. The first overlapping step is performed asynchronously and can only be performed on one third image at a time.
After the overlapping steps corresponding to the plurality of second images are determined, one of the plurality of second images is arbitrarily selected as a third image, and a first thread corresponding to the first overlapping step is called to execute the first overlapping step of the third image.
Step 450, when the first overlapping step is completed, starting a second thread to execute a second overlapping step of the third image, and calling the first thread to execute the first overlapping step of the fourth image.
Wherein the second overlapping step is the next step after the first overlapping step, and the fourth image is an image other than the third image among the second images.
When the overlapping step is one step, after the first thread is started to execute the first overlapping step of the third image, it is indicated that the overlapping step of the third image is completed, and at this time, the first thread needs to be called again to execute the first overlapping step of the fourth image, that is, the first thread is called in sequence to execute the first overlapping operation on each second image.
When the overlapping operation has multiple steps, after the first thread is started to execute the first overlapping step of the third image, a second thread corresponding to the second overlapping step is started for the third image to execute its second overlapping step; that is, after the current overlapping step has been executed, the thread corresponding to the next overlapping step is called to continue with the next overlapping operation on that image. After the first overlapping step has been executed on the third image, the first thread is idle; at this time, any image other than the third image may be selected from the plurality of second images as the fourth image, the first thread is called to execute the first overlapping operation on the fourth image, and after the second thread becomes idle, the second overlapping operation is executed on the fourth image.
In this way, after the overlapping operation is performed on the plurality of second images in an asynchronous execution manner, some non-overlapping operations may exist on each second image, and at this time, other non-overlapping operations may be performed on each second image, so that the sharpening process of each second image is completed.
According to the text image processing method provided by the embodiment of the application, when the overlapping steps are executed on the second images, the corresponding threads are set for the overlapping steps, and the overlapping steps are executed on the second images in an asynchronous execution mode, so that the sharpening processing of the second images is completed, the processing operation on a plurality of text images is simplified, and the processing power consumption of the text images can be reduced.
Fig. 5 is a schematic flowchart of another text image processing method provided in an embodiment of the present application, which is used to further describe the foregoing embodiment, and includes:
and step 510, closing the multi-pin noise reduction mode when the first image is acquired.
The multi-frame noise reduction mode can be a photographing mode selected when the mobile terminal photographs, and when a user photographs one image, the system can reduce the noise of the image photographed by the user by photographing a plurality of images for processing. Because the text content is simple in content and obvious in characteristic, a plurality of images do not need to be shot in a multi-frame noise reduction mode for processing, and clear images can be obtained only by collecting one picture and carrying out traditional sharpening processing operations (such as denoising, detail sharpening and the like), the multi-frame noise reduction mode of the mobile terminal can be closed when the first image is obtained in order to reduce power consumption when the text content is shot.
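As a point of reference only, the "traditional sharpening processing operations (such as denoising and detail sharpening)" mentioned above could look like the single-frame pipeline below. The OpenCV calls and parameter values are illustrative assumptions, not the patent's own processing chain.

```python
import cv2
import numpy as np

def single_frame_text_enhance(image_path: str) -> np.ndarray:
    """Sketch of conventional single-frame processing (denoise plus detail
    sharpening) that the text says suffices for simple text content; the
    parameters here are illustrative assumptions."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    denoised = cv2.fastNlMeansDenoising(img, h=10)
    # Unsharp-mask style detail sharpening.
    blurred = cv2.GaussianBlur(denoised, (0, 0), sigmaX=2)
    sharpened = cv2.addWeighted(denoised, 1.5, blurred, -0.5, 0)
    return sharpened
```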
And step 520, preprocessing the acquired first image.
The captured first image may have some clarity problems and need to be processed, so these problems of the first image need to be preprocessed before the first image is subjected to sharpening processing. For example, a captured text image may have a certain angle deviation, and a preprocessing operation of adjusting the angle of the text image may be performed after the captured text image is acquired.
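One way the angle-adjustment preprocessing mentioned above could be realised is a brute-force deskew. The sketch below is offered only as an illustration under stated assumptions (search range, step size and a projection-profile criterion); it is not the patented preprocessing.

```python
import cv2
import numpy as np

def deskew_text_image(gray: np.ndarray, max_angle=10.0, step=0.5) -> np.ndarray:
    """Brute-force deskew sketch: try small rotations and keep the one whose
    horizontal projection profile has the highest variance (text lines line
    up when the page is straight).  Expects an 8-bit grayscale image."""
    h, w = gray.shape
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

    def score(angle):
        m = cv2.getRotationMatrix2D((w / 2, h / 2), float(angle), 1.0)
        rotated = cv2.warpAffine(binary, m, (w, h), flags=cv2.INTER_NEAREST)
        return np.var(rotated.sum(axis=1))

    angles = np.arange(-max_angle, max_angle + step, step)
    best = max(angles, key=score)
    m = cv2.getRotationMatrix2D((w / 2, h / 2), float(best), 1.0)
    return cv2.warpAffine(gray, m, (w, h),
                          flags=cv2.INTER_CUBIC, borderMode=cv2.BORDER_REPLICATE)
```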
In the embodiment of the application, after the first image is obtained, it may be determined whether the first image needs to be preprocessed; if not, the original first image is kept, and if so, the corresponding preprocessing is performed on the first image. Optionally, after the preprocessing, in order to prevent the result from deviating from the user's shooting requirement, a comparison between the preprocessing result and the original first image may be displayed on a preview interface for the user to confirm, and when a confirmation instruction of the user is received, the original first image is replaced with the preprocessed image. For example, if the user finds, from the original captured image and the preprocessed comparison image on the preview interface, that the preprocessing does not achieve the expected effect, the user can click a "cancel" button to reject the preprocessing result and keep the original first image; if the preprocessed image has a better effect, the user can click the "confirm" button, and the original first image is replaced by the preprocessed image.
Step 530, when a sharpening processing instruction facing to the plurality of first images is received, searching a plurality of second images corresponding to the continuous text content from the plurality of first images.
Step 540, respectively acquiring an operation step sequence corresponding to each second image according to the sharpening processing.
Step 550, determining the overlapping step according to the operation step sequence corresponding to the plurality of second images.
Step 560, performing sharpening processing on the plurality of second images in an asynchronous execution mode according to the overlapping step.
According to the text image processing method provided by the embodiment of the application, the multi-frame noise reduction mode is closed when the first image is shot, the obtained first image is preprocessed, and when a sharpening processing instruction facing the plurality of first images is received, the plurality of second images are searched for from the plurality of first images and the overlapping step of the plurality of second images is determined so as to sharpen the second images, which can reduce both the shooting power consumption and the processing power consumption of the text images.
Fig. 6 is a schematic flowchart of another text image processing method provided in an embodiment of the present application, which is used to further describe the foregoing embodiment, and includes:
step 610, when a sharpening processing instruction facing to the plurality of first images is received, searching a plurality of second images corresponding to the continuous text content from the plurality of first images.
And step 620, respectively acquiring an operation step sequence corresponding to each second image according to the sharpening processing.
Step 630, determining an overlapping step according to the sequence of operation steps corresponding to the plurality of second images.
Step 640, determine whether the second image includes an incomplete text area.
For a text image it is particularly important that the text regions are complete, so before the sharpening processing it is determined whether the second image includes an incomplete text region. Optionally, an incomplete text region may be a missing or blurred text region; for example, the text may be blurred because of inaccurate focusing during shooting, or part of the text may be missing because of excessive brightness at the time of shooting.
Optionally, whether the second image includes an incomplete text region may be determined automatically by the mobile terminal; for example, the mobile terminal identifies each second image in combination with the specific features of text regions, and if a text region is identified as blank, the text region is incomplete. Alternatively, the user may determine this manually through the second image preview interface, and if the user finds that a second image includes an incomplete text region, the user triggers an instruction to repair the text region.
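For the automatic check, one commonly used heuristic is to flag a region as blurred when the variance of its Laplacian response is low. The snippet below is such a heuristic, given as an assumption-laden illustration rather than the detection rule used by the patent; both thresholds are made up for the example.

```python
import cv2
import numpy as np

def region_looks_incomplete(gray_region: np.ndarray,
                            blur_threshold: float = 100.0,
                            ink_threshold: float = 0.002) -> bool:
    """Sketch: a text region is flagged as incomplete if it is nearly blank
    (almost no dark 'ink' pixels) or if it looks blurred (low variance of
    the Laplacian).  Both thresholds are illustrative assumptions."""
    ink_ratio = float((gray_region < 128).mean())
    if ink_ratio < ink_threshold:                      # nearly blank region
        return True
    sharpness = cv2.Laplacian(gray_region, cv2.CV_64F).var()
    return sharpness < blur_threshold                  # blurred text
```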
Determining whether the second image includes an incomplete text region, if so, executing step 650 to repair the text region first, and if not, executing step 670 to perform sharpening processing on the second image in an asynchronous execution mode according to the overlapping step.
Step 650, if the second image contains an incomplete character area, repairing the character area.
If the second image contains an incomplete text area, the method for repairing the text area is not limited in this application. For example, if the incomplete text area is caused by text blurring, the blurred text content can be recognized by a text content recognition tool, and the recognized text then replaces the original text in the text area, so that the text area is repaired. If the incomplete text area is caused by missing text, it needs to be judged whether the missing text can be determined by a semantic analysis tool; if so, the determined missing text can be filled into the missing area; if too much text is missing to be determined by the semantic analysis tool, the user can be prompted to input the missing text, and the text area is repaired according to the text input by the user.
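The decision flow just described can be summarised in a short sketch. All four callables below (the recognition tool, the semantic-analysis tool, the user prompt and the renderer) are hypothetical placeholders; the patent does not name any concrete tools.

```python
def repair_text_region(region, recognize_fn, infer_fn, ask_user_fn, render_fn):
    """Sketch of the repair flow above.  `region` is assumed to carry the
    damage kind, its pixels and its surrounding textual context; the four
    callables are hypothetical stand-ins, not real library APIs."""
    if region["kind"] == "blurred":
        text = recognize_fn(region["pixels"])      # recognize the blurred text
        return render_fn(region, text)             # redraw it cleanly
    if region["kind"] == "missing":
        text = infer_fn(region["context"])         # semantic analysis
        if text is None:                           # too much is missing
            text = ask_user_fn(region)             # fall back to the user
        return render_fn(region, text)
    return region                                  # nothing to repair
```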
Optionally, if the user finds that an error occurs after the text area is repaired, the user may modify the error portion, for example, the user may select the error text area in the text image, click a modification key, input the correct text on the modification interface, and click confirmation, the system may automatically replace the error text.
Step 660, performing sharpening processing on the plurality of repaired second images in an asynchronous execution mode according to the overlapping step.
After the text area of the second image is repaired, the repaired plurality of second images can be subjected to sharpening processing in an asynchronous execution mode according to the overlapping step.
Step 670, performing sharpening processing on the plurality of second images in an asynchronous execution mode according to the overlapping step.
In the embodiment of the present application, steps 640 to 650 may also be implemented after the sharpening processing: the second images are first sharpened according to the overlapping step, it is then determined whether a processed second image includes an incomplete text region, and the text region is repaired if an incomplete text region is included.
The text image processing method provided by the embodiment of the application can judge whether the second images contain an incomplete character area before the overlapping step is asynchronously executed on the second images, and if an incomplete character area exists, the character area of the second image is repaired before the overlapping step is executed. This can improve the effect of the text image sharpening processing and reduce the processing power consumption of the text images while simplifying the processing operations on the plurality of text images.
Fig. 7 is a schematic structural diagram of a text image processing apparatus according to an embodiment of the present application. As shown in fig. 7, the apparatus includes: the receiving module 710, the searching module 720, the obtaining module 730, the determining module 740, and the asynchronous executing module 750.
A receiving module 710, configured to receive a sharpening processing instruction for a plurality of first images;
a searching module 720, configured to search, when the receiving module 710 receives the sharpening processing instruction for the multiple first images, multiple second images corresponding to consecutive text contents from the multiple first images;
an obtaining module 730, configured to respectively acquire, according to the sharpening processing instruction received by the receiving module 710, the operation step sequence corresponding to each second image found by the searching module 720;
a determining module 740, configured to determine an overlapping step according to the operation step sequence corresponding to the plurality of second images acquired by the acquiring module 730, where the overlapping step is the same operation step in the operation step sequences corresponding to different second images;
an asynchronous execution module 750, configured to perform sharpening on the plurality of second images in an asynchronous execution manner according to the overlapping step determined by the determination module 740.
Further, the lookup module 720 is configured to:
acquiring time information and scene information of the first image;
searching a plurality of alternative images with the same scene information;
and determining a second image from the plurality of candidate images according to the time information and a preset time threshold.
Further, the asynchronous execution module 750 is configured to:
judging whether the overlapping step is a continuous step;
if the overlapping step is a continuous step, acquiring a previous step and/or a subsequent step of the continuous step;
if the previous step exists, executing the previous step in a parallel execution mode, and caching the execution result of the previous step;
reading the execution result of the previous step cached in the asynchronous execution mode, and executing the overlapping step in the asynchronous execution mode;
and if the subsequent step exists, executing the subsequent step when the overlapping step is executed completely.
Further, the asynchronous execution module 750 is configured to, after determining whether the overlapping step is a continuous step, if the overlapping step is a discontinuous step, determine whether a distinguishing step between the overlapping steps can be used as a preprocessing step;
if the distinguishing step can be used as a preprocessing step, executing the preprocessing step and caching the execution result of the preprocessing step;
reading the execution result of the pre-processing step cached in the asynchronous execution mode, and executing the overlapping step in the asynchronous execution mode;
if the distinguishing step cannot be used as a preprocessing step, the overlapping step is split into at least one group of continuous sub-overlapping steps, and the sub-overlapping steps are executed in an asynchronous execution manner.
Further, the asynchronous execution module 750 is configured to:
starting a first thread to execute a first overlapping step of a third image, wherein the third image is any one of the second images;
and when the first overlapping step is finished, starting a second thread to execute a second overlapping step of the third image, and calling the first thread to execute the first overlapping step of a fourth image, wherein the second overlapping step is the next execution step after the first overlapping step, and the fourth image is a second image other than the third image (an illustrative pipelined sketch follows below).
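A hedged sketch of this pipelined thread arrangement is given below: one worker thread per overlapping step, so that while the second thread applies the second overlapping step to the third image, the first thread is already applying the first overlapping step to the fourth image. The queue-based hand-off is an implementation choice of the sketch, not something stated in the patent.

import queue
import threading
from typing import Callable, List

def pipeline_sharpen(images: List[object], steps: List[Callable]) -> List[object]:
    queues = [queue.Queue() for _ in range(len(steps) + 1)]
    stop = object()  # sentinel marking the end of the image stream

    def stage(step: Callable, inbox: queue.Queue, outbox: queue.Queue) -> None:
        # each stage thread owns one overlapping step
        while True:
            item = inbox.get()
            if item is stop:
                outbox.put(stop)
                return
            outbox.put(step(item))

    threads = [threading.Thread(target=stage, args=(step, queues[i], queues[i + 1]))
               for i, step in enumerate(steps)]
    for t in threads:
        t.start()
    for img in images:       # feed the third, fourth, ... images into the first stage
        queues[0].put(img)
    queues[0].put(stop)
    results = []
    while True:              # collect images as they leave the last stage
        item = queues[-1].get()
        if item is stop:
            break
        results.append(item)
    for t in threads:
        t.join()
    return results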
Further, the above apparatus further comprises:
the mode control module is used for closing the multi-frame noise reduction mode when the first image is acquired;
and the preprocessing module is used for preprocessing the acquired first image.
Further, the asynchronous execution module 750 is configured to:
judging whether the second image contains an incomplete character area or not;
if the second image contains an incomplete character area, repairing the character area;
and performing sharpening processing on the plurality of repaired second images in an asynchronous execution mode according to the overlapping step (an illustrative sketch of the completeness check follows below).
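A hedged sketch of the completeness check is given below. It assumes dark text on a light background and flags an image as containing an incomplete character area when text pixels touch the image border; the repair itself is left as a hypothetical hook, since the sketch does not reproduce the repair strategy of the method.

import numpy as np
from typing import Callable

def has_incomplete_character_area(gray: np.ndarray, text_threshold: int = 128) -> bool:
    # assume dark-on-light text; text pixels touching the border suggest a cut-off character
    text_mask = gray < text_threshold
    border = np.concatenate([text_mask[0, :], text_mask[-1, :],
                             text_mask[:, 0], text_mask[:, -1]])
    return bool(border.any())

def repair_if_needed(gray: np.ndarray, repair_text_area: Callable) -> np.ndarray:
    # repair_text_area is a hypothetical callable supplied by the caller
    return repair_text_area(gray) if has_incomplete_character_area(gray) else gray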
In the text image processing apparatus provided in this embodiment of the present application, when the receiving module 710 receives a sharpening processing instruction for a plurality of first images, the searching module 720 first searches a plurality of second images corresponding to continuous text content from the plurality of first images; the obtaining module 730 then acquires the operation step sequence corresponding to each second image according to the sharpening processing; the determining module 740 next determines the overlapping step according to the operation step sequences corresponding to the plurality of second images, the overlapping step being the same operation step in the operation step sequences corresponding to different second images; finally, the asynchronous execution module 750 performs sharpening on the plurality of second images in an asynchronous execution manner according to the overlapping step. By searching the corresponding second images among the first images, determining the overlapping step of the second images, and executing the overlapping step asynchronously to sharpen the second images, the processing operation on the text images is simplified and the processing power consumption of the text images can be reduced.
The apparatus can execute the methods provided in any embodiment of the present application, and has the corresponding functional modules and beneficial effects for executing these methods. For technical details not described in this embodiment, reference may be made to the methods provided in the foregoing embodiments of the present application.
Fig. 8 is a schematic structural diagram of a terminal device according to an embodiment of the present application. As shown in fig. 8, the terminal may include: a housing (not shown), a memory 801, a Central Processing Unit (CPU) 802 (also referred to as a processor, hereinafter CPU 802), a computer program stored in the memory 801 and operable on the processor 802, a circuit board (not shown), and a power circuit (not shown). The circuit board is arranged in a space enclosed by the housing; the CPU 802 and the memory 801 are provided on the circuit board; the power circuit is used for supplying power to each circuit or device of the terminal; the memory 801 is used for storing executable program code; and the CPU 802 runs the program corresponding to the executable program code by reading the executable program code stored in the memory 801.
The terminal further comprises: a peripheral interface 803, an RF (Radio Frequency) circuit 805, an audio circuit 806, a speaker 811, a power management chip 808, an input/output (I/O) subsystem 809, a touch screen 812, other input/control devices 810, and an external port 804, which communicate over one or more communication buses or signal lines 807.
It should be understood that the illustrated terminal device 800 is merely one example of a terminal, and that the terminal device 800 may have more or fewer components than shown in the figures, may combine two or more components, or may have a different configuration of components. The various components shown in the figures may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
The following describes the terminal device provided in this embodiment in detail, taking a smart phone as an example.
A memory 801, which is accessible by the CPU 802, the peripheral interface 803, and the like. The memory 801 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid state storage devices.
A peripheral interface 803, said peripheral interface 803 allowing input and output peripherals of the device to be connected to the CPU802 and the memory 801.
I/O subsystem 809, which may connect input and output peripherals on the device, such as the touch screen 812 and other input/control devices 810, to the peripheral interface 803. The I/O subsystem 809 may include a display controller 8091 and one or more input controllers 8092 for controlling the other input/control devices 810. The one or more input controllers 8092 receive electrical signals from, or send electrical signals to, the other input/control devices 810, which may include physical buttons (push buttons, rocker buttons, etc.), dials, slide switches, joysticks, and click wheels. It is worth noting that the input controller 8092 may be connected to any of the following: a keyboard, an infrared port, a USB interface, or a pointing device such as a mouse.
The touch screen 812 may be a resistive type, a capacitive type, an infrared type, or a surface acoustic wave type, according to the operating principle of the touch screen and the classification of media for transmitting information. The touch screen 812 may be classified by installation method: external hanging, internal or integral. Classified according to technical principles, the touch screen 812 may be: a vector pressure sensing technology touch screen, a resistive technology touch screen, a capacitive technology touch screen, an infrared technology touch screen, or a surface acoustic wave technology touch screen.
A touch screen 812, which is the input interface and the output interface between the terminal and the user, and displays visual output to the user; the visual output may include graphics, text, icons, video, and the like. Optionally, the touch screen 812 sends the electrical signal triggered by the user on the touch screen (e.g., an electrical signal of the touch surface) to the processor 802.
The display controller 8091 in the I/O subsystem 809 receives electrical signals from the touch screen 812 or sends electrical signals to the touch screen 812. The touch screen 812 detects a contact on the touch screen, and the display controller 8091 converts the detected contact into an interaction with a user interface object displayed on the touch screen 812, thereby implementing human-computer interaction; the user interface object displayed on the touch screen 812 may be an icon for running a game, an icon for connecting to a corresponding network, or the like. It is worth mentioning that the device may also comprise a light mouse, which is a touch sensitive surface that does not show visual output, or an extension of the touch sensitive surface formed by the touch screen.
The RF circuit 805 is mainly used to establish communication between the terminal and a wireless network (i.e., a network side), and to implement data reception and transmission between the terminal and the wireless network, such as sending and receiving short messages, e-mails, and the like.
The audio circuit 806 is mainly used to receive audio data from the peripheral interface 803, convert the audio data into an electric signal, and transmit the electric signal to the speaker 811.
The speaker 811 is used to convert the voice signals received by the terminal from the wireless network through the RF circuit 805 into sound and play the sound to the user.
And the power management chip 808 is used for supplying power and managing power to the hardware connected with the CPU802, the I/O subsystem and the peripheral interface.
In this embodiment, the CPU 802 is configured to:
when a sharpening processing instruction facing a plurality of first images is received, searching a plurality of second images corresponding to continuous text contents from the plurality of first images;
respectively acquiring an operation step sequence corresponding to each second image according to the sharpening processing;
determining an overlapping step according to the operation step sequences corresponding to the plurality of second images, wherein the overlapping step is the same operation step in the operation step sequences corresponding to different second images;
sharpening the plurality of second images in an asynchronous manner according to the overlapping step.
Further, the searching for a plurality of second images corresponding to continuous text content from the plurality of first images includes:
acquiring time information and scene information of the first image;
searching a plurality of alternative images with the same scene information;
and determining a second image from the plurality of candidate images according to the time information and a preset time threshold.
Further, the sharpening the plurality of second images in an asynchronous execution manner according to the overlapping step includes:
judging whether the overlapping step is a continuous step;
if the overlapping step is a continuous step, acquiring a previous step and/or a subsequent step of the continuous step;
if the previous step exists, executing the previous step in a parallel execution mode, and caching the execution result of the previous step;
reading the execution result of the previous step cached in the asynchronous execution mode, and executing the overlapping step in the asynchronous execution mode;
and if the subsequent step exists, executing the subsequent step when the overlapping step is executed completely.
Further, after determining whether the overlapping step is a continuous step, the method further includes:
if the overlapping step is a non-continuous step, judging whether a distinguishing step between the overlapping steps can be used as a preprocessing step;
if the distinguishing step can be used as a preprocessing step, executing the preprocessing step and caching the execution result of the preprocessing step;
reading the execution result of the pre-processing step cached in the asynchronous execution mode, and executing the overlapping step in the asynchronous execution mode;
if the distinguishing step cannot be used as a preprocessing step, splitting the overlapping step into at least one set of consecutive sub-overlapping steps, and executing the sub-overlapping steps in an asynchronous execution manner.
Further, the sharpening the plurality of second images in an asynchronous execution manner according to the overlapping step includes:
starting a first thread to execute a first overlapping step of a third image, wherein the third image is any one of the second images;
and when the first overlapping step is finished, starting a second thread to execute a second overlapping step of the third image, and calling the first thread to execute a first overlapping step of a fourth image, wherein the second overlapping step is the next execution step of the first overlapping step, and the fourth image is an image except for the third image in the second image.
Further, before receiving a sharpening processing instruction for a plurality of first images, the method further includes:
when the first image is acquired, closing the multi-frame noise reduction mode;
and preprocessing the acquired first image.
Further, the sharpening the plurality of second images in an asynchronous execution manner according to the overlapping step further includes:
judging whether the second image contains an incomplete character area or not;
if the second image contains an incomplete character area, repairing the character area;
and performing sharpening processing on the plurality of repaired second images in an asynchronous execution mode according to the overlapping step.
Embodiments of the present application further provide a storage medium containing terminal device executable instructions, which when executed by a terminal device processor, are configured to perform a text image processing method, where the method includes:
when a sharpening processing instruction facing a plurality of first images is received, searching a plurality of second images corresponding to continuous text contents from the plurality of first images;
respectively acquiring an operation step sequence corresponding to each second image according to the sharpening processing;
determining an overlapping step according to the operation step sequences corresponding to the plurality of second images, wherein the overlapping step is the same operation step in the operation step sequences corresponding to different second images;
sharpening the plurality of second images in an asynchronous manner according to the overlapping step.
Further, the searching for a plurality of second images corresponding to continuous text content from the plurality of first images includes:
acquiring time information and scene information of the first image;
searching a plurality of alternative images with the same scene information;
and determining a second image from the plurality of candidate images according to the time information and a preset time threshold.
Further, the sharpening the plurality of second images in an asynchronous execution manner according to the overlapping step includes:
judging whether the overlapping step is a continuous step;
if the overlapping step is a continuous step, acquiring a previous step and/or a subsequent step of the continuous step;
if the previous step exists, executing the previous step in a parallel execution mode, and caching the execution result of the previous step;
reading the execution result of the previous step cached in the asynchronous execution mode, and executing the overlapping step in the asynchronous execution mode;
and if the subsequent step exists, executing the subsequent step when the overlapping step is executed completely.
Further, after determining whether the overlapping step is a continuous step, the method further includes:
if the overlapping step is a non-continuous step, judging whether a distinguishing step between the overlapping steps can be used as a preprocessing step;
if the distinguishing step can be used as a preprocessing step, executing the preprocessing step and caching the execution result of the preprocessing step;
reading the execution result of the pre-processing step cached in the asynchronous execution mode, and executing the overlapping step in the asynchronous execution mode;
if the distinguishing step cannot be used as a preprocessing step, splitting the overlapping step into at least one set of consecutive sub-overlapping steps, and executing the sub-overlapping steps in an asynchronous execution manner.
Further, the sharpening the plurality of second images in an asynchronous execution manner according to the overlapping step includes:
starting a first thread to execute a first overlapping step of a third image, wherein the third image is any one of the second images;
and when the first overlapping step is finished, starting a second thread to execute a second overlapping step of the third image, and calling the first thread to execute a first overlapping step of a fourth image, wherein the second overlapping step is the next execution step of the first overlapping step, and the fourth image is an image except for the third image in the second image.
Further, before receiving a sharpening processing instruction for a plurality of first images, the method further includes:
when the first image is acquired, closing the multi-frame noise reduction mode;
and preprocessing the acquired first image.
Further, the sharpening the plurality of second images in an asynchronous execution manner according to the overlapping step further includes:
judging whether the second image contains an incomplete character area or not;
if the second image contains an incomplete character area, repairing the character area;
and performing sharpening processing on the plurality of repaired second images in an asynchronous execution mode according to the overlapping step.
The computer storage media of the embodiments of the present application may take any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present application may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++, or the like, as well as conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
Of course, the storage medium provided in the embodiments of the present application contains computer-executable instructions, and the computer-executable instructions are not limited to the text image processing operations described above, and may also perform related operations in the text image processing method provided in any embodiments of the present application.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present application and the technical principles employed. It will be understood by those skilled in the art that the present application is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the application. Therefore, although the present application has been described in more detail with reference to the above embodiments, the present application is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present application, and the scope of the present application is determined by the scope of the appended claims.

Claims (10)

1. A text image processing method, comprising:
when a sharpening processing instruction facing a plurality of first images is received, searching a plurality of second images corresponding to continuous text contents from the plurality of first images; wherein the second images are the first images excluding images with discontinuous text content and repeated images;
respectively acquiring an operation step sequence corresponding to each second image according to the sharpening processing;
determining an overlapping step according to the operation step sequences corresponding to the plurality of second images, wherein the overlapping step is the same operation step in the operation step sequences corresponding to different second images;
controlling the threads corresponding to the overlapping step to carry out sharpening processing on the plurality of second images in sequence in an asynchronous execution mode; wherein the same overlapping step corresponds to the same thread.
2. The method according to claim 1, wherein said searching for a plurality of second images corresponding to continuous text content from the plurality of first images comprises:
acquiring time information and scene information of the first image;
searching a plurality of alternative images with the same scene information;
and determining a second image from the plurality of candidate images according to the time information and a preset time threshold.
3. The method according to claim 1, wherein the step of controlling the threads corresponding to the overlapping step to sequentially sharpen the plurality of second images in an asynchronous execution manner comprises:
judging whether the overlapping step is a continuous step;
if the overlapping step is a continuous step, acquiring a previous step and/or a subsequent step of the continuous step;
if the previous step exists, executing the previous step in a parallel execution mode, and caching the execution result of the previous step;
reading the execution result of the previous step cached in the asynchronous execution mode, and executing the overlapping step in the asynchronous execution mode;
and if the subsequent step exists, executing the subsequent step when the overlapping step is executed completely.
4. The text image processing method according to claim 3, further comprising, after determining whether the overlapping step is a continuous step:
if the overlapping step is a non-continuous step, judging whether a distinguishing step between the overlapping steps can be used as a preprocessing step;
if the distinguishing step can be used as a preprocessing step, executing the preprocessing step and caching the execution result of the preprocessing step;
reading the execution result of the pre-processing step cached in the asynchronous execution mode, and executing the overlapping step in the asynchronous execution mode;
if the distinguishing step cannot be used as a preprocessing step, splitting the overlapping step into at least one set of consecutive sub-overlapping steps, and executing the sub-overlapping steps in an asynchronous execution manner.
5. The method according to claim 1, wherein the step of controlling the threads corresponding to the overlapping step to sequentially sharpen the plurality of second images in an asynchronous execution manner comprises:
starting a first thread to execute a first overlapping step of a third image, wherein the third image is any one of the second images;
and when the first overlapping step is finished, starting a second thread to execute a second overlapping step of the third image, and calling the first thread to execute a first overlapping step of a fourth image, wherein the second overlapping step is the next execution step of the first overlapping step, and the fourth image is an image except for the third image in the second image.
6. The text image processing method according to claim 1, further comprising, before receiving a sharpening processing instruction for the plurality of first images:
when the first image is acquired, closing the multi-frame noise reduction mode;
and preprocessing the acquired first image.
7. The method according to claim 1, wherein the thread corresponding to the overlapping step is controlled to sequentially sharpen the plurality of second images in an asynchronous execution mode, and further comprising:
judging whether the second image contains an incomplete character area or not;
if the second image contains an incomplete character area, repairing the character area;
and controlling the threads corresponding to the overlapping step to carry out sharpening processing on the plurality of repaired second images in sequence by adopting an asynchronous execution mode.
8. A text image processing apparatus characterized by comprising:
the receiving module is used for receiving a sharpening processing instruction facing to a plurality of first images;
the searching module is used for searching a plurality of second images corresponding to continuous text contents from the plurality of first images when the receiving module receives the sharpening processing instruction facing the plurality of first images; wherein the second images are the first images excluding images with discontinuous text content and repeated images;
the acquisition module is used for respectively acquiring, according to the sharpening processing received by the receiving module, the operation step sequence corresponding to each second image found by the searching module;
a determining module, configured to determine an overlapping step according to the operation step sequence corresponding to the plurality of second images acquired by the acquiring module, where the overlapping step is a same operation step in operation step sequences corresponding to different second images;
the asynchronous execution module is used for controlling the threads corresponding to the overlapping steps to carry out sharpening processing on the plurality of second images in sequence in an asynchronous execution mode; wherein the same overlapping step corresponds to the same thread.
9. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out a text image processing method according to any one of claims 1 to 7.
10. A terminal comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the text image processing method according to any one of claims 1 to 7 when executing the computer program.
CN201810468616.XA 2018-05-16 2018-05-16 Text image processing method and device, storage medium and terminal Active CN108647097B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810468616.XA CN108647097B (en) 2018-05-16 2018-05-16 Text image processing method and device, storage medium and terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810468616.XA CN108647097B (en) 2018-05-16 2018-05-16 Text image processing method and device, storage medium and terminal

Publications (2)

Publication Number Publication Date
CN108647097A CN108647097A (en) 2018-10-12
CN108647097B true CN108647097B (en) 2021-04-13

Family

ID=63756236

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810468616.XA Active CN108647097B (en) 2018-05-16 2018-05-16 Text image processing method and device, storage medium and terminal

Country Status (1)

Country Link
CN (1) CN108647097B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115035520B (en) * 2021-11-22 2023-04-18 荣耀终端有限公司 Character recognition method for image, electronic device and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104166698A (en) * 2014-08-01 2014-11-26 小米科技有限责任公司 Data processing method and device
CN104463103A (en) * 2014-11-10 2015-03-25 小米科技有限责任公司 Image processing method and device
CN106358003A (en) * 2016-08-31 2017-01-25 华中科技大学 Video analysis and accelerating method based on thread level flow line
CN107256528A (en) * 2017-04-19 2017-10-17 上海卓易电子科技有限公司 A kind of method and device for handling picture
CN107330859A (en) * 2017-06-30 2017-11-07 广东欧珀移动通信有限公司 A kind of image processing method, device, storage medium and terminal
CN107967339A (en) * 2017-12-06 2018-04-27 广东欧珀移动通信有限公司 Image processing method, device, computer-readable recording medium and computer equipment

Also Published As

Publication number Publication date
CN108647097A (en) 2018-10-12

Similar Documents

Publication Publication Date Title
KR102597680B1 (en) Electronic device for providing customized quality image and method for controlling thereof
CN110100251B (en) Apparatus, method, and computer-readable storage medium for processing document
CN111491102B (en) Detection method and system for photographing scene, mobile terminal and storage medium
CN109089043B (en) Shot image preprocessing method and device, storage medium and mobile terminal
CN112954210B (en) Photographing method and device, electronic equipment and medium
CN109040523B (en) Artifact eliminating method and device, storage medium and terminal
CN108647351B (en) Text image processing method and device, storage medium and terminal
CN107748615B (en) Screen control method and device, storage medium and electronic equipment
CN107615745B (en) Photographing method and terminal
CN111866392B (en) Shooting prompting method and device, storage medium and electronic equipment
US20220417417A1 (en) Content Operation Method and Device, Terminal, and Storage Medium
CN111818263B (en) Shooting parameter processing method and device, mobile terminal and storage medium
CN112116624A (en) Image processing method and electronic equipment
CN113014846B (en) Video acquisition control method, electronic equipment and computer readable storage medium
WO2017107855A1 (en) Picture searching method and device
CN112532885B (en) Anti-shake method and device and electronic equipment
CN112291475B (en) Photographing method and device and electronic equipment
CN108763350B (en) Text data processing method and device, storage medium and terminal
US11190653B2 (en) Techniques for capturing an image within the context of a document
CN109040729B (en) Image white balance correction method and device, storage medium and terminal
CN108259767B (en) Image processing method, image processing device, storage medium and electronic equipment
CN108647097B (en) Text image processing method and device, storage medium and terminal
CN113810588B (en) Image synthesis method, terminal and storage medium
CN109218620B (en) Photographing method and device based on ambient brightness, storage medium and mobile terminal
CN108495038B (en) Image processing method, image processing device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant