WO2021203832A1 - Method, apparatus, and storage medium for removing handwritten content from a text image - Google Patents

Method, apparatus, and storage medium for removing handwritten content from a text image

Info

Publication number
WO2021203832A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
handwritten
text
pixel
handwritten content
Application number
PCT/CN2021/076250
Other languages
English (en)
French (fr)
Inventor
徐青松
李青
Original Assignee
杭州睿琪软件有限公司
Application filed by 杭州睿琪软件有限公司
Priority to KR1020227037762A (published as KR20220160660A)
Priority to JP2022560485A (published as JP2023523152A)
Priority to US17/915,488 (published as US20230222631A1)
Publication of WO2021203832A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V 30/10 Character recognition
    • G06V 30/14 Image acquisition
    • G06V 30/146 Aligning or centring of the image pick-up or image-field
    • G06V 30/147 Determination of region of interest
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/77 Retouching; Inpainting; Scratch removal
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V 10/273 Removing elements interfering with the pattern to be recognised
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/28 Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V 30/10 Character recognition
    • G06V 30/22 Character recognition characterised by the type of writing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V 30/10 Character recognition
    • G06V 30/24 Character recognition characterised by the processing or recognition method
    • G06V 30/242 Division of the character sequences into groups prior to recognition; Selection of dictionaries
    • G06V 30/244 Division of the character sequences into groups prior to recognition; Selection of dictionaries using graphical properties, e.g. alphabet type or font
    • G06V 30/2455 Discrimination between machine-print, hand-print and cursive writing

Definitions

  • the invention relates to a method, a device and a storage medium for removing handwritten content in a text image.
  • the present invention provides a method for removing handwritten content in a text image, including: obtaining an input image of a text page to be processed, wherein the input image includes a handwritten area, and the handwritten area includes handwritten content; recognizing the input image using an image segmentation model to obtain the initial handwritten pixels of the handwritten content; blurring the initial handwritten pixels to obtain a handwritten pixel mask area; determining the handwritten content according to the handwritten pixel mask area; and removing the handwritten content in the input image to obtain an output image.
  • removing the handwritten content in the input image to obtain an output image includes:
  • determining the non-handwritten pixels in the handwritten pixel mask area in the input image according to the pixel value of the initial handwritten pixel and the position of the handwritten pixel mask area; and removing the content of the handwritten pixel mask area in the input image to obtain an intermediate output image;
  • removing the handwritten content in the input image to obtain an output image includes:
  • the handwritten content in the input image is removed according to the non-handwritten pixels in the handwritten pixel mask area and the handwritten pixel mask area to obtain the output image.
  • removing the handwritten content in the input image to obtain an output image includes: cutting the handwritten content out of the input image to obtain an intermediate output image; and performing binarization processing on the intermediate output image to obtain the output image.
  • removing the handwritten content in the input image to obtain the output image includes: obtaining replacement pixels; and replacing the pixels of the handwritten content with the replacement pixels to remove the handwritten content from the input image to obtain the output image.
  • replacing the pixels of the handwritten content with the replacement pixels to remove the handwritten content from the input image to obtain the output image includes: replacing the pixels of the handwritten content with the replacement pixels to remove the handwritten content from the input image to obtain an intermediate output image; and performing binarization processing on the intermediate output image to obtain the output image.
  • the replacement pixels are obtained according to the pixels of the handwritten content through an image restoration algorithm based on pixel neighborhood calculation.
  • the obtaining of replacement pixels further includes recognizing the input image using a region recognition model to obtain the handwriting area, and the replacement pixel is any pixel in the handwriting area other than the pixels of the handwritten content; or, the replacement pixel is the average value of the pixel values of all pixels in the handwriting area other than the pixels of the handwritten content.
  • obtaining the input image of the text page to be processed includes: obtaining an original image of the text page to be processed, wherein the original image includes the text area to be processed; performing edge detection on the original image to determine the text area to be processed in the original image; and performing normalization processing on the text area to be processed to obtain the input image.
  • the image segmentation model is a pre-trained U-Net model for segmenting the input image.
  • the initial handwritten pixels are blurred by a Gaussian filter function, and the area of the initial handwritten pixels is enlarged to obtain the handwritten pixel mask area.
  • the present invention also provides an apparatus for removing handwritten content in a text image, including: a memory for non-temporarily storing computer readable instructions; and a processor for running the computer readable instructions, wherein the When the computer-readable instructions are executed by the processor, the method for removing handwritten content in a text image according to any one of the above embodiments is executed.
  • the present invention also provides a storage medium that non-transitorily stores computer-readable instructions, wherein when the computer-readable instructions are executed by a processor, the method for removing handwritten content in a text image according to any one of the above embodiments is executed.
  • FIG. 1 is a schematic flowchart of a method for removing handwritten content in a text image according to an embodiment of the present invention
  • FIG. 2A is a schematic diagram of an original image provided by an embodiment of the present invention.
  • FIG. 2B is a schematic diagram of an output image provided by an embodiment of the present invention.
  • FIG. 3 is a schematic block diagram of a device for removing handwritten content in a text image according to an embodiment of the present invention
  • FIG. 4 is a schematic diagram of a storage medium provided by an embodiment of the present invention.
  • Fig. 5 is a schematic diagram of a hardware environment provided by an embodiment of the present invention.
  • At least one embodiment of the present invention provides a method, device and storage medium for removing handwritten content from a text image.
  • the method for removing handwritten content from a text image includes: obtaining an input image of a text page to be processed, where the input image includes a handwritten area, and the handwritten area includes handwritten content; using an image segmentation model to recognize the input image to obtain the initial handwritten pixels of the handwritten content; blurring the initial handwritten pixels to obtain the handwritten pixel mask area; determining the handwritten content according to the handwritten pixel mask area; and removing the handwritten content in the input image to obtain the output image.
  • the method for removing handwritten content in a text image can effectively remove the handwritten content in the handwritten area in the input image, so as to facilitate the output of an image or file that only includes printed content.
  • the method for removing the handwritten content in the text image can also convert the input image into a form that is convenient for printing, so that the user can print the input image into a paper form for storage or distribution.
  • FIG. 1 is a schematic flowchart of a method for removing handwritten content in a text image provided by at least one embodiment of the present invention
  • FIG. 2A is a schematic diagram of an original image provided by at least one embodiment of the present invention, and FIG. 2B is a schematic diagram of an output image provided by at least one embodiment of the present invention.
  • the method for removing handwritten content in a text image provided by an embodiment of the present invention includes steps S10 to S14.
  • the method for removing handwritten content in a text image obtains an input image of a text page to be processed in step S10.
  • the input image includes a handwritten area, and the handwritten area includes handwritten content.
  • the input image can be any image that includes handwritten content.
  • the input image may be an image taken by an image acquisition device (for example, a digital camera or a mobile phone, etc.), and the input image may be a grayscale image or a color image. It should be noted that the input image refers to a form in which the text page to be processed is presented in a visual manner, such as a picture of the text page to be processed.
  • the handwriting area does not have a fixed shape but depends on the handwritten content; that is, the area containing handwritten content is the handwriting area, and the handwriting area can be a regular shape (for example, a rectangle) or an irregular shape.
  • the handwriting area may include a filled area, a handwritten draft, or other handwritten marked areas.
  • the input image also includes a text printing area, and the text printing area includes printed content.
  • the shape of the text printing area may also be a regular shape (for example, a rectangle, etc.) or an irregular shape.
  • for convenience of description, the following takes the case where the shape of each handwriting area and each text printing area is a rectangle as an example.
  • the present invention includes but is not limited to this.
  • the text pages to be processed may include books, newspapers, periodicals, documents, forms, contracts, and so on.
  • Books, newspapers, and periodicals include all kinds of document pages with articles or patterns
  • documents include all kinds of invoices, receipts, express orders, etc.
  • the forms can be various types of forms, such as year-end summary tables, entry lists, price summary tables, application forms, etc.
  • contracts can include various forms of contract text pages, etc.
  • the invention does not specifically limit the type of text page to be processed.
  • the text page to be processed may be text in paper form or text in electronic form.
  • the printed content can include the title text of each item, and the handwritten content can include information filled in by the user, such as name, address, phone number, etc. (in this case, the information is the user's personal information filled in by hand, not general information)
  • the to-be-processed text page is article-type text
  • the printed content may be article content
  • the handwritten content may be user notes or other handwritten marks.
  • the printed content can include item titles such as "name", "gender", "ethnicity", and "work history", and the handwritten content can include information handwritten by a user (for example, an employee) in the entry form, such as the user's name, gender, ethnicity, and work experience.
  • the printed content can also include various symbols, graphics, and so on.
  • the shape of the text page to be processed may be a rectangle or the like, and the shape of the input image may be a regular shape (for example, a parallelogram, a rectangle, etc.) to facilitate printing.
  • the present invention is not limited to this.
  • the input image may also have an irregular shape.
  • the size of the input image and the size of the text page to be processed are not the same.
  • the present invention is not limited to this, and the size of the input image and the size of the text page to be processed may also be same.
  • the text page to be processed includes printed content and handwritten content, where the printed content may be machine-printed content and the handwritten content is content handwritten by the user; the handwritten content may include handwritten characters.
  • it should be noted that printed content does not only refer to text, characters, graphics, and other content entered on an electronic device through an input device. For example, when the text page to be processed is text such as notes, the content of the notes may also be handwritten by the user; in that case, the printed content is the pre-printed content on the blank notebook page used for handwriting, such as horizontal lines.
  • the printed content may include characters in various languages, such as Chinese (for example, Chinese characters or pinyin), English, Japanese, French, Korean, etc.
  • the printed content may also include numbers, various symbols (for example, check marks, crosses, and various operation symbols, etc.), and various graphics.
  • the handwritten content may also include text, numbers, various symbols, and various graphics in various languages.
  • the to-be-processed text page 100 is a form, and the area surrounded by four boundary lines (straight lines 101A-101D) represents the to-be-processed text area 100 corresponding to the to-be-processed text page.
  • the printing area includes a form area
  • the printed content can include the text of each item, such as name, birthday, etc.
  • the printed content can also include the logo graphic in the upper right corner of the to-be-processed text area 100 (shown masked), etc.
  • the handwritten area includes a handwritten information area
  • the handwritten content may include personal information handwritten by the user, for example, the user’s handwritten name, birthday information, health information, tick marks, and so on.
  • the input image may include multiple pieces of handwritten content and multiple pieces of printed content.
  • The pieces of handwritten content are spaced apart from each other, and the pieces of printed content are also spaced apart from each other.
  • Some of the pieces of handwritten content may be the same (that is, the characters of the handwritten content are the same, but their specific handwritten shapes differ); some of the pieces of printed content may also be the same.
  • the present invention is not limited to this, a plurality of handwritten contents may also be different from each other, and a plurality of printed contents may also be different from each other.
  • step S10 may include: obtaining an original image of a text page to be processed, where the original image includes a text area to be processed; performing edge detection on the original image to determine the text area to be processed in the original image; The text area to be processed is normalized to obtain the input image.
  • a neural network or an OpenCV-based edge detection algorithm can be used to perform edge detection on the original image to determine the text area to be processed.
  • OpenCV is an open source computer vision library.
  • Edge detection algorithms based on OpenCV include Sobel, Scharr, Canny, Laplacian, Prewitt, Marr-Hildreth, and many other algorithms.
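  • For illustration only (not part of the patent disclosure), a minimal OpenCV sketch of such an edge detection step might look as follows; the file name and threshold values are assumptions:

```python
import cv2

# Illustrative sketch: derive a line drawing of the gray contours from the
# original image with OpenCV's Canny edge detector.
original = cv2.imread("original.jpg")                # illustrative file name
gray = cv2.cvtColor(original, cv2.COLOR_BGR2GRAY)
smoothed = cv2.GaussianBlur(gray, (5, 5), 0)         # suppress noise before edge detection
edges = cv2.Canny(smoothed, 50, 150)                 # binary map of gray-contour edges
```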
  • performing edge detection on the original image to determine the text area to be processed in the original image may include: processing the original image to obtain a line drawing of the gray contours in the original image, where the line drawing includes multiple lines; merging similar lines in the line drawing to obtain multiple initial merged lines, and determining a boundary matrix according to the multiple initial merged lines; merging similar lines among the multiple initial merged lines to obtain target lines, and also taking the initial merged lines that were not merged as target lines, thereby obtaining multiple target lines; determining multiple reference boundary lines from the multiple target lines according to the boundary matrix; processing the original image through a pre-trained boundary line region recognition model to obtain multiple boundary line areas of the text page to be processed in the original image; for each boundary line area, determining the target boundary line corresponding to that boundary line area from the multiple reference boundary lines; and determining the edges of the text area to be processed in the original image according to the determined multiple target boundary lines.
  • processing the original image to obtain a line drawing of the gray contour in the original image includes: processing the original image by an edge detection algorithm based on OpenCV to obtain a line drawing of the gray contour in the original image .
  • merging similar lines in the line drawing to obtain multiple initial merged lines includes: obtaining long lines in the line drawing, where a long line is a line whose length exceeds the first preset threshold; obtaining multiple groups of first-type lines from the long lines, where a group of first-type lines includes at least two successively adjacent long lines, and the angle between any two adjacent long lines is less than the second preset threshold; and, for each group of first-type lines, sequentially merging the long lines in that group to obtain an initial merged line.
  • the boundary matrix is determined in the following way: the multiple initial merged lines and the unmerged long lines are redrawn, and the position information of the pixels of all redrawn lines is mapped onto a matrix the size of the entire original image;
  • the values at the positions of the pixels of these lines are set to the first value, and the values at the positions of pixels other than these lines are set to the second value, thereby forming the boundary matrix.
  • merging similar lines among the multiple initial merged lines to obtain the target lines includes: obtaining multiple groups of second-type lines from the multiple initial merged lines, where a group of second-type lines includes at least two adjacent initial merged lines, and the angle between any two adjacent initial merged lines is less than the third preset threshold; and, for each group of second-type lines, merging each initial merged line in the group in turn to obtain a target line.
  • the first preset threshold may be 2 pixels in length, and the second preset threshold and the third preset threshold may be 15 degrees. It should be noted that the first preset threshold, the second preset threshold, and the third preset threshold can be set according to actual application requirements.
  • multiple reference boundary lines are determined from the multiple target lines as follows: for each target line, the target line is extended, a line matrix is determined according to the extended target line, and the line matrix is compared with the boundary matrix; the number of pixels on the extended target line that belong to the boundary matrix is counted as the score of that target line. That is, comparing the line matrix with the boundary matrix determines how many pixels fall into the boundary matrix, namely how many positions hold the same first value (such as 255) in both matrices; the line matrix and the boundary matrix have the same size. According to the scores of the target lines, the reference boundary lines are determined from the multiple target lines. It should be noted that there may be multiple target lines with the best scores; therefore, according to the score of each target line, the multiple best-scoring target lines are determined from the multiple target lines as the reference boundary lines.
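  • For illustration, the boundary-matrix bookkeeping and line scoring described above might be sketched as follows; the helper names and matrix values are assumptions:

```python
import cv2
import numpy as np

FIRST_VALUE, SECOND_VALUE = 255, 0   # the two matrix values described above

def draw_line_matrix(shape, lines):
    """Redraw lines into a matrix the size of the original image: pixels on a
    line get FIRST_VALUE, all other positions get SECOND_VALUE."""
    m = np.full(shape, SECOND_VALUE, dtype=np.uint8)
    for (x1, y1), (x2, y2) in lines:
        cv2.line(m, (x1, y1), (x2, y2), FIRST_VALUE, 1)
    return m

def score_line(extended_line, boundary_matrix):
    """Score of a target line: the number of positions holding FIRST_VALUE in
    both the line matrix and the boundary matrix (matrices are the same size)."""
    line_matrix = draw_line_matrix(boundary_matrix.shape, [extended_line])
    overlap = (line_matrix == FIRST_VALUE) & (boundary_matrix == FIRST_VALUE)
    return int(np.count_nonzero(overlap))
```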
  • the line matrix is determined in the following way: the extended target line or straight line is redrawn, the position information of the pixels of the redrawn line is mapped onto a matrix the size of the entire original image, the values at the positions of the pixels of the line are set to the first value, and the values at the positions of pixels other than the line are set to the second value, thereby forming the line matrix.
  • determining the target boundary line corresponding to a boundary line area from the multiple reference boundary lines includes: calculating the slope of each reference boundary line; for each boundary line area, converting the boundary line area into multiple straight lines using the Hough transform and calculating the average slope of those straight lines; and then judging whether there is a reference boundary line among the multiple reference boundary lines whose slope matches the average slope. If so, that reference boundary line is determined as the target boundary line corresponding to the boundary line area. If it is determined that no reference boundary line among the multiple reference boundary lines has a slope matching the average slope, then, for each straight line obtained from the conversion of the boundary line area, the line matrix formed by that straight line is compared with the boundary matrix, and the number of pixels on the line that belong to the boundary matrix is counted as the score of the line; the line with the best score is determined as the target boundary line corresponding to the boundary line area. Here the line matrix and the boundary matrix have the same size. It should be noted that if there are multiple straight lines with the best score, the first of them according to the sorting algorithm is used as the target boundary line.
  • the boundary line region recognition model is a neural network-based model.
  • the boundary line region recognition model can be established through machine learning training.
  • the text area to be processed is determined by multiple target boundary lines (for example, four target boundary lines), for example according to the multiple intersection points of the target boundary lines: every two adjacent target boundary lines intersect to obtain an intersection point, and the multiple intersection points together with the multiple target boundary lines define the area where the text to be processed is located in the original image.
  • the text area to be processed may be a text area surrounded by four target boundary lines.
  • the four target boundary lines are all straight lines, and the four target boundary lines are respectively the first target boundary line 101A, the second target boundary line 101B, the third target boundary line 101C, and the fourth target boundary line 101D.
  • the original image may also include a non-text area, for example, an area other than the area enclosed by the four border lines in FIG. 2A.
  • performing normalization processing on the text area to be processed to obtain the input image includes: performing projection transformation on the text area to be processed to obtain a front view of the text area to be processed, and the front view is the input image.
  • Projective transformation (perspective transformation) refers to projecting an image onto a new viewing plane, and is also known as projective mapping.
  • the true shape of the text to be processed has changed in the original image, that is, geometric distortion has occurred.
  • for example, the shape of the text to be processed (i.e., the form) was originally a rectangle, but the shape of the text to be processed in the original image has changed, becoming an irregular polygon.
  • performing projection transformation on the text area to be processed in the original image can transform the text area to be processed from an irregular polygon into a rectangle or parallelogram, etc.; that is, it corrects the text area to be processed to remove the influence of geometric distortion and obtains the front view of the text to be processed in the original image.
  • the projection transformation can process the pixels in the text area to be processed according to the space projection conversion coordinates to obtain the front view of the text to be processed, which will not be repeated here.
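  • For illustration, such a projection transformation might be sketched with OpenCV as follows; the corner coordinates and output size are assumptions:

```python
import cv2
import numpy as np

# Illustrative sketch: projection (perspective) transformation of the
# quadrilateral text area into a rectangular front view. The corners are the
# intersection points of the four target boundary lines, ordered
# top-left, top-right, bottom-right, bottom-left.
original = cv2.imread("original.jpg")                                 # illustrative file name
corners = np.float32([[45, 30], [980, 55], [965, 1380], [30, 1360]])  # TL, TR, BR, BL
width, height = 1000, 1400
target = np.float32([[0, 0], [width, 0], [width, height], [0, height]])

matrix = cv2.getPerspectiveTransform(corners, target)
front_view = cv2.warpPerspective(original, matrix, (width, height))   # the input image
```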
  • the text area to be processed may also not be normalized; the text area to be processed may be directly cut out of the original image to obtain a separate image of the text area to be processed, and this image of the text area to be processed is the input image.
  • the original image may be an image directly collected by the image acquisition device, or may be an image obtained after preprocessing the image directly collected by the image acquisition device.
  • the original image can be a grayscale image or a color image.
  • the method for removing handwritten content in the text image provided by the embodiment of the present invention may also include an operation of preprocessing the original image. Preprocessing can eliminate irrelevant information or noise information in the original image, so as to better process the original image.
  • the preprocessing may include, for example, processing such as scaling, cropping, gamma correction, image enhancement, or noise reduction filtering on the image directly collected by the image collection device.
  • the original image can be used as the input image.
  • the original image can be directly recognized to determine the handwritten content in the original image, and the handwritten content in the original image is then removed to obtain the output image. Alternatively, the original image can be directly recognized to determine the handwritten content in the original image; the handwritten content in the original image is then removed to obtain an intermediate output image; edge detection is then performed on the intermediate output image to determine the text area to be processed in the intermediate output image; and the text area to be processed is corrected to obtain the output image. That is, in some embodiments of the present invention, the handwritten content in the original image can be removed first to obtain the intermediate output image, and edge detection and normalization processing are then performed on the intermediate output image.
  • step S11 the input image is recognized using an image segmentation model to obtain the initial handwritten pixels of the handwritten content.
  • an image segmentation model refers to a model for region recognition (or division) of an input image
  • an image segmentation model is implemented using machine learning technology (for example, convolutional neural network technology) and running on a general-purpose computing device or a dedicated computing device, for example
  • the image segmentation model is a pre-trained model.
  • the neural network applied to the image segmentation model can also be implemented with other neural network models that achieve the same function, including a deep convolutional neural network, a masked region convolutional neural network (Mask-RCNN), a deep residual network, an attention model, etc.; no undue restriction is made here.
  • the U-Net model is an improved FCN (Fully Convolutional Network) structure; it follows the FCN idea for image semantic segmentation, namely using convolutional layers and pooling layers for feature extraction and then using deconvolutional layers to restore the image size.
  • the U-Net network model performs well for image segmentation. Deep learning is good at solving classification problems, and using this characteristic for image segmentation essentially classifies each pixel in the image; finally, the points of different categories are marked in different channels, achieving the effect of classifying and marking the feature information in the target area.
  • the U-Net model can determine the initial handwritten pixels of the handwritten content in the input image.
  • other neural network models such as Mask-RCNN can also be used to determine the initial handwritten pixels of the handwritten content.
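  • For illustration, obtaining the initial handwritten pixels from a segmentation model's output might be sketched as follows; `segmentation_model`, its predict() call, the file name, and the 0.5 threshold are assumptions:

```python
import cv2
import numpy as np

# Illustrative sketch: the image segmentation model (for example a U-Net)
# outputs a per-pixel probability that the pixel belongs to handwritten
# content; thresholding that map yields the initial handwritten pixels.
# `segmentation_model` and its predict() call are hypothetical placeholders.
input_image = cv2.imread("input.jpg")                    # illustrative file name
prob_map = segmentation_model.predict(input_image)       # H x W array in [0, 1]
initial_handwritten = (prob_map > 0.5).astype(np.uint8) * 255
```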
  • In step S12, blur processing is performed on the initial handwritten pixels to obtain the handwritten pixel mask area. When the input image is recognized by the image segmentation model, the obtained initial handwritten pixels may not be all of the handwritten pixels; the missing handwritten pixels, however, are generally adjacent to the initial handwritten pixels, so blur processing is performed on the initial handwritten pixels to expand the handwritten pixel area and obtain the handwritten pixel mask area.
  • the handwritten pixel mask area basically contains all the handwritten pixels.
  • Gaussian blurring can be performed on the initial handwritten pixels through the GaussianBlur function based on the OpenCV Gaussian filter to expand the initial handwritten pixel area, thereby obtaining the handwritten pixel mask area.
  • Gaussian filtering performs a convolution calculation between each point of the input array and a Gaussian filter template, and these results form the filtered output array; it is a process of weighted averaging over the image of the initial handwritten pixels, in which the value of each pixel is obtained as a weighted average of its own value and the values of other pixels in its neighborhood.
  • After Gaussian blur processing, the handwritten pixel image becomes blurred, but its area is enlarged.
  • any other blur processing technology can also be used to blur the initial handwritten pixels, and there are no too many restrictions here.
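  • For illustration, the mask expansion by Gaussian blurring might be sketched as follows, continuing the sketch above; the kernel size is an assumption:

```python
import cv2
import numpy as np

# Illustrative sketch: blur the binary map of the initial handwritten pixels
# with OpenCV's GaussianBlur and keep every pixel the blur touched. The
# strokes spread into their neighbourhood, so the resulting mask area is
# larger than the initial handwritten pixel area.
blurred = cv2.GaussianBlur(initial_handwritten, (15, 15), 0)
handwritten_mask_area = (blurred > 0).astype(np.uint8) * 255
```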
  • In step S13, the handwritten content is determined according to the handwritten pixel mask area.
  • According to the handwritten pixel mask area, combined with the initial handwritten pixels, basically all handwritten pixels of the handwritten content are determined, thereby determining the handwritten content.
  • In step S14, the handwritten content in the input image is removed to obtain the output image.
  • For example, the position of the handwritten pixel mask area in the input image can be determined first, and non-handwritten pixels are then determined in the area of the input image corresponding to that position: according to the pixel values of the initial handwritten pixels, other pixels in that area whose pixel values differ greatly from the initial handwritten pixel values are found and determined to be non-handwritten pixels. For example, a threshold on the pixel value difference can be set, and when a pixel in the area has a pixel value difference outside the threshold range, it is determined to be a non-handwritten pixel.
  • the inpaint function based on OpenCV can be used to remove the content of the handwritten pixel mask area.
  • the inpaint function based on OpenCV uses the neighborhood of an area to repair a selected area in the image; that is, the pixels in the area of the input image corresponding to the position of the handwritten pixel mask area are repaired using neighborhood pixels, thereby removing the content of the handwritten pixel mask area in the input image and obtaining an intermediate output image.
  • In other embodiments, the non-handwritten pixels in the handwritten pixel mask area are first determined in the manner described above.
  • the handwritten content in the input image is then removed according to the non-handwritten pixels in the handwritten pixel mask area and the handwritten pixel mask area to obtain the output image; that is, the non-handwritten pixels are excluded from the handwritten pixel mask area so that only the remaining pixels are removed. The non-handwritten pixels are thus retained and prevented from being removed by mistake, and the output image is finally obtained.
  • the inpaint function based on OpenCV can be used to remove the content of the handwritten pixel mask area that excludes non-handwritten pixels.
  • the inpaint function based on OpenCV uses the neighborhood of an area to repair a selected area in the image; that is, the pixels in the area of the input image corresponding to the position of the handwritten pixel mask area, except for the non-handwritten pixels, are repaired using neighborhood pixels, thereby removing the content of the handwritten pixel mask area in the input image.
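  • For illustration, the removal step with non-handwritten pixels excluded might be sketched as follows, continuing the sketches above; the gray-level difference threshold of 60 is an assumption:

```python
import cv2
import numpy as np

# Illustrative sketch, continuing the ones above (`input_image`,
# `initial_handwritten`, `handwritten_mask_area`).
input_gray = cv2.cvtColor(input_image, cv2.COLOR_BGR2GRAY)

# A pixel inside the mask area is judged non-handwritten when its gray value
# differs too much from the typical gray value of the initial handwritten pixels.
mean_hw = input_gray[initial_handwritten > 0].mean()
diff = np.abs(input_gray.astype(np.float32) - mean_hw)
non_handwritten = (handwritten_mask_area > 0) & (diff > 60)

# Exclude non-handwritten pixels from the repair mask so they are retained,
# then repair the remaining mask pixels from their neighbourhood with inpaint.
repair_mask = handwritten_mask_area.copy()
repair_mask[non_handwritten] = 0
output = cv2.inpaint(input_image, repair_mask, 3, cv2.INPAINT_TELEA)
```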
  • removing the handwritten content in the input image to obtain an output image includes: cutting the handwritten content out of the input image to obtain an intermediate output image; and binarizing the intermediate output image to obtain the output image.
  • Binarization processing is the process of setting the gray value of each pixel of the intermediate output image to 0 or 255, that is, giving the entire intermediate output image a clear black-and-white appearance. Binarization greatly reduces the amount of data in the intermediate output image, so that the outline of the target can be highlighted. It converts the intermediate output image into a grayscale image with obvious black-and-white contrast (i.e., the output image); the converted grayscale image has less noise interference, which can effectively improve the recognition and printing of the content in the output image.
  • after the handwritten content is cut out, the pixels in the area of the input image corresponding to the handwritten content are empty, that is, there are no pixels there.
  • When the intermediate output image is binarized, the areas where the pixels of the intermediate output image are empty may be left unprocessed; alternatively, when the intermediate output image is binarized, the empty pixel areas of the intermediate output image can be filled with a gray value of 255. In this way, the processed text image forms a whole without unsightly void areas left by the handwritten content.
  • the final output image can be used to facilitate the user to print the output image into a paper form.
  • the output image can be printed into a paper form for other users to fill in.
  • the method of binarization processing can be a threshold method.
  • the threshold method includes: setting a binarization threshold, and comparing the pixel value of each pixel in the intermediate output image with the binarization threshold. If the pixel value of a pixel in the intermediate output image is greater than or equal to the binarization threshold, the pixel value of that pixel is set to a gray value of 255; if the pixel value of a pixel in the intermediate output image is less than the binarization threshold, the pixel value of that pixel is set to a gray value of 0. In this way, the intermediate output image can be binarized.
  • the selection methods for the binarization threshold include the bimodal method, the P-parameter method, the Otsu method (maximum between-class variance method), the maximum entropy method, the iterative method, and so on.
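  • For illustration, the threshold method might be sketched with OpenCV as follows; the fixed threshold of 127 and the file name are assumptions, and THRESH_OTSU shows automatic threshold selection by the Otsu method:

```python
import cv2

# Illustrative sketch of the threshold method: pixels at or above the
# binarization threshold become gray value 255, pixels below become 0.
intermediate = cv2.imread("intermediate.png", cv2.IMREAD_GRAYSCALE)  # illustrative name
threshold = 127                                                      # illustrative fixed threshold
_, manual = cv2.threshold(intermediate, threshold, 255, cv2.THRESH_BINARY)

# Alternatively, let OpenCV choose the threshold by the Otsu method.
otsu_t, auto = cv2.threshold(intermediate, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
```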
  • In other embodiments, performing binarization processing on the intermediate output image includes: obtaining the intermediate output image; performing grayscale processing on the intermediate output image to obtain a grayscale image of the intermediate output image; performing binarization processing on the grayscale image to obtain a binarized image of the intermediate output image; using the binarized image as the guide image, performing guided filtering on the grayscale image to obtain a filtered image; determining, according to a second threshold, the high-value pixels in the filtered image whose gray values are greater than the second threshold; expanding the gray values of the high-value pixels according to a preset expansion coefficient to obtain an expanded image; sharpening the expanded image to obtain a clear image; and adjusting the contrast of the clear image to obtain the output image.
  • gray-scale processing methods include component method, maximum value method, average method, and weighted average method.
  • the preset expansion factor is 1.2-1.5, for example, 1.3.
  • the gray value of each high-value pixel is multiplied by a preset expansion coefficient to expand the gray value of the high-value pixel, thereby obtaining an expanded image with more obvious black and white contrast.
  • the second threshold is the sum of the mean gray value of the filtered image and the standard deviation of the gray value.
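  • For illustration, the guided-filtering pipeline described above might be sketched as follows; the radius, eps, and expansion coefficient values are assumptions, and guidedFilter requires the opencv-contrib-python package:

```python
import cv2
import numpy as np

intermediate = cv2.imread("intermediate.png")              # illustrative file name
gray = cv2.cvtColor(intermediate, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Guided filtering of the grayscale image, with the binarized image as the guide.
filtered = cv2.ximgproc.guidedFilter(binary, gray, 8, 100.0)

# Second threshold: mean gray value of the filtered image plus its standard deviation.
second_threshold = filtered.mean() + filtered.std()

# Expand the gray values of the high-value pixels by the preset expansion coefficient.
expansion = 1.3                                            # within the stated 1.2-1.5 range
expanded = filtered.astype(np.float32)
high = expanded > second_threshold
expanded[high] = np.minimum(expanded[high] * expansion, 255)
expanded = expanded.astype(np.uint8)
```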
  • sharpening the expanded image to obtain a clear image includes: using Gaussian filtering to blur the expanded image to obtain a blurred image; and, according to preset mixing coefficients, mixing the blurred image and the expanded image in proportion to obtain the clear image.
  • For example, the clear image can be obtained as f3(i,j) = k1·f1(i,j) + k2·f2(i,j), where f1(i,j) is the gray value of the pixel at (i,j) in the expanded image, f2(i,j) is the gray value of the pixel at (i,j) in the blurred image, f3(i,j) is the gray value of the pixel at (i,j) in the clear image, k1 is the preset mixing coefficient of the expanded image, and k2 is the preset mixing coefficient of the blurred image.
  • the preset mixing coefficient of the expanded image is 1.5, and the preset mixing coefficient of the blurred image is -0.5.
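  • For illustration, this proportional mixing might be sketched with OpenCV's addWeighted, continuing the sketch above; the Gaussian sigma is an assumption:

```python
import cv2

# Illustrative sketch: blur the expanded image, then mix the two in proportion.
# cv2.addWeighted computes k1*expanded + k2*blurred, matching the formula above
# with k1 = 1.5 and k2 = -0.5 (an unsharp-masking style sharpening).
blurred = cv2.GaussianBlur(expanded, (0, 0), sigmaX=3)
clear = cv2.addWeighted(expanded, 1.5, blurred, -0.5, 0)
```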
  • adjusting the contrast of a clear image includes: adjusting the gray value of each pixel of the clear image according to the average gray value of the clear image.
  • the gray value of each pixel of the clear image can be adjusted by a formula in which f'(i,j) is the gray value of the pixel of the enhanced image at (i,j), f(i,j) is the gray value of the pixel of the clear image at (i,j), the average gray value of the clear image is a parameter, and t is the intensity value.
  • the intensity value may be 0.1-0.5, for example, the intensity value may be 0.2. In practical applications, the intensity value can be selected according to the final black and white enhancement effect to be achieved.
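  • For illustration only: since the adjustment formula itself does not survive in the text above, the sketch below uses a standard mean-based contrast stretch as an assumed reconstruction consistent with the variable definitions:

```python
import numpy as np

# Assumed reconstruction (not quoted from the patent): push each pixel away
# from the average gray value of the clear image by the intensity value t.
t = 0.2                                   # intensity value, chosen from 0.1-0.5
mean = clear.mean()                       # average gray value of the clear image
enhanced = mean + (1.0 + t) * (clear.astype(np.float32) - mean)
enhanced = np.clip(enhanced, 0, 255).astype(np.uint8)
```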
  • step S14 includes: obtaining replacement pixels; replacing pixels of handwritten content with replacement pixels to remove the handwritten content from the input image to obtain an output image.
  • the replacement pixels can be adjacent pixels outside the handwritten pixel mask area; that is, the current handwritten pixels to be replaced are replaced with adjacent pixels outside the handwritten pixel mask area.
  • area recognition can also be used to perform handwriting pixel replacement processing.
  • the replacement pixel can be the pixel value of any pixel in the handwriting area other than the pixels of the handwritten content; or, the replacement pixel is the average value (for example, the geometric average) of the pixel values of all pixels in the handwriting area other than the pixels of the handwritten content; or, the replacement pixel value may also be a fixed value, for example, a gray value of 255.
  • an image segmentation model such as the U-Net model can be used to directly extract any pixel in the handwriting area other than the handwritten content pixels to obtain the replacement pixel; alternatively, an image segmentation model such as the U-Net model can be used to extract all pixels in the handwriting area other than the pixels of the handwritten content, and the replacement pixel value is then obtained based on the pixel values of all those pixels.
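  • For illustration, computing a replacement pixel from the handwriting area might be sketched as follows, continuing the earlier sketches; `handwriting_area_mask` (an output of the region recognition model) is a hypothetical placeholder:

```python
import numpy as np

# Illustrative sketch: `handwriting_area_mask` marks the handwriting area and
# `handwritten_mask_area` marks the handwritten-content pixels (0/255 masks).
in_area = handwriting_area_mask > 0
is_handwritten = handwritten_mask_area > 0
background = in_area & ~is_handwritten

# Replacement pixel: here the mean gray value of the non-handwritten pixels in
# the handwriting area (any single background pixel, or a fixed value such as
# 255, would also match the alternatives described above).
replacement = input_gray[background].mean()
cleaned = input_gray.copy()
cleaned[is_handwritten] = replacement
```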
  • replacing pixels of handwritten content with replacement pixels to remove handwritten content from an input image to obtain an output image includes: replacing pixels of handwritten content with replacement pixels to remove handwritten content from an input image to obtain an intermediate output image; for intermediate output The image is binarized to obtain the output image.
  • for the region recognition of the region recognition model and the binarization processing, reference may be made to the related descriptions above, which will not be repeated here.
  • an output image as shown in FIG. 2B can be obtained, and the output image is a binarized image. As shown in FIG. 2B, in the output image, all the handwritten content is removed, thereby obtaining a blank form without user information.
  • the model (for example, an arbitrary model such as a region recognition model, an image segmentation model, etc.) is not just a mathematical model, but a module that can receive input data, perform data processing, and output processing results.
  • the module can be a software module, a hardware module (for example, a hardware neural network) or a combination of software and hardware.
  • the region recognition model and/or the image segmentation model includes codes and programs stored in a memory; the processor can execute the codes and programs to implement some or all of the functions of the region recognition model and/or the image segmentation model described above.
  • the region recognition model and/or the image segmentation model may include one circuit board or a combination of multiple circuit boards for realizing the functions described above.
  • the circuit board or combination of circuit boards may include: (1) one or more processors; (2) one or more non-transitory computer-readable memories; and (3) firmware stored in the memory and executable by the processor.
  • the method for removing handwritten content in the text image further includes a training phase.
  • the training phase includes the process of training the region recognition model and the image segmentation model. It should be noted that the region recognition model and the image segmentation model can be trained separately, or the region recognition model and the image segmentation model can be trained at the same time.
  • the region recognition model to be trained may be trained with first sample images marked with a text printing area (for example, at least one marked text printing area) and a handwriting area (for example, at least one marked handwriting area) to obtain the region recognition model.
  • the training process of the region recognition model to be trained may include: in the training phase, training the region recognition model to be trained using multiple first sample images marked with the text printing region and the handwritten region to obtain the region recognition model.
  • using multiple first sample images to train the region recognition model to be trained includes: obtaining a current first sample image from the multiple first sample images; processing the current first sample image using the region recognition model to be trained to obtain a training text printing area and a training handwriting area; calculating a first loss value of the region recognition model to be trained through a first loss function, according to the text printing area and handwriting area marked in the current first sample image and the training text printing area and training handwriting area; and correcting the parameters of the region recognition model to be trained according to the first loss value.
  • When the first loss function meets a first predetermined condition, the trained region recognition model is obtained; when the first loss function does not meet the first predetermined condition, first sample images continue to be input to repeat the above training process.
  • the above-mentioned first predetermined condition corresponds to the convergence of the loss of the first loss function (that is, the first loss value is no longer significantly reduced) when a certain number of first sample images are input.
  • the above-mentioned first predetermined condition is that the number of training times or the training period reaches a predetermined number (for example, the predetermined number may be millions).
  • the image segmentation model may be obtained by training the image segmentation model to be trained with second sample images marked with the pixels of the handwritten content.
  • the second sample image can be enlarged to accurately label all the handwritten content pixels.
  • machine learning is performed on handwriting features (for example, pixel gray-scale features, font features, etc.) to build the image segmentation model.
  • the training process of the image segmentation model to be trained may include: in the training phase, training the image segmentation model to be trained using multiple second sample images marked with pixels of the handwritten content to obtain the image segmentation model.
  • using multiple second sample images to train the image segmentation model to be trained includes: obtaining a current second sample image from the multiple second sample images; processing the current second sample image using the image segmentation model to be trained to obtain training handwritten content pixels; calculating a second loss value of the image segmentation model to be trained through a second loss function, according to the handwritten content pixels marked in the current second sample image and the training handwritten content pixels; and correcting the parameters of the image segmentation model to be trained according to the second loss value.
  • When the second loss function meets a second predetermined condition, the trained image segmentation model is obtained; when the second loss function does not meet the second predetermined condition, second sample images continue to be input to repeat the above training process.
  • the above-mentioned second predetermined condition corresponds to the convergence of the loss of the second loss function (that is, the second loss value is no longer significantly reduced) when a certain number of second sample images are input.
  • the above-mentioned second predetermined condition is that the number of training times or the training period reaches a predetermined number (for example, the predetermined number may be millions).
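  • For illustration, the training loop described above might be sketched in PyTorch as follows; `build_unet`, `sample_loader`, the loss function, and the stopping constants are assumptions:

```python
import torch
import torch.nn as nn

# Illustrative sketch of the described training loop: predict the handwritten
# content pixels, compute a per-pixel loss against the labelled second sample
# images, correct the parameters, and stop when the loss converges or a
# predetermined number of iterations is reached.
model = build_unet()                              # hypothetical U-Net constructor
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.BCEWithLogitsLoss()                  # an assumed second loss function

for step, (sample_image, labelled_pixels) in enumerate(sample_loader):
    prediction = model(sample_image)              # training handwritten content pixels
    loss = loss_fn(prediction, labelled_pixels)   # second loss value
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                              # correct the model parameters
    if loss.item() < 1e-3 or step >= 1_000_000:   # assumed predetermined condition
        break
```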
  • the multiple first training sample images and the multiple second training sample images may be the same or different.
  • FIG. 3 is a schematic block diagram of an apparatus for removing handwritten content from a text image provided by at least one embodiment of the present invention.
  • the device 300 for removing handwritten content in a text image includes a processor 302 and a memory 301.
  • the components of the handwritten content removal device 300 in the text image shown in FIG. 3 are only exemplary and not restrictive. According to actual application requirements, the handwritten content removal device 300 in the text image may also have other components.
  • the memory 301 is used for non-transitory storage of computer-readable instructions; the processor 302 is used for running the computer-readable instructions, and when the computer-readable instructions are run by the processor 302, the method for removing handwritten content in a text image according to any one of the above embodiments is executed.
  • the apparatus 300 for removing handwritten content from a text image provided by the embodiment of the present invention may be used to implement the method for removing handwritten content from a text image provided by the embodiment of the present invention.
  • the apparatus 300 for removing handwritten content from a text image may be configured on an electronic device.
  • the electronic device may be a personal computer, a mobile terminal, etc.
  • the mobile terminal may be a hardware device with various operating systems such as a mobile phone or a tablet computer.
  • the device 300 for removing handwritten content in a text image may further include an image acquisition component 303.
  • the image acquisition component 303 is used to obtain a text image, for example, an image of a paper text.
  • the memory 301 may also be used to store text images; the processor 302 is also used to read and process the text images to obtain input images.
  • the text image may be the original image described in the embodiment of the method for removing handwritten content in the text image.
  • the image acquisition component 303 is the image acquisition device described in the embodiment of the method for removing handwritten content in the text image.
  • the image acquisition component 303 may be a camera of a smartphone, a camera of a tablet computer, a camera of a personal computer, a lens of a digital camera, a webcam, or any other device that can be used for image capture.
  • the image acquisition component 303, the memory 301, and the processor 302 may be physically integrated in the same electronic device, and the image acquisition component 303 may be a camera configured on the electronic device.
  • the memory 301 and the processor 302 receive the image sent from the image acquisition part 303 via the internal bus.
  • the image acquisition component 303 and the memory 301/processor 302 may also be configured at separate physical locations; for example, the memory 301 and the processor 302 may be integrated in a first user's electronic device (for example, the first user's computer or mobile phone), the image acquisition component 303 may be integrated in a second user's electronic device (the first user and the second user are not the same), the two electronic devices may be configured at separate physical locations, and the electronic device of the first user and the electronic device of the second user may communicate in a wired or wireless manner.
  • the electronic device of the second user may send the original image to the electronic device of the first user in a wired or wireless manner, and the electronic device of the first user receives the original image and performs subsequent processing on it.
  • the memory 301 and the processor 302 may also be integrated in a cloud server, and the cloud server receives the original image and processes the original image.
  • the device 300 for removing handwritten content in a text image may further include an output device, and the output device is used to output the output image.
  • the output device may include a display (for example, an organic light emitting diode display, a liquid crystal display), a projector, etc., and the display and the projector may be used to display the output image.
  • the output device may also include a printer, and the printer is used to print the output image.
  • the network may include a wireless network, a wired network, and/or any combination of a wireless network and a wired network.
  • the network may include a local area network, the Internet, a telecommunications network, the Internet of Things (Internet of Things) based on the Internet and/or a telecommunications network, and/or any combination of the above networks, and the like.
  • the wired network may, for example, use twisted pair, coaxial cable, or optical fiber transmission for communication, and the wireless network may use, for example, a 3G/4G/5G mobile communication network, Bluetooth, Zigbee, or WiFi.
  • the invention does not limit the type and function of the network here.
  • the processor 302 may control other components in the handwriting content removal apparatus 300 in the text image to perform desired functions.
  • the processor 302 may be a central processing unit (CPU), a tensor processor (TPU), or a graphics processing unit (GPU) and other devices with data processing capabilities and/or program execution capabilities.
  • the central processing unit (CPU) can be an X86 or ARM architecture.
  • the GPU can be directly integrated on the motherboard alone or built into the north bridge chip of the motherboard.
  • the GPU can also be built into the central processing unit (CPU).
  • the memory 301 may include any combination of one or more computer program products, and the computer program products may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory.
  • Volatile memory may include random access memory (RAM) and/or cache memory (cache), for example.
  • Non-volatile memory may include, for example, read only memory (ROM), hard disk, erasable programmable read only memory (EPROM), portable compact disk read only memory (CD-ROM), USB memory, flash memory, etc.
  • One or more computer-readable instructions may be stored on the computer-readable storage medium, and the processor 302 may run the computer-readable instructions to implement various functions of the apparatus 300 for removing handwritten content in a text image.
  • Various application programs and various data can also be stored in the storage medium.
  • FIG. 4 is a schematic diagram of a storage medium provided by at least one embodiment of the present invention.
  • one or more computer-readable instructions 501 may be stored non-transitorily on the storage medium 500.
  • When the computer-readable instructions 501 are executed by a computer, one or more steps in the method for removing handwritten content from a text image described above can be executed.
  • the storage medium 500 may be applied to the apparatus 300 for removing handwritten content from a text image, for example, it may include the memory 301 in the apparatus 300 for removing handwritten content from a text image.
  • FIG. 5 is a schematic diagram of a hardware environment provided by at least one embodiment of the present invention.
  • the device for removing handwritten content in a text image provided by an embodiment of the present invention can be applied to an Internet system.
  • the computer system provided in FIG. 5 can be used to implement the device for removing handwritten content from a text image involved in the present invention.
  • such computer systems can include personal computers, notebook computers, tablet computers, mobile phones, and other smart devices.
  • the specific system in this embodiment uses a functional block diagram to explain a hardware platform including a user interface.
  • such a computer system may include a general-purpose computer device or a special-purpose computer device. Both types of computer device can be used to implement the apparatus for removing handwritten content in a text image in this embodiment.
  • the computer system can implement any of the presently described components of the information needed to implement the method for removing handwritten content from text images.
  • such a computer system can be realized by a computer device through its hardware devices, software programs, firmware, and combinations thereof.
  • the computer system may include a communication port 250, which is connected to a network that realizes data communication.
  • the communication port 250 may communicate with the image acquisition component 403 described above.
  • the computer system may also include a processor group 220 (ie, the processor described above) for executing program instructions.
  • the processor group 220 may be composed of at least one processor (for example, a CPU).
  • the computer system may include an internal communication bus 210.
  • the computer system may include different forms of program storage units and data storage units (i.e., the memory or storage medium described above), such as a hard disk 270, a read-only memory (ROM) 230, and a random access memory (RAM) 240, which can be used to store various data files used for computer processing and/or communication, as well as possible program instructions executed by the processor group 220.
  • the computer system may also include an input/output component 260, which may support input/output data flow between the computer system and other components (for example, the user interface 280, which may be the display described above).
  • the computer system can also send and receive information and data through the communication port 250.
  • the above-mentioned computer system may be used to form a server in an Internet communication system.
  • the server of the Internet communication system can be a server hardware device or a server group. Each server in a server group can be connected through a wired or wireless network.
  • a server group can be centralized, such as a data center.
  • a server group can also be distributed, such as a distributed system.
  • each block in the block diagram and/or flowchart of the present invention, and combinations of blocks, can be implemented by a dedicated hardware-based system that performs the specified functions or actions, or by a combination of dedicated hardware and computer program instructions. It is well known to those skilled in the art that implementation in hardware, implementation in software, and implementation through a combination of software and hardware are all equivalent.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Character Input (AREA)
  • Image Processing (AREA)
  • Editing Of Facsimile Originals (AREA)
  • Facsimile Image Signal Circuits (AREA)
  • Image Analysis (AREA)

Abstract

The present invention provides a method, a device, and a storage medium for removing handwritten content from a text image. The method for removing handwritten content from a text image includes: obtaining an input image of a text page to be processed, where the input image includes a handwritten region and the handwritten region includes handwritten content; recognizing the input image with an image segmentation model to obtain initial handwritten pixels of the handwritten content; blurring the initial handwritten pixels to obtain a handwritten-pixel mask region; determining the handwritten content according to the handwritten-pixel mask region; and removing the handwritten content from the input image to obtain an output image.

Description

Method, Device, and Storage Medium for Removing Handwritten Content from a Text Image
Technical Field
The present invention relates to a method, a device, and a storage medium for removing handwritten content from a text image.
Background Art
At present, when a user photographs or scans a text into a photo, a PDF, or a file in another format, any handwritten content already present on the original text, written by the current user or by others, such as remarks, explanatory notes, annotations, or marking symbols, is captured into the output image or file as well. When the user does not need this handwritten content, or needs to keep it confidential, removing the handwritten content is difficult for an ordinary user, which makes the file inconvenient to store or distribute. In addition, photos of text taken with a mobile phone often contain shadows caused by differences in the lighting of the shooting environment; if such a photo is printed directly, the printer reproduces the shadowed parts of the photo, wasting ink and impairing readability.
Summary of the Invention
To solve the above-mentioned drawbacks, the present invention provides a method for removing handwritten content from a text image, including: obtaining an input image of a text page to be processed, where the input image includes a handwritten region and the handwritten region includes handwritten content; recognizing the input image with an image segmentation model to obtain initial handwritten pixels of the handwritten content; blurring the initial handwritten pixels to obtain a handwritten-pixel mask region; determining the handwritten content in the handwritten region according to the handwritten-pixel mask region; and removing the handwritten content from the input image to obtain an output image.
Optionally, in the method for removing handwritten content from a text image provided by the present invention, removing the handwritten content from the input image to obtain an output image includes:
determining, according to the pixel values of the initial handwritten pixels and the position of the handwritten-pixel mask region, the non-handwritten pixels within the handwritten-pixel mask region of the input image; removing the content of the handwritten-pixel mask region from the input image to obtain an intermediate output image;
restoring the non-handwritten pixels of the handwritten-pixel mask region in the intermediate output image to obtain the output image.
Optionally, in the method for removing handwritten content from a text image provided by the present invention, removing the handwritten content from the input image to obtain an output image includes:
determining, according to the pixel values of the initial handwritten pixels and the position of the handwritten-pixel mask region, the non-handwritten pixels within the handwritten-pixel mask region of the input image;
removing the handwritten content from the input image according to the non-handwritten pixels within the handwritten-pixel mask region and the handwritten-pixel mask region, to obtain the output image.
Optionally, in the method for removing handwritten content from a text image provided by the present invention, removing the handwritten content from the input image to obtain an output image includes: cutting the handwritten content out of the input image to obtain an intermediate output image; and binarizing the intermediate output image to obtain the output image.
Optionally, in the method for removing handwritten content from a text image provided by the present invention, removing the handwritten content from the input image to obtain the output image includes: obtaining replacement pixels; and replacing the pixels of the handwritten content with the replacement pixels, so as to remove the handwritten content from the input image and obtain the output image.
Optionally, in the method for removing handwritten content from a text image provided by the present invention, replacing the pixels of the handwritten content with the replacement pixels to remove the handwritten content from the input image and obtain the output image includes: replacing the pixels of the handwritten content with the replacement pixels to remove the handwritten content from the input image and obtain an intermediate output image; and binarizing the intermediate output image to obtain the output image.
Optionally, in the method for removing handwritten content from a text image provided by the present invention, the replacement pixels are obtained from the pixels of the handwritten content through an image inpainting algorithm based on pixel-neighborhood computation.
Optionally, in the method for removing handwritten content from a text image provided by the present invention, obtaining the replacement pixels further includes recognizing the input image with a region recognition model to obtain the handwritten region, the replacement pixel being any pixel in the handwritten region other than the pixels of the handwritten content; or, the replacement pixel is the average of the pixel values of all pixels in the handwritten region other than the pixels of the handwritten content.
Optionally, in the method for removing handwritten content from a text image provided by the present invention, obtaining the input image of the text page to be processed includes: obtaining an original image of the text page to be processed, where the original image includes a text region to be processed; performing edge detection on the original image to determine the text region to be processed in the original image; and rectifying the text region to be processed to obtain the input image.
Optionally, in the method for removing handwritten content from a text image provided by the present invention, the image segmentation model is a pre-trained U-Net model that segments the input image.
Optionally, in the method for removing handwritten content from a text image provided by the present invention, the initial handwritten pixels are blurred with a Gaussian filter function, enlarging the region of the initial handwritten pixels to obtain the handwritten-pixel mask region.
Further, the present invention also provides a device for removing handwritten content from a text image, including: a memory for non-transitory storage of computer-readable instructions; and a processor for running the computer-readable instructions, where the computer-readable instructions, when run by the processor, execute the method for removing handwritten content from a text image according to any of the above embodiments.
Further, the present invention also provides a storage medium that non-transitorily stores computer-readable instructions, where the computer-readable instructions, when executed by a computer, can execute the method for removing handwritten content from a text image according to any of the above embodiments.
Brief Description of the Drawings
To explain the technical solutions of the embodiments of the present invention more clearly, the drawings of the embodiments are briefly introduced below. Obviously, the drawings described below relate only to some embodiments of the present invention and do not limit the present invention.
FIG. 1 is a schematic flowchart of a method for removing handwritten content from a text image provided by an embodiment of the present invention;
FIG. 2A is a schematic diagram of an original image provided by an embodiment of the present invention;
FIG. 2B is a schematic diagram of an output image provided by an embodiment of the present invention;
FIG. 3 is a schematic block diagram of a device for removing handwritten content from a text image provided by an embodiment of the present invention;
FIG. 4 is a schematic diagram of a storage medium provided by an embodiment of the present invention;
FIG. 5 is a schematic diagram of a hardware environment provided by an embodiment of the present invention.
Detailed Description of the Embodiments
To make the objectives, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions of the embodiments are described clearly and completely below with reference to the drawings of the embodiments of the present invention. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the described embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
Unless otherwise defined, the technical or scientific terms used in the present invention shall have the ordinary meanings understood by a person with ordinary skill in the art to which the present invention belongs. The words "first", "second", and similar words used in the present invention do not denote any order, quantity, or importance, but are only used to distinguish different components. Words such as "include" or "comprise" mean that the element or item preceding the word covers the elements or items listed after the word and their equivalents, without excluding other elements or items. Words such as "connect" or "connected" are not limited to physical or mechanical connections but may include electrical connections, whether direct or indirect. "Up", "down", "left", "right", and the like are only used to indicate relative positional relationships; when the absolute position of the described object changes, the relative positional relationship may change accordingly.
To keep the following description of the embodiments of the present invention clear and concise, detailed descriptions of some known functions and known components are omitted.
At least one embodiment of the present invention provides a method, a device, and a storage medium for removing handwritten content from a text image. The method for removing handwritten content from a text image includes: obtaining an input image of a text page to be processed, where the input image includes a handwritten region and the handwritten region includes handwritten content; recognizing the input image with an image segmentation model to obtain initial handwritten pixels of the handwritten content; blurring the initial handwritten pixels to obtain a handwritten-pixel mask region; determining the handwritten content according to the handwritten-pixel mask region; and removing the handwritten content from the input image to obtain an output image.
This method for removing handwritten content from a text image can effectively remove the handwritten content within the handwritten region of the input image, so that an image or file containing only printed content can be output. In addition, the method can also convert the input image into a form convenient for printing, so that the user can print the input image in paper form for storage or distribution.
The embodiments of the present invention are described in detail below with reference to the drawings, but the present invention is not limited to these specific embodiments.
FIG. 1 is a schematic flowchart of a method for removing handwritten content from a text image provided by at least one embodiment of the present invention; FIG. 2A is a schematic diagram of an original image provided by at least one embodiment of the present invention; FIG. 2B is a schematic diagram of an output image provided by at least one embodiment of the present invention.
For example, as shown in FIG. 1, the method for removing handwritten content from a text image provided by the embodiment of the present invention includes steps S10 to S14.
As shown in FIG. 1, the method first obtains, in step S10, an input image of the text page to be processed.
For example, in step S10, the input image includes a handwritten region, and the handwritten region includes handwritten content. The input image may be any image that includes handwritten content.
For example, the input image may be an image captured by an image acquisition device (for example, a digital camera or a mobile phone), and the input image may be a grayscale image or a color image. It should be noted that the input image refers to a form that visually presents the text page to be processed, such as a picture of the text page to be processed.
For example, the handwritten region has no fixed shape but depends on the handwritten content; that is, a region with handwritten content is a handwritten region. The handwritten region may have a regular shape (for example, a rectangle) or an irregular shape. The handwritten region may include filled-in regions, handwritten drafts, or regions with other handwritten marks.
For example, the input image also includes a printed-text region, and the printed-text region includes printed content. The printed-text region may likewise have a regular shape (for example, a rectangle) or an irregular shape. In the embodiments of the present invention, the description takes the shape of each handwritten region and of each printed-text region to be rectangular as an example; the present invention includes but is not limited to this.
For example, the text page to be processed may include books, newspapers, journals, receipts, forms, contracts, and so on. Books, newspapers, and journals include all kinds of document pages with articles or patterns; receipts include all kinds of invoices, receipts, express waybills, and the like; forms may be of various types, for example, year-end summary forms, onboarding forms, price summary tables, or application forms; contracts may include contract text pages in various forms. The present invention places no specific restriction on the type of the text page to be processed.
For example, the text page to be processed may be text in paper form or text in electronic form. For example, when the text page to be processed is a receipt, such as an express waybill, the printed content may include the title text of each item, and the handwritten content may include information filled in by the user, such as a name, an address, and a telephone number (in this case, the information is personal information filled in by the user, not general information). When the text page to be processed is an article-type text, the printed content may be the article content, and the handwritten content may be the user's remarks or other handwritten marks. When the text page to be processed is a form, such as an onboarding form, the printed content may include item titles such as "Name", "Gender", "Ethnicity", and "Work History", while the handwritten content may include the handwritten information filled into the form by the user (for example, an employee), such as the user's name, gender (male or female), ethnicity, and work experience. The printed content may also include various symbols, graphics, and so on.
For example, the text page to be processed may be rectangular or similar in shape, and the input image may have a regular shape (for example, a parallelogram or a rectangle) to facilitate printing. However, the present invention is not limited to this; in some embodiments, the input image may also have an irregular shape.
For example, because the image may deform when captured by the image acquisition device, the size of the input image may differ from the size of the text page to be processed; however, the present invention is not limited to this, and the size of the input image and the size of the text page to be processed may also be the same.
For example, the text page to be processed includes printed content and handwritten content. The printed content may be content produced by printing, and the handwritten content is content handwritten by a user; the handwritten content may include handwritten characters.
It should be noted that "printed content" does not refer only to text, characters, graphics, and other content entered on an electronic device through an input device. In some embodiments, when the text page to be processed is a text such as notes, the note content may also be handwritten by the user; in that case, the printed content is the content printed on the blank notebook page used for handwriting, such as ruled lines.
For example, the printed content may include text in various languages, for example, Chinese (for example, Chinese characters or pinyin), English, Japanese, French, or Korean; in addition, the printed content may also include numbers, various symbols (for example, check marks, crosses, and various operator symbols), and various graphics. The handwritten content may likewise include text in various languages, numbers, various symbols, and various graphics.
For example, in the example shown in FIG. 2A, the text page 100 to be processed is a form, and the region enclosed by the four boundary lines (straight lines 101A-101D) represents the text region 100 to be processed corresponding to the text page. In this text region 100 to be processed, the printed region includes the form region, and the printed content may include the text of each item, for example, name, birthday, and so on; the printed content may also include the logo graphic in the upper right corner of the text region 100 (shown masked). The handwritten region includes the handwritten-information region, and the handwritten content may include personal information handwritten by the user, for example, a handwritten name, birthday information, health information, check marks, and so on.
For example, the input image may include multiple pieces of handwritten content and multiple pieces of printed content; the pieces of handwritten content are spaced apart from one another, and so are the pieces of printed content. For example, some of the pieces of handwritten content may be identical (that is, the characters of the handwritten content are the same, although their exact shapes differ); some of the pieces of printed content may also be identical. The present invention is not limited to this; the pieces of handwritten content may all differ from one another, and the pieces of printed content may all differ from one another.
For example, in some embodiments, step S10 may include: obtaining an original image of the text page to be processed, where the original image includes a text region to be processed; performing edge detection on the original image to determine the text region to be processed in the original image; and rectifying the text region to be processed to obtain the input image.
For example, edge detection may be performed on the original image with methods such as a neural network or an OpenCV-based edge detection algorithm to determine the text region to be processed. For example, OpenCV is an open-source computer vision library, and OpenCV-based edge detection algorithms include Sobel, Scharr, Canny, Laplacian, Prewitt, and Marr-Hildreth, among others.
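By way of illustration only, and not as part of the claimed invention, the following Python sketch shows how the OpenCV-based edge detection named above could be performed with the Canny algorithm; the file name, smoothing kernel size, and thresholds are assumptions for the example:

```python
import cv2

# Load the original image and convert it to grayscale.
original = cv2.imread("original.jpg")
gray = cv2.cvtColor(original, cv2.COLOR_BGR2GRAY)

# Light Gaussian smoothing reduces noise before edge detection.
smoothed = cv2.GaussianBlur(gray, (5, 5), 0)

# Canny edge detection produces the line drawing of grayscale contours.
edges = cv2.Canny(smoothed, 50, 150)
cv2.imwrite("edges.png", edges)
```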
For example, performing edge detection on the original image to determine the text region to be processed in the original image may include: processing the original image to obtain a line drawing of the grayscale contours in the original image, where the line drawing includes multiple lines; merging similar lines in the line drawing to obtain multiple initially merged lines, and determining a boundary matrix from the multiple initially merged lines; merging similar lines among the initially merged lines to obtain target lines, and also taking the unmerged initially merged lines as target lines, thereby obtaining multiple target lines; determining multiple reference boundary lines from the multiple target lines according to the boundary matrix; processing the original image with a pre-trained boundary-line-region recognition model to obtain multiple boundary line regions of the text page to be processed in the original image; for each boundary line region, determining the target boundary line corresponding to that boundary line region from the multiple reference boundary lines; and determining the edges of the text region to be processed in the original image from the multiple determined target boundary lines.
For example, in some embodiments, processing the original image to obtain a line drawing of the grayscale contours in the original image includes: processing the original image with an OpenCV-based edge detection algorithm to obtain the line drawing of the grayscale contours in the original image.
For example, merging similar lines in the line drawing to obtain multiple initially merged lines includes: obtaining the long lines in the line drawing, where a long line is a line whose length exceeds a first preset threshold; obtaining multiple groups of first-type lines from the long lines, where a group of first-type lines includes at least two successively adjacent long lines and the angle between any two adjacent long lines is smaller than a second preset threshold; and, for each group of first-type lines, merging the long lines in that group in sequence to obtain one initially merged line.
For example, the boundary matrix is determined as follows: the multiple initially merged lines and the unmerged lines among the long lines are redrawn; the position information of the pixels of all the redrawn lines is mapped into a matrix covering the whole original image; the values at the positions of the pixels of these lines in the matrix of the original image are set to a first value, and the values at the positions of pixels outside these lines are set to a second value, thereby forming the boundary matrix.
For example, merging similar lines among the multiple initially merged lines to obtain target lines includes: obtaining multiple groups of second-type lines from the multiple initially merged lines, where a group of second-type lines includes at least two successively adjacent initially merged lines and the angle between any two adjacent initially merged lines is smaller than a third preset threshold; and, for each group of second-type lines, merging the initially merged lines in that group in sequence to obtain one target line.
For example, the first preset threshold may be a length of 2 pixels, and the second preset threshold and the third preset threshold may be 15 degrees. It should be noted that the first, second, and third preset thresholds can be set according to the needs of the actual application.
For example, determining multiple reference boundary lines from the multiple target lines according to the boundary matrix includes: for each target line, extending the target line, determining a line matrix from the extended target line, then comparing the line matrix with the boundary matrix, and counting the number of pixels on the extended target line that belong to the boundary matrix as the score of that target line. That is, the line matrix is compared with the boundary matrix to determine how many pixels fall into the boundary matrix, in other words, how many pixels at the same positions in the two matrices have the same first value, for example 255, and the score is computed from this; the line matrix has the same size as the boundary matrix. The multiple reference boundary lines are then determined from the multiple target lines according to the scores of the individual target lines. It should be noted that several target lines may share the best score; therefore, according to the scores of the individual target lines, the best-scoring target lines are determined from the multiple target lines and taken as the reference boundary lines.
For example, the line matrix is determined as follows: the extended target line or straight line is redrawn; the position information of the pixels of the redrawn line is mapped into a matrix covering the whole original image; the values at the positions of the line's pixels in the matrix of the original image are set to the first value, and the values at the positions of pixels outside the line are set to the second value, thereby forming the line matrix.
For example, for each boundary line region, determining the target boundary line corresponding to that boundary line region from the multiple reference boundary lines includes: computing the slope of each reference boundary line; for each boundary line region, converting the boundary line region into multiple straight lines with a Hough transform and computing the average slope of the multiple straight lines, then checking whether the multiple reference boundary lines contain a reference boundary line whose slope matches the average slope; if such a line exists, that reference boundary line is determined to be the target boundary line corresponding to the boundary line region. If it is determined that no reference boundary line among the multiple reference boundary lines has a slope matching the average slope, then, for each straight line obtained by converting the boundary line region, the line matrix formed by that straight line is compared with the boundary matrix, and the number of pixels on the straight line that belong to the boundary matrix is counted as the score of that straight line; the best-scoring straight line is determined to be the target boundary line corresponding to the boundary line region; the line matrix has the same size as the boundary matrix. It should be noted that if several straight lines share the best score, the straight line that appears first according to the sorting algorithm is taken as the best boundary line.
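As an illustrative sketch of the Hough-transform step just described, and assuming a binary image of one boundary line region is available as a file (a hypothetical name), the following Python code converts the region into straight line segments and computes their average slope; all parameter values are assumptions:

```python
import cv2
import numpy as np

# Binary image: 255 on the boundary line region, 0 elsewhere.
region = cv2.imread("boundary_region.png", cv2.IMREAD_GRAYSCALE)

# Probabilistic Hough transform: convert the region into line segments.
lines = cv2.HoughLinesP(region, rho=1, theta=np.pi / 180,
                        threshold=50, minLineLength=30, maxLineGap=5)

# Average slope of the detected lines (vertical segments skipped).
slopes = [(y2 - y1) / (x2 - x1)
          for x1, y1, x2, y2 in lines[:, 0] if x2 != x1]
average_slope = sum(slopes) / len(slopes)
```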
For example, the boundary-line-region recognition model is a neural-network-based model; the boundary-line-region recognition model can be built through machine learning training.
For example, after edge detection is performed on the original image, multiple target boundary lines (for example, four target boundary lines) can be determined, and the text region to be processed is defined by these target boundary lines. For example, the text region to be processed can be determined from the multiple intersections of the target boundary lines together with the target boundary lines themselves: each two adjacent target boundary lines intersect at one point, and the multiple intersections and the multiple target boundary lines jointly delimit the region of the text to be processed in the original image. For example, in the example shown in FIG. 2A, the text region to be processed may be the text region enclosed by four target boundary lines, all of which are straight lines: a first target boundary line 101A, a second target boundary line 101B, a third target boundary line 101C, and a fourth target boundary line 101D. Besides the text region to be processed, the original image may also include non-text regions, for example, the region outside the area enclosed by the four boundary lines in FIG. 2A.
For example, in some embodiments, rectifying the text region to be processed to obtain the input image includes: applying a projective transformation to the text region to be processed to obtain a front view of the text region, and this front view is the input image. Projective (perspective) transformation is a technique that projects a picture onto a new viewing plane, also called projective mapping. In a photographed original image, the true shape of the text to be processed has changed in the original image, that is, geometric distortion has occurred. In the original image shown in FIG. 2A, the shape of the text to be processed (the form) is actually rectangular, but the shape of the text in the original image has changed into an irregular polygon. Therefore, applying a projective transformation to the text region in the original image can transform the text region from an irregular polygon into a rectangle, a parallelogram, or the like, that is, rectify the text region, thereby removing the effect of the geometric distortion and obtaining a front view of the text to be processed in the original image. The projective transformation processes the pixels of the text region to be processed by converting coordinates through spatial projection to obtain the front view of the text; this is not described in detail here.
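A minimal sketch of this rectification step with OpenCV's projective transformation follows; the corner coordinates (in practice taken from the intersections of the four target boundary lines) and the output size are illustrative assumptions:

```python
import cv2
import numpy as np

original = cv2.imread("original.jpg")

# Corners of the distorted text region in the original image.
src = np.float32([[120, 80], [980, 60], [1020, 1400], [90, 1380]])

# Corners of the rectified front view (a rectangle).
width, height = 1000, 1400
dst = np.float32([[0, 0], [width, 0], [width, height], [0, height]])

# Projective transformation: map the irregular quadrilateral to a rectangle.
matrix = cv2.getPerspectiveTransform(src, dst)
input_image = cv2.warpPerspective(original, matrix, (width, height))
```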
It should be noted that in other embodiments the text region to be processed may not be rectified; instead, the text region may be cut directly out of the original image to obtain a separate image of the text region, and that separate image of the text region is then the input image.
For example, the original image may be an image captured directly by the image acquisition device, or an image obtained after preprocessing such a directly captured image. The original image may be a grayscale image or a color image. For example, to prevent the data quality, data imbalance, and the like of the original image from affecting the removal of handwritten content from the text image, the method provided by the embodiments of the present invention may further include an operation of preprocessing the original image before it is processed. Preprocessing can eliminate irrelevant information or noise from the original image so that the original image can be processed better. The preprocessing may include, for example, scaling, cropping, gamma correction, image enhancement, or noise-reduction filtering of the image directly captured by the image acquisition device.
It is worth noting that in some embodiments the original image itself can serve as the input image. In this case, for example, the original image can be recognized directly to determine the handwritten content in the original image, and the handwritten content then removed from the original image to obtain the output image; alternatively, the original image can be recognized directly to determine the handwritten content in the original image, the handwritten content removed from the original image to obtain an intermediate output image, edge detection performed on the intermediate output image to determine the text region to be processed in the intermediate output image, and the text region rectified to obtain the output image. That is, in some embodiments of the present invention, the handwritten content in the original image may be removed first to obtain an intermediate output image, and the edge detection and rectification then applied to the intermediate output image.
Next, as shown in FIG. 1, in step S11 the input image is recognized with an image segmentation model to obtain the initial handwritten pixels of the handwritten content.
For example, the image segmentation model is a model that performs region recognition (or partitioning) on the input image. It is implemented with machine learning techniques (for example, convolutional neural network techniques), runs, for example, on a general-purpose or special-purpose computing device, and is pre-trained. For example, the neural network applied to the image segmentation model may also achieve the same function through other neural network models, including deep convolutional neural networks, Mask-RCNN (mask region-based convolutional neural networks), deep residual networks, and attention models; no strict limitation is imposed here.
For example, a U-Net model is used to recognize the input image. U-Net is an improved FCN (fully convolutional network) architecture that follows the FCN idea of semantic image segmentation: convolutional and pooling layers extract features, and deconvolution layers restore the image size. The U-Net model is one of the better-performing models for image segmentation. Deep learning excels at classification problems, and using this property of deep learning for image segmentation essentially classifies every pixel in the image; marking the points of different classes through different channels achieves a classified labeling of the feature information in the target region. The initial handwritten pixels of the handwritten content can be determined in the input image through the U-Net model; likewise, other neural network models, such as Mask-RCNN, can also be used to determine the initial handwritten pixels of the handwritten content.
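As a schematic sketch only, not the trained model of the invention, the following PyTorch code outlines a miniature two-level U-Net-style network with one skip connection, whose per-pixel output scores mark candidate handwritten pixels; all layer sizes are illustrative assumptions:

```python
import torch
import torch.nn as nn

def double_conv(c_in, c_out):
    # Two 3x3 convolutions, the basic U-Net building block.
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = double_conv(1, 16)
        self.enc2 = double_conv(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = double_conv(32, 16)   # 32 = 16 (skip) + 16 (upsampled)
        self.head = nn.Conv2d(16, 1, 1)  # per-pixel handwriting score

    def forward(self, x):
        e1 = self.enc1(x)                # feature extraction
        e2 = self.enc2(self.pool(e1))    # downsampled features
        d = self.dec(torch.cat([self.up(e2), e1], dim=1))  # skip connection
        return torch.sigmoid(self.head(d))

# Scores near 1 mark initial handwritten pixels of a grayscale input.
scores = TinyUNet()(torch.randn(1, 1, 256, 256))
```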
Next, as shown in FIG. 1, in step S12 the initial handwritten pixels are blurred to obtain a handwritten-pixel mask region. The initial handwritten pixels obtained by recognizing the input image with the image segmentation model may not be all of the handwritten pixels, but the remaining missed handwritten pixels are generally adjacent to the initial handwritten pixels; the initial handwritten pixels therefore need to be blurred to enlarge the handwritten-pixel region and obtain a handwritten-pixel mask region, which essentially contains all of the handwritten pixels.
For example, the initial handwritten pixels can be Gaussian-blurred with the OpenCV GaussianBlur function to enlarge the initial handwritten-pixel region and thereby obtain the handwritten-pixel mask region. Gaussian filtering convolves each point of the input array with the given Gaussian filter template and composes the results into the filtered output array; it is a weighted-averaging process over the image of the initial handwritten pixels, in which the value of each pixel is obtained as the weighted average of itself and the other pixel values in its neighborhood. After the Gaussian blurring, the handwritten-pixel image becomes blurred, but its region is enlarged. For example, any other blurring technique may also be used to blur the initial handwritten pixels; no strict limitation is imposed here.
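For illustration, a minimal sketch of this mask-expansion step, assuming the segmentation result is available as a binary mask file (a hypothetical name); the kernel size is an assumption:

```python
import cv2
import numpy as np

# 255 marks the initial handwritten pixels predicted by the model.
initial_mask = cv2.imread("initial_mask.png", cv2.IMREAD_GRAYSCALE)

# Gaussian blur spreads each marked pixel into its neighborhood.
blurred = cv2.GaussianBlur(initial_mask, (15, 15), 0)

# Every pixel touched by the blur joins the enlarged mask region.
mask_region = (blurred > 0).astype(np.uint8) * 255
cv2.imwrite("handwritten_mask.png", mask_region)
```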
Next, as shown in FIG. 1, in step S13 the handwritten content is determined according to the handwritten-pixel mask region. The handwritten-pixel mask region, combined with the initial handwritten pixels, essentially determines all the handwritten pixels of the handwritten content, and thereby determines the handwritten content.
Next, as shown in FIG. 1, in step S14 the handwritten content is removed from the input image to obtain the output image.
For example, in a first preferred embodiment of the present invention, after the handwritten-pixel mask region is obtained in step S12, the position of the handwritten-pixel mask region in the input image can be determined, and the non-handwritten pixels are then identified in the region at the corresponding position of the input image. Based on the pixel values of the initial handwritten pixels, other pixels whose pixel values differ markedly are sought in the region of the input image corresponding to the position of the handwritten-pixel mask region and identified as non-handwritten pixels; for example, a threshold on the pixel difference can be set, and when the region contains pixels whose pixel difference falls outside the threshold range, those pixels are identified as non-handwritten pixels.
Next, the content of the handwritten-pixel mask region is removed from the input image to obtain an intermediate output image.
For example, the content of the handwritten-pixel mask region can be removed with the OpenCV inpaint function. The OpenCV inpaint function restores a selected region of an image from its neighborhood; that is, the pixels in the region of the input image corresponding to the position of the handwritten-pixel mask region are restored from neighboring pixels, thereby removing the content of the handwritten-pixel mask region from the input image and obtaining the intermediate output image.
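A minimal sketch of this removal step with the OpenCV inpaint function, assuming the input image and the mask are available as files (hypothetical names); the inpainting radius is an assumption:

```python
import cv2

input_image = cv2.imread("input.jpg")
mask = cv2.imread("handwritten_mask.png", cv2.IMREAD_GRAYSCALE)

# Restore the masked region from its neighborhood pixels; INPAINT_TELEA
# is one of OpenCV's two built-in inpainting methods.
intermediate = cv2.inpaint(input_image, mask, 3, cv2.INPAINT_TELEA)
cv2.imwrite("intermediate.png", intermediate)
```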
Next, the non-handwritten pixels of the handwritten-pixel mask region are restored in the intermediate output image to obtain the output image.
For example, the pixel values of the non-handwritten pixels within the handwritten-pixel mask region of the input image are obtained and directly substituted for the pixels at the corresponding positions of the intermediate output image, completing the restoration of the non-handwritten pixels at those positions and finally yielding the output image.
For example, in another preferred embodiment of the present invention, after the handwritten-pixel mask region is obtained in step S12, the position of the handwritten-pixel mask region in the input image can be determined, and the non-handwritten pixels are then identified in the region at the corresponding position of the input image. Based on the pixel values of the initial handwritten pixels, other pixels whose pixel values differ markedly are sought in the region of the input image corresponding to the position of the handwritten-pixel mask region and identified as non-handwritten pixels; for example, a threshold on the pixel difference can be set, and when the region contains pixels whose pixel difference falls outside the threshold range, those pixels are identified as non-handwritten pixels.
Next, the handwritten content is removed from the input image according to the non-handwritten pixels within the handwritten-pixel mask region and the handwritten-pixel mask region, to obtain the output image. That is, the non-handwritten pixels are excluded from the handwritten-pixel mask region and the remaining pixels are removed; the non-handwritten pixels are thus preserved from erroneous removal, finally yielding the output image.
For example, the content of the handwritten-pixel mask region with the non-handwritten pixels excluded can be removed with the OpenCV inpaint function. The OpenCV inpaint function restores a selected region of an image from its neighborhood; that is, the pixels in the region of the input image corresponding to the position of the handwritten-pixel mask region, other than the non-handwritten pixels, are restored from neighboring pixels, thereby removing the content of the handwritten-pixel mask region from the input image.
For example, in another preferred embodiment of the present invention, removing the handwritten content from the input image to obtain the output image includes: cutting the handwritten content out of the input image to obtain an intermediate output image; and binarizing the intermediate output image to obtain the output image.
Binarization sets the gray value of each pixel of the intermediate output image to 0 or 255, that is, it gives the whole intermediate output image a distinct black-and-white appearance. Binarization greatly reduces the amount of data in the intermediate output image, so that the contours of the target stand out. Binarization converts the intermediate output image into a grayscale image with strong black-and-white contrast (that is, the output image); the converted grayscale image has less noise, which can effectively improve the legibility of the content of the output image and the quality of printing.
For example, after the handwritten content is cut out of the input image, all pixels in the region corresponding to the handwritten content are removed; that is, the region of the input image corresponding to the handwritten content has no pixels (is empty). When binarizing the intermediate output image, the empty-pixel regions of the intermediate output image may be left unprocessed; alternatively, when binarizing the intermediate output image, the empty-pixel regions of the intermediate output image may be filled with a gray value of 255. The processed text image thus forms a whole, without unsightly holes where the handwritten content was.
For example, after the intermediate output image is binarized, the final output image is convenient for the user to print in paper form. For example, when the input image is a form, the output image can be printed in paper form for other users to fill in.
For example, the binarization method may be a threshold method, which includes: setting a binarization threshold and comparing the pixel value of each pixel of the intermediate output image with the binarization threshold; if the pixel value of a pixel of the intermediate output image is greater than or equal to the binarization threshold, the pixel value of that pixel is set to gray level 255; if the pixel value of a pixel of the intermediate output image is smaller than the binarization threshold, the pixel value of that pixel is set to gray level 0. The intermediate output image is thereby binarized.
For example, methods for selecting the binarization threshold include the bimodal method, the P-parameter method, Otsu's method (the OTSU method), the maximum entropy method, and iterative methods.
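For illustration, a sketch of the threshold method with OpenCV, showing both a fixed threshold and automatic selection by Otsu's method; the fixed threshold of 127 and the file name are assumptions:

```python
import cv2

gray = cv2.imread("intermediate.png", cv2.IMREAD_GRAYSCALE)

# Fixed threshold: pixels above 127 become 255, the rest become 0.
_, binary_fixed = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)

# Otsu's method picks the threshold automatically from the histogram.
_, binary_otsu = cv2.threshold(gray, 0, 255,
                               cv2.THRESH_BINARY + cv2.THRESH_OTSU)
```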
For example, in some embodiments, binarizing the intermediate output image includes: obtaining the intermediate output image; converting the intermediate output image to grayscale to obtain a grayscale image of the intermediate output image; binarizing the grayscale image according to a first threshold to obtain a binarized image of the intermediate output image; using the binarized image as the guide image, applying guided filtering to the grayscale image to obtain a filtered image; determining, according to a second threshold, the high-value pixels of the filtered image, the gray values of the high-value pixels being greater than the second threshold; expanding the gray values of the high-value pixels according to a preset expansion coefficient to obtain an expanded image; sharpening the expanded image to obtain a sharpened image; and adjusting the contrast of the sharpened image to obtain the output image.
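A sketch of the guided-filtering, high-value-pixel, and expansion steps follows; it assumes the opencv-contrib-python package (which provides cv2.ximgproc) is installed, the radius and eps values are assumptions, and the expansion coefficient 1.3 follows the range given in the surrounding text:

```python
import cv2
import numpy as np

gray = cv2.imread("intermediate.png", cv2.IMREAD_GRAYSCALE)

# Binarize the grayscale image (first threshold chosen by Otsu here).
_, guide = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Guided filtering of the grayscale image with the binarized image as guide.
filtered = cv2.ximgproc.guidedFilter(guide=guide, src=gray, radius=8, eps=50)

# Second threshold: gray mean plus standard deviation of the filtered image.
mean, std = cv2.meanStdDev(filtered)
second_threshold = mean[0][0] + std[0][0]

# Expand the gray values of the high-value pixels by the preset coefficient.
expanded = filtered.astype(np.float32)
expanded[expanded > second_threshold] *= 1.3
expanded = np.clip(expanded, 0, 255).astype(np.uint8)
```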
For example, grayscale conversion methods include the component method, the maximum method, the average method, and the weighted-average method.
For example, the preset expansion coefficient is 1.2-1.5, for example, 1.3. The gray value of each high-value pixel is multiplied by the preset expansion coefficient to expand the gray values of the high-value pixels, thereby obtaining an expanded image with more pronounced black-and-white contrast.
For example, the second threshold is the sum of the gray mean of the filtered image and the standard deviation of its gray values.
For example, sharpening the expanded image to obtain the sharpened image includes: blurring the expanded image with Gaussian filtering to obtain a blurred image; and mixing the blurred image and the expanded image in proportion according to preset mixing coefficients to obtain the sharpened image.
For example, let f_1(i,j) be the gray value of the pixel of the expanded image at (i,j), f_2(i,j) the gray value of the pixel of the blurred image at (i,j), f_3(i,j) the gray value of the pixel of the sharpened image at (i,j), k_1 the preset mixing coefficient of the expanded image, and k_2 the preset mixing coefficient of the blurred image; then f_1(i,j), f_2(i,j), and f_3(i,j) satisfy the following relation:
f_3(i,j) = k_1 · f_1(i,j) + k_2 · f_2(i,j).
For example, the preset mixing coefficient of the expanded image is 1.5, and the preset mixing coefficient of the blurred image is -0.5.
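With the coefficients given above (k_1 = 1.5, k_2 = -0.5), the mixing relation is classic unsharp masking; a sketch with cv2.addWeighted, where the blur kernel size and file name are assumptions:

```python
import cv2

expanded = cv2.imread("expanded.png", cv2.IMREAD_GRAYSCALE)

# Blur the expanded image, then mix: f_3 = 1.5*f_1 - 0.5*f_2.
blurred = cv2.GaussianBlur(expanded, (5, 5), 0)
sharpened = cv2.addWeighted(expanded, 1.5, blurred, -0.5, 0)
cv2.imwrite("sharpened.png", sharpened)
```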
For example, adjusting the contrast of the sharpened image includes: adjusting the gray value of each pixel of the sharpened image according to the gray mean of the sharpened image.
For example, the gray value of each pixel of the sharpened image can be adjusted by the following formula, which appears in the original publication only as an image and is not recoverable here:
[formula image PCTCN2021076250-appb-000001]
where f'(i,j) is the gray value of the pixel of the enhanced image at (i,j), the symbol shown as image PCTCN2021076250-appb-000002 denotes the gray mean of the sharpened image, f(i,j) is the gray value of the pixel of the sharpened image at (i,j), and t is an intensity value. For example, the intensity value may be 0.1-0.5, for example, 0.2. In practical applications, the intensity value can be selected according to the black-and-white enhancement effect to be achieved.
For example, as shown in FIG. 1, step S14 includes: obtaining replacement pixels; and replacing the pixels of the handwritten content with the replacement pixels, so as to remove the handwritten content from the input image and obtain the output image.
For example, a replacement pixel may be an adjacent pixel outside the handwritten-pixel mask region, that is, a pixel adjacent, outside the handwritten-pixel mask region, to the handwritten pixel currently being replaced; likewise, the OpenCV inpaint function can be used to perform this pixel replacement directly.
For example, region recognition can also be used for the handwritten-pixel replacement. First, the handwritten region is obtained through a region recognition model; the replacement pixel may then be the pixel value of any pixel in the handwritten region other than the pixels of the handwritten content; or the replacement pixel is the average (for example, the geometric mean) of the pixel values of all pixels in the handwritten region other than the pixels of the handwritten content; or the replacement pixel value may be a fixed value, for example, gray level 255. It should be noted that an image segmentation model such as a U-Net model can be used to extract directly any pixel in the handwritten region other than the pixels of the handwritten content, to obtain the replacement pixel; or such an image segmentation model can be used to extract all pixels in the handwritten region other than the pixels of the handwritten content, and the replacement pixel value is then obtained from the pixel values of all those pixels.
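As a sketch of the averaging variant only, assuming the handwritten region and the handwritten-content pixels are available as binary mask files (hypothetical names):

```python
import cv2
import numpy as np

input_image = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE)
region_mask = cv2.imread("region_mask.png", cv2.IMREAD_GRAYSCALE) > 0
content_mask = cv2.imread("content_mask.png", cv2.IMREAD_GRAYSCALE) > 0

# Replacement value: mean of the handwritten region's pixels,
# excluding the handwritten-content pixels themselves.
background = region_mask & ~content_mask
replacement = int(input_image[background].mean())

# Overwrite the handwritten-content pixels with the replacement value.
output = input_image.copy()
output[content_mask] = replacement
```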
For example, replacing the pixels of the handwritten content with the replacement pixels to remove the handwritten content from the input image and obtain the output image includes: replacing the pixels of the handwritten content with the replacement pixels to remove the handwritten content from the input image and obtain an intermediate output image; and binarizing the intermediate output image to obtain the output image.
It should be noted that for descriptions of region recognition by the region recognition model, of binarization, and so on, reference may be made to the relevant descriptions above; repeated material is not restated.
For example, after the original image shown in FIG. 2A undergoes the removal of handwritten content from the text image, the output image shown in FIG. 2B can be obtained; that output image is a binarized image. As shown in FIG. 2B, all handwritten content in the output image has been removed, yielding a blank form without the information filled in by the user.
It should be noted that in the embodiments of the present invention, a model (for example, any model such as the region recognition model or the image segmentation model) is not merely a mathematical model but a module that can receive input data, perform data processing, and output a processing result; the module may be a software module, a hardware module (for example, a hardware neural network), or a combination of software and hardware. In some embodiments, the region recognition model and/or the image segmentation model include code and programs stored in a memory; a processor can execute the code and programs to implement some or all of the functions of the region recognition model and/or the image segmentation model as described above. In still other embodiments, the region recognition model and/or the image segmentation model may include one circuit board or a combination of circuit boards for implementing the functions described above. In some embodiments, the circuit board or combination of circuit boards may include: (1) one or more processors; (2) one or more non-transitory computer-readable memories connected to the processors; and (3) processor-executable firmware stored in the memories.
It should be understood that in the embodiments of the present invention, before the input image is obtained, the method for removing handwritten content from a text image further includes a training phase. The training phase includes the process of training the region recognition model and the image segmentation model. It should be noted that the region recognition model and the image segmentation model may be trained separately, or the region recognition model and the image segmentation model may be trained at the same time.
For example, the region recognition model can be obtained by training a region recognition model to be trained with first sample images annotated with printed-text regions (for example, at least one annotated printed-text region) and handwritten regions (for example, at least one annotated handwritten region). For example, the training process of the region recognition model to be trained may include: in the training phase, training the region recognition model to be trained with multiple first sample images annotated with printed-text regions and handwritten regions, to obtain the region recognition model.
For example, training the region recognition model to be trained with multiple first sample images includes: obtaining a current first sample image from the multiple first sample images; processing the current first sample image with the region recognition model to be trained, to obtain a training printed-text region and a training handwritten region; computing a first loss value of the region recognition model to be trained through a first loss function, from the printed-text region and handwritten region annotated in the current first sample image and the training printed-text region and training handwritten region; and correcting the parameters of the region recognition model to be trained according to the first loss value. When the first loss function satisfies a first predetermined condition, the trained region recognition model is obtained; when the first loss function does not satisfy the first predetermined condition, first sample images continue to be input so that the above training process is repeated.
For example, in one example, the first predetermined condition corresponds to convergence of the loss of the first loss function (that is, the first loss value no longer decreases significantly) given the input of a certain number of first sample images. For example, in another example, the first predetermined condition is that the number of training iterations or training epochs reaches a predetermined number (for example, the predetermined number may be in the millions).
For example, the image segmentation model can be obtained by training an image segmentation model to be trained with second sample images annotated with handwritten-content pixels. When the handwritten-content pixels in a second sample image are annotated, the second sample image can be enlarged so that all handwritten-content pixels are annotated accurately. Machine learning is performed on various handwriting features (for example, pixel grayscale features and font features) to build the image segmentation model.
For example, the training process of the image segmentation model to be trained may include: in the training phase, training the image segmentation model to be trained with multiple second sample images annotated with handwritten-content pixels, to obtain the image segmentation model.
For example, training the image segmentation model to be trained with multiple second sample images includes: obtaining a current second sample image from the multiple second sample images; processing the current second sample image with the image segmentation model to be trained, to obtain training handwritten-content pixels; computing a second loss value of the image segmentation model to be trained through a second loss function, from the handwritten-content pixels annotated in the current second sample image and the training handwritten-content pixels; and correcting the parameters of the image segmentation model to be trained according to the second loss value. When the second loss function satisfies a second predetermined condition, the trained image segmentation model is obtained; when the second loss function does not satisfy the second predetermined condition, second sample images continue to be input so that the above training process is repeated.
For example, in one example, the second predetermined condition corresponds to convergence of the loss of the second loss function (that is, the second loss value no longer decreases significantly) given the input of a certain number of second sample images. For example, in another example, the second predetermined condition is that the number of training iterations or training epochs reaches a predetermined number (for example, the predetermined number may be in the millions).
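A schematic PyTorch training loop for the per-sample process described above; the stand-in model, the binary cross-entropy loss (the text does not name the second loss function), the synthetic data, and the fixed epoch count in place of the convergence condition are all assumptions:

```python
import torch
import torch.nn as nn

# Stand-in segmentation model; in practice this would be the U-Net above.
model = nn.Sequential(nn.Conv2d(1, 1, 3, padding=1), nn.Sigmoid())
loss_fn = nn.BCELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Placeholder for a real loader of (image, annotated-mask) pairs.
loader = [(torch.randn(1, 1, 64, 64), torch.rand(1, 1, 64, 64).round())
          for _ in range(8)]

for epoch in range(10):
    for image, annotated_mask in loader:
        predicted = model(image)                   # training handwritten pixels
        loss = loss_fn(predicted, annotated_mask)  # second loss value
        optimizer.zero_grad()
        loss.backward()                            # gradients for correction
        optimizer.step()                           # correct the parameters
```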
A person skilled in the art will understand that the multiple first training sample images and the multiple second training sample images may be the same or different.
At least one embodiment of the present invention further provides a device for removing handwritten content from a text image. FIG. 3 is a schematic block diagram of a device for removing handwritten content from a text image provided by at least one embodiment of the present invention.
As shown in FIG. 3, the device 300 for removing handwritten content from a text image includes a processor 302 and a memory 301. It should be noted that the components of the device 300 shown in FIG. 3 are only exemplary, not limiting; according to the needs of the actual application, the device 300 may also have other components. For example, the memory 301 is used for non-transitory storage of computer-readable instructions; the processor 302 is used to run the computer-readable instructions, and the computer-readable instructions, when run by the processor 302, execute the method for removing handwritten content from a text image according to any of the above embodiments.
The device 300 for removing handwritten content from a text image provided by the embodiments of the present invention can be used to implement the method for removing handwritten content from a text image provided by the embodiments of the present invention, and the device 300 can be configured on an electronic device. The electronic device may be a personal computer, a mobile terminal, or the like; the mobile terminal may be a hardware device with any of various operating systems, such as a mobile phone or a tablet computer.
For example, as shown in FIG. 3, the device 300 may further include an image acquisition component 303. The image acquisition component 303 is used to obtain a text image, for example, an image of a paper text. The memory 301 may also be used to store the text image; the processor 302 is further used to read and process the text image to obtain the input image. It should be noted that the text image may be the original image described in the embodiments of the method above.
For example, the image acquisition component 303 is the image acquisition device described in the embodiments of the method above; for example, the image acquisition component 303 may be the camera of a smartphone, the camera of a tablet computer, the camera of a personal computer, the lens of a digital camera, a webcam, or another device usable for image capture.
For example, in the embodiment shown in FIG. 3, the image acquisition component 303, the memory 301, the processor 302, and so on may be physically integrated in the same electronic device, and the image acquisition component 303 may be a camera configured on the electronic device; the memory 301 and the processor 302 then receive the images sent from the image acquisition component 303 over an internal bus. For another example, the image acquisition component 303 and the memory 301/processor 302 may also be configured at separate physical locations: the memory 301 and the processor 302 may be integrated in a first user's electronic device (for example, the first user's computer or mobile phone), and the image acquisition component 303 may be integrated in a second user's electronic device (the first user and the second user being different); the first user's electronic device and the second user's electronic device may be physically separate and may communicate with each other in a wired or wireless manner. That is, after the image acquisition component 303 on the second user's electronic device captures the original image, the second user's electronic device may send the original image to the first user's electronic device in a wired or wireless manner, and the first user's electronic device receives the original image and performs subsequent processing on the original image. For example, the memory 301 and the processor 302 may also be integrated in a cloud server, and the cloud server receives the original image and processes the original image.
For example, the device 300 for removing handwritten content from a text image may further include an output device, which is used to output the output image. For example, the output device may include a display (for example, an organic light-emitting diode display or a liquid crystal display), a projector, and the like; the display and the projector can be used to display the output image. It should be noted that the output device may also include a printer, which is used to print the output image.
For example, components such as the processor 302 and the memory 301 can communicate with one another over a network connection. The network may include a wireless network, a wired network, and/or any combination of a wireless network and a wired network. The network may include a local area network, the Internet, a telecommunications network, an Internet of Things based on the Internet and/or a telecommunications network, and/or any combination of the above networks. The wired network may communicate over, for example, twisted pair, coaxial cable, or optical fiber; the wireless network may use, for example, a 3G/4G/5G mobile communication network, Bluetooth, Zigbee, or WiFi. The present invention does not limit the type or function of the network here.
For example, the processor 302 may control the other components in the device 300 for removing handwritten content from a text image to perform desired functions. The processor 302 may be a central processing unit (CPU), a tensor processing unit (TPU), a graphics processing unit (GPU), or another device with data processing capability and/or program execution capability. The central processing unit (CPU) may use an X86 or ARM architecture. The GPU may be integrated directly on the motherboard on its own, or built into the north bridge chip of the motherboard; the GPU may also be built into the central processing unit (CPU).
For example, the memory 301 may include any combination of one or more computer program products, and the computer program products may include computer-readable storage media in various forms, such as volatile memory and/or non-volatile memory. Volatile memory may include, for example, random access memory (RAM) and/or cache memory. Non-volatile memory may include, for example, read-only memory (ROM), hard disks, erasable programmable read-only memory (EPROM), portable compact disc read-only memory (CD-ROM), USB memory, and flash memory. One or more computer-readable instructions may be stored on the computer-readable storage media, and the processor 302 may run the computer-readable instructions to implement the various functions of the device 300 for removing handwritten content from a text image. Various application programs, various data, and the like may also be stored in the storage media.
For a detailed description of the process by which the device 300 performs the method for removing handwritten content from a text image, reference may be made to the relevant descriptions in the embodiments of the method for removing handwritten content from a text image; repeated material is not restated.
At least one embodiment of the present invention further provides a storage medium. FIG. 4 is a schematic diagram of a storage medium provided by at least one embodiment of the present invention. For example, as shown in FIG. 4, one or more computer-readable instructions 501 may be stored non-transitorily on the storage medium 500. For example, when the computer-readable instructions 501 are executed by a computer, one or more steps of the method for removing handwritten content from a text image described above can be executed.
For example, the storage medium 500 may be applied to the device 300 for removing handwritten content from a text image described above; for example, it may include the memory 301 in the device 300 for removing handwritten content from a text image.
For example, for a description of the storage medium 500, reference may be made to the description of the memory in the embodiments of the device 300 for removing handwritten content from a text image; repeated material is not restated.
FIG. 5 is a schematic diagram of a hardware environment provided by at least one embodiment of the present invention. The device for removing handwritten content from a text image provided by the embodiments of the present invention can be applied to an Internet system.
The computer system provided in FIG. 5 can be used to implement the device for removing handwritten content from a text image involved in the present invention. Such computer systems may include personal computers, notebook computers, tablet computers, mobile phones, and any smart devices. The specific system in this embodiment uses a functional block diagram to explain a hardware platform that includes a user interface. Such a computer system may include a general-purpose computer device or a special-purpose computer device; both types of computer device can be used to implement the device for removing handwritten content from a text image in this embodiment. The computer system can implement any of the presently described components of the information needed to implement the method for removing handwritten content from text images. For example, the computer system can be realized by a computer device through its hardware devices, software programs, firmware, and combinations thereof. For convenience, only one computer device is drawn in FIG. 5, but the computer functions involved in implementing the information needed by the method for removing handwritten content from a text image may be carried out in a distributed fashion by a set of similar platforms, spreading the processing load of the computer system.
As shown in FIG. 5, the computer system may include a communication port 250 connected to a network that realizes data communication; for example, the communication port 250 may communicate with the image acquisition component 403 described above. The computer system may also include a processor group 220 (that is, the processor described above) for executing program instructions; the processor group 220 may consist of at least one processor (for example, a CPU). The computer system may include an internal communication bus 210. The computer system may include different forms of program storage units and data storage units (that is, the memory or storage medium described above), such as a hard disk 270, a read-only memory (ROM) 230, and a random access memory (RAM) 240, which can be used to store various data files used for computer processing and/or communication, as well as possible program instructions executed by the processor group 220. The computer system may also include an input/output component 260, which can support the flow of input/output data between the computer system and other components (for example, a user interface 280, which may be the display described above). The computer system can also send and receive information and data through the communication port 250.
In some embodiments, the computer system described above may be used to form a server in an Internet communication system. The server of the Internet communication system may be one server hardware device or a server group; the servers within a server group may be connected through a wired or wireless network. A server group may be centralized, such as a data center; a server group may also be distributed, such as a distributed system.
It should be noted that each block in the block diagram and/or flowchart of the present invention, and combinations of blocks in the block diagram and/or flowchart, can be implemented by a dedicated hardware-based system that performs the specified functions or actions, or by a combination of dedicated hardware and computer program instructions. It is well known to those skilled in the art that implementation in hardware, implementation in software, and implementation through a combination of software and hardware are all equivalent.
For the present invention, the following points also need to be explained:
(1) The drawings of the embodiments of the present invention relate only to the structures involved in the embodiments of the present invention; other structures may refer to common designs.
(2) For clarity, in the drawings used to describe the embodiments of the present invention, the thicknesses and sizes of layers or structures are exaggerated. It will be understood that when an element such as a layer, film, region, or substrate is referred to as being "on" or "under" another element, the element may be "directly" on or under the other element, or intermediate elements may be present.
(3) Where no conflict arises, the embodiments of the present invention and the features of the embodiments may be combined with one another to obtain new embodiments.
The above are only specific implementations of the present invention, but the protection scope of the present invention is not limited thereto; the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (13)

  1. A method for removing handwritten content from a text image, characterized by comprising:
    obtaining an input image of a text page to be processed, wherein the input image comprises a handwritten region, and the handwritten region comprises handwritten content;
    recognizing the input image with an image segmentation model to obtain initial handwritten pixels of the handwritten content;
    blurring the initial handwritten pixels to obtain a handwritten-pixel mask region;
    determining the handwritten content in the handwritten region according to the handwritten-pixel mask region;
    removing the handwritten content from the input image to obtain an output image.
  2. The method for removing handwritten content from a text image according to claim 1, characterized in that removing the handwritten content from the input image to obtain an output image comprises:
    determining, according to the pixel values of the initial handwritten pixels and the position of the handwritten-pixel mask region, non-handwritten pixels in the handwritten-pixel mask region of the input image; removing the content of the handwritten-pixel mask region from the input image to obtain an intermediate output image;
    restoring the non-handwritten pixels of the handwritten-pixel mask region in the intermediate output image to obtain the output image.
  3. The method for removing handwritten content from a text image according to claim 1, characterized in that removing the handwritten content from the input image to obtain an output image comprises:
    determining, according to the pixel values of the initial handwritten pixels and the position of the handwritten-pixel mask region, non-handwritten pixels in the handwritten-pixel mask region of the input image;
    removing the handwritten content from the input image according to the non-handwritten pixels in the handwritten-pixel mask region and the handwritten-pixel mask region, so as to obtain the output image.
  4. The method for removing handwritten content from a text image according to claim 1, characterized in that removing the handwritten content from the input image to obtain an output image comprises:
    cutting the handwritten content out of the input image to obtain an intermediate output image;
    binarizing the intermediate output image to obtain the output image.
  5. The method for removing handwritten content from a text image according to claim 1, characterized in that removing the handwritten content from the input image to obtain the output image comprises:
    obtaining replacement pixels;
    replacing the pixels of the handwritten content with the replacement pixels, so as to remove the handwritten content from the input image and obtain the output image.
  6. The method for removing handwritten content from a text image according to claim 5, characterized in that replacing the pixels of the handwritten content with the replacement pixels to remove the handwritten content from the input image and obtain the output image comprises:
    replacing the pixels of the handwritten content with the replacement pixels to remove the handwritten content from the input image and obtain an intermediate output image;
    binarizing the intermediate output image to obtain the output image.
  7. The method for removing handwritten content from a text image according to claim 5, characterized in that the replacement pixels are obtained from the pixels of the handwritten content through an image inpainting algorithm based on pixel-neighborhood computation.
  8. The method for removing handwritten content from a text image according to claim 5, characterized in that obtaining the replacement pixels further comprises recognizing the input image with a region recognition model to obtain the handwritten region, the replacement pixel being any one pixel in the handwritten region other than the pixels of the handwritten content; or,
    the replacement pixel is the average of the pixel values of all pixels in the handwritten region other than the pixels of the handwritten content.
  9. The method for removing handwritten content from a text image according to any one of claims 1-8, characterized in that obtaining the input image of the text page to be processed comprises:
    obtaining an original image of the text page to be processed, wherein the original image comprises a text region to be processed;
    performing edge detection on the original image to determine the text region to be processed in the original image;
    rectifying the text region to be processed to obtain the input image.
  10. The method for removing handwritten content from a text image according to claim 1, characterized in that the image segmentation model is a pre-trained U-Net model that segments the input image.
  11. The method for removing handwritten content from a text image according to claim 1, characterized in that the initial handwritten pixels are blurred with a Gaussian filter function, enlarging the region of the initial handwritten pixels to obtain the handwritten-pixel mask region.
  12. A device for removing handwritten content from a text image, characterized by comprising:
    a memory for non-transitory storage of computer-readable instructions; and
    a processor for running the computer-readable instructions, wherein the computer-readable instructions, when run by the processor, execute the method for removing handwritten content from a text image according to any one of claims 1-11.
  13. A storage medium non-transitorily storing computer-readable instructions, characterized in that, when the computer-readable instructions are executed by a computer, the method for removing handwritten content from a text image according to any one of claims 1-11 can be executed.
PCT/CN2021/076250 2020-04-10 2021-02-09 Method, device and storage medium for removing handwritten content from a text image WO2021203832A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
KR1020227037762A 2020-04-10 2021-02-09 Method, apparatus and storage medium for removing handwritten content from a text image
JP2022560485A 2020-04-10 2021-02-09 Method and apparatus for removing handwritten content in a text image, and storage medium
US17/915,488 US20230222631A1 (en) 2020-04-10 2021-02-09 Method and device for removing handwritten content from text image, and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010278143.4A 2020-04-10 2020-04-10 Method, device and storage medium for removing handwritten content from a text image
CN202010278143.4 2020-04-10

Publications (1)

Publication Number Publication Date
WO2021203832A1 true WO2021203832A1 (zh) 2021-10-14

Family

ID=71794780

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/076250 WO2021203832A1 (zh) 2020-04-10 2021-02-09 文本图像中手写内容去除方法、装置、存储介质

Country Status (5)

Country Link
US (1) US20230222631A1 (zh)
JP (1) JP2023523152A (zh)
KR (1) KR20220160660A (zh)
CN (1) CN111488881A (zh)
WO (1) WO2021203832A1 (zh)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
  • CN111275139B (zh) * 2020-01-21 2024-02-23 杭州大拿科技股份有限公司 Handwritten content removal method, handwritten content removal device, and storage medium
  • CN111488881A (zh) 2020-04-10 2020-08-04 杭州睿琪软件有限公司 Method, device and storage medium for removing handwritten content from a text image
  • CN112070708B (zh) 2020-08-21 2024-03-08 杭州睿琪软件有限公司 Image processing method, image processing device, electronic device, and storage medium
  • CN112150394B (zh) 2020-10-12 2024-02-20 杭州睿琪软件有限公司 Image processing method and device, electronic device, and storage medium
  • CN112150365B (zh) 2020-10-15 2023-02-21 江西威力固智能设备有限公司 Expansion and shrinkage processing method for jet-printed images and jet-printing device
  • CN113781356A (zh) 2021-09-18 2021-12-10 北京世纪好未来教育科技有限公司 Training method for an image denoising model, image denoising method, device, and equipment
  • CN114283156B (zh) 2021-12-02 2024-03-05 珠海移科智能科技有限公司 Method and device for removing color and handwriting from document images

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
  • KR20080055119A (ko) * 2006-12-14 2008-06-19 삼성전자주식회사 Image forming apparatus and control method thereof
  • CN105898322A (zh) 2015-07-24 2016-08-24 乐视云计算有限公司 Video watermark removal method and device

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
  • CN102521516A (zh) * 2011-12-20 2012-06-27 北京商纳科技有限公司 Method and system for automatically generating a notebook of incorrectly answered questions
US20190066273A1 (en) * 2013-07-24 2019-02-28 Georgetown University Enhancing the legibility of images using monochromatic light sources
  • CN109254711A (zh) * 2018-09-29 2019-01-22 联想(北京)有限公司 Information processing method and electronic device
  • CN111275139A (zh) 2020-01-21 2020-06-12 杭州大拿科技股份有限公司 Handwritten content removal method, handwritten content removal device, and storage medium
  • CN111488881A (zh) 2020-04-10 2020-08-04 杭州睿琪软件有限公司 Method, device and storage medium for removing handwritten content from a text image

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
  • CN114048822A (zh) * 2021-11-19 2022-02-15 辽宁工程技术大学 Attention-mechanism feature-fusion segmentation method for images
  • CN117746214A (zh) * 2024-02-07 2024-03-22 青岛海尔科技有限公司 Text adjustment method, device and storage medium for images generated based on a large model
  • CN117746214B (zh) * 2024-02-07 2024-05-24 青岛海尔科技有限公司 Text adjustment method, device and storage medium for images generated based on a large model

Also Published As

Publication number Publication date
JP2023523152A (ja) 2023-06-02
KR20220160660A (ko) 2022-12-06
US20230222631A1 (en) 2023-07-13
CN111488881A (zh) 2020-08-04

Similar Documents

Publication Publication Date Title
WO2021203832A1 (zh) Method, device and storage medium for removing handwritten content from a text image
WO2021147631A1 (zh) Handwritten content removal method, handwritten content removal device, and storage medium
WO2021233266A1 (zh) Edge detection method and device, electronic device, and storage medium
JP5972468B2 (ja) Detection of labels from images
US11106891B2 (en) Automated signature extraction and verification
WO2023284502A1 (zh) Image processing method, device, equipment, and storage medium
US9330331B2 (en) Systems and methods for offline character recognition
US9235757B1 (en) Fast text detection
US10423851B2 (en) Method, apparatus, and computer-readable medium for processing an image with horizontal and vertical text
US10169650B1 (en) Identification of emphasized text in electronic documents
CN113033558B (zh) Text detection method and device for natural scenes, and storage medium
CN114283156B (zh) Method and device for removing color and handwriting from document images
Bukhari et al. The IUPR dataset of camera-captured document images
Susan et al. Text area segmentation from document images by novel adaptive thresholding and template matching using texture cues
WO2022002002A1 (zh) Image processing method, image processing device, electronic device, and storage medium
CN114581928A (zh) Table recognition method and system
JP7364639B2 (ja) Processing of digitized writing
Cai et al. Bank card and ID card number recognition in Android financial APP
WO2019071476A1 (zh) Express delivery information entry method and entry system based on an intelligent terminal
Konya et al. Adaptive methods for robust document image understanding
Hengaju et al. Improving the Recognition Accuracy of Tesseract-OCR Engine on Nepali Text Images via Preprocessing
Uyun et al. Skew Correction and Image Cleaning Handwriting Recognition Using a Convolutional Neural Network
Mahajan et al. Improving Classification of Scanned Document Images using a Novel Combination of Pre-Processing Techniques
US20240144711A1 (en) Reliable determination of field values in documents with removal of static field elements
Tamirat Customers Identity Card Data Detection and Recognition Using Image Processing

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21785421

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2022560485

Country of ref document: JP

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 20227037762

Country of ref document: KR

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21785421

Country of ref document: EP

Kind code of ref document: A1
