WO2021147631A1 - Handwritten content removal method, handwritten content removal device, and storage medium - Google Patents

Handwritten content removal method, handwritten content removal device, and storage medium

Info

Publication number
WO2021147631A1
WO2021147631A1 (PCT/CN2020/141110)
Authority
WO
WIPO (PCT)
Prior art keywords
image
handwritten content
area
input image
handwritten
Prior art date
Application number
PCT/CN2020/141110
Other languages
English (en)
French (fr)
Inventor
何涛
罗欢
陈明权
Original Assignee
杭州大拿科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 杭州大拿科技股份有限公司
Priority to US 17/791,220 (published as US11823358B2)
Publication of WO2021147631A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/77Retouching; Inpainting; Scratch removal
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/24Character recognition characterised by the processing or recognition method
    • G06V30/242Division of the character sequences into groups prior to recognition; Selection of dictionaries
    • G06V30/244Division of the character sequences into groups prior to recognition; Selection of dictionaries using graphical properties, e.g. alphabet type or font
    • G06V30/2455Discrimination between machine-print, hand-print and cursive writing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/002D [Two Dimensional] image generation
    • G06T11/001Texturing; Colouring; Generation of texture or colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/28Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/14Image acquisition
    • G06V30/146Aligning or centring of the image pick-up or image-field
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/14Image acquisition
    • G06V30/148Segmentation of character regions
    • G06V30/15Cutting or merging image elements, e.g. region growing, watershed or clustering-based techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/14Image acquisition
    • G06V30/148Segmentation of character regions
    • G06V30/155Removing patterns interfering with the pattern to be recognised, such as ruled lines or underlines
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/16Image preprocessing
    • G06V30/162Quantising the image signal
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/18Extraction of features or characteristics of the image
    • G06V30/1801Detecting partial patterns, e.g. edges or contours, or configurations, e.g. loops, corners, strokes or intersections
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/22Character recognition characterised by the type of writing
    • G06V30/226Character recognition characterised by the type of writing of cursive writing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/40Document-oriented image-based pattern recognition
    • G06V30/41Analysis of document content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/40Document-oriented image-based pattern recognition
    • G06V30/42Document-oriented image-based pattern recognition based on the type of document
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30176Document
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09BEDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B3/00Manually or mechanically operated teaching appliances working with questions and answers
    • G09B3/02Manually or mechanically operated teaching appliances working with questions and answers of the type wherein the student is expected to construct an answer to the question which is presented or wherein the machine gives an answer to the question presented by a student

Definitions

  • the embodiments of the present disclosure relate to a method for removing handwritten content, a device for removing handwritten content, and a storage medium.
  • At least one embodiment of the present disclosure provides a method for removing handwritten content, including: acquiring an input image of a text page to be processed, wherein the input image includes a handwriting area and the handwriting area includes handwritten content; recognizing the input image to determine the handwritten content in the handwriting area; and removing the handwritten content from the input image to obtain an output image.
  • the input image further includes a text printing area
  • the text printing area includes printing content
  • recognizing the input image to determine the handwritten content in the handwriting area includes: using a region recognition model to recognize the input image to obtain the text printing area and the handwriting area.
  • removing the handwritten content from the input image to obtain the output image includes: marking the handwriting area to obtain a handwriting-area bounding box, wherein the bounding box encloses the handwriting area; and cutting the bounding box out of the input image to obtain the output image.
  • cutting the handwriting-area bounding box out of the input image to obtain the output image includes: cutting the bounding box out of the input image to obtain an intermediate output image; and binarizing the intermediate output image to obtain the output image.
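Both removal paths above end with a binarization step, but the text does not specify the method. A minimal NumPy-only sketch using Otsu's method, one common choice (the function names are illustrative, not from the patent):

```python
import numpy as np

def otsu_threshold(gray: np.ndarray) -> int:
    """Return the Otsu threshold for a uint8 grayscale image.

    Chooses the threshold that maximizes between-class variance
    between the dark and bright pixel populations.
    """
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    total = hist.sum()
    cum_count = np.cumsum(hist)                      # pixels at or below t
    cum_sum = np.cumsum(hist * np.arange(256))       # weighted sum up to t
    grand_sum = cum_sum[-1]
    best_t, best_var = 0, -1.0
    for t in range(256):
        w0 = cum_count[t]
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        mu0 = cum_sum[t] / w0
        mu1 = (grand_sum - cum_sum[t]) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def binarize(gray: np.ndarray) -> np.ndarray:
    """Map pixels above the Otsu threshold to 255, the rest to 0."""
    t = otsu_threshold(gray)
    return np.where(gray > t, 255, 0).astype(np.uint8)
```

A fixed threshold or adaptive thresholding would slot into the same place in the pipeline.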
  • recognizing the input image to determine the handwritten content in the handwriting area may include: using a region recognition model to recognize the input image to obtain the text printing area and the handwriting area, and using a pixel recognition model to perform pixel recognition on the handwriting area to determine the handwritten-content pixels corresponding to the handwritten content in the handwriting area.
  • removing the handwritten content from the input image to obtain the output image includes: obtaining a replacement pixel value; and replacing the pixel values of the handwritten-content pixels with the replacement pixel value to remove the handwritten content from the input image and obtain the output image.
  • replacing the pixel values of the handwritten-content pixels with the replacement pixel value to obtain the output image includes: replacing the pixel values of the handwritten-content pixels with the replacement pixel value to obtain an intermediate output image; and binarizing the intermediate output image to obtain the output image.
  • the replacement pixel value is the pixel value of any pixel in the handwriting area other than the handwritten-content pixels; or the replacement pixel value is the average of the pixel values of all pixels in the handwriting area other than the handwritten-content pixels.
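The second option above (replace each handwritten-content pixel with the mean of the remaining pixels in the handwriting area) can be sketched in a few lines of NumPy. The mask of handwritten-content pixels is assumed to come from the pixel recognition model; here it is simply taken as given:

```python
import numpy as np

def remove_handwriting(region: np.ndarray, hw_mask: np.ndarray) -> np.ndarray:
    """Replace handwritten-content pixels in a grayscale handwriting
    region with the mean of the region's remaining pixels.

    region  : uint8 grayscale crop of the handwriting area
    hw_mask : bool array, True where a pixel belongs to handwritten
              content (output of the pixel recognition model; assumed)
    """
    out = region.copy()
    background = region[~hw_mask]
    if background.size == 0:
        return out  # no background pixels to sample a replacement from
    replacement = int(round(background.mean()))
    out[hw_mask] = replacement
    return out
```

The first option (pick any single non-handwritten pixel's value) amounts to replacing the mean with one sampled pixel value.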
  • the text page to be processed is a test paper or an exercise sheet, the printed content includes a question stem, and the handwritten content includes an answer.
  • the handwritten content includes handwritten characters.
  • obtaining an input image of the text page to be processed includes: obtaining an original image of the text page to be processed, wherein the original image includes a text area to be processed; performing edge detection on the original image to determine the text area to be processed in the original image; and normalizing the text area to be processed to obtain the input image.
  • At least one embodiment of the present disclosure provides a handwritten content removal device, including: a memory for non-transitory storage of computer-readable instructions; and a processor for running the computer-readable instructions, wherein, when the computer-readable instructions are executed by the processor, the method for removing handwritten content according to any one of the foregoing embodiments is performed.
  • the handwritten content removal device further includes an image acquisition component configured to capture an image; the memory also stores the captured image, and the processor reads and processes the captured image to obtain the input image.
  • At least one embodiment of the present disclosure provides a storage medium that non-transitorily stores computer-readable instructions, wherein, when the computer-readable instructions are executed by a computer, the handwritten content removal method according to any of the above-mentioned embodiments can be executed.
  • FIG. 1 is a schematic flowchart of a method for removing handwritten content according to at least one embodiment of the present disclosure
  • FIG. 2A is a schematic diagram of an original image provided by at least one embodiment of the present disclosure.
  • FIG. 2B is a schematic diagram of an output image provided by at least one embodiment of the present disclosure.
  • FIG. 3 is a schematic block diagram of an apparatus for removing handwritten content according to at least one embodiment of the present disclosure
  • FIG. 4 is a schematic diagram of a storage medium provided by at least one embodiment of the present disclosure.
  • FIG. 5 is a schematic diagram of a hardware environment provided by at least one embodiment of the present disclosure.
  • At least one embodiment of the present disclosure provides a method for removing handwritten content, a device for removing handwritten content, and a storage medium.
  • the handwritten content removal method includes: obtaining an input image of a text page to be processed, where the input image includes a handwriting area and the handwriting area includes handwritten content; recognizing the input image to determine the handwritten content in the handwriting area; and removing the handwritten content from the input image to obtain the output image.
  • the handwritten content removal method can effectively remove the handwritten content in the handwritten area in the input image, so as to output a new page for filling in.
  • the handwritten content removal method can also convert the input image into a form that is convenient for printing, so that the user can print the input image into a paper form for filling in.
  • the method for removing handwritten content provided by an embodiment of the present disclosure includes steps S10 to S12.
  • step S10 an input image of a text page to be processed is obtained.
  • the input image includes a handwritten area, and the handwritten area includes handwritten content.
  • the input image can be any image that includes handwritten content.
  • the input image may be an image taken by an image acquisition device (for example, a digital camera or a mobile phone, etc.), and the input image may be a grayscale image or a color image. It should be noted that the input image refers to a form in which the text page to be processed is presented in a visual manner, such as a picture of the text page to be processed.
  • the handwriting area has no fixed shape but depends on the handwritten content: any area containing handwritten content is a handwriting area. A handwriting area may have a regular shape (for example, a rectangle) or an irregular shape.
  • the handwriting area may include a filled area, a handwritten draft, or other handwritten marked areas.
  • the input image also includes a text printing area, and the text printing area includes printed content.
  • the shape of the text printing area may also be a regular shape (for example, a rectangle, etc.) or an irregular shape.
  • for ease of description, each handwriting area and each text printing area is taken to be rectangular by way of example; the present disclosure includes but is not limited to this.
  • the text page to be processed may include test papers, exercises, forms, contracts, and so on.
  • test papers can be test papers for various subjects, such as language arts, mathematics, or foreign languages (for example, English);
  • exercises can likewise be exercise sets for various subjects;
  • forms can be of various types, for example, year-end summary forms, entry forms, price summary forms, application forms, etc.;
  • contracts can include labor contracts, etc.
  • the present disclosure does not specifically limit the types of text pages to be processed.
  • the text page to be processed may be text in paper form or text in electronic form.
  • the printed content can include the stem of the question
  • the handwritten content can include answers filled in by a user (for example, a student or teacher) (here, the answer is whatever the user filled in, not necessarily the correct or standard answer), draft calculations, or other handwritten marks, etc.
  • the printed content can also include various symbols, graphics, etc.; when the text page to be processed is an entry form, the printed content can include "name", "gender", "ethnicity", "work history", etc., and the handwritten content can include handwritten information that the user (for example, an employee) fills into the form, such as the user's name, gender (male or female), ethnicity, and work history.
  • the shape of the text page to be processed may be a rectangle or the like, and the shape of the input image may be a regular shape (for example, a parallelogram, a rectangle, etc.) to facilitate printing.
  • the present disclosure is not limited to this, and in some embodiments, the input image may also have an irregular shape.
  • the size of the input image need not match the size of the text page to be processed, although in some embodiments the two may be the same.
  • the text page to be processed includes printed content and handwritten content, where the printed content may be machine-printed content and the handwritten content is content handwritten by a user; the handwritten content may include handwritten characters.
  • printed content is not limited to text, characters, graphics, and other content entered on an electronic device through an input device; for example, a question stem may itself be handwritten by a user, in which case the printed content is the printed copy of that handwritten question stem.
  • the printed content may include text in various languages, such as Chinese (for example, Chinese characters or pinyin), English, Japanese, French, Korean, etc.
  • the printed content may also include numbers, various symbols (for example, greater-than signs, less-than signs, plus signs, multiplication signs, etc.), and various graphics.
  • the handwritten content may also include text, numbers, various symbols, and various graphics in various languages.
  • the text page to be processed is an exercise sheet, and the area enclosed by the four boundary lines (straight lines 101A-101D) represents the text area 100 to be processed, corresponding to the text page to be processed.
  • the printing area includes a question-stem area, and the printed content may include various question stems, for example, "1. Look at the clock face, write the time", "2. Draw the hour and minute hands on the clock face", "It takes about 1()30() to watch a movie", etc.
  • the printed content can also include various clock graphics, two-dimensional code graphics, etc.
  • the handwriting area includes a handwritten answer area, and the handwritten content can include answers that the user fills in with a pencil.
  • the input image may include multiple handwritten content and multiple printed content.
  • the multiple pieces of handwritten content are spaced apart from each other, as are the multiple pieces of printed content.
  • some of the handwritten content may be the same (that is, the characters are the same, although their specific written shapes differ); some of the printed content may likewise be the same.
  • the present disclosure is not limited to this: the pieces of handwritten content may all differ from one another, and so may the pieces of printed content.
  • step S10 may include: obtaining an original image of the text page to be processed, where the original image includes a text area to be processed; performing edge detection on the original image to determine the text area to be processed in the original image; and normalizing the text area to be processed to obtain the input image.
  • a neural network or an OpenCV-based edge detection algorithm can be used to perform edge detection on the original image to determine the text area to be processed.
  • OpenCV is an open source computer vision library.
  • OpenCV-based edge detection algorithms include Sobel, Scharr, Canny, Laplacian, Prewitt, Marr-Hildreth, and many others.
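As a concrete illustration of the edge detection step, here is a NumPy-only Sobel gradient sketch standing in for the OpenCV calls the text names (in practice one would call `cv2.Sobel` or `cv2.Canny`; the threshold here is an arbitrary assumption):

```python
import numpy as np

def sobel_edges(gray: np.ndarray, thresh: float = 100.0) -> np.ndarray:
    """Crude binary edge map from Sobel gradient magnitude."""
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=np.float64)  # horizontal gradient
    ky = kx.T                                      # vertical gradient
    g = gray.astype(np.float64)
    h, w = g.shape
    gx = np.zeros_like(g)
    gy = np.zeros_like(g)
    # plain sliding-window correlation; borders are left at zero
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = g[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    mag = np.hypot(gx, gy)
    return (mag > thresh).astype(np.uint8) * 255
```

The output plays the role of the "line drawing of gray contours" that the subsequent line-merging steps consume.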
  • performing edge detection on the original image to determine the text area to be processed may include: processing the original image to obtain a line drawing of the gray contours in the original image, where the line drawing includes multiple lines; merging similar lines in the line drawing to obtain multiple initial merged lines, and determining a boundary matrix from the multiple initial merged lines; merging similar lines among the multiple initial merged lines to obtain target lines, with initial merged lines that could not be merged also serving as target lines, so as to obtain multiple target lines; determining multiple reference boundary lines from the multiple target lines according to the boundary matrix; processing the original image with a pre-trained boundary-line region recognition model to obtain multiple boundary-line regions of the text page to be processed in the original image; for each boundary-line region, determining the target boundary line corresponding to that region from the multiple reference boundary lines; and determining the edges of the text area to be processed in the original image from the determined target boundary lines.
  • processing the original image to obtain a line drawing of the gray contour in the original image includes: processing the original image by an edge detection algorithm based on OpenCV to obtain a line drawing of the gray contour in the original image .
  • merging similar lines in the line drawing to obtain multiple initial merged lines includes: obtaining long lines in the line drawing, where a long line is a line whose length exceeds a first preset threshold; obtaining multiple groups of first-type lines from the long lines, where each group of first-type lines includes at least two successively adjacent long lines and the angle between any two adjacent long lines is less than a second preset threshold; and, for each group of first-type lines, sequentially merging the long lines in the group to obtain an initial merged line.
  • the boundary matrix is determined as follows: the multiple initial merged lines and the unmerged long lines are redrawn, and the positions of the pixels of all redrawn lines are mapped into a matrix covering the entire original image; the entries at the positions of these line pixels are set to a first value, and the entries at all other positions are set to a second value, thereby forming the boundary matrix.
  • merging similar lines among the multiple initial merged lines to obtain a target line includes: obtaining multiple groups of second-type lines from the multiple initial merged lines, where each group of second-type lines includes at least two adjacent initial merged lines and the angle between any two adjacent initial merged lines is less than a third preset threshold; and, for each group of second-type lines, sequentially merging the initial merged lines in the group to obtain a target line.
  • the first preset threshold may be 2 pixels in length, and the second preset threshold and the third preset threshold may be 15 degrees. It should be noted that the first preset threshold, the second preset threshold, and the third preset threshold can be set according to actual application requirements.
  • determining multiple reference boundary lines from the multiple target lines includes: for each target line, extending the target line, determining a line matrix from the extended target line, comparing the line matrix with the boundary matrix, and counting the number of pixels on the extended target line that belong to the boundary matrix as the score of that target line. That is, comparing the line matrix with the boundary matrix determines how many pixels fall within the boundary matrix, i.e. how many positions hold the same first value (for example, 255) in both matrices; this count is the score. The line matrix and the boundary matrix have the same size. According to the score of each target line, multiple reference boundary lines are determined from among the target lines; since more than one target line may score best, the several best-scoring target lines are taken as the reference boundary lines.
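The matrix comparison described above reduces to counting positions that hold the first value in both matrices. A NumPy sketch of the boundary-matrix scoring, with a toy rasterizer standing in for the "redraw the line" step (names and values are illustrative):

```python
import numpy as np

FIRST, SECOND = 255, 0  # the first/second values used in the matrices

def draw_segment(shape, p0, p1):
    """Rasterize a segment into a matrix: FIRST on the line, SECOND
    elsewhere. A simple stand-in for redrawing a line as in the text."""
    m = np.full(shape, SECOND, dtype=np.uint8)
    n = max(abs(p1[0] - p0[0]), abs(p1[1] - p0[1])) + 1
    rows = np.linspace(p0[0], p1[0], n).round().astype(int)
    cols = np.linspace(p0[1], p1[1], n).round().astype(int)
    m[rows, cols] = FIRST
    return m

def line_score(line_matrix: np.ndarray, boundary_matrix: np.ndarray) -> int:
    """Score = number of positions holding FIRST in both matrices."""
    return int(np.sum((line_matrix == FIRST) & (boundary_matrix == FIRST)))
```

A candidate line that retraces an actual boundary scores highly; a line crossing empty space scores near zero, which is exactly the ranking the reference-boundary-line selection needs.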
  • the line matrix is determined as follows: the extended target line (or straight line) is redrawn, the positions of the pixels of the redrawn line are mapped into a matrix covering the entire original image, the entries at those pixel positions are set to the first value, and the entries at all other positions are set to the second value, thereby forming the line matrix.
  • determining the target boundary line corresponding to a boundary-line region from the multiple reference boundary lines includes: calculating the slope of each reference boundary line; for each boundary-line region, converting the region into multiple straight lines using the Hough transform and computing the average slope of those lines; and judging whether any reference boundary line has a slope matching the average slope. If such a reference boundary line exists, it is determined to be the target boundary line for that region. If no reference boundary line matches, then for each straight line obtained from the region, the line matrix formed from that line is compared with the boundary matrix and the number of its pixels belonging to the boundary matrix is counted as the line's score; the best-scoring line is determined to be the target boundary line for that region. Here the line matrix and the boundary matrix have the same size. If several lines tie for the best score, the first of them under the sorting algorithm is used as the boundary line.
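The slope-matching decision above can be sketched as a small helper: average the slopes of the Hough-detected lines for a region and pick the first reference boundary line whose slope agrees within a tolerance. The tolerance is an assumption; the patent does not give a value:

```python
import numpy as np

def match_reference_line(region_slopes, reference_slopes, tol=0.1):
    """Return the index of the reference boundary line whose slope
    matches the region's average slope within `tol`, or None.

    region_slopes    : slopes of the Hough-transform lines for one
                       boundary-line region
    reference_slopes : slopes of the candidate reference boundary lines
    tol              : assumed matching tolerance (not specified in text)
    """
    avg = float(np.mean(region_slopes))
    for idx, s in enumerate(reference_slopes):
        if abs(s - avg) <= tol:
            return idx
    return None  # caller falls back to per-line boundary-matrix scoring
```

A `None` result corresponds to the fallback path in the text, where each Hough line is scored against the boundary matrix instead.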
  • the boundary line region recognition model is a neural network-based model.
  • the boundary line region recognition model can be established through machine learning training.
  • the text area to be processed is determined by the multiple target boundary lines (for example, four target boundary lines): every two adjacent target boundary lines intersect at an intersection point, and the multiple intersection points together with the multiple target boundary lines delimit the area of the original image where the text to be processed is located.
  • the text area to be processed may be the exercise area enclosed by the four target boundary lines.
  • the four target boundary lines are all straight lines, and the four target boundary lines are respectively the first target boundary line 101A, the second target boundary line 101B, the third target boundary line 101C, and the fourth target boundary line 101D.
  • the original image may also include a non-text area, for example, an area other than the area enclosed by the four border lines in FIG. 2A.
  • performing normalization processing on the text area to be processed to obtain the input image includes: performing projection transformation on the text area to be processed to obtain a front view of the text area to be processed, and the front view is the input image.
  • projective transformation (perspective transformation) projects an image onto a new viewing plane, and is also known as projective mapping.
  • the true shape of the text to be processed has changed in the original image, that is, geometric distortion has occurred.
  • the shape of the text to be processed was originally a rectangle, but the shape of the text to be processed in the original image has changed, becoming an irregular polygon.
  • applying a projective transformation to the text area to be processed in the original image can transform it from an irregular polygon into a rectangle or parallelogram, etc., that is, correct the area to remove the effect of geometric distortion and obtain a front view of the text to be processed in the original image.
  • the projection transformation can process the pixels in the text area to be processed according to the space projection conversion coordinates to obtain the front view of the text to be processed, which will not be repeated here.
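The projective correction amounts to estimating a 3x3 homography from the four detected corners of the text area to the corners of the desired upright rectangle, then warping pixels through it. A NumPy sketch of the homography estimation via the direct linear transform (in practice `cv2.getPerspectiveTransform` / `cv2.warpPerspective` do this; the function names here are illustrative):

```python
import numpy as np

def homography(src_pts, dst_pts):
    """Estimate the 3x3 projective transform mapping four source points
    to four destination points (direct linear transform).

    src_pts : corners of the detected (distorted) text area
    dst_pts : corners of the target upright rectangle
    """
    A = []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    A = np.asarray(A, dtype=np.float64)
    # h is the null-space vector of A (smallest singular value)
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_h(H, pt):
    """Map one point through the homography (homogeneous divide)."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return (x / w, y / w)
```

Warping the full image then samples each output pixel through the inverse of `H`, which is the "process the pixels according to space projection conversion coordinates" step the text alludes to.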
  • in other embodiments, the text area to be processed may be used without normalization: the area may be cut directly from the original image to obtain a separate image of the text area to be processed, and that image serves as the input image.
  • the original image may be an image directly collected by the image acquisition device, or may be an image obtained after preprocessing the image directly collected by the image acquisition device.
  • the original image can be a grayscale image or a color image.
  • the handwritten content removal method provided in the embodiments of the present disclosure may further include an operation of preprocessing the original image. Preprocessing can eliminate irrelevant information or noise information in the original image, so as to better process the original image.
  • the preprocessing may include, for example, processing such as scaling, cropping, gamma correction, image enhancement, or noise reduction filtering on the image directly collected by the image collection device.
  • the original image can be used as the input image.
  • the original image can be directly recognized to determine the handwritten content in it, and the handwritten content is then removed from the original image to obtain the output image. Alternatively, the original image can be directly recognized to determine the handwritten content in it; the handwritten content is then removed from the original image to obtain an intermediate output image; edge detection is performed on the intermediate output image to determine the text area to be processed in it; and the text area to be processed is normalized to obtain the output image. That is, in some embodiments of the present disclosure, the handwritten content in the original image can be removed first to obtain the intermediate output image, and edge detection and normalization processing are then performed on the intermediate output image.
  • step S11 the input image is recognized to determine the handwritten content in the handwritten area.
  • step S11 may include: using an area recognition model to recognize the input image to obtain a text printing area and a handwriting area.
  • determining the handwritten content does not mean that the specific characters in the handwritten content need to be determined, but the position of the handwritten content in the input image needs to be determined.
  • the handwritten content is located in the handwriting area, so "obtaining the handwriting area" amounts to determining the handwritten content in the handwriting area.
  • a region recognition model refers to a model for region recognition (or division) of an input image.
  • the region recognition model can be implemented by machine learning technology (for example, neural network technology) and run on a general-purpose computing device or a dedicated computing device, for example.
  • the recognition model is a pre-trained model.
  • the neural network applied to the region recognition model may include a deep convolutional neural network, a masked region convolutional neural network (Mask-RCNN), a deep residual network, an attention model, and so on.
  • using an area recognition model to recognize an area of an input image includes recognizing the boundary of the area.
  • when the area is defined by a rectangle whose two adjacent sides are respectively parallel to a horizontal line (parallel to the horizontal direction) and a vertical line (parallel to the vertical direction), the area can be determined by determining at least three vertices of the rectangle.
  • when the area is defined by a parallelogram, the area can likewise be determined by determining at least three vertices of the parallelogram.
  • when the area is defined by a quadrilateral (for example, a trapezoid or an arbitrary irregular quadrilateral), at least one side of which may be inclined with respect to the horizontal line or the vertical line, the area can be determined by determining the four vertices of the quadrilateral.
  • step S12 the handwritten content in the input image is removed to obtain the output image.
  • step S12 includes: labeling the handwriting area to obtain a handwriting area labeling frame; cutting and removing the handwriting area labeling frame from the input image to obtain an output image.
  • the handwriting area marking frame includes the handwriting area, that is, the handwriting area marking frame covers the handwriting area.
  • the handwriting area can be annotated based on neural network.
  • Mask-RCNN may be used to mark the handwritten area to obtain a handwritten area labeling frame.
  • the processing flow of Mask-RCNN may include: inputting an image to be processed (i.e., the input image), which may first undergo a preprocessing operation or may already be a preprocessed image; feeding the image to be processed into a pre-trained neural network to obtain a corresponding feature map; setting a predetermined number of regions of interest (ROI) at each point of the feature map, so as to obtain multiple candidate ROIs; sending these candidate ROIs to a Region Proposal Network (RPN) for binary classification (foreground or background) and bounding-box regression, so as to filter out uninteresting areas and obtain the target ROIs; performing the ROIAlign operation on the target ROIs (that is, first establishing the correspondence between the pixels of the image to be processed and the feature map, and then between the feature map and the fixed-size ROI features); and finally classifying the target ROIs, regressing their bounding boxes, and generating segmentation masks.
  • Regression of the frame of the target ROI can make the obtained handwritten area labeling frame closer to the actual position of the handwritten content.
  • the mask area of the handwriting content can be directly obtained, and then the mask area of the handwriting content can be marked with a circumscribed rectangle.
  • the circumscribed marking frame includes the mask area.
  • the handwriting area marking frame is determined by the center coordinates of the handwriting area marking frame and the length and height of the handwriting area marking frame.
  • Different handwriting content can have different shapes of handwriting area marking boxes.
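To make the "circumscribed rectangle of the mask area" concrete, here is a small NumPy sketch (not from the patent) that computes the axis-aligned labeling frame of a binary handwriting mask:

```python
import numpy as np

def bounding_box(mask):
    """Circumscribed (axis-aligned) labeling frame of a binary mask area:
    returns (x, y, width, height), or None if the mask is empty."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return (int(xs.min()), int(ys.min()),
            int(xs.max() - xs.min() + 1), int(ys.max() - ys.min() + 1))

mask = np.zeros((8, 10), dtype=np.uint8)
mask[2:5, 3:7] = 1            # a 3x4 blob of "handwritten content" pixels
print(bounding_box(mask))     # → (3, 2, 4, 3)
```

The same frame can equivalently be stored as the center coordinates plus length and height, as the text describes.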
  • cutting and removing the handwriting area labeling frame from the input image to obtain the output image includes: cutting and removing the handwriting area labeling frame from the input image to obtain an intermediate output image; binarizing the intermediate output image to obtain the output image.
  • Binarization is the process of setting the gray value of each pixel in the intermediate output image to 0 or 255, that is, giving the entire intermediate output image a clear black-and-white appearance. Binarization greatly reduces the amount of data in the intermediate output image so that the outline of the target is highlighted, and converts the intermediate output image into a grayscale image with strong black-and-white contrast (i.e., the output image). The converted grayscale image has less noise interference, which effectively improves the recognition and printing effect of the content in the output image.
  • all pixels in the area corresponding to the handwriting area labeling frame are removed, that is, the pixels of the area corresponding to the handwriting area labeling frame in the input image are empty, that is, there are no pixels.
  • when the intermediate output image is binarized, areas whose pixels are empty may be left unprocessed; alternatively, such empty areas in the intermediate output image may be filled with a gray value of 255.
  • the final output image is obtained to facilitate the user to print the output image into a paper form.
  • the output image can be printed into a paper form for students to answer.
  • the method of binarization processing can be a threshold method.
  • the threshold method includes: setting a binarization threshold and comparing the pixel value of each pixel in the intermediate output image with it. If the pixel value of a pixel in the intermediate output image is greater than or equal to the binarization threshold, the pixel is set to a gray value of 255; if it is less than the binarization threshold, the pixel is set to a gray value of 0. In this way the intermediate output image is binarized.
  • selection methods for the binarization threshold include the bimodal method, the P-parameter method, the Otsu method (maximum between-class variance method), the maximum entropy method, the iterative method, and so on.
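The threshold method described above can be sketched in a few lines of NumPy (the example image and threshold are arbitrary):

```python
import numpy as np

def binarize(img, threshold):
    """Threshold binarization: gray values >= threshold become 255,
    gray values below it become 0."""
    return np.where(img >= threshold, 255, 0).astype(np.uint8)

gray = np.array([[12, 200], [128, 90]], dtype=np.uint8)
print(binarize(gray, 128).tolist())  # → [[0, 255], [255, 0]]
```

The threshold itself could be chosen by any of the methods listed (bimodal, Otsu, maximum entropy, iterative, etc.).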
  • performing binarization processing on an intermediate output image may include: obtaining the intermediate output image; performing grayscale processing on the intermediate output image to obtain its grayscale image; binarizing the grayscale image to obtain a binarized image of the intermediate output image; using the binarized image as the guide image, applying guided filtering to the grayscale image to obtain a filtered image; determining the high-value pixels in the filtered image according to a second threshold, where the gray values of the high-value pixels are greater than the second threshold; expanding the gray values of the high-value pixels according to a preset expansion coefficient to obtain an expanded image; sharpening the expanded image to obtain a clear image; and adjusting the contrast of the clear image to obtain the output image.
  • gray-scale processing methods include component method, maximum value method, average method, and weighted average method.
  • the preset expansion factor is 1.2-1.5, for example, 1.3.
  • the gray value of each high-value pixel is multiplied by a preset expansion coefficient to expand the gray value of the high-value pixel, thereby obtaining an expanded image with more obvious black and white contrast.
  • the second threshold is the sum of the mean gray value of the filtered image and the standard deviation of the gray value.
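A minimal sketch of the high-value expansion step, assuming (as the text states) that the second threshold is the mean gray value plus the standard deviation of the filtered image and that the expansion coefficient is 1.3; the example pixel values are invented:

```python
import numpy as np

def expand_high_values(filtered, coeff=1.3):
    """Expand the gray values of 'high-value' pixels -- those above the
    second threshold (mean + standard deviation of the filtered image) --
    by the preset expansion coefficient, clipped to the 8-bit range."""
    t2 = filtered.mean() + filtered.std()   # second threshold
    out = filtered.astype(np.float64)
    high = out > t2
    out[high] = np.clip(out[high] * coeff, 0, 255)
    return out.astype(np.uint8)

img = np.array([[100, 100], [100, 250]], dtype=np.uint8)
print(expand_high_values(img).tolist())  # → [[100, 100], [100, 255]]
```

Only the one pixel above the second threshold is stretched, making the black-and-white contrast more obvious.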
  • sharpening the expanded image to obtain a clear image includes: blurring the expanded image with Gaussian filtering to obtain a blurred image; and, according to preset mixing coefficients, mixing the blurred image and the expanded image in proportion to obtain the clear image.
  • f1(i,j) is the gray value of the pixel at (i,j) in the expanded image, f2(i,j) is the gray value of the pixel at (i,j) in the blurred image, and f3(i,j) is the gray value of the pixel at (i,j) in the clear image; k1 is the preset mixing coefficient of the expanded image and k2 is the preset mixing coefficient of the blurred image; the mixing is given by f3(i,j) = k1·f1(i,j) + k2·f2(i,j).
  • the preset mixing coefficient of the expanded image is 1.5, and the preset mixing coefficient of the blurred image is -0.5.
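The sharpening step (a form of unsharp masking) can be sketched as follows. A 3x3 mean blur stands in for the Gaussian filtering — an assumption made for brevity — and the mixing uses k1 = 1.5 and k2 = -0.5 as stated above:

```python
import numpy as np

def blur(img):
    """3x3 mean blur used here as a simple stand-in for Gaussian filtering
    (edge pixels are handled by edge padding)."""
    h, w = img.shape
    p = np.pad(img.astype(np.float64), 1, mode="edge")
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def sharpen(expanded, k1=1.5, k2=-0.5):
    """Clear image f3 = k1*f1 + k2*f2: mix the expanded image f1 with its
    blurred version f2 using the stated coefficients 1.5 and -0.5."""
    f3 = k1 * expanded.astype(np.float64) + k2 * blur(expanded)
    return np.clip(f3, 0, 255).astype(np.uint8)

# A flat region is unchanged (1.5*v - 0.5*v = v); edges gain contrast
print((sharpen(np.full((4, 4), 100, dtype=np.uint8)) == 100).all())  # → True
```

Because the coefficients sum to 1, uniform areas keep their gray value while details (where the image differs from its blur) are amplified.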
  • adjusting the contrast of a clear image includes: adjusting the gray value of each pixel of the clear image according to the average gray value of the clear image.
  • the gray value of each pixel of a clear image can be adjusted by a formula in which f'(i,j) is the gray value of the pixel of the enhanced image at (i,j), f(i,j) is the gray value of the pixel of the clear image at (i,j), and the other quantities are the average gray value of the clear image and the intensity value t.
  • the intensity value may be 0.1-0.5, for example, the intensity value may be 0.2. In practical applications, the intensity value can be selected according to the final black and white enhancement effect to be achieved.
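The contrast-adjustment formula itself did not survive in the text above; as a loudly flagged assumption, a common form consistent with the variables listed (enhanced value f', clear-image value f, the mean gray value, and intensity t) is f'(i,j) = mean + (1 + t)·(f(i,j) − mean), sketched below — this is illustrative, not necessarily the patent's exact formula:

```python
import numpy as np

def adjust_contrast(img, t=0.2):
    """ASSUMED formula: stretch each pixel away from the image's mean gray
    value by a factor (1 + t), where t is the intensity value (0.1-0.5)."""
    m = img.mean()
    out = m + (1.0 + t) * (img.astype(np.float64) - m)
    return np.clip(out, 0, 255).astype(np.uint8)

img = np.array([[100, 200]], dtype=np.uint8)   # mean gray value is 150
print(adjust_contrast(img).tolist())           # → [[90, 210]]
```

With t = 0.2, pixels 50 gray levels from the mean move to 60 levels from it, strengthening the black-and-white enhancement effect.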
  • step S11 may include: using a region recognition model to recognize the input image to obtain a text printing area and a handwriting area; and using a pixel recognition model to perform pixel recognition on the handwriting area to determine the handwritten content pixels corresponding to the handwritten content in the handwriting area.
  • determining the handwritten content pixels means determining the handwritten content in the handwriting area.
  • the pixel recognition model refers to a model that performs pixel recognition on handwritten content in the handwriting area, and the pixel recognition model can recognize handwritten content pixels corresponding to the handwritten content in the handwriting area.
  • the pixel recognition model can also implement region recognition based on a neural network.
  • the neural network applied to the pixel recognition model can include a deep convolutional neural network.
  • step S12 includes: obtaining the replacement pixel value; replacing the pixel value of the handwritten content pixel with the replacement pixel value to remove the handwritten content from the input image to obtain the output image.
  • the replacement pixel value can be the pixel value of any pixel in the handwriting area other than the handwritten content pixels; or it can be the average (for example, the geometric mean) of the pixel values of all pixels in the handwriting area other than the handwritten content pixels; or it can be a fixed value, for example, a gray value of 255.
  • a pixel recognition network can be used to directly extract any pixel in the handwriting area other than the handwritten content pixels to obtain the replacement pixel value; or the pixel recognition network can be used to extract all pixels in the handwriting area other than the handwritten content pixels and then derive the replacement pixel value from their pixel values.
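A sketch of the pixel-replacement removal, using the mean of the non-handwritten pixels in the handwriting area as the replacement value (one of the options listed above; all array values are invented examples):

```python
import numpy as np

def remove_handwriting(region, hw_mask, fill=None):
    """Replace handwritten-content pixels with a replacement pixel value:
    by default the mean of the remaining pixels in the handwriting area
    (a fixed value such as gray 255 can be passed instead)."""
    out = region.copy()
    if fill is None:
        fill = out[~hw_mask].mean()
    out[hw_mask] = fill
    return out

region = np.array([[200, 10], [20, 200]], dtype=np.uint8)   # example values
hw = np.array([[False, True], [True, False]])               # dark strokes
print(remove_handwriting(region, hw).tolist())  # → [[200, 200], [200, 200]]
```

The handwritten strokes are overwritten with the background's average gray value, leaving the printed content untouched.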
  • replacing the pixel value of the handwritten content pixel with the pixel value of the replaced pixel to remove the handwritten content from the input image to obtain the output image includes: replacing the pixel value of the handwritten content pixel with the pixel value of the replaced pixel to remove the handwriting from the input image Content to obtain an intermediate output image; binarize the intermediate output image to obtain an output image.
  • for the region recognition performed by the region recognition model and the binarization processing, reference may be made to the related description above; repeated details are omitted.
  • an output image as shown in FIG. 2B can be obtained, and the output image is a binarized image.
  • as shown in FIG. 2B, all handwritten content in the output image has been removed, resulting in exercises without answers.
  • the model (for example, an arbitrary model such as a region recognition model, a pixel recognition model, etc.) is not just a mathematical model, but a module that can receive input data, perform data processing, and output processing results.
  • the module can be a software module, a hardware module (for example, a hardware neural network) or a combination of software and hardware.
  • the region recognition model and/or the pixel recognition model include codes and programs stored in a memory; the processor can execute the codes and programs to implement some or all of the functions of the region recognition model and/or pixel recognition model described above.
  • the area recognition model and/or the pixel recognition model may include one circuit board or a combination of multiple circuit boards for realizing the functions described above.
  • the circuit board or combination of circuit boards may include: (1) one or more processors; (2) one or more non-transitory computer-readable memories connected to the processors; and (3) firmware stored in the memories and executable by the processors.
  • the method for removing handwritten content further includes a training phase before the input image is acquired.
  • the training phase includes the process of training the region recognition model and the pixel recognition model. It should be noted that the region recognition model and the pixel recognition model can be trained separately, or the region recognition model and the pixel recognition model can be trained at the same time.
  • the region recognition model may be obtained by training a region recognition model to be trained with first sample images marked with a text printing area (for example, at least one marked text printing area) and a handwriting area (for example, at least one marked handwriting area).
  • the training process of the region recognition model to be trained may include: in the training phase, training the region recognition model to be trained using multiple first sample images marked with the text printing region and the handwritten region to obtain the region recognition model.
  • training the region recognition model to be trained using multiple first sample images includes: obtaining a current first sample image from the multiple first sample images; processing the current first sample image with the region recognition model to be trained to obtain a training text printing area and a training handwriting area; calculating a first loss value of the region recognition model to be trained through a first loss function, according to the text printing area and handwriting area marked in the current first sample image and the training text printing area and training handwriting area; and correcting the parameters of the region recognition model to be trained according to the first loss value.
  • when the first loss function meets the first predetermined condition, the trained region recognition model is obtained; when the first loss function does not meet the first predetermined condition, first sample images continue to be input to repeat the above training process.
  • the above-mentioned first predetermined condition corresponds to the convergence of the loss of the first loss function (that is, the first loss value is no longer significantly reduced) when a certain number of first sample images are input.
  • the above-mentioned first predetermined condition is that the number of training times or the training period reaches a predetermined number (for example, the predetermined number may be millions).
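The training-loop logic described above (stop when the loss is no longer significantly reduced, or when a predetermined number of steps is reached) can be sketched generically; `model_step` is a hypothetical callback standing in for one forward/backward pass on a sample batch:

```python
def train(model_step, batches, max_steps=1_000_000, patience=5, tol=1e-4):
    """Keep feeding sample images and correcting parameters until the loss
    no longer decreases significantly (convergence) or the predetermined
    number of training steps is reached."""
    best, stale = float("inf"), 0
    for step, batch in enumerate(batches):
        if step >= max_steps:            # predetermined number of steps hit
            break
        loss = model_step(batch)
        if loss < best - tol:            # loss still significantly decreasing
            best, stale = loss, 0
        else:
            stale += 1
            if stale >= patience:        # convergence condition met
                break
    return best

# Toy run: the 'loss' flattens after three steps, so training stops early
losses = [1.0, 0.5, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3]
print(train(lambda i: losses[i], range(len(losses))))  # → 0.3
```

The same loop structure applies to both the region recognition model (first loss function) and the pixel recognition model (second loss function).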
  • the pixel recognition model can be obtained by training a pixel recognition model to be trained with second sample images marked with handwritten content pixels.
  • the second sample image can be enlarged to accurately label all the handwritten content pixels.
  • machine learning is performed on handwriting features (for example, pixel grayscale features, font features, etc.) to build the pixel recognition model.
  • the training process of the pixel recognition model to be trained may include: in the training phase, training the pixel recognition model to be trained by using multiple second sample images marked with pixels of the handwritten content to obtain the pixel recognition model.
  • using multiple second sample images to train the pixel recognition model to be trained includes: obtaining a current second sample image from the multiple second sample images; processing the current second sample image with the pixel recognition model to be trained to obtain training handwritten content pixels; calculating a second loss value of the pixel recognition model to be trained through a second loss function, according to the handwritten content pixels marked in the current second sample image and the training handwritten content pixels; and correcting the parameters of the pixel recognition model to be trained according to the second loss value. When the second loss function meets the second predetermined condition, the trained pixel recognition model is obtained; when it does not, second sample images continue to be input to repeat the above training process.
  • the above-mentioned second predetermined condition corresponds to the convergence of the loss of the second loss function (that is, the second loss value is no longer significantly reduced) when a certain number of second sample images are input.
  • the above-mentioned second predetermined condition is that the number of training times or the training period reaches a predetermined number (for example, the predetermined number may be millions).
  • the multiple first training sample images and the multiple second training sample images may be the same or different.
  • FIG. 3 is a schematic block diagram of a handwritten content removal device provided by at least one embodiment of the present disclosure.
  • the handwritten content removal device 300 includes a processor 302 and a memory 301. It should be noted that the components of the handwritten content removal device 300 shown in FIG. 3 are only exemplary and not restrictive. According to actual application requirements, the handwritten content removal device 300 may also have other components.
  • the memory 301 is used for non-transitory storage of computer readable instructions; the processor 302 is used for running computer readable instructions, and when the computer readable instructions are executed by the processor 302, the method for removing handwritten content according to any of the above embodiments is executed .
  • the handwritten content removal apparatus 300 provided by the embodiment of the present disclosure may be used to implement the handwritten content removal method provided by the embodiment of the present disclosure, and the handwritten content removal apparatus 300 may be configured on an electronic device.
  • the electronic device may be a personal computer, a mobile terminal, etc.
  • the mobile terminal may be a hardware device with various operating systems such as a mobile phone or a tablet computer.
  • the handwritten content removal device 300 may further include an image acquisition component 303.
  • the image acquisition part 303 is used to acquire a job image, for example, a job image of a paper job.
  • the memory 301 can also be used to store job images; the processor 302 is also used to read and process the job images to obtain input images.
  • the job image may be the original image described in the embodiment of the method for removing handwritten content.
  • the image acquisition component 303 is the image acquisition device described in the embodiment of the method for removing handwritten content.
  • the image acquisition component 303 may be a camera of a smartphone, a camera of a tablet computer, a camera of a personal computer, a lens of a digital camera, or even a webcam.
  • the image acquisition component 303, the memory 301, and the processor 302 may be physically integrated in the same electronic device, and the image acquisition component 303 may be a camera configured on the electronic device.
  • the memory 301 and the processor 302 receive the image sent from the image acquisition part 303 via the internal bus.
  • the image acquisition component 303 and the memory 301/processor 302 may also be physically separate. For example, the memory 301 and the processor 302 may be integrated in a first user's electronic device (for example, the first user's computer or mobile phone), while the image acquisition component 303 is integrated in a second user's electronic device (the first user and the second user being different users).
  • the first user's electronic device and the second user's electronic device may be physically separate and may communicate in a wired or wireless manner.
  • the second user's electronic device may send the original image to the first user's electronic device in a wired or wireless manner, and the first user's electronic device receives the original image and performs subsequent processing on it.
  • the memory 301 and the processor 302 may also be integrated in a cloud server, and the cloud server receives the original image and processes the original image.
  • the handwritten content removal device 300 may further include an output device, and the output device is used to output the output image.
  • the output device may include a display (for example, an organic light emitting diode display, a liquid crystal display), a projector, etc., and the display and the projector may be used to display the output image.
  • the output device may also include a printer, and the printer is used to print the output image.
  • the network may include a wireless network, a wired network, and/or any combination of a wireless network and a wired network.
  • the network may include a local area network, the Internet, a telecommunications network, the Internet of Things (Internet of Things) based on the Internet and/or a telecommunications network, and/or any combination of the above networks, and so on.
  • the wired network may, for example, use twisted pair, coaxial cable, or optical fiber transmission for communication, and the wireless network may use, for example, a 3G/4G/5G mobile communication network, Bluetooth, Zigbee, or WiFi.
  • the present disclosure does not limit the types and functions of the network here.
  • the processor 302 may control other components in the handwritten content removal apparatus 300 to perform desired functions.
  • the processor 302 may be a central processing unit (CPU), a tensor processing unit (TPU), a graphics processing unit (GPU), or another device with data processing capabilities and/or program execution capabilities.
  • the central processing unit (CPU) can be an X86 or ARM architecture.
  • the GPU can be directly integrated on the motherboard alone or built into the north bridge chip of the motherboard.
  • the GPU can also be built into the central processing unit (CPU).
  • the memory 301 may include any combination of one or more computer program products, and the computer program products may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory.
  • Volatile memory may include random access memory (RAM) and/or cache memory (cache), for example.
  • Non-volatile memory may include, for example, read only memory (ROM), hard disk, erasable programmable read only memory (EPROM), portable compact disk read only memory (CD-ROM), USB memory, flash memory, etc.
  • One or more computer-readable instructions may be stored on the computer-readable storage medium, and the processor 302 may run the computer-readable instructions to implement various functions of the handwritten content removal apparatus 300.
  • Various application programs and various data can also be stored in the storage medium.
  • FIG. 4 is a schematic diagram of a storage medium provided by at least one embodiment of the present disclosure.
  • one or more computer-readable instructions 501 may be non-transitory stored on the storage medium 500.
  • the computer-readable instruction 501 is executed by a computer, one or more steps in the handwritten content removal method described above can be executed.
  • the storage medium 500 may be applied to the handwritten content removal device 300 described above, for example, it may include the memory 301 in the handwritten content removal device 300.
  • FIG. 5 is a schematic diagram of a hardware environment provided by at least one embodiment of the present disclosure.
  • the handwritten content removal device provided by the embodiment of the present disclosure can be applied to the Internet system.
  • the computer system provided in FIG. 5 can be used to implement the handwritten content removal device involved in the present disclosure.
  • Such computer systems can include personal computers, notebook computers, tablet computers, mobile phones and any smart devices.
  • the specific system in this embodiment uses a functional block diagram to explain a hardware platform including a user interface.
  • Such a computer system may include a general purpose computer device, or a special purpose computer device. Both types of computer equipment can be used to implement the handwritten content removal apparatus in this embodiment.
  • the computer system can implement any of the presently described components that carry the information required to implement the handwritten content removal method.
  • a computer system can be realized by a computer device through its hardware device, software program, firmware, and their combination.
  • the computer system may include a communication port 250, which is connected to a network that realizes data communication.
  • the communication port 250 may communicate with the image acquisition component 403 described above.
  • the computer system may also include a processor group 220 (ie, the processor described above) for executing program instructions.
  • the processor group 220 may be composed of at least one processor (for example, a CPU).
  • the computer system may include an internal communication bus 210.
  • the computer system may include different forms of program storage units and data storage units (i.e., the memory or storage medium described above), such as a hard disk 270, a read-only memory (ROM) 230, and a random access memory (RAM) 240, which can store various data files used in computer processing and/or communication, as well as the program instructions executed by the processor group 220.
  • the computer system may also include an input/output component 260, which may support input/output data flow between the computer system and other components (for example, the user interface 280, which may be the display described above).
  • the computer system can also send and receive information and data through the communication port 250.
  • the above-mentioned computer system may be used to form a server in an Internet communication system.
  • the server of the Internet communication system can be a server hardware device or a server group. Each server in a server group can be connected through a wired or wireless network.
  • a server group can be centralized, such as a data center.
  • a server group can also be distributed, such as a distributed system.
  • each block in the block diagrams and/or flowcharts of the present disclosure can be implemented by a dedicated hardware-based system that performs the specified functions or actions, or by a combination of dedicated hardware and computer program instructions. It is well known to those skilled in the art that implementation through hardware, implementation through software, and implementation through a combination of software and hardware are all equivalent.


Abstract

A handwritten content removal method, a handwritten content removal device, and a storage medium. The handwritten content removal method includes: acquiring an input image of a text page to be processed, wherein the input image includes a handwriting area and the handwriting area includes handwritten content (S10); recognizing the input image to determine the handwritten content in the handwriting area (S11); and removing the handwritten content from the input image to obtain an output image (S12).

Description

Handwritten Content Removal Method, Handwritten Content Removal Device, Storage Medium

Technical Field

Embodiments of the present disclosure relate to a handwritten content removal method, a handwritten content removal device, and a storage medium.

Background Art

At present, students often forget to bring their homework or lose it. In such cases, a parent or the student may ask classmates about the homework content. However, the classmates may have already completed the homework, so the photos of the homework they take include their own answers, which makes it inconvenient for the student to answer the questions again. In addition, homework photos taken with a mobile phone often contain shadows caused by differences in ambient lighting; if such a photo is printed directly, the printer prints the shadowed parts as well, wasting ink and impairing readability.

Summary of the Invention
At least one embodiment of the present disclosure provides a handwritten content removal method, including: acquiring an input image of a text page to be processed, wherein the input image includes a handwriting area and the handwriting area includes handwritten content; recognizing the input image to determine the handwritten content in the handwriting area; and removing the handwritten content from the input image to obtain an output image.

For example, in the handwritten content removal method provided by an embodiment of the present disclosure, the input image further includes a text printing area, the text printing area includes printed content, and recognizing the input image to determine the handwritten content in the handwriting area includes: recognizing the input image with a region recognition model to obtain the text printing area and the handwriting area.

For example, in the handwritten content removal method provided by an embodiment of the present disclosure, removing the handwritten content from the input image to obtain the output image includes: labeling the handwriting area to obtain a handwriting area labeling frame, wherein the handwriting area labeling frame includes the handwriting area; and cutting and removing the handwriting area labeling frame from the input image to obtain the output image.

For example, in the handwritten content removal method provided by an embodiment of the present disclosure, cutting and removing the handwriting area labeling frame from the input image to obtain the output image includes: cutting and removing the handwriting area labeling frame from the input image to obtain an intermediate output image; and binarizing the intermediate output image to obtain the output image.

For example, in the handwritten content removal method provided by an embodiment of the present disclosure, the input image further includes a text printing area, the text printing area includes printed content, and recognizing the input image to determine the handwritten content in the handwriting area includes: recognizing the input image with a region recognition model to obtain the text printing area and the handwriting area, and performing pixel recognition on the handwriting area with a pixel recognition model to determine the handwritten content pixels corresponding to the handwritten content in the handwriting area.

For example, in the handwritten content removal method provided by an embodiment of the present disclosure, removing the handwritten content from the input image to obtain the output image includes: obtaining a replacement pixel value; and replacing the pixel values of the handwritten content pixels with the replacement pixel value to remove the handwritten content from the input image and obtain the output image.

For example, in the handwritten content removal method provided by an embodiment of the present disclosure, replacing the pixel values of the handwritten content pixels with the replacement pixel value to remove the handwritten content from the input image and obtain the output image includes: replacing the pixel values of the handwritten content pixels with the replacement pixel value to remove the handwritten content from the input image and obtain an intermediate output image; and binarizing the intermediate output image to obtain the output image.

For example, in the handwritten content removal method provided by an embodiment of the present disclosure, the replacement pixel value is the pixel value of any pixel in the handwriting area other than the handwritten content pixels; or, the replacement pixel value is the average of the pixel values of all pixels in the handwriting area other than the handwritten content pixels.

For example, in the handwritten content removal method provided by an embodiment of the present disclosure, the text page to be processed is a test paper or exercise, the printed content includes question stems, and the handwritten content includes answers.

For example, in the handwritten content removal method provided by an embodiment of the present disclosure, the handwritten content includes handwritten characters.

For example, in the handwritten content removal method provided by an embodiment of the present disclosure, acquiring the input image of the text page to be processed includes: acquiring an original image of the text page to be processed, wherein the original image includes a text area to be processed; performing edge detection on the original image to determine the text area to be processed in the original image; and performing rectification processing on the text area to be processed to obtain the input image.

At least one embodiment of the present disclosure provides a handwritten content removal device, including: a memory for non-transitory storage of computer-readable instructions; and a processor for running the computer-readable instructions, wherein the computer-readable instructions, when run by the processor, execute the handwritten content removal method according to any of the above embodiments.

For example, the handwritten content removal device provided by an embodiment of the present disclosure further includes: an image acquisition component for obtaining a job image; the memory is further used to store the job image, and the processor is further used to read and process the job image to obtain the input image.

At least one embodiment of the present disclosure provides a storage medium that non-transitorily stores computer-readable instructions, wherein the computer-readable instructions, when executed by a computer, can execute the handwritten content removal method according to any of the above embodiments.
附图说明
为了更清楚地说明本公开实施例的技术方案,下面将对实施例的附图作简单地介绍,显而易见地,下面描述中的附图仅仅涉及本公开的一些实施例,而非对本公开的限制。
图1为本公开至少一实施例提供的一种手写内容去除方法的示意性流程图;
图2A为本公开至少一实施例提供的一种原始图像的示意图;
图2B为本公开至少一实施例提供的一种输出图像的示意图;
图3为本公开至少一实施例提供的一种手写内容去除装置的示意性框图;
图4为本公开至少一实施例提供的一种存储介质的示意图;
图5为本公开至少一实施例提供的一种硬件环境的示意图。
具体实施方式
为了使得本公开实施例的目的、技术方案和优点更加清楚,下面将结合本公开实施例的附图,对本公开实施例的技术方案进行清楚、完整地描述。显然,所描述的实施例是本公开的一部分实施例,而不是全部的实施例。基于所描述的本公开的实施例,本领域普通技术人员在无需创造性劳动的前提 下所获得的所有其他实施例,都属于本公开保护的范围。
除非另外定义,本公开使用的技术术语或者科学术语应当为本公开所属领域内具有一般技能的人士所理解的通常意义。本公开中使用的“第一”、“第二”以及类似的词语并不表示任何顺序、数量或者重要性,而只是用来区分不同的组成部分。“包括”或者“包含”等类似的词语意指出现该词前面的元件或者物件涵盖出现在该词后面列举的元件或者物件及其等同,而不排除其他元件或者物件。“连接”或者“相连”等类似的词语并非限定于物理的或者机械的连接,而是可以包括电性的连接,不管是直接的还是间接的。“上”、“下”、“左”、“右”等仅用于表示相对位置关系,当被描述对象的绝对位置改变后,则该相对位置关系也可能相应地改变。
为了保持本公开实施例的以下说明清楚且简明,本公开省略了部分已知功能和已知部件的详细说明。
本公开至少一实施例提供一种手写内容去除方法、手写内容去除装置和存储介质。手写内容去除方法包括:获取待处理文本页面的输入图像,其中,输入图像包括手写区域,手写区域包括手写内容;对输入图像进行识别,以确定手写区域中的手写内容;去除输入图像中的手写内容,以得到输出图像。
该手写内容去除方法能够有效去除输入图像中的手写区域内的手写内容,以便于输出新的页面以进行填写。此外,手写内容去除方法还可以将输入图像转化为方便打印的形式,以便于用户可以将输入图像打印为纸质形式进行填写。
下面结合附图对本公开的实施例进行详细说明,但是本公开并不限于这些具体的实施例。
图1为本公开至少一实施例提供的一种手写内容去除方法的示意性流程图;图2A为本公开至少一实施例提供的一种原始图像的示意图;图2B为本公开至少一实施例提供的一种输出图像的示意图。
例如,如图1所示,本公开实施例提供的手写内容去除方法包括步骤S10至S12。
如图1所示,首先,手写内容去除方法在步骤S10,获取待处理文本页面的输入图像。
例如,在步骤S10中,输入图像包括手写区域,手写区域包括手写内容。输入图像可以为任何包括手写内容的图像。
例如,输入图像可以为通过图像采集装置(例如,数码相机或手机等)拍摄的图像,输入图像可以为灰度图像,也可以为彩色图像。需要说明的是,输入图像是指以可视化方式呈现待处理文本页面的形式,例如待处理文本页面的图片等。
例如,手写区域并没有固定的形状,而是根据手写内容而定,也就是说,具有手写内容的区域即为手写区域,手写区域可以为规则形状(例如,矩形等),也可以为不规则的形状。手写区域可以包括填充区域、手写的草稿或者其他手写标记的区域等。
例如,输入图像还包括文本印刷区域,文本印刷区域包括印刷内容。文本印刷区域的形状也可以为规则形状(例如,矩形等),也可以为不规则的形状。在本公开的实施例中,以每个手写区域的形状为矩形和每个文本印刷区域的形状为矩形为例进行说明,本公开包括但不限于此。
例如,待处理文本页面可以包括试卷、习题、表格、合同等。试卷可以为各个学科的试卷,例如,语文、数学、外语(例如,英语等),类似地,习题也可以为各个学科的习题集等;表格可以为各种类型的表格,例如,年终总结表、入职表、价格汇总表、申请表格等;合同可以包括劳动合同等。本公开对待处理文本页面的类型不作具体限制。
例如,待处理文本页面可以为纸质形式的文本,也可以为电子形式的文本。例如,当待处理文本页面为试卷或习题时,印刷内容可以包括题干,手写内容可以包括用户(例如,学生、老师等)填写的答案(此时,答案为用户填写的答案,并不是正确答案或标准答案)、计算草稿或者其他手写标记等。印刷内容还可以包括各种符号、图形等;当待处理文本页面为入职表时,印刷内容可以包括“姓名”、“性别”、“民族”、“工作履历”等内容,而手写内容可以包括用户(例如,职工等)填写在入职表中的用户的姓名、性别(男或女)、民族和工作经历等手写信息。
例如,待处理文本页面的形状可以为矩形等形状,输入图像的形状可以为规则形状(例如,平行四边形、矩形等),以便于进行打印。然而,本公开 不限于此,在一些实施例中,输入图像也可以为不规则形状。
例如,由于图像采集装置采集图像时图像可能会发生变形,从而输入图像的尺寸和待处理文本页面的尺寸不相同,然而本公开不限于此,输入图像的尺寸和待处理文本页面的尺寸也可以相同。
例如,待处理文本页面包括印刷内容和手写内容,印刷内容可以为印刷得到的内容,手写内容为用户手写的内容,手写内容可以包括手写字符。
需要说明的是,“印刷内容”不仅仅指代通过输入装置在电子设备上输入的文字、字符、图形等内容,在一些实施例中,当待处理文本页面为试卷或习题时,题干也可以是由用户手写的,此时,印刷内容则为印刷得到的用户手写的题干。
例如,印刷内容可以包括各种语言的文字,例如,中文(例如,汉字或拼音)、英文、日文、法文、韩文等,此外,印刷内容还可以包括数字、各种符号(例如,大于符号、小于符号、加号、乘号等)和各种图形等。手写内容也可以为包括各种语言的文字、数字、各种符号和各种图形等。
例如,在图2A所示的示例中,待处理文本页面100为习题,由四条边界线(直线101A-101D)围成的区域表示待处理文本页面对应的待处理文本区域100。在该待处理文本区域100中,印刷区域包括题干区域,印刷内容可以包括各个题干,例如,“1.看钟面,写时间”、“2.在钟面上画出时针和分针”、“看一部电影大约要花1()30()”等,印刷内容还可以包括待处理文本区域100中的各个时钟图形、二维码图形等;手写区域包括手写答案区域,手写内容可以包括用户采用铅笔填写的答案,例如,在文本“我们一天在校时间约6(小时)”中,括号中的汉字“小时”为手写内容,在文本“一人唱一首歌要5分钟,9个人同时唱这一首歌要(5)分钟”中,括号中的数字“5”为手写内容。
例如,输入图像可以包括多个手写内容和多个印刷内容。多个手写内容彼此间隔,多个印刷内容也彼此间隔。例如,多个手写内容中的部分手写内容可以相同(即手写内容的字符相同,然而手写内容的具体形状不相同);多个印刷内容中的部分印刷内容也可以相同。本公开不限于此,多个手写内容也可以彼此不相同,多个印刷内容也可以彼此不相同。如图2A所示,“练速度”和“练准确率”为彼此间隔的两个印刷内容,文本“吃午饭需要花20(分钟)”中的“分钟”和“做一套广播体操的时间约10(分钟)”中的“分钟”为彼此间隔的两个手写内容,且该两个手写内容相同。
例如,在一些实施例中,步骤S10可以包括:获取待处理文本页面的原始图像,其中,原始图像包括待处理文本区域;对原始图像进行边缘检测,以确定原始图像中的待处理文本区域;对待处理文本区域进行转正处理,以得到输入图像。
例如,可以采用神经网络或基于OpenCV的边缘检测算法等方法对原始图像进行边缘检测,以确定待处理文本区域。例如,OpenCV为一种开源计算机视觉库,基于OpenCV的边缘检测算法包括Sobel、Scarry、Canny、Laplacian、Prewitt、Marr-Hildresh、scharr等多种算法。
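As a concrete illustration of the gradient-based edge detectors listed above (Sobel and its relatives), here is a minimal, self-contained NumPy sketch. It is illustrative only; in practice one would call OpenCV's `cv2.Sobel` or `cv2.Canny` directly, and the function name and threshold below are our own, not taken from the application.

```python
import numpy as np

def sobel_edges(gray, thresh=100.0):
    """Binary edge map of a 2-D grayscale image from Sobel gradient magnitude."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)  # horizontal gradient kernel
    ky = kx.T                                                          # vertical gradient kernel
    h, w = gray.shape
    padded = np.pad(gray.astype(float), 1, mode="edge")
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(3):            # correlate the image with the two 3x3 kernels
        for j in range(3):
            window = padded[i:i + h, j:j + w]
            gx += kx[i, j] * window
            gy += ky[i, j] * window
    mag = np.hypot(gx, gy)        # gradient magnitude
    return (mag >= thresh).astype(np.uint8)
```

A vertical step between a dark and a bright region is reported along the step and nowhere in the flat areas, which is exactly the "line drawing of the grayscale contour" the method goes on to process.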
例如,对原始图像进行边缘检测,以确定原始图像中的待处理文本区域,可以包括:对原始图像进行处理,获得原始图像中灰度轮廓的线条图,其中,线条图包括多条线条;将线条图中相似的线条进行合并,得到多条初始合并线条,并根据多条初始合并线条确定一边界矩阵;将多条初始合并线条中相似的线条进行合并得到目标线条,并且将未合并的初始合并线条也作为目标线条,由此得到多条目标线条;根据边界矩阵,从多条目标线条中确定多条参考边界线;通过预先训练的边界线区域识别模型对原始图像进行处理,得到原始图像中待处理文本页面的多个边界线区域;针对每一边界线区域,从多条参考边界线中确定与该边界线区域对应的目标边界线;根据确定的多条目标边界线确定原始图像中待处理文本区域的边缘。
例如,在一些实施例中,对原始图像进行处理,获得原始图像中灰度轮廓的线条图,包括:通过基于OpenCV的边缘检测算法对原始图像进行处理,获得原始图像中灰度轮廓的线条图。
例如,将线条图中相似的线条进行合并,得到多条初始合并线条,包括:获取线条图中的长线条,其中,长线条为长度超过第一预设阈值的线条;从长线条中获取多组第一类线条,其中,第一类线条包括至少两个依次相邻的长线条,且任意相邻的两长线条之间的夹角均小于第二预设阈值;针对每一组第一类线条,将该组第一类线条中的各个长线条依次进行合并得到一条初 始合并线条。
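The chain-merging step above can be sketched as follows. The segment representation, helper names, and the simplification that "adjacent" means consecutive in the input list are all assumptions for illustration; the application itself fixes only the minimum-length and angle thresholds.

```python
import numpy as np

def line_angle(line):
    """Orientation of a segment in degrees, folded into [0, 180)."""
    (x1, y1), (x2, y2) = line
    return np.degrees(np.arctan2(y2 - y1, x2 - x1)) % 180.0

def collapse(chain):
    """Collapse a chain of segments into one segment spanning its two farthest endpoints."""
    pts = np.array([p for seg in chain for p in seg], dtype=float)
    i, j = max(((a, b) for a in range(len(pts)) for b in range(len(pts))),
               key=lambda ab: np.linalg.norm(pts[ab[0]] - pts[ab[1]]))
    return (tuple(pts[i]), tuple(pts[j]))

def merge_chain(lines, angle_thresh=15.0):
    """Merge consecutive segments whose angle difference is below angle_thresh."""
    merged, chain = [], [lines[0]]
    for line in lines[1:]:
        if abs(line_angle(line) - line_angle(chain[-1])) < angle_thresh:
            chain.append(line)
        else:
            merged.append(collapse(chain))
            chain = [line]
    merged.append(collapse(chain))
    return merged
```

Two nearly collinear segments merge into one longer segment, while a perpendicular segment starts a new chain.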
例如,边界矩阵按照以下方式确定:对多条初始合并线条以及长线条中未合并的线条进行重新绘制,将重新绘制的所有线条中的像素点的位置信息对应到整个原始图像的矩阵中,将原始图像的矩阵中这些线条的像素点所在位置的值设置第一数值、这些线条以外的像素点所在位置的值设置为第二数值,从而形成边界矩阵。
例如,将多条初始合并线条中相似的线条进行合并得到目标线条,包括:从多条初始合并线条中获取多组第二类线条,其中,第二类线条包括至少两个依次相邻的初始合并线条,且任意相邻的两初始合并线条之间的夹角均小于第三预设阈值;针对每一组第二类线条,将该组第二类线条中的各个初始合并线条依次进行合并得到一条目标线条。
例如,第一预设阈值可以为2个像素的长度,第二预设阈值和第三预设阈值可以为15度。需要说明的是,第一预设阈值、第二预设阈值和第三预设阈值可以根据实际应用需求设置。
例如,根据边界矩阵,从多条目标线条中确定多条参考边界线,包括:针对每一条目标线条,将该目标线条进行延长,根据延长后的该目标线条确定一线条矩阵,然后将该线条矩阵与边界矩阵进行对比,计算延长后的该目标线条上属于边界矩阵的像素点的个数,作为该目标线条的成绩,即将该线条矩阵与边界矩阵进行对比来判断有多少像素点落入到边界矩阵里面,也就是判断两个矩阵中有多少相同位置的像素点具有相同的第一数值例如255,从而计算成绩,其中,线条矩阵与边界矩阵的大小相同;根据各个目标线条的成绩,从多条目标线条中确定多条参考边界线。需要说明的是,成绩最好的目标线条的数量可能为多条,因此,根据各个目标线条的成绩,从多条目标线条中确定成绩最好的多条目标线条作为参考边界线。
例如,线条矩阵按照以下方式确定:对延长后的目标线条或直线进行重新绘制,将重新绘制的线条中的像素点的位置信息对应到整个原始图像的矩阵中,将原始图像的矩阵中线条的像素点所在位置的值设置为第一数值、线条以外的像素点所在位置的值设置为第二数值,从而形成线条矩阵。
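A minimal sketch of the boundary/line matrices and the pixel-overlap score described in the two paragraphs above. The rasterization here is a simple parametric walk rather than a full Bresenham implementation, and all names are illustrative rather than taken from the application.

```python
import numpy as np

def draw_line(h, w, p0, p1, fg=255, bg=0):
    """Rasterize one segment into an h*w matrix: fg on the line, bg elsewhere."""
    m = np.full((h, w), bg, dtype=np.uint8)
    n = int(max(abs(p1[0] - p0[0]), abs(p1[1] - p0[1]))) + 1
    for t in np.linspace(0.0, 1.0, n):
        x = int(round(p0[0] + t * (p1[0] - p0[0])))
        y = int(round(p0[1] + t * (p1[1] - p0[1])))
        m[y, x] = fg
    return m

def line_score(line_matrix, boundary_matrix, fg=255):
    """Score = number of positions holding the fg value in both matrices."""
    return int(np.sum((line_matrix == fg) & (boundary_matrix == fg)))
```

A candidate line that coincides with the boundary scores one point per shared pixel; a candidate elsewhere scores zero, so the highest-scoring candidates become the reference boundary lines.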
例如,针对每一边界线区域,从多条参考边界线中确定与该边界线区域 对应的目标边界线,包括:计算每一条参考边界线的斜率;针对每一个边界线区域,利用霍夫变换将该边界线区域转换为多条直线,并计算多条直线的平均斜率,再判断多条参考边界线中是否存在斜率与平均斜率相匹配的参考边界线,如果存在,将该参考边界线确定为与该边界线区域相对应的目标边界线;如果判断出多条参考边界线中不存在斜率与平均斜率相匹配的参考边界线,则针对该边界线区域转换得到的每一条直线,将该直线形成的线条矩阵与边界矩阵进行对比,计算该直线上属于边界矩阵的像素点的个数,作为该直线的成绩;将成绩最好的直线确定为与该边界线区域相对应的目标边界线;其中,线条矩阵与边界矩阵的大小相同。需要说明的是,如果成绩最好的直线有多条,则根据排序算法将其中最先出现的一条直线作为最佳边界线。
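The slope-matching branch of the logic above can be sketched like this, with segments as endpoint pairs. The tolerance value is an assumption (the application does not fix one), and the fall-back to per-line scoring is left to the boundary-matrix comparison already described.

```python
def slope(line):
    """Slope of a segment; vertical segments map to infinity."""
    (x1, y1), (x2, y2) = line
    return (y2 - y1) / (x2 - x1) if x2 != x1 else float("inf")

def average_slope(lines):
    """Average slope of the (e.g. Hough-derived) lines of one boundary region."""
    s = [slope(l) for l in lines]
    return sum(s) / len(s)

def match_boundary(ref_lines, region_lines, tol=0.1):
    """Return the reference line whose slope best matches the region's average
    slope, or None when no reference line is within tolerance (caller then
    falls back to the boundary-matrix scoring)."""
    target = average_slope(region_lines)
    best = min(ref_lines, key=lambda l: abs(slope(l) - target))
    return best if abs(slope(best) - target) <= tol else None
```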
例如,边界线区域识别模型是基于神经网络的模型。边界线区域识别模型可以通过机器学习训练建立。
例如,对原始图像进行边缘检测之后可以确定多条目标边界线(例如,四条目标边界线),待处理文本区域即由多条目标边界线确定,例如,根据多条目标边界线的多个交点和多条目标边界线即可确定待处理文本区域,每两条相邻的目标边界线相交得到一个交点,多个交点和多个目标边界线共同限定了原始图像中的待处理文本所在的区域。例如,在图2A所示的示例中,待处理文本区域可以为四条目标边界线围成的习题区域。四条目标边界线均为直线,四条目标边界线分别为第一目标边界线101A、第二目标边界线101B、第三目标边界线101C和第四目标边界线101D。除了待处理文本区域之外,原始图像还可以包括非文本区域,例如,图2A中的四条边界线围成的区域之外的区域。
例如,在一些实施例中,对待处理文本区域进行转正处理,以得到输入图像,包括:对待处理文本区域进行投影变换,以得到待处理文本区域的正视图,该正视图即为输入图像。投影变换(Perspective Transformation)是将图片投影到一个新的视平面(Viewing Plane)的技术,也称作投影映射(Projective Mapping)。由于拍照所得的原始图像中,待处理文本的真实形状在原始图像中发生了变化,即产生了几何畸变。如图2A所示的原始图像,待处理文本(即习题)的形状本来为矩形,但是原始图像中的待处理文本的形状发生了变化, 变为了不规则的多边形。因此,对原始图像中的待处理文本区域进行投影变换,可以将待处理文本区域由不规则的多边形变换为矩形或平行四边形等,即将待处理文本区域进行转正,从而去除几何畸变的影响,得到原始图像中待处理文本的正视图。投影变换可以根据空间投影换算坐标来处理待处理文本区域中的像素以获取待处理文本的正视图,在此不做赘述。
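In OpenCV this rectification is typically done with `cv2.getPerspectiveTransform` followed by `cv2.warpPerspective`. The sketch below instead solves the 3x3 homography directly with NumPy (direct linear transform over four point correspondences) to show the underlying math; it is an illustration, not the application's implementation.

```python
import numpy as np

def homography(src, dst):
    """3x3 projective transform mapping four src points onto four dst points."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # The homography is the null vector of A, i.e. the last right-singular vector.
    _, _, vt = np.linalg.svd(np.array(A, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def project(H, pt):
    """Apply the homography to one point (homogeneous divide)."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return (x / w, y / w)
```

Mapping the four corners of the skewed text quadrilateral onto the corners of an axis-aligned rectangle "straightens" the region, which is exactly the rectification the paragraph describes.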
需要说明的是,在另一些实施例中,也可以不对待处理文本区域进行转正处理,而直接从原始图像中切割待处理文本区域,以得到单独的待处理文本区域的图像,该单独的待处理文本区域的图像即为输入图像。
例如,原始图像可以为图像采集装置直接采集到的图像,也可以是对图像采集装置直接采集到的图像进行预处理之后获得的图像。原始图像可以为灰度图像,也可以为彩色图像。例如,为了避免原始图像的数据质量、数据不均衡等对于手写内容去除的影响,在处理原始图像前,本公开实施例提供的手写内容去除方法还可以包括对原始图像进行预处理的操作。预处理可以消除原始图像中的无关信息或噪声信息,以便于更好地对原始图像进行处理。预处理例如可以包括对图像采集装置直接采集到的图像进行缩放、剪裁、伽玛(Gamma)校正、图像增强或降噪滤波等处理。
值得注意的是,在一些实施例中,原始图像即可以作为输入图像,在此情况下,例如,可以直接对原始图像进行识别以确定原始图像中的手写内容;然后去除原始图像中的手写内容,以得到输出图像;或者,可以直接对原始图像进行识别以确定原始图像中的手写内容;然后去除原始图像中的手写内容,以得到中间输出图像;然后对中间输出图像进行边缘检测,以确定中间输出图像中的待处理文本区域;对待处理文本区域进行转正处理,以得到输出图像,也就是说,在本公开的一些实施例中,可以先去除原始图像中的手写内容,以得到中间输出图像,然后再对中间输出图像进行边缘检测和转正处理。
接下来,如图1所示,在步骤S11,对输入图像进行识别,以确定手写区域中的手写内容。
例如,在一些实施例中,步骤S11可以包括:利用区域识别模型对输入图像进行识别,以得到文本印刷区域和手写区域。需要说明的是,在本公开 中,“确定手写内容”并不表示需要确定手写内容中的具体字符,而是需要确定手写内容在输入图像中的位置,例如,在该实施例中,由于手写内容位于手写区域内,从而“得到手写区域”即表示确定手写区域中的手写内容。
例如,区域识别模型表示对输入图像进行区域识别(或划分)的模型,区域识别模型可以采用机器学习技术(例如,神经网络技术)实现并且例如运行在通用计算装置或专用计算装置上,该区域识别模型为预先训练好的模型。例如,应用于区域识别模型的神经网络可以包括深度卷积神经网络、掩膜区域卷积神经网络(Mask-RCNN)、深度残差网络、注意力模型等。
例如,利用区域识别模型对输入图像的区域(例如,文本印刷区域或手写区域)进行识别包括识别出区域的边界。例如,在区域以矩形来界定,且该矩形的相邻两条边分别平行于与水平方向平行的水平线和与竖直方向平行的竖直线的情况下,可以通过确定该矩形的至少三个顶点来确定该区域;在区域以平行四边形来界定的情况下,可以通过确定该平行四边形的至少三个顶点来确定该区域;在区域以四边形(例如,梯形、任意不规则的四边形等)来界定,且该四边形的至少一条边界也可以相对于水平线或竖直线倾斜,可以通过确定该四边形的四个顶点来确定该区域。
接下来,如图1所示,在步骤S12,去除输入图像中的手写内容,以得到输出图像。
例如,步骤S12包括:对手写区域进行标注,以得到手写区域标注框;从输入图像中切割去除手写区域标注框,以得到输出图像。
例如,手写区域标注框包括手写区域,也就是说,手写区域标注框覆盖手写区域。
例如,可以基于神经网络实现对手写区域进行标注。在一些实施例中,可以利用Mask-RCNN对手写区域进行标注,以得到手写区域标注框,Mask-RCNN的处理流程可以包括:输入的一幅待处理图像(即输入图像),然后对待处理图像进行预处理操作,或者该待处理图像为预处理后的图像;然后,将该待处理图像输入到一个预训练好的神经网络中获得对应的特征图像(feature map);接着,对特征图像中的每一点设定预定数量的感兴趣区域(region of interest,ROI),从而获得多个候选ROI;接着,将这些候选的ROI 送入区域候选网络(Region Proposal Networks,RPN)进行二值分类(前景或背景)和边框回归(Bounding-box regression),过滤掉不感兴趣区域,从而得到目标ROI;接着,对目标ROI进行ROIAlign操作(即先将待处理图像和特征图像的像素对应起来,然后将特征图像和固定的特征对应起来);最后,对这些目标ROI进行分类、边框回归和掩膜区域生成,从而得到手写区域的手写区域标注框。对目标ROI进行边框回归可以使得到的手写区域标注框更加贴近手写内容的实际位置。例如,当将Mask-RCNN应用到手写内容识别时,可以直接获取到手写内容的掩膜区域,然后利用外接矩形标注框标注该手写内容的掩膜区域。也就是说,外接标注框包括该掩膜区域。
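The final annotation step above, taking the circumscribing rectangle of the predicted mask region, reduces to a few lines of NumPy. The Mask-RCNN network itself is not reproduced here; the sketch assumes a boolean mask of handwritten pixels as its input.

```python
import numpy as np

def mask_to_box(mask):
    """Smallest axis-aligned box (x0, y0, x1, y1) covering all True pixels of a mask."""
    ys, xs = np.nonzero(mask)
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())
```

The resulting box is the "handwritten area annotation frame": it covers every mask pixel, so cutting it out of the input image removes the whole handwritten region.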
例如,手写区域标注框通过手写区域标注框的中心坐标以及手写区域标注框的长度和高度确定。不同手写内容可以具有不同形状的手写区域标注框。
例如,从输入图像中切割去除手写区域标注框,以得到输出图像,包括:从输入图像中切割去除手写区域标注框,以得到中间输出图像;对中间输出图像进行二值化处理,以得到输出图像。
二值化处理是将中间输出图像上的像素点的灰度值设置为0或255,也就是使得整个中间输出图像呈现出明显的黑白效果的过程,二值化处理可以使中间输出图像中数据量大为减少,从而能凸显出目标的轮廓。二值化处理可以将中间输出图像转换为黑白对比较为明显的灰度图像(即输出图像),转换后的灰度图像的噪声干扰较少,可以有效提高输出图像中的内容的辨识度和打印效果。
例如,从输入图像中切割去除手写区域标注框之后,手写区域标注框对应的区域内的所有像素均被去除,即输入图像中的手写区域标注框对应的区域的像素为空,即没有像素。在对中间输出图像进行二值化处理时,中间输出图像中的像素为空的区域不进行任何处理;或者,在对中间输出图像进行二值化处理时,也可以将中间输出图像中的像素为空的区域利用灰度值255进行填充。
例如,中间输出图像进行二值化处理后,最终得到输出图像可以方便用户将该输出图像打印成为纸质形式。例如,当输入图像为习题时,可以将输出图像打印成为纸质形式以供学生作答。
例如,二值化处理的方法可以是阈值法,阈值法包括:设置二值化阈值,将中间输出图像中的每个像素的像素值与二值化阈值进行比较,若中间输出图像中的某像素的像素值大于或等于二值化阈值,则将该像素的像素值设置为255灰阶,若中间输出图像中的某像素的像素值小于二值化阈值,则将该像素的像素值设置为0灰阶,由此即可实现对中间输出图像进行二值化处理。
例如,二值化阈值的选取方法包括双峰法、P参数法、大津法(OTSU法)、最大熵值法、迭代法等。
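The threshold method above, together with one of the listed threshold-selection methods (Otsu's between-class-variance criterion), can be sketched as follows. This is a plain NumPy illustration, not the application's implementation.

```python
import numpy as np

def binarize(gray, thresh):
    """Fixed-threshold binarization: pixels >= thresh become 255, the rest 0."""
    return np.where(gray >= thresh, 255, 0).astype(np.uint8)

def otsu_threshold(gray):
    """Otsu's method: pick the threshold maximizing between-class variance."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()                       # normalized histogram
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()       # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        m0 = (np.arange(t) * p[:t]).sum() / w0  # class means
        m1 = (np.arange(t, 256) * p[t:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2          # between-class variance
        if var > best_var:
            best_t, best_var = t, var
    return best_t
```

For a page image with dark ink on a bright background the selected threshold falls between the two modes, so binarization yields the high-contrast black-and-white result the text describes.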
例如,在一些实施例中,对中间输出图像进行二值化处理包括:获取中间输出图像;对中间输出图像进行灰度化处理,得到中间输出图像的灰度图像;根据第一阈值,对灰度图像进行二值化处理,得到中间输出图像的二值化图像;以二值化图像为导向图,对灰度图像进行导向滤波处理,得到滤波图像;根据第二阈值,确定滤波图像中的高值像素点,高值像素点的灰度值大于第二阈值;根据预设扩充系数,对高值像素点的灰度值进行扩充处理,得到扩充图像;对扩充图像进行清晰化处理,得到清晰图像;对清晰图像的对比度进行调整,以得到输出图像。
例如,灰度化处理的方法包括分量法、最大值法、平均值法和加权平均法等。
例如,预设扩充系数为1.2-1.5,例如,1.3。将每个高值像素点的灰度值都乘以预设扩充系数,以对高值像素点的灰度值进行扩充处理,从而得到黑白对比更加明显的扩充图像。
例如,第二阈值为滤波图像的灰度均值与灰度值的标准差之和。
例如,对扩充图像进行清晰化处理,得到清晰图像,包括:采用高斯滤波对扩充图像进行模糊化处理,得到模糊图像;根据预设混合系数,将模糊图像和扩充图像按比例进行混合,得到清晰图像。
例如,假设f₁(i,j)为扩充图像在(i,j)处的像素点的灰度值,f₂(i,j)为模糊图像在(i,j)处的像素点的灰度值,f₃(i,j)为清晰图像在(i,j)处的像素点的灰度值,k₁为扩充图像的预设混合系数,k₂为模糊图像的预设混合系数,则f₁(i,j)、f₂(i,j)、f₃(i,j)满足如下关系:
f₃(i,j) = k₁·f₁(i,j) + k₂·f₂(i,j)。
例如,扩充图像的预设混合系数为1.5,模糊图像的预设混合系数为-0.5。
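The blur-and-mix sharpening above (k1 = 1.5 for the expanded image, k2 = -0.5 for its blurred copy) is classic unsharp masking. The sketch below substitutes a box blur for the Gaussian blur named in the text purely to stay dependency-free; that substitution, and all names, are our assumptions. Note that with k1 + k2 = 1, flat regions are left unchanged.

```python
import numpy as np

def box_blur(img, k=3):
    """Simple box blur, standing in for the Gaussian blur in the text."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def sharpen(expanded, k1=1.5, k2=-0.5):
    """f3 = k1*f1 + k2*f2, where f2 is a blurred copy of f1 (unsharp masking)."""
    blurred = box_blur(expanded)
    return np.clip(k1 * expanded + k2 * blurred, 0, 255)
```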
例如,对清晰图像的对比度进行调整包括:根据清晰图像的灰度均值,对清晰图像的每个像素点的灰度值进行调整。
例如,可以通过如下公式对清晰图像的每个像素点的灰度值进行调整:
f′(i,j) = f̄ + (1+t)·(f(i,j) − f̄)
其中,f′(i,j)为增强图像在(i,j)处的像素点的灰度值,f̄为清晰图像的灰度均值,f(i,j)为清晰图像在(i,j)处的像素点的灰度值,t为强度值。例如,强度值可为0.1-0.5,例如,强度值可为0.2。在实际应用中,强度值可根据最终所要达到的黑白增强效果进行选取。
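In the publication the adjustment formula appears only as an embedded image. Assuming it is the usual mean-centred contrast stretch consistent with the surrounding definitions (the enhanced value moves away from the gray mean f̄ by a factor of 1 + t, with t the intensity value), a NumPy sketch would be:

```python
import numpy as np

def adjust_contrast(img, t=0.2):
    """Stretch gray values away from the image mean by a factor (1 + t)."""
    mean = img.mean()
    return np.clip(mean + (1.0 + t) * (img - mean), 0, 255)
```

Values below the mean get darker and values above it get brighter, which matches the stated goal of strengthening the black-and-white contrast of the output.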
例如,在另一些实施例中,步骤S11可以包括:利用区域识别模型对输入图像进行识别,以得到文本印刷区域和手写区域;利用像素识别模型对手写区域进行像素识别,以确定在手写区域中手写内容对应的手写内容像素。在该实施例中,由于手写内容像素即可表示手写内容的位置,从而“确定手写内容像素”即表示确定手写区域中的手写内容。
例如,像素识别模型表示对手写区域内的手写内容进行像素识别的模型,像素识别模型可以识别手写区域内的手写内容对应的手写内容像素。像素识别模型也可以基于神经网络而实现区域识别,例如,应用于像素识别模型的神经网络可以包括深度卷积神经网络等。
例如,如图1所示,步骤S12包括:获取替换像素值;利用替换像素值替换手写内容像素的像素值,以从输入图像去除手写内容而得到输出图像。
例如,替换像素值可以为手写区域中除了手写内容像素之外的任意一个像素的像素值;或者,替换像素值为手写区域中除了手写内容像素之外的所有像素的像素值的平均值(例如,几何平均值);或者,替换像素值也可以为固定值,例如,255灰阶值。需要说明的是,可以利用像素识别网络直接提取手写区域中的除了手写内容像素之外的任意一个像素,以得到替换像素值;或者,可以利用像素识别网络提取手写区域中除了手写内容像素之外的所有像素,然后基于所有像素的像素值得到替换像素值。
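Replacing the handwritten-content pixels with the mean of the remaining pixels of the handwritten area, one of the replacement-value choices listed above, can be sketched as follows; the function name and the boolean-mask representation are assumptions for illustration.

```python
import numpy as np

def remove_handwriting(region, handwriting_mask):
    """Overwrite handwritten pixels with the mean of the non-handwritten pixels."""
    out = region.astype(float).copy()
    background = region[~handwriting_mask]       # pixels outside the handwriting
    out[handwriting_mask] = background.mean()    # the "replacement pixel value"
    return out
```

After replacement the former handwriting blends into the surrounding background, leaving only the printed content for binarization and output.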
例如,利用替换像素的像素值替换手写内容像素的像素值,以从输入图像去除手写内容以得到输出图像,包括:利用替换像素的像素值替换手写内容像素的像素值,以从输入图像去除手写内容而得到中间输出图像;对中间输出图像进行二值化处理,以得到输出图像。
需要说明的是,区域识别模型进行区域识别、二值化处理等的说明可以参考上述的相关描述,重复之处不再赘述。
例如,对图2A所示的原始图像进行手写内容去除处理后,可以得到如图2B所示的输出图像,该输出图像为二值化后的图像。如图2B所示,在该输出图像中,所有手写内容均被去除,从而得到没有答案的习题。
需要说明的是,在本公开的实施例中,模型(例如,区域识别模型、像素识别模型等任意模型)不是仅仅的数学模型,而是可以接收输入数据、执行数据处理、输出处理结果的模块,该模块可以是软件模块、硬件模块(例如硬件神经网络)或采用软硬结合的方式实现。在一些实施例中,区域识别模型和/或像素识别模型包括存储在存储器中的代码和程序;处理器可以执行该代码和程序以实现如上所述的区域识别模型和/或像素识别模型的一些功能或全部功能。在又一些实施例中,区域识别模型和/或像素识别模型可以包括一个电路板或多个电路板的组合,用于实现如上所述的功能。在一些实施例中,该一个电路板或多个电路板的组合可以包括:(1)一个或多个处理器;(2)与处理器相连接的一个或多个非暂时的计算机可读的存储器;以及(3)处理器可执行的存储在存储器中的固件。
应了解,在本公开的实施例中,在获取输入图像前,手写内容去除方法还包括:训练阶段。训练阶段包括对区域识别模型和像素识别模型进行训练的过程。需要说明的是,区域识别模型和像素识别模型可以被分别训练,或者,可以同时对区域识别模型和像素识别模型进行训练。
例如,可以通过标注有文本印刷区域(例如,标注出的文本印刷区域的数量至少为一个)和手写区域(例如,标注出的手写区域的数量至少为一个)的第一样本图像对待训练区域识别模型进行训练以得到的区域识别模型。例如,待训练区域识别模型的训练过程可以包括:在训练阶段,利用标注有文本印刷区域和手写区域的多张第一样本图像训练待训练的区域识别模型,以得到区域识别模型。
例如,利用多张第一样本图像训练待训练区域识别模型包括:从多张第一样本图像获取当前第一样本图像;利用待训练区域识别模型对当前第一样本图像进行处理,以得到训练文本印刷区域和训练手写区域;根据当前第一样本图像中标注出的文本印刷区域和手写区域以及训练文本印刷区域和训练手写区域,通过第一损失函数计算待训练区域识别模型的第一损失值;根据第一损失值对待训练区域识别模型的参数进行修正,在第一损失函数满足第一预定条件时,得到训练完成的区域识别模型,在第一损失函数不满足第一预定条件时,继续输入第一样本图像以重复执行上述训练过程。
例如,在一个示例中,上述第一预定条件对应于在输入一定数量的第一样本图像的情况下,第一损失函数的损失收敛(即第一损失值不再显著减小)。例如,在另一个示例中,上述第一预定条件为训练次数或训练周期达到预定数目(例如,该预定数目可以为上百万)。
例如,可以通过标注有手写内容像素的第二样本图像对待训练像素识别模型进行训练以得到的像素识别模型。在标注第二样本图像中的手写内容像素时,可以对第二样本图像进行放大从而将全部手写内容像素准确地标注出来。根据各种手写特征(例如,像素灰度特征、字体特征等)进行机器学习以建立像素识别模型。
例如,待训练像素识别模型的训练过程可以包括:在训练阶段,利用标注有手写内容像素的多张第二样本图像训练待训练的像素识别模型,以得到像素识别模型。
例如,利用多张第二样本图像训练待训练区域识别模型包括:从多张第二样本图像获取当前第二样本图像;利用待训练像素识别模型对当前第二样本图像进行处理,以得到训练手写内容像素;根据当前第二样本图像中标注出的手写内容像素和训练手写内容像素,通过第二损失函数计算待训练像素识别模型的第二损失值;根据第二损失值对待训练像素识别模型的参数进行修正,在第二损失函数满足第二预定条件时,得到训练完成的像素识别模型,在第二损失函数不满足第二预定条件时,继续输入第二样本图像以重复执行上述训练过程。
例如,在一个示例中,上述第二预定条件对应于在输入一定数量的第二样本图像的情况下,第二损失函数的损失收敛(即第二损失值不再显著减小)。 例如,在另一个示例中,上述第二预定条件为训练次数或训练周期达到预定数目(例如,该预定数目可以为上百万)。
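Both training procedures above share the same shape: iterate, compute a loss over the samples, update the model, and stop either when the loss no longer decreases significantly (convergence) or when a fixed training budget is reached. A schematic convergence loop follows; the tolerance and patience values are illustrative, not taken from the application.

```python
import numpy as np

def train(model_step, samples, max_epochs=100, patience=3, tol=1e-4):
    """Run model_step over the samples each epoch; record each improved loss
    and stop after `patience` consecutive epochs without significant decrease."""
    history, stale = [], 0
    for _ in range(max_epochs):
        loss = np.mean([model_step(s) for s in samples])
        if history and history[-1] - loss < tol:   # no significant improvement
            stale += 1
            if stale >= patience:                  # loss has converged
                break
        else:
            stale = 0
            history.append(loss)
    return history
```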
本领域技术人员可以理解,多张第一训练样本图像和多张第二训练样本图像可以是相同的也可以是不同的。
本公开至少一实施例还提供一种手写内容去除装置,图3为本公开至少一实施例提供的一种手写内容去除装置的示意性框图。
如图3所示,该手写内容去除装置300包括处理器302和存储器301。应当注意,图3所示的手写内容去除装置300的组件只是示例性的,而非限制性的,根据实际应用需要,该手写内容去除装置300还可以具有其他组件。
例如,存储器301用于非暂时性存储计算机可读指令;处理器302用于运行计算机可读指令,计算机可读指令被处理器302运行时执行根据上述任一实施例所述的手写内容去除方法。
本公开实施例提供的手写内容去除装置300可以用于实现本公开实施例提供的手写内容去除方法,该手写内容去除装置300可被配置于电子设备上。该电子设备可以是个人计算机、移动终端等,该移动终端可以是手机、平板电脑等具有各种操作系统的硬件设备。
例如,如图3所示,手写内容去除装置300还可以包括图像获取部件303。图像获取部件303用于获得作业图像,例如,获得纸质作业的作业图像。存储器301还可以用于存储作业图像;处理器302还用于读取并处理作业图像以得到输入图像。需要说明的是,作业图像可以为上述手写内容去除方法的实施例中描述的原始图像。
例如,图像获取部件303即为上述手写内容去除方法的实施例中描述的图像采集装置,例如,图像获取部件303可以是智能手机的摄像头、平板电脑的摄像头、个人计算机的摄像头、数码照相机的镜头、或者甚至可以是网络摄像头。
例如,在图3所示的实施例中,图像获取部件303、存储器301和处理器302等物理上可以集成在同一电子设备内部,图像获取部件303可以为电子设备上配置的摄像头。存储器301和处理器302经由内部总线接收从图像获取部件303发送的图像。又例如,图像获取部件303和存储器301/处理器302 在物理位置上也可以分离配置,存储器301和处理器302可以集成在第一用户的电子设备(例如,第一用户的电脑、手机等)中,图像获取部件303可以集成在第二用户(第一用户和第二用户不相同)的电子设备中,第一用户的电子设备和第二用户的电子设备在物理位置上可以分离配置,且第一用户的电子设备和第二用户的电子设备之间可以通过有线或者无线方式进行通信。也就是说,由第二用户的电子设备上的图像获取部件303采集原始图像之后,第二用户的电子设备可以经由有线或者无线方式将该原始图像发送至第一用户的电子设备,第一用户的电子设备接收该原始图像并对该原始图像进行后续处理。例如,存储器301和处理器302也可以集成在云端服务器中,云端服务器接收原始图像并对原始图像进行处理。
例如,手写内容去除装置300还可以包括输出装置,输出装置用于输出该输出图像。例如,输出装置可以包括显示器(例如,有机发光二极管显示器、液晶显示器)、投影仪等,显示器和投影仪可以用于显示输出图像。需要说明的是,输出装置还可以包括打印机,打印机用于将输出图像进行打印。
例如,处理器302和存储器301等组件之间可以通过网络连接进行通信。网络可以包括无线网络、有线网络、和/或无线网络和有线网络的任意组合。网络可以包括局域网、互联网、电信网、基于互联网和/或电信网的物联网(Internet of Things)、和/或以上网络的任意组合等。有线网络例如可以采用双绞线、同轴电缆或光纤传输等方式进行通信,无线网络例如可以采用3G/4G/5G移动通信网络、蓝牙、Zigbee或者WiFi等通信方式。本公开对网络的类型和功能在此不作限制。
例如,处理器302可以控制手写内容去除装置300中的其它组件以执行期望的功能。处理器302可以是中央处理单元(CPU)、张量处理器(TPU)或者图形处理器(GPU)等具有数据处理能力和/或程序执行能力的器件。中央处理单元(CPU)可以为X86或ARM架构等。GPU可以单独地直接集成到主板上,或者内置于主板的北桥芯片中。GPU也可以内置于中央处理器(CPU)上。
例如,存储器301可以包括一个或多个计算机程序产品的任意组合,计算机程序产品可以包括各种形式的计算机可读存储介质,例如易失性存储器 和/或非易失性存储器。易失性存储器例如可以包括随机存取存储器(RAM)和/或高速缓冲存储器(cache)等。非易失性存储器例如可以包括只读存储器(ROM)、硬盘、可擦除可编程只读存储器(EPROM)、便携式紧致盘只读存储器(CD-ROM)、USB存储器、闪存等。在所述计算机可读存储介质上可以存储一个或多个计算机可读指令,处理器302可以运行所述计算机可读指令,以实现手写内容去除装置300的各种功能。在存储介质中还可以存储各种应用程序和各种数据等。
关于手写内容去除装置300执行手写内容去除方法的过程的详细说明可以参考手写内容去除方法的实施例中的相关描述,重复之处不再赘述。
本公开至少一实施例还提供一种存储介质,图4为本公开至少一实施例提供的一种存储介质的示意图。例如,如图4所示,在存储介质500上可以非暂时性地存储一个或多个计算机可读指令501。例如,当所述计算机可读指令501由计算机执行时可以执行根据上文所述的手写内容去除方法中的一个或多个步骤。
例如,该存储介质500可以应用于上述手写内容去除装置300中,例如,其可以包括手写内容去除装置300中的存储器301。
例如,关于存储介质500的说明可以参考手写内容去除装置300的实施例中对于存储器的描述,重复之处不再赘述。
图5为本公开至少一实施例提供的一种硬件环境的示意图。本公开的实施例提供的手写内容去除装置可以应用在互联网系统。
利用图5中提供的计算机系统可以实现本公开中涉及的手写内容去除装置。这类计算机系统可以包括个人电脑、笔记本电脑、平板电脑、手机及任何智能设备。本实施例中的特定系统利用功能框图解释了一个包含用户界面的硬件平台。这种计算机系统可以包括一个通用目的的计算机设备,或一个有特定目的的计算机设备。两种计算机设备都可以被用于实现本实施例中的手写内容去除装置。计算机系统可以实施当前描述的实现手写内容去除方法所需要的信息的任何组件。例如:计算机系统能够被计算机设备通过其硬件设备、软件程序、固件以及它们的组合所实现。为了方便起见,图5中只绘制了一台计算机设备,但是本实施例所描述的实现手写内容去除方法所需要 的信息的相关计算机功能是可以以分布的方式、由一组相似的平台所实施的,分散计算机系统的处理负荷。
如图5所示,计算机系统可以包括通信端口250,与之相连的是实现数据通信的网络,例如,通信端口250可以与上面描述的图像获取部件303进行通信。计算机系统还可以包括一个处理器组220(即上面描述的处理器),用于执行程序指令。处理器组220可以由至少一个处理器(例如,CPU)组成。计算机系统可以包括一个内部通信总线210。计算机系统可以包括不同形式的程序储存单元以及数据储存单元(即上面描述的存储器或存储介质),例如硬盘270、只读存储器(ROM)230、随机存取存储器(RAM)240,能够用于存储计算机处理和/或通信使用的各种数据文件,以及处理器组220所执行的可能的程序指令。计算机系统还可以包括一个输入/输出组件260,输入/输出组件260可以支持计算机系统与其他组件(例如,用户界面280,用户界面280可以为上面描述的显示器)之间的输入/输出数据流。计算机系统也可以通过通信端口250发送和接收信息及数据。
在一些实施例中,上述计算机系统可以用于组成互联网通信系统中的服务器。互联网通信系统的服务器可以是一个服务器硬件设备,或一个服务器群组。一个服务器群组内的各个服务器可以通过有线的或无线的网络进行连接。一个服务器群组可以是集中式的,例如数据中心。一个服务器群组也可以是分布式的,例如一个分布式系统。
需要说明的是,本公开的框图和/或流程图中的每个方框、以及框图和/或流程图中的方框的组合,可以用执行规定的功能或动作的专用的基于硬件的系统来实现,或者可以用专用硬件与计算机程序指令的组合来实现。对于本领域技术人员来说公知的是,通过硬件方式实现、通过软件方式实现以及通过软件和硬件结合的方式实现都是等价的。
对于本公开,还有以下几点需要说明:
(1)本公开实施例附图只涉及到与本公开实施例涉及到的结构,其他结构可参考通常设计。
(2)为了清晰起见,在用于描述本公开的实施例的附图中,层或结构的厚度和尺寸被放大。可以理解,当诸如层、膜、区域或基板之类的元件被称作位于另一元件“上”或“下”时,该元件可以“直接”位于另一元件“上”或“下”,或者可以存在中间元件。
(3)在不冲突的情况下,本公开的实施例及实施例中的特征可以相互组合以得到新的实施例。
以上所述仅为本公开的具体实施方式,但本公开的保护范围并不局限于此,本公开的保护范围应以所述权利要求的保护范围为准。

Claims (14)

  1. 一种手写内容去除方法,其特征在于,包括:
    获取待处理文本页面的输入图像,其中,所述输入图像包括手写区域,所述手写区域包括手写内容;
    对所述输入图像进行识别,以确定所述手写区域中的所述手写内容;
    去除所述输入图像中的所述手写内容,以得到输出图像。
  2. 根据权利要求1所述的手写内容去除方法,其特征在于,所述输入图像还包括文本印刷区域,所述文本印刷区域包括印刷内容,
    对所述输入图像进行识别,以确定所述手写区域中的所述手写内容,包括:
    利用区域识别模型对所述输入图像进行识别,以得到所述文本印刷区域和所述手写区域。
  3. 根据权利要求2所述的手写内容去除方法,其特征在于,去除所述输入图像中的所述手写内容,以得到所述输出图像,包括:
    对所述手写区域进行标注,以得到手写区域标注框,其中,所述手写区域标注框包括所述手写区域;
    从所述输入图像中切割去除所述手写区域标注框,以得到所述输出图像。
  4. 根据权利要求3所述的手写内容去除方法,其特征在于,从所述输入图像中切割去除所述手写区域标注框,以得到所述输出图像,包括:
    从所述输入图像中切割去除所述手写区域标注框,以得到中间输出图像;
    对所述中间输出图像进行二值化处理,以得到所述输出图像。
  5. 根据权利要求1所述的手写内容去除方法,其特征在于,所述输入图像还包括文本印刷区域,所述文本印刷区域包括印刷内容,
    对所述输入图像进行识别,以确定所述手写区域中的手写内容,包括:
    利用区域识别模型对所述输入图像进行识别,以得到所述文本印刷区域和所述手写区域,
    利用像素识别模型对所述手写区域进行像素识别,以确定在所述手写区域中所述手写内容对应的手写内容像素。
  6. 根据权利要求5所述的手写内容去除方法,其特征在于,去除所述输入图像中的所述手写内容,以得到所述输出图像,包括:
    获取替换像素值;
    利用所述替换像素值替换所述手写内容像素的像素值,以从所述输入图像去除所述手写内容而得到所述输出图像。
  7. 根据权利要求6所述的手写内容去除方法,其特征在于,利用所述替换像素的像素值替换所述手写内容像素的像素值,以从所述输入图像去除所述手写内容以得到所述输出图像,包括:
    利用所述替换像素的像素值替换所述手写内容像素的像素值,以从所述输入图像去除所述手写内容而得到中间输出图像;
    对所述中间输出图像进行二值化处理,以得到所述输出图像。
  8. 根据权利要求6所述的手写内容去除方法,其特征在于,所述替换像素值为所述手写区域中除了所述手写内容像素之外的任意一个像素的像素值;或者,
    所述替换像素值为所述手写区域中除了所述手写内容像素之外的所有像素的像素值的平均值。
  9. 根据权利要求2-8任一项所述的手写内容去除方法,其特征在于,所述待处理文本页面为试卷或习题,所述印刷内容包括题干,所述手写内容包括答案。
  10. 根据权利要求1-8任一项所述的手写内容去除方法,其特征在于,所述手写内容包括手写字符。
  11. 根据权利要求1-8任一项所述的手写内容去除方法,其特征在于,获取所述待处理文本页面的输入图像包括:
    获取所述待处理文本页面的原始图像,其中,所述原始图像包括待处理文本区域;
    对所述原始图像进行边缘检测,以确定所述原始图像中的所述待处理文本区域;
    对所述待处理文本区域进行转正处理,以得到所述输入图像。
  12. 一种手写内容去除装置,其特征在于,包括:
    存储器,用于非暂时性存储计算机可读指令;以及
    处理器,用于运行所述计算机可读指令,其中,所述计算机可读指令被所述处理器运行时执行根据权利要求1-11任一项所述的手写内容去除方法。
  13. 根据权利要求12所述的手写内容去除装置,其特征在于,还包括:图像获取部件,
    其中,所述图像获取部件用于获得作业图像,所述存储器还用于存储所述作业图像,所述处理器还用于读取并处理所述作业图像以得到输入图像。
  14. 一种存储介质,非暂时性地存储计算机可读指令,其中,当所述计算机可读指令由计算机执行时可以执行根据权利要求1-11任一项所述的手写内容去除方法。
PCT/CN2020/141110 2020-01-21 2020-12-29 手写内容去除方法、手写内容去除装置、存储介质 WO2021147631A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/791,220 US11823358B2 (en) 2020-01-21 2020-12-29 Handwritten content removing method and device and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010072431.4 2020-01-21
CN202010072431.4A CN111275139B (zh) 2020-01-21 2020-01-21 手写内容去除方法、手写内容去除装置、存储介质

Publications (1)

Publication Number Publication Date
WO2021147631A1 true WO2021147631A1 (zh) 2021-07-29

Family

ID=71001913

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/141110 WO2021147631A1 (zh) 2020-01-21 2020-12-29 手写内容去除方法、手写内容去除装置、存储介质

Country Status (3)

Country Link
US (1) US11823358B2 (zh)
CN (1) CN111275139B (zh)
WO (1) WO2021147631A1 (zh)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021023111A1 (zh) * 2019-08-02 2021-02-11 杭州睿琪软件有限公司 一种识别图像中票据数量、多个票据区域的方法及装置
CN111275139B (zh) 2020-01-21 2024-02-23 杭州大拿科技股份有限公司 手写内容去除方法、手写内容去除装置、存储介质
CN111488881A (zh) * 2020-04-10 2020-08-04 杭州睿琪软件有限公司 文本图像中手写内容去除方法、装置、存储介质
CN112396009A (zh) * 2020-11-24 2021-02-23 广东国粒教育技术有限公司 一种基于全卷积神经网络模型的算题批改方法、算题批改装置
CN112861864A (zh) * 2021-01-28 2021-05-28 广东国粒教育技术有限公司 一种题目录入方法、题目录入装置、电子设备及计算机可读存储介质
CN113781356B (zh) * 2021-09-18 2024-06-04 北京世纪好未来教育科技有限公司 图像去噪模型的训练方法、图像去噪方法、装置及设备
CN114283156B (zh) * 2021-12-02 2024-03-05 珠海移科智能科技有限公司 一种用于去除文档图像颜色及手写笔迹的方法及装置
CN113900602B (zh) * 2021-12-09 2022-03-11 北京辰光融信技术有限公司 一种自动消除目标对象填充信息的智能打印方法及系统

Citations (8)

Publication number Priority date Publication date Assignee Title
EP1231558A2 (en) * 2001-02-09 2002-08-14 Matsushita Electric Industrial Co., Ltd. A printing control interface system and method with handwriting discrimination capability
US20080144131A1 (en) * 2006-12-14 2008-06-19 Samsung Electronics Co., Ltd. Image forming apparatus and method of controlling the same
CN101482968A (zh) * 2008-01-07 2009-07-15 日电(中国)有限公司 图像处理方法和设备
CN104281625A (zh) * 2013-07-12 2015-01-14 曲昊 一种自动从数据库中获取试题信息的方法和装置
CN108921158A (zh) * 2018-06-14 2018-11-30 众安信息技术服务有限公司 图像校正方法、装置及计算机可读存储介质
CN109254711A (zh) * 2018-09-29 2019-01-22 联想(北京)有限公司 信息处理方法及电子设备
CN111275139A (zh) * 2020-01-21 2020-06-12 杭州大拿科技股份有限公司 手写内容去除方法、手写内容去除装置、存储介质
CN111488881A (zh) * 2020-04-10 2020-08-04 杭州睿琪软件有限公司 文本图像中手写内容去除方法、装置、存储介质

Family Cites Families (22)

Publication number Priority date Publication date Assignee Title
JP3504054B2 (ja) * 1995-07-17 2004-03-08 株式会社東芝 文書処理装置および文書処理方法
US20040205568A1 (en) * 2002-03-01 2004-10-14 Breuel Thomas M. Method and system for document image layout deconstruction and redisplay system
US7391917B2 (en) * 2003-02-13 2008-06-24 Canon Kabushiki Kaisha Image processing method
US7657091B2 (en) * 2006-03-06 2010-02-02 Mitek Systems, Inc. Method for automatic removal of text from a signature area
EP2920743A4 (en) * 2012-11-19 2017-01-04 IMDS America Inc. Method and system for the spotting of arbitrary words in handwritten documents
US10395133B1 (en) * 2015-05-08 2019-08-27 Open Text Corporation Image box filtering for optical character recognition
US9582230B1 (en) * 2015-10-09 2017-02-28 Xerox Corporation Method and system for automated form document fill-in via image processing
CN105631829B (zh) * 2016-01-15 2019-05-10 天津大学 基于暗通道先验与颜色校正的夜间雾霾图像去雾方法
CN105913421B (zh) * 2016-04-07 2018-11-16 西安电子科技大学 基于自适应形状暗通道的遥感图像云检测方法
KR102580519B1 (ko) * 2016-09-07 2023-09-21 삼성전자주식회사 영상처리장치 및 기록매체
CN106485720A (zh) * 2016-11-03 2017-03-08 广州视源电子科技股份有限公司 图像处理方法和装置
FR3060180A1 (fr) * 2016-12-14 2018-06-15 Cyclopus Procede de traitement d’image numerique
CN107358593B (zh) * 2017-06-16 2020-06-26 Oppo广东移动通信有限公司 成像方法和装置
CN107492078B (zh) * 2017-08-14 2020-04-07 厦门美图之家科技有限公司 一种去除图像中黑噪的方法及计算设备
CN107798670B (zh) * 2017-09-20 2021-03-19 中国科学院长春光学精密机械与物理研究所 一种利用图像引导滤波器的暗原色先验图像去雾方法
CN108304814B (zh) * 2018-02-08 2020-07-14 海南云江科技有限公司 一种文字类型检测模型的构建方法和计算设备
CN108520254B (zh) * 2018-03-01 2022-05-10 腾讯科技(深圳)有限公司 一种基于格式化图像的文本检测方法、装置以及相关设备
CN110163805B (zh) * 2018-06-05 2022-12-20 腾讯科技(深圳)有限公司 一种图像处理方法、装置和存储介质
EP3660733B1 (en) * 2018-11-30 2023-06-28 Tata Consultancy Services Limited Method and system for information extraction from document images using conversational interface and database querying
CN109859217B (zh) * 2019-02-20 2020-12-29 厦门美图之家科技有限公司 人脸图像中毛孔区域的分割方法及计算设备
CN109948510B (zh) * 2019-03-14 2021-06-11 北京易道博识科技有限公司 一种文档图像实例分割方法及装置
CN110427932B (zh) * 2019-08-02 2023-05-02 杭州睿琪软件有限公司 一种识别图像中多个票据区域的方法及装置

Patent Citations (8)

Publication number Priority date Publication date Assignee Title
EP1231558A2 (en) * 2001-02-09 2002-08-14 Matsushita Electric Industrial Co., Ltd. A printing control interface system and method with handwriting discrimination capability
US20080144131A1 (en) * 2006-12-14 2008-06-19 Samsung Electronics Co., Ltd. Image forming apparatus and method of controlling the same
CN101482968A (zh) * 2008-01-07 2009-07-15 日电(中国)有限公司 图像处理方法和设备
CN104281625A (zh) * 2013-07-12 2015-01-14 曲昊 一种自动从数据库中获取试题信息的方法和装置
CN108921158A (zh) * 2018-06-14 2018-11-30 众安信息技术服务有限公司 图像校正方法、装置及计算机可读存储介质
CN109254711A (zh) * 2018-09-29 2019-01-22 联想(北京)有限公司 信息处理方法及电子设备
CN111275139A (zh) * 2020-01-21 2020-06-12 杭州大拿科技股份有限公司 手写内容去除方法、手写内容去除装置、存储介质
CN111488881A (zh) * 2020-04-10 2020-08-04 杭州睿琪软件有限公司 文本图像中手写内容去除方法、装置、存储介质

Also Published As

Publication number Publication date
CN111275139A (zh) 2020-06-12
CN111275139B (zh) 2024-02-23
US11823358B2 (en) 2023-11-21
US20230037272A1 (en) 2023-02-02

Similar Documents

Publication Publication Date Title
WO2021147631A1 (zh) 手写内容去除方法、手写内容去除装置、存储介质
WO2021203832A1 (zh) 文本图像中手写内容去除方法、装置、存储介质
CN110866495B (zh) 票据图像识别方法及装置和设备、训练方法和存储介质
CN113486828B (zh) 图像处理方法、装置、设备和存储介质
US10902283B2 (en) Method and device for determining handwriting similarity
US20190304066A1 (en) Synthesis method of chinese printed character images and device thereof
US8755595B1 (en) Automatic extraction of character ground truth data from images
WO2021233266A1 (zh) 边缘检测方法和装置、电子设备和存储介质
CN113223025B (zh) 图像处理方法及装置、神经网络的训练方法及装置
CN110443235B (zh) 一种智能纸质试卷总分识别方法及系统
CN113033558B (zh) 一种用于自然场景的文本检测方法及装置、存储介质
CN111950355A (zh) 印章识别方法、装置及电子设备
CN113436222A (zh) 图像处理方法、图像处理装置、电子设备及存储介质
WO2022002002A1 (zh) 图像处理方法、图像处理装置、电子设备、存储介质
CN111213157A (zh) 一种基于智能终端的快递信息录入方法及录入系统
CN114386413A (zh) 处理数字化的手写
Panchal et al. An investigation on feature and text extraction from images using image recognition in Android
WO2022183907A1 (zh) 图像处理方法及装置、智能发票识别设备和存储介质
CN114241486A (zh) 一种提高识别试卷学生信息准确率的方法
KR102192558B1 (ko) 필기 내용을 공유하는 강의 관리 시스템 및 방법
Liu et al. A Connected Components Based Layout Analysis Approach for Educational Documents
CN113627297B (zh) 图像识别方法、装置、设备及介质
Sarak et al. Image Text to Speech Conversion using Optical Character Recognition Technique in Raspberry PI
CN112529885A (zh) 基于颜色识别的答题卡切分方法、装置、设备及存储介质
Tamirat Customers Identity Card Data Detection and Recognition Using Image Processing

Legal Events

Date Code Title Description
121  Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 20915418; Country of ref document: EP; Kind code of ref document: A1)
NENP  Non-entry into the national phase (Ref country code: DE)
122  Ep: pct application non-entry in european phase (Ref document number: 20915418; Country of ref document: EP; Kind code of ref document: A1)