WO2022173521A1 - Image object extraction - Google Patents

Image object extraction

Info

Publication number
WO2022173521A1
WO2022173521A1 (PCT/US2021/071173)
Authority
WO
WIPO (PCT)
Prior art keywords
image
cluster
convex polygon
pixels
generate
Prior art date
Application number
PCT/US2021/071173
Other languages
English (en)
Inventor
Varadharaman BALASUBRAMANIAN
Rithvik Kumar THUMMALACHARLA
Neethu John
Gnanesh RASINENI
Original Assignee
Hewlett-Packard Development Company, L.P.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hewlett-Packard Development Company, L.P.
Publication of WO2022173521A1

Links

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 - Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 - Character recognition
    • G06V30/14 - Image acquisition
    • G06V30/146 - Aligning or centring of the image pick-up or image-field
    • G06V30/1475 - Inclination or skew detection or correction of characters or of image to be recognised
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 - Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 - Character recognition
    • G06V30/14 - Image acquisition
    • G06V30/146 - Aligning or centring of the image pick-up or image-field
    • G06V30/147 - Determination of region of interest
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 - Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 - Character recognition
    • G06V30/16 - Image preprocessing
    • G06V30/162 - Quantising the image signal
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 - Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 - Character recognition
    • G06V30/16 - Image preprocessing
    • G06V30/168 - Smoothing or thinning of the pattern; Skeletonisation

Definitions

  • Electronic images of documents may be generated by scanning or capturing the documents.
  • An image capturing device such as a flatbed scanner, a camera, or the like may be utilized to generate an electronic image of a document.
  • Mobile phones may include cameras, which provide users the ability to capture an electronic image of the document.
  • In image capturing devices such as a scanner, a copier, or a multi-function device, the document may be placed upon an imaging surface/platen made of a material such as glass for scanning, copying, printing, or other such purposes.
  • FIG. 1 is a block diagram of an example image acquisition device, including an image extraction module to extract an image object from an input image by correcting a skew orientation;
  • FIG. 2A depicts an example input image, including an image object and a background of the image object
  • FIG. 2B depicts an example binary edge map of the input image of FIG. 2A
  • FIG. 2C depicts an example image mask with an outer boundary corresponding to the binary edge map of FIG. 2B;
  • FIG. 2D depicts an example output of a connected component analysis, including an image cluster of the input image of FIG. 2A;
  • FIG. 2E depicts an example convex polygon for the image cluster of FIG. 2D;
  • FIG. 2F depicts an example bounding box, including the image cluster of FIG. 2D;
  • FIG. 2G depicts an example output image of the input image of FIG. 2A
  • FIG. 3 is a block diagram of an example image acquisition device including a non-transitory machine-readable storage medium storing instructions to generate an output image by removing a transparent layer around an extracted image cluster from an input image;
  • FIG. 4A depicts an example extracted image cluster including a transparent layer
  • FIG. 4B depicts the example extracted image cluster of FIG. 4A, depicting an outer boundary
  • FIG. 4C depicts an example image cluster with enhanced contrast in the outer boundary of FIG. 4B;
  • FIG. 4D depicts an example binary edge map of the image cluster of FIG. 4C
  • FIG. 4E depicts an example output image without transparent layer of FIG. 4A
  • FIG. 5A depicts an example input image including a background noise
  • FIG. 5B depicts an example output image of the input image of FIG. 5A without the background noise
  • FIG. 6 is a flowchart illustrating an example method for generating an output image by correcting an orientation of an extracted image cluster
  • FIG. 7A depicts an example input image including an image object in a random orientation
  • FIG. 7B depicts an example image cluster obtained by de-skewing and cropping the image cluster of FIG. 7A.
  • FIG. 7C depicts an example output image generated by correcting the orientation of the image cluster of FIG. 7B.
  • An electronic or digital image of a document may be generated by scanning or capturing the document using an image capturing device.
  • An example document can be a business card, an identification card, a statement copy, a bonded document (e.g., a certificate), a letter, a check, a bill, an invoice, or the like.
  • An example image capturing device or an image reading apparatus may be a flatbed scanner, a camera, or the like.
  • the image capturing device may optically read the document and convert the read image into electronic data (i.e., the digital image).
  • The quality of such digital images may depend on multiple factors. For example, variations while positioning the document for scanning may result in a skewed scanned image. Skewed images may be unappealing, and subsequent processing of such images (e.g., optical character recognition) may be challenging. Thus, to scan the document at the correct angle or orientation, the document may have to be placed in the right orientation and aligned to a marker position on the scanner bed of the flatbed scanner.
  • Scanning a document that includes a transparent layer, such as a lamination layer or a thin protective cover, around its boundary may result in the presence of an extra boundary or noise in the scanned image.
  • The scanned image may also include noise due to changes in environment illumination (e.g., a change in luminance, which is a measure of the amount of light falling on the document) while scanning, dust particles on the document, a moiré effect in the document (e.g., a pattern on an object being photographed can interfere with the shape of the light sensors to generate unwanted artifacts), and/or the like.
  • Examples described herein may provide an image acquisition device to extract an actual image object within an input image (e.g., a scanned or captured image), irrespective of skew and rotation of the image object.
  • the image acquisition device may generate a binary edge map of an input image.
  • the binary edge map may include outer edges and inner edges of an image object in the input image.
  • the image acquisition device may isolate the outer edges of the image object to generate an image mask with an outer boundary of the image object.
  • the image acquisition device may perform a connected component analysis on the image mask to determine an image cluster within the outer boundary. Further, the image acquisition device may generate a convex polygon of the image cluster based on connected group of pixels in the image cluster.
  • the image acquisition device may extract the image cluster from the input image by correcting a skew orientation of the image cluster based on the convex polygon.
  • the image acquisition device may remove a transparent layer (e.g., a lamination or protective cover) around the extracted image cluster, for instance, by removing edges having an intensity less than a threshold at an outer boundary of the extracted image cluster.
  • FIG. 1 is a block diagram of an example image acquisition device 100, including an image extraction module 106 to extract an image object from an input image by correcting a skew orientation.
  • image acquisition device 100 may be a mobile phone, a digital camera, a scanner, a multifunctional printer, or any other device capable of processing the input image.
  • image acquisition device 100 may include an image scanner to scan a document to generate the scan image or input image.
  • the document may be laid face-down on a transparent platen of the image scanner so that a reading unit installed in the image scanner can read the document through the platen to generate the input image.
  • image acquisition device 100 may include a camera to capture the document to generate the input image.
  • image acquisition device 100 may receive the input image (e.g., a camera captured image or a scan image) from an optical imaging device externally connected to image acquisition device 100.
  • the input image may be a pre-stored image or may be generated based on scanning the document in real-time.
  • image acquisition device 100 may include a processor 102 and a memory 104 including an image extraction module 106 that can be executed by processor 102.
  • Processor 102 may be a type of central processing unit (CPU), microprocessor, or processing logic that interprets and executes machine-readable instructions stored in memory 104 of image acquisition device 100.
  • image extraction module 106 may generate a binary edge map of an input image.
  • the input image may be a camera captured image or a scan image of a document.
  • the document may be a business card, an identification card, a bonded document (e.g., a certificate), a letter, a check, a bill, an invoice, or the like including content such as text, graphics, or a combination thereof.
  • the image extraction module 106 may generate the binary edge map of the input image as follows. First, image extraction module 106 may perform a background smoothing to set background-colored pixels that are within a standard deviation to a mean background color. For example, a multicolor dropout algorithm may be used to set the background-colored pixels of the input image within the standard deviation to the mean background color. Thus, the background smoothing may remove a non-uniformity (e.g., a finger scratch mark) in the input image, which otherwise may appear like a foreground element in the output image.
  • An example input image 200A is depicted in FIG. 2A.
  • Input image 200A may depict an image object 202 and a background 204 as shown in FIG. 2A. Extraction of image object 202 may be complex when document background 204 and a boundary of image object 202 are of similar color. In such examples, image acquisition device 100 may be able to extract image object 202 even when the image object boundary and document background 204 are of similar color by performing the background smoothing.
  • image extraction module 106 may generate the binary edge map of the input image based on the background smoothened input image.
  • In an example, a Sobel edge detection algorithm (which generates an image emphasizing edges) may be used to generate the binary edge map.
  • the binary edge map may include outer edges and inner edges of the image object in the input image.
  • An example binary edge map 200B is depicted in FIG. 2B.
  • Example binary edge map 200B may include the outer edges (e.g., 206) and inner edges (e.g., 208).
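The disclosure does not prescribe a particular edge-detector implementation beyond naming Sobel edge detection. As an illustrative sketch, a Sobel gradient magnitude followed by a fixed threshold (the threshold value 64 here is an assumption, not a value from the disclosure) produces a binary edge map in pure NumPy:

```python
import numpy as np

def sobel_binary_edge_map(gray, thresh=64):
    """Binary edge map: Sobel gradient magnitude, then a fixed threshold.
    `gray` is a 2-D intensity array; `thresh` is an assumed cutoff."""
    g = gray.astype(float)
    p = np.pad(g, 1, mode="edge")  # pad so the output matches the input size
    # Sobel kernels expressed as shifted slices (no explicit convolution loop):
    gx = (p[:-2, 2:] + 2 * p[1:-1, 2:] + p[2:, 2:]
          - p[:-2, :-2] - 2 * p[1:-1, :-2] - p[2:, :-2])
    gy = (p[2:, :-2] + 2 * p[2:, 1:-1] + p[2:, 2:]
          - p[:-2, :-2] - 2 * p[:-2, 1:-1] - p[:-2, 2:])
    mag = np.hypot(gx, gy)
    return (mag > thresh).astype(np.uint8)

# A white square on black: edges appear only along the square's border.
img = np.zeros((10, 10), dtype=np.uint8)
img[3:7, 3:7] = 255
edges = sobel_binary_edge_map(img)
```

In the uniform interior and background the gradient is zero, so only the object boundary survives thresholding, mirroring outer edges 206 of FIG. 2B.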
  • image extraction module 106 may generate an image mask with an outer boundary of the image object based on the binary edge map.
  • a flood fill algorithm may be used to isolate the outer boundary of the image object.
  • An example image mask with the outer boundary is shown in FIG. 2C.
  • FIG. 2C depicts the image mask 200C with outer boundary 206, inside which a white color is filled (e.g., as shown by 210) using the flood fill algorithm.
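The flood-fill step can be sketched with a breadth-first fill that stops at edge pixels; the seed point is assumed to lie inside the image object (the disclosure does not specify how the seed is chosen):

```python
from collections import deque
import numpy as np

def flood_fill_mask(edge_map, seed):
    """Fill the region containing `seed`, stopping at edge pixels (value 1).
    Returns a mask with 1 inside the outer boundary, as in FIG. 2C."""
    h, w = edge_map.shape
    mask = np.zeros((h, w), dtype=np.uint8)
    q = deque([seed])
    while q:
        r, c = q.popleft()
        if not (0 <= r < h and 0 <= c < w):
            continue
        if mask[r, c] or edge_map[r, c]:
            continue  # already filled, or an edge pixel (the boundary)
        mask[r, c] = 1
        q.extend([(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)])
    return mask

# A closed rectangular boundary; filling from an interior seed marks only
# the pixels inside the outer boundary.
edge = np.zeros((8, 8), dtype=np.uint8)
edge[2, 2:6] = edge[5, 2:6] = 1   # top and bottom edges
edge[2:6, 2] = edge[2:6, 5] = 1   # left and right edges
mask = flood_fill_mask(edge, (3, 3))
```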
  • image extraction module 106 may perform a connected component analysis on the image mask to determine an image cluster within the outer boundary.
  • the image cluster may include a connected group of pixels.
  • a connected component analysis may be used to determine the image cluster from the flood filled output (i.e., the image mask).
  • the image cluster may refer to the image object that has to be extracted.
  • FIG. 2D depicts an example output 200D of the connected component analysis including an image cluster 212.
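The connected component analysis is a standard labelling pass; a minimal 4-connected implementation (a stand-in for whatever labelling routine an actual device would use) looks like this:

```python
from collections import deque
import numpy as np

def connected_components(mask):
    """4-connected component labelling of a binary mask; returns a label
    image (0 = background) and the number of clusters found."""
    h, w = mask.shape
    labels = np.zeros((h, w), dtype=int)
    count = 0
    for sr in range(h):
        for sc in range(w):
            if mask[sr, sc] and not labels[sr, sc]:
                count += 1
                labels[sr, sc] = count
                q = deque([(sr, sc)])
                while q:
                    r, c = q.popleft()
                    for nr, nc in ((r-1, c), (r+1, c), (r, c-1), (r, c+1)):
                        if (0 <= nr < h and 0 <= nc < w
                                and mask[nr, nc] and not labels[nr, nc]):
                            labels[nr, nc] = count
                            q.append((nr, nc))
    return labels, count

# Two disconnected blobs yield two clusters, each a connected group of pixels.
m = np.zeros((6, 10), dtype=np.uint8)
m[1:3, 1:4] = 1   # first cluster
m[4:6, 6:9] = 1   # second cluster
labels, n = connected_components(m)
```

Each label corresponds to one image cluster (such as cluster 212 of FIG. 2D), which is why the same analysis also separates two documents scanned at once, as described later.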
  • image extraction module 106 may generate a convex polygon of the image cluster based on the connected group of pixels.
  • a convex hull algorithm may be used to generate the convex polygon of the image cluster.
  • the output of the connected component analysis (i.e., the image cluster) may be provided as input to the convex hull algorithm.
  • the convex hull algorithm may determine the convex polygon of the image cluster.
  • FIG. 2E depicts an example convex polygon 216 for image cluster 212 (e.g., as shown in FIG. 2D).
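The disclosure names "a convex hull algorithm" without fixing a variant; Andrew's monotone chain is one common choice and serves as a sketch of how the convex polygon is obtained from the cluster's pixel coordinates:

```python
def convex_hull(points):
    """Andrew's monotone-chain convex hull; returns hull vertices in
    counter-clockwise order. One possible realisation of the disclosure's
    'convex hull algorithm'."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0]-o[0]) * (b[1]-o[1]) - (a[1]-o[1]) * (b[0]-o[0])

    lower, upper = [], []
    for p in pts:                      # build lower hull
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):            # build upper hull
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

# Pixel coordinates of a cluster: the interior point (1, 1) is not a vertex
# of the resulting convex polygon.
hull = convex_hull([(0, 0), (4, 0), (4, 3), (0, 3), (1, 1)])
```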
  • image extraction module 106 may determine a skew angle of the image cluster based on the convex polygon.
  • image extraction module 106 may determine the skew angle as follows. First, image extraction module 106 may estimate a bounding box of the image cluster based on the convex polygon. An example bounding box 218 is depicted in FIG. 2E. In this example, the bounding box may be estimated by calculating a height and width of the bounding box surrounding the oriented/rotated image cluster. Each pixel in the bounding box may be classified as being outside, inside, or on the convex polygon.
  • the bounding box surrounding the oriented/rotated image cluster may include at least a portion of another image object in input image 200A.
  • the pixels lying outside the convex polygon may represent pixels of the other image object.
  • image extraction module 106 may replace pixels lying outside the convex polygon with a mean background color of the input image and pixels lying on and inside the convex polygon with an original color of the input image. Painting the pixels in this way may facilitate extraction of the individual image object, particularly when the input image is of low resolution, where possible gaps in the outer edge of the image object may lead to concave edges.
  • image extraction module 106 may extract the bounding box including the cluster image upon replacing the pixels.
  • An example extracted bounding box 218 including an oriented image cluster 212 is depicted in FIG. 2F. Further, image extraction module 106 may determine the skew angle of the image cluster from the extracted bounding box.
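The outside/inside/on classification against a convex polygon reduces to checking the sign of per-edge cross products. Below is a sketch (the 6x6 crop, its colour 200, and the mean background colour 255 are all made-up illustration values):

```python
import numpy as np

def point_in_convex_polygon(pt, poly):
    """Classify pt against a convex polygon given as counter-clockwise
    vertices: 'inside', 'on', or 'outside', via per-edge cross products."""
    on_edge = False
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        cross = (x2 - x1) * (pt[1] - y1) - (y2 - y1) * (pt[0] - x1)
        if cross < 0:
            return "outside"   # point lies to the right of a CCW edge
        if cross == 0:
            on_edge = True     # collinear with this edge
    return "on" if on_edge else "inside"

# Hypothetical bounding-box crop: pixels outside the polygon are painted
# with the mean background colour; pixels on/inside keep their original colour.
poly = [(0, 0), (4, 0), (4, 4), (0, 4)]   # CCW square in (x, y) coordinates
img = np.full((6, 6), 200, dtype=np.uint8)
mean_bg = 255
painted = img.copy()
for r in range(6):
    for c in range(6):
        if point_in_convex_polygon((c, r), poly) == "outside":
            painted[r, c] = mean_bg
```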
  • image extraction module 106 may determine the skew angle of the image cluster from the extracted bounding box as follows.
  • Image extraction module 106 may generate a minimum area-oriented boundary polygon from the convex polygon in an orientation of the convex polygon.
  • the minimum area-oriented boundary polygon may encapsulate content within the image cluster.
  • the minimum area-oriented boundary polygon may be a minimum area-oriented bounding rectangle.
  • the minimum area-oriented bounding rectangle may refer to the smallest area rectangle which encloses the connected group of pixels of the image cluster.
  • An example minimum area-oriented bounding rectangle 220 is shown in FIG. 2E.
  • image extraction module 106 may determine the skew angle of the image cluster based on the orientation of the minimum area-oriented bounding polygon relative to the bounding box.
  • FIG. 2E further depicts an example skew angle 214 of minimum area-oriented bounding rectangle/minimum area-oriented bounding polygon 220 relative to bounding box 218.
  • image extraction module 106 may de-skew the image cluster based on the skew angle. Furthermore, image extraction module 106 may generate an output image by extracting the de-skewed image cluster. In an example, image extraction module 106 may generate the output image by cropping the de-skewed image cluster from the extracted bounding box based on a dimension of the minimum area-oriented boundary polygon (e.g., the minimum area-oriented bounding rectangle). Once the image is rotated along the skew angle, the size of the minimum area-oriented bounding rectangle (e.g., a width (W) and a height (H) of minimum area-oriented bounding rectangle 220 as shown in FIG. 2E) may be used to trim an excess background from the de-skewed image cluster.
  • FIG. 2G depicts an example output image 200G (e.g., de-skewed and cropped output image).
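The minimum area-oriented bounding rectangle and skew angle of FIGs. 2E-2G can be computed by exploiting the fact that one side of the optimal rectangle is collinear with a hull edge (in practice a library routine such as OpenCV's `minAreaRect` does this; the sketch below is a self-contained stand-in, and note the angle of a rectangle is only defined up to 90 degrees):

```python
import math

def min_area_rect(hull):
    """Minimum-area oriented bounding rectangle of convex-hull vertices:
    try each hull edge's orientation, rotate the points so that edge is
    axis-aligned, and keep the smallest axis-aligned box.
    Returns (angle_degrees, width, height)."""
    best = None
    n = len(hull)
    for i in range(n):
        (x1, y1), (x2, y2) = hull[i], hull[(i + 1) % n]
        theta = math.atan2(y2 - y1, x2 - x1)
        c, s = math.cos(-theta), math.sin(-theta)
        xs = [x * c - y * s for x, y in hull]   # rotate by -theta
        ys = [x * s + y * c for x, y in hull]
        w, h = max(xs) - min(xs), max(ys) - min(ys)
        if best is None or w * h < best[0]:
            best = (w * h, math.degrees(theta), w, h)
    return best[1], best[2], best[3]

# A 4x2 rectangle skewed by 30 degrees: the recovered angle equals the skew
# (up to the rectangle's 90-degree ambiguity), and the recovered width and
# height give the crop dimensions used to trim the excess background.
a = math.radians(30)
rect = [(0, 0), (4, 0), (4, 2), (0, 2)]
rot = [(x * math.cos(a) - y * math.sin(a), x * math.sin(a) + y * math.cos(a))
       for x, y in rect]
angle, w, h = min_area_rect(rot)
```

De-skewing then amounts to rotating the cluster by `-angle` and cropping to `w` by `h`, as in FIG. 2G.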
  • examples described herein may extract the image object along the actual boundaries irrespective of the skew angle and orientation of the image object within the document.
  • an input image includes a first image object and a second image object (i.e., two separate documents are scanned/captured at once).
  • the background smoothing may be performed on the input image.
  • the binary edge map of the input image may be generated based on the background smoothened input image.
  • the binary edge map may include outer edges and inner edges of each of the first and second image objects in the input image.
  • the flood fill algorithm may be applied on the binary edge map to generate a first image mask with an outer boundary of the first image object and a second image mask with an outer boundary of the second image object.
  • a connected component analysis may be performed on the flood filled output (e.g., the first image mask and the second image mask) to determine a first image cluster and a second image cluster, respectively, within respective outer boundaries.
  • a first convex polygon of the first image cluster and a second convex polygon of the second image cluster may be generated based on connected groups of pixels.
  • a first bounding box including the first cluster image and a second bounding box including the second cluster image may be extracted based on the respective convex polygons.
  • skew angles corresponding to the first and second image clusters may be determined based on the first and second convex polygons.
  • the first and second image clusters may be de-skewed separately based on corresponding skew angles.
  • a first output image and a second output image may be generated by extracting the de-skewed first and second image clusters.
  • the first and second output images may be outputted (e.g., printed) either separately or in a single page.
  • FIG. 3 is a block diagram of an example image acquisition device 300 including a non-transitory machine-readable storage medium 304 storing instructions to generate an output image by removing a transparent layer around an extracted image cluster.
  • Image acquisition device 300 may include a processor 302 and machine-readable storage medium 304 communicatively coupled through a system bus.
  • Processor 302 may be any type of central processing unit (CPU), microprocessor, or processing logic that interprets and executes machine-readable instructions stored in machine-readable storage medium 304.
  • Machine-readable storage medium 304 may be a random-access memory (RAM) or another type of dynamic storage device that may store information and machine-readable instructions that may be executed by processor 302.
  • machine-readable storage medium 304 may be synchronous DRAM (SDRAM), double data rate (DDR), rambus DRAM (RDRAM), rambus RAM, etc., or storage memory media such as a floppy disk, a hard disk, a CD-ROM, a DVD, a pen drive, and the like.
  • machine-readable storage medium 304 may be non-transitory machine-readable medium.
  • Machine-readable storage medium 304 may be remote but accessible to image acquisition device 300.
  • machine-readable storage medium 304 may store instructions 306-316.
  • instructions 306-316 may be executed by processor 302 to generate an output image by removing a transparent layer around an extracted image cluster.
  • Instructions 306 may be executed by processor 302 to receive an input image having an image object.
  • the image object may be in a random orientation.
  • Instructions 308 may be executed by processor 302 to generate a binary edge map of the input image.
  • the binary edge map may include outer edges of the image object.
  • Instructions 310 may be executed by processor 302 to perform a connected component analysis on the input image to determine an image cluster in the binary edge map.
  • the image cluster may include a connected group of pixels corresponding to the image object.
  • Instructions 312 may be executed by processor 302 to generate a convex polygon of the image cluster based on the connected group of pixels.
  • Instructions 314 may be executed by processor 302 to extract the image cluster from the input image by correcting a skew orientation of the image cluster based on the convex polygon.
  • instructions to extract the image cluster from the input image by correcting the skew orientation may include instructions to:
  • each pixel in the bounding box may be classified as being outside, inside, or on the convex polygon.
  • instructions to extract the image cluster from the bounding box may include instructions to:
  • Instructions 316 may be executed by processor 302 to generate an output image by removing a transparent layer around the extracted image cluster.
  • instructions to generate the output image may include instructions to determine a region of interest from the extracted image cluster.
  • the region of interest may correspond to an outer boundary of the extracted image cluster.
  • An example extracted image cluster 400A is depicted in FIG. 4A.
  • an outer boundary 402 of extracted image cluster 400A of FIG. 4A is depicted in FIG. 4B. Since the region of interest may correspond to outer boundary 402 of extracted image cluster 400B, an inner region 404 may be painted with a darker color (e.g., a black color) as shown in FIG. 4B.
  • instructions to generate the output image may include instructions to increment the contrast of the region of interest by stretching a range of intensity values.
  • a contrast stretching or normalization may be applied on the area near the outer boundary to remove weak edges and a high intensity transparent layer.
  • the normalization may enhance the contrast in the outer boundary by stretching out the intensity values between a lower and upper cut off limit to the range of 0 to 255.
  • the lower intensity cut off limit can be 10.
  • the upper intensity cut off may be determined based on a dominant color coverage percent and an average standard deviation over the extracted image cluster.
  • the intensity values below the lower cut-off limit may be set to 0, those above the upper cut-off limit may be set to 255, and the intensity values in between may be stretched between 0 and 255.
  • An example image cluster 400C depicting enhanced contrast (e.g., 406) in outer boundary 402 of FIG. 4B by stretching the range of intensity values is shown in FIG. 4C.
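The contrast stretch described above is a linear remapping of the intensity range. A minimal sketch follows; the upper cut-off of 200 is an assumed placeholder, since the disclosure derives it from the dominant-colour coverage percent and the average standard deviation over the extracted image cluster:

```python
import numpy as np

def stretch_contrast(region, lo=10, hi=200):
    """Linearly stretch intensities in [lo, hi] to [0, 255]; values below
    `lo` clamp to 0 and values above `hi` clamp to 255. `hi` is an assumed
    cut-off for illustration."""
    r = region.astype(float)
    out = (r - lo) * 255.0 / (hi - lo)
    return np.clip(out, 0, 255).astype(np.uint8)

vals = np.array([[5, 10, 105, 200, 250]], dtype=np.uint8)
stretched = stretch_contrast(vals)   # -> [[0, 0, 127, 255, 255]]
```

Weak, low-intensity edges near the outer boundary fall below the lower cut-off and vanish, which is what suppresses a high-intensity transparent layer.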
  • instructions to generate the output image may include instructions to apply a border padding on the region of interest. For example, applying the border padding with white color followed by an extra layer of high intensity clipping may clearly distinguish the border of the image cluster.
  • instructions to generate the output image may include instructions to generate an edge map of the extracted image cluster. For example, the edge map may be calculated on three channels (red (R), green (G), and blue (B)) and the border to crop may be obtained by taking the maximum edge map value from the R, G and B channel for each pixel.
  • instructions to generate the output image may include instructions to generate a binary edge map of the extracted image cluster by applying thresholding on the edge map to determine coordinates of a border having intensity greater than a threshold.
  • An example binary edge map 400D is depicted in FIG. 4D.
  • instructions to generate the output image may include instructions to generate the output image by removing the transparent layer around the determined coordinates from the extracted image cluster. For example, upon determining the coordinates of the border, the border may be cropped from the extracted image cluster to generate the output image.
  • An example output image 400E without the transparent layer is depicted in FIG. 4E.
  • machine-readable storage medium 304 may store instructions to detect background noise from image data of the output image and cleanse the detected background noise from the output image.
  • a document may include a noise such as dust particles, markings, and the like.
  • the input image may reflect the noises as random pixels in the background, referred to as the background noise.
  • An example input image 500A including the background noise (e.g., 502) is depicted in FIG. 5A.
  • the background noise may also appear in the output image.
  • the background noise may be cleansed by removing the pixels constituting the background noise.
  • An example output image generated by removing the background noise (e.g., 502) of input image 500A of FIG. 5A is depicted in FIG. 5B.
  • FIG. 6 is a flowchart illustrating an example method 600 for generating an output image by correcting an orientation of an extracted image cluster from an input image.
  • Example method 600 depicted in FIG. 6 may represent a generalized illustration, and other processes may be added, or existing processes may be removed, modified, or rearranged, without departing from the scope and spirit of the present application.
  • the processes may represent instructions stored on a computer-readable storage medium that, when executed, may cause a processor to respond, to perform actions, to change states, and/or to make decisions.
  • the processes of method 600 may represent functions and/or actions performed by functionally equivalent circuits like analog circuits, digital signal processing circuits, application specific integrated circuits (ASICs), or other hardware components associated with the system.
  • example method 600 may not be intended to limit the implementation of the present application; rather, example method 600 may illustrate functional information to design/fabricate circuits, generate machine-readable instructions, or use a combination of hardware and machine-readable instructions to perform the illustrated processes.
  • an input image having an image object in a random orientation may be received.
  • a background smoothing may be performed on the input image.
  • a binary edge map of the background smoothened input image may be generated by removing inner edge pixels associated with the image object.
  • a connected component analysis may be performed on the input image to determine an image cluster in the binary edge map.
  • the image cluster may include a connected group of pixels corresponding to the image object.
  • a convex polygon of the image cluster may be generated based on the connected group of pixels.
  • the image cluster may be extracted from the input image by correcting a skew orientation of the image cluster based on the convex polygon.
  • extracting the image cluster from the input image by correcting the skew orientation may include:
  • An example bounding box 702 including an oriented cluster image 704 is depicted in FIG. 7A.
  • FIG. 7B depicts an example image cluster 700B obtained by de-skewing and cropping image cluster 704 from bounding box 702 of FIG. 7A.
  • an output image may be generated by correcting the orientation of the extracted image cluster by applying optical character recognition.
  • An example output image 700C is depicted in FIG. 7C.
  • Output image 700C may be generated by correcting the orientation of image cluster 700B of FIG. 7B.
  • the optical character recognition may be used to rotate the extracted image cluster by 180 degrees to correct the orientation.
  • generating the output image by correcting the orientation of the extracted image may include:
  • output image 700C may be generated by rotating image cluster 700B based on the English language, as letters of the language have a unique direction.
  • other regional and/or national languages may also be identified to correct the orientation of the extracted cluster image.
  • the image cluster includes a combination of languages (i.e., two or more languages), then any one of the languages can be used to correct the orientation of the image cluster.
  • a transparent layer around the output image may be removed by removing edges having an intensity less than a threshold near an outer boundary of the output image.
  • background noise may be cleansed from the extracted image cluster by removing pixels constituting the background noise in the output image upon removing the transparent layer. An example removal of transparent layer and background noise is described in FIG. 3.


Abstract

An image acquisition device may include a processor and a memory including an image extraction module. The image extraction module may generate a binary edge map of an input image. Further, the image extraction module may generate an image mask with an outer boundary of an image object based on the binary edge map. Furthermore, the image extraction module may perform a connected component analysis on the image mask to determine an image cluster within the outer boundary. Further, the image extraction module may generate a convex polygon of the image cluster based on a connected group of pixels in the image cluster. Furthermore, the image extraction module may determine a skew angle of the image cluster based on the convex polygon and de-skew the image cluster based on the skew angle. Further, the image extraction module may generate an output image by extracting the de-skewed image cluster.
PCT/US2021/071173 2021-02-11 2021-08-13 Image object extraction WO2022173521A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IN202141005830 2021-02-11
IN202141005830 2021-02-11

Publications (1)

Publication Number Publication Date
WO2022173521A1 true WO2022173521A1 (fr) 2022-08-18

Family

ID=82838648

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2021/071173 WO2022173521A1 (fr) 2021-02-11 2021-08-13 Image object extraction

Country Status (1)

Country Link
WO (1) WO2022173521A1 (fr)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100014774A1 (en) * 2008-07-17 2010-01-21 Lawrence Shao-Hsien Chen Methods and Systems for Content-Boundary Detection
US20120093434A1 (en) * 2009-06-05 2012-04-19 Serene Banerjee Edge detection
US20160050338A1 (en) * 2009-07-02 2016-02-18 Hewlett-Packard Development Company, L.P. Skew detection
US20190180415A1 (en) * 2016-08-17 2019-06-13 Hewlett-Packard Development Company, L.P. Image forming apparatus, scanned image correction method thereof, and non-transitory computer-readable recording medium



Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21926037

Country of ref document: EP

Kind code of ref document: A1