EP3182365B1 - Writing board detection and correction - Google Patents

Writing board detection and correction

Info

Publication number
EP3182365B1
EP3182365B1 (application EP16196685.8A)
Authority
EP
European Patent Office
Prior art keywords
line
image
writing board
lines
strip
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
EP16196685.8A
Other languages
German (de)
English (en)
French (fr)
Other versions
EP3182365A3 (en)
EP3182365A2 (en)
Inventor
Gang Fang
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Konica Minolta Laboratory USA Inc
Original Assignee
Konica Minolta Laboratory USA Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Konica Minolta Laboratory USA Inc filed Critical Konica Minolta Laboratory USA Inc
Publication of EP3182365A2
Publication of EP3182365A3
Application granted
Publication of EP3182365B1

Classifications

    • G06T 5/80 — Image enhancement or restoration; Geometric correction
    • G06V 10/242 — Image preprocessing; Aligning, centring, orientation detection or correction of the image by image rotation, e.g. by 90 degrees
    • G06F 18/23 — Pattern recognition; Clustering techniques
    • G06T 3/40 — Geometric image transformations in the plane of the image; Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 7/13 — Image analysis; Segmentation; Edge detection
    • G06V 30/36 — Character recognition; Digital ink; Matching; Classification
    • G06T 2207/20112 — Indexing scheme for image analysis or image enhancement; Image segmentation details
    • G06T 2207/20164 — Salient point detection; Corner detection
    • G06T 7/194 — Segmentation; Edge detection involving foreground-background segmentation

Definitions

  • Writing boards (e.g., whiteboards, blackboards, etc.) are commonly used to communicate ideas. Text, drawings, charts, graphs, etc. may be placed on writing boards during lectures, training, brainstorming sessions, etc.
  • Image processing (e.g., optical character recognition (OCR)) may be used to extract the contents of a writing board from an image of the board.
  • When a writing board is photographed, the resulting image may include the background external to the writing board. Further, the resulting image may also capture the writing board with a distorted perspective. Both the background and the distorted perspective complicate the image processing and make successful extraction of the contents less likely. Nevertheless, users still wish to memorialize the ideas on a writing board by photographing the writing board.
  • An image processing tool is also known from US2007/268501.
  • The invention relates to a method for image processing according to claim 1.
  • The invention also relates to a non-transitory computer readable medium (CRM) storing computer readable program code for image processing according to claim 3.
  • The invention also relates to a system for image processing according to claim 4.
  • Embodiments of the invention provide a method, a non-transitory computer readable medium (CRM), and a system for image processing.
  • An image including a writing board and at least a portion of the background external to the writing board is obtained.
  • The lines (e.g., edges) within the image are detected. Lines that are located on the writing board in the image and lines that are located in the background of the image are identified and removed (i.e., excluded from further consideration). Some of the remaining lines are used to determine the corners of the writing board and to calculate a transformation that offsets the distorted perspective of the writing board.
  • After the transformation is applied, additional image processing (e.g., OCR) may be performed on the image to extract the contents of the writing board.
  • FIG. 1 shows a system (100) in accordance with one or more embodiments of the invention.
  • The system (100) has multiple components (e.g., a buffer (104), a line processor (114), a corner detector (110), and a correction engine (108)).
  • Each of these components (104, 108, 110, 114) may be located on the same computing device (e.g., personal computer (PC), laptop, tablet PC, smart phone, server, mainframe, cable box, kiosk, etc.) or may be located on different computing devices connected by a network of any size and any topology having wired and/or wireless segments.
  • The system (100) includes the buffer (104).
  • The buffer (104) may be implemented in hardware (i.e., circuitry), software, or any combination thereof.
  • The buffer (104) stores an image (106).
  • The image (106) includes a writing board and a background. The background of the image (106) is effectively any area of the image (106) that is not occupied by the writing board.
  • The image (106) may capture the writing board with a distorted perspective.
  • The image (106) may be obtained from any source. For example, the image (106) may be obtained from a digital camera in a smart phone (not shown), over a network (e.g., the Internet) (not shown), or from a hard drive (not shown). The image (106) may be of any size and any resolution.
  • The buffer (104) may downsample the image (106). Specifically, the buffer (104) may downsample the image (106) if the resolution of the image exceeds a predetermined threshold (e.g., the image (106) is in high definition), as in the sketch below.
  • The buffer (104) may store the image (106) while other components (e.g., 108, 110, 114) operate on the image (106).
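As a minimal sketch of this conditional downsampling, the following assumes an OpenCV-style resize; the pixel-count threshold and target width are illustrative assumptions, not values from the patent.

```python
# Sketch only: downsample the image when its resolution exceeds a threshold.
# The 2-megapixel threshold and 1280-pixel target width are illustrative.
import cv2

def maybe_downsample(image, max_pixels=2_000_000, target_width=1280):
    h, w = image.shape[:2]
    if h * w <= max_pixels:
        return image  # resolution below threshold; keep as-is
    scale = target_width / float(w)
    new_size = (target_width, int(round(h * scale)))
    return cv2.resize(image, new_size, interpolation=cv2.INTER_AREA)
```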
  • The system (100) includes the line processor (114).
  • The line processor (114) may be implemented in hardware (i.e., circuitry), software, or any combination thereof.
  • The line processor (114) is configured to perform edge detection on the image (106). In other words, the line processor (114) is configured to detect lines within the image (106).
  • The line processor (114) may utilize the Canny algorithm and/or the Hough transform to detect the lines in the image (106).
  • The line processor (114) may also remove (i.e., exclude from further consideration) lines that are not sufficiently long (i.e., lines that do not exceed a length threshold).
  • The line processor (114) may also classify each line as being closer to vertical or closer to horizontal, as in the sketch below.
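To make this detection stage concrete, here is a rough sketch using OpenCV's Canny edge detector and probabilistic Hough transform, followed by the length filter and vertical/horizontal classification described above. All thresholds are illustrative assumptions, not values from the patent.

```python
# Sketch only: detect candidate lines with Canny + Hough, drop short segments,
# and classify the rest as near-vertical or near-horizontal.
import cv2
import numpy as np

def detect_and_classify_lines(image_bgr, min_length=80):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=60,
                               minLineLength=min_length, maxLineGap=10)
    vertical, horizontal = [], []
    if segments is None:
        return vertical, horizontal
    for x1, y1, x2, y2 in segments[:, 0]:
        # minLineLength already excluded short lines; classify the rest by
        # whether the segment is closer to vertical or to horizontal.
        if abs(x2 - x1) < abs(y2 - y1):
            vertical.append(((x1, y1), (x2, y2)))
        else:
            horizontal.append(((x1, y1), (x2, y2)))
    return vertical, horizontal
```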
  • One or more detected lines may be located on the writing board in the image, one or more detected lines may be located in the background external to the writing board in the image, and one or more detected lines may correspond to the outline (e.g., border, perimeter, etc.) of the writing board in the image (106).
  • The line processor (114) identifies and removes (i.e., excludes from further consideration) lines that are located on the writing board in the image (106). For example, the line processor (114) may compare the intensity values of the pixels on both sides of a line to determine whether the selected line is on the writing board. This process may include the use of strips (discussed below).
  • The line processor (114) also identifies and removes (i.e., excludes from further consideration) lines that are located in the background of the image. Specifically, the line processor (114) may calculate multiple sample points on a selected line and generate links from a reference point (e.g., the center of the image) to each sample point. If the links intersect other lines before reaching the sample points, it may be determined that the selected line is located in the background of the image (106) (discussed below).
  • The system (100) includes the corner detector (110).
  • The corner detector (110) may be implemented in hardware (i.e., circuitry), software, or any combination thereof.
  • The corner detector (110) is configured to determine the corners of the writing board in the image (106). This may include partitioning the remaining lines into four clusters based on their normal orientations from the image center. Within each cluster, lines may be ranked based on various factors including length and proximity to the center of the image (106). Four lines, one from each of the four clusters, may be selected based on rank, and then the cross points (i.e., intersections) of the four lines are calculated. Assuming the calculated cross points do not violate a quadrangle principle (discussed below), the four cross points are deemed to be the corners of the writing board in the image (106).
  • The system (100) includes the correction engine (108).
  • The correction engine (108) may be implemented in hardware (i.e., circuitry), software, or any combination thereof.
  • The correction engine (108) is configured to calculate a transformation based on the calculated intersections and the distances between the four lines (discussed below).
  • The correction engine (108) is also configured to apply the transformation to the image (106) in order to offset (i.e., at least partially correct) the distorted perspective of the writing board in the image (106) (discussed below).
  • Although FIG. 1 shows the system (100) as having four components (104, 108, 110, 114), in other embodiments the system (100) may have more or fewer components.
  • For example, the system (100) may include a smart phone with a digital camera to capture the image (106).
  • As another example, the system (100) may include additional engines to perform additional processing (e.g., OCR) on the image (106) to extract the contents of the writing board in the image (106).
  • FIG. 2 shows a flowchart in accordance with one or more embodiments of the invention.
  • the flowchart depicts a process for image processing.
  • One or more of the steps in FIG. 2 may be performed by the components of the system (100), discussed above in reference to FIG. 1 .
  • One or more of the steps shown in FIG. 2 may be omitted, repeated, and/or performed in a different order than the order shown in FIG. 2. Accordingly, the scope of the invention should not be considered limited to the specific arrangement of steps shown in FIG. 2.
  • Initially, an image is obtained (STEP 205).
  • The image may be obtained from a digital camera. Additionally or alternatively, the image may be downloaded from a server.
  • The image may include a writing board (e.g., a whiteboard, a blackboard, etc.). The writing board may occupy the center of the image.
  • The image may also include a background that is external to the writing board. The background may appear on some or all four sides of the writing board.
  • In STEP 210, the image may be downsampled to enhance the linearity of the edges. STEP 210 may be optional.
  • Next, lines are detected in the image. The lines may be detected by applying a line detection algorithm. For example, the lines of the image may be detected by applying the Canny algorithm and/or the Hough transform to the image. Other algorithms for line detection may also be applied.
  • Any detected line that does not exceed a length threshold (i.e., any short line) may be removed (i.e., excluded from further consideration).
  • As discussed above, one or more detected lines may be located on the writing board in the image, one or more detected lines may be located in the background external to the writing board in the image, and one or more detected lines may correspond to the border of the writing board.
  • In STEP 220, lines located on the writing board in the image are identified and removed (i.e., excluded from further consideration). Additional details regarding STEP 220 may be found, for example, in FIG. 3 and in FIG. 5C.
  • In STEP 225, lines located in the background of the image are identified and removed (i.e., excluded from further consideration). Additional details regarding STEP 225 may be found, for example, in FIG. 4 and in FIG. 5D.
  • Next, the remaining lines are partitioned into four clusters based on their normal orientations from the image center.
  • Within each cluster, the lines may be ranked. For example, the lines may be ranked in terms of length and/or proximity to the center of the image. A cluster having only one line is possible.
  • In STEP 235, a line is selected from each of the four clusters, resulting in a set of four lines.
  • The selection may be at random. In other words, a line may be selected at random from each cluster. Additionally or alternatively, a line may be selected from each cluster based on the rank of the line in the cluster. For example, the highest ranked line may be selected from each cluster. As another example, the lowest ranked line may be selected from each cluster.
  • Next, the cross points (i.e., intersections) of the four lines are calculated. Calculating the cross points of the four lines may include extending one or more of the four lines until they intersect with the other lines. There will be four cross points, and each cross point may be represented with coordinates.
  • In STEP 245, it is determined whether the calculated cross points violate a quadrangle principle.
  • The quadrangle principle is violated if one or more of the calculated cross points are located close (i.e., within a predetermined distance) to the center of the line segments. Additionally or alternatively, the quadrangle principle is violated if both cross points on a line are on the same side of the line. Said in a different way, as discussed above, in order to calculate cross points, a line may be extended in both directions (extension in direction A, extension in direction B) to intersect with the other lines. If both cross points of a line are located in the same extension of the line (i.e., both located in the extension in direction A or both located in the extension in direction B), then the quadrangle principle is violated.
  • When the quadrangle principle is not violated, the process proceeds to STEP 250, where the cross points are deemed to be the four corners of the writing board in the image.
  • When the quadrangle principle is violated, the process returns to STEP 235, where at least one line of the set of lines is replaced with a different line from the same cluster. A sketch of the cross-point calculation and one quadrangle check follows.
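The sketch below intersects four border-candidate lines (treated as infinite lines through their endpoints) and applies the near-midpoint quadrangle check described above. The line representation, helper names, and the 0.25 tolerance are illustrative assumptions.

```python
# Sketch only: compute the four cross points of the selected lines and
# reject the set if any cross point violates the near-midpoint check.
import numpy as np

def intersect(l1, l2):
    """Cross point of two infinite lines, each given as ((x1, y1), (x2, y2))."""
    (x1, y1), (x2, y2) = l1
    (x3, y3), (x4, y4) = l2
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(d) < 1e-9:
        return None  # parallel lines: no usable cross point
    a = x1 * y2 - y1 * x2
    b = x3 * y4 - y3 * x4
    return np.array([(a * (x3 - x4) - (x1 - x2) * b) / d,
                     (a * (y3 - y4) - (y1 - y2) * b) / d])

def near_midpoint(point, segment, frac=0.25):
    """Quadrangle check: a corner should not sit near a segment's middle."""
    p1, p2 = np.asarray(segment[0], float), np.asarray(segment[1], float)
    mid = (p1 + p2) / 2.0
    return np.linalg.norm(point - mid) < frac * np.linalg.norm(p2 - p1)

def corners_from_lines(top, right, bottom, left):
    """Returns corners in order: top-left, top-right, bottom-left, bottom-right."""
    pairs = [(top, left), (top, right), (bottom, left), (bottom, right)]
    corners = []
    for la, lb in pairs:
        p = intersect(la, lb)
        if p is None or near_midpoint(p, la) or near_midpoint(p, lb):
            return None  # quadrangle principle violated; try different lines
        corners.append(p)
    return corners
```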
  • Next, a transformation is calculated based on the cross points and the distances between the four lines. For example, let w be the distance between the midpoint of one vertical line and the midpoint of the other vertical line in the set of four lines. Further, let h be the distance between the midpoint of one horizontal line and the midpoint of the other horizontal line in the set of four lines.
  • The transformation may be an affine transformation that maps the coordinates of the cross points to the following coordinates: (0, 0), (w, 0), (0, h), and (w, h).
  • The distorted perspective is at least partially corrected by applying the transformation to the image. Following application of the transformation, it is more likely that any processing (e.g., OCR) performed on the image to extract the contents of the writing board in the image will be successful (STEP 260). Those skilled in the art, having the benefit of this detailed description, will appreciate that STEP 260 is optional.
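A compact sketch of this correction step follows. Although the patent describes the mapping as an affine transformation, a general four-point correspondence is commonly realized as a perspective (homography) warp; the use of OpenCV's cv2.getPerspectiveTransform here, and the corner ordering, are assumptions of this sketch.

```python
# Sketch only: compute w and h from line midpoints, then warp the four
# corners onto the (0, 0), (w, 0), (0, h), (w, h) rectangle.
import cv2
import numpy as np

def midpoint(segment):
    (x1, y1), (x2, y2) = segment
    return np.array([(x1 + x2) / 2.0, (y1 + y2) / 2.0])

def board_size(vert1, vert2, horiz1, horiz2):
    """w: distance between vertical-line midpoints; h: between horizontal ones."""
    w = np.linalg.norm(midpoint(vert1) - midpoint(vert2))
    h = np.linalg.norm(midpoint(horiz1) - midpoint(horiz2))
    return w, h

def rectify(image, corners, w, h):
    """corners in order: top-left, top-right, bottom-left, bottom-right."""
    src = np.float32(corners)
    dst = np.float32([[0, 0], [w, 0], [0, h], [w, h]])
    m = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(image, m, (int(round(w)), int(round(h))))
```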
  • FIG. 3 shows a flowchart in accordance with one or more embodiments of the invention.
  • the flowchart depicts a process for identifying lines that are located on a writing board in an image.
  • One or more of the steps in FIG. 3 may be performed by the components of the system (100), discussed above in reference to FIG. 1 .
  • One or more of the steps shown in FIG. 3 may be omitted, repeated, and/or performed in a different order than the order shown in FIG. 3. Accordingly, the scope of the invention should not be considered limited to the specific arrangement of steps shown in FIG. 3.
  • The process shown in FIG. 3 may correspond to STEP 220, discussed above in reference to FIG. 2.
  • Initially, one of the detected lines in the image is selected (STEP 305).
  • The line may be selected at random from all the detected lines. Additionally or alternatively, the line may be selected because it is the longest line, the shortest line, the closest line to the center of the image, the farthest line from the center of the image, etc.
  • Next, multiple strips are generated for the selected line. Each strip includes multiple pixels (e.g., 2, 3, 10, etc.) from each side of the line. For example, if the line is classified as vertical, each strip is horizontal and may include 3 pixels from the left side of the line and 3 pixels from the right side of the line. As another example, if the line is classified as horizontal, each strip is vertical and may include 3 pixels below the line and 3 pixels above the line.
  • Then, intensity values for pixels on both sides of the line are identified for each strip. For a given strip, these intensity values may be sorted for each side of the line. For example, in a horizontal strip, the intensity values for the pixels on the left side of the line may be sorted amongst themselves, and the intensity values for the pixels on the right side of the line may be sorted amongst themselves.
  • Next, statistical intensity values are calculated for both sides of the line on a strip-by-strip basis. For example, the statistical intensity value may correspond to the mean or median intensity value among the pixels located on one side of the line in a strip. As another example, the statistical intensity values may correspond to the 40% intensity value (I40) and the 60% intensity value (I60) of the pixels located on one side of the line in a strip.
  • A uniform intensity strip (UIS) is a strip where the statistical intensity value for one side of the line matches (i.e., equals or approximately equals) the statistical intensity value for the other side of the line. When the statistical intensity values from the two sides of the line match, the strip is deemed to be a UIS.
  • It is then determined whether the number of UISs for the selected line exceeds a threshold. For example, the threshold may be 1/3 of all the strips for the selected line.
  • When the number of UISs exceeds the threshold, the line is deemed as being located on the writing board in the image (STEP 335).
  • When the number of UISs does not exceed the threshold, the line is deemed as not being located on the writing board in the image (STEP 340).
  • The process depicted in FIG. 3 may be repeated for each detected line in the image that has not been removed (i.e., excluded from further consideration). In other words, the process depicted in FIG. 3 may be repeated multiple times. A sketch of the strip test appears below.
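The following sketch illustrates the strip test for a near-vertical line on a grayscale image: for each row crossed by the line, it compares the median intensity of a few pixels on each side and counts the fraction of uniform intensity strips. The 3-pixel half-width and the 1/3 threshold follow the examples above; the matching tolerance and remaining details are illustrative assumptions.

```python
# Sketch only: decide whether a near-vertical line lies on the writing board
# by comparing median intensities on both sides, strip by strip.
import numpy as np

def on_writing_board(gray, line, half_width=3, tol=10, uis_frac=1.0 / 3.0):
    (x1, y1), (x2, y2) = line
    if y1 == y2:
        return False  # degenerate input for this vertical-line sketch
    uis = total = 0
    for y in range(min(y1, y2), max(y1, y2) + 1):
        if y < 0 or y >= gray.shape[0]:
            continue
        # x position of the line at this row (linear interpolation)
        t = (y - y1) / float(y2 - y1)
        x = int(round(x1 + t * (x2 - x1)))
        if x - half_width < 0 or x + half_width >= gray.shape[1]:
            continue
        left = gray[y, x - half_width:x]            # pixels left of the line
        right = gray[y, x + 1:x + 1 + half_width]   # pixels right of the line
        total += 1
        if abs(np.median(left) - np.median(right)) <= tol:
            uis += 1  # uniform intensity strip: both sides look alike
    return total > 0 and uis >= uis_frac * total
```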
  • FIG. 4 shows a flowchart in accordance with one or more embodiments of the invention.
  • the flowchart depicts a process for identifying lines that are located in the background external to a writing board in an image.
  • One or more of the steps in FIG. 4 may be performed by the components of the system (100), discussed above in reference to FIG. 1 .
  • One or more of the steps shown in FIG. 4 may be omitted, repeated, and/or performed in a different order than the order shown in FIG. 4. Accordingly, the scope of the invention should not be considered limited to the specific arrangement of steps shown in FIG. 4.
  • The process shown in FIG. 4 may correspond to STEP 225, discussed above in reference to FIG. 2.
  • Initially, one of the detected lines in the image is selected (STEP 405).
  • The line may be selected at random from all the detected lines. Additionally or alternatively, the line may be selected because it is the longest line, the shortest line, the closest line to the center of the image, the farthest line from the center of the image, etc.
  • Next, multiple sample points are calculated for the selected line.
  • The number of sample points may be based on the length of the line. For example, if the selected line has a length of L, sample points may be placed at L/10 intervals (or L/2 intervals, L/4 intervals, L/5 intervals, L/8 intervals, L/16 intervals, etc.) on the selected line. Additionally or alternatively, a fixed number of sample points may be used regardless of the length of the selected line. The sample points may be spaced at random distances along the selected line. Additionally or alternatively, sample points may be placed only at the ends of the selected line.
  • Then, links are generated from a reference point to the sample points on the selected line. The links themselves are effectively line segments.
  • The reference point may correspond to the center of the image. Additionally or alternatively, the reference point may be near the center of the image (e.g., located in a small region that includes the center of the image).
  • A link intersection is effectively an intersection of a link with another line before the link reaches its sample point on the selected line.
  • Some links may have no link intersections, while a single link may have multiple link intersections.
  • In STEP 425, it is determined whether the total number of link intersections (i.e., the link intersections for all links going from the reference point to the selected line) exceeds a threshold (e.g., 1, 5, 6, 10, 11, etc.). When it is determined that the total number of link intersections exceeds the threshold, the selected line is deemed to be located in the background external to the writing board in the image (STEP 430). When it is determined that the total number of link intersections does not exceed the threshold, the selected line is not deemed to be located in the background (STEP 435).
  • The process depicted in FIG. 4 may be repeated for each detected line in the image that has not been removed (i.e., excluded from further consideration). In other words, the process depicted in FIG. 4 may be repeated multiple times. A sketch of the link-intersection test appears below.
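Here is a compact sketch of the link-intersection test: sample points are taken along the candidate line, links run from the image center to each sample point, and crossings with the other detected lines are counted with a standard segment-intersection test. The sampling density and the threshold of 5 (taken from the FIG. 5D example below) are illustrative.

```python
# Sketch only: a line is flagged as background if links from the image
# center to its sample points cross other detected lines more than a
# threshold number of times.
import numpy as np

def _ccw(a, b, c):
    return (c[1] - a[1]) * (b[0] - a[0]) > (b[1] - a[1]) * (c[0] - a[0])

def segments_intersect(p1, p2, p3, p4):
    """Proper segment intersection via the standard orientation test."""
    return (_ccw(p1, p3, p4) != _ccw(p2, p3, p4) and
            _ccw(p1, p2, p3) != _ccw(p1, p2, p4))

def is_background_line(line, other_lines, center, n_samples=10, threshold=5):
    p1, p2 = np.asarray(line[0], float), np.asarray(line[1], float)
    # Sample points at regular intervals along the selected line.
    samples = [p1 + t * (p2 - p1) for t in np.linspace(0.0, 1.0, n_samples)]
    crossings = 0
    for s in samples:
        for other in other_lines:
            if segments_intersect(center, s, other[0], other[1]):
                crossings += 1
    return crossings > threshold
```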
  • FIGs. 5A-5F show an implementation example in accordance with one or more embodiments of the invention.
  • As shown in FIG. 5A, there exists an image (506) with a whiteboard (508).
  • The perspective of the whiteboard (508) in the image (506) is distorted.
  • Performing OCR or other types of image processing on the image (506) would likely produce poor results. For example, the image processing might not correctly extract the content of the whiteboard (508) from the image (506) because of the distorted perspective.
  • FIG. 5B shows the image (510) after line detection.
  • The detected lines include lines located on the whiteboard (508), lines corresponding to the outline (e.g., perimeter, border) of the whiteboard (508), and lines located in the background external to the whiteboard (508).
  • FIG. 5C shows one of the detected lines (514) and multiple generated strips (i.e., Strip A (516A), Strip C (516C), Strip E (516E)).
  • Each strip (516A, 516C, 516E) includes pixels (512) that are located on the left side of the line (514) and pixels that are located on the right side of the line (514).
  • One or more statistical intensity values (e.g., I40, I60) may be calculated for the pixels of strip C (516C) on each side of the line (514). If the statistical intensity values from both sides of the line (514) match, strip C (516C) is deemed to be a uniform intensity strip. If at least a third of all the strips (516A, 516C, 516E, etc.) are uniform intensity strips, the line (514) is deemed to be located on the whiteboard (508) in the image.
  • FIG. 5D shows a detected line (524) and a reference point (520) that is near the center of the image.
  • Multiple sample points (522) are calculated for the detected line (524), and multiple links (525) are generated from the reference point (520) to the sample points (522).
  • There are other detected lines (i.e., Other Line A (526) and Other Line B (528)) located between the reference point (520) and the detected line (524). Accordingly, some of the links intersect with the other lines (526, 528). As the number of link intersections exceeds a threshold (e.g., 5), the detected line (524) is deemed to be located in the background external to the whiteboard (508) in the image.
  • FIG. 5E shows multiple clusters (i.e., cluster A (530A), cluster B (530B), cluster C (530C), and cluster D (530D)).
  • The clusters (530A, 530B, 530C, 530D) include two clusters for horizontal lines (530B, 530D) and two clusters for vertical lines (530A, 530C).
  • Within each cluster, the lines may be ranked according to length, distance from the center of the image, etc.
  • A set of four lines is formed by selecting one line from each of the clusters (530A, 530B, 530C, 530D).
  • FIG. 5F shows the calculation of cross points for a set of four lines.
  • Because these cross points do not violate a quadrangle principle, they are deemed to be the corners of the whiteboard (508) in the image.
  • An affine transformation may be calculated based on the coordinates of the cross points and the distances (w, h) between the four lines. This transformation may be applied to the image to at least partially correct the distorted perspective of the writing board. Following application of the transformation, the image is better suited for additional image processing (e.g., OCR) to extract the content of the whiteboard (508).
  • One or more embodiments of the invention may have the following advantages: the ability to at least partially correct a distorted perspective of a writing board in an image; the ability to identify and remove lines located on the writing board in the image; the ability to identify and remove lines located in the background external to the writing board in the image; the ability to determine that a line is located in the background based on link intersections; the ability to determine that a line is located on the writing board using statistical intensity values and uniform intensity strips; etc.
  • Embodiments of the invention may be implemented on virtually any type of computing system, regardless of the platform being used.
  • For example, the computing system may be one or more mobile devices (e.g., laptop computer, smart phone, personal digital assistant, tablet computer, or other mobile device), desktop computers, servers, blades in a server chassis, or any other type of computing device or devices that includes at least the minimum processing power, memory, and input and output device(s) to perform one or more embodiments of the invention.
  • The computing system (600) may include one or more computer processor(s) (602), associated memory (604) (e.g., random access memory (RAM), cache memory, flash memory, etc.), one or more storage device(s) (606) (e.g., a hard disk, an optical drive such as a compact disk (CD) drive or digital versatile disk (DVD) drive, a flash memory stick, etc.), and numerous other elements and functionalities.
  • The computer processor(s) (602) may be an integrated circuit for processing instructions. For example, the computer processor(s) may be one or more cores, or micro-cores, of a processor.
  • The computing system (600) may also include one or more input device(s) (610), such as a touchscreen, keyboard, mouse, microphone, touchpad, electronic pen, or any other type of input device. Further, the computing system (600) may include one or more output device(s) (608), such as a screen (e.g., a liquid crystal display (LCD), a plasma display, touchscreen, cathode ray tube (CRT) monitor, projector, or other display device), a printer, external storage, or any other output device. One or more of the output device(s) may be the same or different from the input device(s).
  • The computing system (600) may be connected to a network (612) (e.g., a local area network (LAN), a wide area network (WAN) such as the Internet, a mobile network, or any other type of network) via a network interface connection (not shown).
  • The input and output device(s) may be locally or remotely (e.g., via the network (612)) connected to the computer processor(s) (602), memory (604), and storage device(s) (606).
  • Software instructions in the form of computer readable program code to perform embodiments of the invention may be stored, in whole or in part, temporarily or permanently, on a non-transitory computer readable medium such as a CD, DVD, storage device, a diskette, a tape, flash memory, physical memory, or any other computer readable storage medium.
  • The software instructions may correspond to computer readable program code that, when executed by a processor(s), is configured to perform embodiments of the invention.
  • Further, one or more elements of the aforementioned computing system (600) may be located at a remote location and connected to the other elements over a network (612).
  • One or more embodiments of the invention may be implemented on a distributed system having a plurality of nodes, where each portion of the invention may be located on a different node within the distributed system.
  • The node may correspond to a distinct computing device. Alternatively, the node may correspond to a computer processor with associated physical memory, or to a computer processor or micro-core of a computer processor with shared memory and/or resources.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)
  • Studio Devices (AREA)
  • Facsimiles In General (AREA)
  • Drawing Aids And Blackboards (AREA)
EP16196685.8A 2015-12-18 2016-11-01 Writing board detection and correction Active EP3182365B1 (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US14/975,531 US9824267B2 (en) 2015-12-18 2015-12-18 Writing board detection and correction

Publications (3)

Publication Number Publication Date
EP3182365A2 EP3182365A2 (en) 2017-06-21
EP3182365A3 EP3182365A3 (en) 2017-09-20
EP3182365B1 true EP3182365B1 (en) 2019-01-30

Family

ID=57345681

Family Applications (1)

Application Number Title Priority Date Filing Date
EP16196685.8A Active EP3182365B1 (en) 2015-12-18 2016-11-01 Writing board detection and correction

Country Status (4)

Country Link
US (1) US9824267B2 (en)
EP (1) EP3182365B1 (en)
JP (1) JP2017168079A (ja)
CN (1) CN107038441B (zh)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10268920B2 (en) * 2017-08-31 2019-04-23 Konica Minolta Laboratory U.S.A., Inc. Detection of near rectangular cells
CN108171282B (zh) * 2017-12-29 2021-08-31 安徽慧视金瞳科技有限公司 Method for automatically synthesizing blackboard handwriting
JP2020098420A (ja) * 2018-12-17 2020-06-25 Sony Corporation Image processing apparatus, image processing method, and program
US11302035B2 * 2019-09-06 2022-04-12 Intel Corporation Processing images using hybrid infinite impulse response (IIR) and finite impulse response (FIR) convolution block
EP4267912A1 (en) * 2020-12-22 2023-11-01 Dittopatterns Llc Image projecting systems and methods
WO2023122537A1 (en) * 2021-12-20 2023-06-29 Canon U.S.A., Inc. Apparatus and method for enhancing a whiteboard image
WO2023235581A1 (en) * 2022-06-03 2023-12-07 Canon U.S.A., Inc. Apparatus and method for enhancing a whiteboard image

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7171056B2 (en) * 2003-02-22 2007-01-30 Microsoft Corp. System and method for converting whiteboard content into an electronic document
JP2006107018A (ja) * 2004-10-04 2006-04-20 Konica Minolta Photo Imaging Inc Image analysis method and apparatus, image processing method and system, and operation programs therefor
US8306336B2 (en) 2006-05-17 2012-11-06 Qualcomm Incorporated Line or text-based image processing tools
US8098936B2 (en) * 2007-01-12 2012-01-17 Seiko Epson Corporation Method and apparatus for detecting objects in an image
US8244062B2 (en) * 2007-10-22 2012-08-14 Hewlett-Packard Development Company, L.P. Correction of distortion in captured images
JP5266953B2 (ja) * 2008-08-19 2013-08-21 Seiko Epson Corporation Projection display device and display method
US8345106B2 (en) * 2009-09-23 2013-01-01 Microsoft Corporation Camera-based scanning
US8873864B2 (en) * 2009-12-16 2014-10-28 Sharp Laboratories Of America, Inc. Methods and systems for automatic content-boundary detection
US8503813B2 (en) * 2010-12-22 2013-08-06 Arcsoft Hangzhou Co., Ltd. Image rectification method
CN103106648B (zh) * 2011-11-11 2016-04-06 株式会社理光 Method and device for determining the projection area in an image
CN102789340B (zh) * 2012-06-27 2015-12-16 深圳市巨龙科教高技术股份有限公司 Method and device for acquiring whiteboard coordinates for an electronic whiteboard, and electronic whiteboard
CN103473541B (zh) * 2013-08-21 2016-11-09 方正国际软件有限公司 Certificate perspective correction method and system
CN103870863B (zh) * 2014-03-14 2016-08-31 华中科技大学 Method for preparing a holographic anti-counterfeiting label hiding a two-dimensional-code image, and recognition device therefor

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
None *

Also Published As

Publication number Publication date
US9824267B2 (en) 2017-11-21
EP3182365A3 (en) 2017-09-20
CN107038441A (zh) 2017-08-11
JP2017168079A (ja) 2017-09-21
US20170177931A1 (en) 2017-06-22
CN107038441B (zh) 2020-05-19
EP3182365A2 (en) 2017-06-21

Similar Documents

Publication Publication Date Title
EP3182365B1 (en) Writing board detection and correction
WO2022148192A1 (zh) Image processing method, image processing apparatus, and non-transitory storage medium
US9697423B1 (en) Identifying the lines of a table
US9076205B2 (en) Edge direction and curve based image de-blurring
US20180158199A1 (en) Image alignment for burst mode images
US9865038B2 (en) Offsetting rotated tables in images
US9934431B2 (en) Producing a flowchart object from an image
US10083218B1 (en) Repairing tables
JP2017215946A (ja) Robust method for tracking the lines of a table
US10049268B2 (en) Selective, user-mediated content recognition using mobile devices
CN112581374A (zh) Speckle sub-pixel center extraction method, system, device, and medium
US9483834B1 (en) Object boundary detection in an image
US9727145B2 (en) Detecting device and detecting method
US10163004B2 (en) Inferring stroke information from an image
JP7219011B2 (ja) Typesetness score for a table
US10679049B2 (en) Identifying hand drawn tables
US9785856B2 (en) Repairing holes in images
CN115359502A (zh) Image processing method, apparatus, device, and storage medium
US20180032807A1 (en) Selecting primary groups during production of a flowchart object from an image
CN113850239A (zh) Multi-document detection method and apparatus, electronic device, and storage medium
CN113850238A (zh) Document detection method and apparatus, electronic device, and storage medium
US9940698B1 (en) Cleaning writing boards based on strokes
US10157311B2 (en) Detecting arrows within images
US10268920B2 (en) Detection of near rectangular cells
US20180211366A1 (en) Flattening and Rectifying A Curved Image

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN PUBLISHED

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

PUAL Search report despatched

Free format text: ORIGINAL CODE: 0009013

AK Designated contracting states

Kind code of ref document: A3

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

RIC1 Information provided on ipc code assigned before grant

Ipc: G06T 5/00 20060101AFI20170816BHEP

Ipc: G06T 7/13 20170101ALI20170816BHEP

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20180122

RBV Designated contracting states (corrected)

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: GRANT OF PATENT IS INTENDED

INTG Intention to grant announced

Effective date: 20180907

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE PATENT HAS BEEN GRANTED

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: AT

Ref legal event code: REF

Ref document number: 1093859

Country of ref document: AT

Kind code of ref document: T

Effective date: 20190215

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 602016009618

Country of ref document: DE

REG Reference to a national code

Ref country code: LT

Ref legal event code: MG4D

REG Reference to a national code

Ref country code: NL

Ref legal event code: MP

Effective date: 20190130

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190130

Ref country code: PL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190130

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190130

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190130

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190530

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190130

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190130

Ref country code: NO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190430

REG Reference to a national code

Ref country code: AT

Ref legal event code: MK05

Ref document number: 1093859

Country of ref document: AT

Kind code of ref document: T

Effective date: 20190130

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190501

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190430

Ref country code: LV

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190130

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190530

Ref country code: RS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190130

Ref country code: HR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190130

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: AL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190130

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190130

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190130

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190130

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190130

Ref country code: IT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190130

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190130

REG Reference to a national code

Ref country code: DE

Ref legal event code: R097

Ref document number: 602016009618

Country of ref document: DE

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SM

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190130

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190130

26N No opposition filed

Effective date: 20191031

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190130

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: TR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190130

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20191130

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20191101

Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20191130

Ref country code: MC

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190130

REG Reference to a national code

Ref country code: BE

Ref legal event code: MM

Effective date: 20191130

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20191101

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20191130

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190130

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: HU

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; INVALID AB INITIO

Effective date: 20161101

Ref country code: MT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190130

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190130

P01 Opt-out of the competence of the unified patent court (upc) registered

Effective date: 20230510

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20230907

Year of fee payment: 8

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 20230911

Year of fee payment: 8

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DE

Payment date: 20230906

Year of fee payment: 8