CN114026598A - Writing surface boundary marking for computer vision - Google Patents


Info

Publication number
CN114026598A
Authority
CN
China
Prior art keywords
image
boundary
writing surface
marker
marking
Prior art date
Legal status (assumed; not a legal conclusion)
Pending
Application number
CN202080027842.XA
Other languages
Chinese (zh)
Inventor
J·勒梅
J·艾普斯坦
Current Assignee (as listed; not legally verified)
Rocket Innovations Inc
Original Assignee
Rocket Innovations Inc
Priority date (assumed; not a legal conclusion)
Filing date
Publication date
Application filed by Rocket Innovations Inc filed Critical Rocket Innovations Inc
Publication of CN114026598A publication Critical patent/CN114026598A/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/80: Geometric correction
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 65/00: Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L 65/40: Support for services or applications
    • H04L 65/401: Support for services or applications wherein the services involve a main real-time session and one or more additional parallel real-time or time sensitive sessions, e.g. white board sharing or spawning of a subconference
    • H04L 65/4015: Support for services or applications wherein the services involve a main real-time session and one or more additional parallel real-time or time sensitive sessions, where at least one of the additional parallel sessions is real time or time sensitive, e.g. white board sharing, collaboration or spawning of a subconference
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04845: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487: Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488: Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F 3/04883: Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser, for inputting data by handwriting, e.g. gesture or text
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00: Geometric image transformations in the plane of the image
    • G06T 3/18: Image warping, e.g. rearranging pixels individually
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/11: Region-based segmentation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/194: Segmentation; Edge detection involving foreground-background segmentation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/70: Determining position or orientation of objects or cameras
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 65/00: Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L 65/60: Network streaming of media packets
    • H04L 65/61: Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio
    • H04L 65/611: Network streaming of media packets for supporting one-way streaming services, for multicast or broadcast
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20112: Image segmentation details
    • G06T 2207/20132: Image cropping
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30204: Marker

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)
  • Geometry (AREA)
  • Studio Devices (AREA)

Abstract

A system for capturing, organizing, and storing handwritten notes includes a plurality of boundary markers. The boundary markers are configured to be positioned on a writing surface and have a fluorescent color. The system also includes a tangible, non-transitory computer-readable medium encoded with instructions that, when run on a camera-equipped computing device, cause the camera-equipped computing device to perform processing. The processing includes capturing an image of the writing surface with the fluorescent boundary markers thereon, detecting the fluorescent color boundary markers in the captured image, and identifying a virtual boundary in the captured image based on the locations of the fluorescent color boundary markers. The processing then expands (dewarps) the portion of the captured image within the virtual boundary to produce an expanded image.

Description

Writing surface boundary marking for computer vision
Priority and cross-reference to related applications
This patent application claims priority from U.S. Provisional Patent Application No. 62/833,321, filed April 12, 2019, which is incorporated herein by reference in its entirety.
Technical Field
Illustrative embodiments of the present invention relate generally to markers placed on a writing surface to define a boundary, and more particularly, to machine vision identification of marker locations.
Background
Students and professionals often write on whiteboards. In some collaborative efforts, such as during a research session or team meeting, users may take pictures of notes written on a whiteboard to memorialize or share the meeting notes. For example, a user can take a picture of the whiteboard with their cell phone to record the notes and share them with colleagues.
Disclosure of Invention
According to one embodiment of the present invention, a system for capturing, organizing, and storing handwritten notes includes a plurality of boundary markers. The boundary markers are configured to be positioned on a writing surface and have a fluorescent color. The system also includes a tangible, non-transitory computer-readable medium encoded with instructions that, when run on a camera-equipped computing device, cause the camera-equipped computing device to perform processing. The processing includes capturing an image of the writing surface with the fluorescent boundary markers thereon, detecting the fluorescent color boundary markers in the captured image, and identifying a virtual boundary in the captured image based on the locations of the fluorescent color boundary markers. The processing then expands (dewarps) the portion of the captured image within the virtual boundary to produce an expanded image.
In some embodiments, the boundary markers are fluorescent orange. The markers may have a substantially triangular shape and may be made of silicone. In various embodiments, the markers are coupled to the writing surface using an adhesive and/or a microsuction device. Furthermore, the markers may be portable and easy to grasp; to this end, each marker may have a thickness of between about 0.5 millimeters and about 3 millimeters.
In some embodiments, the processing performed by the camera-equipped computing device further comprises broadcasting the expanded image. Additionally, the broadcast may be updated as new images are captured. Other processing may include saving the expanded image in an image storage area. Additionally or alternatively, the processing may further comprise cropping the boundary markers from the image. Still other processing may include removing the background from the captured image, cropping the captured image to the virtual boundary, and/or enhancing the image. In various embodiments, the processing performed by the computing device is performed in response to taking a picture of the writing surface. The writing surface may comprise a whiteboard or a wall.
According to another embodiment, a method for capturing and storing handwritten notes includes placing a plurality of boundary markers on a writing surface. The boundary markers define a virtual boundary that encompasses the handwritten notes. The method includes capturing a writing surface image by scanning the writing surface using an electronic device. Additionally, the method identifies the locations of the markers in the writing surface image and determines the boundary based on those locations. The method may also expand the portion of the captured image within the virtual boundary to produce an expanded image. The expanded image may then be cropped based on the location of the detected boundary.
Among other options, placing the plurality of boundary markers includes positioning the boundary markers at locations that substantially define corners of a rectangular boundary. Identifying the location of the marker may include identifying the fluorescent color in the image. The expanded image may be stored and/or broadcast.
A second writing surface image may be captured by scanning the writing surface using the electronic device. The method may then identify the locations of the markers in the second writing surface image and determine the boundary based on those locations. The method may then expand the second writing surface image based on the boundary detected in the captured second writing surface image to produce a second expanded image. The method may also crop the second expanded image based on the location of the detected boundary. In some embodiments, the broadcast of the expanded image may be updated to broadcast the second expanded image.
According to yet another embodiment, a marker for machine vision detection includes a first surface having a fluorescent color. The first surface is configured to be viewed by machine vision. The marker has a second surface with a surface coupling portion. The surface coupling portion is configured to couple to a writing surface such that the marker remains coupled to the writing surface when the writing surface is in an upright orientation.
Among other options, the shape of the marker may correspond to at least a portion of the shape of an edge of the writing surface. The writing surface may be a whiteboard. The marker may be coupled to the writing surface using a microsuction layer. In some embodiments, the marker is formed from a material that does not retain a visible fold pattern. The marker, or a majority thereof, may be formed of silicone; thus, the marker may be water washable and reusable.
According to another embodiment, a system for sharing handwritten notes includes a computer device coupled with a camera. The camera is configured to view a background having content. The system also includes a plurality of boundary markers having a fluorescent color. The boundary markers are configured to be positioned between the background and the camera so as to define a virtual boundary around a portion of the background containing the content. The computer device is configured to: (1) detect the fluorescent color boundary markers, (2) determine the virtual boundary, and (3) correct (deskew) the portion of the background according to the shape of the virtual boundary to produce a corrected image of that portion of the background. The computer device is further configured to share the corrected image of that portion of the background.
Among other arrangements, the boundary markers may be held together by a frame. The frame may have an outer marker-holding portion and an inner portion to be imaged (also referred to as an image portion). The marker-holding portion may be formed of plastic or metal and may be shaped to hold the markers in a predefined orientation corresponding to the virtual boundary. The image portion may contain a preset background, or an aperture/opening through which the background can be viewed.
The frame may have boundary markers positioned at one or more vertices of the marker-holding portion such that the positioned markers define a virtual boundary, such as a rectangle. During use, the frame may be positioned between the camera and the background. In some other embodiments, the frame may have a transparent annotation surface over the image portion. The annotation surface may be annotated and/or marked using a writing instrument. In other embodiments, the image portion may contain a preset background. The image portion may be corrected and/or shared with the participant. The image portion may be shared as an image or video.
Some embodiments may comprise a kit having a plurality of boundary markers. The boundary marker may have a top surface opposite a bottom surface. The top surface may have a fluorescent color, and the bottom surface may be configured to adhere to a writing surface. The writing surface may be a whiteboard. The boundary markers may be shaped as triangles. In some embodiments, the kit may comprise four boundary markers.
The illustrative embodiments of the present invention are implemented as a computer program product having a computer usable medium with computer readable program code thereon. The computer readable code can be read and used by a computer system according to conventional processing.
Drawings
The advantages of various embodiments of the present invention will be more fully understood by those skilled in the art from the following "description of illustrative embodiments" discussed with reference to the drawings summarized immediately below.
FIG. 1 schematically illustrates an example of a system for capturing, storing, and/or sharing images from a writing surface according to an illustrative embodiment of the invention.
Fig. 2 schematically shows a boundary marker according to an illustrative embodiment of the present invention.
FIG. 3 schematically illustrates a plurality of boundary markers forming a virtual boundary in accordance with an illustrative embodiment of the present invention.
FIG. 4 schematically shows a user viewing the note in FIG. 3 through a camera of a computing device, according to an illustrative embodiment of the invention.
FIG. 5 schematically illustrates the system identifying a virtual boundary defined by the markers in FIG. 4.
FIG. 6 schematically shows an image of the note of FIG. 5 after processing in accordance with an illustrative embodiment of the invention.
FIG. 7 schematically shows an updated image of the note from FIG. 6, according to an illustrative embodiment of the invention.
FIG. 8 shows a method of using a marker according to an illustrative embodiment of the invention.
FIG. 9 schematically shows a frame configured to hold a marker according to an illustrative embodiment of the invention.
Detailed Description
In an illustrative embodiment, a set of boundary markers defining a virtual boundary is placed on a writing surface, such as a whiteboard. Inside the virtual boundary may be notes, text, print, pictures, or other objects (e.g., models) that the user may wish to capture in an image. A camera-equipped computing device (e.g., a smartphone) views the writing surface; machine vision identifies the markers and determines the virtual boundary based on the positions of the markers. In some embodiments, an image of the writing surface is captured and processed (e.g., the markers and the portions of the image outside the virtual boundary they define are cropped out, the image is corrected, and/or the image is enhanced). The processed image may be stored in a database or shared with others. Details of illustrative embodiments are discussed below.
FIG. 1 schematically shows an example of a system 100 for capturing, storing, and/or sharing images from a writing surface 12, according to an illustrative embodiment of the invention. Although the above description sets forth the system 100 capturing, storing, and sharing images, the system 100 may be used to capture, store, or share images without performing all three actions. For example, system 100 may be used to stream a shared image from writing surface 12 without storing the image. Alternatively, in some embodiments, the system 100 may be used to capture and store images without the need for simultaneous sharing.
System 100 includes a writing surface 12 having content 10 thereon. The content 10 may be any kind of writing, drawing, doodle, etc. The illustrative embodiment includes markers 18 disposed on writing surface 12. For convenience, throughout this application the writing 10 is referred to as a note 10. It should be understood that the terms "writing" and "note" 10 are not intended to limit the type of writing, drawing, marking, or other content that may be present on writing surface 12; they are used merely to facilitate an easier understanding of how to make and use the illustrative embodiments of the present invention. It should also be understood that the illustrative embodiments are not limited to capturing content 10 that includes alphanumeric writing. Indeed, as discussed further below, content 10 may include various notes, text, print, pictures, and/or other objects (e.g., models), including objects that are not on writing surface 12.
For example, the note 10 may be created during a collaborative work session. FIG. 1 schematically shows a plurality of participants 15 with whom the note 10 is shared. Some of the participants 15 may access the collaborative work session remotely (e.g., via dial-in, the internet, or various messaging systems) and may benefit from viewing the note 10 on an electronic device. For example, as shown, some of the participants 15 may wish to view the note 10 on a television, computer, and/or smartphone. The illustrative embodiments capture, correct, and enhance the image of the note 10. The note 10 may be saved locally on the device 14 or in cloud storage, forwarded to an application 16, and/or broadcast to others. Further, the broadcast may be updated in real time as the note 10 is updated and/or changed. Thus, the illustrative embodiments provide for easy sharing of the note 10 among various participants.
To this end, system 100 includes a camera-equipped computing device 14 that captures one or more images including the markers 18 on writing surface 12. The markers 18 may define a virtual boundary (shown in dashed lines in FIG. 3 below as virtual boundary 22) that encompasses the note 10, or the portions thereof that the user wishes to save and/or share. The device 14 may be coupled, optionally over the internet, to a system cloud service and/or a third-party cloud service 16. In other embodiments, rather than connecting to the cloud service 16 over the internet, the device connects to a cloud service 16 within a local area network, a wide area network, or a virtual network such as a VPN (virtual private network). Additionally or alternatively, some services may run locally on the device 14.
Camera-equipped computing device 14 may include any computing device coupled to a camera, including but not limited to camera-equipped smartphones, camera-equipped tablet computers, desktop computers with USB-connected cameras, and laptop computers coupled to cameras. In addition to these conventional camera-equipped computing devices 14, illustrative embodiments may further include machine vision systems and/or camera-equipped headsets (e.g., a camera-equipped helmet).
FIG. 2 schematically shows a boundary marker 18 according to an illustrative embodiment of the invention. As previously mentioned, one or more boundary markers 18 may be placed on writing surface 12 to define a desired boundary 22 on writing surface 12. To this end, the markers 18 may have a shape configured to correspond to writing surface 12. For example, the markers 18 shown in the figures have a triangular shape with a right angle corresponding to the corners of a conventional rectangular whiteboard. The shape correspondence between marker 18 and writing surface 12 provides for easy positioning of marker 18 along the edge of writing surface 12, although it should be noted that marker 18 need not be positioned along the edge of writing surface 12. It should be understood, however, that the illustrative embodiments may incorporate any shape of marker 18 and are not limited to triangular shapes or shapes corresponding to writing surface 12. Thus, for example, the markers 18 may be circular, square, or other shapes.
The note 10 may be on various types of writing surfaces 12 including, for example, walls, whiteboards, projectors, blackboards, glass panels, or paper. Further, the note 10 may contain a variety of content, such as text, pictures, and/or drawings. In the illustrative embodiment, the boundary markers 18 are positioned on the writing surface in such a way that they form a discontinuous perimeter/boundary (e.g., the four corners of a virtual boundary) around the note 10 that the user wants to capture in the image. It should be understood that the indicia 18 may be positioned to form a virtual boundary around the entire writing surface 12, a portion of the writing surface 12 (e.g., only the note 10 on the writing surface 12), or only a portion of the note 10. Thus, the user may select the portion of writing surface 12 to be captured by forming virtual boundary 22 using markers 18. Even if, for example, the camera captures the entire writing surface 12 in multiple sequential images, the user may reposition the marker 18 to focus on different portions of the note 10.
The markers 18 may be formed from a variety of materials, including one or more of rubber, silicone, and/or polypropylene. Additionally, at least one side of each marker 18 may have a writing-surface coupling portion, such as an adhesive (e.g., permanent, electrostatic-adhesion, and other semi-permanent adhesives), an electrostatic adhesion layer, and/or a microsuction adhesion layer, to provide secure attachment to the writing surface 12. Preferably, the coupling portion provides sufficient coupling so that the marker 18 does not fall off of the writing surface 12 under its own weight (e.g., when the writing surface 12 is in a vertical orientation). Additionally, the illustrative embodiments may form the markers 18 from the materials previously mentioned, or from other materials, such that the markers 18 are reusable and water washable. In some embodiments, markers 18 may be about 0.5 mm to about 3 mm (e.g., 1/32 inch) thick, maintaining portability while providing easy grip and easy removal from writing surface 12. In addition, the markers 18 may be provided in kits (e.g., in a pack of four) to facilitate detection of a boundary 22 (e.g., a rectangular boundary), as shown in FIG. 3.
The inventors surprisingly discovered that fluorescent-colored markers 18 are more easily and reliably identified by machine vision, including by the camera-equipped computing device 14. It should be noted that the inventors do not know the exact reason why fluorescent markers 18 and/or fluorescent colors are more easily identified by machine vision. However, the inventors suspect, but have not confirmed, that the mechanism of action is the emission of visible light (e.g., a "glow") in response to the absorption of radiation outside the visible spectrum (e.g., ultraviolet light). As an additional advantage, in illustrative embodiments, fluorescent colors are more easily detected under low-light conditions because they re-emit light absorbed from the invisible spectrum. Some illustrative embodiments use markers 18 having colors that are uncommon in office environments (e.g., fluorescent orange markers 18 that do not "compete" with other colors in a typical whiteboard environment).
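The color-based detection described above can be sketched in code. The following is a minimal, hypothetical example (not taken from the patent) that thresholds an RGB image for a bright fluorescent-orange hue and, assuming one marker sits near each corner of the frame, reports a centroid per image quadrant. A production system would more likely threshold in HSV space (e.g., with OpenCV) for robustness to lighting; the function name and threshold values here are illustrative assumptions only:

```python
import numpy as np

def detect_corner_markers(rgb, r_min=200, g_lo=60, g_hi=180, b_max=100):
    """Return approximate (row, col) centroids of four fluorescent-orange
    corner markers, one per image quadrant.

    `rgb` is an HxWx3 uint8 array. The channel thresholds are illustrative
    values for a bright orange, not values specified by the patent.
    """
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    mask = (r >= r_min) & (g >= g_lo) & (g <= g_hi) & (b <= b_max)

    h, w = mask.shape
    centroids = []
    # One marker is expected near each corner, so search each quadrant:
    # order is top-left, top-right, bottom-left, bottom-right.
    for rows in (slice(0, h // 2), slice(h // 2, h)):
        for cols in (slice(0, w // 2), slice(w // 2, w)):
            ys, xs = np.nonzero(mask[rows, cols])
            if len(ys) == 0:
                centroids.append(None)  # marker not found in this quadrant
                continue
            centroids.append((ys.mean() + rows.start, xs.mean() + cols.start))
    return centroids
```

The quadrant assumption holds when the markers frame the region of interest, as in FIG. 3; arbitrary marker layouts would need a connected-component pass instead.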
FIG. 3 schematically shows a plurality of boundary markers 18 forming a virtual boundary 22, according to an illustrative embodiment of the invention. As shown here, the markers 18 are offset from the corners of the whiteboard 12, but in other embodiments some or all of the markers 18 may be placed against the corners of the whiteboard 12. As previously described, the placement of the markers 18 defines a virtual boundary 22. It should be understood that the virtual boundary 22 shown in the figures does not actually exist on the writing surface; instead, the virtual boundary 22 is created by the system 100 as a result of identifying the locations of the markers 18. Thus, machine vision detects the markers 18, and the system determines the virtual boundary 22 formed by them. In some embodiments, determining the virtual boundary 22 may include correlating the positions of the markers 18 with an expected image shape. Although not explicitly described herein, those skilled in the art understand that machine vision may detect the markers 18 while some separate logic determines the virtual boundary 22 (e.g., using a cloud-based server).
The illustrative embodiments may use various portions of the markers 18 to determine the virtual boundary 22. For example, as shown, the virtual boundary 22 may be defined by the outer edges of the markers 18. However, in some embodiments, the inner edges of the markers 18 may be used to identify the virtual boundary 22. For example, the midpoint of the hypotenuse of each triangular marker 18 may be used to define the virtual boundary 22. Alternatively, the midpoint of each marker 18 may be used. It can be seen that these are merely examples, and that there are many ways to define the virtual boundary 22 using the markers 18. Furthermore, in some embodiments, the markers 18 may not align perfectly into a desired shape, such as a rectangle. Illustrative embodiments may therefore compensate for the offset by using different portions of the markers 18, for example by defining a "best fit" virtual boundary 22 with respect to the marker placement, to correspond to the intended image shape. Some embodiments may create a "best fit" boundary that does not pass through all, or any, of the markers 18; for example, the "best fit" virtual boundary 22 may pass through three of the four markers 18. Alternatively, the boundary 22 may be defined at a distance inside the markers 18. Those skilled in the art may use a variety of methods to define the boundary 22 using the markers 18 while remaining within the scope of the illustrative embodiments of the present invention.
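As one concrete way to relate detected marker positions to an expected rectangular image shape, the four detected points can first be put into a consistent corner order. The sketch below uses a common sum/difference heuristic; it is an assumption for illustration, not a method prescribed by the patent:

```python
import numpy as np

def order_quad(points):
    """Order four (x, y) marker points as top-left, top-right,
    bottom-right, bottom-left.

    In image coordinates, the top-left point has the smallest x+y sum,
    the bottom-right the largest; the top-right has the smallest y-x
    difference, the bottom-left the largest. This assumes the four
    markers roughly form a convex quadrilateral, as when placed near
    the corners of a whiteboard.
    """
    pts = np.asarray(points, dtype=float)
    s = pts.sum(axis=1)                # x + y per point
    d = np.diff(pts, axis=1).ravel()   # y - x per point
    tl = pts[np.argmin(s)]
    br = pts[np.argmax(s)]
    tr = pts[np.argmin(d)]
    bl = pts[np.argmax(d)]
    return np.array([tl, tr, br, bl])
```

Once ordered, the points can be matched to the corners of the intended rectangle, or a least-squares "best fit" rectangle can be fitted when the markers are not perfectly aligned.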
FIG. 4 schematically shows a user viewing the note 10 of FIG. 3 through a camera of the computing device 14, according to an illustrative embodiment of the invention. In this example, markers 18 are positioned on writing surface 12 to correspond generally to the corners of a rectangle. However, in the perspective of the image captured by the camera in FIG. 4, the markers 18 appear to be positioned at the corners of a quadrilateral, depending on the relative angle between the camera of device 14 and writing surface 12. The system 100 identifies the markers 18, creates a virtual boundary 22 based on their locations, and corrects the image to the appropriate shape (e.g., based on the known size and scale of the markers 18).
FIG. 5 schematically illustrates the system 100 identifying the virtual boundary 22 defined by the markers 18 in FIG. 4. As shown, the system 100 uses the outer edges 28 of the markers 18 to define the virtual boundary 22. During processing, the system 100 applies a computer vision transformation to expand the portion of the image within the boundary (e.g., a quadrilateral) to its appropriate shape (e.g., a rectangle) and removes the background of the image so that it is cropped to, or approximately to, the virtual boundary 22. This identification and correction process is similar to that described for page boundaries in U.S. Patent No. 10,127,468, which is incorporated herein by reference in its entirety.
FIG. 6 schematically shows the image of the note 10 of FIG. 5 after processing according to an illustrative embodiment of the invention. Specifically, the image in FIG. 6 has been corrected and enhanced. It can be seen that although the image was taken from an angle, the corrected image appears to have been taken from directly in front of the note 10. The system 100 applies computer vision transformations to expand the region within the boundary into a predefined shape. For example, the quadrilateral formed by the four markers 18 may be expanded into a rectangle.
Additionally, in some embodiments, the system 100 may remove the background of the image and enhance the image. In some embodiments, the system 100 crops the markers 18 themselves and all content outside the boundary 22 from the image, such that the image is cropped to the virtual boundary 22. The image may also be enhanced using conventional image enhancement and filtering techniques, such as noise filtering, sharpness enhancement, contrast enhancement, color saturation enhancement, and the like. In some embodiments, optical character recognition (OCR) is performed on the scanned note. Further, notes may be automatically assigned titles based on the words identified in the notes during OCR. After the image is captured, the note 10 may be stored locally on the device 14 and/or may be broadcast to other people (e.g., participants 15) as well as to various applications and programs 16.
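Contrast enhancement of the kind mentioned above can be as simple as a percentile-based linear stretch. This is one conventional technique, shown here as an illustrative sketch rather than the system's actual pipeline:

```python
import numpy as np

def stretch_contrast(img, lo_pct=2, hi_pct=98):
    """Linearly rescale intensities so the chosen low/high percentiles
    map to 0 and 255, clipping anything outside that range."""
    lo, hi = np.percentile(img, [lo_pct, hi_pct])
    out = (img.astype(float) - lo) * 255.0 / max(hi - lo, 1e-6)
    return np.clip(out, 0, 255).astype(np.uint8)

# A washed-out grayscale scan: values crowded between 100 and 180.
scan = np.linspace(100, 180, 64).reshape(8, 8).astype(np.uint8)
enhanced = stretch_contrast(scan)
```

After the stretch, the faint whiteboard marks span the full intensity range, which makes handwriting easier to read and to OCR.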
In some embodiments, system 100 allows a user to stream images and/or video of writing surface 12, or of the portion of the writing surface within the defined virtual boundary 22, in real time. The system generates a unique URL and shares it with other users (e.g., via a text message link, email, etc.). The dedicated real-time page may be updated each time writing surface 12 and/or virtual boundary 22 is scanned. Additionally or alternatively, illustrative embodiments may have an auto-scan mode in which the camera faces the writing surface and/or virtual boundary 22 and automatically scans at predetermined intervals (e.g., every 5 seconds, every minute, every 5 minutes, etc.). The auto-scan interval may be adjusted by the user.
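The auto-scan mode can be sketched as a timed capture-and-publish loop. Here `capture_fn` and `publish_fn` are hypothetical stand-ins for the camera scan and the broadcast/update step; the real system's interfaces are not specified in the text:

```python
import time

def auto_scan(capture_fn, publish_fn, interval_s=5.0, max_scans=3):
    """Capture the writing surface at a fixed interval and publish
    each result, e.g., to a dedicated real-time page."""
    for _ in range(max_scans):
        publish_fn(capture_fn())   # scan, correct, and broadcast one frame
        time.sleep(interval_s)     # user-adjustable auto-scan interval
```

A production version would run until cancelled rather than for a fixed `max_scans`, and would likely skip publishing when the scanned content has not changed.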
FIG. 7 schematically shows an updated image 10A of the note 10 from FIG. 6, according to an illustrative embodiment of the invention. As can be seen, new notes 30 have been added to the note 10 from FIG. 6. For example, the user may have drawn these new notes 30 on writing surface 12. After the new notes 30 are added to writing surface 12, the camera of device 14 again views writing surface 12 and/or markers 18 and generates a second image using the previously described processing. This second image may be broadcast to participants 15 in real time. Thus, the illustrative embodiments may provide broadcast updates of writing surface 12. This process may be repeated many times. Additionally, these notes 10A may be erased, and an entirely new set of notes may be created and broadcast using the methods described herein. Illustrative embodiments may save the various images and allow a user to keep a record of the various images scanned by system 100 for later viewing.
FIG. 8 shows a method of using the markers 18 according to an illustrative embodiment of the invention. The method begins at step 802, which positions the boundary markers 18. As previously described, the boundary markers 18 may be positioned in a variety of ways to define various types of boundaries 22. For example, the markers 18 may be positioned at the four corners of a rectangle, at three points forming a triangle, and so forth. In the illustrative embodiment, the system 100 has logic to determine the shape formed by the markers 18, or the closest such shape, and to make corrections based on the determined shape. However, because conventional writing surfaces 12 are rectangular, it is expected that many usage scenarios will be based on rectangular shapes. Accordingly, some embodiments may ignore configurations of markers 18 that are not positioned at or near the vertices of a defined shape (e.g., randomly scattered markers), i.e., not detect a virtual boundary 22 for them. Thus, the illustrative embodiment may provide four markers 18 in a kit to easily define a rectangular virtual boundary 22.
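One way such logic might decide whether four detected markers form (roughly) a rectangle, versus being randomly scattered, is to test whether the two diagonals are equal in length and bisect each other. This is an illustrative heuristic, not a method specified by the patent, and it assumes the points are already in corner order (top-left, top-right, bottom-right, bottom-left):

```python
import numpy as np

def is_roughly_rectangular(pts, tol=0.1):
    """Four ordered corner points form (roughly) a rectangle iff their
    two diagonals have equal length and share a midpoint, within tol."""
    p = [np.asarray(q, dtype=float) for q in pts]
    d1, d2 = p[2] - p[0], p[3] - p[1]            # diagonals p0->p2, p1->p3
    m1, m2 = (p[0] + p[2]) / 2, (p[1] + p[3]) / 2  # diagonal midpoints
    scale = max(np.linalg.norm(d1), np.linalg.norm(d2))
    return (abs(np.linalg.norm(d1) - np.linalg.norm(d2)) < tol * scale
            and np.linalg.norm(m1 - m2) < tol * scale)
```

Configurations that fail the test could simply be ignored, as the embodiment above suggests, rather than producing a spurious virtual boundary 22.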
The process proceeds to step 804, which identifies the boundary markers 18 using computer vision. The computer vision processing may run on any of the devices 14 previously described. For example, when using fluorescent triangular markers 18, the computer vision searches for four bright orange triangles in the on-screen image. The system 100 may employ a color threshold around the target color to find the on-screen markers 18. For example, the system may look for RGB colors within a certain hue/saturation range to ensure that it detects markers in a variety of environments (e.g., bright sunlight and dim lighting). In some embodiments, dynamic values may also be employed: the system may look for the shape of the markers 18 (e.g., find four triangular markers 18 of the same color).
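Thresholding "within a certain hue/saturation range" is commonly done in an HSV-like color space. The sketch below is a hypothetical numpy-only illustration; the hue band, saturation, and brightness cutoffs are made-up values for a fluorescent orange, not thresholds taken from the patent:

```python
import numpy as np

def find_orange_mask(rgb, hue_range=(10, 40), min_sat=0.5, min_val=0.5):
    """Flag pixels whose hue falls in an orange band with enough
    saturation and brightness, tolerating lighting variation."""
    r, g, b = (rgb[..., i].astype(float) / 255.0 for i in range(3))
    mx = np.maximum(np.maximum(r, g), b)
    mn = np.minimum(np.minimum(r, g), b)
    delta = mx - mn
    # Hue in degrees; computed only where red is the max channel,
    # which is the sector that contains the orange band.
    hue = np.zeros_like(mx)
    rmax = (delta > 1e-6) & (mx == r)
    hue[rmax] = 60.0 * (((g - b)[rmax] / delta[rmax]) % 6)
    sat = np.where(mx > 0, delta / np.maximum(mx, 1e-6), 0)
    return ((hue >= hue_range[0]) & (hue <= hue_range[1])
            & (sat >= min_sat) & (mx >= min_val))
```

A real implementation would follow the mask with connected-component or contour analysis to check that exactly four triangle-shaped blobs were found.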
Processing then proceeds to step 806, which determines the boundary 22 based on the positions of the markers 18. In the illustrative embodiment, the markers are fluorescent to make them easy to distinguish from the writing surface 12 and the background in the image. As previously described, it is believed that the glow of the fluorescent color helps the markers stand out more prominently to computer vision and makes them less likely to be confused with shadows, characters, or drawn shapes (e.g., a drawn triangle). However, other embodiments may use non-fluorescent colors.
Processing then proceeds to step 808, which processes the image. After all of the markers 18 (e.g., four triangles) are found, the system 100 can take a snapshot and correct the image (e.g., by using the known shape and scale of the markers 18). The system 100 may also crop the image to the boundary 22 defined by the locations of the markers 18. It should be appreciated that the system 100 may also correct images taken from steep angles, because the system can determine the angle (e.g., by detecting that more distant markers 18 appear smaller and more distorted relative to nearer markers 18). For example, when the markers 18 are located at the corners of a rectangle in the physical world, the virtual boundary 22 may appear on the screen as a trapezoid. The system may use the known shape of the markers 18 and the known shape of the writing surface 12 to stretch the image back into a rectangle.
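Before the image can be stretched back into a rectangle, the four detected marker positions must be put into a consistent corner order so that each one maps to the right rectangle corner. A common heuristic (assumed here for illustration; the patent does not describe this step) uses the coordinate sums and differences in image coordinates, where y grows downward:

```python
import numpy as np

def order_corners(pts):
    """Sort four detected marker centers into a consistent
    top-left, top-right, bottom-right, bottom-left order."""
    pts = np.asarray(pts, dtype=float)
    s = pts.sum(axis=1)        # x+y: smallest at top-left, largest at bottom-right
    d = pts[:, 0] - pts[:, 1]  # x-y: largest at top-right, smallest at bottom-left
    tl, br = pts[np.argmin(s)], pts[np.argmax(s)]
    tr, bl = pts[np.argmax(d)], pts[np.argmin(d)]
    return np.array([tl, tr, br, bl])
```

The ordered corners can then be paired with the rectangle's corners when fitting the perspective correction.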
In some embodiments, 3D data may be used to enhance the correction algorithm. For example, a smartphone may detect 3D shape (e.g., similar to the facial recognition used to unlock the iPhone 10). The true 3D data can be used to more accurately determine the positions of the markers 18 and produce a more accurate corrected image. Indeed, some embodiments may accept any arrangement of the markers 18, even if the markers are not arranged in a predetermined shape (e.g., randomly scattered). The corrected image may then take on the shape defined by the positions of the markers 18. In a further step, the corrected image may optionally be cropped to a preferred shape (e.g., a rectangle).
In some embodiments, after cropping and correcting the image, a final layer of image processing may be applied. Because the markers 18 have a known color, they can be used to correct color distortion, and background and foreground detection techniques can be used to enhance the image. For example, if the color of the markers 18 in the image is darker than the expected color value, the image may be brightened. Additionally or alternatively, if the color of the markers is off (e.g., more yellow than expected), the image's color balance may be shifted away from yellow.
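Using the markers as a color reference can be as simple as a per-channel gain: compare the marker's average color in the photo against its known color and scale the whole image accordingly. This is a minimal sketch of that idea; `expected_rgb` stands in for the marker's known color, and a real pipeline would use a more robust color model:

```python
import numpy as np

def white_balance_from_marker(img, marker_mask, expected_rgb):
    """Scale each channel so the marker's average color in the image
    matches its known reference color, then clip back to 8-bit range."""
    observed = img[marker_mask].reshape(-1, 3).mean(axis=0)
    gain = np.asarray(expected_rgb, dtype=float) / np.maximum(observed, 1e-6)
    return np.clip(img.astype(float) * gain, 0, 255).astype(np.uint8)
```

If the marker photographed darker than expected, the gains exceed 1 and the whole image is brightened; a yellow cast raises the blue-channel gain and shifts the balance back.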
While the illustrative embodiments describe capturing an image, it should be understood that this process includes scanning or viewing the markers without saving an image. Thus, illustrative embodiments may initiate the processes described herein merely by viewing and identifying the markers 18, without requiring an active capture action (e.g., pressing a button and/or saving an image). However, some other embodiments may require the user to actively capture and/or save the image.
This process then proceeds to step 810, which stores and/or shares the image. The images may be stored locally or on a cloud-based drive. In addition, the images may be broadcast in real time and on a continuous basis as previously described. The process then moves to step 812 which asks whether there are more images to be taken. If there are more images to be taken, the process returns to step 802. This may be the case, for example, if the note 10 has been modified or changed, or if the user wishes to update the broadcast. If there are no more images to be taken, the process ends.
Some embodiments may operate without any markers 18. For example, the system 100 may identify existing boundaries, such as the edges of a blackboard, whiteboard, and/or projector screen, and use them in place of the virtual boundaries 22 defined by the markers 18 discussed above. Thus, the user can save an image of the note 10 on the blackboard, whiteboard, and/or projector screen in real time. Notes captured without markers may also be broadcast, corrected, and saved according to the methods described herein.
While the illustrative embodiments describe the use of the markers 18 with reference to writing surface 12, it should be understood that the illustrative embodiments may be used with machine vision generally. The inventors have surprisingly found that fluorescent colors, which are more easily identified by machine vision, can be used in any field where machine vision identification is desired. Thus, not all illustrative embodiments are intended to be limited to writing surface 12 applications.
Indeed, the illustrative embodiments may be used with a variety of surfaces 12 in place of the writing surface 12 previously described. For example, in some embodiments, writing surface 12 may be a non-traditional writing surface 12, such as a roadway (e.g., where a child draws with colored chalk), and the markers 18 may be placed on the roadway.
Additionally or alternatively, some embodiments may capture, store, and/or share a background (i.e., in place of writing surface 12). For example, the markers 18 may be held up against the sky as a background (from the perspective of the camera of device 14). To this end, FIG. 9 schematically illustrates a frame 32 (e.g., formed of metal or plastic) configured to hold the markers 18, according to an illustrative embodiment of the invention. The frame 32 may have a predetermined shape (e.g., rectangular), and the markers 18 are coupled to the frame in predetermined orientations and positions. The markers 18 may be placed at the vertices of the frame 32 (e.g., at the four corners of a rectangular frame). Thus, the frame 32 provides an easy and convenient way to predefine the shape of the background image to be shared within the frame 32. Illustrative embodiments may additionally process or operate on the background in a manner similar to writing surface 12 (e.g., by identifying the markers 18, detecting the boundary 22, and correcting, enhancing, storing, and/or sharing the background image).
In some embodiments, the frame 32 may contain a predefined background (e.g., as opposed to the open frame described above that allows a user to view notes 10 on the whiteboard 12), such as a location background (e.g., a famous landmark, such as the Eiffel Tower in Paris or the Colosseum in Rome). In some other embodiments, the predefined background may comprise various other backgrounds, such as sports formations (e.g., football or basketball plays from a playbook). Thus, a coach can broadcast tactics while drawing on the background.
Some embodiments may have a frame 32 with a transparent annotation surface 34 configured to overlay the background and/or writing surface 12. The transparent surface 34 may be annotated 36 by a user (e.g., using a pen or other writing instrument). Thus, the illustrative embodiments enable the system 100 to operate as a telestrator over some background or writing surface 12. Accordingly, the user may draw/annotate 36 over moving video or a still image. Further, in some embodiments, system 100 may include a receiving headset (e.g., a helmet modified to include a video or image display screen). The system 100 may further broadcast the annotated image to the receiving headset.
It should be noted that this process may be a simplified version of a more complex process using the markers 18. Thus, the process may have additional steps not discussed. Further, some steps may be optional, performed in a different order, or performed in parallel with one another. For example, step 812 may occur before either of steps 808 or 810. Accordingly, the discussion of this process is illustrative and not intended to limit various embodiments of the invention. The flow diagram is one view of the logic of system 100; logical variants implementing the above algorithmic approach do not change the underlying operation of the system. Additionally, it should be understood that the processing described above, while discussed in terms of images, may also be applied to video.
It should be noted that logic flows may be described herein to illustrate various aspects of the invention, and should not be construed as limiting the invention to any particular logic flow or logic implementation. The described logic may be partitioned into different logic blocks (e.g., programs, modules, functions, or subroutines) without changing the overall results or otherwise departing from the true scope of the invention. Often, logic elements may be added, modified, omitted, performed in a different order, or implemented using different logic constructs (e.g., logic gates, looping primitives, conditional logic, and other logic constructs) without changing the overall results or otherwise departing from the true scope of the present invention.
The invention may be embodied in many different forms, including, but not limited to, computer program logic for use with a processor (e.g., a microprocessor, microcontroller, digital signal processor, or general purpose computer), programmable logic for use with a programmable logic device (e.g., a Field Programmable Gate Array (FPGA) or other PLD), discrete components, integrated circuitry (e.g., an Application Specific Integrated Circuit (ASIC)), or any other means, including any combination thereof. Computer program logic implementing some or all of the described functionality is typically embodied as a set of computer program instructions that is stored in a computer readable medium, converted to computer executable form, and executed by a microprocessor under the control of an operating system. Hardware-based logic implementing some or all of the described functionality may be implemented using one or more suitably configured FPGAs.
Computer program logic implementing all or part of the functionality previously described herein may be embodied in various forms, including, but in no way limited to, a source code form, a computer executable form, and various intermediate forms (e.g., forms generated by an assembler, compiler, linker, or locator). Source code may comprise a series of computer program instructions implemented in any of various programming languages (e.g., object code, assembly language, or a high-level language such as Fortran, C++, JAVA, or HTML) for use with various operating systems or operating environments. The source code may define and use various data structures and communication messages. The source code may be in computer-executable form (e.g., via an interpreter), or the source code may be converted to computer-executable form (e.g., via a translator, assembler, or compiler).
Computer program logic implementing all or part of the functionality previously described herein may execute at different times on a single processor (e.g., concurrently) or may execute simultaneously or at different times on multiple processors and may run under a single operating system process/thread or under different operating system processes/threads. Thus, the term "computer process" generally refers to the execution of a set of computer program instructions, whether different computer processes execute on the same or different processors, and whether different computer processes run under the same operating system process/thread or under different operating system processes/threads.
The computer program may be fixed in any form (e.g., source code form, computer executable form, or intermediate form) either permanently or temporarily in a tangible storage medium such as a semiconductor memory device (e.g., RAM, ROM, PROM, EEPROM, or flash programmable RAM), a magnetic memory device (e.g., a floppy disk or fixed disk), an optical memory device (e.g., a CD-ROM), a PC card (e.g., a PCMCIA card), or other memory device. The computer program may be fixed in any form in a signal that can be transmitted to a computer using any of a variety of communication technologies, including, but not limited to, analog technologies, digital technologies, optical technologies, wireless technologies (e.g., bluetooth), networking technologies, and internetworking technologies. The computer program may be distributed in any form, such as in a removable storage medium with printed or electronic documentation (e.g., shrink wrapped software), preloaded with a computer system (e.g., on system ROM or fixed disk), or distributed from a server or electronic bulletin board over the communication system (e.g., the internet or world wide web).
Hardware logic implementing all or part of the functionality previously described herein, including programmable logic used with programmable logic devices, may be designed using conventional manual methods, or may be electronically designed, captured, simulated, or recorded using various tools, such as computer-aided design (CAD), a hardware description language (e.g., VHDL or AHDL), or a PLD programming language (e.g., PALASM, ABEL, or CUPL).
Programmable logic may be fixed, permanently or temporarily, in a tangible storage medium such as a semiconductor memory device (e.g., RAM, ROM, PROM, EEPROM or flash programmable RAM), a magnetic memory device (e.g., a diskette or fixed disk), an optical memory device (e.g., a CD-ROM), or other memory device. Programmable logic may be fixed in a signal that may be transmitted to a computer using any of a variety of communication technologies, including, but in no way limited to, analog technologies, digital technologies, optical technologies, wireless technologies (e.g., bluetooth), networking technologies, and internetworking technologies. The programmable logic may be distributed as a removable storage medium with printed or electronic documentation (e.g., shrink wrapped software), preloaded with a computer system (e.g., on system ROM or fixed disk), or distributed from a server or electronic bulletin board over the communication system (e.g., the internet or world wide web). Of course, some embodiments of the invention may be implemented as a combination of both software (e.g., a computer program product) and hardware. Still other embodiments of the invention are implemented as entirely hardware, or entirely software.
It should be noted that embodiments of the invention may employ conventional components, such as a conventional computer (e.g., an off-the-shelf PC, mainframe, or microprocessor), a conventional programmable logic device (e.g., an off-the-shelf FPGA or PLD), or conventional hardware components (e.g., off-the-shelf ASICs or discrete hardware components), which, when programmed or configured to perform the non-conventional methods described herein, result in non-conventional devices or systems. Thus, nothing about the invention described herein is conventional, because even when embodiments are implemented using conventional components, the resulting devices and systems are necessarily non-conventional, in that conventional components that are not specially programmed or configured do not inherently perform the described non-conventional methods.
The activities described and claimed herein provide technological solutions to problems that arise squarely in the technical field. These solutions, taken as a whole, are not well-understood, routine, or conventional, and in any case provide practical applications that transform and improve computers and computer systems.
While the foregoing discussion discloses various exemplary embodiments of the invention, it will be apparent to those skilled in the art that various modifications can be made to achieve some of the advantages of the invention without departing from the true scope of the invention. Any reference to "the invention" is intended to refer to exemplary embodiments of the invention and should not be construed as referring to all embodiments of the invention unless the context requires otherwise. The described embodiments are to be considered in all respects only as illustrative and not restrictive.
The embodiments of the invention described above are intended to be exemplary only; many variations and modifications will be apparent to those of ordinary skill in the art. Such changes and modifications are intended to fall within the scope of the present invention as defined by any one of the appended claims.

Claims (43)

1. A system for capturing, organizing, and storing handwritten notes, the system comprising:
a plurality of boundary markers configured to be positioned on a writing surface, the plurality of boundary markers having a fluorescent color;
a tangible, non-transitory computer-readable medium encoded with instructions that, when run on a camera-equipped computing device, cause the camera-equipped computing device to perform processes comprising:
capturing an image of the writing surface with the fluorescent color boundary markers thereon,
detecting a fluorescent color boundary marker in the captured image;
identifying a virtual boundary in the captured image based on the position of the fluorescent color boundary marker; and
unfolding a portion of the captured image within the virtual boundary to produce an unfolded image.
2. The system of claim 1, wherein the boundary marker is fluorescent orange.
3. The system of claim 1, wherein the performed processing further comprises broadcasting the unfolded image and updating the broadcast when a new image is captured.
4. The system of claim 1, further comprising saving the expanded image in an image store.
5. The system of claim 1, wherein the processing further comprises cropping the boundary markers from the image.
6. The system of claim 1, wherein the processing further comprises:
removing a background from the captured image,
cropping the captured image using the virtual boundary in the image, an
Enhancing the image.
7. The system of claim 1, wherein the processing performed by the computing device is performed in response to taking a picture of the writing surface.
8. The system of claim 1, wherein the writing surface is a whiteboard or a wall.
9. The system of claim 1, wherein the shape of the marker generally resembles a triangle.
10. The system of claim 1, wherein the indicia is formed of silicone.
11. The system of claim 1, wherein the marker is coupled to the writing surface using an adhesive and/or a microsuction device.
12. The system of claim 1, wherein the indicia has a thickness between about 0.5 millimeters and about 3 millimeters.
13. A method for capturing and storing handwritten notes, the method comprising:
placing a plurality of boundary markers on a writing surface such that the boundary markers define a virtual boundary that encompasses the handwritten note;
capturing a writing surface image by scanning the writing surface using an electronic device;
identifying a location of the marker in the writing surface image;
determining the boundary based on the position of the marker in the writing surface image;
expanding a portion of the captured image within the virtual boundary to produce an expanded image; and
cropping the unfolded image based on the location of the detected boundary.
14. The method defined in claim 13 wherein placing the plurality of boundary markers comprises positioning the boundary markers at locations that generally define corners of a rectangular boundary.
15. The method as defined in claim 13, wherein the boundary marker comprises a fluorescent color.
16. The method defined in claim 15 wherein identifying the location of the marker comprises identifying the fluorescent color in the image.
17. The method defined in claim 13 further comprising storing the unfolded image.
18. The method defined in claim 13 further comprising broadcasting the unfolded image.
19. The method defined in claim 18 further comprising capturing a second writing surface image by scanning the writing surface using the electronic device;
identifying the location of the marker in the second writing surface image;
determining the boundary based on the position of the marker in the second writing surface image;
expanding the second writing surface image in accordance with detecting the boundary in the captured second writing surface image to produce a second expanded image;
cropping the second unfolded image based on the location of the detected boundary.
20. The method defined in claim 19 further comprising updating the broadcast of the expanded image to broadcast the second expanded image.
21. The method as defined in claim 13, further comprising:
removing a background from the writing surface image; and
enhancing the writing surface image.
22. A marking for detection by machine vision, the marking comprising:
a first surface having a fluorescent color, the first surface configured to be viewed by machine vision;
a second surface having a surface coupling portion configured to couple to a writing surface such that the marker remains coupled to the writing surface when the writing surface is in a vertical orientation.
23. The marking as defined in claim 22, wherein the fluorescent color comprises fluorescent orange.
24. The marking as defined in claim 22, wherein the marking has a thickness of between about 0.5 millimeters and about 3 millimeters.
25. The marking as defined in claim 22, wherein the marking is generally triangular in shape.
26. The marking as defined in claim 22, wherein the shape of the marking corresponds to at least a portion of a shape of an edge of the writing surface.
27. The marking as defined in claim 22 wherein the marking is formed of a material that does not retain a visible fold pattern.
28. The marking as defined in claim 22 wherein a majority of the thickness of the marking is formed of silicone.
29. The marking as defined in claim 22 wherein the marking is water washable and reusable.
30. The marking as defined in claim 22 wherein the surface coupling portion is a microsuction layer.
31. The marking as defined in claim 22, wherein the writing surface is a whiteboard.
32. A system for capturing, organizing, and storing handwritten notes, the system comprising:
a plurality of boundary markers configured to be positioned on a writing surface;
a tangible, non-transitory computer-readable medium encoded with instructions that, when run on a camera-equipped computing device, cause the camera-equipped computing device to perform processes comprising:
capturing an image of the writing surface with the boundary markers thereon,
detecting the boundary marker in the captured image;
identifying a virtual boundary in the captured image based on the location of the boundary marker; and
unfolding a portion of the captured image within the virtual boundary to produce an unfolded image.
33. A system for sharing handwritten notes, the system comprising:
a computer device coupled with a camera, the camera configured to view a background having content;
a plurality of boundary markers having a fluorescent color and configured to be positioned between the background and the camera so as to define a virtual boundary around a portion of the background containing the content,
the computer device is configured to: (1) detecting a boundary marker of a fluorescent color, (2) determining the virtual boundary, and (3) correcting the portion of the background according to a shape of the virtual boundary to produce a corrected image of the portion of the background, the computer device further configured to share the corrected image of the portion of the background.
34. The system defined in claim 33 further comprising a frame defining a perimeter on which the plurality of boundary markers are positioned, and an image portion within the perimeter.
35. The system as defined by claim 34 wherein the frame has a rectangular shape and the boundary markers are positioned at corners of the frame.
36. The system defined by claim 34 wherein the image portion has a predefined background.
37. The system defined by claim 34 wherein the image portion is open and has no predefined background.
38. The system defined in claim 34 wherein the frame includes a transparent annotation surface over the image portion.
39. The system defined in claim 38 wherein the annotation surface is configured to be written using thermochromic ink.
40. A kit of parts, comprising:
a plurality of boundary markers having a top surface opposite a bottom surface, the top surface having a fluorescent color, the bottom surface configured to adhere to a writing surface.
41. The kit-of-parts as defined in claim 40, wherein said boundary marker is shaped as a triangle.
42. The kit-of-parts as defined in claim 40, wherein four boundary markers are provided in the kit.
43. The kit defined in claim 40 wherein the writing surface is a whiteboard.
CN202080027842.XA 2019-04-12 2020-04-10 Writing surface boundary marking for computer vision Pending CN114026598A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US201962833321P 2019-04-12 2019-04-12
US62/833,321 2019-04-12
PCT/US2020/027687 WO2020210637A1 (en) 2019-04-12 2020-04-10 Writing surface boundary markers for computer vision

Publications (1)

Publication Number Publication Date
CN114026598A true CN114026598A (en) 2022-02-08

Family

ID=72750882

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202080027842.XA Pending CN114026598A (en) 2019-04-12 2020-04-10 Writing surface boundary marking for computer vision

Country Status (7)

Country Link
EP (1) EP3953789A4 (en)
JP (1) JP2022527413A (en)
KR (1) KR20220002372A (en)
CN (1) CN114026598A (en)
AU (1) AU2020271104A1 (en)
CA (1) CA3136438A1 (en)
WO (1) WO2020210637A1 (en)

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6625299B1 (en) * 1998-04-08 2003-09-23 Jeffrey Meisner Augmented reality technology
JP4142871B2 (en) * 2001-12-27 2008-09-03 株式会社シード Mark transfer tool, mark transfer tape, and method of manufacturing mark transfer tape
US9599561B2 (en) * 2011-10-13 2017-03-21 Affymetrix, Inc. Methods, systems and apparatuses for testing and calibrating fluorescent scanners
KR102234688B1 (en) 2013-04-02 2021-03-31 쓰리엠 이노베이티브 프로퍼티즈 컴파니 Systems and methods for managing notes
US20150125846A1 (en) * 2013-11-05 2015-05-07 Michael Langford Rollable and Transportable Dry Erase Board
US20160339337A1 (en) * 2015-05-21 2016-11-24 Castar, Inc. Retroreflective surface with integrated fiducial markers for an augmented reality system
US10127468B1 (en) 2015-07-17 2018-11-13 Rocket Innovations, Inc. System and method for capturing, organizing, and storing handwritten notes
JP6661407B2 (en) * 2016-02-29 2020-03-11 株式会社エンプラス Marker
CN110072704B (en) * 2016-11-13 2021-08-27 火箭创新股份有限公司 Moisture-erasable note-taking system
US10284815B2 (en) * 2017-07-26 2019-05-07 Blue Jeans Network, Inc. System and methods for physical whiteboard collaboration in a video conference
US11179964B2 (en) * 2017-09-21 2021-11-23 Comsero, Inc. Micro-suction reusable and repositionable writing surfaces

Also Published As

Publication number Publication date
EP3953789A1 (en) 2022-02-16
AU2020271104A1 (en) 2021-12-02
JP2022527413A (en) 2022-06-01
CA3136438A1 (en) 2020-10-15
KR20220002372A (en) 2022-01-06
WO2020210637A1 (en) 2020-10-15
EP3953789A4 (en) 2023-01-18

Similar Documents

Publication Publication Date Title
WO2018214365A1 (en) Image correction method, apparatus, device, and system, camera device, and display device
KR101292925B1 (en) Object of image capturing, computer readable media for storing image processing program and image processing method
US10298898B2 (en) User feedback for real-time checking and improving quality of scanned image
US10175845B2 (en) Organizing digital notes on a user interface
US9516214B2 (en) Information processing device and information processing method
US20170372449A1 (en) Smart capturing of whiteboard contents for remote conferencing
EP3089101A1 (en) User feedback for real-time checking and improving quality of scanned image
US9292186B2 (en) Note capture and recognition with manual assist
US20050088542A1 (en) System and method for displaying an image composition template
US11908101B2 (en) Writing surface boundary markers for computer vision
US9779323B2 (en) Paper sheet or presentation board such as white board with markers for assisting processing by digital cameras
US20150220800A1 (en) Note capture, recognition, and management with hints on a user interface
ES2827177T3 (en) Image processing device, image processing method and program
CN114026598A (en) Writing surface boundary marking for computer vision
JP2012060452A (en) Image processor, method therefor and program
JP6914369B2 (en) Vector format small image generation
JP2013131801A (en) Information terminal device, picked up image processing system, method and program, and recording medium
JP6025031B2 (en) Image processing apparatus and program
JP5140777B2 (en) Imaging object, image processing program, and image processing method
JP6154109B2 (en) Whiteboard and image correction method
JP2019097050A (en) Image reading device and image reading program
JP2012058800A (en) Method of creating pictorial original picture

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination