WO2015101775A1 - Image capturing apparatus - Google Patents

Image capturing apparatus

Info

Publication number
WO2015101775A1
WO2015101775A1 (PCT/GB2014/053771)
Authority
WO
WIPO (PCT)
Prior art keywords
image
scanned
scanning
area
document
Application number
PCT/GB2014/053771
Other languages
French (fr)
Inventor
Lysa CLAVENNA
Original Assignee
Samsung Electronics (Uk) Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Samsung Electronics (Uk) Ltd filed Critical Samsung Electronics (Uk) Ltd
Publication of WO2015101775A1 publication Critical patent/WO2015101775A1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/387Composing, repositioning or otherwise geometrically modifying originals
    • H04N1/3872Repositioning or masking
    • H04N1/3873Repositioning or masking defined only by a limited number of coordinate points or parameters, e.g. corners, centre; for trimming
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/002Specific input/output arrangements not covered by G06F3/01 - G06F3/16
    • G06F3/005Input arrangements through a video camera
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/0304Detection arrangements using opto-electronic means
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/22Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G06V10/235Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition based on user input or interaction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/0035User-machine interface; Control console
    • H04N1/00352Input means
    • H04N1/00381Input by recognition or interpretation of visible user gestures
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/0035User-machine interface; Control console
    • H04N1/00352Input means
    • H04N1/00392Other manual input means, e.g. digitisers or writing tablets
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/0035User-machine interface; Control console
    • H04N1/00405Output means
    • H04N1/00408Display of information to the user, e.g. menus
    • H04N1/00411Display of information to the user, e.g. menus the display also being used for user input, e.g. touch screen
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/00795Reading arrangements
    • H04N1/00798Circuits or arrangements for the control thereof, e.g. using a programmed control device or according to a measured quantity
    • H04N1/00816Determining the reading area, e.g. eliminating reading of margins
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N2201/00Indexing scheme relating to scanning, transmission or reproduction of documents or the like, and to details thereof
    • H04N2201/04Scanning arrangements
    • H04N2201/0402Arrangements not specific to a particular one of the scanning methods covered by groups H04N1/04 - H04N1/207
    • H04N2201/0436Scanning a picture-bearing surface lying face up on a support
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N2201/00Indexing scheme relating to scanning, transmission or reproduction of documents or the like, and to details thereof
    • H04N2201/04Scanning arrangements
    • H04N2201/0402Arrangements not specific to a particular one of the scanning methods covered by groups H04N1/04 - H04N1/207
    • H04N2201/0452Indicating the scanned area, e.g. by projecting light marks onto the medium

Definitions

  • the present invention relates to an image capturing apparatus and a method of capturing an image.
  • the entire page of the document is scanned and stored. For example, if a user scans a letter the entire page of that letter is scanned and stored. Also, if a user wishes to scan a page of a book, that page is placed on the scanner and the scanner scans the whole page.
  • an image capturing apparatus comprising a scanning unit comprising an image sensor arranged to capture an image of an object to be scanned, a detection unit arranged to detect an area of the object to be scanned selected by a user, wherein the detection unit is arranged to detect a position of a physical pointing object relative to the object to be scanned in order to determine the selected area, and a storage unit arranged to store a scanned image including only the selected area of the object to be scanned.
  • the detection unit may be arranged to detect the position of the physical pointing object when the physical pointing object is between the object to be scanned and the scanning unit.
  • the image capturing apparatus uses the physical pointing object to determine a selected area of an image so as to crop the image.
  • the physical pointing object can be used to determine a digital zoom factor for a camera or other such device, before an image is stored.
  • the physical pointing object may comprise a finger of the user.
  • the user can carry out the selection process using just his hands, and the selection is made directly on the object to be scanned itself, rather than on a display or a screen.
  • One of the advantages of such a selection process is that the user does not need to go through a separate and additional editing or cropping process using a PC or a separate programme.
  • the user does not need to use any other device, such as a mouse, to select the area on the document.
  • the selection can simply be made using his finger.
  • the image acquired by the image sensor of the scanning unit may be displayed (either on a display integrated with the image capturing apparatus or on a separate display), and the user may view this image while the physical pointing object(s) are moved into place. This enables the user to line up the physical pointing object(s) with the object to be scanned accurately.
  • the image capturing apparatus may be a scanner (e.g. a document scanner) or other suitable device.
  • the image capturing apparatus could be a camera (e.g. a smart phone camera) that uses the physical pointing object(s) (e.g. the user's fingers) to crop a captured image.
  • the selected area may be rectangular, and a corner of the rectangular selected area is determined by the position of the physical pointing object.
  • the physical pointing object may comprise a first pointer and a second pointer, and two diagonally-opposite corners of the rectangular selected area may be determined by positions of the first and second pointers, respectively.
  • the first pointer may be a first finger of the user, and the second pointer may be a second finger of the user.
  • the detection unit may be arranged to detect a movement of the physical pointing object and to determine a boundary of the selected area based on a path formed by the movement of the physical pointing object on the object to be scanned.
  • the image sensor may comprise a downward-facing camera arranged in use to be substantially above the object to be scanned.
  • the image sensor may comprise two cameras for capturing stereoscopic images of the object to be scanned.
  • the detection unit may comprise two cameras to detect the position and orientation of the physical pointing object relative to the object to be scanned. This provides an accurate detection of not only the position of the physical pointing object but also the orientation thereof. This in turn provides more accurate information of where on the object to be scanned the physical pointing object is pointing to.
  • the scanning unit may comprise a lighting unit arranged to illuminate the object to be scanned during scanning.
  • the storage unit may be a non-volatile storage unit; in other embodiments, the storage unit may be a volatile storage unit such as a memory or buffer.
  • a display apparatus comprising the scanning apparatus of an embodiment of the present invention, wherein the scanning apparatus is integrated in the display apparatus, and a display unit arranged to display the image captured by the image sensor.
  • the scanning unit may be arranged to scan the object to be scanned (e.g. a document) when placed on a scanning area.
  • the display apparatus may further comprise a support arranged to support the display unit above the scanning area.
  • the display apparatus may be arranged to stand on a surface and the object (e.g. a document) to be scanned is placed substantially below the display apparatus.
  • the display apparatus may further comprise a support base arranged to support the display apparatus on a surface, and, when scanning the object (e.g. a document), the object may be placed on a portion of the support base.
  • the display unit may comprise an image displaying portion and an edge portion disposed around peripheral edges of the image displaying portion, and the image sensor may be disposed on a lower region of the edge portion.
  • the display apparatus may be any one of a monitor, an integrated personal computer or a television.
  • a method of scanning comprising capturing an image of a document, detecting an area of an object (e.g. a document) selected by a user by detecting a position of a physical pointing object relative to the document in order to determine the selected area, and storing a scanned image including only the selected area of the object (e.g. document).
  • detecting the selected area may comprise detecting the position of the physical pointing object when the physical pointing object is between the object (e.g. a document) and the scanning unit. In some embodiments, the detecting of the selected area may comprise detecting a movement of the physical pointing object and determining a boundary of the selected area based on a path formed by the movement of the physical pointing object on the document.
  • an image capture apparatus comprising a scanning unit comprising an image sensor arranged to capture an image, a detection unit arranged to detect an area of the image selected by a user, wherein the detection unit is arranged to detect a position of a physical pointing object within the image in order to determine the selected area, and to crop the image according to the selected area.
  • a scanning apparatus comprising a scanning unit comprising an image sensor arranged to capture an image of a document, a detection unit arranged to detect an area of the document selected by a user, wherein the detection unit is arranged to detect a position of a physical pointing object relative to the document in order to determine the selected area, and a storage unit arranged to store a scanned image including only the selected area of the document.
  • a scanning apparatus comprising a detection unit arranged to detect an area of a document selected by a user, and a scanning unit arranged to capture an image of the selected area of the document, wherein the detection unit is arranged to detect a physical pointing object and to determine the selected area based on a position of the physical pointing object on the document.
  • a method of scanning comprising detecting an area of a document selected by a user by detecting a physical pointing object and determining the selected area based on a position of the physical pointing object on the document, and capturing an image of the selected area of the document.
  • an image processing apparatus comprising: a detection unit arranged to receive an image of an object, and to detect a position of a physical pointing object within the image in order to select a portion of the image; and a storage unit arranged to store only the selected portion of the image.
  • the detection unit processes a received image, for example an image captured by an external image sensor.
  • the image processing apparatus could be, for example, a server connected to an external image sensor, or a host PC connected to a peripheral image sensor.
  • Figure 1 illustrates an image capturing apparatus, according to an embodiment of the present invention
  • Figure 2 illustrates a method of capturing an image, according to an embodiment of the present invention
  • Figure 3 illustrates a method of capturing an image, according to an embodiment of the present invention
  • Figure 4 illustrates a display apparatus having an integrated scanning apparatus, according to an embodiment of the present invention
  • Figure 5 illustrates a display apparatus having an integrated scanning apparatus in use, according to an embodiment of the present invention
  • Figure 6 illustrates a method of scanning a document using a display apparatus having an integrated scanning apparatus, according to an embodiment of the present invention.
  • Figure 7 illustrates a display apparatus having an integrated scanning apparatus, according to an embodiment of the present invention.
  • Figure 8 illustrates a display apparatus having an integrated scanning apparatus in use, according to an embodiment of the present invention.
  • an image capturing apparatus 10 that is capable of capturing an image of an object to be scanned, detecting an area of the object to be scanned selected by a user, and storing a scanned image of only the selected area of the object to be scanned.
  • Fig. 1 shows an image capturing apparatus 10, according to an embodiment of the present invention.
  • the image capturing apparatus 10 comprises a scanning unit 20 for capturing an image of an object to be scanned, a detection unit 30 for detecting an area of an object to be scanned selected by the user, and a storage unit 40 for storing a scanned image that includes only the area selected by the user.
  • the scanning unit 20 comprises an image sensor, such as a digital camera, for capturing an image of an object to be scanned.
  • the object to be scanned could be any object.
  • it could be a document or a page of a book or a magazine. It could also be a sign, poster, calendar or an artwork.
  • the detection unit 30 is arranged to detect an area of the object to be scanned selected by a user so that a scanned image of such area can be stored. To this end, the detection unit 30 is arranged to detect a presence of a particular physical object in front of the image sensor. In effect, the detection unit 30 is arranged to detect a physical presence of an object between the object to be scanned and the scanning unit 20. Furthermore, the detection unit 30 is arranged to determine the position of the physical object relative to the object to be scanned. The detection unit 30 may also be arranged to determine the orientation of the physical object.
  • the detection unit 30 is capable of determining on which part of the object to be scanned the physical object has been placed and/or to which part of the object to be scanned it is pointing.
  • the detection unit 30 may comprise a sensor for detecting a physical object, such as an image sensor, a motion detector, or an infrared, microwave or sonic sensor.
  • the detection unit may detect the position of a physical pointing object within the image in order to select a portion of the image as the selected area.
  • the detection unit may receive the image produced by the image sensor of the scanning unit 20, and use a suitable software algorithm to determine the location of the physical object within the image.
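As an illustration only (the patent does not specify the detection algorithm), one simple way a detection unit might locate a pointing object within the captured frame is to threshold the image and take the centroid of the candidate pixels. The function name and threshold below are hypothetical assumptions, not taken from the patent:

```python
import numpy as np

def locate_pointer(frame, threshold=64):
    """Return the (row, col) centroid of pixels darker than `threshold`,
    or None if no candidate pointer pixels are found.

    A deliberately simple stand-in for the patent's unspecified
    detection algorithm: it assumes the pointer appears as a dark
    blob against a bright document."""
    rows, cols = np.nonzero(frame < threshold)
    if rows.size == 0:
        return None
    return int(rows.mean()), int(cols.mean())

# A bright 'document' frame with a dark pointer blob centred near (4, 7).
frame = np.full((10, 10), 255)
frame[3:6, 6:9] = 10
assert locate_pointer(frame) == (4, 7)
```

A production detection unit would more likely use skin-colour segmentation or a trained hand detector, but the interface (frame in, pointer coordinates out) would be similar.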
  • the physical object being detected by the detection unit 30 is the user's finger. Therefore, for example, if a user places his index finger on a particular point on the object to be scanned, the detection unit 30 is able to determine where on the object to be scanned the user is pointing to. As a result, the user can indicate and select any particular point on the object to be scanned using his finger. As such, the selection is made directly on the object to be scanned itself, rather than on a display or a screen.
  • One of the advantages of such a selection is that the user does not need to go through a separate and additional editing or cropping process using a PC or a separate programme. Furthermore, the user does not need to use any other device, such as a mouse, to select the area on the object to be scanned. The selection can simply be made using his finger.
  • the detection unit 30 may be arranged to detect movement of the physical object relative to the object to be scanned. For example, if the user draws a circle on a particular part of the object to be scanned using his finger, the detection unit 30 recognises that the circular area created by the circle is the area of the object to be scanned which has been selected by the user. In other words, the circle drawn on the object to be scanned using the user's finger corresponds to the boundary of the area selected by the user. In such embodiments, the selected area may be of any shape, and is not limited to a circle or a rectangle.
  • the physical pointing object is a finger of the user.
  • the pointing object may be any type of pointing device or a pointy object, such as a pen.
  • the detection unit 30 comprises a sensor arranged to detect the presence, position or movement of a physical object.
  • the sensor may be an image sensor.
  • the sensor of the detection unit 30 may be the same image sensor used for capturing the image of the object to be scanned.
  • the storage unit 40 is arranged to store an image, and in particular, is arranged to store an image of only the selected area of the object to be scanned.
  • the storage unit 40 may be a non-volatile storage unit.
  • the storage unit 40 may be a volatile storage unit such as a memory or buffer.
  • the image acquired by the image sensor of the scanning unit 20 may be displayed (either on a display integrated with the image capturing apparatus 10 or on a separate display), and the user may view this image while the physical pointing object(s) are moved into place. This enables the user to line up the physical pointing object(s) with the object to be scanned accurately.
  • the image capturing apparatus uses the physical pointing object to determine a selected area of an image so as to crop the image.
  • the physical pointing object can be used to determine a digital zoom factor for a camera or other such device, before an image is stored.
  • In Step S10, the scanning unit captures an image of an object to be scanned.
  • the image sensor of the scanning unit 20 captures the image of the object to be scanned which has been placed in its field of view.
  • In Step S20, the detection unit 30 starts detecting whether a physical pointing object is present between the image sensor of the scanning unit and the object to be scanned. In particular, the detection unit 30 detects whether a particular physical pointing object is present in the space between the image sensor and the object to be scanned. If a particular physical pointing object is detected by the detection unit 30, in Step S30, the detection unit 30 detects the position of the physical pointing object relative to the object to be scanned. That is, the position of the physical pointing object on the object to be scanned is determined, and by doing so, the detection unit 30 determines to which point on the object to be scanned the physical pointing object is pointing.
  • In Step S40, the detection unit 30 detects an area of an object to be scanned selected by the user. That is, based on the detected position of the physical pointing object relative to the object to be scanned, the detection unit 30 determines an area of the object to be scanned being selected by the user. In particular, by detecting the position of the physical pointing object relative to the object to be scanned, the detection unit 30 is able to determine to which area/part of the object to be scanned the physical pointing object is pointing, and using such information the detection unit is able to determine the area/part of the object to be scanned that is being selected by the user.
  • In Step S50, a scanned image of the selected area of the object to be scanned is stored in the storage unit 40.
  • the area of the object to be scanned that is determined to be selected by the user is automatically extracted or cropped from the captured image of the object to be scanned and stored as a scanned image.
  • the scanned image including only the selected area of the object to be scanned is stored.
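Steps S10 to S50 can be sketched in a few lines, under the assumption that the detected pointer positions define the bounding rectangle of the selected area; all names and values here are illustrative, not from the patent:

```python
import numpy as np

def scan_selected_area(frame, pointer_positions, storage):
    """Steps S10-S50 as one pass: `frame` is the captured image (S10),
    `pointer_positions` the detected fingertip coordinates (S20-S30);
    the selected area is their bounding rectangle (S40), which is
    cropped from the frame and appended to `storage` (S50)."""
    rows = [p[0] for p in pointer_positions]
    cols = [p[1] for p in pointer_positions]
    crop = frame[min(rows):max(rows) + 1, min(cols):max(cols) + 1]
    storage.append(crop)
    return crop

page = np.arange(100).reshape(10, 10)   # stand-in for the captured page
store = []                              # stand-in for the storage unit
crop = scan_selected_area(page, [(2, 3), (6, 8)], store)
assert crop.shape == (5, 6) and len(store) == 1
```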
  • Fig. 3 shows a method of selecting a rectangular area 70 of a document 60, according to an embodiment of the present invention.
  • the document 60 contains a picture of the earth, which is the only part of this document the user wishes to scan and store. In other words, the user is not interested in the bottom half of this document.
  • the rectangular area 70 which the user wishes to select is represented by the dotted lines.
  • the image captured by the image sensor may be displayed on a display at the time of scanning.
  • the image of the whole document 60 may be displayed on a display.
  • the user can then select an area 70 of the document whilst the image of the document is being displayed. This helps the user to see on the display whether his selected area is being correctly selected.
  • the user places his left index finger 80 on the first corner (top left corner) of the desired rectangular area 70 of the document 60.
  • the user places his right index finger 81 on a corner of the rectangular area diagonally opposite to the first corner (a third corner) - i.e. the bottom right corner.
  • any other two fingers could be used (e.g. thumb and index finger of one hand).
  • the detection unit 30 detects that a particular physical pointing object (i.e. the index finger of the user) has been placed between the image sensor and the document 60. The detection unit then detects the positions of the index fingers relative to the document 60.
  • the detection unit 30 is able to determine the location of the first corner of a rectangular area 70 of the document 60.
  • the position of the left index finger 80 on the document 60 indicates the location of the first corner of the rectangular area 70 of the document 60, and the position of the right index finger 81 on the document 60 indicates the location of the third corner of the rectangular area 70 of the document 60.
  • the positions of the left and right index fingers are not limited to defining the first and third corners only.
  • the left and right index fingers may define the second and fourth corners of the rectangular area 70.
  • the positions of the left and right index fingers on the document can correspond to any pair of diagonally- opposite corners of the rectangular area 70.
  • the positions of the left and right index fingers on the document 60 correspond to the location of two diagonally-opposite corners of the selected area 70 of the document 60. It follows that the location of the rectangular area 70 as a whole is determined by locating the positions of the user's left and right index fingers on the document 60.
  • an area to be scanned is selected by the user using his/her fingers, particularly by detecting the positions of the user's left and right index fingers on the document to be scanned.
  • the rectangular area 70 is selected when the user places his left-hand index finger and right-hand index finger on the two diagonally-opposite corners of the rectangular area 70.
  • the rectangular selected area 70 is determined based on the information regarding the locations of two diagonally-opposite corners of such area. This may be possible, for example, through a prerequisite or an assumption that the edges of the selected rectangular area 70 are parallel to the edges of the document 60 or page (i.e. the paper). Once the selected area has been detected in this way, the scanned image including only the selected area can be stored.
  • the selection may be made using only one hand.
  • the user may position his right index finger on one corner (e.g. the first corner) of the rectangular area 70, followed by the positioning of the same index finger (right index finger) on the diagonally-opposite corner (e.g. the third corner).
  • the detection unit detects the two sequential positions of the user's index finger on the document, and determines the locations of the two diagonally-opposite corners of the rectangular selected area 70.
  • the two sequential positions of the physical pointing object correspond to the locations of the two diagonally-opposite corners of the rectangular selected area 70.
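Whichever pair of diagonally-opposite corners the fingers (or the two sequential positions of one finger) mark, the same rectangle results, because an axis-aligned rectangle is fully recoverable from coordinate minima and maxima. A minimal sketch with a hypothetical function name:

```python
def rect_from_corners(p, q):
    """Axis-aligned rectangle (top, left, bottom, right) from any two
    diagonally opposite corners (row, col), given in either order."""
    (r1, c1), (r2, c2) = p, q
    return min(r1, r2), min(c1, c2), max(r1, r2), max(c1, c2)

# First/third corners and second/fourth corners define the same area.
assert rect_from_corners((2, 3), (6, 8)) == rect_from_corners((2, 8), (6, 3))
```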
  • Embodiments of the present invention have been described in relation to the selected area being rectangular.
  • the shape of the selected area is not limited to a rectangle or a square.
  • a path drawn by the pointing object may define the boundaries or edges of the selected area.
  • the user may outline the edges of the area of the document he wishes to scan using his index finger. In effect, the user draws the edges of the selected area using his finger.
  • the detection unit detects the path of the finger on the document and determines the selected area. As a result, an area of the document may be selected based on the movement of the finger on the document.
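For a freeform path drawn with the finger, the detection unit must decide which pixels lie inside the traced boundary. One standard technique is even-odd ray casting; the sketch below is illustrative and not taken from the patent:

```python
def point_in_path(point, path):
    """Even-odd ray-casting test: is `point` (x, y) inside the closed
    polygon `path`, given as a list of (x, y) vertices?"""
    x, y = point
    inside = False
    for i in range(len(path)):
        x1, y1 = path[i]
        x2, y2 = path[(i + 1) % len(path)]     # wrap to close the path
        if (y1 > y) != (y2 > y):               # edge crosses the ray's height
            if x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                inside = not inside
    return inside

# A square path traced by the finger; points inside/outside the boundary.
square = [(0, 0), (4, 0), (4, 4), (0, 4)]
assert point_in_path((2, 2), square)
assert not point_in_path((5, 5), square)
```

Applying this test to every pixel yields a mask of the selected region, which can then be stored in place of a rectangular crop.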
  • the user may select four corners of a rectangular selected area using thumbs and index fingers of both of his/her hands. For example, the user may position his left thumb and index finger on the two left-hand corners of the rectangular selected area, and position his right thumb and index finger on the two right-hand corners of the rectangular selected area.
  • the physical object may be any type of pointing device, such as a pen.
  • the physical object the detection unit is configured to detect may be any type of pointing device, including a pen.
  • the image capturing apparatus may be a scanner (e.g. a document scanner) or other suitable device.
  • the image capturing apparatus could be a camera (e.g. a smart phone camera) that uses the physical pointing object(s) (e.g. fingers) to crop a captured image.
  • the scanning unit, detection unit and storage unit may be integrated in the same device.
  • the image capturing apparatus may be provided as a set of distributed components. For example, in some embodiments, the scanning unit, detection unit and storage unit may be provided as separate components.
  • Some embodiments of the invention provide an image processing apparatus comprising: a detection unit arranged to receive an image of an object, and to detect a position of a physical pointing object within the image in order to select a portion of the image; and a storage unit arranged to store only the selected portion of the image.
  • the detection unit processes a received image, for example an image captured by an external image sensor.
  • the image processing apparatus could be, for example, a server connected to an external image sensor, or a host PC connected to a peripheral image sensor.
  • Embodiments of the present invention have been described in relation to an image capturing apparatus.
  • the image capturing apparatus may be a scanning apparatus that has been integrated in a display apparatus.
  • Fig. 4 shows a display apparatus 500 having an integrated scanning apparatus, according to another embodiment of the present invention.
  • the display apparatus 500 may comprise a display unit 510 for displaying an image and the scanning apparatus 100 for scanning an object.
  • the display unit 510 and the scanning apparatus 100 are connected such that an image of the object scanned by the scanning apparatus 100 can be displayed on the display unit 510.
  • the object to be scanned is a document.
  • the object to be scanned may be any object or thing for which an image can be captured.
  • the object to be scanned may be a poster, painting, artwork, or an object on a wall.
  • the display unit 510 may be a monitor. In other embodiments, the display unit 510 may be an integrated PC or a TV.
  • the display unit 510 comprises a display which can be any display capable of reproducing image or video in 2D or 3D, for example, an organic light-emitting diode (OLED) panel, a liquid crystal display (LCD) panel, or a plasma display panel (PDP).
  • the display unit 510 comprises a touch screen device comprising a display with a touch-sensitive overlay for user interaction.
  • the scanning apparatus 100 comprises an image sensor 110 and a lighting unit 120.
  • the image sensor 110 captures the image of a document to be scanned, and is arranged such that it is directed at the document to be scanned.
  • the lighting unit 120 illuminates the document placed on the scanning area when the image of the document is being taken by the image sensor 110.
  • the user places the document in front of the image sensor 110 and the image sensor 110 captures the image of the document.
  • the image sensor 110 is a downward-facing camera and therefore the document is placed substantially below the image sensor for scanning.
  • Fig. 5 shows an integrated display apparatus 500 (i.e. a display apparatus incorporating a scanning apparatus) according to an embodiment of the present invention.
  • the display apparatus 500 is supported by support legs 520 extending from the lower part of the display apparatus.
  • the integrated display apparatus has a support structure 520 (support legs) which is arranged to support the display apparatus 500 above a surface.
  • the display apparatus 500 is propped up by the support at a certain height above the surface of the table or the desk.
  • the image sensor detects whether or not a document 60 has been placed below the image sensor.
  • the image sensor is configured such that it can detect when a document 60 has been placed within its field of view. Specifically, when the image sensor 110 detects motion in its field of view it triggers the initiation of the scanning process. In other words, when a document 60 moves into its field of view, this sends a signal to start the scanning operation.
  • the image sensor 110 is located at a spot where its view of the document 60 to be scanned is unobstructed by any part of the display apparatus 500.
  • the image sensor 110 may be disposed on a lower part of the display apparatus 500 or near the base of the display apparatus 500.
  • Step S100 the scanning operation is initiated when a document 60 to be scanned is placed on the scanning area.
  • the scanning area is an area on which the document 60 placed thereon can be scanned by the scanning apparatus 100.
  • the image sensor 110 detects whether or not a document 60 has been placed below the display apparatus 500.
  • the image sensor 110 is configured such that it can detect when a document 60 has been placed on the scanning area. Specifically, when the image sensor 110 detects motion in its field of view it triggers the initiation of the scanning process. The field of view effectively covers the scanning area such that when a document moves into the scanning area (and thereby its field of view) this sends a signal to start the scanning operation of the display apparatus 500.
  • the image sensor 110 is a downward-facing image sensor.
  • the image sensor 110 is directed downwards towards the surface on which the display apparatus 500 stands. This is in effect a top-down scanner (i.e. scanned from the top).
  • the image sensor 110 is located at a spot where its view of the scanning area, and consequently the document on the scanning area, is unobstructed by any part of the display apparatus 500.
  • the image sensor may be disposed on a lower part of the display apparatus 500 or near the base of the display apparatus 500.
  • the image sensor 110 may be located on a bottom edge or a bottom surface of the display unit 510.
  • the image sensor 110 is located on a part of the display apparatus 500 from which it has an unobstructed view of any document placed on the scanning area.
  • Upon detection of a document 60 in the scanning area, the scanning unit starts the scanning operation.
  • in some embodiments, a confirmation by the user may be required to proceed to the next stage (i.e. the scanning process).
  • the scanning operation could be a manual process with no document detection.
  • Step S300 when a document to be scanned has been placed on the scanning area and the image sensor 110 has detected that a document has indeed been placed on the scanning area, the lighting unit 120 is activated.
  • the lighting unit 120 is designed to provide the illumination required for the scanning operation. To this end, the lighting unit 120 is arranged to light up the scanning area, and effectively, the document. As a result, the lighting unit 120 provides the necessary lighting on the scanning area for the image sensor 110 to capture a clear image of the document to be scanned. In other embodiments, sufficient lighting may be provided by other sources (e.g. external lighting devices) and as such no light unit is required.
  • Step S400 once the lighting unit 120 has been activated and the document 60 in the scanning area illuminated, the image sensor 110 captures the image of the document placed on the scanning area.
  • Step S400 corresponds to Steps S10 to S50 of Fig. 2. In other words, when the process reaches Step S400, Steps S10 to S50 of Fig. 2 are performed.
  • Step S500 depending on whether or not an area 70 of the document 60 has been selected, either a scanned image of only the selected area 70 of the document or the captured image of the whole document 60 is displayed on the display unit 510.
  • Step S600 the image sensor 110 continues to detect whether a new page or a new document has been placed on the scanning area even after the image of the document has been captured.
  • the image sensor 110 continues to detect whether or not a document has been placed below the display apparatus 500.
  • the image sensor continues to detect whether or not a document is present on the scanning area.
  • Steps S300 to S500 of Fig. 6 are repeated for the new page.
  • the scanning unit comprises two image sensors to capture stereoscopic images of the document on the scanning area.
  • two image sensors 310, 320 are disposed along a bottom edge of the display unit of the display apparatus, facing the scanning area 610. Specifically, each image sensor 310, 320 captures the image of the document from its respective viewpoint. Then, a composite image (i.e. a stereo image) of the two images captured by the two image sensors 310, 320 is created by the scanning unit.
  • the advantage of making a stereo image of the document is that an accurate image of the document could still be made even when the document has not been placed flat on the scanning area. If the document is not perfectly flat (for example, a creased letter or a thick book), the scanning unit captures the images from two different angles (viewpoints), and then combines the two images to form a flattened image, which would be subsequently displayed as if the document was flat. This helps to solve the difficulty of scanning a document which has been folded and does not stay flat.
  • the user is normally required to hold the document down flat on the surface with, for example, his/her hand or a flat cover on top of the document.
  • two image sensors 310, 320 are used to create a flattened stereo image of the documents placed on the scanning area, thereby improving image pickup.
  • the detection unit comprises two image sensors to provide stereoscopic images of an object. This provides a more accurate detection of not only the position of the physical pointing object but also the orientation thereof. This in turn provides more accurate information of where on the document the physical pointing object is pointing to.
  • the detection unit having two image sensors to produce stereoscopic images provides a more accurate detection of the position and orientation of the physical pointing object compared to having a single image sensor.
  • Some embodiments of the invention provide an image capture apparatus comprising a scanning unit comprising an image sensor arranged to capture an image, a detection unit arranged to detect an area of the image selected by a user, wherein the detection unit is arranged to detect a position of a physical pointing object within the image in order to determine the selected area, and to crop the image according to the selected area.
  • some embodiments of the invention provide an image capturing apparatus, the image capturing apparatus comprising a scanning unit comprising an image sensor arranged to capture an image of an object to be scanned, a detection unit arranged to detect an area of the object to be scanned selected by a user, wherein the detection unit is arranged to detect a position of a physical pointing object relative to the object to be scanned in order to determine the selected area, and a storage unit arranged to store a scanned image including only the selected area of the object to be scanned.
  • the detection unit may detect the position of a physical pointing object within the image in order to select a portion of the image as the selected area.
  • the image acquired by the image sensor of the scanning unit 20 may be displayed (either on a display integrated with the image capturing apparatus 10 or on a separate display), and the user may view this image while the physical pointing object(s) are moved into place. This enables the user to line up the physical pointing object(s) with the object to be scanned accurately.
  • Some embodiments of the invention provide a display apparatus comprising: an image capturing apparatus according to any one of the above-mentioned embodiments that is integrated in the display apparatus, and a display unit arranged to display the image captured by the image sensor.
  • Some embodiments of the invention provide an image processing apparatus comprising: a detection unit arranged to receive an image of an object, and to detect a position of a physical pointing object within the image in order to select a portion of the image; and a storage unit arranged to store only the selected portion of the image.


Abstract

An image capturing apparatus has a scanning unit to capture an image of an object to be scanned, a detection unit to detect an area of the object to be scanned selected by a user, and a storage unit to store a scanned image including only the selected area of the object to be scanned. The detection unit determines the selected area by detecting a position of a physical pointing object relative to the object to be scanned when the physical pointing object is between the object to be scanned and the scanning unit.

Description

Image Capturing Apparatus
Field of the Invention
The present invention relates to an image capturing apparatus and a method of capturing an image.
Background of the Invention
When a document is scanned using a conventional scanner, the entire page of the document is scanned and stored. For example, if a user scans a letter the entire page of that letter is scanned and stored. Also, if a user wishes to scan a page of a book, that page is placed on the scanner and the scanner scans the whole page.
However, often the user is not interested in the entire page, and often the user is interested in only a part of a document. For example, the user may only be interested in a particular article that takes up only a half of the page. In such a situation, a user using a conventional scanner would have no choice but to scan the whole page, and then to edit the scanned image, for example, by cropping the wanted section and saving the cropped image only. This typically involves using an editing software programme on a PC. This process, however, is time-consuming and cumbersome since it requires the user to scan the whole page first, and then to crop a part of the page on a PC.
Furthermore, the user must have the appropriate software programme to perform the editing task.
The invention is made in this context.
Summary of the Invention
According to an aspect of the present invention, there is provided an image capturing apparatus, the image capturing apparatus comprising a scanning unit comprising an image sensor arranged to capture an image of an object to be scanned, a detection unit arranged to detect an area of the object to be scanned selected by a user, wherein the detection unit is arranged to detect a position of a physical pointing object relative to the object to be scanned in order to determine the selected area, and a storage unit arranged to store a scanned image including only the selected area of the object to be scanned. An advantage of such an image capturing apparatus is that the user does not need to carry out a separate editing task to select an area after the object to be scanned (e.g. a document) has been scanned. In some embodiments, the detection unit may be arranged to detect the position of the physical pointing object when the physical pointing object is between the object to be scanned and the scanning unit.
Hence, in such embodiments, the image capturing apparatus uses the physical pointing object to determine a selected area of an image so as to crop the image.
In some embodiments, the physical pointing object can be used to determine a digital zoom factor for a camera or other such device, before an image is stored. In some embodiments, the physical pointing object may comprise a finger of the user.
As such, the user can carry out the selection process using just his hands, and the selection is made directly on the object to be scanned itself, rather than on a display or a screen. One of the advantages of such selection process is that the user does not need to go through a separate and additional editing or cropping process using a PC or using a separate programme. Furthermore, the user does not need to use any other device, such as a mouse, to select the area on the document. The selection can simply be made using his finger. In some embodiments, the image acquired by the image sensor of the scanning unit may be displayed (either on a display integrated with the image capturing apparatus or on a separate display), and the user may view this image while the physical pointing object(s) are moved into place. This enables the user to line up the physical pointing object(s) with the object to be scanned accurately.
The image capturing apparatus may be a scanner (e.g. a document scanner) or other suitable device. For example, the image capturing apparatus could be a camera (e.g. a smart phone camera) that uses the physical pointing object(s) (e.g. the user's fingers) to crop a captured image. In some embodiments, the selected area may be rectangular, and a corner of the rectangular selected area is determined by the position of the physical pointing object.
In some embodiments, the physical pointing object may comprise a first pointer and a second pointer, and two diagonally-opposite corners of the rectangular selected area may be determined by positions of the first and second pointers, respectively.
In some embodiments, the first pointer may be a first finger of the user, and the second pointer may be a second finger of the user.
In some embodiments, the detection unit may be arranged to detect a movement of the physical pointing object and to determine a boundary of the selected area based on a path formed by the movement of the physical pointing object on the object to be scanned.
In some embodiments, the image sensor may comprise a downward-facing camera arranged in use to be substantially above the object to be scanned.
In some embodiments, the image sensor may comprise two cameras for capturing stereoscopic images of the object to be scanned.
In some embodiments, the detection unit may comprise two cameras to detect the position and orientation of the physical pointing object relative to the object to be scanned. This provides an accurate detection of not only the position of the physical pointing object but also the orientation thereof. This in turn provides more accurate information of where on the object to be scanned the physical pointing object is pointing to.
In some embodiments, the scanning unit may comprise a lighting unit arranged to illuminate the object to be scanned during scanning.
In some embodiments, the storage unit may be a non-volatile storage unit; in other embodiments, the storage unit may be a volatile storage unit such as a memory or buffer. According to an aspect of the present invention, there is provided a display apparatus comprising the scanning apparatus of an embodiment of the present invention, wherein the scanning apparatus is integrated in the display apparatus, and a display unit arranged to display the image captured by the image sensor.
In some embodiments, the scanning unit may be arranged to scan the object to be scanned (e.g. a document) when placed on a scanning area, and the display apparatus may further comprise a support arranged to support the display unit above the scanning area.
In some embodiments, the display apparatus may be arranged to stand on a surface and the object (e.g. a document) to be scanned is placed substantially below the display apparatus. In some embodiments, the display apparatus may further comprise a support base arranged to support the display apparatus on a surface, and, when scanning the object (e.g. a document), the object may be placed on a portion of the support base.
In some embodiments, the display unit may comprise an image displaying portion and an edge portion disposed around peripheral edges of the image displaying portion, and the image sensor may be disposed on a lower region of the edge portion.
In some embodiments, the display apparatus may be any one of a monitor, an integrated personal computer or a television.
According to an aspect of the present invention, there is provided a method of scanning comprising capturing an image of a document, detecting an area of an object (e.g. a document) selected by a user by detecting a position of a physical pointing object relative to the document in order to determine the selected area, and storing a scanned image including only the selected area of the object (e.g. document).
In some embodiments, detecting the selected area may comprise detecting the position of the physical pointing object when the physical pointing object is between the object (e.g. a document) and the scanning unit. In some embodiments, the detecting of the selected area may comprise detecting a movement of the physical pointing object and determining a boundary of the selected area based on a path formed by the movement of the physical pointing object on the document.
According to an aspect of the present invention, there is provided an image capture apparatus, the image capture apparatus comprising a scanning unit comprising an image sensor arranged to capture an image, a detection unit arranged to detect an area of the image selected by a user, wherein the detection unit is arranged to detect a position of a physical pointing object within the image in order to determine the selected area, and to crop the image according to the selected area.
According to an aspect of the present invention, there is provided a scanning apparatus, the scanning apparatus comprising a scanning unit comprising an image sensor arranged to capture an image of a document, a detection unit arranged to detect an area of the document selected by a user, wherein the detection unit is arranged to detect a position of a physical pointing object relative to the document in order to determine the selected area, and a storage unit arranged to store a scanned image including only the selected area of the document.
According to an aspect of the present invention, there is provided a scanning apparatus comprising a detection unit arranged to detect an area of a document selected by a user, and a scanning unit arranged to capture an image of the selected area of the document, wherein the detection unit is arranged to detect a physical pointing object and to determine the selected area based on a position of the physical pointing object on the document.
According to an aspect of the present invention, there is provided a method of scanning comprising detecting an area of a document selected by a user by detecting a physical pointing object and determining the selected area based on a position of the physical pointing object on the document, and capturing an image of the selected area of the document.
According to another aspect of the invention, there is provided an image processing apparatus comprising: a detection unit arranged to receive an image of an object, and to detect a position of a physical pointing object within the image in order to select a portion of the image; and a storage unit arranged to store only the selected portion of the image.
In such embodiments, the detection unit processes a received image, for example an image captured by an external image sensor. The image processing apparatus could be, for example, a server connected to an external image sensor, or a host PC connected to a peripheral image sensor.
Embodiments of the present invention will now be described, by way of example only, with reference to the accompanying drawings, in which:
Figure 1 illustrates an image capturing apparatus, according to an embodiment of the present invention;
Figure 2 illustrates a method of capturing an image, according to an embodiment of the present invention;
Figure 3 illustrates a method of capturing an image, according to an embodiment of the present invention;
Figure 4 illustrates a display apparatus having an integrated scanning apparatus, according to an embodiment of the present invention;
Figure 5 illustrates a display apparatus having an integrated scanning apparatus in use, according to an embodiment of the present invention;
Figure 6 illustrates a method of scanning a document using a display apparatus having an integrated scanning apparatus, according to an embodiment of the present invention; and
Figure 7 illustrates a display apparatus having an integrated scanning apparatus, according to an embodiment of the present invention; and
Figure 8 illustrates a display apparatus having an integrated scanning apparatus in use, according to an embodiment of the present invention.
Detailed Description
Embodiments of the present invention will now be described, in particular, an image capturing apparatus 10 that is capable of capturing an image of an object to be scanned, detecting an area of the object to be scanned selected by a user, and storing a scanned image of only the selected area of the object to be scanned. Fig. 1 shows an image capturing apparatus 10, according to an embodiment of the present invention. In this embodiment of the invention, the image capturing apparatus 10 comprises a scanning unit 20 for capturing an image of an object to be scanned, a detection unit 30 for detecting an area of an object to be scanned selected by the user, and a storage unit 40 for storing a scanned image that includes only the area selected by the user.
The scanning unit 20 comprises an image sensor, such as a digital camera, for capturing an image of an object to be scanned.
The object to be scanned could be any object. For example, it could be a document or a page of a book or a magazine. It could also be a sign, poster, calendar or an artwork.
The detection unit 30 is arranged to detect an area of the object to be scanned selected by a user so that a scanned image of such area can be stored. To this end, the detection unit 30 is arranged to detect a presence of a particular physical object in front of the image sensor. In effect, the detection unit 30 is arranged to detect a physical presence of an object between the object to be scanned and the scanning unit 20. Furthermore, the detection unit 30 is arranged to determine the position of the physical object relative to the object to be scanned. The detection unit 30 may also be arranged to determine the orientation of the physical object. As such, if a physical pointing object is placed on the object to be scanned, the detection unit 30 is capable of determining on which part of the object to be scanned the physical object has been placed and/or to which part of the object to be scanned it is pointing to. To this end, the detection unit 30 may comprise a sensor for detecting a physical object, including an image sensor, a motion detector, infrared, microwave or sonic sensor.
In some embodiments, the detection unit may detect the position of a physical pointing object within the image in order to select a portion of the image as the selected area. For example, the detection unit may receive the image produced by the image sensor of the scanning unit 20, and use a suitable software algorithm to determine the location of the physical object within the image, thus enabling a portion of the image to be identified as the selected area.
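By way of illustration only, one simple software algorithm of the kind mentioned above is background differencing: a pointer entering the field of view shows up as pixels that differ from a stored background image, and the topmost such pixel approximates the fingertip. The function names and the background-differencing approach here are assumptions for the sketch, not the claimed detection method:

```python
import numpy as np

def locate_pointer(background: np.ndarray, frame: np.ndarray, thresh: int = 40):
    """Return (row, col) of the pointer tip, or None if nothing is detected.

    A pointer entering the field of view appears as a region where the
    current frame differs from the stored background image; the tip is
    taken to be the topmost such pixel (closest to the top of the frame).
    """
    diff = np.abs(frame.astype(int) - background.astype(int)) > thresh
    if not diff.any():
        return None
    rows, cols = np.nonzero(diff)
    tip = rows.argmin()            # topmost differing pixel ~ fingertip
    return int(rows[tip]), int(cols[tip])

# Synthetic example: a uniform background with a "finger" reaching in.
bg = np.full((100, 100), 200, dtype=np.uint8)
frame = bg.copy()
frame[30:100, 48:53] = 80          # dark vertical strip entering from below
print(locate_pointer(bg, frame))   # -> (30, 48)
```

A practical detection unit would of course be more robust (e.g. skin-colour segmentation or contour analysis), but the principle of localising the pointer within the captured image is the same.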
In this embodiment, the physical object being detected by the detection unit 30 is the user's finger. Therefore, for example, if a user places his index finger on a particular point on the object to be scanned, the detection unit 30 is able to determine where on the object to be scanned the user is pointing to. As a result, the user can indicate and select any particular point on the object to be scanned using his finger. As such, the selection is made directly on the object to be scanned itself, rather than on a display or a screen. One of the advantages of such selection is that the user does not need to go through a separate and additional editing or cropping process using a PC or using a separate programme. Furthermore, the user does not need to use any other device, such as a mouse, to select the area on the object to be scanned. The selection can simply be made using his finger.
In other embodiments, the detection unit 30 may be arranged to detect movement of the physical object relative to the object to be scanned. For example, if the user draws a circle on a particular part of the object to be scanned using his finger, the detection unit 30 recognises that the circular area created by the circle is the area of the object to be scanned which has been selected by the user. In other words, the circle drawn on the object to be scanned using the user's finger corresponds to the boundary of the area selected by the user. In such embodiments, the selected area may be of any shape, and is not limited to a circle or a rectangle.
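The path-based selection described above could be realised, for instance, by collecting fingertip positions over time and deriving the selected region from them; the simplest useful region is the axis-aligned bounding box of the traced path. The function name and numpy representation here are illustrative assumptions, not the patented implementation:

```python
import numpy as np

def region_from_path(path, shape):
    """Return the bounding box (top, left, bottom, right) enclosing a
    traced fingertip path, given as a list of (row, col) samples.

    A fuller implementation might rasterise the closed path into a mask
    of arbitrary shape; the bounding box is the simplest boundary.
    """
    pts = np.asarray(path)
    top, left = (int(v) for v in pts.min(axis=0))
    bottom, right = (int(v) for v in pts.max(axis=0))
    h, w = shape                   # clamp to the image bounds
    return (max(0, top), max(0, left), min(h - 1, bottom), min(w - 1, right))

# Four samples from a rough loop traced over a 200x200 scan:
print(region_from_path([(60, 80), (50, 120), (90, 130), (85, 70)], (200, 200)))
# -> (50, 70, 90, 130)
```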
In this embodiment, the physical pointing object is a finger of the user. In other embodiments, the pointing object may be any type of pointing device or a pointy object, such as a pen.
In this embodiment, the detection unit 30 comprises a sensor arranged to detect the presence, position or movement of a physical object. In this respect, the sensor may be an image sensor. Furthermore, in some embodiments, the sensor of the detection unit 30 may be the same image sensor used for capturing the image of the object to be scanned.
The storage unit 40 is arranged to store an image, and in particular, is arranged to store an image of only the selected area of the object to be scanned. In other words, although the image of the whole object to be scanned is captured by the image sensor of the scanning unit 20, only a particular part of the object to be scanned as selected by the user using a physical pointing object directly on the object to be scanned itself is stored. As such, this is a quick and efficient way of permanently storing a scanned image of only the wanted part. In some embodiments, the storage unit 40 may be a non-volatile storage unit. In other embodiments, the storage unit 40 may be a volatile storage unit such as a memory or buffer. In some embodiments, the image acquired by the image sensor of the scanning unit 20 may be displayed (either on a display integrated with the image capturing apparatus 10 or on a separate display), and the user may view this image while the physical pointing object(s) are moved into place. This enables the user to line up the physical pointing object(s) with the object to be scanned accurately.
In some embodiments, the image capturing apparatus uses the physical pointing object to determine a selected area of an image so as to crop the image.
In some embodiments, the physical pointing object can be used to determine a digital zoom factor for a camera or other such device, before an image is stored.
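One conceivable mapping from pointing objects to a zoom factor is the separation of two detected fingertips, pinch-style; the reference distance and function name below are assumptions for the sketch, not part of the disclosure:

```python
import math

def zoom_factor(p1, p2, reference_distance=100.0):
    """Map the separation of two detected fingertip positions (in pixels)
    to a digital zoom factor: fingers 'reference_distance' apart give 1x,
    wider apart zooms in, closer together zooms out."""
    d = math.dist(p1, p2)
    return max(0.1, d / reference_distance)   # floor avoids a zero zoom

print(zoom_factor((0, 0), (0, 150)))  # -> 1.5
```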
A method of scanning an object to be scanned, according to an embodiment of the present invention, will now be described with reference to Fig. 2. In Step S10, the scanning unit captures an image of an object to be scanned. In particular, the image sensor of the scanning unit 20 captures the image of the object to be scanned which has been placed in its field of view.
Thereafter, in Step S20, the detection unit 30 starts detecting whether a physical pointing object is present between the image sensor of the scanning unit and the object to be scanned. In particular, the detection unit 30 detects whether a particular physical pointing object is present in the space between the image sensor and the object to be scanned. If a particular physical pointing object is detected by the detection unit 30, in Step S30, the detection unit 30 detects the position of the physical pointing object relative to the object to be scanned. That is, the position of the physical pointing object on the object to be scanned is determined, and by doing so, the detection unit 30 determines to which point on the object to be scanned the physical pointing object is pointing. This enables the user to select, for example, a particular part of a document or a page being scanned. In Step S40, the detection unit 30 detects an area of an object to be scanned selected by the user. That is, based on the detected position of the physical pointing object relative to the object to be scanned, the detection unit 30 determines an area of the object to be scanned being selected by the user. In particular, by detecting the position of the physical pointing object relative to the object to be scanned, the detection unit 30 is able to determine to which area/part of the object to be scanned the physical pointing object is pointing, and using such information the detection unit is able to determine the area/part of the object to be scanned that is being selected by the user. Then, in Step S50, a scanned image of the selected area of the object to be scanned is stored in the storage unit 40. In particular, the area of the object to be scanned that is determined to be selected by the user is automatically extracted or cropped from the captured image of the object to be scanned and stored as a scanned image. 
As a result, the scanned image including only the selected area of the object to be scanned is stored.
The process of detecting an area of an object to be scanned selected by the user, according to another embodiment of the present invention, will now be described with reference to Fig. 3. Fig. 3 shows a method of selecting a rectangular area 70 of a document 60, according to an embodiment of the present invention. In this example, the document 60 contains a picture of the earth, which is the only part of this document the user wishes to scan and store. In other words, the user is not interested in the bottom half of this document. The rectangular area 70 which the user wishes to select is represented by the dotted lines.
In some embodiments, the image captured by the image sensor may be displayed on a display at the time of scanning. For example, with reference to Fig. 2, after the image of the document 60 has been captured in Step S10, the image of the whole document 60 may be displayed on a display. The user can then select an area 70 of the document whilst the image of the document is being displayed. This helps the user to see on the display whether his selected area is being correctly selected.
In this embodiment, to select a particular rectangular area 70 of the document, the user places his left index finger 80 on the first corner (top left corner) of the desired rectangular area 70 of the document 60. At the same time, the user places his right index finger 81 on a corner of the rectangular area diagonally opposite to the first corner (a third corner) - i.e. the bottom right corner. Of course, it will be appreciated that any other two fingers could be used (e.g. thumb and index finger of one hand). When the user places his fingers on the document 60, the detection unit 30 detects that a particular physical pointing object (i.e. an index finger of the user) has been placed between the image sensor and the document 60. The detection unit then detects the positions of the index fingers relative to the document 60. Thereafter, based on the positional information of the index fingers of the user, the detection unit 30 is able to determine the location of the first corner of a rectangular area 70 of the document 60. The position of the left index finger 80 on the document 60 indicates the location of the first corner of the rectangular area 70 of the document 60, and the position of the right index finger 81 on the document 60 indicates the location of the third corner of the rectangular area 70 of the document 60.
The positions of the left and right index fingers are not limited to defining the first and third corners only. For example, the left and right index fingers may define the second and fourth corners of the rectangular area 70. In other words, the positions of the left and right index fingers on the document can correspond to any pair of diagonally-opposite corners of the rectangular area 70. In effect, the positions of the left and right index fingers on the document 60 correspond to the location of two diagonally-opposite corners of the selected area 70 of the document 60. It follows that the location of the rectangular area 70 as a whole is determined by locating the positions of the user's left and right index fingers on the document 60. As such, an area to be scanned is selected by the user using his/her fingers, particularly by detecting the positions of the user's left and right index fingers on the document to be scanned. In other words, the rectangular area 70 is selected when the user places his left-hand index finger and right-hand index finger on the two diagonally-opposite corners of the rectangular area 70.
In this embodiment, the rectangular selected area 70 is determined based on the information regarding the locations of two diagonally-opposite corners of such area. This may be possible, for example, through a prerequisite or an assumption that the edges of the selected rectangular area 70 are parallel to the edges of the document 60 or page (i.e. the paper). Once the selected area has been detected in such a way, the scanned image
corresponding to only the selected area of the document is extracted from the captured image, and stored by the storage unit 40. Although embodiments of the present invention have been described in relation to the rectangular selected area 70 being selected based on the positions of the user's left- and right-hand fingers, in some embodiments the selection may be made using only one hand. For example, the user may position his right index finger on one corner (e.g. the first corner) of the rectangular area 70, followed by the positioning of the same index finger (right index finger) on the diagonally-opposite corner (e.g. the third corner). The detection unit detects the two sequential positions of the user's index finger on the document, and determines the locations of the two diagonally-opposite corners of the rectangular selected area 70. In other words, in some embodiments, two sequential positions of the physical pointing object correspond to the locations of the two diagonally-opposite corners of the rectangular selected area 70.
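The mapping from two diagonally-opposite corner positions to a crop of the captured image can be sketched in a few lines of Python. This is an illustrative sketch only, not part of the original disclosure; the function names and the representation of an image as a list of pixel rows are assumptions for the example.

```python
def rect_from_corners(p1, p2):
    """Given two diagonally-opposite corner points (x, y) -- in any
    order, e.g. the detected fingertip positions -- return the crop
    box (left, top, right, bottom) of the axis-aligned rectangle."""
    (x1, y1), (x2, y2) = p1, p2
    return (min(x1, x2), min(y1, y2), max(x1, x2), max(y1, y2))

def crop(image, box):
    """Extract only the selected area from a captured image,
    represented here as a list of pixel rows."""
    left, top, right, bottom = box
    return [row[left:right] for row in image[top:bottom]]

# Corners may be given in any diagonal order:
# rect_from_corners((10, 5), (2, 20)) -> (2, 5, 10, 20)
```

Because only the box matters, it does not matter whether the fingertips mark the first/third or second/fourth corners, which mirrors the flexibility described above.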
Embodiments of the present invention have been described in relation to the selected area being rectangular. However, the shape of the selected area is not limited to a rectangle or a square. In some embodiments, a path drawn by the pointing object may define the boundaries or edges of the selected area. For example, the user may outline the edges of the area of the document he wishes to scan using his index finger. In effect, the user draws the edges of the selected area using his finger. The detection unit detects the path of the finger on the document and determines the selected area. As a result, an area of the document may be selected based on the movement of the finger on the document.
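A path-based selection of this kind reduces to a point-in-polygon test over the pixels of the captured image. The following ray-casting sketch is illustrative only (it is not taken from the disclosure); the traced path is assumed to arrive as a list of (x, y) vertices.

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: is pixel (x, y) inside the closed path
    traced by the pointing object? `polygon` is a list of (x, y)
    vertices in drawing order."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Consider only edges that cross the horizontal ray at height y
        if (y1 > y) != (y2 > y):
            # x-coordinate where the edge crosses the ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside
```

Pixels for which the test returns True belong to the selected area; everything else can be discarded before storage.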
Also, in some embodiments, the user may select four corners of a rectangular selected area using thumbs and index fingers of both of his/her hands. For example, the user may position his left thumb and index finger on the two left-hand corners of the rectangular selected area, and position his right thumb and index finger on the two right-hand corners of the rectangular selected area.
Although embodiments of the present invention have been described in relation to the physical object being the user's finger, in other embodiments, the physical object may be any type of pointing device, such as a pen. As such, in such embodiments, the detection unit is configured to detect any type of pointing device, including a pen.
As discussed, the image capturing apparatus may be a scanner (e.g. a document scanner) or other suitable device. For example, the image capturing apparatus could be a camera (e.g. a smart phone camera) that uses the physical pointing object(s) (e.g. fingers) to crop a captured image.
In some embodiments, the scanning unit, detection unit and storage unit may be integrated in the same device. In other embodiments, the image capturing apparatus may be provided as a set of distributed components. For example, in some
embodiments, the scanning unit, detection unit and storage unit may be provided as separate components. Some embodiments of the invention provide an image processing apparatus comprising: a detection unit arranged to receive an image of an object, and to detect a position of a physical pointing object within the image in order to select a portion of the image; and a storage unit arranged to store only the selected portion of the image. In such embodiments, the detection unit processes a received image, for example an image captured by an external image sensor. The image processing apparatus could be, for example, a server connected to an external image sensor, or a host PC connected to a peripheral image sensor.
Embodiments of the present invention have been described in relation to an image capturing apparatus. However, in some embodiments, the image capturing apparatus may be a scanning apparatus that has been integrated in a display apparatus.
Fig. 4 shows a display apparatus 500 having an integrated scanning apparatus, according to another embodiment of the present invention. In this embodiment, the display apparatus 500 may comprise a display unit 510 for displaying an image and the scanning apparatus 100 for scanning an object. The display unit 510 and the scanning apparatus 100 are connected such that an image of the object scanned by the scanning apparatus 100 can be displayed on the display unit 510. In this embodiment, the object to be scanned is a document. However, in other embodiments, the object to be scanned may be any object or thing for which an image can be captured. For example, the object to be scanned may be a poster, painting, artwork, or an object on a wall.
In some embodiments, the display unit 510 may be a monitor. In other embodiments, the display unit 510 may be an integrated PC or a TV. The display unit 510 comprises a display which can be any display capable of reproducing image or video in 2D or 3D, for example, an organic light-emitting diode (OLED) panel, a liquid crystal display (LCD) panel, or a plasma display panel (PDP). In some embodiments, the display unit 510 comprises a touch screen device comprising a display with a touch-sensitive overlay for user interaction.
In this embodiment, the scanning apparatus 100 comprises an image sensor 110 and a lighting unit 120. The image sensor 110 captures the image of a document to be scanned, and is arranged such that it is directed at the document to be scanned. The lighting unit 120 illuminates the document placed on the scanning area when the image of the document is being taken by the image sensor 110.
To scan a document, the user places the document in front of the image sensor 110 and the image sensor 110 captures the image of the document. In this embodiment, the image sensor 110 is a downward-facing camera and therefore the document is placed substantially below the image sensor for scanning.
Fig. 5 shows an integrated display apparatus 500 (i.e. a display apparatus incorporating a scanning apparatus) according to an embodiment of the present invention. In this embodiment, the display apparatus 500 is supported by support legs 520 extending from the lower part of the display apparatus. In other words, the integrated display apparatus has a support structure 520 (support legs) which is arranged to support the display apparatus 500 above a surface. In effect, the display apparatus 500 is propped up by the support at a certain height above the surface of the table or the desk.
In this embodiment, the image sensor detects whether or not a document 60 has been placed below the image sensor. To this end, the image sensor is configured such that it can detect when a document 60 has been placed within its field of view. Specifically, when the image sensor 110 detects motion in its field of view it triggers the initiation of the scanning process. In other words, when a document 60 moves into its field of view, this sends a signal to start the scanning operation. In this embodiment, the image sensor 110 is located at a spot where its view of the document 60 to be scanned is unobstructed by any part of the display apparatus 500. As such, the image sensor 110 may be disposed on a lower part of the display apparatus 500 or near the base of the display apparatus 500.
A method of scanning using a display apparatus having an integrated scanning apparatus shall be described with reference to Fig. 6.
In Step S100, the scanning operation is initiated when a document 60 to be scanned is placed on the scanning area. The scanning area is an area on which the document 60 placed thereon can be scanned by the scanning apparatus 100.
In Step S200, the image sensor 110 detects whether or not a document 60 has been placed below the display apparatus 500. In other words, the image sensor 110 is configured such that it can detect when a document 60 has been placed on the scanning area. Specifically, when the image sensor 110 detects motion in its field of view it triggers the initiation of the scanning process. The field of view effectively covers the scanning area such that when a document moves into the scanning area (and thereby its field of view) this sends a signal to start the scanning operation of the display apparatus 500.
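The motion trigger described in Step S200 can be sketched with simple frame differencing over consecutive greyscale frames. This is an illustrative approach only, not the patent's specified method; the threshold values are assumptions chosen for the example.

```python
def motion_detected(prev_frame, frame, threshold=10, min_changed=0.01):
    """Return True when enough pixels change between two consecutive
    greyscale frames (pixel values 0-255), which would trigger the
    scanning operation. `threshold` is the per-pixel intensity change
    that counts as movement; `min_changed` is the fraction of pixels
    that must change. Both are illustrative tuning values."""
    changed = 0
    total = 0
    for prev_row, row in zip(prev_frame, frame):
        for a, b in zip(prev_row, row):
            total += 1
            if abs(a - b) > threshold:
                changed += 1
    return changed / total >= min_changed
```

A production system would typically also debounce the trigger (e.g. wait for the scene to become still again) so that the image is captured after the document has been placed, not while it is still moving.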
In some embodiments of the invention, the image sensor 110 is a downward-facing image sensor. In other words, in such embodiments, the image sensor 110 is directed downwards towards the surface on which the display apparatus 500 stands. This is in effect a top-down scanner (i.e. scanned from the top).
In this embodiment, the image sensor 110 is located at a spot where its view of the scanning area, and consequently the document on the scanning area, is unobstructed by any part of the display apparatus 500. As such, the image sensor may be disposed on a lower part of the display apparatus 500 or near the base of the display apparatus 500.
For example, as exemplified in Fig. 5, the image sensor 110 may be located on a bottom edge or a bottom surface of the display unit 510. In other words, in this embodiment, the image sensor 110 is located on a part of the display apparatus 500 from which it has an unobstructed view of any document placed on the scanning area. Upon detection of a document 60 in the scanning area, the scanning unit starts the scanning operation. In this regard, in some embodiments, a confirmation by the user to proceed to the next stage (i.e. the scanning process) may be required after the detection has been made by the image sensor 110. Alternatively, the scanning operation could be a manual process with no document detection.
In Step S300, when a document to be scanned has been placed on the scanning area and the image sensor 110 has detected that a document has indeed been placed on the scanning area, the lighting unit 120 is activated.
In this embodiment, the lighting unit 120 is designed to provide the illumination required for the scanning operation. To this end, the lighting unit 120 is arranged to light up the scanning area, and effectively, the document. As a result, the lighting unit 120 provides the necessary lighting on the scanning area for the image sensor 110 to capture a clear image of the document to be scanned. In other embodiments, sufficient lighting may be provided by other sources (e.g. external lighting devices) and as such no lighting unit is required.
In Step S400, once the lighting unit 120 has been activated and the document 60 in the scanning area illuminated, the image sensor 110 captures the image of the document placed on the scanning area.
Step S400 corresponds to Steps S10 to S50 of Fig. 2. In other words, in this
embodiment, when the process reaches Step S400, Steps S10 to S50 of Fig. 2 are performed.
Thereafter, in Step S500, depending on whether or not an area 70 of the document 60 has been selected, either a scanned image of only the selected area 70 of the document or the captured image of the whole document 60 is displayed on the display unit 510.
In Step S600, in this embodiment, the image sensor 110 continues to detect whether a new page or a new document has been placed on the scanning area even after the image of the document has been captured. In particular, the image sensor 110 continues to detect whether or not a document has been placed below the display apparatus 500. In other words, the image sensor continues to detect whether or not a document is present on the scanning area. As such, if a new page of the document or a new document is detected on the scanning area by the image sensor after the scanning operation of one document has been completed, the scanning operation is triggered again. As a result, Steps S300 to S500 of Fig. 6 are repeated for the new page.

Although embodiments of the present invention have been described in relation to an image sensor for capturing the image of the document, the present invention is not limited to a single image sensor for capturing the image, and a plurality of image sensors may be used. In some embodiments, the scanning unit comprises two image sensors to capture stereoscopic images of the document on the scanning area. For example, as exemplified in Figs. 7 and 8, two image sensors 310, 320 are disposed along a bottom edge of the display unit of the display apparatus, facing the scanning area 610. Specifically, each image sensor 310, 320 captures the image of the document from its respective viewpoint. A composite image (i.e. a stereo image) is then created by the scanning unit from the two images captured by the two image sensors 310, 320. The advantage of making a stereo image of the document is that an accurate image of the document can still be made even when the document has not been placed flat on the scanning area.
If the document is not perfectly flat (for example, a creased letter or a thick book), the scanning unit captures the images from two different angles (viewpoints), and then combines the two images to form a flattened image, which is subsequently displayed as if the document were flat. This helps to solve the difficulty of scanning a document which has been folded and does not stay flat. In such situations, the user is normally required to hold the document down flat on the surface with, for example, his/her hand or a flat cover on top of the document. For a device which has the image sensor above the document, this poses a problem: if the user does not place anything on top of the document to hold it down flat on the surface, the image captured by a single sensor will not be a perfectly flat copy of the document. In such embodiments, two image sensors 310, 320 are used to create a flattened stereo image of the documents placed on the scanning area, thereby improving image pickup.
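The underlying stereo cue can be sketched as block matching along a scanline: the horizontal offset (disparity) between the two views encodes how far each part of the page sits from the sensors, which is what a de-warping step would correct. This is an illustrative simplification, not the patent's method; a practical system would run a library stereo matcher (e.g. OpenCV) over full images rather than a single row.

```python
def disparity_row(left, right, window=3, max_disp=8):
    """For one greyscale scanline from each camera, estimate the
    horizontal shift (disparity) at each position by minimising the
    sum of absolute differences over a small window. A larger
    disparity means that part of the page is closer to the sensors,
    i.e. it is bulging rather than lying flat."""
    half = window // 2
    disparities = []
    for x in range(half, len(left) - half):
        best_d, best_cost = 0, float("inf")
        # A feature at left[x] appears at right[x - d] for disparity d
        for d in range(min(max_disp, x - half) + 1):
            cost = sum(abs(left[x + k] - right[x - d + k])
                       for k in range(-half, half + 1))
            if cost < best_cost:
                best_d, best_cost = d, cost
        disparities.append(best_d)
    return disparities
```

From a per-pixel disparity (depth) map, the flattening step would warp each region back onto the page plane before the two views are merged into the composite image.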
Furthermore, although embodiments of the present invention have been described in relation to an image sensor for detecting a position of the physical pointing object relative to the document, the present invention is not limited to a single image sensor for such detection, and a plurality of image sensors may be used. In some embodiments, the detection unit comprises two image sensors to provide stereoscopic images of an object. This provides a more accurate detection of not only the position of the physical pointing object but also the orientation thereof. This in turn provides more accurate information of where on the document the physical pointing object is pointing to. In other words, the detection unit having two image sensors to produce stereoscopic images provides a more accurate detection of the position and orientation of the physical pointing object compared to having a single image sensor. Some embodiments of the invention provide an image capture apparatus comprising a scanning unit comprising an image sensor arranged to capture an image, a detection unit arranged to detect an area of the image selected by a user, wherein the detection unit is arranged to detect a position of a physical pointing object within the image in order to determine the selected area, and to crop the image according to the selected area.
As discussed above, some embodiments of the invention provide an image capturing apparatus, the image capturing apparatus comprising a scanning unit comprising an image sensor arranged to capture an image of an object to be scanned, a detection unit arranged to detect an area of the object to be scanned selected by a user, wherein the detection unit is arranged to detect a position of a physical pointing object relative to the object to be scanned in order to determine the selected area, and a storage unit arranged to store a scanned image including only the selected area of the object to be scanned. The detection unit may detect the position of a physical pointing object within the image in order to select a portion of the image as the selected area.
In some embodiments, the image acquired by the image sensor of the scanning unit 20 may be displayed (either on a display integrated with the image capturing apparatus 10 or on a separate display), and the user may view this image while the physical pointing object(s) are moved into place. This enables the user to line up the physical pointing object(s) with the object to be scanned accurately.
Some embodiments of the invention provide a display apparatus comprising: an image capturing apparatus according to any one of the above mentioned embodiments that is integrated in the display apparatus, and a display unit arranged to display the image captured by the image sensor. Some embodiments of the invention provide an image processing apparatus comprising: a detection unit arranged to receive an image of an object, and to detect a position of a physical pointing object within the image in order to select a portion of the image; and a storage unit arranged to store only the selected portion of the image.
Whilst certain embodiments of the present invention have been described above, the skilled person will understand that many variations and modifications are possible without departing from the scope of the invention as defined by the accompanying claims.

Claims
1. An image capturing apparatus (10) comprising:
a scanning unit (20) comprising an image sensor (110) arranged to capture an image of an object to be scanned (60);
a detection unit (30) arranged to detect an area (70) of the object to be scanned selected by a user, wherein the detection unit is arranged to detect a position of a physical pointing object (80, 81) relative to the object to be scanned in order to determine the selected area (70); and
a storage unit (40) arranged to store a scanned image including only the selected area (70) of the object to be scanned.
2. The scanning apparatus of claim 1, wherein the detection unit is arranged to detect the position of the physical pointing object when the physical pointing object is between the object to be scanned (60) and the scanning unit (20).
3. The scanning apparatus of claim 1 or 2, wherein the physical pointing object (80, 81) comprises a finger of the user.
4. The scanning apparatus of any one of the preceding claims, wherein the selected area is rectangular, and a corner of the rectangular selected area is determined by the position of the physical pointing object.
5. The scanning apparatus of claim 4, wherein the physical pointing object comprises a first pointer (80) and a second pointer (81), and
wherein two diagonally-opposite corners of the rectangular selected area are determined by positions of the first and second pointers, respectively.
6. The scanning apparatus of claim 5, wherein the first pointer is a first finger of the user, and the second pointer is a second finger of the user.
7. The scanning apparatus of any one of claims 1 to 4, wherein the detection unit is arranged to detect a movement of the physical pointing object and to determine a boundary of the selected area based on a path formed by the movement of the physical pointing object relative to the object to be scanned.
8. The scanning apparatus of any preceding claim, wherein the image sensor comprises a downward-facing camera arranged in use to be substantially above the object to be scanned.
9. The scanning apparatus of any preceding claim, wherein the image sensor comprises two downward-facing cameras (310, 320) for capturing stereoscopic images of the object to be scanned.
10. The scanning apparatus of any one of the preceding claims, wherein the scanning unit comprises a lighting unit (120) arranged to illuminate the object to be scanned during scanning.
11. A display apparatus (500) comprising:
the image capturing apparatus according to any one of the preceding claims, wherein the image capturing apparatus is integrated in the display apparatus; and
a display unit (510) arranged to display the image captured by the image sensor.
12. The display apparatus of claim 11, wherein the scanning unit is arranged to scan a document when placed on a scanning area, the display apparatus further comprising a support (520) arranged to support the display unit above the scanning area.
13. The display apparatus of claim 11 or 12, wherein the display apparatus is arranged to stand on a surface and a document to be scanned is placed substantially below the display apparatus.
14. The display apparatus of any one of claims 11 to 13, wherein the display apparatus further comprises a support base arranged to support the display apparatus on a surface, and, when scanning the document, the document is placed on a portion of the support base.
15. The display apparatus of any one of claims 11 to 14, wherein the display unit comprises an image displaying portion and an edge portion disposed around peripheral edges of the image displaying portion, and
wherein the image sensor is disposed on a lower region of the edge portion.
16. The display apparatus of any one of the claims 11 to 15, wherein the display apparatus is any one of a monitor, an integrated personal computer or a television.
17. An image capturing method comprising:
capturing an image of an object to be scanned (60);
detecting an area (70) of the object to be scanned selected by a user by detecting a position of a physical pointing object (80, 81) relative to the object to be scanned in order to determine the selected area; and
storing a scanned image including only the selected area of the object to be scanned.
18. The method of claim 17, wherein detecting the selected area comprises detecting the position of the physical pointing object when the physical pointing object is between the object to be scanned and the scanning unit.
19. The method of claim 17 or 18, wherein the physical pointing object comprises a finger of the user.
20. The method of any one of claims 17 to 19, wherein the selected area is rectangular, and a corner of the rectangular selected area is determined by the position of the physical pointing object.
21. The method of claim 20, wherein the physical pointing object comprises a first pointer (80) and a second pointer (81), and
wherein two diagonally-opposite corners of the rectangular selected area are determined by positions of the first and second pointers, respectively.
22. The method of claim 21, wherein the first pointer is a first finger of the user, and the second pointer is a second finger of the user.
23. The method of any one of claims 17 to 20, wherein the detecting of the selected area comprises detecting a movement of the physical pointing object and determining a boundary of the selected area based on a path formed by the movement of the physical pointing object relative to the object to be scanned.
24. An image processing apparatus comprising:
a detection unit arranged to receive an image of an object, and to detect a position of a physical pointing object within the image in order to select a portion of the image; and
a storage unit arranged to store only the selected portion of the image.
25. An image capture apparatus comprising:
a scanning unit comprising an image sensor arranged to capture an image;
a detection unit arranged to detect an area of the image selected by a user, wherein the detection unit is arranged to detect a position of a physical pointing object within the image in order to determine the selected area, and to crop the image according to the selected area.
PCT/GB2014/053771 2014-01-02 2014-12-18 Image capturing apparatus WO2015101775A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GBGB1400035.0A GB201400035D0 (en) 2014-01-02 2014-01-02 Image Capturing Apparatus
GB1400035.0 2014-01-02

Publications (1)

Publication Number Publication Date
WO2015101775A1 true WO2015101775A1 (en) 2015-07-09

Family

ID=50191690

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2014/053771 WO2015101775A1 (en) 2014-01-02 2014-12-18 Image capturing apparatus

Country Status (2)

Country Link
GB (1) GB201400035D0 (en)
WO (1) WO2015101775A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050036708A1 (en) * 2003-08-11 2005-02-17 David Boll Systems and methods for cropping captured images
EP1662362A1 (en) * 2004-11-26 2006-05-31 Océ-Technologies B.V. Desk top scanning with hand gestures recognition
US20100194908A1 (en) * 2009-02-04 2010-08-05 Seiko Epson Corporation Image input device, image display device, and image display system
US20130083176A1 (en) * 2010-05-31 2013-04-04 Pfu Limited Overhead scanner device, image processing method, and computer-readable recording medium
US20130141556A1 (en) * 2011-12-01 2013-06-06 Kamran Siminou Viewing aid with tracking system, and method of use
US20130321858A1 (en) * 2012-06-01 2013-12-05 Pfu Limited Image processing apparatus, image reading apparatus, image processing method, and image processing program


Also Published As

Publication number Publication date
GB201400035D0 (en) 2014-02-19

Similar Documents

Publication Publication Date Title
US8345106B2 (en) Camera-based scanning
US8896688B2 (en) Determining position in a projection capture system
KR101969244B1 (en) Communication apparatus, method of controlling communication apparatus, computer-readable storage medium
US8964259B2 (en) Image processing apparatus, image reading apparatus, image processing method, and image processing program
WO2011007746A1 (en) Fingertip-manipulation-type information providing system, interactive manipulation device, computer program, and storage medium
JP2013505669A5 (en)
US7110619B2 (en) Assisted reading method and apparatus
Matsushita et al. Interactive bookshelf surface for in situ book searching and storing support
JP5817149B2 (en) Projection device
US9792012B2 (en) Method relating to digital images
CN105607825B (en) Method and apparatus for image processing
US10037132B2 (en) Enlargement and reduction of data with a stylus
JP2020091745A (en) Imaging support device and imaging support method
JP2010272078A (en) System, and control unit of electronic information board, and cursor control method
WO2015101775A1 (en) Image capturing apparatus
JP4951266B2 (en) Display device, related information display method and program
JP5292210B2 (en) Document presentation device
JP7206739B2 (en) Acquisition equipment and program
JP6399135B1 (en) Image input / output device and image input / output method
JP6312488B2 (en) Image processing apparatus, image processing method, and program
WO2015087071A1 (en) Display apparatus and integrated scanner
JP2016051191A (en) Image processing method
JP2014137788A (en) Information display device
JP5118663B2 (en) Information terminal equipment
EP3417608B1 (en) System and method for video processing

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14815834

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 08/09/2016)

122 Ep: pct application non-entry in european phase

Ref document number: 14815834

Country of ref document: EP

Kind code of ref document: A1