GB2517674A - Image capture using client device - Google Patents

Image capture using client device

Info

Publication number
GB2517674A
GB2517674A GB1308954.5A GB201308954A GB2517674A GB 2517674 A GB2517674 A GB 2517674A GB 201308954 A GB201308954 A GB 201308954A GB 2517674 A GB2517674 A GB 2517674A
Authority
GB
United Kingdom
Prior art keywords
image
frame
focus
card
adaptive threshold
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB1308954.5A
Other versions
GB201308954D0 (en)
Inventor
Liu Zizhou
Warren Blumenow
Daniel Hegarty
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
WONGA Tech Ltd
Original Assignee
WONGA Tech Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by WONGA Tech Ltd filed Critical WONGA Tech Ltd
Priority to GB1308954.5A priority Critical patent/GB2517674A/en
Publication of GB201308954D0 publication Critical patent/GB201308954D0/en
Priority to PCT/EP2014/060154 priority patent/WO2014184372A1/en
Publication of GB2517674A publication Critical patent/GB2517674A/en
Withdrawn legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/0035User-machine interface; Control console
    • H04N1/00405Output means
    • H04N1/00477Indicating status, e.g. of a job
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/783Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G06F16/7837Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using objects detected or recognised in the video content
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B7/00Mountings, adjusting means, or light-tight connections, for optical elements
    • G02B7/28Systems for automatic generation of focusing signals
    • G02B7/36Systems for automatic generation of focusing signals using image sharpness techniques, e.g. image processing techniques for generating autofocus signals
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B7/00Mountings, adjusting means, or light-tight connections, for optical elements
    • G02B7/28Systems for automatic generation of focusing signals
    • G02B7/36Systems for automatic generation of focusing signals using image sharpness techniques, e.g. image processing techniques for generating autofocus signals
    • G02B7/365Systems for automatic generation of focusing signals using image sharpness techniques, e.g. image processing techniques for generating autofocus signals by analysis of the spatial frequency components of the image
    • G06T5/70
    • G06T5/73
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/62Text, e.g. of license plates, overlay texts or captions on TV images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/14Image acquisition
    • G06V30/142Image acquisition using hand-held instruments; Constructional details of the instruments
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/0035User-machine interface; Control console
    • H04N1/00405Output means
    • H04N1/0049Output means providing a visual indication to the user, e.g. using a lamp
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/40Picture signal circuits
    • H04N1/409Edge or detail enhancement; Noise or error suppression
    • H04N1/4092Edge or detail enhancement
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81Monomedia components thereof
    • H04N21/8146Monomedia components thereof involving graphical data, e.g. 3D object, 2D graphics
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/67Focus control based on electronic image sensor signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • H04N23/743Bracketing, i.e. taking a series of images with varying exposure conditions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20036Morphological image processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20048Transform domain processing
    • G06T2207/20061Hough transform
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20172Image enhancement details
    • G06T2207/20192Edge enhancement; Edge preservation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30176Document
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/60Rotation of a whole image or part thereof
    • G06T3/608Skewing or deskewing, e.g. by two-pass or three-pass rotation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/10Image acquisition
    • G06V10/17Image acquisition using hand-held instruments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/00127Connection or combination of a still picture apparatus with another apparatus, e.g. for storage, processing or transmission of still picture signals or of information associated with a still picture
    • H04N1/00204Connection or combination of a still picture apparatus with another apparatus, e.g. for storage, processing or transmission of still picture signals or of information associated with a still picture with a digital computer or a digital computer system, e.g. an internet server
    • H04N1/00244Connection or combination of a still picture apparatus with another apparatus, e.g. for storage, processing or transmission of still picture signals or of information associated with a still picture with a digital computer or a digital computer system, e.g. an internet server with a server, e.g. an internet server
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • H04N21/23418Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/27Server based end-user applications
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/4104Peripherals receiving signals from specially adapted client devices
    • H04N21/4126The peripheral being portable, e.g. PDAs or mobile phones
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/414Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance
    • H04N21/41407Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance embedded in a portable device, e.g. video client on a mobile phone, PDA, laptop
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44008Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N2201/00Indexing scheme relating to scanning, transmission or reproduction of documents or the like, and to details thereof
    • H04N2201/0077Types of the still picture apparatus
    • H04N2201/0084Digital still camera
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N2201/00Indexing scheme relating to scanning, transmission or reproduction of documents or the like, and to details thereof
    • H04N2201/0096Portable devices

Abstract

Methods, operable on client devices, for capturing an image of a document. One method comprises: receiving one or more frames; processing each frame by emphasising horizontal and vertical structures to produce an edge image; using an edge detection algorithm on each edge image to detect edges of the document; and automatically capturing a still image of the document once its edges have been located. Another method comprises: receiving a video stream comprising a sequence of frames; generating a focus metric indicating the sharpness of each frame; establishing and varying an adaptive threshold of the focus metric based on the variation of the focus metric over a number of frames; and capturing a still image of the document once it is determined that the focus metric of a current frame is above the adaptive threshold. Another method comprises: receiving a frame containing an image of the document; using an edge detector to detect candidate edges and produce an edge image; detecting corner points at candidate edge intersections; correcting perspective so that the image corner points are at the corners of a rectangle.

Description

IMAGE CAPTURE USING CLIENT DEVICE
BACKGROUND OF THE INVENTION
This invention relates to methods and systems for efficient capturing of images of documents such as cards and the like using mobile devices. In particular, embodiments of the invention relate to capturing of images for optical character recognition of data on a card using a mobile device such as a smart phone.
Optical character recognition techniques are known for the automated reading of characters. For example, scanners for the automated reading of text on A4 pages and for scanning text on business cards and the like are known. However, such devices and techniques typically operate in controlled lighting conditions and capture plain, non-reflective surfaces.
SUMMARY OF THE INVENTION
We have appreciated the need for improved methods, systems and devices for capturing and processing images of documents such as cards and other regular shaped items bearing alphanumeric data. In particular, we have appreciated the need for capturing images of personal cards such as credit cards, ID cards, cheques and the like for very rapid input of data from an image of the object using a mobile device.
Various attempts have also been made to automatically capture information from more challenging image surfaces such as credit card sized cards using devices such as smart phones. However, we have appreciated problems in capturing images from surfaces of such cards due to factors such as the variety of surface pattern arrangements and reflectivity of card surfaces.
The invention is defined in the claims to which reference is now directed. In broad terms, the invention resides in three areas.
In a first aspect, an embodiment of the invention provides a new approach to detecting the edge of a card in an image. In a second aspect, a new focus detection process is provided. In a third aspect, a new card framing process is provided.
BRIEF DESCRIPTION OF THE DRAWINGS
The invention will now be described in more detail by way of example with reference to the drawings, in which:
Figure 1: is a functional diagram of the key components of a system embodying the invention;
Figure 2: shows the framing of a card image;
Figure 3: is a flow diagram showing the main process for capturing a card image;
Figure 4: shows an image of a card;
Figure 5: shows the card image of Figure 4 after filtering using a known filter;
Figure 6: shows the card image of Figure 4 after filtering and channel processing according to an embodiment of the invention;
Figure 7a: shows a focus value against a frame count value for a series of images;
Figure 7b: shows a series of images;
Figure 8: shows a focus selection using regions of interest;
Figure 9: shows a sliding window algorithm;
Figure 10: shows a first step of a card framing process;
Figure 11: shows a second step of a card framing process;
Figure 12: shows a third step of a card framing process;
Figure 13: shows a final image resulting from the card framing process; and
Figure 14: shows an image uploading arrangement.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
The invention may be embodied in methods of operating client devices, methods of using a system involving a client device, client devices, modules within client devices and computer instructions for controlling operation of client devices.
Client devices include personal computers, smart phones, tablet devices and other devices useable to access remote services. A client device embodying the invention is arranged to capture an image of a document such as a card. Such a card may be a credit card, debit card, store card, driving licence, ID card or any of a number of credit card sized items on which text and other details are printed. For ease of description, such cards will be simply referred to hereafter as "cards", and include printed and embossed cards, with or without a background image. Other objects with which the embodying device and methods may be used include cheques, printed forms and other such documents. In general, the embodying device and processes are arranged for capture of images of rectangular documents, in particular cards, which are one type of document.
A system embodying the invention is shown in Figure 1. The system shown in Figure 1 comprises a mobile client device such as a smart phone, tablet device or the like 2 and a server system 20 to which the client device connects via any known wired or wireless network such as the Internet. The client device 2 comprises a processor, memory, battery source, screen, camera and input devices such as keyboard or touch screen. Such hardware items are known and will not be described further. The device is arranged to have a number of separate functional modules, each of which may be operable under the command of executable code. As such, the functional components may be considered as either hardware modules or as software components.
A video capture module 10 is arranged to produce a video stream of images comprising a sequence of frames. The video capture module 10 will therefore include imaging optics, sensors, executable code and memory for producing a video stream. The video capture module provides the sequence of frames to a card detection module 12 and a focus detection module 14.
The card detection module 12 provides the functionality for determining the edges of a card and then determining if the card is properly positioned. This module provides an edge detection algorithm and a Hough transform based card detection algorithm. The latter consumes the edge images, which are generated by the former, and determines whether the card is properly positioned in each frame of the video stream. The focus detection module 14 is arranged to determine which frames of a sequence of frames are in focus. One reason for providing such focus detection is that many smart phones do not allow applications to control the actual focus of the camera system, and so the card detection arrangement is reliant upon the camera autofocus. This module features an adaptive threshold algorithm, which has been developed to determine the focus status of the card in each frame of the video stream. The adaptive threshold algorithm consumes focus values calculated by one of a number of focus metrics discussed later. A card framing module 16 is arranged to produce a final properly framed image of a card. This module combines a card detection process and card framing algorithm and produces a properly framed card image from a high-resolution still image. An image upload module 18 is arranged to upload the card image to the server 20.
The overall operation of the system shown in Figure 1 will now be described before discussing each of the functional modules in turn.
A front end client application of the client device 2, comprising the modules described, produces a live video stream of the user's card using the user device's camera while the user positions the card in a specific region indicated by the application (referred to as the "card alignment box" shown in Figure 2). The functional modules then operate as quickly as possible to produce a properly framed, in-focus image of the card. The main modules operate as follows. The card detection module 12 analyses live video frames to determine whether the card is properly positioned. The focus detection module 14 processes live video frames and decides whether the camera is properly focused on the card. Once the card is properly positioned and the card is in focus, the application causes the user device to take a new still image automatically. This still image, along with card position metrics generated by the card detection algorithm, is sent to the card framing module 16. The card framing module 16 consumes the still image and produces a properly framed card image for upload by an image upload module 18. The properly framed card image is then uploaded to a backend server 20 for Optical Character Recognition (OCR). Once OCR has been completed, the OCR result is placed on the backend server ready for the client application of the user device to consume. The high resolution images captured are also uploaded to remote storage of the server in the background via a queuing mechanism, shown here as image upload module, designed to minimise the effect on the user experience of the application.
The output of the process is a properly framed card image in the sense that all background details are removed from the original image and only the card region is extracted, and the final card image has no perspective distortion, as shown in Figure 13 and described later.
The modules will now be described in turn. The modules may be provided by dedicated hardware, but the preferred embodiment is for each module to be provided as program code executable by a client device.
Card Detection Process A card detection process embodying the invention will now be described with reference to Figures 3 to 6. The purpose of the card detection process is to assist a user to position the card correctly, and decide whether the card is properly aligned within the frame and in focus. The output is a captured high-resolution image and card position metrics.
We have appreciated a number of problems involved in detecting the card. First, the diversity of cards means that the process needs to work well across a high diversity of card surfaces. In addition, the likely cluttered background during card detection means that the process should be able to detect a card placed against a cluttered background. The process should also perform all processing in real-time to ensure a responsive user experience and provide a sense of control when the user uses the application to capture a card. The user should be able to use the system easily, for example choosing to place the card on a surface or holding the card with their fingers while capturing. The process should also be able to detect cards in cases when one or more of the card's corners are occluded or the card's edges are partially occluded, such as by the user's finger.
In order to address the various problems noted, the card detection module 12 operates a process as shown in Figure 3, which operates on an incoming video stream.
For each incoming video frame the process operates the following steps: At step 32, the process extracts from the original frame a sub-image that potentially contains the card (as shown in Figure 4) and optionally downsamples the image to speed up subsequent processing. Figure 5 shows an output of processing the image of Figure 4 using a known edge detection algorithm. As can be seen, many unwanted "edges" have been detected in addition to the actual card edges.
At step 34, the process produces a binary image of edge segments (as shown in Figure 6) by applying a new edge detection algorithm to the downsampled sub-image.
The edge detection algorithm takes into account the nature of the image being analysed (a card) and uses techniques to improve upon more general known algorithms. The edge detection is defined by steps 1 to 4 below.
The edge detection algorithm operates as follows: Step 1 provides directional blurring: On the original image, in the top and bottom edge areas, use a horizontal kernel for blurring; in the left and right edge areas use a vertical kernel (the transpose of the horizontal kernel) for blurring.
This operation removes some unwanted noise and intensifies the card edges.
The directional blurring may comprise one or more of a variety of processes that operate to reduce the rate of change of an image in a particular direction. Such processes include smoothing or blurring algorithms such as a Gaussian. In the preferred arrangement, the directional blurring operates in one dimension at a time on a line by line basis. The horizontal blurring reduces the rate of change in the horizontal direction and thereby emphasises the rate of change in the vertical direction (emphasising a horizontal line). Similarly, the vertical blurring reduces the rate of change in the vertical direction and thereby emphasises the rate of change in the horizontal direction (emphasising a vertical line).
Referring again to Figure 4, the top and bottom edge areas 48 are above and below boundary lines 46, and the left and right edge areas are outside the vertical boundary lines 47.
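By way of illustration only, step 1 might be sketched as follows in Python with OpenCV. The averaging kernel, its length and the fixed band height used to stand in for the edge areas 46-48 are assumptions for the sketch, not values taken from the embodiment.

```python
import cv2
import numpy as np

def directional_blur(image, band=40):
    """Blur along x in the top/bottom edge areas and along y in the left/right
    edge areas, suppressing noise while keeping each band's card edge intact."""
    out = image.copy()
    h_kernel = np.ones((1, 9), np.float32) / 9.0  # horizontal averaging kernel
    v_kernel = h_kernel.T                         # transpose: vertical kernel
    h, w = image.shape[:2]
    out[:band] = cv2.filter2D(image[:band], -1, h_kernel)            # top area
    out[h - band:] = cv2.filter2D(image[h - band:], -1, h_kernel)    # bottom area
    out[:, :band] = cv2.filter2D(image[:, :band], -1, v_kernel)      # left area
    out[:, w - band:] = cv2.filter2D(image[:, w - band:], -1, v_kernel)  # right area
    return out
```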
Step 2 uses a directional filter: A Sobel edge detector is preferably used to operate on the edge areas and outputs derivatives of the gradient changes. From the derivatives produced, the magnitudes and directions of the gradient changes are calculated. Then a fixed threshold is applied on the outputted magnitude to select pixels of strong gradient changes (usually edges in the image are preserved), further filtering the pixels based on directions of gradient changes.
For top and bottom areas, horizontal edges (where gradient changes are nearly vertical) are preserved; for left and right areas, vertical edges (where gradient changes are nearly horizontal) are preserved. Finally, a binary image is outputted, which only contains promising edge pixels. The directional filter may be any of a number of different filters, all of which have in common that they produce an output giving magnitude and direction of gradient changes in the image.
The top, bottom, left and right areas may be as defined in relation to step 1. By applying thresholds to the gradient changes according to the region of the image, the desired horizontal and vertical lines that are detected as edges are enhanced.
Step 3 Multi-channel processing: a directional filter such as the Sobel edge detector operates separately on the R, G, and B channels and the final derivatives of gradient changes are aggregated from all channels by taking the maximum value from the outputs of all channels at each pixel location. Multi-channel processing increases the sensitivity of the card detection algorithm in cases where the environment in which the card image was captured is such that luminance contrast between the card and the background is low but chroma contrast is high. The multi-channel processing may be in any colour space, or could be omitted entirely. The choice of R, G, B colour space is preferred, but alternatives such as CMYK are also possible. The advantage of processing each channel, then aggregating to take a maximum at each pixel location, is that this caters for diverse card and background colours as well as diverse lighting conditions.
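A sketch combining steps 2 and 3 under similar assumptions is shown below; the magnitude threshold and angle tolerance are illustrative, and note that cv2.split yields the channels of an OpenCV BGR image.

```python
import cv2
import numpy as np

def directional_edges(region, want_horizontal, mag_thresh=80.0, angle_tol=20.0):
    """Steps 2-3: run Sobel on each colour channel, aggregate by per-pixel
    maximum magnitude, then keep pixels whose gradient is strong and whose
    gradient direction matches the edge orientation wanted for this area."""
    best_mag, best_ang = None, None
    for ch in cv2.split(region):                      # B, G, R channels
        gx = cv2.Sobel(ch, cv2.CV_32F, 1, 0, ksize=3)
        gy = cv2.Sobel(ch, cv2.CV_32F, 0, 1, ksize=3)
        mag, ang = cv2.cartToPolar(gx, gy, angleInDegrees=True)
        if best_mag is None:
            best_mag, best_ang = mag, ang
        else:
            best_ang = np.where(mag > best_mag, ang, best_ang)
            best_mag = np.maximum(best_mag, mag)
    a = np.mod(best_ang, 180.0)
    if want_horizontal:  # horizontal edges: gradient direction nearly vertical
        dir_ok = np.abs(a - 90.0) < angle_tol
    else:                # vertical edges: gradient direction nearly horizontal
        dir_ok = (a < angle_tol) | (a > 180.0 - angle_tol)
    return ((best_mag > mag_thresh) & dir_ok).astype(np.uint8) * 255
```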
Step 4 Directional Morphological Operation: On the filtered edge image, in the top and bottom edge areas, erode with [1,1,1,1,1,1,1] to remove false edges; in the left and right edge areas, erode with [1,1,1,1,1,1,1]T (the transpose of the horizontal erosion mask). After erosion, apply dilation with the same masks in the edge areas. This operation removes some false edges and intensifies card edges. The final image looks like Figure 6. The morphological operations improve the output image by removing pixels that appear to be small "edges" (as shown by the edge clutter in Figure 5). The erosion operation computes a local minimum for the specified kernel, so will reduce visibility of vertical structures in the top and bottom areas and reduce visibility of horizontal structures in the left and right areas. The dilation takes the maximum for the specified kernel and so emphasises horizontal structures in the top and bottom areas and emphasises vertical structures in the side areas. In the arrangement described, erosion precedes dilation. The purpose of the erosion step is to remove the remaining false edge segments in the binary image; after erosion, a dilation operation is used to fill up small gaps between the edge segments and compensates for the effect of erosion on the genuine edge segments.
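A minimal sketch of step 4, assuming the seven-element masks reconstructed above:

```python
import cv2
import numpy as np

def directional_morphology(edge_band, horizontal=True):
    """Step 4: erode then dilate with a directional kernel. Erosion removes
    short false edge segments; dilation closes small gaps in genuine edges."""
    kernel = np.ones((1, 7), np.uint8) if horizontal else np.ones((7, 1), np.uint8)
    return cv2.dilate(cv2.erode(edge_band, kernel), kernel)
```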
At step 36, the process detects the card edges by using the Probabilistic Hough Transform for line detection on the binary image of edge segments. For each edge line that is detected that matches the specified conditions for a card edge (minimum length of the edge line, angle of the edge line, prediction error of the edge line), the process calculates, at step 38, line metrics (line function and line end points) for the detected edge line. The Hough Transform provides extra information about the lines detected within the image of Figure 6.
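The line-detection step might look like the following sketch using OpenCV's Probabilistic Hough Transform; the vote threshold, minimum line length, maximum gap and angle tolerances are illustrative stand-ins for the specified conditions, and the prediction-error test is omitted:

```python
import cv2
import numpy as np

def detect_card_edge_lines(binary_edges, min_len=120):
    """Probabilistic Hough line detection on the binary edge image, keeping
    only lines that are long enough and close to horizontal or vertical."""
    lines = cv2.HoughLinesP(binary_edges, rho=1, theta=np.pi / 180,
                            threshold=60, minLineLength=min_len, maxLineGap=10)
    kept = []
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            angle = np.degrees(np.arctan2(y2 - y1, x2 - x1)) % 180.0
            if angle < 10 or angle > 170 or abs(angle - 90.0) < 10:
                kept.append(((x1, y1), (x2, y2)))  # line end points (metrics)
    return kept
```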
If, at step 40, 4 edge lines are detected in the current frame and 3 or more edge lines were detected in the previous frame, the card is considered to be properly positioned. If, at step 42, the card is also in focus, the application takes a high-resolution image at step 44. Otherwise, the process is repeated for the next frame.
The arrangement could use the video stream to provide the still image. However, devices tend to use lower resolutions for video streams and so the step of capturing a single still image using the camera functionality of the user device is preferred.
To assist the above process, the client application displays images to the user as follows. For each frame, highlight on the application screen those edges that have been detected, by lighting up the corresponding edges of the displayed frame; turn off such highlighting for edges that failed to be detected. (In Figure 2, all edges have been detected so all four edges are highlighted).
The user interface by which the user is shown that they have correctly positioned the card within the boundary area is best understood with reference to Figure 2.
The user positions the card in front of the imaging optics of their smart phone or other user device and they can view that card on the screen of the display of their device. A boundary rectangle is shown giving the area within which the card should be positioned. The algorithm described above is operated on the video stream of the card image and, as each of the left, right, top and bottom edges are detected using the technique described above, those edges are indicated by highlighting on the display or changing the colour of the edge of the frame so as to indicate to the user that they have correctly positioned the card within the appropriate area.
The calculation of line metrics at step 38 above may be provided by a known edge detector. For example, off-the-shelf edge detection algorithms such as the Canny edge detector and the Sobel edge detector are generalised edge detectors, which not only detect the edges of cards but also noise edges from a cluttered background, as shown in Figure 5. Accordingly, we have provided the new robust edge detection algorithm described above, which filters out the noise edges and preserves the card edges accurately.
Focus Detection Process We have also appreciated the need for improved focus detection for the purpose of card capture. Clear and sharp images are essential for OCR based applications. However, some user devices cannot provide good focus measurement information. In addition, many devices do not allow applications to control the focus of the device on which they operate, or allow only limited types of control discussed later. A focus detection process has been developed, which uses underlying algorithms for focus metric calculation and focus discrimination.
Focus metrics are calculated values that are highly correlated with the actual focus of the image. Focus discrimination is achieved by applying an adaptive threshold algorithm on the calculated focus metrics.
The focus detection aspects of an embodiment of the invention are shown in Figures 7a, 7b, 8 and 9. The arrangement determines a focus metric for each frame of a video stream, using one or more algorithms operating on each frame, and then determines whether the current frame is in focus by determining whether the focus metric for that frame is above or below a threshold. The threshold is an adaptive threshold in the sense that it varies adaptively depending upon the focus metric for previous frames. In this way, as the focus metric of each frame in turn is determined, when the focus metric of a given frame is above the adaptive threshold, which varies based on the focus metric of previous frames, the system then determines that the focus of the current frame is sufficient. The fact that the focus is sufficient can then be used as part of the triggering of capturing a still image of the card.
The choice of focus metrics used will first be discussed followed by the manner in which the adaptive threshold is determined.
The embodiment uses five distinct focus metric calculation algorithms. Each of these algorithms produces a focus metric value for each sampled frame in a video stream. As shown in Figure 7a, the higher the focus metric value, the better the actual focus of the image, as can be seen intuitively from the example frames of Figure 7b. Only one focus metric is required for the focus detection algorithm; alternative algorithms may be used depending upon the metric that performs best. The focus metrics that may be used include:

1. Threshold Absolute Gradient: $\iint_{image} f\left\{\left|\frac{\partial g(x,y)}{\partial x}\right| - \theta\right\} \, dx \, dy$

2. Squared Gradient: $\iint_{image} \left[\frac{\partial g(x,y)}{\partial x}\right]^2 dx \, dy$

3. Squared Laplacian: $\iint_{image} \left[\frac{\partial^2 g(x,y)}{\partial x^2}\right]^2 dx \, dy$

4. Threshold Absolute Sobel Gradient: $\iint_{image} f\{|\text{sobel gradient}| - \theta\} \, dx \, dy$

where $f(z) = z$ if $z \ge 0$ and $f(z) = 0$ otherwise; $\frac{\partial g(x,y)}{\partial x} = g(i,j) - g(i,j-1)$; $\frac{\partial^2 g(x,y)}{\partial x^2} = g(i,j+1) - 2g(i,j) + g(i,j-1)$; the sobel gradient is calculated by convolving the image with the kernel $\begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix}$; $g$ is the grayscale image and $g(i,j)$ is the pixel value at the $i$-th row, $j$-th column.
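The first two metrics, for example, reduce to simple array operations; this sketch assumes a grayscale NumPy image and an illustrative threshold value:

```python
import numpy as np

def threshold_absolute_gradient(gray, theta=10.0):
    """Metric 1: sum of absolute horizontal differences that exceed theta."""
    d = np.abs(np.diff(gray.astype(np.float32), axis=1))
    return float(d[d >= theta].sum())

def squared_gradient(gray):
    """Metric 2: sum of squared horizontal differences."""
    d = np.diff(gray.astype(np.float32), axis=1)
    return float((d ** 2).sum())
```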
The above focus metrics may be used in an embodiment, but the preferred approach is to use a Discrete Cosine Transform (DCT). As is known to the skilled person, a discrete cosine transform (DCT) expresses a finite sequence of data points in terms of a sum of cosine functions oscillating at different frequencies.
In the DCT approach, focus values are calculated block by block (a 4x4 block size is used in the preferred implementation, as shown by the sample areas in a region of interest in Figure 8). A DCT transformation is applied to each image block, producing a representation of the block in the frequency domain. The result contains a number of frequency components. One of these components is the DC component, which represents the baseline of the image frequency. The other components are considered to be high-frequency components. The sum of all the quotients of the high-frequency components divided by the DC component is considered to be the focus value of the block. The focus value of the image can be calculated by aggregating the focus values for all blocks.
The process for producing the preferred focus metric may therefore be summarised by the following steps: 1. For each 4x4 pixel block of the image, apply a 2D DCT operation and obtain a 4x4 DCT frequency map.
2. For each frequency map, divide the "high frequency" components by the major "low frequency" component (the DC component). Sum all quotients as the result of the block.
3. Sum the results of all blocks to produce the final focus metric.
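A sketch of this block-wise computation, assuming OpenCV's cv2.dct and treating every coefficient other than the DC component as high-frequency:

```python
import cv2
import numpy as np

def dct_focus_metric(gray):
    """Preferred metric: per 4x4 block, sum the high-frequency DCT components
    divided by the DC component, then aggregate the block values."""
    g = gray.astype(np.float32)
    h, w = g.shape
    h, w = h - h % 4, w - w % 4            # trim to multiples of the block size
    total = 0.0
    for y in range(0, h, 4):
        for x in range(0, w, 4):
            coeffs = cv2.dct(g[y:y + 4, x:x + 4])
            dc = abs(coeffs[0, 0]) + 1e-6  # baseline (DC) component
            high = np.abs(coeffs).sum() - abs(coeffs[0, 0])
            total += high / dc
    return total
```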
The focus metric can be used on sub-images of the original image. Focus values can be calculated in regions of interest of the original image. This feature gives the application the ability to specify the region to focus on, as shown in Figure 8.
By calculating a focus metric for small sub regions of the original image, the CPU consumption is also reduced.
The system must cope with a wide variety of different devices under different lighting conditions and card surfaces. When one of these conditions changes, the focus metrics can output values with significantly different ranges. Using a fixed threshold cannot discriminate focused images for all variations of image capture conditions. In order to provide accurate focus discrimination, an adaptive threshold algorithm has been created which can automatically adjust threshold values according to the focus values of historical sampled frames.
The adaptive threshold algorithm uses the following features: Sliding window: The algorithm keeps the focus values of recently sampled frames within a sliding window. The window moves concurrently with the live video stream, retaining the focus values for a specified number of frames: newly sampled focus values are added in from the right side of the window and old focus values drop out from the left side of the window, as shown in Figure 9.
The adaptive algorithm then operates as follows in relation to the sliding window.
For each newly sampled frame the focus metric is calculated and the sliding window moved. The adaptive threshold is recalculated based on an unfocused base line, a focused base line and a discrimination threshold for the focus values within the sliding window. The focus value for the current frame is then compared to the adaptive threshold and the discrimination threshold and, if the focus value is above the adaptive threshold and discrimination threshold, then the frame is deemed to be in focus. The values used within the focus detection process are as follows: Minimum window size: This is the minimum number of sampled frames that must be present in the sliding window before the adaptive threshold algorithm is applied.
Maximum window size: This is the maximum number of sampled frames in the sliding window.
Adaptive threshold: This threshold value roughly separates focused frames from non-focused frames. It adapts itself according to the values in the sliding window.
If there is no value above the adaptive threshold in the sliding window, the adaptive threshold shrinks; if there is no value below the adaptive threshold in the sliding window, the adaptive threshold grows. The adaptive threshold is adjusted whenever a new frame is sampled.
Adaptive threshold higher limit: This is the limit to which the adaptive threshold can grow.
Adaptive threshold lower limit: This is the limit to which the adaptive threshold can shrink.
Adaptive threshold growing speed: This is the speed at which the adaptive threshold grows.
Adaptive threshold shrinking speed: This is the speed at which the adaptive threshold shrinks.
Un-focused baseline: This is the mean of focus values lower than the adaptive threshold in the sliding window.
Focused baseline: This is the larger of: the mean of focus values higher than the discrimination threshold in the sliding window; or the current adaptive threshold value.
Discrimination threshold: This threshold is used for discriminating focused frames from unfocused frames. This threshold is the largest value among: the adaptive threshold, double the un-focused baseline and 80% of the focused baseline. These numbers may change after parameter optimisation.
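Bringing the sliding window, adaptive threshold and discrimination threshold together, a simplified sketch follows. All numeric parameters are illustrative, and for simplicity the focused baseline is computed against the adaptive threshold rather than the discrimination threshold:

```python
from collections import deque

class AdaptiveFocusThreshold:
    """Sliding-window focus discrimination; all numeric parameters are
    illustrative and would be tuned by parameter optimisation."""

    def __init__(self, min_win=5, max_win=30, lower=50.0, upper=5000.0,
                 grow=1.05, shrink=0.95):
        self.window = deque(maxlen=max_win)  # focus values of recent frames
        self.min_win, self.lower, self.upper = min_win, lower, upper
        self.grow, self.shrink = grow, shrink
        self.threshold = lower               # the adaptive threshold

    def is_focused(self, focus_value):
        self.window.append(focus_value)      # slide the window
        if len(self.window) < self.min_win:
            return False
        vals = list(self.window)
        if all(v < self.threshold for v in vals):     # nothing above: shrink
            self.threshold = max(self.lower, self.threshold * self.shrink)
        elif all(v >= self.threshold for v in vals):  # nothing below: grow
            self.threshold = min(self.upper, self.threshold * self.grow)
        below = [v for v in vals if v < self.threshold]
        above = [v for v in vals if v >= self.threshold]
        unfocused_base = sum(below) / len(below) if below else 0.0
        focused_base = max(self.threshold,
                           sum(above) / len(above) if above else 0.0)
        discrimination = max(self.threshold, 2.0 * unfocused_base,
                             0.8 * focused_base)
        # discrimination >= adaptive threshold, so one comparison suffices
        return focus_value >= discrimination
```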
Using the combination of determining a focus metric for each frame and varying the adaptive threshold for that focus metric based on the focus metric for a certain number of previous frames as defined by the sliding window, an accurate determination of the focus of an image may be made within the user device. This is achieved by analysing the video frames themselves, and without requiring control over the imaging optics of the device. An advantage of this is that the technique may be used across many different types of device using a process within a downloadable application and without direct control of the imaging optics (which is not available to applications for many user devices).
As a further addition, some control of the imaging optics may be included. For example, some devices allow a focus request to be transmitted from a downloadable application to the imaging optics of the device, prompting the imaging optics to attempt to obtain focus by varying the lens focus. Although the device will do its best to focus on the object, it is not guaranteed to get a perfectly focused image using this autofocus function. The imaging optics will then attempt to hunt for the correct focus position and, in doing so, the focus metric will vary for a period of time. The process described above is then operable to determine when an appropriate focus has been achieved based on the variation of the focus metric during the period of time that the imaging optics hunts for the correct focus.
Card Framing Process We have also appreciated the need, once a high resolution image has been acquired, for the card detection process to be re-run to accurately locate the position of the card and produce a perspectively correct image of the card surface. Once this is done the output is a properly framed card image.
We have appreciated, though, that there are challenges. For example, in cases where the user's fingers occlude the corners of the card, simple pattern matching techniques may fail to locate the correct location of corners. A properly framed card, in the sense that no additional background is included, no parts are missing and the perspective is correct, assists any subsequent process such as OCR.
The broad steps of the card framing process in an embodiment are as follows, and as shown in Figures 10 to 13.
First, rerun the card detection process to obtain candidate edge lines for the high-resolution image. The card detection process is only re-run if needed, for example if the final image being processed is a freshly captured still image. If the image being used is, in fact, one of the frames of the video stream analysed, the card edges may already be available from the earlier card detection process. If the algorithm fails to detect any of the four edge lines, use the line metrics produced by the Card Detection Process as the edge line metrics for the high-resolution image.
If a high resolution still image is used, the next step is to extract the card region from the high-resolution image and resize it to 1200x752 pixels. At this stage, the arrangement has produced a high resolution image of just the card, but the perspective may still require some correction if the card was not held perfectly parallel to the imaging sensor of the client device. For this reason a process is operated to identify the corners of the rectangular shape and then to apply perspective correction such that the corners are truly rectangular in position.
To identify the corners, the next step is to extract the corner regions (for example 195x195 patches from the 1200x752 card region).
For simplicity of processing, the process then "folds" the corner regions so that all the corners point to the northwest and thus can be treated the same way. The folding process is known to the skilled person and involves translating and/or rotating the images.
The next step is to split each corner region into channels. For each channel, the process produces an edge image (for example using a Gaussian filter and Canny Edge Detector). The separate processing of each channel is preferred, as this improves the quality, but a single channel could be used.
Then, the process step is to merge the edge images from all channels (for example using a max operator). This produces a single edge image that results from the combined edge image of each channel.
The edge image processing steps so far produce an edge image of each corner as shown in Figure 10. The process next identifies the exact corner points of the rectangular image.
To do this, the process draws the corresponding candidate edge line (produced in the first step) on each corner edge image, as shown in Figure 10. Then a template matching method is used to find the potential corner coordinates on the corner edge image. Template matching techniques are known to the skilled person and involve comparing a template image to an image by sliding one with respect to the other. A template as shown in Figure 11 is used for this process.
The result matrix of the template matching method is shown in Figure 12. The brightest locations indicate the highest matches. In the result matrix, the brightest location is taken as the potential edge corner. The corners are then unfolded to obtain corner coordinates.
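The corner search might be sketched as follows, assuming the folded corner edge image and a corner template like that of Figure 11 are available as arrays; the matching method is an illustrative choice:

```python
import cv2

def find_corner_point(corner_edge_image, corner_template):
    """Template-match a known corner pattern against the folded corner edge
    image; the brightest location in the result matrix is the corner point."""
    result = cv2.matchTemplate(corner_edge_image, corner_template,
                               cv2.TM_CCORR_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(result)
    return max_loc  # (x, y) of the strongest match, before unfolding
```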
The process then perspectively corrects the card region specified by the corner coordinates and generates the final card image (this can either be a colour image or a grayscale image). An example is shown in Figure 13.
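The perspective correction itself amounts to a standard four-point warp; a sketch assuming the corner points are ordered top-left, top-right, bottom-right, bottom-left:

```python
import cv2
import numpy as np

def correct_perspective(image, corners, size=(1200, 752)):
    """Warp the quadrilateral defined by the four detected corner points
    onto a true rectangle, removing the perspective distortion."""
    w, h = size
    src = np.array(corners, dtype=np.float32)
    dst = np.array([[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1]],
                   dtype=np.float32)
    m = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(image, m, (w, h))
```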
When complete, the device transmits the properly framed card image to the server.
Image Upload The properly framed card image produced by the card framing process is immediately uploaded to the back-end OCR service for processing. Before uploading, the card image is resized to a size suitable for transmission (1200x752 is used in the current application). The application can upload grayscale or colour images. The final image uses JPEG compression and the degree of compression can be specified.
In addition to the processed image that is uploaded immediately, the original high resolution image captured is uploaded to a remote server or Cloud storage for further processing, such as fraud detection or face recognition based ID verification.
As the size of a high-resolution image can reach 10 to 20MB, serialising it to the file system and uploading it to remote storage takes a long time. In order to minimise the impact on the user experience and the memory consumption of the client while uploading, a queue-based background image upload method (as shown in Figure 14) has been developed.
The arrangement has the following features: Image serialisation queue: This is a first-in-first-out (FIFO) queue maintaining the images to be serialised to the file system.
Image upload queue: This is a FIFO queue maintaining the path information of image files to be uploaded to remote storage.
Serialisation background thread: This serialises the images in the image serialisation queue from memory to the file system in the background.
Upload background thread: This uploads the images referenced by the path information in the image upload queue from the client's file system to a remote
server or Cloud storage in the background.
Background upload process:
After an image has been captured, the image is stored in memory on the client.
The captured images are put in an image serialisation queue. The images in the queue are serialised to the client's file system one by one by the serialisation background thread. After serialisation, the image is removed from the image serialisation queue and the storage path information of the image file (not the image file itself) is put in a file upload queue. The upload background thread uploads the images referenced by the storage path information in the image upload queue one by one to remote storage. Once an image has been uploaded successfully, it is removed from the file storage and its storage path information is also removed from the image upload queue. The image upload queue is also backed up on the file system, so the client can resume the image upload task if the client is restarted.
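A simplified sketch of the two queues and their background threads is given below; send_to_remote_storage is a hypothetical stand-in for the actual transfer call, and the file-system backup of the upload queue is omitted:

```python
import os
import queue
import threading

serialisation_queue = queue.Queue()  # FIFO of (path, image bytes) to serialise
upload_queue = queue.Queue()         # FIFO of file paths awaiting upload

def serialisation_worker():
    """Background thread: write images from memory to the file system, then
    enqueue only their storage paths (not the image data) for upload."""
    while True:
        path, image_bytes = serialisation_queue.get()
        with open(path, "wb") as f:
            f.write(image_bytes)
        upload_queue.put(path)
        serialisation_queue.task_done()

def upload_worker():
    """Background thread: upload files one by one; remove the file and its
    path entry only after a successful upload."""
    while True:
        path = upload_queue.get()
        try:
            send_to_remote_storage(path)  # hypothetical transfer call
            os.remove(path)
        except Exception:
            upload_queue.put(path)        # re-queue and retry later
        upload_queue.task_done()

threading.Thread(target=serialisation_worker, daemon=True).start()
threading.Thread(target=upload_worker, daemon=True).start()
```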

Claims (42)

1. A method operable on a client device for capturing an image of a document, comprising: receiving one or more frames; processing each frame to produce an edge image using processes to emphasise horizontal structures at top and bottom regions of each frame and to emphasise vertical structures at left and right regions of each frame; operating an edge detection algorithm on each edge image to determine whether one or more edges of the document have been located; and automatically capturing a still image of the document when the edge detection algorithm determines that edges of the document have been located.
2. A method according to claim 1, wherein processing each frame comprises operating horizontal blurring on top and bottom regions and operating vertical blurring on left and right regions.
3. A method according to claim 2, wherein processing each frame includes processing different resolutions of each frame.
4. A method according to claim 3, wherein the different resolutions are produced by sub-sampling each frame.
5. A method according to any of claims 1 to 4, wherein processing each frame comprises operating a directional filter on top, bottom, left and right regions.
6. A method according to claim 5, wherein the directional filter produces derivatives of gradient changes.
7. A method according to claim 5, wherein the directional filter is a Sobel edge filter.
8. A method according to claim 5, wherein the directional filter is a Hough Transform.
9. A method according to any of claims 3 to 8, wherein processing each frame further comprises operating the directional filter on each of multiple colour channels of each frame.
10. A method according to claim 9, wherein the colour channels are RGB.
11. A method according to claim 9 or 10, wherein the processing of each frame takes the maximum value of each channel for each pixel of each frame.
12. A method according to any of claims 3 to 11, further comprising operating morphological operations on the results of the directional filter.
13. A method according to claim 12, wherein the morphological operations include one or more of erosion and dilation applied to the top, bottom, left and right regions.
14. A method according to any preceding claim, wherein the one or more frames comprises a video stream comprising a sequence of frames.
15. A method according to any preceding claim, further comprising providing a visual indication on a display of the device when one or more edges have been located.
16. A method according to any preceding claim, wherein the document is a card and the step of automatically capturing a still image occurs when all four edges of the card have been detected.
17. A client device comprising a camera, a processor and a memory, the memory having instructions stored thereon which when executed on the processor undertake the method of any of claims 1 to 16.
18. A computer program comprising instructions which when executed on a client device undertake the method of any of claims 1 to 16.
19. A method operable on a client device for capturing an image of a document, comprising: receiving a video stream comprising a sequence of frames; processing each frame to produce a focus metric indicative of the focus of each frame; establishing an adaptive threshold of the focus metric; varying the adaptive threshold based on a history of variation of the focus metric over a number of frames; determining whether the focus metric of a current frame is above the adaptive threshold; and capturing a still image of the document when the focus metric of the current frame is above the adaptive threshold.
20. A method according to claim 19, wherein the adaptive threshold decreases if the focus metric of each frame remains below the adaptive threshold for a number of frames.
21. A method according to claim 20, wherein the adaptive threshold decreases at a predetermined rate of decrease.
22. A method according to claim 19, 20 or 21, wherein the adaptive threshold increases if the focus metric of each frame remains above the adaptive threshold for a number of frames.
23. A method according to claim 22, wherein the adaptive threshold increases at a predetermined rate of increase.
24. A method according to any of claims 19 to 23, wherein the adaptive threshold is limited in range by an upper limit and a lower limit.
25. A method according to any of claims 19 to 24, further comprising capturing a still image of the document when the focus metric of the current frame is above the adaptive threshold and also above a further threshold.
  26. 26. A method according to claim 25, wherein the further threshold is a function of the mean of focus values lower than the adaptive threshold for the number of frames.
  27. 27. A method according to claim 26, wherein the function is a multiple of the mean of focus values lower than the adaptive threshold for the number of frames.
  28. 28. A method according to claim 25, wherein the further threshold is a function of the mean of focus values higher than the adaptive threshold for the number of frames.
  29. 29. A method according to claim 26, wherein the function is a multiple of the mean of focus values higher than the adaptive threshold for the number of frames.
30. A method according to any of claims 19 to 29, wherein the document is a card.
31. A method according to any of claims 19 to 29, wherein the method is operable without control of focus of the device.
32. A client device comprising a camera, processor and a memory, the memory having instructions stored thereon which when executed on the processor undertake the method of any of claims 19 to 29.
33. A client device according to claim 32, wherein the device is of a type which does not allow an application to control focus of the camera of the device.
34. A computer program comprising instructions which when executed on a client device undertake the method of any of claims 19 to 29.
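
Claims 19 to 27 recite an adaptive focus threshold that drifts down when frames stay unsharp, drifts up when they stay sharp, is clamped to a fixed range, and is backed by a further threshold derived from the mean of recent below-threshold focus values. A minimal sketch follows, assuming a variance-of-Laplacian focus metric and OpenCV; all rates, limits and the margin multiplier are illustrative assumptions, not values from the specification.

```python
# Minimal sketch of the adaptive focus threshold of claims 19-27.
# The variance-of-Laplacian focus metric, the rates, limits and the
# "further threshold" multiplier are illustrative assumptions.
import cv2

class AdaptiveFocusTrigger:
    def __init__(self, start=120.0, lower=40.0, upper=400.0,
                 decay=0.98, growth=1.02, window=15, margin=1.2):
        self.threshold = start
        self.lower, self.upper = lower, upper    # claim 24: range limits
        self.decay, self.growth = decay, growth  # claims 21, 23: rates
        self.window, self.margin = window, margin
        self.history = []                        # recent focus values

    def focus_metric(self, frame_gray):
        # Variance of the Laplacian: higher means sharper (claim 19).
        return cv2.Laplacian(frame_gray, cv2.CV_64F).var()

    def should_capture(self, frame_gray):
        f = self.focus_metric(frame_gray)
        self.history = (self.history + [f])[-self.window :]
        if len(self.history) == self.window:
            if all(v < self.threshold for v in self.history):
                self.threshold *= self.decay     # claim 20: drift down
            elif all(v > self.threshold for v in self.history):
                self.threshold *= self.growth    # claim 22: drift up
            self.threshold = min(max(self.threshold, self.lower), self.upper)
        # Claims 25-27: also require the frame to beat a multiple of the
        # mean of recent values that fell below the adaptive threshold.
        below = [v for v in self.history if v < self.threshold]
        further = self.margin * (sum(below) / len(below)) if below else 0.0
        return f > self.threshold and f > further
```

On a device whose camera API offers no focus control (claims 31 and 33), such a trigger would simply be polled on every preview frame, with the still capture fired when it returns True.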
35. A method operable on a client device for capturing an image of a document, comprising:
- receiving a frame containing an image of the document;
- processing the frame using an edge detector to obtain candidate edges;
- processing the frame to produce an edge image;
- obtaining corner points at the intersection of candidate edges; and
- processing the frame to correct perspective such that the corner points of the image are at corners of a rectangle.
36. A method according to claim 35, wherein the step of producing an edge image comprises producing an edge image in each of multiple channels and combining the edge images.
37. A method according to claim 36, wherein the combining comprises taking the maximum value at each pixel from each of the edge images.
38. A method according to any preceding claim, wherein obtaining corner points uses a template process with a template of a known corner and corner point position.
39. A method according to any preceding claim, further comprising folding corners prior to processing.
40. A method according to any of claims 35 to 39, wherein the document is a card.
41. A client device comprising a camera, processor and a memory, the memory having instructions stored thereon which when executed on the processor undertake the method of any of claims 35 to 40.
42. A computer program comprising instructions which when executed on a client device undertake the method of any of claims 35 to 40.
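
Claims 35 to 37 recite producing an edge image per channel, combining the per-channel images by a per-pixel maximum, and warping the frame so the detected corner points land on the corners of a rectangle. A minimal sketch assuming OpenCV follows; Canny as the edge detector, the corner ordering and the output size (ID-1 card proportions, scaled) are illustrative assumptions, and the corner finding of claims 35 and 38 is assumed to have been done elsewhere.

```python
# Minimal sketch of claims 35-37: per-channel edge images combined by
# a per-pixel maximum, then a perspective warp mapping four detected
# corner points onto a rectangle. Canny, corner ordering and output
# size are illustrative assumptions.
import cv2
import numpy as np

def combined_edge_image(frame_bgr):
    # Claim 36: an edge image in each colour channel; claim 37:
    # combine by taking the maximum value at each pixel.
    channels = cv2.split(frame_bgr)
    edges = [cv2.Canny(c, 50, 150) for c in channels]
    return np.maximum.reduce(edges)

def correct_perspective(frame_bgr, corners, out_w=856, out_h=540):
    # corners: four (x, y) points ordered TL, TR, BR, BL.
    src = np.float32(corners)
    dst = np.float32([[0, 0], [out_w, 0], [out_w, out_h], [0, out_h]])
    m = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(frame_bgr, m, (out_w, out_h))
```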
GB1308954.5A 2013-05-17 2013-05-17 Image capture using client device Withdrawn GB2517674A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
GB1308954.5A GB2517674A (en) 2013-05-17 2013-05-17 Image capture using client device
PCT/EP2014/060154 WO2014184372A1 (en) 2013-05-17 2014-05-16 Image capture using client device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB1308954.5A GB2517674A (en) 2013-05-17 2013-05-17 Image capture using client device

Publications (2)

Publication Number Publication Date
GB201308954D0 GB201308954D0 (en) 2013-07-03
GB2517674A true GB2517674A (en) 2015-03-04

Family

ID=48746949

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1308954.5A Withdrawn GB2517674A (en) 2013-05-17 2013-05-17 Image capture using client device

Country Status (2)

Country Link
GB (1) GB2517674A (en)
WO (1) WO2014184372A1 (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104820968A (en) * 2015-04-22 2015-08-05 上海理工大学 Text draft angle correction method
US10341418B2 (en) 2015-11-06 2019-07-02 Microsoft Technology Licensing, Llc Reducing network bandwidth utilization during file transfer
WO2017223335A1 (en) * 2016-06-23 2017-12-28 Capital One Services, Llc Systems and methods for automated object recognition
CN108429877B (en) * 2017-02-15 2021-08-13 腾讯科技(深圳)有限公司 Image acquisition method and mobile terminal
CN108304839B (en) * 2017-08-31 2021-12-17 腾讯科技(深圳)有限公司 Image data processing method and device
CN108229368B (en) * 2017-12-28 2020-05-26 浙江大华技术股份有限公司 Video display method and device
CN112183517B (en) * 2020-09-22 2023-08-11 平安科技(深圳)有限公司 Card edge detection method, device and storage medium

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5155775A (en) * 1988-10-13 1992-10-13 Brown C David Structured illumination autonomous machine vision system
US6668097B1 (en) * 1998-09-10 2003-12-23 Wisconsin Alumni Research Foundation Method and apparatus for the reduction of artifact in decompressed images using morphological post-filtering
US6898316B2 (en) * 2001-11-09 2005-05-24 Arcsoft, Inc. Multiple image area detection in a digital image
KR100542365B1 (en) * 2004-05-07 2006-01-10 삼성전자주식회사 Apparatus and method of improving image
US7664326B2 (en) * 2004-07-09 2010-02-16 Aloka Co., Ltd Method and apparatus of image processing to detect and enhance edges
KR100784332B1 (en) * 2006-05-11 2007-12-13 삼성전자주식회사 Apparatus and method for photographing a business card in portable terminal
CN102236784A (en) * 2010-05-07 2011-11-09 株式会社理光 Screen area detection method and system
US8503813B2 (en) * 2010-12-22 2013-08-06 Arcsoft Hangzhou Co., Ltd. Image rectification method
US8467606B2 (en) * 2011-08-25 2013-06-18 Eastman Kodak Company Method for segmenting a composite image
US9053537B2 (en) * 2011-09-21 2015-06-09 Tandent Vision Science, Inc. Classifier for use in generating a diffuse image

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050013462A1 (en) * 1994-11-16 2005-01-20 Rhoads Geoffrey B. Paper products and physical objects as means to access and control a computer or to navigate over or act as a portal on a network
US20040012679A1 (en) * 2002-07-17 2004-01-22 Jian Fan Systems and methods for processing a digital captured image
US20050078192A1 (en) * 2003-10-14 2005-04-14 Casio Computer Co., Ltd. Imaging apparatus and image processing method therefor
EP1571820A2 (en) * 2004-02-27 2005-09-07 Casio Computer Co., Ltd. Image processing device and method, image projection apparatus, and program
JP2005261644A (en) * 2004-03-18 2005-09-29 Sony Computer Entertainment Inc Card judgement device and method
US20060045379A1 (en) * 2004-08-26 2006-03-02 Compulink Management Center, Inc. Photographic document imaging system
US20080218613A1 (en) * 2007-03-09 2008-09-11 Janson Wilbert F Camera using multiple lenses and image sensors operable in a default imaging mode
WO2008137051A1 (en) * 2007-05-01 2008-11-13 Compulink Management Center, Inc. Photo-document segmentation method and system
US20090001165A1 (en) * 2007-06-29 2009-01-01 Microsoft Corporation 2-D Barcode Recognition
EP2166408A1 (en) * 2008-09-17 2010-03-24 Ricoh Company, Ltd. Imaging device and imaging method using the same
WO2010049004A1 (en) * 2008-10-31 2010-05-06 Hewlett-Packard Development Company, L.P. Method and digital imaging appliance adapted for selecting a focus setting
US20120093434A1 (en) * 2009-06-05 2012-04-19 Serene Banerjee Edge detection
US20130182961A1 (en) * 2012-01-16 2013-07-18 Hiok Nam Tay Auto-focus image system

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105611147A (en) * 2015-10-30 2016-05-25 北京旷视科技有限公司 Shooting method and device
CN105512658A (en) * 2015-12-03 2016-04-20 小米科技有限责任公司 Image recognition method and device for rectangular object
CN105512658B (en) * 2015-12-03 2019-03-15 小米科技有限责任公司 The image-recognizing method and device of rectangle object

Also Published As

Publication number Publication date
GB201308954D0 (en) 2013-07-03
WO2014184372A1 (en) 2014-11-20

Similar Documents

Publication Publication Date Title
GB2517674A (en) Image capture using client device
JP6255486B2 (en) Method and system for information recognition
US10127441B2 (en) Systems and methods for classifying objects in digital images captured using mobile devices
RU2631765C1 (en) Method and system of correcting perspective distortions in images occupying double-page spread
KR20130066819A (en) Apparatus and method for character recognition based on photograph image
WO2017008031A1 (en) Realtime object measurement
CN114255337A (en) Method and device for correcting document image, electronic equipment and storage medium
US20150112853A1 (en) Online loan application using image capture at a client device
US8306335B2 (en) Method of analyzing digital document images
CN107085699B (en) Information processing apparatus, control method of information processing apparatus, and storage medium
Leal et al. Smartphone camera document detection via Geodesic Object Proposals
US10373329B2 (en) Information processing apparatus, information processing method and storage medium for determining an image to be subjected to a character recognition processing
KR20230017774A (en) Information processing device, information processing method, and program
CN108304840B (en) Image data processing method and device
CN111145153A (en) Image processing method, circuit, visual impairment assisting device, electronic device, and medium
KR20120035360A (en) Apparatus for recognizing character and method thereof
WO2015114021A1 (en) Image capture using client device
KR102071975B1 (en) Apparatus and method for paying card using optical character recognition
JP2017120455A (en) Information processing device, program and control method
JP6077873B2 (en) Image processing apparatus and image processing method
Guo et al. A fast page outline detection and dewarping method based on iterative cut and adaptive coordinate transform
KR101349672B1 (en) Fast detection method of image feature and apparatus supporting the same
Ettl et al. Text and image area classification in mobile scanned digitised documents
CN112507759A (en) Image processing method and image processing device for identifying bank card

Legal Events

Date Code Title Description
732E Amendments to the register in respect of changes of name or changes affecting rights (sect. 32/1977)

Free format text: REGISTERED BETWEEN 20150507 AND 20150513

WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)