CN101873364B - Folding type electronic equipment - Google Patents

Folding type electronic equipment

Info

Publication number
CN101873364B
CN101873364B · CN200910137719A
Authority
CN
China
Prior art keywords
image
mentioned
overlap
marginalisation
literal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN 200910137719
Other languages
Chinese (zh)
Other versions
CN101873364A (en)
Inventor
黄裕翔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Asustek Computer Inc
Original Assignee
Asustek Computer Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Asustek Computer Inc filed Critical Asustek Computer Inc
Priority to CN 200910137719 priority Critical patent/CN101873364B/en
Publication of CN101873364A publication Critical patent/CN101873364A/en
Application granted granted Critical
Publication of CN101873364B publication Critical patent/CN101873364B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Character Input (AREA)
  • Character Discrimination (AREA)

Abstract

The invention relates to a folding-type electronic device. A method for outputting consecutive characters in video-recording mode comprises the following steps: obtaining a first image and a second image of an object; comparing the first image with the second image to obtain a third image where the first image and the second image overlap; removing the third image from the second image to produce a fourth image; linking the fourth image to the first image to produce a fifth image; and performing optical character recognition on the fifth image to output the characters in the fifth image.

Description

Folding-type electronic device
Technical field
The present invention relates to a method of photographing a document and supplying the image file to optical character recognition (OCR) software, which recognizes the characters and produces a text file as output. More particularly, it relates to a method of filming the document with a video camera, integrating the successive frames the camera captures, and then supplying the integrated image to the OCR software for character recognition and text output.
Background art
OCR-related products were originally, for the most part, an auxiliary function of scanners. The user placed the document on a scanner, which converted it page by page into image files fed into the computer; the system then delivered each page image to the OCR software for further analysis, separating text from graphics and reconstructing the result into an editable document file.
As hand-held devices gradually became widespread, some vendors began trying to bring OCR technology into them. Two things stand out in this process. First, users mostly just want to scan the text on an object (and possibly have the characters translated as well). Second, text has a characteristic layout: it is usually set line by line and runs continuously, and people likewise read articles line by line. Under these circumstances, vendors developed the so-called pen scanner (wand) on the market. To handle this line-by-line, continuous input, the wand's input interface (its lens) uses a linear camera (line camera), which captures the two-dimensional text as a series of line segments; after the line segments are read in sequence, the system reassembles them into a two-dimensional image file for the OCR software to process.
Surveying today's most common hand-held devices, however, what a mobile phone carries is a two-dimensional camera module. What it feeds into the system is a sequence of two-dimensional images; in other words, its input is comparatively like that of a desktop scanner. Its usage model is therefore also one of processing photo by photo, which does not match the way people are accustomed to handling text line by line. As a result, the more common OCR applications at present are still largely confined to business-card recognition (BCR, Business Card Recognition). To achieve continuous, line-by-line input, additional hardware assistance has been required (for example, patent CN2745288Y).
The purpose of this invention is to use the two-dimensional image input device (camera) commonly carried on today's hand-held devices to achieve line-by-line, continuous character input without adding any auxiliary hardware.
Summary of the invention
The present invention provides a method for outputting consecutive characters in video-recording mode. The method comprises obtaining a first image and a second image of an object, comparing the first image with the second image to obtain a third image where the first image and the second image overlap, removing the third image from the second image to produce a fourth image, linking the fourth image to the first image to produce a fifth image, and performing optical character recognition on the fifth image to output the characters in the fifth image.
Description of drawings
Fig. 1 is a schematic diagram illustrating a camera of the present invention photographing while moving in one direction.
Fig. 2 is a schematic diagram illustrating the focused cropping of an image according to the present invention.
Fig. 3 is a schematic diagram illustrating the edge processing of a cropped image according to the present invention.
Fig. 4 is a schematic diagram illustrating the edge computation of the present invention.
Fig. 5 is a schematic diagram illustrating the merging of edge images according to the present invention.
Fig. 6 is a schematic diagram illustrating the merging of a linked image with the next edge image according to the present invention.
Fig. 7 is a schematic diagram of the image obtained after overlaps are removed and the images linked by the present invention.
Fig. 8 is a schematic diagram illustrating the coarse appearance (shape) matching performed by the present invention before edge-image matching.
Fig. 9 is a schematic diagram illustrating the feature matching the present invention performs on glyphs.
Fig. 10 is a flowchart of the method of the present invention for converting consecutive images into consecutive character output.
Embodiment
Therefore, the present invention provides a method that uses a camera (for example, the camera of an ordinary mobile phone or a digital camera) to film a document: while the camera moves, the text on the document is captured, the captured frames are then integrated, and the integrated image is supplied to the optical character recognition software for text recognition. In this way, with the approach provided by the present invention, a continuous stream of image files can be converted into one integrated image file; at the same time, the integrated image can be segmented character by character (to avoid feeding in incomplete glyphs) before being supplied to the optical character recognition software, so that the effect of outputting characters while recording is achieved.
Please refer to Fig. 1, a schematic diagram of a camera of the present invention photographing while moving in one direction. First, the camera enters video-recording mode and films a document printed with the Chinese characters "通話中按" ("press during a call"). In video-recording mode the camera shoots at a fixed sampling rate, and the number of frames captured per second can be set as required. For example, at a sampling rate of 5 frames per second, a recording time of 1 second yields 5 images.
In Fig. 1, as the camera records while moving in direction D1 (from left to right), it records the text on the document: the characters "通話中按". According to the camera's speed of movement and its sampling rate in video-recording mode, the camera captures images P_O1, P_O2, P_O3 and P_O4 in sequence; that is, while recording in direction D1, the camera photographs P_O1, P_O2, P_O3 and P_O4 in order. Together, images P_O1 to P_O4 contain the complete characters "通話中按" to be recognized, and because the speed of the camera's movement does not necessarily match its sampling rate, images P_O1 to P_O4 overlap one another. From the direction of movement D1 it can be seen that the right portion of P_O1 should overlap the left portion of P_O2, the right portion of P_O2 should overlap the left portion of P_O3, and the right portion of P_O3 should overlap the left portion of P_O4. The present invention removes the mutually overlapping parts of P_O1 to P_O4 and links the images that remain after removal, so as to obtain the final correct image, which is supplied to the OCR software for text recognition and correct text output.
Please refer to Fig. 2, a schematic diagram of the focused cropping of an image according to the present invention. Because the camera may capture superfluous regions when filming the document, the present invention crops each captured image to its region of interest in order to simplify the subsequent comparison flow. As shown in Fig. 2, the original captured image P_O1 becomes the cropped image P_C1 after the focused cropping of the present invention.
In Fig. 2, the present invention detects the superfluous portions of the raw image P_O1 (such as the blank regions in P_O1), defines the cropping boundaries E1 and E2, and then crops P_O1 along the boundaries E1 and E2 to obtain the cropped image P_C1. Each of the images P_O1 to P_O4 is cropped in this way before subsequent processing, yielding the cropped images P_C1 to P_C4.
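The cropping step described above can be sketched as follows. This is a minimal NumPy sketch, not the patent's implementation: it assumes the superfluous regions are near-white, and the brightness cutoff `white_thresh` is an illustrative parameter chosen here, not taken from the patent.

```python
import numpy as np

def crop_to_content(img, white_thresh=240):
    """Trim near-white margins, keeping only rows/columns that contain ink.
    `white_thresh` is an assumed brightness cutoff, not from the patent."""
    gray = img.mean(axis=2)                          # rough per-pixel luminance
    ink_rows = np.where((gray < white_thresh).any(axis=1))[0]
    ink_cols = np.where((gray < white_thresh).any(axis=0))[0]
    if ink_rows.size == 0:                           # completely blank frame
        return img
    return img[ink_rows[0]:ink_rows[-1] + 1, ink_cols[0]:ink_cols[-1] + 1]
```

Each captured frame P_O1 to P_O4 would be passed through such a function to obtain P_C1 to P_C4 before edge processing.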
Please refer to Fig. 3, a schematic diagram of the edge processing of a cropped image according to the present invention. In order to compare characters, the present invention applies edge processing to each cropped image to extract the edges (outlines) of the characters in it. As shown in Fig. 3, the cropped image P_C1 becomes the edge image P_E1 after the edge processing of the present invention. Each of the cropped images P_C1 to P_C4 is edge-processed before subsequent processing, yielding the edge images P_E1 to P_E4.
Please refer to Fig. 4, a schematic diagram of the edge computation of the present invention, taking pixel P11 as an example. The pixels P00, P01, P02, P10, P11, P12, P20, P21 and P22 have the coordinates (i-1,j-1), (i-1,j), (i-1,j+1), (i,j-1), (i,j), (i,j+1), (i+1,j-1), (i+1,j) and (i+1,j+1) respectively. It follows that the edge computation for pixel P11 refers to the pixels surrounding it. The edge computation of the present invention is as follows:
Edge(P11) = Diff(P10, P12) + Diff(P20, P02) + Diff(P21, P01) + Diff(P22, P00);
where Diff(P_X, P_Y) is the difference value of the two pixels, and can be adjusted according to the tone or characteristics preferred; for example, Diff(P_X, P_Y) = abs[(P_X(G) - P_Y(R)) * (P_Y(G) - P_X(B)) * (P_X(B) - P_Y(R))];
where abs[Z] denotes the absolute value of Z, P_X(G) denotes the green grayscale value of pixel X, P_Y(R) the red grayscale value of pixel Y, P_Y(G) the green grayscale value of pixel Y, P_X(B) the blue grayscale value of pixel X, and i, j, X and Y are all positive integers. After the edge computation, pixel P11 yields the edge value Edge(P11); that is, the original luminance of pixel P11 becomes the luminance Edge(P11) after edge processing. When P11 lies on a bend or turning point, or where the colour changes sharply, it has a larger grayscale value.
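Under the formulas stated above, the edge computation can be sketched directly. This sketch assumes 8-bit RGB pixels indexed as (R, G, B); the helper names `diff` and `edge_value` are illustrative, not from the patent.

```python
import numpy as np

def diff(px, py):
    """Difference value of two pixels, per the example formula:
    abs[(Px(G) - Py(R)) * (Py(G) - Px(B)) * (Px(B) - Py(R))]."""
    return abs((float(px[1]) - float(py[0])) *
               (float(py[1]) - float(px[2])) *
               (float(px[2]) - float(py[0])))

def edge_value(img, i, j):
    """Edge(P(i,j)): sum of Diff over the four opposing neighbour pairs,
    matching Edge(P11) = Diff(P10,P12)+Diff(P20,P02)+Diff(P21,P01)+Diff(P22,P00)."""
    return (diff(img[i, j - 1], img[i, j + 1]) +
            diff(img[i + 1, j - 1], img[i - 1, j + 1]) +
            diff(img[i + 1, j], img[i - 1, j]) +
            diff(img[i + 1, j + 1], img[i - 1, j - 1]))
```

On a uniform region every factor of Diff vanishes, so Edge is zero; it grows where colour changes sharply around the pixel, as the description requires.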
Please refer to Fig. 5, a schematic diagram of the merging of edge images according to the present invention, taking the edge images P_E1 and P_E2 as an example to simplify the explanation. As shown in Fig. 5, P_E1 and P_E2 are first compared to determine the region where they overlap. Since, as noted above, the right portion of P_E1 should overlap the left portion of P_E2, the comparison in Fig. 5 must determine their overlapping part precisely, so that the overlapping part can then be removed from the later image in shooting order. As can be seen from Fig. 5, after comparison the overlapping region of P_E1 and P_E2 is determined to be image P_OV. The present invention then removes from the edge image P_E2 the part P_OV that overlaps P_E1, and links the edge image P_E2, with P_OV removed, to the edge image P_E1, yielding the linked image P_E(1+2).
Please refer to Fig. 6, a schematic diagram of the merging of a linked image with the next edge image according to the present invention, taking the linked image P_E(1+2) and the edge image P_E3 as an example to simplify the explanation. As shown in Fig. 6, P_E(1+2) and P_E3 are first compared to determine the region where they overlap. Since, as noted above, the right portion of P_E(1+2) should overlap the left portion of P_E3, the comparison in Fig. 6 must determine that overlapping part precisely; the overlapping part is then removed from the earlier image in shooting order, P_E(1+2). As can be seen from Fig. 6, after comparison the overlapping region of P_E(1+2) and P_E3 is determined to be image P_OV. The present invention then removes from the linked image P_E(1+2) the part P_OV that overlaps P_E3, and links P_E3 to the linked image P_E(1+2) from which P_OV has been removed, yielding the linked image P_E(1+2+3). In practical operation, because the camera moves in one fixed direction, the maximum overlap of P_E2 and P_E3 cannot exceed P_E2; therefore, when comparing P_E(1+2) with P_E3, the whole of P_E(1+2) need not be compared, and comparing within the range of P_E2 suffices.
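The compare-remove-link step for one pair of adjacent images can be sketched as follows. This uses a plain sum-of-absolute-differences search over candidate overlap widths rather than the patent's appearance/feature comparison, and the `max_overlap` parameter plays the role of the bounded comparison range (no wider than the previous frame); both simplifications are assumptions of this sketch.

```python
import numpy as np

def link_images(a, b, max_overlap):
    """Find the overlap between the right edge of `a` and the left edge of `b`
    (up to `max_overlap` columns), drop the overlap from `b`, and concatenate.
    A minimal SAD search, standing in for the patent's feature comparison."""
    best_w, best_err = 0, float('inf')
    for w in range(1, max_overlap + 1):
        err = np.abs(a[:, -w:].astype(int) - b[:, :w].astype(int)).mean()
        if err < best_err:                 # keep the minimum-difference width
            best_err, best_w = err, w
    return np.hstack([a, b[:, best_w:]])   # remove overlap from b, then link
```

For same-height strips with a genuine shared region, the zero-error width recovers the original image exactly.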
Please refer to Fig. 7, a schematic diagram of the image obtained after the overlaps have been removed and the images linked by the present invention. As shown in Fig. 7, after the cropping, edge processing, comparison and linking described above, the edge images P_E1, P_E2, P_E3 and P_E4 are connected into one linked image P_E(1+2+3+4): the linked image is formed by removing the mutually overlapping parts of P_E1 to P_E4 and joining the remainders into a single image. A linked image processed in this way by the present invention can then be supplied to the optical character recognition software for character recognition and correct text output, free of the recognition errors that image overlap would otherwise cause. Moreover, the linking of images and the OCR processing of already-linked output can be carried out simultaneously: for example, while P_E4 is being linked, the fragment image of "通" can be removed from P_E(1+2+3) and handed to the OCR software, and P_E4 is in fact linked to the remaining "話中按" image. When choosing the range to remove, care must only be taken not to cut into the part of the image whose integration is not yet complete; in the example above, when removing from P_E(1+2+3), at least the P_E3 part, namely the "話中按" image, must remain.
Please refer to Fig. 8, a schematic diagram of the coarse appearance (shape) matching that the present invention performs before precise edge-image matching, taking the edge images P_E1 and P_E2 as an example to simplify the explanation. In Fig. 8, P_E1 contains three glyphs: a first character, "通" and the left half of "話", whose appearances are S1, S2 and S3 respectively. P_E2 contains three glyphs: the right half of "通", "話" and the left half of "中", whose appearances are S4, S5 and S6 respectively. From the appearances S1 to S6 the present invention can first make the rough judgment that S4 matches S2 and that S5 matches S3, and can thereby roughly estimate the degree of overlap of P_E1 and P_E2: the portion of P_E2 from S4 to S5 should overlap P_E1. A rough overlap range is thus obtained first, for use in the subsequent precise comparison.
Please refer to Fig. 9, a schematic diagram of the feature matching that the present invention performs on glyphs. As described for the edge processing above, the difference-value formula can be adjusted so as to emphasize the feature points in the text, that is, to give them higher or distinctive grayscale values; such feature points may be turning points or end points. Grouping several feature points together forms a feature pattern (character pattern), into which feature values describing the relative positions of the strokes can also be incorporated.
Fig. 9 takes the right half "舌" of the glyph "話" as an example, with the dots marking the feature points of "舌". The set of these feature points, together with their mutual relations, forms a feature pattern. If the feature pattern defined for glyph A can also be found on glyph B, glyphs A and B can be judged to be the same character. After the appearance matching of Fig. 8 has produced a rough overlap range, the present invention further performs this more precise feature matching to confirm the overlapping part of the two adjacent edge images. Feature-pattern comparison locates the overlapping glyph portions accurately, so that the overlap P_OV of neighbouring edge images can be captured exactly and removed, and the next edge image linked to produce a linked image. In practical use, if the device is to have a usage pattern similar to a wand scanner, the camera can be run close to the object surface, that is, in close-up recording mode. In close-up recording, the periphery of the image is easily distorted slightly, and side lighting interacts with the object surface; as a result, although two adjacent images come from the same physical object and should in theory be identical, in practice they differ slightly: the same glyph in two images is never one hundred percent identical.
Furthermore, computing and comparing too many feature points would degrade the performance of the whole system; in practice, therefore, a feature pattern built from a moderate number of feature points is used for comparison, and the position of minimum difference is taken as the matching position. The appearance matching therefore not only accelerates the process but also increases the accuracy of the linking as a whole.
Please refer to Figure 10, a flowchart of the method of the present invention for converting consecutive images into consecutive character output. The steps are explained as follows:
Step 1001: receive images continuously;
Step 1002: crop each received image to its region of interest;
Step 1003: apply edge processing to each cropped image;
Step 1004: compare a cropped first image with a cropped second image, and remove from the first image the region where the two overlap, producing a third image;
Step 1005: link the third image to the second image, and output the result to the optical character recognition software for text recognition;
Step 1006: output the text recognized by the optical character recognition software.
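Steps 1001 to 1005 can be folded into one loop, sketched below under the assumptions already stated: a simple pixel-difference search stands in for the shape/feature comparison, and the OCR call of steps 1005 and 1006 is left to an external engine and omitted.

```python
import numpy as np

def stitch_frames(frames, max_overlap=40):
    """Fold a stream of cropped, edge-processed frames into one strip by
    trimming each new frame's overlap with the strip built so far."""
    def sad_overlap(a, b, limit):
        # Minimum-difference overlap width, bounded by `limit` (step 1004).
        errs = {w: np.abs(a[:, -w:].astype(int) - b[:, :w].astype(int)).mean()
                for w in range(1, min(limit, a.shape[1], b.shape[1]) + 1)}
        return min(errs, key=errs.get)

    strip = frames[0]
    for frame in frames[1:]:
        w = sad_overlap(strip, frame, max_overlap)
        strip = np.hstack([strip, frame[:, w:]])   # remove overlap, then link
    return strip                                   # ready for the OCR engine
```

The returned strip corresponds to the linked image P_E(1+2+3+4) of Fig. 7, which would then be handed to an OCR engine for step 1006.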
In step 1004, the second image is the image following the first image, so the second image and the first image necessarily have an overlapping part. In other words, the present invention is built on the assumption that two consecutive images overlap; only then can the subsequent linking and text-recognition flow be carried out correctly.
In step 1004, the images are compared by means of the appearance matching and feature matching described above. The appearance-matching step is not essential, however; it merely accelerates the subsequent feature matching. That is to say, in step 1004, performing feature matching alone can still determine the overlapping image accurately so that it can be removed from the first image.
In addition, the camera mentioned in the present invention can be installed in a portable electronic device (such as a mobile phone or a notebook computer), making it easy for the user to apply the present invention to scan an object.
In summary, with the method provided by the present invention, the user can simply record a video with a camera and, after the processing of the present invention, obtain continuous character output, achieving the effect of outputting characters while recording and offering considerable convenience.
The above are merely preferred embodiments of the present invention; all equivalent changes and modifications made according to the claims of the present invention shall fall within the scope of the present invention.

Claims (10)

1. A method for outputting consecutive characters in video-recording mode, characterized by comprising:
obtaining, with a camera moving in a fixed direction, a first image and a second image of an object bearing text, wherein the second image is the image following the first image, and the first image and the second image have an overlapping part;
comparing the first image with the second image to obtain a third image where the first image and the second image overlap;
removing the third image from the second image to produce a fourth image;
linking the fourth image to the first image to produce a fifth image; and
performing optical character recognition on the fifth image to output the characters in the fifth image.
2. The method according to claim 1, characterized in that comparing the first image with the second image to obtain the third image where the first image and the second image overlap comprises:
comparing, starting from the part of the first image located toward the direction of movement and the part of the second image located opposite to the direction of movement, until the third image where the first image and the second image overlap is found.
3. The method according to claim 1, characterized in that linking the fourth image to the first image to produce the fifth image comprises:
linking the part of the fourth image located opposite to the direction of movement to the part of the first image located toward the direction of movement.
4. The method according to claim 2, characterized in that comparing the first image with the second image to obtain the third image where the first image and the second image overlap further comprises:
detecting features of the text in the first image and the second image to obtain the third image where the first image and the second image overlap.
5. The method according to claim 4, characterized in that detecting the features of the text in the first image and the second image to obtain the third image where the first image and the second image overlap comprises:
taking the feature pattern of the text in the second image and searching for it by comparison in the first image, to obtain the third image where the first image and the second image overlap.
6. The method according to claim 4, characterized by further comprising:
detecting the appearance of the text in the first image and the second image to obtain the third image where the first image and the second image overlap.
7. The method according to claim 1, characterized by further comprising:
cropping the first image and the second image to their regions of interest to reduce the size of the first image and the second image.
8. The method according to claim 7, characterized in that cropping the first image and the second image to their regions of interest comprises:
detecting the non-text parts of the first image and the second image; and
cropping away the detected non-text parts of the first image and the second image.
9. The method according to claim 1, characterized by further comprising:
applying edge processing to the first image and the second image.
10. The method according to claim 9, characterized in that applying edge processing to the first image and the second image comprises:
transforming the luminance data of the pixels of the first image and the second image according to:
Edge(P(i,j)) = Diff(P(i,j-1), P(i,j+1)) + Diff(P(i+1,j-1), P(i-1,j+1)) + Diff(P(i+1,j), P(i-1,j)) + Diff(P(i+1,j+1), P(i-1,j-1));
where i and j are positive integers, and Edge(P(i,j)) is the luminance of pixel P(i,j) after edge processing;
and where Diff(P_X, P_Y) = abs[(P_X(G) - P_Y(R)) * (P_Y(G) - P_X(B)) * (P_X(B) - P_Y(R))], abs being the absolute-value function, P_X(G) the green grayscale value of pixel X, P_Y(R) the red grayscale value of pixel Y, P_Y(G) the green grayscale value of pixel Y, P_X(B) the blue grayscale value of pixel X, and X and Y positive integers.
CN 200910137719 2009-04-27 2009-04-27 Folding type electronic equipment Active CN101873364B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 200910137719 CN101873364B (en) 2009-04-27 2009-04-27 Folding type electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 200910137719 CN101873364B (en) 2009-04-27 2009-04-27 Folding type electronic equipment

Publications (2)

Publication Number Publication Date
CN101873364A CN101873364A (en) 2010-10-27
CN101873364B true CN101873364B (en) 2012-12-05

Family

ID=42998022

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 200910137719 Active CN101873364B (en) 2009-04-27 2009-04-27 Folding type electronic equipment

Country Status (1)

Country Link
CN (1) CN101873364B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101059840A (en) * 2007-05-24 2007-10-24 深圳市杰特电信控股有限公司 Words input method using mobile phone shooting style

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101059840A (en) * 2007-05-24 2007-10-24 深圳市杰特电信控股有限公司 Words input method using mobile phone shooting style

Also Published As

Publication number Publication date
CN101873364A (en) 2010-10-27

Similar Documents

Publication Publication Date Title
US10484610B2 (en) Image-capturing apparatus, captured image processing system, program, and recording medium
KR102020295B1 (en) Authentication of security documents and mobile device to carry out the authentication
US9684941B2 (en) Determining pose for use with digital watermarking, fingerprinting and augmented reality
EP2624224B1 (en) Method and device for distinguishing value documents
JP3662769B2 (en) Code reading apparatus and method for color image
CN107491730A (en) A kind of laboratory test report recognition methods based on image procossing
US20030156201A1 (en) Systems and methods for processing a digitally captured image
WO2011052276A1 (en) Image processing device, image processing method, image processing program, and recording medium with recorded image processing program
KR101907414B1 (en) Apparus and method for character recognition based on photograph image
US9619701B2 (en) Using motion tracking and image categorization for document indexing and validation
US8401335B2 (en) Method for outputting consecutive characters in video-recording mode
Li et al. Optical braille recognition with haar wavelet features and support-vector machine
JP2007081458A (en) Image processing apparatus and control method of image processing apparatus
CN105830091A (en) Systems and methods for generating composite images of long documents using mobile video data
CN102737240A (en) Method of analyzing digital document images
US7986839B2 (en) Image processing method, image processing apparatus, image forming apparatus, and storage medium
US7924468B2 (en) Camera shake determination device, printing apparatus and camera shake determination method
CN1941960A (en) Embedded scanning cell phone
CN101873364B (en) Folding type electronic equipment
Pollard et al. Building cameras for capturing documents
CN104519227B (en) image judgment device and image processing system
JP6068080B2 (en) Image combining device, image combining method, and program
CN106022246B (en) A kind of decorative pattern background printed matter Word Input system and method based on difference
Joshi et al. Source printer identification from document images acquired using smartphone
US20070253615A1 (en) Method and system for banknote recognition

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant