CN101873364A - Folding type electronic equipment - Google Patents

Folding type electronic equipment

Info

Publication number
CN101873364A
CN101873364A (application CN200910137719A)
Authority
CN
China
Prior art keywords
image
aforementioned
overlap
text
edge detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN 200910137719
Other languages
Chinese (zh)
Other versions
CN101873364B (en)
Inventor
黄裕翔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Asustek Computer Inc
Original Assignee
Asustek Computer Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Asustek Computer Inc filed Critical Asustek Computer Inc
Priority to CN 200910137719 priority Critical patent/CN101873364B/en
Publication of CN101873364A publication Critical patent/CN101873364A/en
Application granted granted Critical
Publication of CN101873364B publication Critical patent/CN101873364B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Landscapes

  • Character Input (AREA)
  • Character Discrimination (AREA)

Abstract

The invention relates to folding type electronic equipment. A method for outputting continuous characters in video-recording mode comprises the following steps: acquiring a first image and a second image of an object; comparing the first image with the second image to obtain a third image where the first image overlaps the second image; removing the third image from the first image to generate a fourth image; connecting the fourth image with the second image to generate a fifth image; and performing optical character recognition on the fifth image to output the characters in the fifth image.

Description

Folding-type electronic device
Technical field
The present invention relates to a method of capturing a document and providing the image file to optical character recognition (OCR) software, which recognizes the glyphs and produces a text file as output; more particularly, to a method of capturing the document with a video camera, integrating the successive images the camera captures, and then providing the integrated image to the OCR software for glyph recognition, producing a text file as output.
Background
OCR-related products were originally mostly an auxiliary function of scanners. The user had to place the document on the scanner, which scanned it page by page into image files on the computer; the system then delivered each page image to the OCR software for further analysis, separating text from graphics and reassembling the result to restore an editable document file.
As handheld devices gradually became widespread, vendors began trying to bring OCR technology into them. Two things are worth noting here. First, users mostly want to scan only the text on an object (and may further want that text translated). Second, because of the nature of text, it is usually typeset line by line and is continuous, and people read articles line by line as well. For this situation, vendors have brought so-called scanning pens to the market. To cope with this line-by-line, continuous input, the input interface (the lens) of a scanning pen uses a line camera: it treats two-dimensional text as a series of line segments, reads the segments in sequence, and lets the system reassemble them into a two-dimensional image for the OCR software to process.
But across today's most common handheld devices, what a mobile phone carries is a two-dimensional camera module. It feeds the system one complete two-dimensional image at a time; that is, its input behaves like that of a desktop scanner. Its usage pattern is likewise photo-by-photo processing, which does not match the line-by-line way people handle text. As a result, the more common OCR applications at present are still largely confined to business card recognition (BCR). To achieve continuous, line-by-line input, extra hardware assistance has been required (see, for example, patent CN2745288Y).
The purpose of this invention is to use the two-dimensional image input device (camera) commonly carried on today's handheld devices to achieve line-by-line, continuous text input without adding any extra hardware.
Summary of the invention
The invention provides a method of outputting continuous text in video-recording mode. The method comprises obtaining a first image and a second image of an object; comparing the first image with the second image to obtain a third image where the first image overlaps the second image; removing the third image from the second image to produce a fourth image; linking the fourth image with the first image to produce a fifth image; and performing optical character recognition on the fifth image to output the text in the fifth image.
Description of the drawings
Fig. 1 is a schematic diagram of shooting while a camera moves in one direction according to the present invention.
Fig. 2 is a schematic diagram of cropping an image to its essential region according to the present invention.
Fig. 3 is a schematic diagram of applying edge processing to a cropped image according to the present invention.
Fig. 4 is a schematic diagram of the edge-processing computation of the present invention.
Fig. 5 is a schematic diagram of merging edge images according to the present invention.
Fig. 6 is a schematic diagram of merging a linked image with the next edge image according to the present invention.
Fig. 7 is a schematic diagram of the image after overlap removal and linking according to the present invention.
Fig. 8 is a schematic diagram of performing a coarse shape comparison before matching edge images according to the present invention.
Fig. 9 is a schematic diagram of feature matching between glyphs according to the present invention.
Fig. 10 is a flowchart of the method of converting continuous images into continuous text output according to the present invention.
Embodiments
Therefore, the invention provides a method in which a camera (for example that of an ordinary mobile phone, or a digital camera) shoots a document: as the camera moves, the text on the document is recorded; the captured images are then integrated, and the integrated image is provided to the optical character recognition software for text recognition. In this way, a continuous stream of images is converted into a single integrated image; at the same time, the integrated image can be segmented character by character (so that no incomplete glyph is passed on) before being given to the optical character recognition software, achieving the effect of outputting text while recording.
Please refer to Fig. 1, a schematic diagram of shooting while a camera moves in one direction. First, the camera enters video-recording mode and records a document printed with a Chinese phrase meaning "press during the call". In recording mode the camera captures frames at a fixed sampling rate, the number of frames per second being set according to requirements. For example, at a setting of 5 frames per second, one second of video yields 5 images.
In Fig. 1, as the camera records the document while moving in direction D1 (from left to right), it captures the text on the document, the phrase "press during the call", in the images P_O1, P_O2, P_O3, and P_O4 in sequence. Together, P_O1 through P_O4 completely contain the glyphs to be recognized, and because the speed at which the camera moves does not necessarily match its sampling rate in recording mode, the images overlap one another. From the direction of motion D1 it follows that the right part of P_O1 overlaps the left part of P_O2, the right part of P_O2 overlaps the left part of P_O3, and the right part of P_O3 overlaps the left part of P_O4. The invention removes the mutually overlapping parts of P_O1 through P_O4 and links the remaining images to obtain the final correct image, which is provided to the optical character recognition software to recognize and output the correct text.
Please refer to Fig. 2, a schematic diagram of cropping an image to its essential region. Because the camera may capture unnecessary, redundant areas when shooting the document, the invention crops each captured image to its essential region to simplify the subsequent comparison. As shown in Fig. 2, the original captured image P_O1 becomes, after cropping, the cropped image P_C1.
In Fig. 2, the invention detects the unnecessary portions of the raw image P_O1 (such as its blank areas), defines the cropping boundaries E_1 and E_2, cuts P_O1 along those boundaries, and obtains the cropped image P_C1. All of the images P_O1 through P_O4 are cropped in this way to form P_C1 through P_C4 before any further processing.
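Purely as an illustration, the margin-trimming step could look like the following Python sketch; the whiteness threshold and the row/column ink test are assumptions of ours, not details given in the patent:

```python
import numpy as np

def crop_margins(img, thresh=250):
    """Trim near-white margins so later comparisons work on the text
    region only. `img` is an H x W x 3 array; `thresh` (assumed) is the
    gray level above which a pixel counts as blank."""
    gray = img.mean(axis=2)                        # per-pixel luminance
    rows = np.where(gray.min(axis=1) < thresh)[0]  # rows containing ink
    cols = np.where(gray.min(axis=0) < thresh)[0]  # columns containing ink
    if rows.size == 0 or cols.size == 0:
        return img                                 # nothing but margin
    return img[rows[0]:rows[-1] + 1, cols[0]:cols[-1] + 1]
```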
Please refer to Fig. 3, a schematic diagram of applying edge processing to a cropped image. To allow glyphs to be compared, the invention applies edge processing to each cropped image to extract the edges (outlines) of the glyphs in it. As shown in Fig. 3, the cropped image P_C1 becomes, after the edge processing of the invention, the edge image P_E1. All of the images P_C1 through P_C4 are edge-processed in this way to form P_E1 through P_E4 before any further processing.
Please refer to Fig. 4, a schematic diagram of the edge-processing computation, using pixel P11 as an example. Pixels P00, P01, P02, P10, P11, P12, P20, P21, and P22 have coordinates (i-1,j-1), (i-1,j), (i-1,j+1), (i,j-1), (i,j), (i,j+1), (i+1,j-1), (i+1,j), and (i+1,j+1) respectively; the edge computation for pixel P11 thus refers to the pixels surrounding it. The edge processing of the invention is computed as:
Edge(P11) = Diff(P10, P12) + Diff(P20, P02) + Diff(P21, P01) + Diff(P22, P00);
where Diff(P_X, P_Y) is a difference value between two pixels, adjustable according to the tones or characteristics of interest, for example:
Diff(P_X, P_Y) = abs[(P_X(G) - P_Y(R)) * (P_Y(G) - P_X(B)) * (P_X(B) - P_Y(R))];
where abs[Z] denotes the absolute value of Z; P_X(G) and P_Y(G) denote the green gray levels of pixels P_X and P_Y; P_X(B) denotes the blue gray level of pixel P_X; P_Y(R) denotes the red gray level of pixel P_Y; and i, j, X, and Y are positive integers. After edge processing, the original luminance of pixel P11 becomes the value Edge(P11), which is large where P11 lies on a bend or turning point, or where the color changes sharply.
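The operator above translates directly into code. A minimal Python sketch, with diff implementing the example Diff formula and edge_map the four neighbor-pair sum of Fig. 4 (the function names are ours):

```python
import numpy as np

def diff(px, py):
    """The example difference value: abs((Px_G - Py_R) * (Py_G - Px_B) *
    (Px_B - Py_R)). px and py are (R, G, B) triples; the patent notes
    the formula may be tuned to the tones or features of interest."""
    pxr, pxg, pxb = int(px[0]), int(px[1]), int(px[2])
    pyr, pyg, pyb = int(py[0]), int(py[1]), int(py[2])
    return abs((pxg - pyr) * (pyg - pxb) * (pxb - pyr))

def edge_map(img):
    """Apply the neighbor comparison of Fig. 4 to every interior pixel
    of an H x W x 3 image, producing a gray-level edge map in which
    bends, turning points, and sharp color changes score high."""
    h, w, _ = img.shape
    out = np.zeros((h, w))
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            out[i, j] = (diff(img[i, j - 1], img[i, j + 1])            # horizontal pair
                         + diff(img[i + 1, j - 1], img[i - 1, j + 1])  # one diagonal
                         + diff(img[i + 1, j], img[i - 1, j])          # vertical pair
                         + diff(img[i + 1, j + 1], img[i - 1, j - 1])) # other diagonal
    return out
```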
Please refer to Fig. 5, a schematic diagram of merging edge images, using P_E1 and P_E2 as an example for convenience. First, P_E1 and P_E2 are compared to determine the region where they overlap. Since, as noted above, the right part of P_E1 should overlap the left part of P_E2, comparing P_E1 with P_E2 correctly yields their overlapping part, which is then removed from the image that is later in shooting order. As Fig. 5 shows, after the comparison, the overlapping region of P_E1 and P_E2 is determined to be the image P_OV. The invention then removes from P_E2 the part P_OV that overlaps P_E1, and links the trimmed P_E2 after P_E1 to obtain the linked image P_E(1+2).
Please refer to Fig. 6, a schematic diagram of merging a linked image with the next edge image, using P_E(1+2) and P_E3 as an example for convenience. First, P_E(1+2) and P_E3 are compared to determine the region where they overlap. Since, as noted above, the right part of P_E(1+2) should overlap the left part of P_E3, comparing P_E(1+2) with P_E3 correctly yields their overlapping part, which is then removed from the image that is earlier in shooting order, P_E(1+2). As Fig. 6 shows, after the comparison, the overlapping region of P_E(1+2) and P_E3 is determined to be the image P_OV. The invention removes from P_E(1+2) the part P_OV that overlaps P_E3, then links the trimmed P_E(1+2) with P_E3 to obtain the linked image P_E(1+2+3). In practice, because the camera moves along a fixed direction, the overlap of P_E2 and P_E3 can never exceed P_E2; so when comparing P_E(1+2) with P_E3 there is no need to compare all of P_E(1+2): comparing within the range of P_E2 is sufficient.
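A sketch of the overlap search and linking, under stated assumptions: find_overlap exhaustively scores every candidate strip width between two edge maps of equal height using a mean absolute difference (the patent leaves the similarity measure to the shape and feature comparisons described below, and limits the search range to the width of the previous frame), and stitch drops the overlap from the later frame, as in Fig. 5 (dropping it from the earlier frame, as in Fig. 6, gives the same result):

```python
import numpy as np

def find_overlap(left, right, min_w=8):
    """Return the width w at which the right edge of `left` best matches
    the left edge of `right` (both 2-D edge maps of equal height).
    Exhaustive search; `min_w` and the SAD score are our assumptions."""
    best_w, best_err = min_w, np.inf
    for w in range(min_w, min(left.shape[1], right.shape[1]) + 1):
        err = np.abs(left[:, -w:] - right[:, :w]).mean()
        if err < best_err:
            best_err, best_w = err, w
    return best_w

def stitch(left, right):
    """Remove the overlapping strip P_OV from the later frame and
    concatenate, as in Fig. 5: P_E(1+2) = P_E1 + (P_E2 minus P_OV)."""
    w = find_overlap(left, right)
    return np.hstack([left, right[:, w:]])
```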
Please refer to Fig. 7, a schematic diagram of the image after overlap removal and linking. As shown in Fig. 7, after the cropping, edge processing, comparison, and linking described above, the edge images P_E1, P_E2, P_E3, and P_E4 are joined into a single linked image P_E(1+2+3+4): their mutually overlapping parts are removed and the remainder is connected into one image. A linked image so processed can be provided to the optical character recognition software for glyph recognition, producing correct text output without the recognition errors that overlapping images would otherwise cause. Moreover, OCR of the already-linked image and linking of further images can proceed at the same time: for example, while P_E4 is being linked, the fragment covering the leading glyph can already be removed from P_E(1+2+3) and handed to the OCR software, so that only the remaining image is actually linked with P_E4. When choosing how much to remove, care must be taken not to cut into the part whose integration is not yet finished; in the example above, when removing from P_E(1+2+3), at least the P_E3 portion must be kept.
Please refer to Fig. 8, a schematic diagram of performing a coarse shape comparison before matching edge images, using P_E1 and P_E2 as an example for convenience. In Fig. 8, edge image P_E1 contains three glyph images, two complete glyphs followed by the left half of a third, with corresponding outlines S_1, S_2, and S_3; edge image P_E2 contains the right half of one glyph, a complete glyph, and the left half of the next, with corresponding outlines S_4, S_5, and S_6. From the outlines S_1 through S_6, the invention can first judge roughly that S_4 matches S_2 and that S_5 matches S_3, and can thus roughly estimate the degree to which P_E1 and P_E2 overlap: the part of P_E2 from S_4 to S_5 should overlap P_E1. A rough overlap range is thus obtained first, within which the subsequent precise comparison is carried out.
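The text does not spell out the shape descriptor, so the following stand-in sketch uses column ink profiles: comparing the trailing profile of the earlier edge map with the leading profile of the later one gives a cheap, rough overlap width for the feature comparison to refine:

```python
import numpy as np

def coarse_overlap(left, right):
    """Rough overlap width from outline information only: compare the
    column-sum ink profiles of two edge maps at every candidate width
    and keep the best. The descriptor is our assumption, standing in
    for the outline matching of Fig. 8."""
    a = left.sum(axis=0)    # column ink profile, earlier frame
    b = right.sum(axis=0)   # column ink profile, later frame
    best_w, best_err = 1, np.inf
    for w in range(1, min(a.size, b.size) + 1):
        err = np.abs(a[-w:] - b[:w]).mean()   # lower = better match
        if err < best_err:
            best_err, best_w = err, w
    return best_w           # refined later by feature matching
```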
Please refer to Fig. 9, a schematic diagram of feature matching between glyphs. As described under the edge processing above, the formula for the difference value can be adjusted to emphasize the characteristic points in the text, that is, to give them higher or distinctive gray-level values; such characteristic points are typically turning points and endpoints. Several characteristic points are grouped together into a feature pattern (character pattern), which can also incorporate feature values describing the relative relationships between strokes.
Fig. 9 takes as an example the right half, "舌", of the glyph "话", with the dots marking its characteristic points. The set of these points, including the relative relationships among them, forms a feature pattern. If the feature pattern of a glyph A can also be found on a glyph B, then A and B can be judged to be the same glyph. After the shape comparison of Fig. 8 has produced a rough overlap range, the invention further performs the more precise feature-pattern comparison to determine exactly where two adjacent edge images overlap. Feature-pattern comparison accurately identifies the overlapping glyphs, so the overlap P_OV of neighboring edge images can be captured exactly, removed, and the next edge image linked to yield a linked image. In practice, to give the device a usage pattern similar to that of a scanning pen, the camera can be run close to the object surface in a close-up recording mode. In close-up recording, the periphery of the image is easily slightly distorted, and the interaction of side light with the object surface adds further variation; so although two adjacent images come from the same real object and should in theory be identical, in fact they differ slightly, and the same glyph in two images is never one hundred percent identical. Furthermore, computing and comparing too many characteristic points would hurt the performance of the whole system. In practice, therefore, feature patterns are built from a moderate number of characteristic points, and the position of minimum difference is taken as the matching position. The shape-matching step thus not only accelerates the process but also increases the overall accuracy of the linking.
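A sketch of the feature-pattern idea under stated assumptions: strong responses of the edge operator serve as characteristic points, and two patterns are compared through their centroid-relative geometry, so a small distance tolerates the slight close-up warping described above. The point-selection rule and the distance measure are our choices, not the patent's:

```python
import numpy as np

def feature_points(edge, thresh, max_pts=40):
    """Pick a moderate number of strong edge responses (turning points
    and endpoints score high under the edge operator) as the feature
    points of Fig. 9."""
    ys, xs = np.where(edge > thresh)
    order = np.argsort(edge[ys, xs])[::-1][:max_pts]   # strongest first
    return np.stack([ys[order], xs[order]], axis=1)    # (N, 2) coordinates

def pattern_distance(pts_a, pts_b):
    """Compare two feature patterns by internal geometry: center each
    pattern on its centroid, then take the mean nearest-neighbor
    distance. Identical glyphs give a small value even when the two
    frames are slightly warped."""
    ca = pts_a - pts_a.mean(axis=0)
    cb = pts_b - pts_b.mean(axis=0)
    d = np.linalg.norm(ca[:, None, :] - cb[None, :, :], axis=2)
    return d.min(axis=1).mean()
```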
Please refer to Fig. 10, a flowchart of the method of converting continuous images into continuous text output. The steps are as follows:
Step 1001: receive images continuously;
Step 1002: crop each received image to its essential region;
Step 1003: apply edge processing to each cropped image;
Step 1004: compare a first cropped image with a second cropped image and remove their overlapping region from the first image, producing a third image;
Step 1005: link the third image with the second image and provide the result to the optical character recognition software for text recognition;
Step 1006: output the text recognized by the optical character recognition software.
In step 1004, the second image is the image captured immediately after the first image, so the two necessarily share an overlapping part. In other words, the invention is built on the assumption that two consecutive images overlap; only then can the subsequent linking and text recognition proceed correctly.
In step 1004, the images are compared using the shape comparison and the feature comparison described above. The shape-comparison step is not essential, however; it only accelerates the subsequent feature comparison. That is, in step 1004 the feature comparison alone can still determine the overlapping region accurately for removal from the first image.
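Putting the steps of Fig. 10 together, reusing the sketches above (crop_margins, edge_map, stitch) and treating the OCR engine as an opaque callable, since the patent does not name one:

```python
def frames_to_text(frames, ocr):
    """End-to-end sketch of Fig. 10. `frames` is an iterable of H x W x 3
    images; `ocr` is any callable mapping an image to a string."""
    edges = [edge_map(crop_margins(f)) for f in frames]  # steps 1002-1003
    linked = edges[0]
    for nxt in edges[1:]:
        # Steps 1004-1005: remove the overlap and link. The patent limits
        # the overlap search to the width of the previous frame; this
        # sketch searches the whole accumulated image for simplicity.
        linked = stitch(linked, nxt)
    return ocr(linked)                                   # steps 1005-1006
```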
In addition, the camera mentioned in the present invention can be built into a portable electronic device (such as a mobile phone or a notebook computer), so that the user can conveniently use the invention to scan an object.
In summary, with the method provided by the present invention, the user can simply record a video with a camera and, after the processing of the present invention, obtain continuous text output, achieving the effect of outputting text while recording and offering greater convenience.
The above are only preferred embodiments of the present invention; all equivalent changes and modifications made in accordance with the claims of the present invention shall fall within the scope of the present invention.

Claims (11)

1. A method of outputting continuous text in video-recording mode, characterized by comprising:
obtaining a first image and a second image of an object;
comparing the first image with the second image to obtain a third image where the first image overlaps the second image;
removing the third image from the second image to produce a fourth image;
linking the fourth image with the first image to produce a fifth image; and
performing optical character recognition on the fifth image to output the text in the fifth image.
2. The method according to claim 1, characterized in that obtaining the first image and the second image of the object comprises:
recording the object with a camera moving along a direction, to obtain the first image and the second image;
wherein the second image is the image captured after the first image.
3. The method according to claim 2, characterized in that comparing the first image with the second image to obtain the third image where the first image overlaps the second image comprises:
comparing the part of the first image on the side of the direction of motion with the part of the second image on the opposite side, until the third image where the first image overlaps the second image is found.
4. The method according to claim 2, characterized in that linking the fourth image with the first image to produce the fifth image comprises:
linking the part of the fourth image on the side opposite the direction of motion to the part of the first image on the side of the direction of motion.
5. The method according to claim 3, characterized in that comparing the first image with the second image to obtain the third image where the first image overlaps the second image further comprises:
detecting features of the text in the first image and the second image to obtain the third image where the first image overlaps the second image.
6. The method according to claim 5, characterized in that detecting features of the text in the first image and the second image comprises:
searching the first image for the feature patterns of the text in the second image by comparison, to obtain the third image where the first image overlaps the second image.
7. The method according to claim 5, characterized by further comprising:
detecting the shapes of the text in the first image and the second image to obtain the third image where the first image overlaps the second image.
8. The method according to claim 1, characterized by further comprising:
cropping the first image and the second image to their essential regions, to reduce the size of the first image and the second image.
9. The method according to claim 8, characterized in that cropping the first image and the second image comprises:
detecting the non-text parts of the first image and the second image; and
cutting the detected non-text parts away from the first image and the second image.
10. The method according to claim 1, characterized by further comprising:
applying edge processing to the first image and the second image.
11. The method according to claim 10, characterized in that applying edge processing to the first image and the second image comprises:
converting the luminance data of the pixels of the first image and the second image as follows:
Edge(P(i,j)) = Diff(P(i,j-1), P(i,j+1)) + Diff(P(i+1,j-1), P(i-1,j+1)) + Diff(P(i+1,j), P(i-1,j)) + Diff(P(i+1,j+1), P(i-1,j-1));
wherein i, j, X, and Y are positive integers, and Edge(P(i,j)) denotes the luminance of pixel P(i,j) after edge processing;
wherein Diff(P_X, P_Y) = abs[(P_X(G) - P_Y(R)) * (P_Y(G) - P_X(B)) * (P_X(B) - P_Y(R))], abs is the absolute-value function, P_X(G) and P_Y(G) denote the green gray levels of pixels P_X and P_Y, P_X(B) denotes the blue gray level of pixel P_X, and P_Y(R) denotes the red gray level of pixel P_Y.
CN 200910137719 2009-04-27 2009-04-27 Folding type electronic equipment Active CN101873364B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 200910137719 CN101873364B (en) 2009-04-27 2009-04-27 Folding type electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 200910137719 CN101873364B (en) 2009-04-27 2009-04-27 Folding type electronic equipment

Publications (2)

Publication Number Publication Date
CN101873364A true CN101873364A (en) 2010-10-27
CN101873364B CN101873364B (en) 2012-12-05

Family

ID=42998022

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 200910137719 Active CN101873364B (en) 2009-04-27 2009-04-27 Folding type electronic equipment

Country Status (1)

Country Link
CN (1) CN101873364B (en)

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101059840A (en) * 2007-05-24 2007-10-24 深圳市杰特电信控股有限公司 Words input method using mobile phone shooting style

Also Published As

Publication number Publication date
CN101873364B (en) 2012-12-05

Similar Documents

Publication Publication Date Title
US10484610B2 (en) Image-capturing apparatus, captured image processing system, program, and recording medium
US9747504B2 (en) Systems and methods for generating composite images of long documents using mobile video data
KR102020295B1 (en) Authentication of security documents and mobile device to carry out the authentication
US9692991B2 (en) Multispectral imaging system
US9684941B2 (en) Determining pose for use with digital watermarking, fingerprinting and augmented reality
AU2012313148B2 (en) Identification method for valuable file and identification device thereof
US20030156201A1 (en) Systems and methods for processing a digitally captured image
KR101907414B1 (en) Apparus and method for character recognition based on photograph image
WO2009114967A1 (en) Motion scan-based image processing method and device
JP2000322508A (en) Code reading device and method for color image
US8401335B2 (en) Method for outputting consecutive characters in video-recording mode
JP2007081458A (en) Image processing apparatus and control method of image processing apparatus
CN105830091A (en) Systems and methods for generating composite images of long documents using mobile video data
CN102737240A (en) Method of analyzing digital document images
US7986839B2 (en) Image processing method, image processing apparatus, image forming apparatus, and storage medium
US7924468B2 (en) Camera shake determination device, printing apparatus and camera shake determination method
CN1941960A (en) Embedded scanning cell phone
CN101873364B (en) Folding type electronic equipment
CN104519227B (en) image judgment device and image processing system
CN106204420A (en) A kind of pen type image scanning joining method and device
JP6068080B2 (en) Image combining device, image combining method, and program
TW201301874A (en) Method and device of document scanning and portable electronic device
CN106022246B (en) A kind of decorative pattern background printed matter Word Input system and method based on difference
Körber Improving Camera-based Document Analysis with Deep Learning
JP2010130634A (en) Image processing apparatus, image data output processing device, image processing method, program and recording medium therefor

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant