CN102682308A - Imaging processing method and device - Google Patents

Imaging processing method and device

Info

Publication number
CN102682308A
CN102682308A, CN2011100645277A, CN201110064527A
Authority
CN
China
Prior art keywords
image
frames images
images
gray
frames
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2011100645277A
Other languages
Chinese (zh)
Other versions
CN102682308B (en)
Inventor
游赣梅
杜成
长谷川史裕
郑继川
赵立军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ricoh Co Ltd
Original Assignee
Ricoh Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ricoh Co Ltd filed Critical Ricoh Co Ltd
Priority to CN201110064527.7A
Publication of CN102682308A
Application granted
Publication of CN102682308B
Legal status: Expired - Fee Related
Anticipated expiration

Landscapes

  • Image Analysis (AREA)

Abstract

The invention provides an image processing method comprising the following steps: an input step of inputting a first image and a second image of the same size; an image frame extraction step of extracting, in the same manner, at least one first image frame from the first image and at least one second image frame from the second image; an image frame matching step of finding, for the second image frame, the closest first image frame in the first image as the first image frame corresponding to the second image frame; a comparison step of comparing the second image frame with the corresponding first image frame to determine whether the two are the same or different; a marking step of, if the second image frame differs from the corresponding first image frame, marking the position of the second image frame in the second image and adding the mark to the second image; and an output step of outputting the second image. The invention correspondingly provides an image processing device.

Description

Image processing method and image processing equipment
Technical field
The present invention relates to an image processing method and an image processing device, and more particularly to an image processing method and an image processing device for image comparison.
Background technology
With the development of computer image technology, demand for image quality inspection and digital document analysis is steadily expanding, and within this field there is an urgent need for techniques that can compare two images and judge whether they are identical. In practice, after an original image passes through a series of processes such as printing and scanning, or after remote printing or remote scanning of the original image, it is often necessary to judge whether the resulting image is identical to the original, that is, whether it has been altered; ideally, if the image is judged to have been altered, the altered positions can also be located in the resulting image. Achieving this goal is of great significance for document security.
Document US 7190470 B2 (System and method for automatic document verification; HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P.) discloses a document printing verification system for comparing an original document with a scanned copy of the printed document. Specifically, for text documents, images are converted into text by OCR (Optical Character Recognition) and the texts are then compared using text-comparison techniques. Because this patent relies on OCR, it can only handle images whose substantive content is text, or only the textual parts of an image, so the range of image types it can handle is limited; moreover its implementation requires a dictionary, and character-recognition errors propagate into text-comparison errors, so the process is time-consuming and error-prone.
Document US 2006/0126106 A1 (System and method for remote proof printing and verification; XEROX CORP.) provides a remote-printing verification method used before submitting a job for the user to review; its verification step compares the histogram of the original image with that of the scanned comparison image to judge whether the two are identical. However, because this technique compares histograms of the entire images, its accuracy is poor, and even when the comparison image is judged to have changed relative to the original image, the changed parts of the comparison image cannot be located.
Document US 7076086 B2 (Image inspection device; FUJI XEROX CO LTD) provides a device for inspecting an output image; it compares image parameters such as resolution and brightness to judge whether the output image has changed relative to the original image. However, because this technique compares parameters of the entire images, its accuracy is poor, and even when the output image is judged to have changed relative to the original image, the changed parts of the output image cannot be located.
Summary of the invention
The present invention has been made in view of the above problems in the prior art, in order to solve them. The present invention proposes an image processing method and an image processing device that extract image frames from the images and compare the images on the basis of those frames, so as to judge whether two images are identical and to locate the parts of one image that have changed relative to the other.
According to one embodiment of the present invention, an image processing method is provided, comprising: an input step of inputting a first image and a second image of the same size; an image frame extraction step of extracting, in the same manner, at least one first image frame from the first image and at least one second image frame from the second image; an image frame pairing step of finding, for each second image frame, the first image frame closest in position in the first image as the first image frame corresponding to that second image frame; a comparison step of comparing the second image frame with the corresponding first image frame to determine whether they are identical or different; a marking step of, if the second image frame is determined to differ from the corresponding first image frame, marking the position of the second image frame in the second image and appending the mark to the second image; and an output step of outputting the second image.
According to another embodiment of the present invention, an image processing device is provided, comprising: an input means for inputting a first image and a second image of the same size; an image frame extraction means for extracting, in the same manner, at least one first image frame from the first image and at least one second image frame from the second image; an image frame pairing means for finding, for each second image frame, the first image frame closest in position in the first image as the first image frame corresponding to that second image frame; a comparison means for comparing the second image frame with the corresponding first image frame to determine whether they are identical or different; a marking means for, if the second image frame is determined to differ from the corresponding first image frame, marking the position of the second image frame in the second image and appending the mark to the second image; and an output means for outputting the second image.
The image processing method and image processing device according to embodiments of the present invention require no dictionary for their implementation and are not limited to comparing images whose substantive content is text; they can automatically and accurately judge whether two images are identical, and can automatically locate the differences between them.
The above and other objects, features, advantages, and technical and industrial significance of the present invention will be better understood by reading the following detailed description of preferred embodiments of the present invention in conjunction with the accompanying drawings.
Description of drawings
Fig. 1 is an overview flowchart of the image processing method according to an embodiment of the present invention.
Fig. 2A exemplarily shows a first image on which an embodiment of the present invention operates.
Fig. 2B exemplarily shows a second image on which an embodiment of the present invention operates.
Fig. 3 is a flowchart of the image frame extraction process according to an embodiment of the present invention.
Fig. 4A to Fig. 4D are schematic diagrams explaining, by example, the image frame extraction process according to an embodiment of the present invention; Fig. 4A shows a gray-scale map of one color component extracted from the first image of Fig. 2A; Fig. 4B shows the histogram of pixel counts over the gray levels of the monochrome gray-scale map of Fig. 4A; Fig. 4C shows the binary image obtained from the monochrome gray-scale map of Fig. 4A for a corresponding gray-level interval; and Fig. 4D schematically shows the image frames obtained by performing frame extraction on the binary image of Fig. 4C.
Fig. 5 is a schematic diagram showing how differences from the original image are marked in the comparison image.
Fig. 6 is an overall block diagram of an image processing system according to an embodiment of the present invention.
Fig. 7 is an overall block diagram of an image processing device according to an embodiment of the present invention.
Embodiment
Embodiments of the present invention are described below with reference to the drawings.
Fig. 1 is an overview flowchart of the image processing method according to an embodiment of the present invention. The image processing method according to this embodiment comprises: an input step S100 of inputting a first image and a second image of the same size; an image frame extraction step S200 of extracting, in the same manner, at least one first image frame from the first image and at least one second image frame from the second image; an image frame pairing step S300 of finding, for each second image frame, the first image frame closest in position in the first image as the first image frame corresponding to that second image frame; a comparison step S400 of comparing the second image frame with the corresponding first image frame to determine whether they are identical or different; a marking step S500 of, if the second image frame is determined to differ from the corresponding first image frame, marking the position of the second image frame in the second image and appending the mark to the second image; and an output step S600 of outputting the second image.
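The flow of steps S100 through S600 can be sketched as a small pipeline. This is a minimal illustration, not the patented implementation: the function names and the idea of passing the extraction, pairing, and comparison steps in as callables are assumptions made for clarity.

```python
def process(first_image, second_image, extract, pair, differ):
    """Run the compare-and-mark pipeline of Fig. 1.

    Image frames are (x0, y0, x1, y1) boxes; `extract`, `pair` and
    `differ` stand in for steps S200, S300 and S400 respectively.
    """
    if len(first_image) != len(second_image):   # S100: sizes must match
        raise ValueError("images must have the same size")
    first_frames = extract(first_image)         # S200: frames of image 1
    second_frames = extract(second_image)       # S200: frames of image 2
    marks = []
    for sf in second_frames:
        cf = pair(sf, first_frames)             # S300: closest first frame
        if differ(sf, cf):                      # S400: same or different?
            marks.append(sf)                    # S500: mark its position
    return second_image, marks                  # S600: output
```

With trivial stand-ins for the three steps, `process` returns the second image together with the positions that would be marked on it.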
The first image and the second image input in input step S100 have the same size; that is, they consist of the same numbers of pixel rows and columns. The image processing method of this embodiment compares two images of identical size. Before input step S100, a preliminary step may be included that reads in two original images and judges whether their sizes are equal; if they are, the two images are supplied to input step S100 as the first image and the second image. If the two images differ in size, various prior-art techniques can establish this and thereby conclude that the two images are different, in which case the image processing method of this embodiment performs no further processing; alternatively, a resizing step can adjust the two images to the same size before supplying them to input step S100 as the first image and the second image. Fig. 2A exemplarily shows a first image on which an embodiment operates, and Fig. 2B exemplarily shows a second image; the first image may be an original image and the second image a comparison image.
In image frame extraction step S200, the first image frames and the second image frames are extracted from the first image and the second image respectively, using the same method for both. The first image frames and the second image frames may be referred to collectively as image frames. For example, connected-domain methods such as the FindContours function in the common image-processing package OpenCV, or a call to the existing Blob Library, can extract from the (color or monochrome) first and second images the edge contours of image content such as patterns or characters, and thereby obtain rectangular frames enclosing that content as the first image frames and the second image frames respectively. There may be more than one first or second image frame; on the other hand, if an image contains no content at all, a blank white image for example, the entire image may be taken as a single image frame. An embodiment of the invention also provides another means of extracting image frames from an image, described below. By extracting image frames, blank regions of the images are at least partly excluded from subsequent processing, which speeds up the procedure.
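As a dependency-free sketch of the extraction idea (the document itself points to OpenCV's FindContours or the Blob Library for the real thing), connected regions of content pixels in a binarized image can be boxed with a simple flood fill; the function name and box representation below are illustrative assumptions.

```python
from collections import deque

def extract_frames(binary):
    """Return bounding boxes (x0, y0, x1, y1) of 4-connected content regions.

    `binary` is a list of rows; nonzero entries are content pixels.
    A fully blank image yields the whole image as one frame, matching
    the behaviour described in the text above.
    """
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    frames = []
    for y in range(h):
        for x in range(w):
            if binary[y][x] and not seen[y][x]:
                # breadth-first flood fill over this connected component
                q = deque([(x, y)])
                seen[y][x] = True
                x0 = x1 = x
                y0 = y1 = y
                while q:
                    cx, cy = q.popleft()
                    x0, x1 = min(x0, cx), max(x1, cx)
                    y0, y1 = min(y0, cy), max(y1, cy)
                    for nx, ny in ((cx + 1, cy), (cx - 1, cy),
                                   (cx, cy + 1), (cx, cy - 1)):
                        if (0 <= nx < w and 0 <= ny < h
                                and binary[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            q.append((nx, ny))
                frames.append((x0, y0, x1, y1))
    if not frames:
        frames.append((0, 0, w - 1, h - 1))
    return frames
```

Applied to a small binary image with two separated blobs, this returns one bounding box per blob, which is the "rectangular frame enclosing the content" the step produces.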
In image frame pairing step S300, a corresponding first image frame is sought, among the first image frames extracted from the first image, for each second image frame extracted from the second image; when there are multiple second image frames, a corresponding first image frame is sought for each of them. The coordinate systems of the first image and the second image can be set up in the same way, for example both with the lower-left corner of the image as origin, or both with the image center as origin, so that the coordinates of any point in the two images are comparable; in other words, any point in either image has a corresponding point with the same coordinates in the other. Since each image frame is a rectangular frame, those skilled in the art will appreciate that when a frame is extracted, its information can be known and recorded; this information includes at least the vertex coordinates of the frame in its image, and in fact knowing the coordinates of the two vertices on one diagonal of the frame suffices to determine the frame's position in the coordinate system.
Since the first image and the second image share the same coordinate system, the corresponding first image frame can be found for each second image frame, among the at least one first image frame of the first image, in various ways. For example, one may find, among the first image frames, the one whose center point is nearest to the center point of the second image frame, regard it as the frame closest in position to the second image frame, and take it as the corresponding first image frame. Alternatively, one may find the first image frame minimizing the sum of the distances between corresponding vertices on a diagonal of a given direction; for example, the sum of the distance between the top-left vertex of the second frame and the top-left vertex of a first frame and the distance between their bottom-right vertices, the minimizing first frame being taken as the corresponding one. Or one may find the first image frame minimizing the mean of the distances between corresponding vertices on such a diagonal; for example, the average of the distance between the bottom-left vertices of the two frames and the distance between their top-right vertices. The ways of finding the corresponding first image frame for a second image frame are not limited to the above. Those skilled in the art will appreciate that, if some method yields several corresponding first image frames for one second image frame, another method can be combined with it to settle on a single corresponding frame, or all of the candidates can be carried into subsequent processing as corresponding first image frames. Those skilled in the art will also understand that, if the first image contains only one first image frame, that frame may correspond to all second image frames.
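The first pairing strategy described above, nearest center point, can be sketched as follows; frame and function names are illustrative, and frames are (x0, y0, x1, y1) boxes in the shared coordinate system.

```python
def center(frame):
    """Center point of an (x0, y0, x1, y1) box."""
    x0, y0, x1, y1 = frame
    return ((x0 + x1) / 2.0, (y0 + y1) / 2.0)

def pair_frames(second_frames, first_frames):
    """Map each second frame to the first frame with the nearest center.

    Squared Euclidean distance suffices for choosing the minimum.
    """
    pairs = {}
    for sf in second_frames:
        sx, sy = center(sf)
        pairs[sf] = min(
            first_frames,
            key=lambda ff: (center(ff)[0] - sx) ** 2
                         + (center(ff)[1] - sy) ** 2,
        )
    return pairs
```

The vertex-sum and vertex-average variants described above differ only in the `key` function used for the minimization.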
In comparison step S400, the first image frame and second image frame paired as mutually corresponding in image frame pairing step S300 are compared to judge whether they are identical. According to embodiments of the invention, this judgment can be made in various ways; the comparison process is described further below.
In marking step S500, for each pair of corresponding first and second image frames judged different in comparison step S400, the position of the second image frame in the second image is marked. The mark may be placed at the position of the second image frame itself, or outside it; for example, a mark outside the second image frame, or even outside the second image, pointing to the frame's position, or a listing of the frame's coordinate position in tabular form. In short, any way of expressing the position of the second image frame will do. The mark is appended to the second image as a constituent part of it. In output step S600, the second image is output; if a mark was applied to the second image in marking step S500, the output second image contains that mark.
As an improvement to the above image processing method, if there exists a first image frame having no corresponding second image frame, the marking step may further mark the position in the second image corresponding to that first image frame, and append the mark to the second image. That is, if after the processing of image frame pairing step S300 some first image frame has no second image frame corresponding to it, the position in the second image corresponding to that first image frame may additionally be marked in marking step S500. Since the first image and the second image share the same coordinate system, the coordinate position of that first image frame in the first image can serve as its corresponding position in the second image. As before, the mark may be placed at that corresponding position or outside it; for example, a mark outside the position, or even outside the second image, pointing to it, or a listing of the position's coordinates in tabular form. In short, any way of expressing the corresponding position of the first image frame in the second image will do, and the mark is appended to the second image as a constituent part of it. In this case, the second image output in output step S600 contains the mark for the position corresponding to that first image frame.
As a further improvement to the above image processing method, the method may also comprise an image frame segmentation step: if the measure of a first image frame or second image frame exceeds a first predetermined threshold, the frame is further divided, by a fixed grid with an identical alignment, into a plurality of first image frames or second image frames. This image frame segmentation step may optionally be performed between image frame extraction step S200 and image frame pairing step S300.
That is, a first or second image frame extracted in image frame extraction step S200 may optionally be divided into smaller frames, and the several smaller frames obtained by dividing a frame replace that original frame in subsequent processing. Area may serve as the measure, with some area threshold as the first predetermined threshold: first or second image frames whose area exceeds the threshold are divided. Alternatively, side length may serve as the measure, with some length threshold: if the longer side of a first or second image frame exceeds the threshold, or if the shorter side exceeds it, the frame is divided. The first predetermined threshold can be set in various ways; for example, as a proportion of the total image area, or of an image side length, or in view of the distribution of the areas or side lengths of the frames extracted in step S200, e.g. by setting the proportion of the total number of frames that will be divided.
A grid may be used to divide the image frames. The same grid scale is used for all first and second image frames extracted from the first and second images, and the grid is aligned with all frames in the same way; for example, a grid whose cells are 20*30 pixels, aligned with the top-left corner of every frame whose measure exceeds the first predetermined threshold, so that the resulting first and second image frames remain comparable after division. Those skilled in the art will understand that the grid cell sides can be chosen in various ways, for example as a proportion of the corresponding image side.
If the image processing method uses the image frame segmentation step, the several frames obtained by grid division replace the original frame as input to image frame pairing step S300.
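The optional segmentation step can be sketched as follows. Here area is used as the measure, the grid is aligned to each frame's own top-left corner, and the 20*30 cell size follows the example above; the function name, the area threshold, and the per-frame alignment are illustrative assumptions.

```python
def split_frame(frame, cell_w=20, cell_h=30, max_area=600):
    """Divide an oversized (x0, y0, x1, y1) frame into grid-aligned cells.

    Frames whose area is within `max_area` are returned unchanged;
    larger ones are cut by a fixed cell_w x cell_h grid anchored at
    the frame's top-left corner, clipped to the frame's extent.
    """
    x0, y0, x1, y1 = frame
    if (x1 - x0 + 1) * (y1 - y0 + 1) <= max_area:
        return [frame]          # small enough: keep as a single frame
    cells = []
    for cy in range(y0, y1 + 1, cell_h):
        for cx in range(x0, x1 + 1, cell_w):
            cells.append((cx, cy,
                          min(cx + cell_w - 1, x1),
                          min(cy + cell_h - 1, y1)))
    return cells
```

The sub-frames returned for a large frame then replace it on entry to the pairing step, exactly as described above.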
Example ways of comparing, in comparison step S400, whether the mutually corresponding first and second image frames paired in image frame pairing step S300 are identical are described below.
For example, in comparison step S400, the positional relation of the second image frame and the corresponding first image frame may be compared: if the distance between the second image frame and the first image frame exceeds a second predetermined threshold, the two frames are determined to be different.
Since the first image and the second image share the same coordinate system, the distance between the second image frame and the first image frame can be defined in various ways. For example, it can be the distance between the centers of the two frames; or the sum of the distances between corresponding vertices on a diagonal of the same direction, e.g. the distance between the lower-left vertices of the two frames plus the distance between their upper-right vertices; or the average of the distances between corresponding vertices on such a diagonal, e.g. the average of the distance between the upper-left vertices of the two frames and the distance between their lower-right vertices; and so on. Those skilled in the art will recognize that the distance between two image frames can also be defined in other ways. The distance definition used here may be the same as, or different from, the one used in image frame pairing step S300.
The second predetermined threshold for judging whether corresponding first and second image frames are identical can be determined as required: a relatively strict standard calls for a smaller threshold, a looser one for a larger threshold. The determination may also take the frame distance definition into account; for example, with the sum of diagonal vertex distances as the distance between two frames, the threshold can be larger than with the center-point distance. It may further consider the size of the whole image, for example by setting the threshold to a proportion of an image side length. If the distance between mutually corresponding second and first image frames exceeds the second predetermined threshold, the two frames are determined to be different.
Alternatively, for example, in comparison step S400, the mean gray differences of the second image frame and the corresponding first image frame for each shared color component may be compared: if the largest mean gray difference exceeds a third predetermined threshold, the two frames are determined to be different.
Those skilled in the art will understand that the RGB (red-green-blue) pixel values of each pixel in the corresponding first and second image frames are known, so the mean gray differences of the two frames for the R, G, and B components can be computed by the following formulas (1)-(3):
R_diff = Σ|R_p1 − R_p2| / N (1)
G_diff = Σ|G_p1 − G_p2| / N (2)
B_diff = Σ|B_p1 − B_p2| / N (3)
where R_diff, G_diff, and B_diff are the mean gray differences of the corresponding first and second image frames for the R, G, and B components respectively; R_p1, G_p1, and B_p1 are the gray values of pixel p1 of the first image frame for the R, G, and B components; R_p2, G_p2, and B_p2 are the gray values of pixel p2 of the second image frame for the R, G, and B components; pixels p1 and p2 have identical coordinates; and N is the number of pixels in the first or second image frame.
As noted above, the first image and the second image share the same coordinate system, and p1 and p2 are pixels having the same coordinates in it. If the corresponding first and second image frames coincide exactly in position in this coordinate system, their pixels p1 and p2 correspond one to one. However, the corresponding first and second image frames may be offset from each other in the shared coordinate system. In that case, one may consider only the "intersection" of the two frames, that is, compute only over pixels p1 and p2 whose coordinates lie in both frames; or consider their "union", temporarily extending each frame within its own image to cover the coordinate range of the other frame; or disregard in the computation the pixels at positions covered by the first frame but not the second while extending the first frame within the first image to cover the second frame's coordinate range, or vice versa. In short, the first and second image frames are adjusted so that their positions coincide exactly in the coordinate system, after which their N pixels p1 and p2 correspond one to one.
Then the maximum of R_diff, G_diff, and B_diff is taken as the largest mean gray difference of the corresponding second and first image frames; if this largest mean gray difference exceeds the third predetermined threshold, the two frames are determined to be different. Alternatively, for example, the two frames may be determined to be different only when R_diff, G_diff, and B_diff all exceed the third predetermined threshold.
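Formulas (1)-(3) and the maximum rule just described can be sketched directly; the function names are illustrative, pixels are (R, G, B) tuples listed in the same coordinate order for both frames, and truncating to the shorter list is a simple stand-in for the intersection handling described above.

```python
def channel_diff_means(pixels1, pixels2):
    """Mean absolute gray differences (R_diff, G_diff, B_diff).

    Implements formulas (1)-(3): sum per-channel absolute differences
    over corresponding pixels, divided by the pixel count N.
    """
    n = min(len(pixels1), len(pixels2))   # compare only shared pixels
    sums = [0, 0, 0]
    for p1, p2 in zip(pixels1[:n], pixels2[:n]):
        for c in range(3):
            sums[c] += abs(p1[c] - p2[c])
    return tuple(s / n for s in sums)

def frames_differ(pixels1, pixels2, threshold):
    """Different if the largest per-channel mean difference exceeds threshold."""
    return max(channel_diff_means(pixels1, pixels2)) > threshold
```

Requiring all three channel means to exceed the threshold, the alternative rule mentioned above, would simply replace `max` with `min` in `frames_differ`.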
The third predetermined threshold used to judge whether a corresponding first image frame and second image frame are identical can be determined as required; for example, if a relatively strict standard is adopted, the third predetermined threshold should be smaller, whereas otherwise it should be set higher. In addition, the determination of the third predetermined threshold may also take into account the gray-value range of the entire image; for example, it may be set to a certain ratio of the maximum gray value (for example 256). If the maximum average gray difference between a mutually corresponding second image frame and first image frame is greater than the third predetermined threshold, the second image frame is determined to be different from the first image frame.
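The per-channel comparison of the preceding paragraphs might be sketched as follows (hypothetical names; the threshold value is an assumption chosen for illustration, not one taken from the patent):

```python
def max_mean_channel_diff(frame1_px, frame2_px):
    """frame*_px: equal-length lists of (R, G, B) tuples for the corresponding
    pixels p1, p2 of a matched frame pair.  Returns the largest of the three
    per-channel mean absolute gray differences R_diff, G_diff, B_diff."""
    n = len(frame1_px)
    diffs = []
    for c in range(3):  # R, G, B
        diffs.append(sum(abs(a[c] - b[c]) for a, b in zip(frame1_px, frame2_px)) / n)
    return max(diffs)

# The frames are judged different when the maximum exceeds the third
# predetermined threshold, e.g. a ratio of the maximum gray value
# (an assumed ratio, for illustration only):
THIRD_THRESHOLD = 0.05 * 256
```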
Those skilled in the art will understand that the R, G and B components need not all be adopted; any of them may be chosen for use, provided that the same color components are chosen for the first image frame and the second image frame.
Those skilled in the art will understand that the HSI (hue-saturation-intensity) components of each pixel may also be used to determine whether the second image frame is identical to the first image frame.
Alternatively, for example, in the comparison step S400 the histogram distances of the second image frame and the corresponding first image frame with respect to each same color may be compared; if the maximum histogram distance is greater than a fourth predetermined threshold, the second image frame is determined to be different from the first image frame.
As described above, the first image and the second image adopt the same coordinate system; if the positions of a corresponding first image frame and second image frame in this coordinate system are not completely identical, the aforementioned manner of adjusting the first image frame and the second image frame so that their positions coincide exactly in the coordinate system may be considered.
Those skilled in the art should understand that the RGB (red-green-blue) pixel values of each pixel in the corresponding first image frame and second image frame are known gray values, so the respective histograms of the first image frame and of the second image frame with respect to each RGB color can also be known. The histogram adopted here may be any histogram, for example a histogram of pixel count with respect to gray distribution, a Local Binary Patterns (LBP) histogram, and so on.
Taking the LBP histogram as an example, the distance between the LBP histogram of the first image frame and that of the second image frame is computed separately for the R component, the G component and the B component. The histogram distance adopted may be any histogram distance, for example the Chi-Square distance, the Correlation distance, and so on. In addition, the fourth predetermined threshold used for comparison with each histogram distance may be set according to design requirements, for example the nature of the histogram, the nature of the histogram distance, and the desired degree of strictness. For example, if a relatively strict standard is adopted, the fourth predetermined threshold should be smaller, whereas otherwise it should be set higher.
Then, the maximum among the histogram distances for the RGB components is taken as the maximum histogram distance between the corresponding second image frame and first image frame; if this maximum histogram distance is greater than the fourth predetermined threshold, the second image frame is determined to be different from the first image frame. Alternatively, it may for example be stipulated that the second image frame is determined to be different from the first image frame only when the histogram distances for all of the RGB components are greater than the fourth predetermined threshold.
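Assuming per-channel histograms (for example LBP histograms for R, G and B) have already been computed, the Chi-Square distance and the maximum-distance test might look like this sketch (hypothetical function names):

```python
def chi_square_distance(h1, h2):
    """Chi-Square distance between two histograms of equal length (one of the
    distances the text names; the Correlation distance would serve as well)."""
    d = 0.0
    for a, b in zip(h1, h2):
        if a + b > 0:
            d += (a - b) ** 2 / (a + b)
    return d

def frames_differ_by_histogram(hists1, hists2, fourth_threshold):
    """hists1/hists2: per-channel histograms of the first and second frame;
    different when the maximum per-channel distance exceeds the fourth
    predetermined threshold."""
    return max(chi_square_distance(a, b)
               for a, b in zip(hists1, hists2)) > fourth_threshold
```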
Those skilled in the art will understand that the HSI (hue-saturation-intensity) components of each pixel may also be used to determine whether the second image frame is identical to the first image frame.
Those skilled in the art will recognize that the above means of comparing whether image frames are identical may each be used alone, or may be used in any combination and in any order. Where the above means yield different comparison results, the final result can be determined according to design preference: for example, under a relatively strict standard it may be stipulated that two image frames are identical only when all the means judge them identical, while under a looser standard it may be stipulated that two image frames are identical as long as one means judges them identical. Furthermore, other image parameters may also be adopted, by other means, to compare whether a corresponding first image frame and second image frame are identical.
A method of extracting image frames that can be adopted in the image processing method according to the embodiment of the invention is described below. Fig. 3 is a flowchart illustrating the image frame extraction process according to the embodiment of the invention; in conjunction with Figs. 4A to 4D, the implementation of this extraction process is explained schematically through an example.
As shown in Fig. 3, the image frame extraction process S200 may comprise: a grayscale image extraction step S210 of extracting grayscale images of the same colors from the first image and the second image respectively; a histogram extraction step S220 of extracting, for each grayscale image extracted from the first image and each grayscale image extracted from the second image, a histogram of pixel count with respect to gray distribution; a gray interval division step S230 of dividing, for each grayscale image extracted from the first image and each grayscale image extracted from the second image, the gray range of the corresponding grayscale image into gray intervals according to its histogram; a binarization step S240 of binarizing, for each grayscale image extracted from the first image and each grayscale image extracted from the second image, the corresponding grayscale image into a binary image with respect to each gray interval; and an image frame obtaining step S250 of extracting image frames in each binary image of the first image as the at least one first image frame, and extracting image frames in each binary image of the second image as the at least one second image frame.
In the grayscale image extraction step S210, any known means of dividing an image into single-color grayscale maps according to its components may be adopted. For example, the first image may be divided with respect to its RGB components into a first R image, a first G image and a first B image, and by the same component-extraction means the second image may be divided with respect to its RGB components into a second R image, a second G image and a second B image. The image frame extraction process is explained below taking the original image shown in Fig. 2A (the first image) as an example; those skilled in the art will understand that the image frame extraction process S200 can likewise be applied to the comparison image shown in Fig. 2B (the second image).
Fig. 4A is a schematic diagram of the above first G image obtained by extracting the RGB components of the first image shown in Fig. 2A. The G-component grayscale map is taken here as an example; those skilled in the art will understand that carrying out the grayscale image extraction step S210 can similarly yield the R-component grayscale map and the B-component grayscale map.
In the histogram extraction step S220, known means can be used to extract the histograms of pixel count with respect to gray distribution for each of the first R image, the first G image and the first B image, and the second R image, the second G image and the second B image. The gray range of each grayscale image may be 0 to 255, or for example 0 to 31, 0 to 1023, and so on; the gray ranges of all the grayscale images can be made identical.
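A minimal sketch of the histogram of step S220, for one single-channel image held as a flat list of gray values (hypothetical name; the shared gray range is a parameter, since the text allows 0-255, 0-31, 0-1023 and so on):

```python
def gray_histogram(gray_image, levels=256):
    """Pixel-count-versus-gray-level histogram of a single-channel image,
    given as a flat sequence of gray values in the range 0..levels-1.
    The same `levels` should be used for every grayscale image compared."""
    hist = [0] * levels
    for g in gray_image:
        hist[g] += 1
    return hist
```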
Fig. 4B shows the extracted histogram of pixel count with respect to gray distribution of the first G image shown in Fig. 4A. The histogram of the G-component grayscale map is taken here as an example; those skilled in the art will understand that carrying out the histogram extraction step S220 can similarly yield the histograms of the R-component and B-component grayscale maps.
In the gray interval division step S230, for the respective histograms of the first R image, the first G image and the first B image, and of the second R image, the second G image and the second B image, the gray range of each grayscale image is divided into at least one gray interval, so as to separate, as far as possible, picture content such as patterns and text in the grayscale image from its background into different gray intervals. The same means and standards may be adopted in dividing the gray range of each grayscale image.
The processing of dividing the gray range of each grayscale image into at least one gray interval in the gray interval division step S230 is explained below. In this division processing, a predetermined number of maxima with the largest pixel counts are taken in the histogram, and the gray range of the corresponding grayscale image is divided into gray intervals bounded by the endpoints of the range and by the local minimum point or zero point nearest to each maximum in a predetermined direction.
For a grayscale image, in the histogram of the distribution of its pixels with respect to gray value, assume that the horizontal axis represents the gray value and the vertical axis represents the number of pixels having the corresponding gray value. Those skilled in the art will appreciate that the above is the typical form of such a histogram, but the implementation of the embodiment of the invention does not depend on this form; the invention can still be implemented if the meanings of the horizontal and vertical axes are exchanged, and any histogram that embodies the distribution of pixels with respect to gray value can realize the embodiment of the invention.
In a histogram of the above specific form, for example, the maximum points of a predetermined number with the largest pixel counts may be taken, for example the maximum points whose pixel counts rank in the top M positions, M being a natural number, for example 2, 3, 5 and so on. Then, starting from these M maximum points, in the direction of, for example, increasing gray value, the nearest local minimum point or point of zero pixel count is sought for each as a dividing point, whereby the gray-value range from 0 to the maximum value can be divided by these dividing points into a plurality of gray intervals. Those skilled in the art will understand that after the M maximum points are determined, the nearest local minimum point or point of zero pixel count may instead be sought in, for example, the direction of decreasing gray value as the dividing point.
For example, for the histogram of pixel count with respect to gray distribution of the first G image shown in Fig. 4B, the maximum points whose pixel counts rank in the top three, namely points Q1, Q2 and Q3 shown in Fig. 4B, are taken; then, in the direction of increasing gray value (the left-to-right direction in Fig. 4B), the nearest local minimum point or point of zero pixel count is sought for each as a dividing point. For maximum point Q1, the nearest minimum point Q1' on its right serves as a dividing point; for maximum point Q2, the nearest minimum point Q2' on its right is a dividing point; for maximum point Q3, since its corresponding gray value is the maximum value 255, no dividing point is sought on its right, and it can itself be regarded as a dividing point. Thus, by the lowest gray value 0, the gray value 48 corresponding to minimum point Q1', the gray value 195 corresponding to minimum point Q2' and the highest gray value 255, the gray range of the first G image shown in Fig. 4B is divided into three gray intervals [0, 48], [49, 195] and [196, 255].
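The peak-and-minimum division just illustrated might be sketched as follows (a simplification with hypothetical names: plateaus and ties are handled naively, and only the rightward search direction of the example is implemented):

```python
def divide_gray_range(hist, m=3):
    """Take the M histogram maxima with the largest pixel counts, walk right
    from each to the nearest local minimum or zero point (or the range
    endpoint), and use those points as dividers between gray intervals,
    returned as inclusive (low, high) pairs."""
    n = len(hist)
    # candidate local maxima, endpoints included
    maxima = [i for i in range(n)
              if (i == 0 or hist[i] >= hist[i - 1])
              and (i == n - 1 or hist[i] > hist[i + 1])]
    peaks = sorted(sorted(maxima, key=lambda i: hist[i], reverse=True)[:m])
    cuts = set()
    for p in peaks:
        i = p
        while i + 1 < n and hist[i + 1] < hist[i]:
            i += 1  # descend to the nearest right-hand minimum / zero point
        cuts.add(i)
    bounds = sorted(cuts | {n - 1})
    intervals, lo = [], 0
    for b in bounds:
        intervals.append((lo, b))
        lo = b + 1
    return intervals
```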
The processing of dividing the gray range of a grayscale image into gray intervals is not limited to the above. For example, a count threshold may be determined, which may be a certain ratio of the total number of image pixels; the maximum points whose pixel counts exceed this count threshold are identified in the histogram, and dividing points are then sought according to the maximum points so determined. Alternatively, runs of consecutive gray values with zero pixel count may be sought in the histogram; the M longest such runs, or the runs longer than a certain threshold, may be determined, and any point within each determined run may serve as a dividing point, dividing the gray-value range from 0 to the maximum value into a plurality of gray intervals. Those skilled in the art can conceive of other manners of dividing gray intervals.
In the binarization step S240, for each of the gray intervals of the first R image, the first G image and the first B image, and of the second R image, the second G image and the second B image, the corresponding grayscale image is binarized according to each gray interval. For example, for one of the above grayscale images and a certain gray interval divided within its gray range, the pixels of that grayscale image whose gray values lie within the gray interval may be set black and the remaining pixels set white; processing each gray interval divided within the gray range of that grayscale image in this way yields, from that grayscale image, a binary image corresponding to each of its gray intervals. Those skilled in the art will appreciate that the manner of binarizing an image is not limited to the above; for example, the pixels whose gray values lie within the gray interval may instead be set white and the remaining pixels set black, or any other manner that distinguishes in binary the pixels inside and outside the gray interval may be used.
For example, for the gray interval [0, 48] obtained by dividing the gray range of the first G image shown in Fig. 4A, the pixels whose gray values lie within the gray interval [0, 48] are set to 1 (displayed as black) and the pixels of the remaining gray values are all set to 0 (displayed as white), so that the binary image corresponding to the gray interval [0, 48] is obtained from the first G image, as shown in Fig. 4C. Those skilled in the art will understand that the binary images corresponding to the gray intervals [49, 195] and [196, 255] can be extracted from the first G image in the same way.
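The per-interval binarization just described reduces to a one-line test per pixel; a sketch (hypothetical name, flat-list image representation):

```python
def binarize_by_interval(gray_image, interval):
    """Step S240 sketch: pixels whose gray value lies inside the inclusive
    interval (lo, hi) become 1 ("black"), all others 0 ("white"); the
    opposite convention, as the text notes, would serve equally well."""
    lo, hi = interval
    return [1 if lo <= g <= hi else 0 for g in gray_image]
```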
In the image frame obtaining step S250, for each binary image corresponding to each gray interval of the first R image, the first G image and the first B image, and of the second R image, the second G image and the second B image, any known technique of extracting the edge contours of picture content such as patterns or text, and thereby obtaining rectangular frames enclosing the picture content, can be used to extract rectangular frames in each of the above binary images as image frames. Extracting edge contours and determining rectangular frames from them can be realized by connected-domain methods, such as the FindContours function of the known image software package OpenCV, or by calling the BlobLibrary library. Since each image is binarized, the extraction process is easy and accurate. The first R image, the first G image and the first B image are derived from the first image, and all image frames extracted from them are first image frames; the second R image, the second G image and the second B image are derived from the second image, and all image frames extracted from them are second image frames.
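In place of OpenCV's findContours or a blob library, a stand-in connected-component pass conveys the idea (hypothetical names; 4-connected BFS labeling over a flat binary image, returning each component's bounding rectangle):

```python
from collections import deque

def extract_frames(binary, width, height):
    """Rectangle frames from a binary image (flat list, 1 = content pixel).
    Labels 4-connected components by breadth-first search and returns each
    component's bounding rectangle (left, top, right, bottom)."""
    seen = [False] * (width * height)
    frames = []
    for start in range(width * height):
        if binary[start] == 0 or seen[start]:
            continue
        q = deque([start])
        seen[start] = True
        l = r = start % width
        t = b = start // width
        while q:
            p = q.popleft()
            x, y = p % width, p // width
            l, r, t, b = min(l, x), max(r, x), min(t, y), max(b, y)
            for nx, ny in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)):
                if 0 <= nx < width and 0 <= ny < height:
                    idx = ny * width + nx
                    if binary[idx] == 1 and not seen[idx]:
                        seen[idx] = True
                        q.append(idx)
        frames.append((l, t, r, b))
    return frames
```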
Fig. 4D is a schematic diagram of the image frames obtained by carrying out image frame obtaining on the binary image corresponding to the gray interval [0, 48] shown in Fig. 4C; the rectangular frames enclosing each piece of text content in the image of Fig. 4D are the extracted image frames, here a plurality of first image frames. Those skilled in the art will understand that image frames can be extracted in the same way from the binary images corresponding to the gray intervals [49, 195] and [196, 255].
Those skilled in the art will understand that the first image and the second image may have the same coordinate system, and the position coordinates of all image frames in this coordinate system can be known and recorded. In that case, each extracted image frame has a definite position in the coordinate system and can enter subsequent processing in its state in the corresponding binary image, that is, the image frame comprises the content within its position range in the corresponding binary image. It can also enter subsequent processing in its state in the corresponding single-color grayscale image, that is, the image frame comprises the content within its position range in the corresponding grayscale image. It can likewise enter subsequent processing in its state in the corresponding first or second image, that is, if the image frame is a first image frame it comprises the content within its position range in the first image, and if it is a second image frame it comprises the content within its position range in the second image. In these cases, image frames extracted from the grayscale maps of different color components may overlap; if the positions of image frames extracted from the grayscale maps of different color components coincide completely, information about the position of only one of them may be recorded; and if the first or second image is a color picture, the image frames may comprise colored content.
Those skilled in the art will further appreciate that the R, G and B components need not all be adopted; any of them may be chosen for use, provided that the same color components are chosen for the first image and the second image, so as to ensure the comparability of the first image frames and the second image frames.
Regardless of the means by which the gray intervals are obtained, as an improvement of the embodiment of the invention, an attempt may further be made to subdivide a gray interval into sub-intervals, and the plurality of sub-intervals obtained by subdivision may then replace the original gray interval as new gray intervals for the subsequent processing. For example, the image processing method according to the embodiment of the invention may further comprise a gray interval subdivision step in which, for a gray interval divided in the gray interval division step, the pixels of the corresponding grayscale image whose gray values lie within that gray interval are traversed in turn, the gray values of adjacent pixels whose gray difference is less than a fifth predetermined threshold are placed in the same sub-interval, and sub-intervals having overlapping portions are merged, yielding one or more subdivided gray intervals. This gray interval subdivision step can be carried out after the gray interval division step S230 and before the binarization step S240, to improve the effect of the binarization step S240.
In the above gray interval subdivision step, each gray interval is processed separately. For one gray interval, among the pixels of the corresponding grayscale image whose gray values lie within that interval, any pixel may be chosen as a starting point, and its gray value is placed in a sub-interval. For each sub-interval, its gray range runs from the gray value of its pixel with the lowest gray value to that of its pixel with the highest gray value. Starting from the starting point, the search proceeds in a certain direction order, such as row by row and left to right within a row, column by column and top to bottom within a column, or outward in four or eight directions (up, down, left, right), and the comparison of adjacent pixels is carried out as follows. For example, it is judged whether the gray difference between a pixel Sig of the i-th sub-interval RANGEi (i being the index of the sub-interval, a natural number; g being the index of the pixel within RANGEi, a natural number) and a certain adjacent pixel Sx to be judged is less than the fifth predetermined threshold. If the gray difference between these two adjacent pixels is less than the fifth predetermined threshold, the gray value of the pixel Sx is placed in the sub-interval RANGEi, the pixel Sx can be labeled Sih (h being the index of the pixel within RANGEi, a natural number), and the above search and comparison continue in the original direction from this pixel Sih. If the gray difference between these two adjacent pixels is greater than or equal to the fifth predetermined threshold, a new j-th sub-interval RANGEj (j being the index of the sub-interval, a natural number) is created for the gray value of the pixel Sx, the pixel Sx can be labeled Sjg (g being the index of the pixel within RANGEj, a natural number), the search process from pixel Sig ends in this direction, and the search and comparison process begins in this direction from pixel Sjg.
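The walk just described can be illustrated in one dimension (a deliberate simplification with hypothetical names: `grays` stands for the gray values met in scan order among the pixels of one gray interval):

```python
def subdivide_interval(grays, fifth_threshold):
    """Sketch of the subdivision walk for one gray interval.  Adjacent pixels
    whose gray difference is below the fifth predetermined threshold stay in
    the same sub-interval; a jump of at least the threshold opens a new one.
    Each sub-interval is returned as its gray range (min_gray, max_gray)."""
    subs = []
    cur = [grays[0], grays[0]]
    prev = grays[0]
    for g in grays[1:]:
        if abs(g - prev) < fifth_threshold:
            cur[0], cur[1] = min(cur[0], g), max(cur[1], g)
        else:
            subs.append(tuple(cur))
            cur = [g, g]
        prev = g
    subs.append(tuple(cur))
    return subs
```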
The fifth predetermined threshold can be set according to design requirements. For example, if a relatively strict standard is adopted, the fifth predetermined threshold should be smaller, whereas otherwise it should be set larger. The fifth predetermined threshold may also, for example, be set to a certain ratio of the total gray range of the grayscale image, or to a certain ratio of the gray range of the corresponding gray interval.
In addition, those skilled in the art will recognize that, for one gray interval, among the pixels of the corresponding grayscale image whose gray values lie within that interval, a plurality of pixels may be chosen arbitrarily as starting points, and the above search and comparison carried out from each of them. The above search and comparison process is complete once all the pixels of the grayscale image whose gray values lie within the gray interval have been traversed; in that case, optionally, the search and comparison processes begun from the plurality of starting points each address different pixels, so as to avoid repetition.
After all the pixels of the grayscale image whose gray values lie within the gray interval have been traversed, a plurality of sub-intervals may be obtained. Sub-intervals whose gray ranges overlap can be merged, each result serving as a subdivided gray interval; the at least one sub-interval formed after merging replaces the original gray interval for the subsequent processing.
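Merging overlapping sub-intervals is standard interval merging; a sketch (hypothetical name; sub-intervals that share an endpoint are also merged here, an assumption the patent does not spell out):

```python
def merge_overlapping(intervals):
    """Merge sub-intervals whose gray ranges overlap (or share an endpoint),
    as the subdivision step requires before the merged intervals replace the
    original gray interval."""
    merged = []
    for lo, hi in sorted(intervals):
        if merged and lo <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], hi))
        else:
            merged.append((lo, hi))
    return merged
```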
For example, of the three gray intervals [0, 48], [49, 195] and [196, 255] divided from the gray range of the first G image shown in Fig. 4B, the gray interval [49, 195], with the fifth predetermined threshold at 15, can be further subdivided by the above gray interval subdivision process into two gray intervals [49, 150] and [151, 195]. Replacing the original gray interval [49, 195] with the two gray intervals [49, 150] and [151, 195] for the binarization step S240 allows the gray values of the text content and of the background in the image to be divided more accurately into different gray intervals, which helps to obtain the binary images more accurately and in turn to extract the image frames more accurately.
In the above explanation of the embodiment of the invention, the differences from the first image are sought in the second image. In fact, if the first image is the original image and the second image is the comparison image, then according to the embodiment of the invention, on the one hand the differences from the original image can be sought in the comparison image; on the other hand, after the first image and the second image are exchanged, that is, the original image is input as the second image and the comparison image is input as the first image, the changes of the comparison image relative to the original image can be marked on the original image side. For example, after the embodiment of the invention is carried out with the image shown in Fig. 2A as the second image and the image shown in Fig. 2B as the first image, the result shown in Fig. 5 can be output. Fig. 5 is a schematic diagram of the situation in which the differences from the comparison image are marked on the original image, the positions indicated by the rectangular frames being the differences of the original image relative to the comparison image.
The present invention can also be implemented by an image processing system. Fig. 6 is an overall block diagram of the image processing system 1000 according to the embodiment of the invention. As shown in Fig. 6, the image processing system 1000 may comprise: an input device 1100 for inputting the images to be compared from outside, which may comprise, for example, a keyboard, a mouse, a scanner, and remote input devices connected via a communication network; a processing device 1200 for carrying out the above image processing method according to the embodiment of the invention, which may comprise, for example, the central processing unit of a computer; an output device 1300 for outputting to the outside the results obtained by carrying out the above image processing method, which may comprise, for example, a display, a printer, and remote output devices connected via a communication network; and a storage device 1400 for storing, in a volatile or non-volatile manner, the images to be compared, the results obtained by carrying out the above image processing method, commands, intermediate data and the like, which may comprise, for example, various volatile or non-volatile memories such as random access memory (RAM), read-only memory (ROM), a hard disk, or semiconductor memory.
The present invention can also be embodied as an image processing apparatus. Fig. 7 is an overall block diagram of the image processing apparatus 2000 according to the embodiment of the invention. As shown in Fig. 7, the image processing apparatus 2000 may comprise: an input means 2100 usable for carrying out the above input step S100, for inputting a first image and a second image of the same size; an image frame extraction means 2200 usable for carrying out the above image frame extraction step S200, for extracting, in the same manner, at least one first image frame from the first image and at least one second image frame from the second image; an image frame matching means 2300 usable for carrying out the above image frame matching step S300, for seeking in the first image, for a second image frame, the first image frame closest in position, as the first image frame corresponding to that second image frame; a comparison means 2400 usable for carrying out the above comparison step S400, for comparing the second image frame with the corresponding first image frame to determine whether the second image frame and the corresponding first image frame are identical or different; a marking means 2500 usable for carrying out the above marking step S500, for marking, if the second image frame is determined to be different from the corresponding first image frame, the position of the second image frame in the second image, and appending the mark to the second image; and an output means 2600 usable for carrying out the above output step S600, for outputting the second image.
In the above image processing apparatus 2000, if there is a first image frame having no corresponding second image frame, the marking means 2500 can also mark the position in the second image corresponding to that first image frame and append the mark to the second image.
The above image processing apparatus 2000 may further comprise an image frame segmentation means usable for carrying out the above image frame segmentation step, for further dividing, if the measure of a first image frame or second image frame is greater than a first predetermined threshold, that first image frame or second image frame into a plurality of first image frames or a plurality of second image frames by a fixed grid with the same alignment.
In the above image processing apparatus 2000, the comparison means 2400 can compare the positional relationship of the second image frame and the corresponding first image frame; if the distance between the second image frame and the first image frame is greater than a second predetermined threshold, the second image frame is determined to be different from the first image frame.
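The text does not define which distance the comparison means 2400 uses; as one plausible reading (an assumption, not the patent's definition), the Euclidean distance between frame centers could be tested against the second predetermined threshold:

```python
def frames_differ_by_position(frame1, frame2, second_threshold):
    """Frames are (left, top, right, bottom).  The distance here is the
    Euclidean distance between frame centers, one possible interpretation
    of the positional comparison; other distances would fit the text too."""
    cx1, cy1 = (frame1[0] + frame1[2]) / 2, (frame1[1] + frame1[3]) / 2
    cx2, cy2 = (frame2[0] + frame2[2]) / 2, (frame2[1] + frame2[3]) / 2
    return ((cx1 - cx2) ** 2 + (cy1 - cy2) ** 2) ** 0.5 > second_threshold
```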
In the above image processing apparatus 2000, the comparison means 2400 can compare the average gray differences of the second image frame and the corresponding first image frame with respect to each same color; if the maximum average gray difference is greater than the third predetermined threshold, the second image frame is determined to be different from the first image frame.
In the above image processing apparatus 2000, the comparison means 2400 can compare the histogram distances of the second image frame and the corresponding first image frame with respect to each same color; if the maximum histogram distance is greater than the fourth predetermined threshold, the second image frame is determined to be different from the first image frame.
In the above image processing apparatus 2000, the image frame extraction means 2200 can be used for carrying out the above image frame extraction step S200 and may comprise: a grayscale image extraction means usable for carrying out the above grayscale image extraction step S210, for extracting grayscale images of the same colors from the first image and the second image respectively; a histogram extraction means usable for carrying out the above histogram extraction step S220, for extracting, for each grayscale image extracted from the first image and each grayscale image extracted from the second image, a histogram of pixel count with respect to gray distribution; a gray interval division means usable for carrying out the above gray interval division step S230, for dividing, for each grayscale image extracted from the first image and each grayscale image extracted from the second image, the gray range of the corresponding grayscale image into gray intervals according to its histogram; a binarization means usable for carrying out the above binarization step S240, for binarizing, for each grayscale image extracted from the first image and each grayscale image extracted from the second image, the corresponding grayscale image into a binary image with respect to each gray interval; and an image frame obtaining means usable for carrying out the above image frame obtaining step S250, for extracting image frames in each binary image of the first image as the at least one first image frame, and extracting image frames in each binary image of the second image as the at least one second image frame.
In the image processing device 2000 described above, the gray-interval division means may take, in the histogram, the maxima with the largest pixel counts, up to a predetermined number, and, with the endpoints on both sides and the minimum or zero-value point nearest to each maximum in a predetermined direction as boundaries, divide the gray levels of the corresponding grayscale image into gray intervals.
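A closer, but still hypothetical, sketch of this maxima-based division rule follows. It keeps the largest local maxima and bounds each by the nearest zero or descending-minimum bin on each side; plateau handling and the "predetermined direction" refinement are omitted, and the names are invented:

```python
import numpy as np

def divide_gray_intervals(hist, max_peaks=3):
    """Divide the gray range into intervals around the largest
    histogram maxima; each interval runs from the nearest zero or
    minimum bin left of the peak to the nearest one on its right
    (the ends of the range also count as boundaries)."""
    # Local maxima: bins strictly higher than both neighbours.
    maxima = [i for i in range(1, len(hist) - 1)
              if hist[i] > hist[i - 1] and hist[i] > hist[i + 1]]
    maxima.sort(key=lambda i: hist[i], reverse=True)
    intervals = []
    for peak in maxima[:max_peaks]:        # keep the predetermined number of peaks
        left = peak
        while left > 0 and hist[left - 1] != 0 and hist[left - 1] <= hist[left]:
            left -= 1                      # walk down to the nearest zero/minimum
        right = peak
        while right < len(hist) - 1 and hist[right + 1] != 0 and hist[right + 1] <= hist[right]:
            right += 1
        intervals.append((left, right + 1))
    return sorted(intervals)
```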
The image processing device 2000 described above may further comprise a gray-interval fine-division means, usable to carry out the gray-interval fine-division step described above: for each gray interval divided in said gray-interval division step, it traverses in turn the pixels of the corresponding grayscale image whose gray levels lie within that gray interval, puts gray levels whose difference between adjacent pixels is less than a fifth predetermined threshold into the same sub-gray interval, and merges sub-gray intervals that overlap, to obtain one or more subdivided gray intervals.
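A minimal sketch of the fine-division idea, under the simplifying assumption that the traversal reduces to grouping the sorted gray values found in the interval (the pixel-adjacency traversal is abstracted away, and the names and threshold value are hypothetical):

```python
def merge_overlapping(intervals):
    """Merge sub-gray intervals that overlap, per the fine-division step."""
    merged = []
    for lo, hi in sorted(intervals):
        if merged and lo <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], hi))
        else:
            merged.append((lo, hi))
    return merged

def subdivide(gray_values, fifth_threshold=5):
    """Simplified fine division: put gray values whose gap to the
    previous value is below the threshold into the same sub-gray
    interval, then merge overlapping sub-intervals."""
    values = sorted(set(gray_values))
    subs, start = [], values[0]
    for prev, cur in zip(values, values[1:]):
        if cur - prev >= fifth_threshold:   # gap too large: close this sub-interval
            subs.append((start, prev))
            start = cur
    subs.append((start, values[-1]))
    return merge_overlapping(subs)
```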
The series of operations described in the specification can be carried out by hardware, by software, or by a combination of hardware and software. When the series of operations is carried out by software, the computer program may be installed into memory in a computer built into dedicated hardware, and the computer caused to execute the program. Alternatively, the computer program may be installed into a general-purpose computer capable of carrying out various types of processing, and that computer caused to execute the program.
For example, the computer program may be stored in advance on a hard disk or in a ROM (read-only memory) serving as a recording medium. Alternatively, the computer program may be stored (recorded) temporarily or permanently on a removable recording medium such as a floppy disk, a CD-ROM (compact disc read-only memory), an MO (magneto-optical) disc, a DVD (digital versatile disc), a magnetic disk, or a semiconductor memory. Such a removable recording medium can be provided as packaged software.
The present invention has been described in detail with reference to specific embodiments. It is obvious, however, that those skilled in the art may modify or substitute the embodiments without departing from the spirit of the present invention. In other words, the present invention has been disclosed by way of illustration and should not be construed restrictively. The appended claims should be considered in determining the gist of the present invention.

Claims (10)

1. An image processing method, comprising:
an input step of inputting a first image and a second image of the same size;
an image frame extraction step of extracting, in the same manner, at least one first image frame from the first image and at least one second image frame from the second image;
an image frame pairing step of finding, for said second image frame, the first image frame closest in position within the first image, as the first image frame corresponding to this second image frame;
a comparison step of comparing the second image frame with the corresponding first image frame to determine whether this second image frame and the corresponding first image frame are the same or different;
a marking step of, if this second image frame is determined to differ from the corresponding first image frame, making a mark at the position of this second image frame in the second image and appending said mark to the second image; and
an output step of outputting the second image.
2. The image processing method according to claim 1, wherein,
if there is a first image frame that has no corresponding second image frame, then in said marking step a mark is also made at the position in the second image corresponding to this first image frame, and said mark is appended to the second image.
3. The image processing method according to claim 1, further comprising
an image frame segmentation step of, if the size of said first image frame or second image frame is greater than a first predetermined threshold, further dividing said first image frame or second image frame into a plurality of first image frames or a plurality of second image frames using a fixed grid aligned in the same manner.
4. The image processing method according to any one of claims 1-3, wherein,
in said comparison step, the positional relation between said second image frame and the corresponding first image frame is compared; if the distance between this second image frame and this first image frame is greater than a second predetermined threshold, it is determined that this second image frame differs from this first image frame.
5. The image processing method according to any one of claims 1-3, wherein,
in said comparison step, the mean gray-level difference between said second image frame and the corresponding first image frame is compared for each same hue; if the maximum mean gray-level difference is greater than a third predetermined threshold, it is determined that this second image frame differs from this first image frame.
6. The image processing method according to any one of claims 1-3, wherein,
in said comparison step, the histogram distance between said second image frame and the corresponding first image frame is compared for each same hue; if the maximum histogram distance is greater than a fourth predetermined threshold, it is determined that this second image frame differs from this first image frame.
7. The image processing method according to claim 1, wherein said image frame extraction step comprises:
a grayscale image extraction step of extracting a grayscale image of each same hue from the first image and from the second image respectively;
a histogram extraction step of extracting, for each grayscale image extracted from the first image and each grayscale image extracted from the second image, a histogram of the pixel count over the gray-level distribution;
a gray-interval division step of dividing, for each grayscale image extracted from the first image and each grayscale image extracted from the second image, the gray levels of the corresponding grayscale image into gray intervals according to its histogram;
a binarization step of binarizing, for each gray interval, the corresponding grayscale image extracted from the first image or the second image into a binary image; and
an image frame acquisition step of extracting image frames from each binary image of the first image, as said at least one first image frame, and extracting image frames from each binary image of the second image, as said at least one second image frame.
8. The image processing method according to claim 7, wherein, in said gray-interval division step, the maxima with the largest pixel counts, up to a predetermined number, are taken in the histogram, and, with the endpoints on both sides and the minimum or zero-value point nearest to each maximum in a predetermined direction as boundaries, the gray levels of the corresponding grayscale image are divided into gray intervals.
9. The image processing method according to claim 7, further comprising
a gray-interval fine-division step of, for each gray interval divided in said gray-interval division step, traversing in turn the pixels of the corresponding grayscale image whose gray levels lie within that gray interval, putting gray levels whose difference between adjacent pixels is less than a fifth predetermined threshold into the same sub-gray interval, and merging sub-gray intervals that overlap, to obtain one or more subdivided gray intervals.
10. An image processing device, comprising:
an input means for inputting a first image and a second image of the same size;
an image frame extraction means for extracting, in the same manner, at least one first image frame from the first image and at least one second image frame from the second image;
an image frame pairing means for finding, for said second image frame, the first image frame closest in position within the first image, as the first image frame corresponding to this second image frame;
a comparison means for comparing the second image frame with the corresponding first image frame to determine whether this second image frame and the corresponding first image frame are the same or different;
a marking means for, if this second image frame is determined to differ from the corresponding first image frame, making a mark at the position of this second image frame in the second image and appending said mark to the second image; and
an output means for outputting the second image.
CN201110064527.7A 2011-03-17 2011-03-17 Imaging processing method and device Expired - Fee Related CN102682308B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201110064527.7A CN102682308B (en) 2011-03-17 2011-03-17 Imaging processing method and device

Publications (2)

Publication Number Publication Date
CN102682308A true CN102682308A (en) 2012-09-19
CN102682308B CN102682308B (en) 2014-05-28

Family

ID=46814203

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201110064527.7A Expired - Fee Related CN102682308B (en) 2011-03-17 2011-03-17 Imaging processing method and device

Country Status (1)

Country Link
CN (1) CN102682308B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH1015150A (en) * 1996-06-28 1998-01-20 Sanyo Electric Co Ltd Recognition device for pieces on japanese chess board
CN101763429A (en) * 2010-01-14 2010-06-30 中山大学 Image retrieval method based on color and shape features

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107123029A (en) * 2017-04-28 2017-09-01 深圳前海弘稼科技有限公司 The method and system of fruits and vegetables is specified in purchase
CN109766837A (en) * 2019-01-11 2019-05-17 广州人资选互联网科技有限公司 A kind of employee information input system
CN111177470A (en) * 2019-12-30 2020-05-19 深圳Tcl新技术有限公司 Video processing method, video searching method and terminal equipment
CN111177470B (en) * 2019-12-30 2024-04-30 深圳Tcl新技术有限公司 Video processing method, video searching method and terminal equipment

Also Published As

Publication number Publication date
CN102682308B (en) 2014-05-28

Similar Documents

Publication Publication Date Title
CN104112128B (en) Digital image processing system and method applied to bill image character recognition
CN110766014B (en) Bill information positioning method, system and computer readable storage medium
US6778703B1 (en) Form recognition using reference areas
EP2897082B1 (en) Methods and systems for improved license plate signature matching
US7627148B2 (en) Image data processing apparatus and method, and image data processing program
US9158986B2 (en) Character segmentation device and character segmentation method
WO2016127545A1 (en) Character segmentation and recognition method
US6771813B1 (en) Image processing apparatus and pattern extraction apparatus
CN103034848B (en) A kind of recognition methods of form types
CN108596166A (en) A kind of container number identification method based on convolutional neural networks classification
US20070253040A1 (en) Color scanning to enhance bitonal image
CN103460222A (en) Text string cut-out method and text string cut-out device
JP2003506767A (en) Apparatus and method for matching scanned images
US20080069398A1 (en) Code image processing method
CN102750530B (en) Character recognition method and device
US7564587B2 (en) Method of scoring a printed form having targets to be marked
US7680357B2 (en) Method and apparatus for detecting positions of center points of circular patterns
CN110598566A (en) Image processing method, device, terminal and computer readable storage medium
JP2013084071A (en) Form recognition method and form recognition device
CN113158895A (en) Bill identification method and device, electronic equipment and storage medium
CN113903024A (en) Handwritten bill numerical value information identification method, system, medium and device
JP4275866B2 (en) Apparatus and method for extracting character string pattern from color image
CN102682308B (en) Imaging processing method and device
JP5929282B2 (en) Image processing apparatus and image processing program
EP0651337A1 (en) Object recognizing method, its apparatus, and image processing method and its apparatus

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20140528

Termination date: 20170317