CN102682308B - Imaging processing method and device - Google Patents

Imaging processing method and device

Publication number: CN102682308B
Authority: CN (China)
Legal status: Expired - Fee Related
Application number: CN201110064527.7A
Other languages: Chinese (zh)
Other versions: CN102682308A
Inventors: 游赣梅, 杜成, 长谷川史裕, 郑继川, 赵立军
Current Assignee: Ricoh Co Ltd
Original Assignee: Ricoh Co Ltd
Application filed by Ricoh Co Ltd
Priority: CN201110064527.7A
Publication of CN102682308A; application granted; publication of CN102682308B

Abstract

The invention provides an image processing method comprising the following steps: an input step of inputting a first image and a second image of the same size; an image frame extraction step of extracting, in the same manner, at least one first image frame from the first image and at least one second image frame from the second image; an image frame matching step of finding, for each second image frame, the closest first image frame in the first image as the first image frame corresponding to that second image frame; a comparison step of comparing each second image frame with its corresponding first image frame to determine whether the two are the same or different; a marking step of, if a second image frame differs from its corresponding first image frame, marking the position of that second image frame in the second image and adding the mark to the second image; and an output step of outputting the second image. The invention also provides a corresponding image processing device.

Description

Image processing method and image processing equipment
Technical field
The present invention relates to an image processing method and an image processing device, and more particularly, to an image processing method and an image processing device for image comparison.
Background technology
With the development of computer image technology, demand for image quality inspection and digital document analysis is steadily growing. In particular, there is an urgent need for techniques that compare two images and judge whether they are identical. In practice, after an original image undergoes a series of processes such as printing and scanning, or after remote printing or remote scanning of the original image, it is often necessary to judge whether the resulting image is identical to the original, that is, whether it has been altered; ideally, if the image is judged to have been altered, the altered positions can also be located in the resulting image. Achieving this goal is of great significance for document security.
U.S. patent document US 7190470 B2 (System and method for automatic document verification, HEWLETT PACKARD DEVELOPMENT COMPANY, L.P.) discloses a document print verification system that compares an original document with a scanned copy of its printout. Specifically, for text documents, the images are converted to text by OCR (Optical Character Recognition), and the texts are then compared using text comparison techniques. Because this patent relies on OCR, it can only process images whose substantive content is text, or only the textual parts of an image, so the range of image types it can handle is limited; moreover, its implementation requires a dictionary, and character-recognition errors lead to text-comparison errors, making the process time-consuming and error-prone.
U.S. patent document US 2006/0126106 A1 (System and method for remote proof printing and verification, XEROX CORP.) provides a remote print verification method applied before submission for user review, in which the verification process judges whether the original image and the scanned contrast image are identical by comparing their histograms. However, because this technique compares the histograms of the entire images, its accuracy is poor; and even when the contrast image is judged to have changed relative to the original image, the changed parts of the contrast image cannot be located.
U.S. patent document US 7076086 B2 (Image inspection device, FUJI XEROX CO LTD) provides a device for inspecting an output image, which judges whether the output image has changed relative to the original image by comparing image parameters such as resolution and brightness. However, because this technique compares parameters of the entire images, its accuracy is poor; and even when the output image is judged to have changed relative to the original image, the changed parts of the output image cannot be located.
Summary of the invention
The present invention has been made in view of the above problems in the prior art, in order to solve those problems. The present invention proposes an image processing method and an image processing device that extract image frames from two images and compare the images on the basis of those frames, so as to judge whether the two images are identical and to locate the parts of one image that have changed relative to the other.
According to one embodiment of present invention, provide a kind of image processing method, comprising: input step, input same size the first image and the second image; Frames images extraction step in an identical manner, extracts at least one first frames images from the first image, extracts at least one second frames images from the second image; Frames images pairing step for described the second frames images, is found immediate the first frames images in position, as first frames images corresponding with this second frames images in the first image; Comparison step, relatively the second frames images and the first corresponding frames images, identical or different to determine this second frames images and corresponding the first frames images; Markers step, if determine that this second frames images first frames images corresponding from this is different, makes a mark to the position of this second frames images in the second image, and above-mentioned mark is appended to the second image; And output step, output the second image.
According to another embodiment of the present invention, an image processing device is provided, comprising: an input unit for inputting a first image and a second image of the same size; an image frame extraction unit for extracting, in the same manner, at least one first image frame from the first image and at least one second image frame from the second image; an image frame pairing unit for finding, for each second image frame, the first image frame closest in position in the first image as the first image frame corresponding to that second image frame; a comparison unit for comparing each second image frame with its corresponding first image frame to determine whether the two are the same or different; a marking unit for, if a second image frame is determined to differ from its corresponding first image frame, marking the position of that second image frame in the second image and appending the mark to the second image; and an output unit for outputting the second image.
The implementation of the image processing method and image processing device according to embodiments of the present invention requires no dictionary and is not limited to comparing images whose substantive content is text; it can automatically and accurately judge whether two images are identical, and can automatically locate the differences between them.
The above and other objects, features, advantages, and technical and industrial significance of the present invention will be better understood by reading the following detailed description of preferred embodiments of the present invention in conjunction with the accompanying drawings.
Brief description of the drawings
Fig. 1 is an overview flowchart of an image processing method according to an embodiment of the present invention.
Fig. 2A exemplarily shows a first image to which an embodiment of the present invention is applied.
Fig. 2B exemplarily shows a second image to which an embodiment of the present invention is applied.
Fig. 3 is a flowchart of an image frame extraction process according to an embodiment of the present invention.
Figs. 4A to 4D schematically illustrate the image frame extraction process according to an embodiment of the present invention by example, wherein Fig. 4A shows the gray-scale map of one color component extracted from the first image shown in Fig. 2A; Fig. 4B shows the histogram of pixel counts over gray levels for the single-color gray-scale map shown in Fig. 4A; Fig. 4C shows the binary image, corresponding to one gray-level interval, obtained from the single-color gray-scale map shown in Fig. 4A; and Fig. 4D shows image frame extraction performed on the binary image shown in Fig. 4C and each of the image frames thereby obtained.
Fig. 5 is a schematic diagram illustrating the marking, on the original image, of its differences from the contrast image.
Fig. 6 is a general block diagram of an image processing system according to an embodiment of the present invention.
Fig. 7 is a general block diagram of an image processing device according to an embodiment of the present invention.
Embodiment
Embodiments of the present invention are described below with reference to the accompanying drawings.
Fig. 1 is an overview flowchart of the image processing method according to an embodiment of the present invention. The image processing method according to the embodiment comprises: an input step S100 of inputting a first image and a second image of the same size; an image frame extraction step S200 of extracting, in the same manner, at least one first image frame from the first image and at least one second image frame from the second image; an image frame pairing step S300 of finding, for each second image frame, the first image frame closest in position in the first image as the first image frame corresponding to that second image frame; a comparison step S400 of comparing each second image frame with its corresponding first image frame to determine whether the two are the same or different; a marking step S500 of, if a second image frame is determined to differ from its corresponding first image frame, marking the position of that second image frame in the second image and appending the mark to the second image; and an output step S600 of outputting the second image.
The first image and the second image input in input step S100 have the same size; that is, the first image and the second image consist of the same number of rows and columns of pixels. Since the image processing method of the embodiment compares two images of the same size, a preliminary step may be included before input step S100: two images are first read in, and it is judged whether their sizes are equal; if they are, the two images are supplied to input step S100 as the first image and the second image. If the two images are judged, by various prior-art techniques, to differ in size and therefore to be different, they are not processed further by the image processing method of the embodiment; alternatively, a resizing step may adjust the two images to the same size, after which they are supplied to input step S100 as the first image and the second image. Fig. 2A exemplarily shows a first image to which an embodiment of the present invention is applied, and Fig. 2B a second image; the first image may be an original image, and the second image a contrast image.
In image frame extraction step S200, first image frames and second image frames are extracted from the first image and the second image respectively, using the same method for extracting the first image frames from the first image as for extracting the second image frames from the second image. The first image frames and the second image frames may be referred to collectively as image frames. For example, connected regions may be found with the FindContours function of the common image software package OpenCV, or by calling the existing BlobLibrary, so as to extract from the (color or monochrome) first image and second image the edge contours of image content such as graphics or text, and thereby obtain rectangular frames enclosing that content, which serve as the first image frames and second image frames respectively. There may be more than one first image frame or second image frame; on the other hand, if an image contains no content at all, for example a blank white image, the entire image may be taken as a single image frame. The embodiment of the present invention also provides another means of extracting image frames from an image, described below. By extracting image frames, blank areas of the image are at least partly excluded from subsequent processing, which speeds up processing.
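As an illustration of the extraction step, the following is a minimal pure-Python sketch that extracts bounding-box image frames from a binarized image via connected-component flood fill. It is a stand-in for the OpenCV FindContours approach named above, not the patent's own implementation; the function name and the 0/1 row-list input format are assumptions for the example.

```python
from collections import deque

def extract_frames(binary):
    """Return bounding-box frames (x0, y0, x1, y1) of the connected
    foreground regions in a binary image given as a list of rows of 0/1.
    Each frame encloses one connected region of content."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    frames = []
    for y in range(h):
        for x in range(w):
            if binary[y][x] and not seen[y][x]:
                # flood-fill one connected region, tracking its extent
                x0 = x1 = x
                y0 = y1 = y
                queue = deque([(x, y)])
                seen[y][x] = True
                while queue:
                    cx, cy = queue.popleft()
                    x0, x1 = min(x0, cx), max(x1, cx)
                    y0, y1 = min(y0, cy), max(y1, cy)
                    for nx, ny in ((cx + 1, cy), (cx - 1, cy),
                                   (cx, cy + 1), (cx, cy - 1)):
                        if (0 <= nx < w and 0 <= ny < h
                                and binary[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((nx, ny))
                frames.append((x0, y0, x1, y1))
    return frames
```

As in the text above, a blank image yields no content regions; in that case the caller may fall back to treating the entire image as a single frame.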
In image frame pairing step S300, a corresponding first image frame is found, among the first image frames extracted from the first image, for each second image frame extracted from the second image; if there are multiple second image frames, a corresponding first image frame is found for each of them. The coordinate systems of the first image and the second image can be arranged in the same way, for example both taking the lower-left corner of the image as the origin, or both taking the image center as the origin, so that coordinates in the first image and the second image are comparable: any point in one image has a counterpart with the same coordinates in the other image. Since an image frame is a rectangular frame, those skilled in the art will appreciate that when a frame is extracted, its information can be known and recorded; this information includes at least the vertex coordinates of the frame in its image, and in fact the coordinates of the two vertices on one diagonal of the frame suffice to fix the frame's position in the coordinate system.
Since the first image and the second image use the same coordinate system, the corresponding first image frame for each second image frame can be found among the at least one first image frame in various ways. For example, one may find the first image frame whose center coordinates are closest to the center coordinates of the second image frame, regard that first image frame as the frame closest in position to the second image frame, and take it as the corresponding first image frame. Alternatively, one may find the first image frame that minimizes the sum of the distances between corresponding vertices on a diagonal of a given direction: for example, the first image frame for which the distance from the upper-left vertex of the second image frame to the upper-left vertex of the first image frame, plus the distance from the lower-right vertex of the second image frame to the lower-right vertex of the first image frame, is smallest among the at least one first image frame; that first image frame is regarded as closest in position to the second image frame and taken as the corresponding first image frame. Or one may find the first image frame that minimizes the mean of the distances between corresponding vertices on a diagonal of a given direction: for example, the first image frame for which the average of the distance from the lower-left vertex of the second image frame to the lower-left vertex of the first image frame and the distance from the upper-right vertex of the second image frame to the upper-right vertex of the first image frame is smallest among the at least one first image frame. The ways of finding the corresponding first image frame for a second image frame are not limited to the above. Those skilled in the art will appreciate that if some way yields multiple corresponding first image frames for a given second image frame, another way may be combined with it to settle on a single corresponding first image frame, or all of the multiple first image frames may be treated as corresponding first image frames in the subsequent processing. Those skilled in the art will also understand that if the first image contains only one first image frame, that first image frame may correspond to all second image frames.
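The first pairing criterion above, nearest center, can be sketched as follows. This is an illustrative sketch only; the function names are assumptions, and frames are the (x0, y0, x1, y1) rectangles in the shared coordinate system described above.

```python
def center(frame):
    # Center point of a frame (x0, y0, x1, y1)
    x0, y0, x1, y1 = frame
    return ((x0 + x1) / 2.0, (y0 + y1) / 2.0)

def pair_frames(second_frames, first_frames):
    """For each second-image frame, pick the first-image frame whose
    center is nearest, one of the pairing criteria the patent describes.
    Returns a dict mapping each second frame to its paired first frame."""
    pairs = {}
    for sf in second_frames:
        sx, sy = center(sf)
        pairs[sf] = min(
            first_frames,
            key=lambda ff: (center(ff)[0] - sx) ** 2 + (center(ff)[1] - sy) ** 2,
        )
    return pairs
```

The vertex-distance-sum and mean-vertex-distance criteria differ only in the `key` function; ties could be broken by combining criteria, as noted above.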
In comparison step S400, each pair of mutually corresponding first and second image frames formed in image frame pairing step S300 is compared to judge whether the two frames are identical. According to embodiments of the present invention, this can be judged in various ways; the comparison process is described further below.
In marking step S500, for each second image frame judged in comparison step S400 to differ from its corresponding first image frame, the position of that second image frame in the second image is marked. The mark may be placed at the position of the second image frame, or outside that position, for example outside the second image with an indication pointing to the position of the second image frame; or the coordinate position of the second image frame may be listed in a table. In short, any manner of expressing the position of the second image frame may be used, and the mark is appended to the second image as a component of the second image. In output step S600, the second image is output; if a mark was applied to the second image in marking step S500, the output second image contains that mark.
As an improvement of the above image processing method, if there is a first image frame with no corresponding second image frame, the marking step may further mark the position in the second image corresponding to that first image frame, and append the mark to the second image. That is, if after image frame pairing step S300 some first image frame has no second image frame as its counterpart, marking step S500 may additionally mark the corresponding position of that first image frame in the second image. Because the first image and the second image share the same coordinate system, the coordinate position of the first image frame in the first image can serve as its corresponding position in the second image. Similarly, the mark may be placed at that corresponding position, or outside it, for example outside the second image with an indication pointing to that position; or the coordinates of the corresponding position may be listed in a table. In short, any manner of expressing the corresponding position of the first image frame in the second image may be used, and the mark is appended to the second image as a component of the second image. In this case, the second image output in output step S600 contains the mark concerning the corresponding position of that first image frame in the second image.
As a further improvement of the above image processing method, the method may also comprise an image frame segmentation step: if the measure of a first image frame or second image frame exceeds a first predetermined threshold, that frame is further divided, using a fixed grid with the same alignment for all frames, into multiple first image frames or multiple second image frames. This image frame segmentation step may optionally be performed between image frame extraction step S200 and image frame pairing step S300.
That is, a first image frame or second image frame extracted in image frame extraction step S200 may optionally be divided further into smaller frames, and the multiple smaller frames obtained by dividing the original frame replace it in subsequent processing. Area may serve as the measure: an area threshold is set as the first predetermined threshold, and any first or second image frame whose area exceeds it is divided. Alternatively, side length may serve as the measure: a length threshold is set, and a first or second image frame is divided if the length of its long side, or the length of its short side, exceeds that threshold. The first predetermined threshold can be set in various ways, for example as a certain proportion of the total image area, or as a certain proportion of the image side length, or with regard to the distribution of the areas or side lengths of the frames extracted in image frame extraction step S200, for example by setting the proportion of frames to be divided out of the total number of frames.
A grid may be used to divide the image frames. For all first image frames and second image frames extracted from the first image and the second image, a grid of the same scale is used, and the grid is aligned identically to all frames: for example, a grid whose cells are 20*30 pixels is aligned to the upper-left corner of every frame whose measure exceeds the first predetermined threshold, so that the multiple first image frames and multiple second image frames remain comparable after division. Those skilled in the art will understand that the grid cell side lengths can be chosen in various ways, for example as a certain proportion of the corresponding image side length.
If the image frame segmentation step is adopted, the multiple frames obtained by grid division replace the original image frame as input to image frame pairing step S300.
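The grid division above can be sketched as follows. This is an illustrative sketch: the area-based measure is one of the options described, the 20*30 cell size comes from the example above, the area threshold value is an assumption, and the grid is anchored at each frame's upper-left corner per the alignment example.

```python
def split_frame(frame, cell_w=20, cell_h=30, area_threshold=600):
    """Divide an oversized frame (x0, y0, x1, y1) along a fixed grid
    aligned to the frame's upper-left corner. Frames whose area does
    not exceed the threshold are returned unchanged."""
    x0, y0, x1, y1 = frame
    if (x1 - x0) * (y1 - y0) <= area_threshold:
        return [frame]
    cells = []
    y = y0
    while y < y1:
        x = x0
        while x < x1:
            # clip trailing cells to the frame's own boundary
            cells.append((x, y, min(x + cell_w, x1), min(y + cell_h, y1)))
            x += cell_w
        y += cell_h
    return cells
```

Applying the same cell size and alignment rule to every oversized frame in both images keeps the resulting sub-frames comparable, as the text requires.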
The ways that may be adopted in comparison step S400 to compare the mutually corresponding first and second image frames paired in image frame pairing step S300 are described below by example.
For example, in comparison step S400, the positional relationship of the second image frame and its corresponding first image frame may be compared: if the distance between the second image frame and the first image frame exceeds a second predetermined threshold, the second image frame is determined to differ from the first image frame.
Since the first image and the second image use the same coordinate system, the distance between the second image frame and the first image frame can be defined in various ways. For example, it can be the distance between the center of the second image frame and the center of the first image frame; or the sum of the distances between corresponding vertices on a diagonal of the same direction, for example the distance between the lower-left vertices of the two frames plus the distance between their upper-right vertices; or the average of the distances between corresponding vertices on a diagonal of the same direction, for example the average of the distance between the upper-left vertices of the two frames and the distance between their lower-right vertices; and so on. Those skilled in the art will recognize that the distance between two image frames can also be defined in other ways. The definition used here may be the same as, or different from, the definition of the distance between two image frames used in image frame pairing step S300.
The second predetermined threshold, used to judge whether corresponding first and second image frames are identical, can be determined as needed: for example, under a stricter standard, this second predetermined threshold should be smaller; otherwise it can be set higher. Its determination may also take into account the chosen definition of frame distance: for example, if the sum of corresponding diagonal vertex distances is used as the distance between two frames, the threshold can be larger than when the center-point distance is used. It may further take into account the size of the entire image, for example being set as a certain proportion of the image side length. If the distance between mutually corresponding second and first image frames exceeds the second predetermined threshold, the second image frame is determined to differ from the first image frame.
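The position-based test above can be sketched with the center-distance definition, one of the several distance definitions allowed. The function name and threshold are illustrative assumptions.

```python
def frames_differ_by_position(f1, f2, threshold):
    """Return True if two frames (x0, y0, x1, y1) in the shared
    coordinate system are judged different because the distance
    between their centers exceeds the second predetermined threshold."""
    c1x, c1y = (f1[0] + f1[2]) / 2.0, (f1[1] + f1[3]) / 2.0
    c2x, c2y = (f2[0] + f2[2]) / 2.0, (f2[1] + f2[3]) / 2.0
    dist = ((c1x - c2x) ** 2 + (c1y - c2y) ** 2) ** 0.5
    return dist > threshold
```

Swapping in the vertex-distance-sum or mean-vertex-distance definition only changes how `dist` is computed; as noted above, the threshold would then typically be scaled accordingly.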
Alternatively, for example, in comparison step S400, the mean gray-level differences of the second image frame and its corresponding first image frame for each of the same color components may be compared: if the largest mean gray-level difference exceeds a third predetermined threshold, the second image frame is determined to differ from the first image frame.
Those skilled in the art will understand that the RGB (red-green-blue) pixel values of each pixel in the corresponding first and second image frames are known, so the mean gray-level differences of the first image frame and the second image frame for the R, G, and B components can be calculated by the following formulas (1)-(3):
R_diff = Σ|R_p1 − R_p2| / N    (1)
G_diff = Σ|G_p1 − G_p2| / N    (2)
B_diff = Σ|B_p1 − B_p2| / N    (3)
where R_diff, G_diff, and B_diff are the mean gray-level differences of the corresponding first and second image frames for the R, G, and B components respectively; R_p1, G_p1, and B_p1 are the gray values of pixel p1 in the first image frame for the R, G, and B components; R_p2, G_p2, and B_p2 are the gray values of pixel p2 in the second image frame for the R, G, and B components; pixel p1 and pixel p2 have the same coordinates; and N is the number of pixels in the first image frame (or the second image frame).
As mentioned above, the first image and the second image use the same coordinate system, and p1 and p2 are pixels with the same coordinates in that system. If the positions of the corresponding first and second image frames in the coordinate system coincide exactly, their pixels p1 and p2 correspond one to one. However, the corresponding first and second image frames may deviate in position within the shared coordinate system. In that case, one may consider only the "intersection" of the first and second image frames, that is, calculate only over pixels p1 and p2 whose coordinates lie within both frames; or consider their "union", that is, temporarily extend each frame within its image to cover the coordinate range of the other frame; or exclude from the calculation the pixels at positions contained in the first image frame but not in the second, while extending the first image frame within the first image to cover the coordinate range of the second image frame, or vice versa. In short, the first and second image frames are adjusted so that their positions in the coordinate system coincide exactly, after which their N pixels p1 and p2 correspond one to one.
Then, the maximum of R_diff, G_diff, and B_diff is taken as the maximum mean gray-level difference of the corresponding second and first image frames; if this maximum mean gray-level difference exceeds the third predetermined threshold, the second image frame is determined to differ from the first image frame. Alternatively, for example, it may be stipulated that the second image frame is determined to differ from the first image frame only when R_diff, G_diff, and B_diff all exceed the third predetermined threshold.
The third predetermined threshold, used for judging whether the corresponding first image frame and second image frame are identical, can be determined as required: for example, if a stricter standard is adopted, the third predetermined threshold should be smaller; otherwise it may be set higher. In addition, the determination of the third predetermined threshold may also take into account the gray-value range of the whole image, for example by setting it to a certain proportion of the maximum gray value (for example, 256). If the maximum mean gray difference between the corresponding second image frame and first image frame is greater than the third predetermined threshold, the second image frame is determined to be different from the first image frame.
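A minimal sketch of the mean-gray-difference comparison described above, assuming the two frames have already been adjusted to N aligned pixel pairs; the function name and the pixel-list representation are illustrative assumptions.

```python
def frames_differ_by_gray(frame1_rgb, frame2_rgb, threshold):
    """frameN_rgb: list of (R, G, B) tuples for the N aligned pixel pairs.
    Compute the mean absolute gray difference per channel (R_diff, G_diff,
    B_diff), take the maximum, and compare it with the third predetermined
    threshold; True means the frames are judged different."""
    n = len(frame1_rgb)
    diffs = []
    for c in range(3):  # channels R, G, B
        total = sum(abs(p1[c] - p2[c])
                    for p1, p2 in zip(frame1_rgb, frame2_rgb))
        diffs.append(total / n)
    return max(diffs) > threshold
```

The stricter variant in the text replaces `max(diffs) > threshold` with `min(diffs) > threshold`, i.e. all three channel differences must exceed the threshold.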
Those skilled in the art will understand that the R, G and B components need not all be used; any subset of them may be chosen, as long as the same color components are chosen for the first image frame and the second image frame.
Those skilled in the art will understand that the HSI (hue-saturation-intensity) components of each pixel may also be used to determine whether the second image frame is identical to the first image frame.
Alternatively, for example, in the comparison step S400, the histogram distances between the second image frame and the corresponding first image frame may be compared for each of the same colors; if the maximum histogram distance is greater than the fourth predetermined threshold, the second image frame is determined to be different from the first image frame.
As mentioned above, the first image and the second image adopt an identical coordinate system; if the positions of the corresponding first image frame and second image frame in this coordinate system are not exactly identical, the first image frame and the second image frame may be adjusted in the manner described above so that their positions in the coordinate system coincide exactly.
Those skilled in the art will understand that the RGB (red-green-blue) pixel values, i.e. gray values, of each pixel in the corresponding first image frame and second image frame can be known, so that the histograms of the first image frame and of the second image frame with respect to each of the RGB colors can also be known. Any kind of histogram may be adopted here, for example a histogram of pixel count with respect to gray-level distribution, or a Local Binary Patterns (LBP) histogram, etc.
Taking the LBP histogram as an example: the distance between the LBP histogram of the first image frame and that of the second image frame is computed for the R component, for the G component and for the B component, respectively. Any histogram distance may be adopted, for example the chi-square distance, the correlation distance, etc. In addition, the fourth predetermined threshold, against which each histogram distance is compared, can be set according to design needs, for example the nature of the histogram, the nature of the histogram distance, and the degree of strictness desired: if a stricter standard is adopted, the fourth predetermined threshold should be smaller; otherwise it may be set higher.
Then, the maximum of the histogram distances for the RGB components is taken as the maximum histogram distance between the corresponding second image frame and first image frame; if this maximum histogram distance is greater than the fourth predetermined threshold, the second image frame is determined to be different from the first image frame. Alternatively, for example, it may be stipulated that the second image frame is determined to be different from the first image frame only when the histogram distances for all the RGB components are greater than the fourth predetermined threshold.
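The chi-square distance named above, and the max-over-channels decision, can be sketched as follows, assuming the per-channel (e.g. LBP) histograms have already been computed; the function names are illustrative.

```python
def chi_square_distance(h1, h2):
    """Chi-square distance between two histograms given as equal-length
    lists of bin counts; bins where both counts are zero contribute 0."""
    d = 0.0
    for a, b in zip(h1, h2):
        if a + b > 0:
            d += (a - b) ** 2 / (a + b)
    return d

def frames_differ_by_histogram(hists1, hists2, threshold):
    """hists1/hists2: per-channel histograms (e.g. LBP histograms for R,
    G, B) of the first and second frame. The maximum of the per-channel
    distances is compared with the fourth predetermined threshold."""
    dists = [chi_square_distance(h1, h2) for h1, h2 in zip(hists1, hists2)]
    return max(dists) > threshold
```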
Those skilled in the art will understand that the HSI (hue-saturation-intensity) components of each pixel may also be used to determine whether the second image frame is identical to the first image frame.
Those skilled in the art will recognize that the above means of comparing whether image frames are identical may each be used separately, or may be combined in any order and in any combination. If the above means yield different comparison results, the final result can be determined according to design preference: for example, under a stricter standard, two image frames may be determined to be identical only when all the means judge them identical; under a looser standard, two image frames may be determined to be identical as long as any one means judges them identical. Moreover, other image parameters may be adopted, by other means, to compare whether the corresponding first image frame and second image frame are identical.
A method of extracting image frames that can be adopted by the image processing method according to the embodiment of the present invention is described below. Fig. 3 is a flowchart illustrating the image frame extraction process according to the embodiment of the present invention; its implementation is schematically described by example in conjunction with Fig. 4A to Fig. 4D.
As shown in Fig. 3, the image frame extraction process S200 may comprise: a gray-scale image extraction step S210, of extracting gray-scale images of the same colors from the first image and the second image, respectively; a histogram extraction step S220, of extracting, for each gray-scale image extracted from the first image and from the second image, a histogram of pixel count with respect to gray-level distribution; a gray-scale interval dividing step S230, of dividing, for each such gray-scale image, the gray range of the corresponding gray-scale image into gray-scale intervals according to its histogram; a binarization step S240, of binarizing, for each such gray-scale image, the corresponding gray-scale image into binary images with respect to each of its gray-scale intervals; and an image frame obtaining step S250, of extracting image frames in each binary image of the first image as the at least one first image frame, and extracting image frames in each binary image of the second image as the at least one second image frame.
In the gray-scale image extraction step S210, any known means may be adopted to split an image into monochrome gray-scale maps according to its components. For example, the first image can be split into a first R image, a first G image and a first B image according to its RGB components, and the same component-extraction means can be used to split the second image into a second R image, a second G image and a second B image. The image frame extraction process is illustrated below taking the original image shown in Fig. 2A (the first image) as an example; those skilled in the art will understand that the image frame extraction process S200 can be implemented similarly on the contrast image shown in Fig. 2B (the second image).
Fig. 4A is a schematic diagram of the above first G image obtained by extracting the RGB components of the first image shown in Fig. 2A. The G-component gray-scale map is taken as an example here; those skilled in the art will understand that implementing the gray-scale image extraction step S210 can similarly yield the R-component and B-component gray-scale maps.
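Step S210 amounts to splitting an RGB image into three monochrome maps; a minimal stdlib sketch, in which the representation of the image as a 2-D list of (R, G, B) tuples is an assumption made for illustration:

```python
def split_channels(rgb_image):
    """Split an RGB image, given as a 2-D list of (R, G, B) tuples, into
    three monochrome gray-scale maps, one per color component, as in
    step S210."""
    return [[[pixel[c] for pixel in row] for row in rgb_image]
            for c in range(3)]
```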
In the histogram extraction step S220, known means can be used to extract the histogram of pixel count with respect to gray-level distribution for each of the first R image, the first G image, the first B image, the second R image, the second G image and the second B image. The gray range of each gray-scale image may be 0 to 255, or, for example, 0 to 31, 0 to 1023, etc.; the gray ranges of all the gray-scale images can be made identical.
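The per-image histogram of step S220 is a simple count of pixels per gray level; a sketch assuming the gray-scale image is a 2-D list of gray values:

```python
def gray_histogram(gray_image, levels=256):
    """Histogram of pixel count with respect to gray-level distribution
    (step S220): hist[v] is the number of pixels with gray value v."""
    hist = [0] * levels
    for row in gray_image:
        for v in row:
            hist[v] += 1
    return hist
```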
Fig. 4B shows the extracted histogram of pixel count with respect to gray-level distribution of the G image shown in Fig. 4A. The histogram of the G-component gray-scale map is taken as an example here; those skilled in the art will understand that implementing the histogram extraction step S220 can similarly yield the histograms of the R-component and B-component gray-scale maps.
In the gray-scale interval dividing step S230, for the histogram of each of the first R, G and B images and the second R, G and B images, the gray range of the corresponding gray-scale image is divided into at least one gray-scale interval, so as to separate, as far as possible, image content such as patterns and text in the gray-scale image from its background into different gray-scale intervals. Identical means and criteria can be adopted for dividing the gray range of each gray-scale image.
The processing of dividing the gray range of each gray-scale image into at least one gray-scale interval in the gray-scale interval dividing step S230 is described below. This dividing processing may be, for example: in the gray-scale interval dividing step S230, taking the predetermined number of maxima with the largest pixel counts in the histogram, and dividing the gray range of the corresponding gray-scale image into gray-scale intervals bounded by the two end points of the range and, for each maximum, the nearest minimum point or zero-value point in a predetermined direction.
For a gray-scale image, in its histogram of pixels with respect to gray values, suppose that the horizontal axis represents the gray value and the vertical axis represents the number of pixels having that gray value. Those skilled in the art will appreciate that this is merely a typical form of histogram and that the implementation of the embodiment of the present invention does not depend on this form; the present invention can still be implemented if the meanings of the two axes are exchanged, and any histogram embodying the distribution of pixels with respect to gray values can realize the embodiment of the present invention.
In a histogram of the above specific form, for example, the M maximum points with the largest pixel counts may be taken, M being a natural number, for example 2, 3, 5, and so on; then, from each of these M maximum points, searching in, for example, the direction of increasing gray value, the nearest minimum point or point of zero pixel count is found and serves as a dividing point, so that the gray range from 0 to the maximum value is divided into multiple gray-scale intervals by these dividing points. Those skilled in the art will understand that, after the M maximum points are determined, the nearest minimum points or zero-count points may instead be sought in, for example, the direction of decreasing gray value.
For example, for the histogram of pixel count with respect to gray-level distribution of the G image shown in Fig. 4B, the three maximum points with the largest pixel counts are taken, namely points Q1, Q2 and Q3 shown in Fig. 4B; then, in the direction of increasing gray value (from left to right in Fig. 4B), the nearest minimum point or zero-count point of each is found as a dividing point. For maximum point Q1, the nearest minimum point Q1' on its right serves as a dividing point; for maximum point Q2, the nearest minimum point Q2' on its right serves as a dividing point; for maximum point Q3, since its gray value is already the maximum value 255, no dividing point is sought on its right, and it can itself be regarded as a dividing point. Thus, by the lowest gray value 0, the gray value 48 corresponding to minimum point Q1', the gray value 195 corresponding to minimum point Q2', and the highest gray value 255, the gray range of the G image shown in Fig. 4B is divided into the three gray-scale intervals [0, 48], [49, 195] and [196, 255].
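The dividing rule just illustrated (top-M maxima, then the nearest minimum or zero-count point to the right of each) can be sketched as follows; the handling of plateaus in the histogram is a simplification made for illustration:

```python
def divide_gray_range(hist, m):
    """Divide the gray range [0, len(hist)-1] into intervals: take the m
    local maxima with the largest pixel counts, scan rightwards from each
    for the nearest local minimum or zero-count bin (the dividing point),
    and cut the range at the dividing points."""
    n = len(hist)
    maxima = [i for i in range(n)
              if (i == 0 or hist[i] >= hist[i - 1])
              and (i == n - 1 or hist[i] >= hist[i + 1])]
    maxima = sorted(sorted(maxima, key=lambda i: -hist[i])[:m])
    cuts = []
    for i in maxima:
        j = i
        while j < n - 1:
            j += 1
            if hist[j] == 0 or (hist[j] <= hist[j - 1] and
                                (j == n - 1 or hist[j] <= hist[j + 1])):
                break
        cuts.append(j)
    intervals, lo = [], 0
    for c in sorted(set(cuts)):
        intervals.append((lo, c))
        lo = c + 1
    if lo <= n - 1:
        intervals.append((lo, n - 1))
    return intervals
```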
The processing of dividing the gray range of a gray-scale image into gray-scale intervals is not limited to the above. For example, a count threshold may be determined, which may be a certain proportion of the total number of pixels in the image, the maximum points whose pixel counts exceed this threshold are determined in the histogram, and the dividing points are then found from these maximum points. Alternatively, gray-value intervals over which the pixel count is continuously 0 may be found in the histogram, the M largest such intervals (or those longer than a certain threshold) are determined, and any point within each determined interval serves as a dividing point, so that the gray range from 0 to the maximum value is divided into multiple gray-scale intervals. Those skilled in the art can conceive of other ways of dividing the gray-scale intervals.
In the binarization step S240, each of the first R, G and B images and the second R, G and B images is binarized according to each of its gray-scale intervals. For example, for one of the above gray-scale images and one gray-scale interval divided within its gray range, the pixels of this gray-scale image whose gray values fall within this interval can be set to black and the remaining pixels set to white; performing this processing for each gray-scale interval divided within the gray range of this gray-scale image yields, from this gray-scale image, one binary image corresponding to each of its gray-scale intervals. Those skilled in the art will appreciate that the manner of binarization is not limited to the above; for example, the pixels whose gray values fall within the interval may instead be set to white and the remaining pixels set to black, or any other manner of distinguishing with two values the pixels inside and outside a gray-scale interval may be used.
For example, for the gray-scale interval [0, 48] obtained by dividing the gray range of the G image shown in Fig. 4A, the pixels whose gray values fall within the interval [0, 48] are set to 1 (shown as black) and all remaining pixels are set to 0 (shown as white), so that the binary image corresponding to the gray-scale interval [0, 48] is obtained from the first G image, as shown in Fig. 4C. Those skilled in the art will understand that the binary images corresponding to the gray-scale intervals [49, 195] and [196, 255] can be extracted from the first G image in the same way.
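The per-interval binarization of step S240 can be sketched as follows (black = 1, white = 0, matching the convention of Fig. 4C; the function name is illustrative):

```python
def binarize(gray_image, interval):
    """gray_image: 2-D list of gray values; interval: (low, high),
    inclusive. Pixels whose gray value falls inside the interval become
    1 (shown as black), all others 0 (white), as in step S240."""
    low, high = interval
    return [[1 if low <= v <= high else 0 for v in row]
            for row in gray_image]
```

One call per gray-scale interval yields one binary image per interval, as the step describes.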
In the image frame obtaining step S250, for each binary image corresponding to each gray-scale interval of the first R, G and B images and the second R, G and B images, any known technique can be used that extracts the edge contours of image content such as patterns or text and thereby obtains the rectangular boxes enclosing the image content; such rectangular boxes are extracted in each of the above binary images to serve as image frames. Extracting edge contours and determining rectangular boxes from them can be realized by connected-domain search, for example via the FindContours function of the known image software package OpenCV, or by calling the BlobLibrary library. Since each image has been binarized, the extraction process is easy and accurate. The first R, G and B images originate from the first image, so all image frames extracted from them are first image frames; the second R, G and B images originate from the second image, so all image frames extracted from them are second image frames.
Fig. 4D is a schematic diagram of the image frames obtained by performing image frame obtaining on the binary image, corresponding to one gray-scale interval, shown in Fig. 4C; the rectangular boxes enclosing the individual text items in Fig. 4D are the extracted image frames, in this case multiple first image frames. Those skilled in the art will understand that image frames can be extracted in the same way from the binary images corresponding to the gray-scale intervals [49, 195] and [196, 255].
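As a stdlib stand-in for the connected-domain search named above (OpenCV's FindContours followed by a bounding rectangle), the following flood-fill sketch returns one bounding box (x, y, w, h) per 4-connected region of 1-pixels in a binary image; it illustrates the idea and is not the OpenCV implementation itself.

```python
from collections import deque

def extract_frames(binary):
    """Bounding rectangles (x, y, w, h) of the 4-connected regions of
    1-pixels in a binary image (2-D list), one rectangle per connected
    component, as in step S250."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    frames = []
    for y0 in range(h):
        for x0 in range(w):
            if binary[y0][x0] == 1 and not seen[y0][x0]:
                seen[y0][x0] = True
                queue = deque([(x0, y0)])
                xmin = xmax = x0
                ymin = ymax = y0
                while queue:  # breadth-first flood fill of the component
                    x, y = queue.popleft()
                    xmin, xmax = min(xmin, x), max(xmax, x)
                    ymin, ymax = min(ymin, y), max(ymax, y)
                    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nx, ny = x + dx, y + dy
                        if 0 <= nx < w and 0 <= ny < h and \
                           binary[ny][nx] == 1 and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((nx, ny))
                frames.append((xmin, ymin, xmax - xmin + 1, ymax - ymin + 1))
    return frames
```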
Those skilled in the art will understand that the first image and the second image can have an identical coordinate system, and that the position coordinates of all image frames in this coordinate system can be known and recorded. Each extracted image frame then has a definite position in the coordinate system, and may enter subsequent processing in its state in the corresponding binary image, that is, the image frame contains the content within its position range in the corresponding binary image. It may instead enter subsequent processing in its state in the corresponding monochrome gray-scale image, that is, containing the content within its position range in the corresponding gray-scale image. It may also enter subsequent processing in its state in the corresponding first or second image, that is, a first image frame contains the content within its position range in the first image, and a second image frame contains the content within its position range in the second image. In that case, the image frames extracted from the gray-scale maps of different color components may overlap; if image frames extracted from the gray-scale maps of different color components coincide completely in position, the information, including the position, of only one of them may be recorded; and if the first or second image is a color image, the image frames may contain color content.
Those skilled in the art will further understand that the R, G and B components need not all be used; any subset of them may be chosen, as long as the same color components are chosen for the first image and the second image, so as to guarantee the comparability of the first image frames and the second image frames.
No matter by which means the gray-scale intervals are divided and obtained, as an improvement to the embodiment of the present invention, a gray-scale interval may be further subdivided into sub-intervals, and the multiple sub-intervals obtained by the subdivision then replace the original gray-scale interval as the new gray-scale intervals for subsequent processing. For example, the image processing method according to the embodiment of the present invention may further comprise a gray-scale interval subdividing step: for a gray-scale interval divided in the gray-scale interval dividing step, the pixels of the corresponding gray-scale image whose gray values fall within this interval are traversed in turn, the gray values of adjacent pixels whose gray difference is less than the fifth predetermined threshold are put into the same sub-interval, and sub-intervals having overlapping parts are merged, yielding one or more subdivided gray-scale intervals. This gray-scale interval subdividing step can be performed after the gray-scale interval dividing step S230 and before the binarization step S240, to improve the effect of the binarization step S240.
In the above gray-scale interval subdividing step, each gray-scale interval is processed separately. For one gray-scale interval, among the pixels of the corresponding gray-scale image whose gray values fall within this interval, any pixel may be chosen as a starting point, and its gray value is placed into a sub-interval. For each sub-interval, its gray range extends from the gray value of its pixel with the smallest gray value to the gray value of its pixel with the largest gray value. Starting from the starting point, searching proceeds in a certain direction order, for example from top to bottom column by column and from left to right within a column, or row by row, or in the four directions up, down, left and right, or in eight directions, and the comparison of adjacent pixels is carried out as follows. For example, the gray difference between a pixel Sig in the i-th sub-interval RANGEi (i being the index of the sub-interval and g the index of the pixel within RANGEi, both natural numbers) and a certain adjacent pixel Sx yet to be judged is compared with the fifth predetermined threshold. If the gray difference between these two adjacent pixels is less than the fifth predetermined threshold, the gray value of the pixel Sx is placed into the sub-interval RANGEi, the pixel Sx can be labeled Sih (h being the index of the pixel within RANGEi, a natural number), and the above searching and comparison continue in the original direction from the pixel Sih. If the gray difference between these two adjacent pixels is greater than or equal to the fifth predetermined threshold, a new j-th sub-interval RANGEj (j being the index of the sub-interval, a natural number) is created from the gray value of the pixel Sx, the pixel Sx can be labeled Sjg (g being the index of the pixel within RANGEj, a natural number), the searching process from the pixel Sig in this direction ends, and the searching and comparison process starts from the pixel Sjg in this direction.
The above fifth predetermined threshold can be set according to design needs. For example, if a stricter standard is adopted, the fifth predetermined threshold should be smaller; otherwise it may be set larger. The fifth predetermined threshold may also be set, for example, as a certain proportion of the total gray range of the gray-scale image, or as a certain proportion of the gray range of the corresponding gray-scale interval.
In addition, those skilled in the art will recognize that, for one gray-scale interval, among the pixels of the corresponding gray-scale image whose gray values fall within this interval, multiple pixels may be chosen as starting points, each carrying out the above searching and comparison; once all the pixels of this gray-scale image whose gray values fall within this interval have been traversed, the above searching and comparison process is complete. In that case, optionally, the searching and comparison processes started from the multiple starting points each target different pixels, so as to avoid repetition.
After all the pixels of this gray-scale image whose gray values fall within this interval have been traversed, multiple sub-intervals may have been obtained; the sub-intervals whose gray ranges overlap can be merged, and the at least one sub-interval formed after the merging serves as the subdivided gray-scale intervals that replace the original gray-scale interval for subsequent processing.
For example, among the three gray-scale intervals [0, 48], [49, 195] and [196, 255] divided from the gray range of the G image shown in Fig. 4B, the interval [49, 195] may, through the above subdividing process with the fifth predetermined threshold set to 15, be further subdivided into the two gray-scale intervals [49, 150] and [151, 195]. Using the two intervals [49, 150] and [151, 195] in place of the original [49, 195] for the binarization step S240 separates the gray values of the text content and the background in the image into different gray-scale intervals more accurately, which is conducive to obtaining the binary images more accurately and, in turn, to extracting the image frames more accurately.
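The merging of overlapping sub-intervals in the subdividing step can be sketched as follows; the function name is illustrative, and the input sub-intervals are assumed to be (low, high) gray ranges already produced by the region-growing traversal.

```python
def merge_subintervals(subs):
    """Merge sub-intervals (low, high) whose gray ranges overlap; the
    merged result replaces the original gray-scale interval for
    subsequent processing."""
    subs = sorted(subs)
    merged = [list(subs[0])]
    for lo, hi in subs[1:]:
        if lo <= merged[-1][1]:          # gray ranges overlap
            merged[-1][1] = max(merged[-1][1], hi)
        else:
            merged.append([lo, hi])
    return [tuple(m) for m in merged]
```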
The above describes, according to the embodiment of the present invention, finding in the second image the differences from the first image. In fact, taking the first image as the original image and the second image as the contrast image, the embodiment of the present invention can, on the one hand, find in the contrast image the differences from the original image; on the other hand, after the first image and the second image are exchanged, that is, with the original image input as the second image and the contrast image input as the first image, the changes of the contrast image relative to the original image can be marked on the original image. For example, implementing the embodiment of the present invention with the image shown in Fig. 2A as the second image and the image shown in Fig. 2B as the first image can output the result shown in Fig. 5. Fig. 5 is a schematic diagram of marking on the original image the differences from the contrast image, where the positions indicated by the rectangular boxes are the differences of the original image relative to the contrast image.
The present invention can also be implemented as an image processing system. Fig. 6 is a general block diagram of the image processing system 1000 according to the embodiment of the present invention. As shown in Fig. 6, the image processing system 1000 may comprise: an input device 1100 for inputting the images to be contrasted, which may comprise, for example, a keyboard, a mouse, a scanner, and a remote input device connected through a communication network; a processing device 1200 for implementing the above image processing method according to the embodiment of the present invention, which may comprise, for example, the central processing unit of a computer; an output device 1300 for outputting the result of implementing the above image processing method, which may comprise, for example, a display, a printer, and a remote output device connected through a communication network; and a storage device 1400 for storing, in a volatile or non-volatile manner, the images to be contrasted and the results, commands and intermediate data of implementing the above image processing method, which may comprise various volatile or non-volatile memories such as a random access memory (RAM), a read-only memory (ROM), a hard disk or a semiconductor memory.
The present invention can also be embodied as an image processing apparatus. Fig. 7 is a general block diagram of the image processing apparatus 2000 according to the embodiment of the present invention. As shown in Fig. 7, the image processing apparatus 2000 may comprise: an input means 2100, usable for carrying out the above input step S100, for inputting a first image and a second image of the same size; an image frame extracting means 2200, usable for carrying out the above image frame extraction step S200, for extracting, in an identical manner, at least one first image frame from the first image and at least one second image frame from the second image; an image frame matching means 2300, usable for carrying out the above image frame matching step S300, for finding in the first image, for a second image frame, the first image frame closest in position, as the first image frame corresponding to this second image frame; a comparing means 2400, usable for carrying out the above comparison step S400, for comparing the second image frame with the corresponding first image frame to determine whether this second image frame is identical to or different from the corresponding first image frame; a marking means 2500, usable for carrying out the above marking step S500, for marking, if this second image frame is determined to be different from the corresponding first image frame, the position of this second image frame in the second image, and appending the mark to the second image; and an output means 2600, usable for carrying out the above output step S600, for outputting the second image.
In the above image processing apparatus 2000, if there is a first image frame having no corresponding second image frame, the marking means 2500 can also mark the position in the second image corresponding to this first image frame, and append the mark to the second image.
The above image processing apparatus 2000 may further comprise an image frame dividing means, usable for carrying out the above image frame dividing step, for further dividing, if the size of a first image frame or second image frame is greater than the first predetermined threshold, this first image frame or second image frame into multiple first image frames or multiple second image frames using a fixed grid with an identical alignment.
In the above image processing apparatus 2000, the comparing means 2400 can compare the positional relationship of the second image frame and the corresponding first image frame, and if the distance between this second image frame and this first image frame is greater than the second predetermined threshold, determine that this second image frame is different from this first image frame.
In the above image processing apparatus 2000, the comparing means 2400 can compare the mean gray differences between the second image frame and the corresponding first image frame for each of the same colors, and if the maximum mean gray difference is greater than the third predetermined threshold, determine that this second image frame is different from this first image frame.
In the above image processing apparatus 2000, the comparing means 2400 can compare the histogram distances between the second image frame and the corresponding first image frame for each of the same colors, and if the maximum histogram distance is greater than the fourth predetermined threshold, determine that this second image frame is different from this first image frame.
In the above image processing apparatus 2000, the image frame extracting means 2200 can be used for carrying out the image frame extraction step S200 described above, and may comprise: a gray-scale image extracting means, usable for carrying out the above gray-scale image extraction step S210, for extracting gray-scale images of the same colors from the first image and the second image, respectively; a histogram extracting means, usable for carrying out the above histogram extraction step S220, for extracting, for each gray-scale image extracted from the first image and from the second image, a histogram of pixel count with respect to gray-level distribution; a gray-scale interval dividing means, usable for carrying out the above gray-scale interval dividing step S230, for dividing, for each such gray-scale image, the gray range of the corresponding gray-scale image into gray-scale intervals according to its histogram; a binarizing means, usable for carrying out the above binarization step S240, for binarizing, for each such gray-scale image, the corresponding gray-scale image into binary images with respect to each of its gray-scale intervals; and an image frame obtaining means, usable for carrying out the above image frame obtaining step S250, for extracting image frames in each binary image of the first image as the at least one first image frame, and extracting image frames in each binary image of the second image as the at least one second image frame.
In the image processing apparatus 2000 described above, the gray-level interval division unit may take, in the histogram, a predetermined number of maxima with the largest pixel counts, and divide the gray scale of the corresponding grayscale image into gray-level intervals, taking as boundaries the two endpoints of the gray scale and, for each maximum, the nearest minimum point or zero-value point in a predetermined direction.
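The division rule above can be sketched in Python. This is a simplified reading of the rule: local maxima are bins strictly higher than both neighbors, the predetermined number of peaks defaults to 2, and the "predetermined direction" is taken as both directions away from each peak, with the gray-scale endpoints as fallback boundaries.

```python
def divide_gray_intervals(hist, n_peaks=2):
    """Keep the n_peaks local maxima with the largest pixel counts,
    and bound each peak's interval by the nearest local minimum or
    zero-count bin on either side (or the gray-scale endpoints)."""
    def nearest_valley(p, step):
        i = p
        # walk downhill until a local minimum, a zero-count bin,
        # or an endpoint of the gray scale is reached
        while 0 < i < len(hist) - 1 and hist[i + step] <= hist[i] and hist[i] > 0:
            i += step
        return i

    peaks = [i for i in range(1, len(hist) - 1)
             if hist[i] > hist[i - 1] and hist[i] > hist[i + 1]]
    peaks = sorted(peaks, key=lambda i: hist[i], reverse=True)[:n_peaks]
    return [(nearest_valley(p, -1), nearest_valley(p, +1)) for p in sorted(peaks)]
```

For a bimodal histogram the two resulting intervals meet at the valley between the peaks, so each interval isolates one dominant gray-level population.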
The image processing apparatus 2000 described above may further comprise a gray-level interval fine-division unit, operable to carry out the gray-level interval fine-division step described above, for traversing, for each gray-level interval produced by the gray-level interval division step, the pixels of the corresponding grayscale image whose gray levels lie within that interval; gray levels whose difference between neighboring pixels is less than a fifth predetermined threshold are placed into the same sub-interval, and sub-intervals having overlapping parts are merged to form one or more refined gray-level intervals.
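The merging part of this fine-division step can be sketched as follows. Building the sub-intervals themselves from neighboring-pixel gray differences is omitted here; the sketch only shows how sub-intervals sharing gray levels collapse into one or more refined intervals.

```python
def merge_overlapping(sub_intervals):
    """Merge gray-level sub-intervals that overlap, yielding the
    one or more refined gray-level intervals of the fine-division
    step. Each sub-interval is an inclusive (lo, hi) pair."""
    sub_intervals = sorted(sub_intervals)
    merged = [list(sub_intervals[0])]
    for lo, hi in sub_intervals[1:]:
        if lo <= merged[-1][1]:                 # overlaps the previous: extend it
            merged[-1][1] = max(merged[-1][1], hi)
        else:
            merged.append([lo, hi])             # disjoint: start a new interval
    return [tuple(iv) for iv in merged]
```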
The series of operations described in this specification can be carried out by hardware, by software, or by a combination of the two. When the series of operations is carried out by software, the computer program concerned may be installed into memory in a computer built into dedicated hardware, and the computer then executes the program. Alternatively, the computer program may be installed into a general-purpose computer capable of performing various kinds of processing, and that computer then executes it.
For example, the computer program may be stored in advance in a recording medium such as a hard disk or a ROM (read-only memory). Alternatively, the computer program may be stored (recorded) temporarily or permanently in a removable recording medium such as a floppy disk, a CD-ROM (compact disc read-only memory), an MO (magneto-optical) disc, a DVD (digital versatile disc), a magnetic disk, or a semiconductor memory. Such a removable recording medium can be provided as packaged software.
The present invention has been described in detail with reference to specific embodiments. It is obvious, however, that those skilled in the art can modify or replace the embodiments without departing from the spirit of the invention. In other words, the present invention is disclosed by way of illustration and is not to be construed restrictively. The appended claims should be considered in determining the gist of the invention.

Claims (10)

1. An image processing method, comprising:
an input step of inputting a first image and a second image of the same size;
an image frame extraction step of extracting, in an identical manner, at least one first image frame from the first image and at least one second image frame from the second image;
an image frame pairing step of finding, for the second image frame, the first image frame closest in position in the first image as the first image frame corresponding to the second image frame;
a comparison step of comparing the second image frame with the corresponding first image frame to determine whether the second image frame and the corresponding first image frame are the same or different;
a marking step of, if the second image frame is determined to be different from its corresponding first image frame, making a mark at the position of the second image frame in the second image and appending the mark to the second image; and
an output step of outputting the second image.
2. The image processing method according to claim 1, wherein,
if there is a first image frame having no corresponding second image frame, then in the marking step a mark is also made at the position in the second image corresponding to that first image frame, and the mark is appended to the second image.
3. The image processing method according to claim 1, further comprising
an image frame division step of, if the size of the first image frame or the second image frame is greater than a first predetermined threshold, further dividing the first image frame or the second image frame into a plurality of first image frames or a plurality of second image frames using a fixed grid with an identical alignment.
4. The image processing method according to any one of claims 1-3, wherein,
in the comparison step, the positional relationship between the second image frame and the corresponding first image frame is compared, and if the distance between the second image frame and the first image frame is greater than a second predetermined threshold, the second image frame is determined to be different from the first image frame.
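A positional test in the spirit of this claim can be sketched as follows. Representing each frame by its bounding box, measuring the Euclidean distance between box centers, and the threshold value 20.0 are all illustrative assumptions; the claim itself does not fix the distance measure.

```python
import math

def frames_differ_by_position(box_a, box_b, threshold=20.0):
    """Paired frames are considered different when the distance
    between them exceeds the 'second predetermined threshold'.
    Boxes are (x0, y0, x1, y1) tuples; the distance used is the
    Euclidean distance between box centers."""
    (ax0, ay0, ax1, ay1), (bx0, by0, bx1, by1) = box_a, box_b
    dx = (ax0 + ax1) / 2.0 - (bx0 + bx1) / 2.0
    dy = (ay0 + ay1) / 2.0 - (by0 + by1) / 2.0
    return math.hypot(dx, dy) > threshold
```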
5. The image processing method according to any one of claims 1-3, wherein,
in the comparison step, the mean gray-level differences between the second image frame and the corresponding first image frame are compared for each same color, and if the maximum mean gray-level difference is greater than a third predetermined threshold, the second image frame is determined to be different from the first image frame.
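The per-color mean-difference test of this claim can be sketched as follows; the threshold value 15.0 is an assumed placeholder for the "third predetermined threshold", and the frames are assumed to be equally sized H x W x channels arrays.

```python
import numpy as np

def frames_differ_by_mean(frame_a, frame_b, threshold=15.0):
    """For each color channel, compute the mean absolute gray-level
    difference between the paired frames; the frames are different
    when the largest per-channel mean exceeds the threshold."""
    diff = np.abs(frame_a.astype(np.int16) - frame_b.astype(np.int16))
    per_channel_mean = diff.reshape(-1, frame_a.shape[2]).mean(axis=0)
    return bool(per_channel_mean.max() > threshold)
```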
6. The image processing method according to any one of claims 1-3, wherein,
in the comparison step, the histogram distances between the second image frame and the corresponding first image frame are compared for each same color, and if the maximum histogram distance is greater than a fourth predetermined threshold, the second image frame is determined to be different from the first image frame.
7. The image processing method according to claim 1, wherein the image frame extraction step comprises:
a grayscale image extraction step of extracting grayscale images of the same colors from the first image and from the second image, respectively;
a histogram extraction step of extracting, for each grayscale image extracted from the first image and each grayscale image extracted from the second image, a histogram of pixel count against gray-level distribution;
a gray-level interval division step of dividing, according to the histogram, the gray scale of each grayscale image extracted from the first image and from the second image into gray-level intervals;
a binarization step of binarizing each grayscale image extracted from the first image and from the second image into a binary image with respect to each gray-level interval; and
an image frame acquisition step of extracting image frames from each binary image of the first image as the at least one first image frame, and extracting image frames from each binary image of the second image as the at least one second image frame.
8. The image processing method according to claim 7, wherein, in the gray-level interval division step, a predetermined number of maxima with the largest pixel counts are taken in the histogram, and the gray scale of the corresponding grayscale image is divided into gray-level intervals, taking as boundaries the two endpoints of the gray scale and, for each maximum, the nearest minimum point or zero-value point in a predetermined direction.
9. The image processing method according to claim 7, further comprising
a gray-level interval fine-division step of traversing, for each gray-level interval produced by the gray-level interval division step, the pixels of the corresponding grayscale image whose gray levels lie within that interval, placing gray levels whose difference between neighboring pixels is less than a fifth predetermined threshold into the same sub-interval, and merging sub-intervals having overlapping parts to form one or more refined gray-level intervals.
10. An image processing apparatus, comprising:
an input device for inputting a first image and a second image of the same size;
an image frame extraction device for extracting, in an identical manner, at least one first image frame from the first image and at least one second image frame from the second image;
an image frame pairing device for finding, for the second image frame, the first image frame closest in position in the first image as the first image frame corresponding to the second image frame;
a comparison device for comparing the second image frame with the corresponding first image frame to determine whether the second image frame and the corresponding first image frame are the same or different;
a marking device for, if the second image frame is determined to be different from its corresponding first image frame, making a mark at the position of the second image frame in the second image and appending the mark to the second image; and
an output device for outputting the second image.
CN201110064527.7A 2011-03-17 2011-03-17 Imaging processing method and device Expired - Fee Related CN102682308B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201110064527.7A CN102682308B (en) 2011-03-17 2011-03-17 Imaging processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201110064527.7A CN102682308B (en) 2011-03-17 2011-03-17 Imaging processing method and device

Publications (2)

Publication Number Publication Date
CN102682308A CN102682308A (en) 2012-09-19
CN102682308B true CN102682308B (en) 2014-05-28

Family

ID=46814203

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201110064527.7A Expired - Fee Related CN102682308B (en) 2011-03-17 2011-03-17 Imaging processing method and device

Country Status (1)

Country Link
CN (1) CN102682308B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107123029A (en) * 2017-04-28 2017-09-01 深圳前海弘稼科技有限公司 The method and system of fruits and vegetables is specified in purchase
CN109766837A (en) * 2019-01-11 2019-05-17 广州人资选互联网科技有限公司 A kind of employee information input system
CN111177470B (en) * 2019-12-30 2024-04-30 深圳Tcl新技术有限公司 Video processing method, video searching method and terminal equipment

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101763429A (en) * 2010-01-14 2010-06-30 中山大学 Image retrieval method based on color and shape features

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH1015150A (en) * 1996-06-28 1998-01-20 Sanyo Electric Co Ltd Recognition device for pieces on japanese chess board

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101763429A (en) * 2010-01-14 2010-06-30 中山大学 Image retrieval method based on color and shape features

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
JP特开平10-15150A 1998.01.20

Also Published As

Publication number Publication date
CN102682308A (en) 2012-09-19

Similar Documents

Publication Publication Date Title
CN110766014B (en) Bill information positioning method, system and computer readable storage medium
CN104112128B (en) Digital image processing system and method applied to bill image character recognition
EP2897082B1 (en) Methods and systems for improved license plate signature matching
US6839466B2 (en) Detecting overlapping images in an automatic image segmentation device with the presence of severe bleeding
US6771813B1 (en) Image processing apparatus and pattern extraction apparatus
US7627148B2 (en) Image data processing apparatus and method, and image data processing program
US6778703B1 (en) Form recognition using reference areas
JP4928310B2 (en) License plate recognition device, control method thereof, computer program
JP6139396B2 (en) Method and program for compressing binary image representing document
CN103034848B (en) A kind of recognition methods of form types
Khotanzad et al. Contour line and geographic feature extraction from USGS color topographical paper maps
JP6080259B2 (en) Character cutting device and character cutting method
CN108596166A (en) A kind of container number identification method based on convolutional neural networks classification
CN107093172A (en) character detecting method and system
KR19990072314A (en) Color image processing apparatus and pattern extracting apparatus
US6704456B1 (en) Automatic image segmentation in the presence of severe background bleeding
JP2002133426A (en) Ruled line extracting device for extracting ruled line from multiple image
CN110598566A (en) Image processing method, device, terminal and computer readable storage medium
Azad et al. A novel and robust method for automatic license plate recognition system based on pattern recognition
CN113158895A (en) Bill identification method and device, electronic equipment and storage medium
JP4149464B2 (en) Image processing device
CN111126266B (en) Text processing method, text processing system, equipment and medium
CN102682308B (en) Imaging processing method and device
JP4275866B2 (en) Apparatus and method for extracting character string pattern from color image
JP5929282B2 (en) Image processing apparatus and image processing program

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20140528

Termination date: 20170317