CN100527164C - Mining method for low-level image and image mining apparatus employing the same - Google Patents


Info

Publication number
CN100527164C
CN100527164C · CNB2006100543338A · CN200610054333A
Authority
CN
China
Prior art keywords
image
gray
pixel
gradually flattening
dig
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CNB2006100543338A
Other languages
Chinese (zh)
Other versions
CN1866295A (en)
Inventor
谢正祥
刘玉红
李虹
王志芳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Medical University
Original Assignee
Chongqing Medical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Medical University filed Critical Chongqing Medical University
Priority to CNB2006100543338A priority Critical patent/CN100527164C/en
Publication of CN1866295A publication Critical patent/CN1866295A/en
Application granted granted Critical
Publication of CN100527164C publication Critical patent/CN100527164C/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Processing (AREA)

Abstract

The disclosed mining method for low-level images comprises: mining the gray/color information of the target by the gray/color-spectrum gradual-flattening method to determine the mining parameters, then obtaining the result image with the Zadeh-X transform. In the corresponding apparatus, a central processor is connected to an image-information detection device, an image-information mining device, a man-machine dialogue device, and an external image acquisition device. The invention can mine hidden images with a resolution of a single gray level.

Description

Mining method for low-level images and image mining apparatus employing the same
Technical field
The invention belongs to the technical field of image processing, and specifically relates to a mining method for low-level images and an image mining apparatus employing the method, i.e. a method and apparatus for mining image information that is buried under a strong background and invisible to human vision.
Background technology
The gray-level resolution limit of human vision is a difference of about 4~5 levels; image information below this limit cannot be resolved by the human eye at all. This resolution limit can therefore be used to hide a target image; conversely, a target image hidden or buried by a strong background can be mined in order to recover it. Such a method can be used for the encryption, decryption, and secure transmission of text and images, or for mining image information captured under adverse conditions, e.g. information about a car fleeing after a traffic accident that is hidden in the surveillance images of a financial institution; this information may be the outline of the car, its license plate or a fabric pattern, or even a DNA image. In conventional image processing, the technique usually adopted to mine low-level image information buried by a strong background and invisible to human vision is the gray-level histogram equalization method. Its concrete practice is: the 256 gray levels 0 to 255 of an image are divided into several segments, e.g. 16 or 32; the number of pixels of the image whose gray level falls into each segment is obtained, and a histogram is drawn (as shown in Figure 2) to observe the statistical distribution of the gray levels. The formula used by the equalization algorithm is TE_i = P/q, where the subscript i denotes the segment number (i = 1, 2, ..., q), P is the total number of pixels of the image, and TE_i is the pixel count of segment i; that is, every segment holds the same pixel count. Such an equalized histogram contains no important information. Its shortcoming is low resolution: a low-level image differing from the background by less than 4~5 gray levels can be neither detected nor mined, so its application is very limited.
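For illustration only (this sketch is not part of the patent; the pixel data and segment count are hypothetical), the coarse segment histogram underlying the conventional method can be written as follows; a target only two gray levels away from the background leaves no separate trace in it:

```python
def segment_histogram(pixels, q=16):
    """Bin the 256 gray levels into q equal segments and count the
    pixels per segment, as in the conventional histogram practice."""
    seg_width = 256 // q
    counts = [0] * q
    for v in pixels:
        counts[v // seg_width] += 1
    return counts

# A white field (gray 255) hiding a faint 10-pixel target at gray 253:
pixels = [255] * 9990 + [253] * 10
h = segment_histogram(pixels, q=16)
print(h[15])  # 10000 -- levels 253 and 255 fall into the same segment
```

Because both gray levels land in the same segment, the coarse histogram cannot separate the target from the background, which is exactly the limitation the invention addresses.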
Summary of the invention
The purpose of the invention is to provide a mining method for low-level images and an image mining apparatus employing the method, giving it a resolution of a single gray level.
To achieve the above purpose, the mining method for low-level images of the present invention comprises the following key steps:
(1) Step 1: mine the gray information of the image to be mined with the gray-spectrum gradual-flattening method, and produce the flattened gray spectrum of the image to be mined. The concrete method is as follows:
a. Capture the image to be mined with the image acquisition mechanism, using a digital photographic device or an image scanner;
b. Read the file of the image to be mined into a computer, and obtain the three red, green, and blue chromatic values R(x,y), G(x,y), B(x,y) of each pixel through the central processor mechanism; obtaining R(x,y), G(x,y), B(x,y) can be realized by the DELPHI graphics mechanism;
c. From the three chromatic values R(x,y), G(x,y), B(x,y) of each pixel, carry out the gray conversion of the image with a color/gray conversion method through the central processor mechanism, obtaining the gray value OZ(x,y) of each pixel of the image to be mined;
d. After the gray value OZ(x,y) of each pixel is obtained, count through the central processor mechanism the pixels at each gray level i, yielding the pixel count OZ_i of every gray level i and the total pixel count ΣOZ_i of the image to be mined, and generate the original gray spectrum of the image to be mined;
e. From the original gray spectrum, apply the gradual-flattening method to produce the flattened gray spectrum, and identify through the image-information detection mechanism whether the image to be mined contains image information to be mined.
The gradual-flattening method draws a new group of gray information, the flattened pixel count TE_i of each gray level i, from the following equation, which defines the flattened gray spectrum:
Gradual-flattening method: TE_i = (OZ_i)^(1/m) · ΣTE_i / Σ(OZ_i)^(1/m)
where i = 0, 1, 2, ..., N−1 with N = 256 denotes the gray level, and m ∈ [1, ∞) is the flattening level.
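A minimal sketch of the gradual-flattening equation above (illustrative only; the histogram data are hypothetical), using the total-pixel-count invariance ΣTE_i = ΣOZ_i stated later in the description to fix ΣTE_i:

```python
def flatten_histogram(oz, m):
    """Gradual flattening: TE_i = OZ_i^(1/m) * sum(OZ) / sum(OZ^(1/m)).
    The total pixel count is preserved: sum(TE) == sum(OZ)."""
    total = sum(oz)
    roots = [c ** (1.0 / m) for c in oz]
    return [r * total / sum(roots) for r in roots]

oz = [0] * 256
oz[255], oz[253] = 9990, 10      # white field hiding a 10-pixel target
te = flatten_histogram(oz, m=8)
# The 10-pixel level is raised toward the 9990-pixel level:
print(round(te[253] / te[255], 3))  # 0.422
```

As m grows, the ratio between occupied levels tends to 1, so gray information held by only a few pixels becomes visible in the spectrum; this is how the flattened spectrum reveals the hidden levels.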
(2) Step 2: mining of the target image. The concrete method is as follows:
a. According to the distribution characteristics of the flattened gray spectrum, detect the mining range of the image information through the man-machine dialogue mechanism and determine the two mining parameters: the gray initial value Sita and the gray range Delta;
b. Combining the gray initial value Sita, the gray range Delta, and the gray values OZ(x,y) of the image to be mined, mine the image through the image-information mining mechanism with the Zadeh-X transform method, obtaining a new group of result pixel values T(x,y); the result image is obtained from the information of the result pixel values T(x,y).
There are two color/gray conversion methods:
The first is calculated with the normalized weighted-sum formula:
Gray value OZ(x,y) = R(x,y) × 0.3 + G(x,y) × 0.59 + B(x,y) × 0.11
where (x,y) is the coordinate of each pixel of the image to be mined.
The second is calculated with the equal-power conversion formula:
Gray value OZ(x,y) = R(x,y)/3 + G(x,y)/3 + B(x,y)/3
where (x,y) is the coordinate of each pixel of the image to be mined.
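The two conversion formulas can be sketched directly (illustrative only; the function names are not from the patent):

```python
def gray_weighted(r, g, b):
    """Normalized weighted-sum conversion: 0.3 R + 0.59 G + 0.11 B."""
    return r * 0.3 + g * 0.59 + b * 0.11

def gray_equal(r, g, b):
    """Equal-power conversion: (R + G + B) / 3."""
    return r / 3 + g / 3 + b / 3

print(gray_equal(90, 90, 90))  # 90.0
```

The weighted sum mimics the eye's sensitivity to the three channels, while the equal-power form keeps small single-channel steps from being scaled down, which is why the description calls it the finer-precision variant.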
The original gray spectrum of the image to be mined is rendered as a coordinate diagram. Its X axis is the gray level i of the image to be mined, with i distributed over the interval [0, 255]; its Y axis is the pixel count OZ_i of each gray level i, with OZ_i distributed over the interval [0, ΣOZ_i]. The original gray spectrum thus shows, for each gray level i, how many pixels the image to be mined contains at that level. Because the gray levels range over [0, 255], 256 levels in all, the distribution interval of i is [0, 255]; the magnitude of the total pixel count ΣOZ_i depends on the size of the image to be mined.
Together with the total-pixel-count invariance principle, a new group of gray information is drawn: the flattened pixel counts TE_i of the flattened gray spectrum.
The gradual-flattening method and the total-pixel-count invariance principle form a system of equations:
Gradual-flattening method: TE_i = (OZ_i)^(1/m) · ΣTE_i / Σ(OZ_i)^(1/m)
Total-pixel-count invariance principle: ΣTE_i = ΣOZ_i
where i = 0, 1, 2, ..., N−1 with N = 256 denotes the gray level, and m ∈ [1, ∞) is the flattening level.
The flattened gray spectrum is rendered as a coordinate diagram. Its X axis is the gray level i, distributed over [0, 255]; its Y axis is the flattened pixel count TE_i of each gray level i, distributed over [0, ΣOZ_i]; in actual plotting the Y axis is normalized.
The flattened gray spectrum is generated by varying the original gray spectrum. The basis of the variation is the gradual-flattening method TE_i = (OZ_i)^(1/m) · ΣTE_i / Σ(OZ_i)^(1/m), and the total-pixel-count invariance principle ΣTE_i = ΣOZ_i must be followed, otherwise the image information is distorted.
The concrete variation is: with the gray levels i unchanged, the pixel count of each gray level i changes from OZ_i to TE_i. In the power function y = (OZ_i)^(1/m), for OZ_i greater than 1, the larger m is, the smaller y is; as m tends to infinity, y tends to 1. Gray information occupied by only a few pixels, which the naked eye cannot see in the original gray spectrum, is thereby highlighted. Different flattening levels m generate different flattened gray spectra, with different degrees of highlighting of the few-pixel gray information; as m tends to infinity, the flattened spectrum varies the pixel counts of all gray levels toward the same value.
The result of the variation: the naked eye can identify the range and the concrete gray levels i contained in the image to be mined, shown as the concrete distribution of the pixel counts TE_i of the gray levels i in the flattened gray spectrum.
The two mining parameters, the gray initial value Sita and the gray range Delta, are determined from the distribution of TE_i in the flattened gray spectrum:
The gray-level resolution limit of human vision is a difference of about 4~5 levels, whereas the gray-spectrum gradual-flattening method can present information differing by a single gray level, including image information flooded by a strong background, gray information occupied by only a few pixels, and image information that has been hidden. The flattened gray spectrum therefore identifies whether the image to be mined contains image information to be mined, together with the mining range of that information: the gray initial value Sita and the gray range Delta.
In the concrete mining of the image to be mined, the gray initial value Sita and the gray range Delta are then introduced into the Zadeh-X transform.
The Zadeh-X transform method is calculated with the following formula:
T(x,y) = K[OZ(x,y) − Sita]/Delta
where: the range of OZ(x,y) is [0, 255];
the range of T(x,y) is [0, 255];
the range of Sita is [0, 255];
the range of Delta is [1, 255];
the range of K is [1, 255];
and (x,y) is the coordinate of each pixel of the image.
Principle of the Zadeh-X transform: combining the gray initial value Sita and the gray range Delta, the gray value OZ(x,y) of each pixel of the image to be mined is re-assigned, converting OZ(x,y) into T(x,y).
Gray values OZ(x,y) outside the mining range are converted as follows:
before re-assignment, when the gray value OZ(x,y) is less than Sita, after re-assignment T(x,y) = 0;
before re-assignment, when the gray value OZ(x,y) is greater than Sita + Delta, after re-assignment T(x,y) = 255. Gray values within the range are converted into the result pixel values T(x,y) by the formula above; K is the highest result gray value. With K = 255 the result image has maximum contrast, and with Delta = 1 the highest resolution is obtained.
A result image is thereby obtained.
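The transform and its out-of-range clamping can be sketched as follows (illustrative only; the sample values are hypothetical):

```python
def zadeh_x(oz, sita, delta, k=255):
    """Zadeh-X transform: stretch the window [sita, sita + delta] to
    [0, k]; clamp gray values below the window to 0 and above it to 255."""
    if oz < sita:
        return 0
    if oz > sita + delta:
        return 255
    return k * (oz - sita) / delta

# Mining the window [253, 255] (sita=253, delta=2, k=255):
print(zadeh_x(253, 253, 2), zadeh_x(254, 253, 2), zadeh_x(255, 253, 2))
# 0.0 127.5 255.0
```

A two-level difference invisible to the eye is thus stretched to the full black-to-white range of the result image.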
A mining method for low-level images, the key of which is that it comprises the following steps:
(1) Step 1: mine the chromatic information of the image to be mined with the chromatic-spectrum gradual-flattening method, and produce the flattened chromatic spectrum of the image to be mined. The concrete method is as follows:
a. Capture the image to be mined with the image acquisition mechanism;
b. Read the file of the image to be mined into a computer, and obtain the three red, green, and blue chromatic values R(x,y), G(x,y), B(x,y) of each pixel through the central processor mechanism;
c. From the three chromatic values R(x,y), G(x,y), B(x,y) of each pixel, count through the central processor mechanism the pixel counts OZ_i of the chromatic values R_i(x,y), G_i(x,y), B_i(x,y) at each chromatic level i;
d. From the original chromatic spectrum, apply the gradual-flattening method to produce the flattened chromatic spectrum, and identify through the image-information detection mechanism whether the image to be mined contains image information to be mined.
The gradual-flattening method draws a new group of spectral information, the flattened pixel count TE_i of each level i, from the following equation, which defines the flattened spectrum:
Gradual-flattening method: TE_i = (OZ_i)^(1/m) · ΣTE_i / Σ(OZ_i)^(1/m)
where i = 0, 1, 2, ..., N−1 with N = 256 denotes the level, and m ∈ [1, ∞) is the flattening level.
(2) Step 2: mining of the target image. The concrete method is as follows:
a. According to the distribution characteristics of the flattened chromatic spectrum, detect the mining range of the image information and determine through the man-machine dialogue mechanism the two mining parameters: the chromatic initial value Sita and the chromatic range Delta;
b. Combining the chromatic initial value Sita, the chromatic range Delta, and the chromatic values R(x,y), G(x,y), B(x,y) of the image to be mined, mine the image through the image-information mining mechanism with the Zadeh-X transform method, obtaining a new group of result pixel values T_R(x,y), T_G(x,y), T_B(x,y); the result image is obtained from the information of T_R(x,y), T_G(x,y), T_B(x,y).
The central processor mechanism is connected with the image-information detection mechanism, the image-information mining mechanism, and the man-machine dialogue mechanism respectively.
A mining apparatus for low-level images, the key of which is that:
It comprises a central processor mechanism, an image-information detection mechanism, a man-machine dialogue mechanism, and an image-information mining mechanism, wherein the central processor mechanism is connected with the image-information detection mechanism, the image-information mining mechanism, and the man-machine dialogue mechanism respectively, and is also connected with an external image acquisition mechanism.
The central processor mechanism receives the image to be mined captured by the image acquisition mechanism; obtains the three red, green, and blue chromatic values R(x,y), G(x,y), B(x,y) of each pixel; then, from the three chromatic values of each pixel, carries out the gray conversion of the image with a color/gray conversion method, obtaining the gray value OZ(x,y) of each pixel of the image to be mined, which it sends to the image-information detection mechanism.
After the image-information detection mechanism obtains the gray value OZ(x,y) of each pixel of the image to be mined, it counts the pixels at each gray level i, yielding the pixel count OZ_i of every gray level i and the total pixel count ΣOZ_i of the image to be mined; it generates the original gray spectrum of the image to be mined, applies the gradual-flattening method to it to produce the flattened gray spectrum, and sends the result to the man-machine dialogue mechanism through the central processor mechanism.
The man-machine dialogue mechanism uses the information provided by the flattened gray spectrum to identify whether the image to be mined contains image information to be mined, and determines the two mining parameters: the gray initial value Sita and the gray range Delta.
The gradual-flattening method draws a new group of gray information, the flattened pixel count TE_i of each gray level i, from the following equation, which defines the flattened gray spectrum:
Gradual-flattening method: TE_i = (OZ_i)^(1/m) · ΣTE_i / Σ(OZ_i)^(1/m)
where i = 0, 1, 2, ..., N−1 with N = 256 denotes the gray level, and m ∈ [1, ∞) is the flattening level.
The image-information mining mechanism, according to the gray initial value Sita, the gray range Delta, and the gray values OZ(x,y) of the image to be mined, mines the image with the Zadeh-X transform method, obtaining a new group of result pixel values T(x,y). In conjunction with the man-machine dialogue mechanism it tests several groups of Sita and Delta, optimizes the best gray initial value Sita and gray range Delta, obtains the optimal result pixel values T(x,y), and from their information obtains the optimal result image.
The remarkable effect of the invention is: it has a resolution of a single gray level, and can mine image information, invisible to human vision, that has been buried by a strong background. It is an important tool for discovering and mining image information buried by a strong background; the method can be used for the encryption and decryption of text and images, high-density transmission, and mining image information captured under adverse conditions.
Description of drawings
Figure 1: a white box of gray level 255, with a pentagram of gray level 253;
Figure 2: the original gray spectrum generated from Figure 1;
Figure 3: the flattened gray spectrum generated by varying Figure 2;
Figure 4: the result image after image mining is applied to Figure 1;
Figure 5: a black box of gray level 0, with a pentagram of gray level 1;
Figure 6: the original gray spectrum generated from Figure 5;
Figure 7: the result image after image mining is applied to Figure 5;
Figure 8: the workflow diagram of the invention;
Figure 9: the connection block diagram of the mining apparatus for low-level images.
Embodiment
Below, the invention is described in further detail in conjunction with the drawings and specific embodiments.
As shown in Figure 8:
Embodiment 1: a white box of gray level 255 is drawn in the image to be mined, and a pentagram of gray level 253 is drawn in its middle. What is plainly seen is a patch of white; human vision cannot make out the pentagram present in this white image. The steps of the mining method of the invention are:
As shown in Figure 1,
(1) Step 1: mine the gray information of the image to be mined with the gray-spectrum gradual-flattening method, and produce the flattened gray spectrum of the image to be mined. The concrete method is as follows:
a. Capture the image to be mined with the image acquisition mechanism, using a photographic device or an image scanner;
b. Read the file of the image to be mined into a computer, generating a digital-format file, and obtain the three red, green, and blue chromatic values R(x,y), G(x,y), B(x,y) of each pixel through the central processor mechanism 1; obtaining R(x,y), G(x,y), B(x,y) with the DELPHI graphics mechanism is prior art;
As shown in Figure 2:
c. From the three chromatic values R(x,y), G(x,y), B(x,y) of each pixel, carry out the gray conversion of the image with a color/gray conversion method through the central processor mechanism 1, obtaining the gray value OZ(x,y) of each pixel of the image to be mined; at this point the gray value OZ(x,y) takes only the two values 255 and 253.
d. After the gray value OZ(x,y) of each pixel is obtained, count through the central processor mechanism 1 the pixels at each gray level i, yielding the pixel count OZ_i of every gray level i and the total pixel count ΣOZ_i of the image to be mined, and generate the original gray spectrum of the image to be mined; the pixel count OZ_255 of gray value OZ(x,y) = 255 and the pixel count OZ_253 of gray value OZ(x,y) = 253 are counted respectively.
As shown in Figure 3:
e. From the original gray spectrum, apply the gradual-flattening method to produce the flattened gray spectrum, and identify through the image-information detection mechanism 2 whether the image to be mined contains image information to be mined.
The gradual-flattening method draws a new group of gray information, the flattened pixel count TE_i of each gray level i, from the following equation, which defines the flattened gray spectrum:
Gradual-flattening method: TE_i = (OZ_i)^(1/m) · ΣTE_i / Σ(OZ_i)^(1/m)
where i = 0, 1, 2, ..., N−1 with N = 256 denotes the gray level, and m ∈ [1, ∞) is the flattening level.
(2) Step 2: mining of the target image. The concrete method is as follows:
a. According to the distribution characteristics of the flattened gray spectrum, detect the mining range of the image information through the man-machine dialogue mechanism 3, and determine the two mining parameters: the gray initial value Sita and the gray range Delta;
The mining range at this moment is [253, 255].
b. Having determined the mining range of the image information, combine the gray initial value Sita, the gray range Delta, and the gray values OZ(x,y) of the image to be mined; mine the image through the image-information mining mechanism 4 with the Zadeh-X transform method, obtaining a new group of result pixel values T(x,y); the result image is obtained from the information of T(x,y).
There are two color/gray conversion methods:
The first has coarser precision:
the color/gray conversion is calculated with the normalized weighted-sum formula:
Gray value OZ(x,y) = R(x,y) × 0.3 + G(x,y) × 0.59 + B(x,y) × 0.11
where (x,y) is the coordinate of each pixel of the image to be mined.
The second has finer precision:
the color/gray conversion is calculated with the equal-power conversion formula:
Gray value OZ(x,y) = R(x,y)/3 + G(x,y)/3 + B(x,y)/3
where (x,y) is the coordinate of each pixel of the image to be mined.
The original gray spectrum of the image to be mined is rendered as a coordinate diagram. Its X axis is the gray level i of the image to be mined, with i distributed over the interval [0, 255]; its Y axis is the pixel count OZ_i of each gray level i, with OZ_i distributed over the interval [0, ΣOZ_i]. The original gray spectrum thus shows, for each gray level i, how many pixels the image to be mined contains at that level. Because the gray levels range over [0, 255], 256 levels in all, the distribution interval of i is [0, 255]; the magnitude of the total pixel count ΣOZ_i depends on the concrete image to be mined.
Together with the total-pixel-count invariance principle, a new group of gray information is drawn: the flattened pixel counts TE_i of the flattened gray spectrum.
The gradual-flattening method and the total-pixel-count invariance principle form a system of equations:
Gradual-flattening method: TE_i = (OZ_i)^(1/m) · ΣTE_i / Σ(OZ_i)^(1/m)
Total-pixel-count invariance principle: ΣTE_i = ΣOZ_i
where i = 0, 1, 2, ..., N−1 with N = 256 denotes the gray level, and m ∈ [1, ∞) is the flattening level.
The flattened gray spectrum is rendered as a coordinate diagram. Its X axis is the gray level i, distributed over [0, 255]; its Y axis is the flattened pixel count TE_i of each gray level i, distributed over [0, ΣOZ_i].
This m-level flattened gray spectrum is generated by varying the original gray spectrum. The basis of the variation is the gradual-flattening method TE_i = (OZ_i)^(1/m) · ΣTE_i / Σ(OZ_i)^(1/m), and the total-pixel-count invariance principle ΣTE_i = ΣOZ_i must be followed, otherwise the image information is distorted.
The concrete variation is: with the gray levels i unchanged, the pixel count of each gray level i changes from OZ_i to TE_i. In the power function y = (OZ_i)^(1/m), for OZ_i greater than 1, the larger m is, the smaller y is; as m tends to infinity, y tends to 1. Gray information occupied by only a few pixels, which the naked eye cannot see in the original gray spectrum, is thereby highlighted. Different flattening levels m generate different flattened gray spectra, with different degrees of highlighting of the few-pixel gray information; as m tends to infinity, the flattened spectrum varies the pixel counts of all gray levels toward the same value.
The result of the variation: the naked eye can identify the distribution and range of the gray levels contained in the image to be mined that human vision could not resolve in the original image, shown as the concrete distribution of each TE_i in the flattened gray spectrum.
The two mining parameters, the gray initial value Sita and the gray range Delta, are determined from the distribution of TE_i in the flattened gray spectrum:
The gray-level resolution limit of human vision is a difference of about 4~5 levels, but the gray-spectrum gradual-flattening method can distinctly display image information that human vision cannot resolve: the gray information of an image flooded by a strong background; the gray information of a deliberately hidden image; and gray information occupied by only a few pixels. From this information, the parameters for mining the low-level image can be determined: the gray initial value Sita and the gray range Delta.
In the concrete mining of the image to be mined, the gray initial value Sita and the gray range Delta are then introduced into the Zadeh-X transform.
The Zadeh-X transform method is calculated with the following formula:
T(x,y) = K[OZ(x,y) − Sita]/Delta
where: the range of OZ(x,y) is [0, 255];
the range of T(x,y) is [0, 255];
the range of Sita is [0, 255];
the range of Delta is [1, 255]; the range of K is [1, 255];
and (x,y) is the coordinate of each pixel of the image.
The central processor mechanism 1 is connected with the image-information detection mechanism 2, the image-information mining mechanism 4, and the man-machine dialogue mechanism 3 respectively.
Principle of the Zadeh-X transform: combining the gray initial value Sita and the gray range Delta, the gray value OZ(x,y) of each pixel of the image to be mined is re-assigned by the calculation of the formula above, converting OZ(x,y) into T(x,y):
Gray values OZ(x,y) outside the mining range are converted as follows:
before re-assignment, when the gray value OZ(x,y) is less than Sita, after re-assignment T(x,y) = 0;
before re-assignment, when the gray value OZ(x,y) is greater than Sita + Delta, after re-assignment T(x,y) = 255.
Gray values OZ(x,y) within the mining range are converted thus: each OZ(x,y) is re-assigned, according to the linear relation with slope K/Delta, to the result pixel value T(x,y); K is the resolving-effect value, and with K = 255 the resolving effect is highest.
At this moment the generated result pixel gray values T(x,y) take only the two values 0 and 255.
As shown in Figure 4, relying on the rendering of the DELPHI graphics mechanism, the result image is obtained directly: a black pentagram on a pure white background.
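Embodiment 1 can be reproduced numerically with a toy sketch (illustrative only; the 1-D pixel list stands in for the white image with its hidden pentagram):

```python
def mine(pixels, sita, delta, k=255):
    """Apply the Zadeh-X transform with clamping to every pixel."""
    out = []
    for oz in pixels:
        if oz < sita:
            out.append(0)
        elif oz > sita + delta:
            out.append(255)
        else:
            out.append(k * (oz - sita) / delta)
    return out

image = [255, 255, 253, 253, 255]       # invisible 2-level difference
print(mine(image, sita=253, delta=2))   # [255.0, 255.0, 0.0, 0.0, 255.0]
```

The gray-253 pentagram pixels map to 0 (black) and the gray-255 background maps to 255 (white), matching the result image of Figure 4.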
Embodiment 2: this embodiment 2 is consistent with the principle of work of embodiment 1, and its difference is:
As accompanying drawing 5, accompanying drawing 6, accompanying drawing 7, shown in,
Waiting to dig the black box that to be painted with a gray scale in the image be the O level, drawing a gray scale in the middle and be 1 grade pentagram, the figure that clearly obtains is a slice black, and human vision can not be told the pentagram that exists in this black image.As shown in Figure 2, adopt method for digging step of the present invention the same with embodiment 1, its difference is:
The gray values OZ(x, y) in the image to be mined take only the two values 0 and 1.
Count respectively the pixel count OZ_1 of gray value OZ(x, y) = 1 and the pixel count OZ_0 of gray value OZ(x, y) = 0.
The mining range in this case is [0, 1], with gray-scale initial value Sita = 0 and gray-scale range Delta = 1;
The generated result pixel values T(x, y) take only the two gray values 0 and 255.
A white pentagram on an all-black background is finally generated.
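Under the same assumptions as before (NumPy arrays; the 4×4 toy image is hypothetical, not the pentagram of the figures), embodiment 2 reduces to a short computation:

```python
import numpy as np

# Image to be mined: gray-level-0 background with a level-1 mark,
# invisible to the eye but present in the data.
img = np.zeros((4, 4), dtype=np.uint8)
img[1:3, 1:3] = 1

# Count the pixels of each gray level, as in step D of the method.
OZ_1 = int(np.count_nonzero(img == 1))
OZ_0 = int(np.count_nonzero(img == 0))

# Mining range [0, 1]: Sita = 0, Delta = 1.  The Zadeh-X transform
# T = 255 * (OZ - 0) / 1 maps level 0 -> 0 and level 1 -> 255.
T = np.clip(255.0 * (img.astype(np.float64) - 0) / 1, 0, 255).astype(np.uint8)
```

The level-1 region becomes pure white on a pure black background, as the embodiment describes.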
Embodiment 3: the mining-method steps of the present invention adopted are the same as in embodiment 1; the difference is: the image to be mined contains three red stripes at chromaticity levels 90, 91 and 93 respectively, and human vision cannot distinguish these stripes.
A mining method for a bottom-layer image comprises the following steps:
(1) Step 1: mine the chrominance information of the image to be mined with the chromaticity-spectrum gradually-flattening method, and produce the gradually-flattened chromaticity spectrum of the image to be mined. The concrete method is as follows:
A. Capture the image to be mined with the image-recognition mechanism;
B. Read the file of the image to be mined into the computer, and obtain, via the central processor mechanism 1, the three red, green and blue chromaticity values R(x, y), G(x, y), B(x, y) of each pixel;
The chromaticity values R(x, y), G(x, y), B(x, y) in the image to be mined take only the three values (255, 90, 90), (255, 91, 91) and (255, 93, 93).
C. Then, from the three red, green and blue chromaticity values R(x, y), G(x, y), B(x, y) of each pixel, the central processor mechanism 1 counts, for each chromaticity level i, the pixel count OZ_i of the chromaticity values R_i(x, y), G_i(x, y), B_i(x, y);
Count respectively the pixel count OZ_90 of chromaticity value (255, 90, 90), the pixel count OZ_91 of chromaticity value (255, 91, 91) and the pixel count OZ_93 of chromaticity value (255, 93, 93).
D. Based on the described original chromaticity spectrum, apply the gradually-flattening method to produce the gradually-flattened chromaticity spectrum, and identify, via the image-information detection mechanism 2, whether the image to be mined contains image information to be mined;
The described gradually-flattening method adopts the following equation to derive a new set of spectrum information, the pixel count TE_i of the i-th level in the gradually-flattened chromaticity spectrum:
TE_i = (OZ_i)^(1/m) × ΣTE_i / Σ(OZ_i)^(1/m)
where i = 0, 1, 2, …, N−1 (N = 256) denotes the level and m ∈ [1, ∞) is the gradually-flattening level.
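The gradually-flattening equation can be sketched in Python. This is a minimal illustration under the total-pixel-count invariance ΣTE_i = ΣOZ_i stated elsewhere in the text; the 256-bin histogram below is hypothetical:

```python
import numpy as np

def gradually_flatten(OZ, m):
    """TE_i = (OZ_i)**(1/m) * sum(TE_i) / sum((OZ_i)**(1/m)).

    The total pixel count is invariant (sum(TE_i) == sum(OZ_i)), so
    tiny histogram peaks are lifted relative to dominant ones as the
    flattening level m grows, making hidden levels visible in the plot.
    """
    OZ = np.asarray(OZ, dtype=np.float64)
    root = OZ ** (1.0 / m)
    return root * OZ.sum() / root.sum()

# Hypothetical 256-bin gray spectrum: one dominant level, one tiny level.
OZ = np.zeros(256)
OZ[254], OZ[253] = 9999.0, 1.0
TE = gradually_flatten(OZ, m=4)
# The total count is preserved while the tiny peak at level 253 grows.
```

Plotting TE instead of OZ is what lets the dialogue step spot the narrow mining range.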
(2) Step 2: mining of the target image. The concrete method is as follows:
A. According to the distribution characteristics of the gradually-flattened chromaticity spectrum, detect the mining range of the image information and determine, via the man-machine dialogue mechanism 3, the two mining parameters: chromaticity initial value Sita and chromaticity range Delta;
The mining range in this case is [90, 93], with chromaticity initial value Sita = 90 and chromaticity range Delta = 93 − 90 = 3;
B. Combining the chromaticity initial value Sita, the chromaticity range Delta and the chromaticity values R(x, y), G(x, y), B(x, y) in the image to be mined, mine the image to be mined via the image-information mining mechanism (4) with the Zadeh-X transform method, obtain a new set of result pixel values T_R(x, y), T_G(x, y), T_B(x, y), and obtain the result image from the result pixel values T_R(x, y), T_G(x, y), T_B(x, y).
The generated result pixel values T(x, y) take the three values (255, 0, 0), (255, 83, 83) and (255, 255, 255).
Three color stripes, pure red, light red and pure white, are finally generated.
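A per-channel sketch of the color mining in step 2 (Python; applying the Zadeh-X transform to R, G and B independently is one plausible reading of the text, and the three-row stripe image is hypothetical):

```python
import numpy as np

def zadeh_x(channel, Sita, Delta, K=255):
    # T = K * (OZ - Sita) / Delta, clipped to [0, 255] outside the range.
    T = K * (channel.astype(np.float64) - Sita) / Delta
    return np.clip(T, 0, 255).astype(np.uint8)

# Three hypothetical red stripes at chromaticity levels 90, 91 and 93.
img = np.zeros((3, 3, 3), dtype=np.uint8)
img[0, :] = (255, 90, 90)
img[1, :] = (255, 91, 91)
img[2, :] = (255, 93, 93)

Sita, Delta = 90, 3  # mining range [90, 93]
T = np.stack([zadeh_x(img[..., c], Sita, Delta) for c in range(3)], axis=-1)
# Row 0 -> (255, 0, 0) and row 2 -> (255, 255, 255): the stripes,
# indistinguishable in the input, become clearly distinct colors.
```

With K = 255 this sketch maps the middle stripe to (255, 85, 85); the slightly different middle value reported in the text would follow from a slightly different K.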
Embodiment 4: a mining apparatus for a bottom-layer image.
As shown in Figure 8 and Figure 9, it comprises a central processor mechanism 1, an image-information detection mechanism 2, a man-machine dialogue mechanism 3 and an image-information mining mechanism 4, wherein the central processor mechanism 1 is connected with the image-information detection mechanism 2, the image-information mining mechanism 4 and the man-machine dialogue mechanism 3 respectively, and the central processor mechanism 1 is also connected with an external image-recognition mechanism.
The central processor mechanism 1 accepts the image to be mined captured by the image-recognition mechanism; obtains the three red, green and blue chromaticity values R(x, y), G(x, y), B(x, y) of each pixel; then, from the three red, green and blue chromaticity values R(x, y), G(x, y), B(x, y) of each pixel, performs the gray-scale conversion of the image with the color/gray conversion method, obtains the gray value OZ(x, y) of each pixel in the image to be mined, and sends it to the image-information detection mechanism 2.
After obtaining the gray value OZ(x, y) of each pixel in the image to be mined, the image-information detection mechanism 2 counts the number of gray values OZ(x, y) at each gray level i, derives the pixel count OZ_i of each gray level i, counts the total pixel number ΣOZ_i of the image to be mined, generates the original gray spectrum of the image to be mined and, based on that original gray spectrum, applies the gradually-flattening method under the invariance principle of the total pixel number ΣOZ_i to produce the gradually-flattened gray spectrum, which is sent to the man-machine dialogue mechanism 3 via the central processor mechanism.
The man-machine dialogue mechanism 3: from the information provided by the gradually-flattened gray spectrum produced by the gradually-flattening method, identifies whether the image to be mined contains image information to be mined and determines the two mining parameters: the gray-scale initial value Sita and the gray-scale range Delta.
The described gradually-flattening method adopts the following equation to derive a new set of gray-spectrum information, the pixel count TE_i of the i-th gray level in the gradually-flattened gray spectrum:
TE_i = (OZ_i)^(1/m) × ΣTE_i / Σ(OZ_i)^(1/m)
where i = 0, 1, 2, …, N−1 (N = 256) denotes the gray level and m ∈ [1, ∞) is the gradually-flattening level.
The image-information mining mechanism 4: according to the gray-scale initial value Sita, the gray-scale range Delta and the gray values OZ(x, y) in the image to be mined, mines the image to be mined with the Zadeh-X transform method to obtain a new set of result pixel values T(x, y); in combination with the man-machine dialogue mechanism 3, it tests multiple groups of gray-scale initial values Sita and gray-scale ranges Delta, optimizes the best gray-scale initial value Sita and gray-scale range Delta, obtains the optimal result pixel values T(x, y), and obtains the optimal result image from the optimal result pixel values T(x, y).
The working principle of this embodiment 4 is consistent with embodiments 1, 2 and 3.
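The multi-group parameter search performed by mechanism 4 can be sketched as a small grid search. The scoring criterion below (largest gray-value spread in the result image) is a hypothetical choice for illustration; the patent leaves the optimization measure to the dialogue step, and all names and data here are assumptions:

```python
import numpy as np

def zadeh_x(OZ, Sita, Delta, K=255):
    # Zadeh-X transform: linear stretch inside [Sita, Sita + Delta], clipped.
    T = K * (OZ.astype(np.float64) - Sita) / Delta
    return np.clip(T, 0, 255)

def best_parameters(img, candidates):
    """Try several (Sita, Delta) groups and keep the pair whose result
    image has the largest gray-value spread -- one plausible stand-in
    for the 'best Sita and Delta' optimization described in the text."""
    return max(candidates, key=lambda p: zadeh_x(img, *p).std())

img = np.full((8, 8), 120, dtype=np.uint8)
img[2:6, 2:6] = 121                      # hidden low-contrast target
pairs = [(0, 255), (100, 50), (120, 1)]  # candidate (Sita, Delta) groups
Sita, Delta = best_parameters(img, pairs)
# The narrowest range that brackets the hidden levels wins.
```

The pair (120, 1) maps the two hidden levels to 0 and 255, giving the largest spread.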

Claims (9)

1. A mining method for a bottom-layer image, characterized by comprising the following steps:
(1) Step 1: mine the gray-scale information of the image to be mined with the gray-spectrum gradually-flattening method, and produce the gradually-flattened gray spectrum of the image to be mined. The concrete method is as follows:
A. Capture the image to be mined with an image-recognition mechanism;
B. Read the file of the image to be mined into a computer, and obtain, via a central processor mechanism (1), the three red, green and blue chromaticity values R(x, y), G(x, y), B(x, y) of each pixel;
C. Then, from the three red, green and blue chromaticity values R(x, y), G(x, y), B(x, y) of each pixel, perform the gray-scale conversion of the image via the central processor mechanism (1) with a color/gray conversion method, obtaining the gray value OZ(x, y) of each pixel in the image to be mined;
D. After the gray value OZ(x, y) of each pixel in the image to be mined is obtained, the central processor mechanism (1) counts the number of gray values OZ(x, y) at each gray level i, derives the pixel count OZ_i of each gray level i, counts the total pixel number ΣOZ_i of the image to be mined, and generates the original gray spectrum of the image to be mined;
E. Based on the original gray spectrum, apply the gradually-flattening method to produce the gradually-flattened gray spectrum, and identify, via an image-information detection mechanism (2), whether the image to be mined contains image information to be mined;
The described gradually-flattening method adopts the following equation to derive a new set of gray-spectrum information, the pixel count TE_i of the i-th gray level in the gradually-flattened gray spectrum:
TE_i = (OZ_i)^(1/m) × ΣTE_i / Σ(OZ_i)^(1/m)
where i = 0, 1, 2, …, N−1 (N = 256) denotes the gray level and m ∈ [1, ∞) is the gradually-flattening level;
(2) Step 2: mining of the target image. The concrete method is as follows:
A. According to the distribution characteristics of the gradually-flattened gray spectrum, detect the mining range of the image information and determine, via a man-machine dialogue mechanism (3), the two mining parameters: the gray-scale initial value Sita and the gray-scale range Delta;
B. According to the gray-scale initial value Sita, the gray-scale range Delta and the gray values OZ(x, y) in the image to be mined, mine the image to be mined via an image-information mining mechanism (4) with the Zadeh-X transform method, obtain a new set of result pixel values T(x, y), and obtain the result image from the result pixel values T(x, y).
2. The mining method for a bottom-layer image according to claim 1, characterized in that:
the described color/gray conversion method adopts the normalized weighted-sum formula:
Gray value OZ(x, y) = R(x, y) × 0.3 + G(x, y) × 0.59 + B(x, y) × 0.11
where (x, y) are the coordinates of each pixel of the image to be mined.
3. The mining method for a bottom-layer image according to claim 1, characterized in that:
the described color/gray conversion method adopts the equal-weight conversion formula:
Gray value OZ(x, y) = R(x, y)/3 + G(x, y)/3 + B(x, y)/3
where (x, y) are the coordinates of each pixel of the image to be mined.
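The two conversion formulas of claims 2 and 3 can be illustrated together (a minimal sketch; the function and array names are assumptions, and the weights 0.3/0.59/0.11 are the standard luma weights the claim cites):

```python
import numpy as np

def to_gray_weighted(R, G, B):
    # Claim 2: normalized weighted sum (luma-style weights).
    return R * 0.3 + G * 0.59 + B * 0.11

def to_gray_equal(R, G, B):
    # Claim 3: equal-weight conversion, each channel contributing a third.
    return R / 3 + G / 3 + B / 3

# A single sample pixel from the embodiment-3 stripes: (255, 90, 90).
R, G, B = np.array([255.0]), np.array([90.0]), np.array([90.0])
gray_w = to_gray_weighted(R, G, B)  # 255*0.3 + 90*0.59 + 90*0.11 = 139.5
gray_e = to_gray_equal(R, G, B)     # (255 + 90 + 90) / 3 = 145.0
```

Both conversions map near-identical colors to near-identical gray values, which is why the subsequent Zadeh-X stretch is needed to separate them.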
4. The mining method for a bottom-layer image according to claim 1, characterized in that:
the described original gray spectrum of the image to be mined is embodied as a coordinate diagram whose X axis is the gray level i of the image to be mined, i being distributed over [0, 255], and whose Y axis is the pixel count OZ_i of each gray level i, OZ_i being distributed over the interval [0, ΣOZ_i].
5. The mining method for a bottom-layer image according to claim 1, characterized in that:
the described two mining parameters, the gray-scale initial value Sita and the gray-scale range Delta, are determined from the distribution of TE_i within the gradually-flattened gray spectrum.
6. The mining method for a bottom-layer image according to claim 1, characterized in that:
the described Zadeh-X transform method adopts the following formula:
T(x, y) = K × [OZ(x, y) − Sita] / Delta
wherein: OZ(x, y) ranges over [0, 255];
T(x, y) ranges over [0, 255];
Sita ranges over [0, 255];
Delta ranges over [0, 255];
K ranges over [1, 255];
and (x, y) are the coordinates of each pixel of the image.
7. A mining method for a bottom-layer image, characterized by comprising the following steps:
(1) Step 1: mine the chrominance information of the image to be mined with the chromaticity-spectrum gradually-flattening method, and produce the gradually-flattened chromaticity spectrum of the image to be mined. The concrete method is as follows:
A. Capture the image to be mined with an image-recognition mechanism;
B. Read the file of the image to be mined into a computer, and obtain, via a central processor mechanism (1), the three red, green and blue chromaticity values R(x, y), G(x, y), B(x, y) of each pixel;
C. Then, from the three red, green and blue chromaticity values R(x, y), G(x, y), B(x, y) of each pixel, the central processor mechanism (1) counts, for each chromaticity level i, the pixel count OZ_i of the chromaticity values R_i(x, y), G_i(x, y), B_i(x, y);
D. Based on the original chromaticity spectrum, apply the gradually-flattening method to produce the gradually-flattened chromaticity spectrum, and identify, via an image-information detection mechanism (2), whether the image to be mined contains image information to be mined;
The described gradually-flattening method adopts the following equation to derive a new set of spectrum information, the pixel count TE_i of the i-th level in the gradually-flattened chromaticity spectrum:
TE_i = (OZ_i)^(1/m) × ΣTE_i / Σ(OZ_i)^(1/m)
where i = 0, 1, 2, …, N−1 (N = 256) denotes the level and m ∈ [1, ∞) is the gradually-flattening level;
(2) Step 2: mining of the target image. The concrete method is as follows:
A. According to the distribution characteristics of the gradually-flattened chromaticity spectrum, detect the mining range of the image information and determine, via a man-machine dialogue mechanism (3), the two mining parameters: chromaticity initial value Sita and chromaticity range Delta;
B. Combining the chromaticity initial value Sita, the chromaticity range Delta and the chromaticity values R(x, y), G(x, y), B(x, y) in the image to be mined, mine the image to be mined via an image-information mining mechanism (4) with the Zadeh-X transform method, obtain a new set of result pixel values T_R(x, y), T_G(x, y), T_B(x, y), and obtain the result image from the result pixel values T_R(x, y), T_G(x, y), T_B(x, y).
8. The mining method for a bottom-layer image according to claim 1 or 7, characterized in that:
the described central processor mechanism (1) is connected with the described image-information detection mechanism (2), the described image-information mining mechanism (4) and the described man-machine dialogue mechanism (3), respectively.
9. A mining apparatus for a bottom-layer image, characterized in that:
it comprises a central processor mechanism (1), an image-information detection mechanism (2), a man-machine dialogue mechanism (3) and an image-information mining mechanism (4), wherein the central processor mechanism (1) is connected with the image-information detection mechanism (2), the image-information mining mechanism (4) and the man-machine dialogue mechanism (3) respectively, and the central processor mechanism (1) is also connected with an external image-recognition mechanism.
The central processor mechanism (1) accepts the image to be mined captured by the image-recognition mechanism; obtains the three red, green and blue chromaticity values R(x, y), G(x, y), B(x, y) of each pixel; then, from the three red, green and blue chromaticity values R(x, y), G(x, y), B(x, y) of each pixel, performs the gray-scale conversion of the image with the color/gray conversion method, obtains the gray value OZ(x, y) of each pixel in the image to be mined, and sends it to the image-information detection mechanism (2);
after obtaining the gray value OZ(x, y) of each pixel in the image to be mined, the image-information detection mechanism (2) counts the number of gray values OZ(x, y) at each gray level i, derives the pixel count OZ_i of each gray level i, counts the total pixel number ΣOZ_i of the image to be mined, generates the original gray spectrum of the image to be mined and, based on that original gray spectrum, applies the gradually-flattening method to produce the gradually-flattened gray spectrum, which is sent to the man-machine dialogue mechanism (3) via the central processor mechanism;
the man-machine dialogue mechanism (3): from the information provided by the gradually-flattened gray spectrum produced by the gradually-flattening method, identifies whether the image to be mined contains image information to be mined and determines the two mining parameters: the gray-scale initial value Sita and the gray-scale range Delta;
the image-information mining mechanism (4): according to the gray-scale initial value Sita, the gray-scale range Delta and the gray values OZ(x, y) in the image to be mined, mines the image to be mined with the Zadeh-X transform method, obtains a new set of result pixel values T(x, y), and obtains the result image from the result pixel values T(x, y);
the described gradually-flattening method adopts the following equation to derive a new set of gray-spectrum information, the pixel count TE_i of the i-th gray level in the gradually-flattened gray spectrum:
TE_i = (OZ_i)^(1/m) × ΣTE_i / Σ(OZ_i)^(1/m)
where i = 0, 1, 2, …, N−1 (N = 256) denotes the gray level and m ∈ [1, ∞) is the gradually-flattening level.
CNB2006100543338A 2006-05-29 2006-05-29 Mining method for low-level image and image mining apparatus employing the same Expired - Fee Related CN100527164C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB2006100543338A CN100527164C (en) 2006-05-29 2006-05-29 Mining method for low-level image and image mining apparatus employing the same


Publications (2)

Publication Number Publication Date
CN1866295A CN1866295A (en) 2006-11-22
CN100527164C true CN100527164C (en) 2009-08-12

Family

ID=37425310

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2006100543338A Expired - Fee Related CN100527164C (en) 2006-05-29 2006-05-29 Mining method for low-level image and image mining apparatus employing the same

Country Status (1)

Country Link
CN (1) CN100527164C (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101441770B (en) * 2008-11-28 2012-03-21 重庆医科大学 Method for excavating optimum image based on information entropy and logarithm contrast weight sum
CN101668226B (en) * 2009-09-11 2011-05-04 重庆医科大学 Method for acquiring color image with best quality
CN108133215B (en) * 2012-04-02 2020-10-20 惠安县崇武镇锐锐海鲜干品店 Processing unit
CN102800061B (en) * 2012-06-26 2016-05-11 重庆医科大学 The quick self-adapted optimization method of digital picture under high illumination

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6021214A (en) * 1993-09-30 2000-02-01 Kla Instruments Corp. Inspection method and apparatus for the inspection of either random or repeating patterns


Also Published As

Publication number Publication date
CN1866295A (en) 2006-11-22

Similar Documents

Publication Publication Date Title
CN101433075A (en) Generating a bitonal image from a scanned colour image
CN100595799C (en) Two-dimensional currency automatic recognition method and system
CN101950407B (en) Method for realizing color image digital watermark for certificate anti-counterfeiting
CN106096610A (en) A kind of file and picture binary coding method based on support vector machine
CN100527164C (en) Mining method for low-level image and image mining apparatus employing the same
CN104992496A (en) Paper money face identification method and apparatus
DE112007001793T5 (en) Method and device for comparing documents by means of a cross-level comparison
CN101051351A (en) Image band parameter two-valued method and device using said method
CN107146258B (en) Image salient region detection method
JPH0957201A (en) Specific color region extracting system and specific color region removing system
Borges et al. Robust and transparent color modulation for text data hiding
CN107437293A (en) A kind of bill anti-counterfeit discrimination method based on bill global characteristics
CN100383822C (en) High-resolution detection method for image gray scale/chromaticity information for base image mining
CN106446885A (en) Paper-based Braille recognition method and system
CN110929562A (en) Answer sheet identification method based on improved Hough transformation
CN115760826B (en) Bearing wear condition diagnosis method based on image processing
CN113947563A (en) Cable process quality dynamic defect detection method based on deep learning
WO2006126348A1 (en) Number recognizing device, and recognition method therefor
CN100392675C (en) Method for hiding and excavating bottom image and device thereby
CN111582115B (en) Financial bill processing method, device, equipment and readable storage medium
CN110335406B (en) Multimedia glasses type portable currency detector
CN110135274B (en) Face recognition-based people flow statistics method
JP2004246618A (en) Method, device, and program for generating image used for collating in pattern recognition and pattern recognition using the image
CN116342644A (en) Intelligent monitoring method and system suitable for coal yard
CN108985307A (en) A kind of Clean water withdraw method and system based on remote sensing image

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20090812

Termination date: 20120529