CN107392912A - Image segmentation method based on pseudo-color coding and DISCOV coding - Google Patents
Image segmentation method based on pseudo-color coding and DISCOV coding
- Publication number
- CN107392912A CN107392912A CN201710613528.XA CN201710613528A CN107392912A CN 107392912 A CN107392912 A CN 107392912A CN 201710613528 A CN201710613528 A CN 201710613528A CN 107392912 A CN107392912 A CN 107392912A
- Authority
- CN
- China
- Prior art keywords
- image
- pseudo
- color coding
- discov
- pixel
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T9/00—Image coding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Processing (AREA)
Abstract
An image segmentation method based on pseudo-color coding and DISCOV coding; the present invention relates to image segmentation. Its purpose is to solve the problems of the prior art, namely low extraction accuracy and slow computation in the presence of noise. The process is: first, acquire image data, convert it to grayscale, and add noise to the grayscale image to obtain a noisy image; second, reject outliers from the noisy image according to the 3σ criterion to obtain an outlier-free image; third, apply pseudo-color coding to the outlier-free image to obtain a pseudo-color image; fourth, feed the R, G, and B channels of the pseudo-color image into DISCOV and compute the RG code of the single-opponent coding for the pseudo-color image; fifth, classify the RG coding coefficients into 0 or 1 with the Otsu method. The invention is applicable to the field of image segmentation.
Description
Technical field
The present invention relates to an image segmentation method.
Background art
Conventional target-region extraction methods, such as fuzzy C-means classification, adjust a threshold according to the coherence of the target interior and the Euclidean distance between target and background in order to extract the target region. In recent years, a series of image segmentation methods based on retinal models has been proposed, aiming to imitate the processing of the human retina so as to handle complex and changing backgrounds. Retinal-model segmentation methods include the pulse-coupled neural network (PCNN), the intersecting cortical model (ICM), and the improved intersecting cortical model (SICM). PCNN is a three-layer oscillating network of five equations with two convolutions, in which the feedback neuron, linking neuron, internal activity, and dynamic threshold adjust continuously so that the network approaches the desired result. ICM simplifies PCNN to a two-layer oscillating network of three equations with a single convolution, influenced only by the state and the threshold, which improves the running speed and practicality of the model. SICM further improves ICM: parameters that had to be set manually in ICM are adjusted adaptively according to the environment, and the iteration is steered in the direction of maximum mutual information (MI). However, the above methods suffer from low extraction accuracy and slow computation in the presence of noise.
Summary of the invention
The purpose of the invention is to solve the problems of the prior art, namely low extraction accuracy and slow computation in the presence of noise, by proposing an image segmentation method based on pseudo-color coding and DISCOV coding.
The detailed process of the image segmentation method based on pseudo-color coding and DISCOV coding is:
Step 1: acquire image data, convert it to grayscale to obtain the grayscale image, and add noise to the grayscale image to obtain a noisy image;
the added noise satisfies the condition that the ratio of the signal power of the image to the noise power is 4;
Step 2: reject outliers from the noisy image of step 1 according to the 3σ criterion to obtain an outlier-free image;
Step 3: apply pseudo-color coding to the outlier-free image to obtain a pseudo-color image;
the pseudo-color image has three channels, R, G, and B;
Step 4: feed the R, G, and B channels of the pseudo-color image into DISCOV and compute the RG code of the single-opponent coding for the pseudo-color image;
DISCOV is the dimensionless shunting color vision model;
in the RG code, the R channel forms the center region and the G channel the surround region;
Step 5: classify the RG coding coefficients into 0 or 1 with the Otsu method;
Otsu is the maximum between-class variance method.
The beneficial effects of the present invention are:
Combining color information with a retinal model, the dimensionless shunting model DISCOV, the invention proposes a pseudo-color DISCOV coding method. By analyzing the added multi-channel color information through DISCOV coding, the method introduces ideas from biological neuroscience: it not only segments the target region but also quantifies, from a biological-neural point of view, the relation between each point and its surroundings. Pseudo-coloring spreads the 256 gray levels over the three-channel RGB color space, which increases the information content of the image. DISCOV coding makes full use of this information by fusing color-contrast information into the retinal model; combined with the maximum between-class variance classification method, it achieves the best segmentation performance in a comprehensive analysis of accuracy, target-region uniformity, target-background contrast, and information transfer through the segmentation, improving both extraction accuracy and computation speed.
For data image a1_img026, the ICM method gives F = 0.4488, UM = 0.8450, GLC = 0.3281, MI = 0.1505, and an overall segmentation quality of 0.0187;
the SICM method gives F = 0.5583, UM = 0.7524, GLC = 0.6885, MI = 0.0689, and segmentation quality 0.0199;
the FCM method gives F = 0.3339, UM = 0.8668, GLC = 0.9114, MI = 0.0114, and segmentation quality 0.0030;
the method of the invention gives F = 0.8418, UM = 0.9186, GLC = 0.8807, MI = 0.3805, and segmentation quality 0.2591.
It follows that, under the same conditions, the method of the invention achieves the highest F, UM, GLC, MI, and segmentation quality.
F is accuracy, UM is region uniformity, GLC is region contrast, and MI is mutual information.
Figs. 2a, 2b, 2c, and 2d show the segmentation of the aircraft data image 'a1_img000' by the four methods. Both ICM and the present method extract the contour of the target region well, but the present method is the least affected by interference.
Figs. 2a-2d and Table 1 together show that the present method extracts the target from the image with high accuracy in the presence of noise.
Brief description of the drawings
Fig. 1a is a schematic of the original image of aircraft data 'a1_img000';
Fig. 1b is a schematic of the image after noise is added to the original 'a1_img000' image;
Fig. 2a is the segmentation of the noisy 'a1_img000' image by the ICM method;
Fig. 2b is the segmentation of the noisy 'a1_img000' image by the SICM method;
Fig. 2c is the segmentation of the noisy 'a1_img000' image by the FCM method;
Fig. 2d is the segmentation of the noisy 'a1_img000' image by the method of the invention;
Fig. 3 is a flow chart of the method of the invention;
Fig. 4 is a schematic of the R, G, and B channels after pseudo-color coding of the invention, taken as three-dimensional multi-channel data. z denotes the code of pixel (x, y) under this coding mode; x′ is the mean of the pixels in a 5×5 window around the selected center, whose central pixel is R(x, y); y′ is the mean of the pixels in a 15×15 window around the selected center, whose central pixel is G(x, y); C is the influence of x′ on y′, with value 1.1; D is the amplitude ratio of x′ to y′, with value 2. R is the center region and G the surround region of the RG model.
Embodiment
Embodiment one: This embodiment is described with reference to Fig. 3. The detailed process of the image segmentation method based on pseudo-color coding and DISCOV coding of this embodiment is:
Step 1: the simulation data used by the invention are the open-source multi-angle aircraft data provided at https://github.com/kishankondaveeti/Synthetic_ISAR_images_of_aircrafts. The specific data used are 'a1_img026.mat', 'a5_img019.mat', and 'a6_img087.mat'. The data are converted to grayscale as input, as in Fig. 1a;
acquire the image data, convert it to grayscale to obtain the grayscale image, and add noise to the grayscale image to obtain a noisy image, as in Fig. 1b;
the added noise satisfies the condition that the ratio of the signal power of the image to the noise power is 4;
Step 2: reject outliers from the noisy image of step 1 according to the 3σ criterion to obtain an outlier-free image;
normalize the outlier-free image by mapping its pixel values from [0, 255] to [0, 1];
Step 3: apply pseudo-color coding to the outlier-free image to obtain a pseudo-color image;
the pseudo-color image has three channels, R, G, and B;
Step 4: feed the R, G, and B channels of the pseudo-color image into DISCOV and compute the RG code of the single-opponent coding for the pseudo-color image;
DISCOV is the DImensionless Shunting COlor Vision model (DISCOV: a neural model for spatial data analysis);
in the RG code, the R channel forms the center region and the G channel the surround region;
Step 5: classify the RG coding coefficients into 0 or 1 with the Otsu (maximum between-class variance) method.
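The pre-processing in steps 1 and 2 above can be sketched in Python as follows. This is a minimal sketch, assuming NumPy; the function names and the random seed are my own, the signal power is taken as the squared mean amplitude (as stated later in the example section), outliers are clipped to the 3σ band (the text does not say whether they are clipped or discarded), and min-max normalization stands in for the [0, 255] → [0, 1] mapping because noisy values may leave that range:

```python
import numpy as np

def add_noise_snr4(img, rng=None):
    """Add white Gaussian noise with signal power / noise power = 4."""
    rng = np.random.default_rng(0) if rng is None else rng
    signal_power = np.mean(img) ** 2          # squared mean amplitude (assumption)
    noise_power = signal_power / 4.0          # power ratio of 4, per the text
    return img + rng.normal(0.0, np.sqrt(noise_power), img.shape)

def reject_outliers_3sigma(img):
    """3-sigma criterion: clip pixels outside mean +/- 3*std (one reading)."""
    mu, sigma = img.mean(), img.std()
    return np.clip(img, mu - 3.0 * sigma, mu + 3.0 * sigma)

def normalize_01(img):
    """Map pixel values onto [0, 1] (min-max; the text divides by 255)."""
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo)

gray = np.linspace(0.0, 255.0, 64).reshape(8, 8)   # stand-in grayscale image
clean = normalize_01(reject_outliers_3sigma(add_noise_snr4(gray)))
```

The output `clean` is the outlier-free, normalized image that step 3 would pseudo-color.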
Embodiment two: This embodiment differs from embodiment one in the pseudo-color coding of the outlier-free image in step 3. The detailed process is:
Pseudo-coloring is a technique that replaces gray pixel values with color: by matching each gray level with a small region of color space, it maps a monochrome image to a color image. The pseudo-color coding based on the pixel-wise transform maps the grayscale outlier-free image to an RGB image through the mapping relations
R = T_R(f), G = T_G(f), B = T_B(f),
where f(x, y) denotes the gray value of pixel (x, y) in the image and T_R, T_G, T_B are the mapping functions between gray level and the red, green, and blue components; they describe how each pixel is mapped from gray level to color. Where the gray levels are densely concentrated, the local color distribution becomes comparatively rich, which increases color contrast. Traditional pseudo-color coding methods include the rainbow code, the hot-metal code, continuous color-transition coding, and coding based on the pixel-wise transform; the coding used in the invention is the pseudo-color coding based on the pixel-wise transform, which computes the code directly from the gray value of the pixel itself and from transforms of that gray value.
The gray value f(x, y) at each pixel (x, y) of the outlier-free image is transformed by pseudo-color coding. Since a grayscale image has 256 gray levels, f(x, y) ranges over [0, 255]. The value f1(x, y) of the pixel itself already lies in [0, 255]; to ensure that the transformed value f2(x, y) = f(x, y)(255 - f(x, y)), whose maximum is 127.5² ≈ 16 256, also stays within [0, 255], it is multiplied by a scale coefficient K = 0.0157 (approximately 4/255). The pseudo-color coding transform is

    R(x, y) = f(x, y) / [255 + 0.0157·f(x, y)·(255 - f(x, y))]
    G(x, y) = [255 - f(x, y)] / [255 + 0.0157·f(x, y)·(255 - f(x, y))]
    B(x, y) = 0.0157·f(x, y)·(255 - f(x, y)) / [255 + 0.0157·f(x, y)·(255 - f(x, y))]

where R(x, y) is the value of the R channel at pixel (x, y) after pseudo-color coding, G(x, y) the value of the G channel, and B(x, y) the value of the B channel; RGB denotes the colors of the red, green, and blue channels.
Other steps and parameters are the same as in embodiment one.
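The pixel-wise pseudo-color transform above can be sketched as follows, assuming NumPy. Note that the three channels sum to 1 at every pixel; this is a property of the formulas rather than something stated in the text:

```python
import numpy as np

K = 0.0157  # scale coefficient from the text (roughly 4/255)

def pseudo_color(f):
    """Pseudo-color a grayscale image f with values in [0, 255]."""
    f = np.asarray(f, dtype=float)
    denom = 255.0 + K * f * (255.0 - f)
    r = f / denom                       # R(x, y)
    g = (255.0 - f) / denom             # G(x, y)
    b = K * f * (255.0 - f) / denom     # B(x, y)
    return r, g, b

f = np.array([[0.0, 64.0], [128.0, 255.0]])
r, g, b = pseudo_color(f)
```

Black (f = 0) maps to pure green and white (f = 255) to pure red, with B peaking at mid-gray.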
Embodiment three: This embodiment differs from embodiments one and two in step 4, where the R, G, and B channels of the pseudo-color image are fed into DISCOV and the RG code of the single-opponent coding is computed for the pseudo-color image. The detailed process is:
The single-opponent coding formula is

    z = (Cx′ - y′) / (Cx′ + Dy′)

After pseudo-color coding of the grayscale image, rich color information that fits the human visual model is obtained. The method proposed here makes full use of this color information in the coding and thereby enables further image understanding and cognition.
The dimensionless shunting model DISCOV converts RGB image elements, through a cascade of retina, single-opponent, and double-opponent cell features, into a representation that the computer can interpret.
The user can code purposefully according to the multi-channel information around the target and the multi-channel information in the target's center region to build the dimensionless shunting model. The method relies mainly on the dimensionless shunting equation, whose quantities are: x′, the center excitation signal, here the mean of the pixels R(x, y), G(x, y), or B(x, y) of a selected channel within a 5×5 block of the region; y′, the surround inhibition signal, the mean of the pixels R(x, y), G(x, y), or B(x, y) of a selected channel within a 15×15 block of the region; and, once the center and surround channels have been chosen, z, the code of the pixel under that coding mode. B is the passive decay rate, C is the influence of the excitation signal x′ on the inhibition signal y′, and D is the amplitude ratio of the excitation signal to the inhibition signal.
In the DISCOV model, the single-opponent form describes well the contour information obtained by contrasting different channels, while the double-opponent form describes texture information well. Experiments show that the single-opponent coding model is the most suitable for analyzing each pixel. The simplified single-opponent coding formula is

    z = (Cx′ - y′) / (Cx′ + Dy′)

where z is the code of pixel (x, y) under the RG coding mode; x′ is the mean of the pixels in a 5×5 window around the selected center, whose central pixel is R(x, y), as in Fig. 4; y′ is the mean of the pixels in a 15×15 window around the selected center, whose central pixel is G(x, y); C is the influence of x′ on y′; and D is the amplitude ratio of x′ to y′.
Other steps and parameters are the same as in embodiment one or two.
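The simplified single-opponent RG coding above can be sketched as follows, assuming NumPy. `box_mean` is a naive windowed mean (the text does not specify border handling, so reflection padding is assumed), and the small epsilon guarding the denominator is an addition of mine:

```python
import numpy as np

def box_mean(img, size):
    """Mean over a size x size neighborhood, edges padded by reflection."""
    pad = size // 2
    p = np.pad(img, pad, mode="reflect")
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = p[i:i + size, j:j + size].mean()
    return out

def rg_code(r, g, C=1.1, D=2.0):
    """z = (C*x' - y') / (C*x' + D*y'), with x' the 5x5 mean of the R
    channel (center) and y' the 15x15 mean of the G channel (surround)."""
    xp = box_mean(r, 5)
    yp = box_mean(g, 15)
    return (C * xp - yp) / (C * xp + D * yp + 1e-12)  # eps guards zero division

z = rg_code(np.ones((20, 20)), np.ones((20, 20)))  # uniform-image demo
```

For a uniform image, x′ = y′ = 1 everywhere, so z = (1.1 - 1)/(1.1 + 2) = 0.1/3.1 at every pixel.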
Embodiment four: This embodiment differs from embodiments one to three in that the value of C is 1.1.
Other steps and parameters are the same as in one of embodiments one to three.
Embodiment five: This embodiment differs from embodiments one to four in that the value of D is 2.
Other steps and parameters are the same as in one of embodiments one to four.
The invention takes the single-opponent code obtained after coding as the feature to analyze, treating the pseudo-color-coded R, G, and B channels as three-dimensional multi-channel data. As in Fig. 4, R is the center region and G the surround region in the RG model. The R, G, and B channels in the figure are, respectively, the R, G, and B images of the three-dimensional RGB data after the color transform. The black pixel in the G image is the pixel to be processed, and the mean of the 15×15 white region around it is the inhibition signal y′; the black pixel in the R image is the position of that pixel in the R image, and the mean of the 5×5 gray region around it is the excitation signal mean x′. The code coefficient of the pixel under the RG coding is z.
Through the single-opponent coding, the code adapts to the distribution of the multi-channel information around each point and at its center, so that the deeper information in the data can be analyzed.
Table 2. Encoding models of the dimensionless shunting model
Image segmentation
Image segmentation is performed on the obtained DISCOV color codes, so that the color information can be used to extract the target region well. The partitioning scheme used here is the Otsu algorithm, i.e. maximum between-class variance segmentation, currently the most common method for grayscale image segmentation. It uses only low-dimensional visual features and, being non-specific, has the advantage of distinguishing targets of various forms. The Otsu algorithm is derived by the method of minimum variance: it adjusts the threshold by repeatedly computing the within-class squared error and the between-class variance of target and background, guaranteeing maximal separation of target and background.
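A standard Otsu (maximum between-class variance) thresholding of the RG coding coefficients might look as follows; the 256-bin histogram granularity is an assumption, not something fixed by the text:

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Threshold maximizing the between-class variance of a 1-D sample."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2.0
    w0 = np.cumsum(p)              # weight of class 0 for each candidate cut
    mu = np.cumsum(p * centers)    # cumulative mean up to each cut
    mu_t = mu[-1]                  # global mean
    w1 = 1.0 - w0
    sigma_b = np.zeros_like(w0)    # between-class variance per cut
    valid = (w0 > 0) & (w1 > 0)
    sigma_b[valid] = (mu_t * w0[valid] - mu[valid]) ** 2 / (w0[valid] * w1[valid])
    return centers[np.argmax(sigma_b)]

def binarize(z):
    """Split RG coding coefficients into 0 / 1 at the Otsu threshold."""
    return (z > otsu_threshold(z.ravel())).astype(int)

labels = binarize(np.array([0.1] * 100 + [0.9] * 100).reshape(10, 20))
```

On the bimodal demo input, the threshold falls between the two modes, so the 0.1 pixels become class 0 and the 0.9 pixels class 1.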
(1) F criterion
Precision and recall are two metrics from the field of classification used to evaluate the quality of a result. They are the accuracy and completeness indices derived from the confusion matrix.
Table 3. Confusion matrix
Precision: P = TP / (TP + FP)
Recall: R = TP / (TP + FN)
F criterion: both precision and recall should be high while extreme cases are avoided: F = 2PR / (P + R)
The larger F, the better the classification performance. The best coding model is chosen by the F criterion; the model selected here is RG.
(2) Region uniformity
The greater the homogeneity and uniformity inside the regions after segmentation, the better the segmentation.
R_i denotes the ith region after segmentation and A_i the pixel area of the ith region; f(x, y) denotes the noise-free original grayscale image normalized to the range [0, 1]; C is a normalization parameter. The larger UM, the more uniform the segmented regions and the better the extraction of the target region.
(3) Region contrast
The greater the contrast between different regions after segmentation, the better.
f_i and f_j denote the gray means, over the two segmented regions (target and background), of the noise-free original grayscale image normalized to [0, 1]. The larger GLC, the greater the contrast between the regions, the stronger the discrimination between target and background, and the better the result.
(4) Mutual information
The more information the segmented image retains from the normalized original grayscale image, the better the segmentation. The mutual information can be computed as follows: p(a) is the probability of each value appearing in the normalized noise-free original image, and p(b) is the probability of each value appearing in the segmented image; the entropies H(A) and H(B) of the two images can therefore be computed separately, and combining them with the joint entropy gives the mutual information MI = H(A) + H(B) - H(A, B), which measures the information transmitted through the target-region extraction. The larger the information content, the better the segmentation.
(5) Comprehensive evaluation
Overall score = F × UM × GLC × MI.
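The four metrics and the overall score can be sketched as follows, assuming NumPy and binary masks. The exact UM and GLC formulas are not reproduced in the text, so common forms are assumed (area-weighted within-region variance for UM, |m1 - m2|/(m1 + m2) for GLC), and the MI estimator uses a joint histogram:

```python
import numpy as np

def f_measure(pred, truth):
    """F = 2PR/(P+R) from the confusion matrix of two binary masks."""
    tp = np.sum((pred == 1) & (truth == 1))
    fp = np.sum((pred == 1) & (truth == 0))
    fn = np.sum((pred == 0) & (truth == 1))
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

def uniformity(f, mask):
    """UM: 1 minus the area-weighted within-region variance of f (assumed form)."""
    var_sum = sum(f[mask == k].var() * np.sum(mask == k) for k in (0, 1))
    return 1.0 - var_sum / f.size

def contrast(f, mask):
    """GLC: |m1 - m2| / (m1 + m2) for the two region means of f (assumed form)."""
    m1, m2 = f[mask == 1].mean(), f[mask == 0].mean()
    return abs(m1 - m2) / (m1 + m2)

def mutual_information(a, b, bins=32):
    """MI(A, B) = H(A) + H(B) - H(A, B) via a joint histogram."""
    joint, _, _ = np.histogram2d(np.ravel(a), np.ravel(b), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    h = lambda p: -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return h(px) + h(py) - h(pxy.ravel())

mask = np.array([[1, 1], [0, 0]])
f = np.array([[0.8, 0.8], [0.2, 0.2]])
score = f_measure(mask, mask) * uniformity(f, mask) * contrast(f, mask) \
        * mutual_information(f, f)
```

A perfect segmentation of the toy image gives F = 1, UM = 1, GLC = 0.6, and MI = 1 bit, so an overall score of 0.6.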
The beneficial effects of the invention are verified by the following example.
Example one:
The simulation data used here are the open-source multi-angle aircraft data provided at https://github.com/kishankondaveeti/Synthetic_ISAR_images_of_aircrafts. The specific data used are 'a1_img026.mat', 'a5_img019.mat', and 'a6_img087.mat', converted to grayscale as input. The background of this data set is simple while the target contours are complex, which makes quantitative evaluation of the enhancement by threshold segmentation convenient.
The computer configuration used in the experiment is: processor, Intel(R) Core(TM) i7-4790 CPU @ 3.60 GHz; memory, 8 GB; operating system, Windows 7; software, MATLAB 2012a.
The results are shown in Figs. 1a and 1b and in Table 1; the method proposed by the invention shows good target-region recognition under all conditions. The segmentation results are evaluated with the F criterion of the conventional confusion matrix together with region uniformity, region contrast, and mutual information, analyzed over many Monte Carlo experiments, and compared with the intersecting cortical model (ICM) retinal model, the SICM model improved from ICM, and the classical fuzzy C-means model (FCM). With the coding scheme of the invention, the optimal coding achieves good segmentation quality and reaches the performance targets originally set. The invention takes the squared mean signal amplitude of the target region as the signal power and the variance of the added white Gaussian noise as the noise power, and ensures that the ratio of signal power to noise power of the added noise is 4.
Table 1
F is accuracy, UM is region uniformity, GLC is region contrast, and MI is mutual information;
for data image a1_img026, the ICM method gives F = 0.4488, UM = 0.8450, GLC = 0.3281, MI = 0.1505, and an overall segmentation quality of 0.0187;
the SICM method gives F = 0.5583, UM = 0.7524, GLC = 0.6885, MI = 0.0689, and segmentation quality 0.0199;
the FCM method gives F = 0.3339, UM = 0.8668, GLC = 0.9114, MI = 0.0114, and segmentation quality 0.0030;
the method of the invention gives F = 0.8418, UM = 0.9186, GLC = 0.8807, MI = 0.3805, and segmentation quality 0.2591.
It follows that, under the same conditions, the method of the invention achieves the highest F, UM, GLC, MI, and segmentation quality.
Figs. 2a, 2b, 2c, and 2d show the segmentation of the aircraft data image 'a1_img000' by the four methods. Both ICM and the present method extract the contour of the target region well, but the present method is the least affected by interference.
Figs. 2a-2d and Table 1 together show that the present method extracts the target from the image with high accuracy in the presence of noise.
The present invention may also have various other embodiments. Without departing from the spirit and essence of the invention, those skilled in the art can make various corresponding changes and modifications according to the invention, and all such changes and modifications shall fall within the protection scope of the appended claims of the invention.
Claims (5)
1. An image segmentation method based on pseudo-color coding and DISCOV coding, characterized in that the detailed process of the method is:
Step 1: acquiring image data, converting it to grayscale to obtain the grayscale image, and adding noise to the grayscale image to obtain a noisy image;
the added noise satisfying the condition that the ratio of the signal power of the image to the noise power is 4;
Step 2: rejecting outliers from the noisy image of step 1 according to the 3σ criterion to obtain an outlier-free image;
Step 3: applying pseudo-color coding to the outlier-free image to obtain a pseudo-color image;
the pseudo-color image having three channels, R, G, and B;
Step 4: feeding the R, G, and B channels of the pseudo-color image into DISCOV and computing the RG code of the single-opponent coding for the pseudo-color image;
DISCOV being the dimensionless shunting color vision model;
in the RG code, the R channel forming the center region and the G channel the surround region;
Step 5: classifying the RG coding coefficients into 0 or 1 with the Otsu method;
Otsu being the maximum between-class variance method.
2. The image segmentation method based on pseudo-color coding and DISCOV coding according to claim 1, characterized in that the pseudo-color coding of the outlier-free image in step 3 comprises:
applying pseudo-color coding to the gray value f(x, y) at each pixel (x, y) of the outlier-free image by the formula
    R(x, y) = f(x, y) / [255 + 0.0157·f(x, y)·(255 - f(x, y))]
    G(x, y) = [255 - f(x, y)] / [255 + 0.0157·f(x, y)·(255 - f(x, y))]
    B(x, y) = 0.0157·f(x, y)·(255 - f(x, y)) / [255 + 0.0157·f(x, y)·(255 - f(x, y))]
where R(x, y) is the value of the R channel at pixel (x, y) after pseudo-color coding, G(x, y) the value of the G channel at pixel (x, y), and B(x, y) the value of the B channel at pixel (x, y).
3. The image segmentation method based on pseudo-color coding and DISCOV coding according to claim 2, characterized in that feeding the R, G, and B channels of the pseudo-color image into DISCOV and computing the RG code of the single-opponent coding in step 4 comprises:
feeding the R, G, and B channels of the pseudo-color image into DISCOV and computing the RG code with the single-opponent coding formula
    z = (Cx′ - y′) / (Cx′ + Dy′)
where z is the code of pixel (x, y) under the RG coding mode; x′ is the mean of the pixels in a 5×5 window around the selected center, whose central pixel is R(x, y); y′ is the mean of the pixels in a 15×15 window around the selected center, whose central pixel is G(x, y); C is the influence of x′ on y′; and D is the amplitude ratio of x′ to y′.
4. The image segmentation method based on pseudo-color coding and DISCOV coding according to claim 3, characterized in that the value of C is 1.1.
5. The image segmentation method based on pseudo-color coding and DISCOV coding according to claim 4, characterized in that the value of D is 2.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710613528.XA CN107392912A (en) | 2017-07-25 | 2017-07-25 | Image partition method based on pseudo-color coding and DISCOV codings |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107392912A true CN107392912A (en) | 2017-11-24 |
Family
ID=60337610
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710613528.XA Pending CN107392912A (en) | 2017-07-25 | 2017-07-25 | Image partition method based on pseudo-color coding and DISCOV codings |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107392912A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111049527A (en) * | 2019-12-23 | 2020-04-21 | 云南大学 | Image coding and decoding method |
CN111882518A (en) * | 2020-06-09 | 2020-11-03 | 中海石油(中国)有限公司 | Magnetic leakage data self-adaptive pseudo-colorization method |
CN111882518B (en) * | 2020-06-09 | 2023-12-19 | 中海石油(中国)有限公司 | Self-adaptive pseudo-colorization method for magnetic flux leakage data |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20171124 |