CN104200232B - Twice-sparse representation image processing method based on sliding window fusion - Google Patents

Publication number
CN104200232B
CN104200232B (application number CN201410443591.XA)
Authority
CN
China
Prior art keywords
vector
mammograms
image
training set
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201410443591.XA
Other languages
Chinese (zh)
Other versions
CN104200232A (en)
Inventor
王颖
李洁
逄敏
高宪军
李圣喜
焦志成
王斌
张建龙
韩冰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University
Priority to CN201410443591.XA
Publication of CN104200232A
Application granted
Publication of CN104200232B
Legal status: Active

Abstract

The invention discloses a twice-sparse representation image processing method based on sliding-window fusion, aimed chiefly at the low accuracy of breast mass detection in the prior art. The method comprises the steps of (1) reading in images, (2) preprocessing, (3) extracting gray-level features of the training-set images and the target image, (4) performing a first sparse representation, (5) performing sliding-window fusion, (6) performing region growing, (7) extracting gray-level features of the region of interest (ROI), and (8) performing a second sparse representation. The method improves the detection rate of breast masses, represents their location accurately, and reduces the false-positive rate of mass detection, thereby improving overall detection accuracy; it can quickly detect and mark suspicious mass regions in molybdenum-target mammograms.

Description

Image processing method based on twice-sparse representation with sliding-window fusion
Technical field
The invention belongs to the field of image processing, and more particularly relates to a breast-mass image processing method based on twice-sparse representation with sliding-window fusion. The invention can be used to quickly detect suspicious mass regions in molybdenum-target mammograms and to mark those regions.
Background technology
At present, many scholars have proposed mass-detection methods for mammography, but existing methods do not achieve good detection results, and the reliability of detection systems remains low. With the rapid development of computer vision, pattern recognition, machine learning and related fields, blending new ideas from these disciplines into the detection of mass regions has become a major direction for improving the performance of detection systems.
Commonly used classifiers for image detection include the support vector machine (SVM), Bayesian classifiers, neural networks, and the k-nearest-neighbor classifier.
The patent application "A fast detection method for suspicious breast-mass regions based on a hierarchical structure" filed by Nanjing University (application number 200810235120, publication number CN101401730A) discloses a fast detection method for suspicious breast-mass regions based on a hierarchical structure. The method comprises two parts: training and construction of a hierarchical classifier, and detection of possible mass regions in digital mammograms; the hierarchical-classifier part includes feature extraction within the hierarchy and the training and construction of three classifiers. The method can detect breast masses relatively accurately, but it has shortcomings: the classifier is constructed in layers and each classifier must be trained separately, so the process is complex and detection efficiency is reduced; in addition, the method places high demands on the training samples.
Mohammed A. Alolfe et al., in the article "A statistical based feature extraction method for breast cancer diagnosis in digital mammogram using multiresolution representation" (Computers in Biology and Medicine, 2012, 42(1):123-128), propose a classification method combining an SVM with linear discriminant analysis (LDA). Features are represented in a multiresolution framework: each mammogram is transformed by the wavelet or curvelet transform into a long coefficient vector, and the coefficient vectors of the images are arranged as the columns of a matrix whose number of columns equals the number of images. Features are then extracted by statistical hypothesis testing: during extraction the features are ranked according to their class suitability, the number of features is selected with a dynamic-threshold method, and classification is finally performed with an SVM. The method summarizes global features of the data under the SVM scheme and classifies mammograms reasonably well. Its shortcomings are: the SVM algorithm is relatively complex, kernel selection is very difficult, SVMs are hard to train on large-scale sample sets, training takes a long time, and detection efficiency is low.
Guo-Shiang Lin et al., in the article "Detecting masses in digital mammograms based on texture analysis and neural classifier" (International Conference on Information Security and Intelligence Control, 2012:222-225), classify masses in mammograms with a neural-network classifier. The method extracts texture and brightness information of the masses in the spatial and wavelet domains as the feature representation of the image, and then classifies the masses with a supervised neural-network classifier; the classification results show the effectiveness of the algorithm. Its shortcoming is that it analyzes only texture features, ignores spatial information, and has a high false-positive rate.
Summary of the invention
The object of the invention is to overcome the above shortcomings of the prior art by proposing a twice-sparse representation classification method based on sliding-window fusion. The invention is fast, efficient and accurate; it effectively improves the mass detection rate and yields more accurate detection results.
The technical idea of the invention is as follows. In the result of the first sparse-representation detection, masses differ in size and sit at different positions within the sliding window, and the dimensionality reduction performed during feature extraction loses some characteristic information of the mass region, so the edge information of the mass cannot be represented. Therefore, starting from the result of the first sparse representation, a region of interest (ROI) is extracted after adaptive sliding-window fusion, and a second sparse-representation detection is carried out. Because the adaptive fusion yields a more accurate ROI extent, a more accurate detection result is obtained, effectively raising the detection rate and lowering the false-positive rate.
To achieve the above object, the invention comprises the following steps:
(1) Read in images:
(1a) From the Digital Database for Screening Mammography (DDSM), select 100 mammograms containing breast masses and 100 normal mammograms; the selected 200 mammograms form the image training set;
(1b) From DDSM, select 234 mammograms containing breast masses as target images;
(2) Preprocessing:
(2a) Denoise each mammogram with median filtering;
(2b) Downsample the denoised mammogram by a factor of 5;
(2c) Crop the downsampled mammogram by 40 rows from its top edge, 40 rows from its bottom edge, 10 columns from its left edge, and 10 columns from its right edge; the remaining part forms the cropped mammogram;
(2d) Binarize the cropped mammogram with Otsu's maximum between-class variance method to obtain the breast-tissue region;
(2e) Check whether all training-set and target images have been processed; if so, execute step (3), otherwise go to step (2a);
(3) Extract gray-level features of the training-set images and the target image:
(3a) Set the sliding-window size to 100 × 100 pixels and the overlap of adjacent windows to 75%;
(3b) In the breast-tissue region of each training-set mammogram containing a mass, scan with the sliding window from left to right and top to bottom, and extract the gray-level feature vector of each image block of those mammograms;
(3c) In the breast-tissue region of each normal training-set mammogram, scan with the sliding window in the same order, and extract the gray-level feature vector of each image block of the normal mammograms;
(3d) Take the gray-level feature vectors of the image blocks of mammograms containing masses as the mass feature vectors of the training-set dictionary, and the gray-level feature vectors of the image blocks of normal mammograms as the normal feature vectors of the training-set dictionary;
(3e) In the breast-tissue region of the target image, scan with the sliding window from left to right and top to bottom, and extract the gray-level feature vector of each image block of the target image as the target feature vectors;
(4) First sparse representation:
(4a) Divide each column vector of the training-set dictionary by its norm to obtain normalized unit vectors;
(4b) Take the target feature vector as the vector to be decomposed in the first iteration;
(4c) Compute, according to the following formula, the projection of the vector to be decomposed onto the direction of each vector in the training-set dictionary:

g = <R0, d> d

where g is the projection of the vector to be decomposed in the first iteration onto the direction of a dictionary vector, R0 is the vector to be decomposed in the first iteration, d is a vector in the training-set dictionary, and <R0, d> is the inner product of the vector to be decomposed with the dictionary vector;
(4d) Compare the magnitudes of the projections onto all dictionary-vector directions and take the largest one as the maximal projection vector of the first iteration;
(4e) Compute the residual of the vector to be decomposed according to:

R1 = R0 - h

where R1 is the residual of the first iteration, R0 is the vector to be decomposed in the first iteration, and h is the maximal projection vector of the first iteration;
(4f) Compute the inner product of the residual of the first iteration with any vector in the training-set dictionary:

p = <R1, d>

where p is the inner product of the residual with a dictionary vector, R1 is the residual of the first iteration, and d is a vector in the training-set dictionary;
(4g) If the inner product of the residual with every vector in the training-set dictionary is less than 0.1, execute step (4i); otherwise execute step (4h);
(4h) Take the residual of the current iteration as the vector to be decomposed in the next iteration and execute step (4c);
(4i) Arrange the inner-product terms of all iterations in iteration order to obtain the sparse-representation coefficients;
(4j) Reconstruct the mass feature vectors and the normal feature vectors of the training-set dictionary according to:

f = Dα

where f is the reconstruction of the mass (or normal) feature vectors of the training-set dictionary, D denotes the mass (or normal) feature vectors of the dictionary, and α the sparse-representation coefficients;
(4k) Subtract each reconstruction from the target feature vector to obtain the reconstruction errors of the mass feature vectors and the normal feature vectors;
(4l) Compare the two reconstruction errors; assign the class of the feature vectors with the smaller error to the target-image block, obtaining the target-image blocks that contain masses and those that are normal;
(4m) Mark each target-image block containing a mass with a 100 × 100 pixel marker box;
(5) Sliding-window fusion:
(5a) Compute the Euclidean distance between any two marker boxes in the marked mammogram according to:

d = sqrt((x1 - x2)² + (y1 - y2)²)

where d is the Euclidean distance between two marker boxes in the mammogram, x1 and y1 are the horizontal and vertical coordinates of the top-left corner of the first marker box, and x2 and y2 those of the second marker box;
(5b) Take any two marker boxes in the mammogram whose Euclidean distance is less than 107 pixels as a pair of marker boxes to be merged;
(5c) Take the smaller of the two boxes' horizontal coordinates as the horizontal coordinate of the top-left and bottom-left corners of the merged marker box, the larger horizontal coordinate as that of the top-right and bottom-right corners, the larger vertical coordinate as that of the top-left and top-right corners, and the smaller vertical coordinate as that of the bottom-left and bottom-right corners, obtaining the coordinates of the four corners of the merged marker box;
(6) Region growing:
(6a) Take the pixel with the highest gray value in the merged marker box as the seed pixel;
(6b) Select the pixels in the neighborhood around the seed whose gray-level difference from the seed is less than 3 as new pixels;
(6c) Take each new pixel as a seed pixel and continue growing outward by the method of step (6b);
(6d) Check whether all pixels in the merged marker box have been grown; if so, all seed pixels form the region of interest (ROI) and step (7) is executed, otherwise execute step (6c);
(7) Extract gray-level features of the ROI:
Taking the ROI as the target image, extract its gray-level feature vector as the target feature vector by the method of step (3);
(8) Second sparse representation:
Perform a second sparse representation on the target feature vector by the method of step (4) to obtain the final result.
Compared with the prior art, the invention has the following advantages:
First, through a series of preprocessing steps the invention crops away non-breast regions and reduces the noise of the original mammogram, overcoming the large data volume and slow detection of the prior art, so that the later detection stages are greatly accelerated.
Second, through the twice-sparse-representation classification method, the invention performs a second sparse-representation classification on the result of the first one, overcoming the low detection accuracy of the prior art, so the invention has high detection accuracy.
Third, through the sliding-window fusion method and the region-growing algorithm, the invention reduces the number of marker boxes left after the first sparse-representation classification and determines the approximate mass region, narrowing the detection range for the second classification; this overcomes the high false-positive rate of the prior art, so the invention has a low false-positive rate.
Description of the drawings
Fig. 1 is the flow chart of the invention;
Fig. 2 shows the first sparse-representation result of the invention;
Fig. 3 is a schematic diagram of the sliding-window fusion of the invention;
Fig. 4 shows the second sparse-representation result of the invention.
Specific embodiments
The invention is described in further detail below with reference to the drawings.
Referring to Fig. 1, the steps of the invention are realized as follows.
Step 1: read in images.
(1a) From the Digital Database for Screening Mammography (DDSM), select 100 mammograms containing breast masses and 100 normal mammograms; the selected 200 mammograms form the image training set.
(1b) From DDSM, select 234 mammograms containing breast masses as target images.
Step 2: preprocessing.
(2a) Denoise the mammogram with median filtering, as follows:
First, set the sliding window of the median filter to a 3 × 3 pixel square window.
Second, slide the square window pixel by pixel along the rows of the mammogram; at each position, sort the gray values of all pixels inside the window in ascending order, take the median of the sorted values, and substitute it for the gray value of the center pixel of the window.
Third, check whether all pixels of the mammogram have been processed; if so, the median filtering is complete and the denoised mammogram is obtained, otherwise repeat the second step.
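As a hedged illustration, the 3 × 3 median filtering of step (2a) can be sketched in Python with NumPy (the patent's simulations use MATLAB; the replication padding at the borders here is an assumption, not part of the patent):

```python
import numpy as np

def median_filter_3x3(img):
    """Median-filter a 2-D gray image with a 3x3 square window (step 2a)."""
    # Pad by replicating the border so edge pixels also get a full window.
    padded = np.pad(img, 1, mode="edge")
    out = np.empty_like(img)
    rows, cols = img.shape
    for r in range(rows):
        for c in range(cols):
            window = padded[r:r + 3, c:c + 3]
            # Replace the center pixel with the median of the sorted window.
            out[r, c] = np.median(window)
    return out

# Tiny example: an isolated bright noise pixel is suppressed.
noisy = np.zeros((5, 5), dtype=np.uint8)
noisy[2, 2] = 255
denoised = median_filter_3x3(noisy)
print(int(denoised[2, 2]))  # prints 0: the spike is removed
```

This per-pixel loop mirrors the second sub-step literally; a production version would use a vectorized or library filter instead.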
(2b) Downsample the denoised mammogram by a factor of 5, as follows:
First, set the downsampling interval to 5.
Second, in the denoised mammogram, retain one pixel out of every 5; the retained pixels form the downsampled mammogram.
(2c) Crop the downsampled mammogram by 40 rows from its top edge, 40 rows from its bottom edge, 10 columns from its left edge, and 10 columns from its right edge; the remaining part forms the cropped mammogram.
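Steps (2b) and (2c) amount to strided slicing followed by fixed-margin cropping. A minimal NumPy sketch (the sample image size below is a placeholder assumption, not a DDSM dimension):

```python
import numpy as np

def downsample_and_crop(img, step=5, top=40, bottom=40, left=10, right=10):
    """Keep one pixel out of every `step` (2b), then cut fixed margins (2c)."""
    small = img[::step, ::step]          # retain every 5th pixel in both axes
    return small[top:-bottom, left:-right]

mammogram = np.zeros((1000, 800), dtype=np.uint8)  # placeholder image
cropped = downsample_and_crop(mammogram)
print(cropped.shape)  # (120, 140)
```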
(2d) Binarize the cropped mammogram with Otsu's maximum between-class variance method to obtain the breast-tissue region, as follows:
First, average the gray values of all pixels of the cropped mammogram to obtain its mean gray value.
Second, choose any gray value between the minimum and maximum gray values of the cropped mammogram as a candidate segmentation threshold t separating target and background.
Third, compute the between-class variance of target and background according to:

G = w1 × (u1 - u)² + w2 × (u2 - u)²

where G is the between-class variance of target and background in the cropped mammogram, w1 is the ratio of the number of target pixels (gray value greater than the threshold t) to the total number of pixels, u1 is the mean gray value of the target pixels, u is the mean gray value of the whole cropped mammogram, w2 is the ratio of the number of background pixels (gray value less than or equal to t) to the total number of pixels, and u2 is the mean gray value of the background pixels.
Fourth, traverse all values of the threshold t and find the value of t that maximizes the between-class variance G; take it as the optimal segmentation threshold.
Fifth, extract all pixels of the cropped mammogram whose gray value exceeds the optimal threshold; they form the breast-tissue region.
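The exhaustive threshold search of step (2d) can be sketched as a direct transcription of the between-class-variance formula, with a tiny synthetic image standing in for a real mammogram:

```python
import numpy as np

def otsu_threshold(img):
    """Find the t maximizing G = w1*(u1-u)^2 + w2*(u2-u)^2 (step 2d)."""
    u = img.mean()                      # mean gray value of the whole image
    best_t, best_g = None, -1.0
    for t in range(int(img.min()), int(img.max())):
        target = img[img > t]           # pixels with gray value above t
        background = img[img <= t]
        if target.size == 0 or background.size == 0:
            continue
        w1 = target.size / img.size     # target fraction
        w2 = background.size / img.size # background fraction
        g = w1 * (target.mean() - u) ** 2 + w2 * (background.mean() - u) ** 2
        if g > best_g:
            best_t, best_g = t, g
    return best_t

# Two clearly separated gray populations: tissue (200) and background (10).
img = np.array([[10, 10, 200, 200],
                [10, 10, 200, 200]], dtype=np.float64)
t = otsu_threshold(img)
tissue = img > t                        # the breast-tissue mask of step (2d)
print(t, int(tissue.sum()))             # any t in [10, 199] separates; 4 tissue pixels
```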
(2e) Check whether all training-set and target images have been processed; if so, execute step 3, otherwise go to step (2a).
Step 3: extract gray-level features of the training-set images and the target image.
(3a) Set the sliding-window size to 100 × 100 pixels and the overlap of adjacent windows to 75%.
(3b) In the breast-tissue region of each training-set mammogram containing a mass, scan with the sliding window from left to right and top to bottom, and extract the gray-level feature vector of each image block of those mammograms.
The gray-level feature vector is extracted as follows:
First, extract the gray values of the image-block region column by column.
Second, concatenate the extracted columns to obtain the gray-level feature vector of the region.
(3c) In the breast-tissue region of each normal training-set mammogram, scan with the sliding window in the same order, and extract the gray-level feature vector of each image block of the normal mammograms.
(3d) Take the gray-level feature vectors of the image blocks of mammograms containing masses as the mass feature vectors of the training-set dictionary, and those of the normal mammograms as the normal feature vectors of the training-set dictionary.
(3e) In the breast-tissue region of the target image, scan with the sliding window from left to right and top to bottom, and extract the gray-level feature vector of each image block of the target image as the target feature vectors.
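The sliding-window scan of step 3 with 75% overlap corresponds to a stride of one quarter of the window size, and the column-wise vectorization is what MATLAB's `(:)` operator does. A sketch (the window is scaled down here purely for illustration; the patent uses 100 × 100 windows):

```python
import numpy as np

def extract_block_features(img, win=100, overlap=0.75):
    """Scan left-to-right, top-to-bottom (step 3) and vectorize each block
    column by column into a gray-level feature vector (steps 3b/3c/3e)."""
    stride = int(win * (1 - overlap))   # 75% overlap -> stride of win/4
    features = []
    rows, cols = img.shape
    for r in range(0, rows - win + 1, stride):
        for c in range(0, cols - win + 1, stride):
            block = img[r:r + win, c:c + win]
            # Column-major flattening = concatenating columns top to bottom.
            features.append(block.flatten(order="F"))
    return np.array(features)

img = np.arange(64, dtype=np.float64).reshape(8, 8)
feats = extract_block_features(img, win=4, overlap=0.75)  # stride 1
print(feats.shape)  # (25, 16): 5x5 window positions, 16 pixels per block
```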
Step 4: first sparse representation.
(4a) Divide each column vector of the training-set dictionary by its norm to obtain normalized unit vectors.
(4b) Take the target feature vector as the vector to be decomposed in the first iteration.
(4c) Compute, according to the following formula, the projection of the vector to be decomposed onto the direction of each vector in the training-set dictionary:

g = <R0, d> d

where g is the projection of the vector to be decomposed in the first iteration onto the direction of a dictionary vector, R0 is the vector to be decomposed in the first iteration, d is a vector in the training-set dictionary, and <R0, d> is the inner product of the vector to be decomposed with the dictionary vector.
(4d) Compare the magnitudes of the projections onto all dictionary-vector directions and take the largest one as the maximal projection vector of the first iteration.
(4e) Compute the residual of the vector to be decomposed according to:

R1 = R0 - h

where R1 is the residual of the first iteration, R0 is the vector to be decomposed in the first iteration, and h is the maximal projection vector of the first iteration.
(4f) Compute the inner product of the residual of the first iteration with any vector in the training-set dictionary:

p = <R1, d>

where p is the inner product of the residual with a dictionary vector, R1 is the residual of the first iteration, and d is a vector in the training-set dictionary.
(4g) If the inner product of the residual with every vector in the training-set dictionary is less than 0.1, execute step (4i); otherwise execute step (4h).
(4h) Take the residual of the current iteration as the vector to be decomposed in the next iteration and execute step (4c).
(4i) Arrange the inner-product terms of all iterations in iteration order to obtain the sparse-representation coefficients.
(4j) Reconstruct the mass feature vectors and the normal feature vectors of the training-set dictionary according to:

f = Dα

where f is the reconstruction of the mass (or normal) feature vectors of the training-set dictionary, D denotes the mass (or normal) feature vectors of the dictionary, and α the sparse-representation coefficients.
(4k) Subtract each reconstruction from the target feature vector to obtain the reconstruction errors of the mass feature vectors and the normal feature vectors.
(4l) Compare the two reconstruction errors; assign the class of the feature vectors with the smaller error to the target-image block, obtaining the target-image blocks that contain masses and those that are normal.
(4m) Mark each target-image block containing a mass with a 100 × 100 pixel marker box.
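Steps (4a)-(4l) read as matching pursuit over the unit-norm dictionary followed by class-wise reconstruction, in the spirit of sparse-representation classification. A compact sketch under that reading (the stopping threshold 0.1 follows the patent; the toy dictionaries, the iteration cap, and the per-class decomposition are simplifying assumptions):

```python
import numpy as np

def matching_pursuit(D, y, tol=0.1, max_iter=50):
    """Greedy decomposition of y over unit-norm dictionary columns (4b-4i).
    Stops when every |<residual, d>| falls below tol (step 4g)."""
    alpha = np.zeros(D.shape[1])
    residual = y.astype(np.float64).copy()
    for _ in range(max_iter):
        inner = D.T @ residual                     # projections onto atoms (4c)
        k = int(np.argmax(np.abs(inner)))          # maximal projection (4d)
        alpha[k] += inner[k]                       # record inner-product term (4i)
        residual = residual - inner[k] * D[:, k]   # subtract projection (4e)
        if np.max(np.abs(D.T @ residual)) < tol:   # stopping test (4g)
            break
    return alpha

def classify(D_mass, D_normal, y):
    """Reconstruct y from each class sub-dictionary (4j) and pick the class
    with the smaller reconstruction error (4k-4l)."""
    errs = []
    for D in (D_mass, D_normal):
        alpha = matching_pursuit(D, y)
        errs.append(np.linalg.norm(y - D @ alpha))  # f = D*alpha, error vs y
    return "mass" if errs[0] < errs[1] else "normal"

# Toy unit-norm dictionaries: mass atoms e1/e2, normal atoms e3/e2.
D_mass = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
D_normal = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0]])
label = classify(D_mass, D_normal, np.array([1.0, 0.05, 0.0]))
print(label)  # "mass": the mass sub-dictionary reconstructs y far better
```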
Step 5: sliding-window fusion.
(5a) Compute the Euclidean distance between any two marker boxes in the marked mammogram according to:

d = sqrt((x1 - x2)² + (y1 - y2)²)

where d is the Euclidean distance between two marker boxes in the mammogram, x1 and y1 are the horizontal and vertical coordinates of the top-left corner of the first marker box, and x2 and y2 those of the second marker box.
(5b) Take any two marker boxes in the mammogram whose Euclidean distance is less than 107 pixels as a pair of marker boxes to be merged.
(5c) Take the smaller of the two boxes' horizontal coordinates as the horizontal coordinate of the top-left and bottom-left corners of the merged marker box, the larger horizontal coordinate as that of the top-right and bottom-right corners, the larger vertical coordinate as that of the top-left and top-right corners, and the smaller vertical coordinate as that of the bottom-left and bottom-right corners, obtaining the coordinates of the four corners of the merged marker box.
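Steps (5a)-(5c) can be sketched as follows; boxes are represented by their top-left corner (x, y), and the merged corners are derived exactly as step (5c) describes, with the larger vertical coordinate assigned to the top corners as in the patent text (a y-up convention that is our reading of the wording):

```python
import math

def merge_boxes(box1, box2, dist_thresh=107):
    """Merge two marker boxes given by top-left corners (x, y) when the
    Euclidean distance of those corners is below dist_thresh (5a-5c).
    Returns the merged corners, or None when the boxes stay separate."""
    (x1, y1), (x2, y2) = box1, box2
    d = math.hypot(x1 - x2, y1 - y2)     # d = sqrt((x1-x2)^2 + (y1-y2)^2)
    if d >= dist_thresh:
        return None
    left, right = min(x1, x2), max(x1, x2)
    bottom, top = min(y1, y2), max(y1, y2)
    return {"top_left": (left, top), "top_right": (right, top),
            "bottom_left": (left, bottom), "bottom_right": (right, bottom)}

merged = merge_boxes((120, 300), (180, 340))       # distance ~72 < 107: merge
print(merged["top_left"], merged["bottom_right"])  # (120, 340) (180, 300)
```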
Step 6: region growing.
(6a) Take the pixel with the highest gray value in the merged marker box as the seed pixel.
(6b) Select the pixels in the neighborhood around the seed whose gray-level difference from the seed is less than 3 as new pixels.
(6c) Take each new pixel as a seed pixel and continue growing outward by executing step (6b).
(6d) Check whether all pixels in the merged marker box have been grown; if so, all seed pixels form the region of interest (ROI) and step 7 is executed, otherwise execute step (6c).
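The region growing of step 6 starts from the brightest pixel and absorbs neighbors whose gray difference from the current seed is below 3. A breadth-first sketch over 4-neighborhoods (the neighborhood shape is an assumption; the patent says only "surrounding neighbors"):

```python
import numpy as np
from collections import deque

def region_grow(img, diff_thresh=3):
    """Grow an ROI from the highest-gray pixel of the merged box (6a-6d)."""
    seed = np.unravel_index(np.argmax(img), img.shape)   # brightest pixel (6a)
    grown = {seed}
    queue = deque([seed])
    while queue:                                         # (6d): until no growth left
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < img.shape[0] and 0 <= nc < img.shape[1] \
                    and (nr, nc) not in grown \
                    and abs(int(img[nr, nc]) - int(img[r, c])) < diff_thresh:
                grown.add((nr, nc))                      # new pixel (6b)
                queue.append((nr, nc))                   # new seed (6c)
    return grown

# A bright 2x2 blob on a dark background: only the blob is grown.
img = np.zeros((5, 5), dtype=np.uint8)
img[1:3, 1:3] = 200
roi = region_grow(img)
print(sorted(roi))  # [(1, 1), (1, 2), (2, 1), (2, 2)]
```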
Step 7: extract gray-level features of the ROI.
Taking the ROI as the target image, extract its gray-level feature vector as the target feature vector by the method of step 3.
Step 8: second sparse representation.
Perform a second sparse representation on the target feature vector by the method of step 4 to obtain the final result.
The effect of the present invention can be described further by following emulation.
1. simulated conditions:
The present invention is grasped for Intel (R) Core i3-2100 3.10GHZ, internal memory 4G, WINDOWS 7 in central processing unit Make in system, the emulation carried out with MATLAB softwares.
2. Simulation content:
All mammograms used by the present invention were taken from the DDSM database. 200 mammograms were arbitrarily chosen as the training set, of which 100 contain masses and 100 are normal; 234 mammograms, all containing masses, were arbitrarily chosen as test images.
The present invention is evaluated with the recall rate and the false-positive rate. The recall rate directly reflects the quality of the classifier: it is computed as the number of detected masses divided by the number of all masses, so a higher recall rate indicates a better classifier. The false-positive rate directly reflects the accuracy of the classifier: it is computed as the number of normal regions misclassified as masses divided by the number of all masses, so a lower false-positive rate indicates a better classifier.
Fig. 2 is a simulation diagram of the sliding-window fusion process of the present invention. Fig. 2(a) shows the effect before sliding-window fusion; the thin-line boxes are the marker boxes before fusion. Fig. 2(b) shows the effect after sliding-window fusion; the thin-line boxes are the marker boxes before fusion and the thick-line boxes are the merged marker boxes after fusion. Fig. 3 is the simulation result of the first sparse representation; the thin-line boxes indicate mass regions. Fig. 4 is the simulation result of the second sparse representation; the thin-line boxes indicate mass regions.
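The two evaluation rates just described reduce to simple ratios; the following helpers are illustrative only (the function names are not from the patent):

```python
def recall_rate(detected_masses, total_masses):
    """Recall = detected masses / all masses; higher is better."""
    return detected_masses / total_masses

def false_positive_rate(normals_marked_as_mass, total_masses):
    """FP rate = normal regions misclassified as mass / all masses; lower is better."""
    return normals_marked_as_mass / total_masses
```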
3. Simulation effect analysis:
The simulation experiment of the present invention is compared against an SVM classifier. The SVM classifier is chosen for comparison because of its good generality, high classification accuracy, and fast classification speed; it is one of the most commonly used classifiers. The comparison results are shown in the following table:
Table 1. Comparison of the results of the present invention and an SVM classifier
In the table above, the first column lists different cases from the DDSM database, the second column gives the detection results of the proposed twice-sparse representation based on sliding-window fusion, and the third column gives the detection results of the SVM classifier used for comparison. In all entries of the table, the numerator is the number of detected masses and the denominator is the total number of masses in that case.
From the recall and false-positive results shown in the table, the present invention achieves a higher recall rate and a lower false-positive rate than the SVM classifier, indicating that its classification performance is better. This is because, after the first sparse-representation classification, the resulting sliding windows are adaptively fused and the ROI is extracted by the region-growing algorithm; then, based on a feature analysis of the first detection results, a second sparse representation is designed to re-mark the fused suspicious regions. After the second sparse representation, the result contains fewer marker boxes and the mass location is more accurate.
In summary, the present invention can effectively detect breast masses, improve the mass detection rate, and reduce the false-positive rate, achieving good detection results.

Claims (5)

1. An image processing method of twice-sparse representation based on sliding-window fusion, comprising the following steps:
(1) Reading in images:
(1a) From the digital database for screening mammography (DDSM), choose 100 mammograms containing breast masses and 100 normal mammograms; the selected 200 mammograms form the image training set;
(1b) From the DDSM database, choose 234 mammograms containing breast masses as target images;
(2) Preprocessing:
(2a) Denoise the mammogram using median filtering;
(2b) Down-sample the denoised mammogram by a factor of 5;
(2c) From the down-sampled mammogram, crop 40 rows downward from its top edge, 40 rows upward from its bottom edge, 10 columns rightward from its left edge, and 10 columns leftward from its right edge; the remaining image after cropping is the cropped mammogram;
(2d) Apply the maximum between-class variance method to binarize the cropped mammogram, obtaining the breast tissue region;
(2e) Judge whether all training set images and target images have been processed; if so, execute step (3); otherwise, execute step (2a);
(3) Extracting the gray features of the training set images and target images:
(3a) Set the size of the sliding window to 100 × 100 pixels, with a coverage rate of 75% between adjacent sliding windows;
(3b) Within the breast tissue region of the mass-containing mammograms in the image training set, scan the sliding window in order from left to right and top to bottom, and extract the gray feature vectors of the image blocks of the mass-containing mammograms in the training set;
(3c) Within the breast tissue region of the normal mammograms in the image training set, scan the sliding window in order from left to right and top to bottom, and extract the gray feature vectors of the image blocks of the normal mammograms in the training set;
(3d) Take the gray feature vectors of the image blocks of the mass-containing mammograms as the mass feature vectors of the image training set dictionary, and the gray feature vectors of the image blocks of the normal mammograms as the normal feature vectors of the image training set dictionary;
(3e) Within the breast tissue region of the target image, scan the sliding window in order from left to right and top to bottom, and extract the gray feature vectors of the image blocks of the target image as the target feature vectors;
(4) First sparse representation:
(4a) Divide each column vector of the image training set dictionary by its norm to obtain normalized unit vectors;
(4b) Take the target feature vector as the vector to be decomposed in the first iteration;
(4c) According to the following formula, compute the projection of the vector to be decomposed in the first iteration onto the direction of each vector in the image training set dictionary:
g = <R0, d>d
where g is the projection of the vector to be decomposed in the first iteration onto the direction of a vector in the image training set dictionary, R0 is the vector to be decomposed in the first iteration, d is a vector in the image training set dictionary, and <R0, d> is the inner product of the vector to be decomposed in the first iteration and the vector in the image training set dictionary;
(4d) Compare the magnitudes of the projections onto all vector directions in the image training set dictionary, and take the largest of these projections as the maximal projection vector of the first iteration;
(4e) According to the following formula, compute the residual of the vector to be decomposed in the first iteration:
R1 = R0 - h
where R1 is the residual of the vector to be decomposed in the first iteration, R0 is the vector to be decomposed in the first iteration, and h is the maximal projection vector of the first iteration;
(4f) According to the following formula, compute the inner product of the residual of the first iteration with any vector in the image training set dictionary:
p = <R1, d>
where p is the inner product of the residual of the first iteration with a vector in the image training set dictionary, R1 is the residual of the vector to be decomposed in the first iteration, and d is a vector in the image training set dictionary;
(4g) Judge whether the inner product of the residual with every vector in the image training set dictionary is less than 0.1; if so, execute step (4i); otherwise, execute step (4h);
(4h) Take the residual of the current iteration as the vector to be decomposed in the next iteration, and execute step (4c);
(4i) Arrange the inner-product terms of all iterations in iteration order to obtain the sparse representation coefficients;
(4j) According to the following formula, reconstruct the mass feature vectors and the normal feature vectors of the image training set dictionary respectively:
f = Dα
where f is the reconstruction result of the mass feature vectors and the normal feature vectors of the image training set dictionary, D denotes the mass feature vectors and the normal feature vectors of the image training set dictionary, and α denotes the sparse representation coefficients;
(4k) Compute the differences between the reconstruction results of the mass feature vectors and of the normal feature vectors, respectively, and the target feature vector, obtaining the reconstruction errors of the mass feature vectors and the normal feature vectors;
(4l) Compare the reconstruction errors of the mass feature vectors and the normal feature vectors, and take the class of the feature vector with the smaller reconstruction error as the class of the target image block, obtaining the mass-containing image blocks and the normal image blocks of the target image;
(4m) Mark the mass-containing image blocks of the target image with marker boxes of 100 × 100 pixels;
(5) Sliding-window fusion:
(5a) According to the following formula, compute the Euclidean distance between any two marker boxes in the marked mammogram:
d = √((x1 − x2)² + (y1 − y2)²)
where d is the Euclidean distance between the two marker boxes in the mammogram, x1 and y1 are the abscissa and ordinate of the upper-left point of the 1st marker box, and x2 and y2 are the abscissa and ordinate of the upper-left point of the 2nd marker box;
(5b) Take any two marker boxes in the mammogram whose Euclidean distance is less than 107 pixels as the two marker boxes to be merged;
(5c) Among the two marker boxes to be merged in the mammogram, take the minimum abscissa as the abscissa of the upper-left and lower-left corners of the merged marker box, the maximum abscissa as the abscissa of the upper-right and lower-right corners, the maximum ordinate as the ordinate of the upper-left and upper-right corners, and the minimum ordinate as the ordinate of the lower-left and lower-right corners, obtaining the coordinates of the four corners of the merged marker box;
(6) Region growing:
(6a) Take the pixel with the highest gray value in the merged marker box as the seed pixel;
(6b) In the neighborhood surrounding the seed pixel, select the pixels whose gray-value difference from the seed pixel is less than 3 as new pixels;
(6c) Take each new pixel as a seed pixel and continue growing outward according to the method of step (6b);
(6d) Judge whether all pixels in the merged marker box have been examined by the growth process; if so, all seed pixels together form the region of interest (ROI) and step (7) is executed; otherwise, step (6c) is executed;
(7) Extracting the gray features of the ROI:
Taking the region of interest (ROI) as the target image, extract the gray feature vector of the ROI according to the method of step (3) and use it as the target feature vector;
(8) Second sparse representation:
According to the method of step (4), perform the second sparse representation on the target feature vector to obtain the final result.
2. The image processing method of twice-sparse representation based on sliding-window fusion according to claim 1, characterized in that the median filtering described in step (2a) proceeds as follows:
First step, set the sliding window of the median filter to a square window of 3 × 3 pixels;
Second step, slide the square window pixel by pixel along the rows of the mammogram; within each slide, sort the gray values of all pixels in the square window in ascending order, choose the median of the sorted result, and substitute it for the gray value of the center pixel of the square window;
Third step, judge whether all pixels in the mammogram have been processed; if so, the median filtering is complete and the denoised mammogram is obtained; otherwise, execute the second step.
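The filter of claim 2 can be sketched directly (an illustrative Python version, not the patent's implementation; border pixels are left unchanged here, a boundary-handling choice the claim does not specify):

```python
import numpy as np

def median_filter_3x3(img):
    """3x3 median filter per claim 2: slide a square window over each
    pixel, sort the window's gray values in ascending order, and
    replace the center pixel with the median of the sorted values."""
    out = img.copy()
    h, w = img.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = np.sort(img[y - 1:y + 2, x - 1:x + 2].ravel())
            out[y, x] = window[4]  # middle of the 9 sorted values
    return out
```

In practice an equivalent result is obtained with `scipy.ndimage.median_filter(img, size=3)`.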
3. The image processing method of twice-sparse representation based on sliding-window fusion according to claim 1, characterized in that the down-sampling by 5 described in step (2b) proceeds as follows:
First step, set the down-sampling interval to 5;
Second step, in the denoised mammogram, retain one pixel every 5 pixels; all retained pixels form the down-sampled mammogram.
4. The image processing method of twice-sparse representation based on sliding-window fusion according to claim 1, characterized in that the maximum between-class variance method described in step (2d) is carried out as follows:
First step, average the gray values of all pixels in the cropped mammogram to obtain the average gray value of the cropped mammogram;
Second step, among the gray values of all pixels in the cropped mammogram, choose any gray value between the minimum and the maximum as the segmentation threshold between target and background;
Third step, according to the following formula, compute the between-class variance of target and background:
G = w1 × (u1 − u)² + w2 × (u2 − u)²
where G is the between-class variance of target and background in the cropped mammogram; w1 is the ratio of the number of target pixels, i.e. pixels whose gray value is greater than the segmentation threshold t, to the total number of pixels in the cropped mammogram; u1 is the average gray value of the target pixels of the cropped mammogram; u is the average gray value of the cropped mammogram; w2 is the ratio of the number of background pixels, i.e. pixels whose gray value is less than or equal to the segmentation threshold t, to the total number of pixels in the mammogram; and u2 is the average gray value of the background pixels of the cropped mammogram;
Fourth step, traverse all values of the segmentation threshold t of target and background, find the value of t at which the between-class variance G is maximal, and take that value as the optimal segmentation threshold;
Fifth step, extract all pixels of the cropped mammogram whose gray values are greater than the optimal segmentation threshold; these pixels form the breast tissue region.
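The threshold search described in claim 4 is Otsu's method; a compact sketch of the exhaustive search (illustrative, not the patent's implementation):

```python
import numpy as np

def otsu_threshold(img):
    """Find the threshold t maximizing the between-class variance
    G = w1*(u1-u)^2 + w2*(u2-u)^2, as described in claim 4."""
    pixels = img.ravel().astype(float)
    u = pixels.mean()                     # global average gray value
    best_t, best_g = pixels.min(), -1.0
    for t in np.unique(pixels)[:-1]:      # candidate thresholds to traverse
        target = pixels[pixels > t]       # gray value > t  -> target class
        background = pixels[pixels <= t]  # gray value <= t -> background class
        w1 = target.size / pixels.size
        w2 = background.size / pixels.size
        g = w1 * (target.mean() - u) ** 2 + w2 * (background.mean() - u) ** 2
        if g > best_g:
            best_g, best_t = g, t
    return best_t
```

The breast tissue region of the fifth step is then `img > otsu_threshold(img)`.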
5. The image processing method of twice-sparse representation based on sliding-window fusion according to claim 1, characterized in that the gray feature extraction described in steps (3) and (7) proceeds as follows:
First step, extract the gray values of the mammogram region column by column;
Second step, concatenate the extracted column gray values to obtain the gray feature column vector of the mammogram region.
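Column-by-column stacking, as in claim 5, is column-major flattening; a one-line illustrative sketch:

```python
import numpy as np

def gray_feature_vector(region):
    """Claim 5: read gray values column by column and stack them
    into a single feature column vector."""
    return region.ravel(order='F')  # order='F' = column-major traversal
```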
CN201410443591.XA 2014-09-02 2014-09-02 Twice-sparse representation image processing method based on sliding window fusion Active CN104200232B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410443591.XA CN104200232B (en) 2014-09-02 2014-09-02 Twice-sparse representation image processing method based on sliding window fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410443591.XA CN104200232B (en) 2014-09-02 2014-09-02 Twice-sparse representation image processing method based on sliding window fusion

Publications (2)

Publication Number Publication Date
CN104200232A CN104200232A (en) 2014-12-10
CN104200232B true CN104200232B (en) 2017-05-17

Family

ID=52085521

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410443591.XA Active CN104200232B (en) 2014-09-02 2014-09-02 Twice-sparse representation image processing method based on sliding window fusion

Country Status (1)

Country Link
CN (1) CN104200232B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018049598A1 (en) * 2016-09-14 2018-03-22 深圳大学 Ocular fundus image enhancement method and system
CN110123347B (en) * 2019-03-22 2023-06-16 杭州深睿博联科技有限公司 Image processing method and device for breast molybdenum target
CN113421240B (en) * 2021-06-23 2023-04-07 深圳大学 Mammary gland classification method and device based on ultrasonic automatic mammary gland full-volume imaging

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103279966A (en) * 2013-06-02 2013-09-04 复旦大学 Method for rebuilding photoacoustic imaging image based on sparse coefficient p norm and total vibration parameter of image
CN103425986A (en) * 2013-08-31 2013-12-04 西安电子科技大学 Breast lump image feature extraction method based on edge neighborhood weighing

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7386165B2 (en) * 2004-02-06 2008-06-10 Siemens Medical Solutions Usa, Inc. System and method for a sparse kernel expansion for a Bayes classifier

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103279966A (en) * 2013-06-02 2013-09-04 复旦大学 Method for rebuilding photoacoustic imaging image based on sparse coefficient p norm and total vibration parameter of image
CN103425986A (en) * 2013-08-31 2013-12-04 西安电子科技大学 Breast lump image feature extraction method based on edge neighborhood weighing

Also Published As

Publication number Publication date
CN104200232A (en) 2014-12-10

Similar Documents

Publication Publication Date Title
Abdelrahman et al. Convolutional neural networks for breast cancer detection in mammography: A survey
Zhou et al. A comprehensive review for breast histopathology image analysis using classical and deep neural networks
US20210200988A1 (en) Method and equipment for classifying hepatocellular carcinoma images by combining computer vision features and radiomics features
CN109447065B (en) Method and device for identifying mammary gland image
CN110930367B (en) Multi-modal ultrasound image classification method and breast cancer diagnosis device
Campanini et al. A novel featureless approach to mass detection in digital mammograms based on support vector machines
CN109363698B (en) Method and device for identifying mammary gland image signs
CN109754361A (en) The anisotropic hybrid network of 3D: the convolution feature from 2D image is transmitted to 3D anisotropy volume
CN107067402B (en) Medical image processing apparatus and breast image processing method thereof
KR20010023427A (en) Method and system for automated detection of clustered microcalcifications from digital mammograms
CN108765387A (en) Based on Faster RCNN mammary gland DBT image lump automatic testing methods
US20030161522A1 (en) Method, and corresponding apparatus, for automatic detection of regions of interest in digital images of biological tissue
CN111563897B (en) Breast nuclear magnetic image tumor segmentation method and device based on weak supervision learning
CN104751178A (en) Pulmonary nodule detection device and method based on shape template matching and combining classifier
CN101517614A (en) Advanced computer-aided diagnosis of lung nodules
CN104217213B (en) A kind of medical image multistage sorting technique based on symmetric theory
CN104616289A (en) Removal method and system for bone tissue in 3D CT (Three Dimensional Computed Tomography) image
CN104200232B (en) Twice-sparse representation image processing method based on sliding window fusion
CN108053401A (en) A kind of B ultrasound image processing method and device
Nagarajan et al. Feature extraction based on empirical mode decomposition for automatic mass classification of mammogram images
CN108875741A (en) It is a kind of based on multiple dimensioned fuzzy acoustic picture texture characteristic extracting method
Safdarian et al. Detection and classification of breast cancer in mammography images using pattern recognition methods
Zhao et al. AE-FLOW: autoencoders with normalizing flows for medical images anomaly detection
CN104331864B (en) Based on the processing of the breast image of non-down sampling contourlet and the significant model of vision
US7548642B2 (en) System and method for detection of ground glass objects and nodules

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant