CN106251299A - Efficient noise-reducing visual image reconstruction method - Google Patents

Efficient noise-reducing visual image reconstruction method Download PDF

Info

Publication number
CN106251299A
CN106251299A (application CN201610589708.4A); granted as CN106251299B
Authority
CN
China
Prior art keywords
local image basis
window
pixel
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610589708.4A
Other languages
Chinese (zh)
Other versions
CN106251299B (en)
Inventor
黄伟 (Huang Wei)
颜红梅 (Yan Hongmei)
陈华富 (Chen Huafu)
王亦伦 (Wang Yilun)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China
Priority to CN201610589708.4A
Publication of CN106251299A
Application granted
Publication of CN106251299B
Legal status: Active (granted)


Classifications

    • G06T5/70
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10088 Magnetic resonance imaging [MRI]
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G06T2207/20212 Image combination
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30016 Brain

Abstract

The invention discloses an efficient noise-reducing visual image reconstruction method, belonging to the field of biomedical image pattern recognition. The method first reduces the dimensionality of the feature vectors, then transforms the contrast map of each training stimulus image under local image bases of different scales and builds a corresponding classifier for each basis. By classifying and predicting on the training samples, multiple groups of first prediction labels are obtained, from which combination coefficients are estimated for every pixel of the visual image. Given an fMRI signal to be decoded, second prediction labels are obtained under the different local image bases, and the weighted sum of these labels with the combination coefficients of each pixel yields that pixel's reconstructed label, producing the reconstructed visual image. The invention not only reconstructs the visual image but also reduces the noise of the reconstructed image.

Description

Efficient noise-reducing visual image reconstruction method
Technical field
The method belongs to the field of biomedical image pattern recognition, and specifically relates to visual image reconstruction from functional magnetic resonance imaging (fMRI).
Background technology
The human brain performs complex visual information processing efficiently, robustly, and with strong noise tolerance. Vision is one of the main channels through which humans perceive and understand the external world. The brain shows certain commonalities in processing different categories of natural scenes; for example, the parahippocampal region of the ventral temporal cortex is closely involved in scene processing, yet scene processing also relies on multiple visual brain areas, so the functional representation of visual information is highly complex. To date, the brain regions and coding mechanisms underlying the processing of complex natural scene categories remain poorly understood.
In recent years, with the development of modern neuroimaging (EEG, MEG, functional magnetic resonance imaging (fMRI), near-infrared optical brain imaging) and computer technology, the door to understanding the brain's visual information processing mechanisms has gradually opened. Owing to its non-invasiveness and high spatial resolution, fMRI has played an extremely important role in visual information processing research, and applying fMRI to the encoding and decoding of natural scenes has produced a series of important results. With fMRI we can not only detect the brain regions that encode natural images and track the process of natural scene encoding, but also perform pattern classification and visual information reconstruction (i.e., reproducing the observed natural image) from the corresponding brain activity signals, build coupled models of natural images and brain activity, and ultimately aim at real-time information decoding of brain activity.
Referring to Fig. 1, the general idea of visual image reconstruction is as follows. First, the subject views a stimulus image such as a flickering checkerboard image (the background is gray and the pattern, a geometric shape or letter, is a black-and-white flickering checkerboard). Then, the fMRI signal evoked by the flickering checkerboard image in the visual cortex is acquired by magnetic resonance (the position of the visual cortex can be determined by a retinotopic mapping experiment). Finally, pattern recognition methods are used to decode the visual scene from this signal. The key to decoding brain function information, however, lies in extracting efficient and stable features of brain activity and in the corresponding pattern recognition techniques. These difficulties make research on human visual cognition one of the most advanced and most challenging directions in current brain science.
Visual image reconstruction is one of the important research directions of visual information decoding. As early as the 1990s, researchers reconstructed visual scenes from invasively recorded neuronal firing signals. Constrained by the production and fabrication technology of that era, such reconstruction achieved a certain effect but fell far short of ideal accuracy, and it caused invasive damage to the subjects. With the rise and development of magnetic resonance techniques, non-invasive study of the brain became possible, for example visual reconstruction using sparse multinomial logistic regression (SMLR). Although that method improved reconstruction quality, the reconstruction results still contain considerable noise.
Summary of the invention
The object of the present invention is to address the above problems by providing an efficient noise-reducing visual image reconstruction method.
The efficient noise-reducing visual image reconstruction method of the present invention comprises two parts, training and visual reconstruction, implemented as follows.
A. training step:
Step A1: Input training samples. Each training sample comprises the fMRI signal evoked in the visual cortex by a stimulus image, together with the contrast map of that stimulus image (contrast maps are normalized in size); the stimulus images are flickering checkerboard patterns.
Step A2: Transform the contrast map of each stimulus image with N local image bases (φ1 to φN, N > 1), obtaining N transformed images of the same contrast map:
Under the current local image basis, each pixel of the stimulus image is transformed by taking the average contrast of the basis window as the label of the current pixel, yielding the transformed image for the current basis. The average contrast of a local image basis is defined as the number of flickering checkerboard cells within the basis window divided by the total number of cells in the window. The nine local image bases used are as follows:
Local image basis φ1: a 1 × 1 window containing only the current pixel;
Local image basis φ2: a 1 × 2 window containing the current pixel and its right neighbour;
Local image basis φ3: a 1 × 2 window containing the current pixel and its left neighbour;
Local image basis φ4: a 2 × 1 window containing the current pixel and its upper neighbour;
Local image basis φ5: a 2 × 1 window containing the current pixel and its lower neighbour;
Local image basis φ6: a 2 × 2 window in which the current pixel is at the top-left corner;
Local image basis φ7: a 2 × 2 window in which the current pixel is at the top-right corner;
Local image basis φ8: a 2 × 2 window in which the current pixel is at the bottom-left corner;
Local image basis φ9: a 2 × 2 window in which the current pixel is at the bottom-right corner.
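For illustration only, the nine basis transforms above can be sketched as follows; the offset table, the border handling, and all names are our own assumptions, not part of the patent:

```python
import numpy as np

# BASES maps each basis name to the window it covers, as (row, col) offsets
# from the current pixel; phi1..phi9 follow the patent's numbering.
BASES = {
    "phi1": [(0, 0)],
    "phi2": [(0, 0), (0, 1)],
    "phi3": [(0, -1), (0, 0)],
    "phi4": [(-1, 0), (0, 0)],
    "phi5": [(0, 0), (1, 0)],
    "phi6": [(0, 0), (0, 1), (1, 0), (1, 1)],
    "phi7": [(0, -1), (0, 0), (1, -1), (1, 0)],
    "phi8": [(-1, 0), (-1, 1), (0, 0), (0, 1)],
    "phi9": [(-1, -1), (-1, 0), (0, -1), (0, 0)],
}

def transform(contrast_map, basis):
    """Label each pixel with the average contrast of the basis window:
    the fraction of flickering cells among the window cells that fall
    inside the image (border handling is our assumption)."""
    h, w = contrast_map.shape
    out = np.zeros((h, w))
    for r in range(h):
        for c in range(w):
            vals = [contrast_map[r + dr, c + dc]
                    for dr, dc in BASES[basis]
                    if 0 <= r + dr < h and 0 <= c + dc < w]
            out[r, c] = np.mean(vals)
    return out

cmap = np.array([[1.0, 0.0],
                 [0.0, 1.0]])       # toy 2x2 binary contrast map
img1 = transform(cmap, "phi1")      # identical to cmap
img6 = transform(cmap, "phi6")      # phi6 at (0,0) averages all four cells
```

A window lying fully inside the image yields labels in {0, 1}, {0, 0.5, 1}, or {0, 0.25, 0.5, 0.75, 1} for the 1 × 1, two-cell, and 2 × 2 bases respectively, matching the class sets used in step A3-1.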
Step A3: Construct the combination coefficients of each pixel of the visual image:
Step A3-1: For each pixel position of the contrast map of the stimulus image, obtain training prediction labels under each of the N local image bases:
The training samples are divided into two subsets, one used as training data and one as test data. Under each local image basis, a classifier is trained on the training data from the transformed images and the fMRI signals, giving each pixel a first classifier for each local image basis; the classes of a first classifier are the possible values of the average contrast of that basis. For the nine local image bases above: basis φ1 comprises two classes with contrasts 0 and 1; bases φ2, φ3, φ4, φ5 comprise three classes with contrasts 0, 0.5, and 1; bases φ6, φ7, φ8, φ9 comprise five classes with contrasts 0, 0.25, 0.5, 0.75, and 1.
The test data are then classified by the first classifiers, giving each pixel position a first prediction label Ĉi under each of the N local image bases, where i = 1, 2, ..., N;
Step A3-2: According to the formula Ctr = ω1Ĉ1 + ω2Ĉ2 + ... + ωNĈN + ε, take the ω1, ω2, ..., ωN that minimize the residual ε over the multiple test samples (as an unsigned quantity, i.e., directly taking the absolute value or the square of ε) as the combination coefficients of the current pixel position, thereby obtaining the combination coefficients of every pixel of the visual image, where Ctr denotes the pixel value of the contrast map of the stimulus image;
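As a hedged numerical sketch, the combination coefficients of one pixel can be estimated by ordinary least squares over the stacked first-prediction labels; all data, shapes, and names below are toy assumptions:

```python
import numpy as np

# Toy data for one pixel position: Chat[j, i] is the first-prediction label
# of trial j under basis i; c_true[j] is the true contrast-map value.
rng = np.random.default_rng(0)
n_trials, N = 40, 9
true_w = np.linspace(0.05, 0.2, N)                    # hypothetical coefficients
Chat = rng.choice([0.0, 0.25, 0.5, 0.75, 1.0], size=(n_trials, N))
c_true = Chat @ true_w                                # noise-free toy targets

# Least-squares fit: minimise sum_j (c_true[j] - sum_i w[i] * Chat[j, i])^2
w, residuals, rank, _ = np.linalg.lstsq(Chat, c_true, rcond=None)
```

In practice the rows would come from the held-out test subset of step A3-1 rather than synthetic draws; the least-squares criterion corresponds to minimizing the squared residual ε.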
Step A4: Divide the training samples into classes according to the values of the average contrast of the N local image bases; perform feature screening on the feature vectors of the fMRI signals of the training samples, keeping the K feature vectors with the strongest class-discrimination ability as the screening result, where K is a preset value (for example, using multi-class F-score feature selection), thereby realizing the dimensionality reduction of the feature vectors.
Under each local image basis, a classifier is then trained from the transformed images and the screened fMRI signals, giving each pixel a second classifier for each local image basis; the classes of a second classifier are the possible values of the average contrast over the N local image bases. For the nine local image bases above, the training samples can be divided into five classes, with corresponding contrasts 0, 0.25, 0.5, 0.75, and 1.
Step B: Visual image reconstruction
Step B1: Input the fMRI signal to be reconstructed, screen its features according to the screening result of step A4, and feed the K screened feature vectors to the second classifiers;
Step B2: From the recognition results of the second classifiers, obtain each pixel's second prediction labels under the N local image bases: C1, C2, ..., CN. Step B3: The weighted sum ω1C1 + ω2C2 + ... + ωNCN of each pixel's second prediction labels C1, C2, ..., CN with its combination coefficients ω1, ω2, ..., ωN gives the reconstructed label of that pixel, yielding the reconstructed visual image.
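Step B3 amounts to a dot product per pixel; a minimal sketch with purely illustrative numbers:

```python
import numpy as np

def reconstruct_pixel(second_labels, weights):
    """Step B3 for one pixel: weighted sum of the N second-prediction
    labels C_1..C_N with the combination coefficients w_1..w_N."""
    return float(np.dot(weights, second_labels))

C = np.array([1.0, 0.5, 0.5, 1.0, 0.0, 0.75, 0.75, 0.25, 0.5])  # toy labels
w = np.full(9, 1.0 / 9.0)                                        # toy equal weights
label = reconstruct_pixel(C, w)
```

In the method itself the weights are the per-pixel coefficients estimated in step A3-2, not equal weights; equal weights are used here only so the toy result is easy to check by hand.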
In summary, thanks to the above technical scheme, the beneficial effects of the invention are: feature vectors are screened before classification, which reduces the feature dimensionality, the computational complexity of reconstruction, and the noise of the reconstructed image; at the same time, the nine local image bases of the present invention further improve the accuracy of visual image reconstruction.
Brief description of the drawings
Fig. 1 Principle of visual reconstruction.
Fig. 2 Training image sequence.
Fig. 3 Test image sequence.
Fig. 4 Detailed flow of visual image reconstruction.
Fig. 5 Local image basis transformation.
Fig. 6 Comparison of visual image reconstruction results.
Detailed description of the invention
To make the objects, technical solutions, and advantages of the present invention clearer, the invention is described in further detail below with reference to the embodiment and the drawings.
Embodiment
Acquisition of training and test samples:
Referring to Fig. 2, the subject first fixates on a resting image for 28 seconds (to facilitate obtaining the starting position of the fMRI signal); the subject then alternately views a random stimulus image for 6 seconds (a random flickering checkerboard image whose pattern, a letter or geometric shape, is a black-and-white flickering checkerboard on a gray background) and a resting image for 6 seconds, repeated 22 times; finally the subject fixates on a resting image for 12 seconds (to facilitate obtaining the end position of the fMRI signal). This procedure is executed 20 times, yielding 440 fMRI acquisitions; the fMRI signals under the 440 stimulus images, together with the contrast maps of the corresponding random images, serve as training sample set A.
Referring to Fig. 3, the subject first fixates on a resting image for 28 seconds, then alternately views a test image (a flickering-checkerboard stimulus image) for 12 seconds and a resting image for 12 seconds, repeated 10 times, followed by a final 12-second resting fixation. This procedure is executed 8 times, yielding the fMRI signals under 80 test images, which serve as test sample set B.
Training and reconstruction are then performed on the acquired training sample A and test sample B; referring to Fig. 4, the specific steps are:
First, the stimulus images in the training samples are transformed at different scales: the average contrast of each local image basis (the number of flickering checkerboard cells in the basis window divided by the total number of cells) is taken as the label of the current pixel, giving a transformed image for each basis. Referring to Fig. 5, the nine local image bases shown in Fig. 5-a are used: the 1 × 1 scale contributes one basis φ1 (only the current pixel); the 1 × 2 scale contributes two bases φ2 and φ3 (the current pixel with its right or left neighbour); the 2 × 1 scale contributes two bases φ4 and φ5 (the current pixel with its upper or lower neighbour); and the 2 × 2 scale contributes four bases φ6, φ7, φ8, φ9 (2 × 2 windows with the current pixel at the top-left, top-right, bottom-left, and bottom-right corner, respectively). Fig. 5-a illustrates the transformation under the different bases. Transforming the same stimulus image with the nine local image bases φ1 to φ9 yields the nine corresponding transformed images.
Second, for training sample A, the combination coefficients (ω1, ω2, ..., ω9) of each pixel of the visual image are constructed using a naive Bayes classifier and ten-fold cross-validation:
Under each local image basis, ten-fold cross-validation is used: the training samples are divided into 10 parts, 1 part serving as test data t1 and 9 parts as training data s1. From training data s1, a naive Bayes classifier (the first classifier, whose feature vector is the corresponding fMRI signal) is built for the pixel position; the fMRI signals of test data t1 are then fed to this first classifier for classification, giving each pixel position a first prediction label under each of the N local image bases, i.e., several groups (their number equal to the number of test samples in t1) of prediction labels Ĉ1, Ĉ2, ..., Ĉ9 under the s1/t1 split. After the ten-fold cross procedure, MA groups of Ĉ1, Ĉ2, ..., Ĉ9 are obtained, where MA is the number of samples of training set A.
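The ten-fold procedure above can be sketched as follows — a toy, hypothetical illustration in which a nearest-class-mean rule stands in for the patent's naive Bayes classifier and all data are synthetic:

```python
import numpy as np

# Synthetic data: 100 trials, 4 "voxels"; each trial's label is drawn from
# three contrast classes, and its features cluster around the label value.
rng = np.random.default_rng(1)
classes = np.array([0.0, 0.5, 1.0])
n = 100
y = classes[rng.integers(0, 3, size=n)]
X = y[:, None] + 0.05 * rng.standard_normal((n, 4))

# Ten-fold cross-validation: every sample's first-prediction label comes
# from a classifier trained on the other nine folds.
pred = np.empty(n)
folds = np.array_split(rng.permutation(n), 10)
for k in range(10):
    test_idx = folds[k]
    train_idx = np.concatenate([folds[j] for j in range(10) if j != k])
    # nearest-class-mean rule (a stand-in for the naive Bayes classifier)
    means = np.vstack([X[train_idx][y[train_idx] == c].mean(axis=0)
                       for c in classes])
    d = np.linalg.norm(X[test_idx][:, None, :] - means[None, :, :], axis=2)
    pred[test_idx] = classes[d.argmin(axis=1)]
```

The array `pred` then plays the role of one basis's column of out-of-fold first prediction labels; repeating this for each of the nine bases yields the MA groups described above.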
From the true pixel values (also called true labels) Ctr of the random images of training sample A and the training-data prediction labels, a multiple linear regression model is set up: Ctr^j = ω1Ĉ1^j + ω2Ĉ2^j + ... + ωNĈN^j + εj, where εj denotes the residual and the superscript j is the sample index. Based on the MA groups, least squares is used to fit the multiple linear regression model, i.e., to minimize the mean squared residual Σj εj², and the resulting ω1 to ωN are taken as the combination coefficients of the pixel.
The combination coefficients (ω1, ω2, ..., ω9) of every pixel of the visual image are thus obtained from the above multiple linear regression model.
Third, the training samples are classified by pixel label into five classes, with label values 0, 0.25, 0.5, 0.75, and 1. Multi-class F-score feature selection is then applied to the fMRI signals of training sample A: for each feature vector of the fMRI signal, the F value is computed as F = Sfar / Sclose, and the features with the 5 largest F values are kept as the screening result. Here Sfar reflects the distance between different classes and Sclose the distance within the same class, computed respectively as
Sfar = Σ_{i=1}^{M} (x̄i − x̄)²,  Sclose = Σ_{i=1}^{M} (1 / (ni − 1)) Σ_{j=1}^{ni} (xij − x̄i)²,
where M is the number of classes, x̄ is the mean over all samples of the feature vector, x̄i is the mean over the subsamples of the feature vector belonging to class ci, ni is the number of subsamples belonging to class ci, and xij is the j-th value among the subsamples of class ci.
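A sketch of this screening criterion (toy data; the names are ours — in practice `x` would be one voxel's responses across trials):

```python
import numpy as np

def f_score(x, labels):
    """Multi-class F-score of one feature: between-class scatter S_far
    divided by within-class scatter S_close, following the formula above."""
    classes = np.unique(labels)
    xbar = x.mean()
    s_far = sum((x[labels == c].mean() - xbar) ** 2 for c in classes)
    s_close = sum(((x[labels == c] - x[labels == c].mean()) ** 2).sum()
                  / (len(x[labels == c]) - 1) for c in classes)
    return s_far / s_close

labels = np.array([0, 0, 0, 1, 1, 1])
x_good = np.array([0.0, 0.1, 0.0, 1.0, 1.1, 1.0])  # tracks the class
x_bad = np.array([0.5, 0.6, 0.4, 0.5, 0.6, 0.4])   # ignores the class
```

A feature that separates the classes scores much higher than one that does not, so keeping the top-K F values retains the most discriminative voxels.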
Next, under each local image basis, a classifier is trained from the transformed images and the screened fMRI signals, giving each pixel a second classifier for each local image basis, used for label prediction on the test samples. This embodiment builds the classifiers with naive Bayes. For local image basis φ1 the training samples are divided into two classes with pixel labels 0 and 1; for bases φ2, φ3, φ4, φ5 into three classes with pixel labels 0, 0.5, and 1; and for bases φ6, φ7, φ8, φ9 into five classes with pixel labels 0, 0.25, 0.5, 0.75, and 1. With the fMRI signal as the feature vector, the second classifiers of the different local image bases are built for each pixel. Naive Bayes is a simple and efficient classification method: it estimates the probability that a given sample belongs to each class and assigns the sample to the class with the highest probability. When building the classifiers, other machine learning methods could be used instead, such as support vector machines (SVM), random forests (RF), or sparse multinomial logistic regression (SMLR); however, their running time is much greater than that of naive Bayes. Considering how quickly the human brain processes visual information, this embodiment uses naive Bayes.
Then, the fMRI signals of test sample B are input; after feature screening based on the K selected features, the screened signals are fed to the second classifiers for classification, giving each pixel position a second prediction label Ĉi under each local image basis, used as the prediction label of the current pixel for each basis.
Finally, the prediction labels Ĉ1, Ĉ2, ..., Ĉ9 at the same pixel position are weighted and summed with the reconstruction parameters ω1, ω2, ..., ω9, giving the reconstructed label of the pixel position, i.e., the local reconstruction ω1Ĉ1 + ω2Ĉ2 + ... + ω9Ĉ9. The reconstructed pixel value of each pixel of the visual image is thus obtained, yielding the reconstructed visual image.
Fig. 6 compares the reconstructed visual images of the present invention and of an existing method: Fig. 6-a shows the reconstruction obtained with SMLR, and Fig. 6-b the reconstruction of the present invention. As the figure shows, the noise in the result of the present invention is significantly reduced.
The above is only a specific embodiment of the present invention. Unless specifically stated otherwise, any feature disclosed in this specification may be replaced by an equivalent or an alternative feature serving a similar purpose, and all of the disclosed features, or all of the steps of a method or process, may be combined in any way except for mutually exclusive features and/or steps.

Claims (6)

1. An efficient noise-reducing visual image reconstruction method, characterized by comprising the following steps:
A. Training:
Step A1: Input training samples, each comprising the fMRI signal evoked in the visual cortex by a stimulus image and the contrast map of that stimulus image, the stimulus images being flickering checkerboard patterns;
Step A2: Transform the contrast map of each stimulus image with N local image bases, obtaining N transformed images of the same contrast map, where N is greater than 1:
under the current local image basis, each pixel of the stimulus image is transformed by taking the average contrast of the basis as the label of the current pixel, yielding the transformed image of the current basis, the average contrast of a local image basis being the number of flickering checkerboard cells in the basis window divided by the total number of cells;
Step A3: Construct the combination coefficients of each pixel of the visual image:
Step A3-1: For each pixel position of the contrast map of the stimulus image, obtain training prediction labels under each of the N local image bases:
divide the training samples into two subsets, one as training data and one as test data; under each local image basis, train a classifier on the training data from the transformed images and fMRI signals, obtaining for each pixel a first classifier for each local image basis, the classes of the first classifier being the values of the average contrast of that basis;
classify the test data with the first classifiers, obtaining for each pixel position a first prediction label Ĉi under each of the N local image bases, where i = 1, 2, ..., N;
Step A3-2: According to the formula Ctr = ω1Ĉ1 + ω2Ĉ2 + ... + ωNĈN + ε, take the ω1, ω2, ..., ωN that minimize the residual ε over the multiple test samples as the combination coefficients of the current pixel position, thereby obtaining the combination coefficients of every pixel of the visual image, where Ctr denotes the pixel value of the contrast map of the stimulus image;
Step A4: Divide the training samples into classes according to the values of the average contrast of the N local image bases; perform feature screening on the feature vectors of the fMRI signals of the training samples, keeping the K feature vectors with the strongest class-discrimination ability as the screening result, K being a preset value;
under each local image basis, train a classifier from the transformed images and the screened fMRI signals, obtaining for each pixel a second classifier for each local image basis, the classes of the second classifier being the values of the average contrast over the N local image bases;
B. Image reconstruction:
Step B1: Input the fMRI signal to be reconstructed, screen its features according to the screening result of step A4, and feed the K screened feature vectors to the second classifiers;
Step B2: From the recognition results of the second classifiers, obtain each pixel's second prediction labels under the N local image bases: C1, C2, ..., CN; Step B3: the weighted sum of each pixel's second prediction labels C1, C2, ..., CN with its combination coefficients ω1, ω2, ..., ωN gives the reconstructed label of that pixel, yielding the reconstructed visual image.
2. The method of claim 1, characterized in that the local image bases comprise nine bases, respectively:
local image basis φ1: a 1 × 1 window containing only the current pixel;
local image basis φ2: a 1 × 2 window containing the current pixel and its right neighbour;
local image basis φ3: a 1 × 2 window containing the current pixel and its left neighbour;
local image basis φ4: a 2 × 1 window containing the current pixel and its upper neighbour;
local image basis φ5: a 2 × 1 window containing the current pixel and its lower neighbour;
local image basis φ6: a 2 × 2 window in which the current pixel is at the top-left corner;
local image basis φ7: a 2 × 2 window in which the current pixel is at the top-right corner;
local image basis φ8: a 2 × 2 window in which the current pixel is at the bottom-left corner;
local image basis φ9: a 2 × 2 window in which the current pixel is at the bottom-right corner;
in step A4 the training samples are divided into five classes, with corresponding contrasts 0, 0.25, 0.5, 0.75, and 1;
and the class information of the different local image bases is:
local image basis φ1 comprises two classes, with contrasts 0 and 1;
local image bases φ2, φ3, φ4, φ5 comprise three classes, with contrasts 0, 0.5, and 1;
local image bases φ6, φ7, φ8, φ9 comprise five classes, with contrasts 0, 0.25, 0.5, 0.75, and 1.
3. The method of claim 1 or 2, characterized in that in step A4 feature screening is performed by multi-class F-score feature selection.
4. The method of claim 1 or 2, characterized in that the first and second classifiers are obtained with naive Bayes classifiers.
5. The method of claim 1 or 2, characterized in that in step A3-1 the first prediction labels Ĉi are obtained by ten-fold cross-validation.
6. The method of claim 1 or 2, characterized in that in step A3-2 the combination coefficients ω1, ω2, ..., ωN are obtained by least squares.
CN201610589708.4A 2016-07-25 2016-07-25 Efficient noise-reducing visual image reconstruction method Active CN106251299B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610589708.4A CN106251299B (en) 2016-07-25 2016-07-25 Efficient noise-reducing visual image reconstruction method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610589708.4A CN106251299B (en) 2016-07-25 2016-07-25 Efficient noise-reducing visual image reconstruction method

Publications (2)

Publication Number Publication Date
CN106251299A true CN106251299A (en) 2016-12-21
CN106251299B CN106251299B (en) 2019-05-10

Family

ID=57603922

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610589708.4A Active CN106251299B (en) 2016-07-25 2016-07-25 Efficient noise-reducing visual image reconstruction method

Country Status (1)

Country Link
CN (1) CN106251299B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108573512A (en) * 2018-03-21 2018-09-25 电子科技大学 (University of Electronic Science and Technology of China) Complex visual image reconstruction method based on a deep encoding-decoding dual model
CN108960073A (en) * 2018-06-05 2018-12-07 大连理工大学 (Dalian University of Technology) Cross-modal image steganalysis method for biomedical literature
CN113362408A (en) * 2021-05-11 2021-09-07 山东师范大学 Bayes reconstruction method and system for brain activity multi-scale local contrast image
CN114469009A (en) * 2022-03-18 2022-05-13 电子科技大学 Facial pain expression grading evaluation method

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1777818A (en) * 2003-04-24 2006-05-24 Koninklijke Philips Electronics N.V. Fibre tracking magnetic resonance imaging
CN101224114A (en) * 2008-01-25 2008-07-23 Xi'an Jiaotong University High dynamic range regenerating method of X-ray image based on scale space decomposition
CN103247028A (en) * 2013-03-19 2013-08-14 Guangdong Polytechnic Normal University Multi-hypothesis prediction block compressed sensing image processing method
US20140023253A1 (en) * 2012-07-20 2014-01-23 Jan S. Hesthaven Image Reconstruction from Incomplete Fourier Measurements and Prior Edge Information
US20140180060A1 (en) * 2012-12-17 2014-06-26 Todd Parrish Methods and Systems for Automated Functional MRI in Clinical Applications
CN105068644A (en) * 2015-07-24 2015-11-18 Shandong University Method for detecting P300 electroencephalogram based on convolutional neural network

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
SUTAO SONG et al.: "Bayesian reconstruction of multiscale local contrast images from brain activity", 《JOURNAL OF NEUROSCIENCE METHODS》 *
YOICHI MIYAWAKI et al.: "Visual Image Reconstruction from Human Brain Activity using a Combination of Multiscale Local Image Decoders", 《NEURON》 *
SUTAO SONG et al.: "Advances in fMRI-based visual information decoding research", 《JOURNAL OF UNIVERSITY OF JINAN (NATURAL SCIENCE EDITION)》 *
JUANYING XIE et al.: "A feature selection method based on improved F-score and support vector machine", 《JOURNAL OF COMPUTER APPLICATIONS》 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108573512A (en) * 2018-03-21 2018-09-25 University of Electronic Science and Technology of China Complex visual image reconstruction method based on depth coding and decoding dual model
CN108573512B (en) * 2018-03-21 2021-04-30 University of Electronic Science and Technology of China Complex visual image reconstruction method based on depth coding and decoding dual model
CN108960073A (en) * 2018-06-05 2018-12-07 Dalian University of Technology Cross-modal image mode identification method for biomedical literature
CN108960073B (en) * 2018-06-05 2020-07-24 Dalian University of Technology Cross-modal image mode identification method for biomedical literature
CN113362408A (en) * 2021-05-11 2021-09-07 Shandong Normal University Bayesian reconstruction method and system for multi-scale local contrast images of brain activity
CN114469009A (en) * 2022-03-18 2022-05-13 University of Electronic Science and Technology of China Facial pain expression grading evaluation method

Also Published As

Publication number Publication date
CN106251299B (en) 2019-05-10

Similar Documents

Publication Publication Date Title
Noreen et al. A deep learning model based on concatenation approach for the diagnosis of brain tumor
Farooq et al. A deep CNN based multi-class classification of Alzheimer's disease using MRI
Işın et al. Review of MRI-based brain tumor image segmentation using deep learning methods
Cheng et al. CNNs based multi-modality classification for AD diagnosis
Jabason et al. Classification of Alzheimer’s disease from MRI data using an ensemble of hybrid deep convolutional neural networks
Yu et al. Hybrid dermoscopy image classification framework based on deep convolutional neural network and Fisher vector
CN106251299A (en) A kind of high-efficient noise-reducing visual pattern reconstructing method
CN106408001A (en) Rapid region-of-interest detection method based on deep kernelized hashing
CN106096636A (en) A progressive mild cognitive impairment recognition method based on neuroimaging
CN106127263A (en) Human brain magnetic resonance image (MRI) classification and identification method and system based on three-dimensional feature extraction
CN111714118A (en) Brain cognition model fusion method based on ensemble learning
CN109816630A (en) FMRI visual coding model building method based on transfer learning
Ortiz et al. Learning longitudinal MRI patterns by SICE and deep learning: Assessing the Alzheimer’s disease progression
CN109902682A (en) A mammary X-ray image detection method based on residual convolutional neural networks
Ben-Cohen et al. Anatomical data augmentation for CNN based pixel-wise classification
Aloyayri et al. Breast cancer classification from histopathological images using transfer learning and deep neural networks
Divya et al. A deep transfer learning framework for multi class brain tumor classification using MRI
Liu et al. Multi-LSTM networks for accurate classification of attention deficit hyperactivity disorder from resting-state fMRI data
Qian et al. 3D automatic segmentation of brain tumor based on deep neural network and multimodal MRI images
Sharma et al. A Suitable Approach for Classifying Skin Disease Using Deep Convolutional Neural Network
Pusparani et al. Diagnosis of Alzheimer’s disease using convolutional neural network with select slices by landmark on Hippocampus in MRI images
Castro et al. Generation of synthetic structural magnetic resonance images for deep learning pre-training
Rezaei et al. Brain abnormality detection by deep convolutional neural network
Yeung et al. Pipeline comparisons of convolutional neural networks for structural connectomes: predicting sex across 3,152 participants
CN109146005A (en) A brain cognitive ability analysis method based on sparse representation

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant