CN114898193A - Manifold learning-based image feature fusion method and device and image classification system - Google Patents
- Publication number
- CN114898193A CN114898193A CN202210809090.3A CN202210809090A CN114898193A CN 114898193 A CN114898193 A CN 114898193A CN 202210809090 A CN202210809090 A CN 202210809090A CN 114898193 A CN114898193 A CN 114898193A
- Authority
- CN
- China
- Prior art keywords
- image
- feature
- fusion
- manifold learning
- features
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G06V10/806—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G06V10/809—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of classification results, e.g. where the classifiers operate on the same input data
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Theoretical Computer Science (AREA)
- Medical Informatics (AREA)
- Computing Systems (AREA)
- Databases & Information Systems (AREA)
- Evolutionary Computation (AREA)
- General Health & Medical Sciences (AREA)
- Artificial Intelligence (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention discloses an image feature fusion method and device based on manifold learning, and an image classification system. First, multiple classes of features are acquired from a plurality of images to construct a feature set; each class of features in the feature set is treated as a view, and a graph Laplacian matrix is constructed under each single view; then a manifold learning-based multi-view feature selection and fusion model is constructed and solved to obtain the fused image features. The invention makes fuller use of the structural information of the data and attends more to the overall distribution of the data and the variation trend of the features than to the absolute values of specific features, so it has the potential to weaken the multi-center effect and focuses on the essential disease representation of the medical image rather than on the disturbance caused by multiple centers; meanwhile, the manifold learning-based image fusion device can fuse the features of different views more effectively, improving the classification performance of medical images.
Description
Technical Field
The invention relates to the field of medical images, in particular to an image feature fusion method and device based on manifold learning and an image classification system.
Background
In the field of medical imaging, a large number of different kinds of features are generally extracted, such as features of different modalities, frequency features, spatial features, and so on. These different kinds of features reflect different characteristics of an image and are very important for the diagnosis of diseases. However, the numerical ranges, distributions, and so on of the different features differ significantly; if the features are input into a classifier directly without processing, classification performance degrades, so effective feature fusion of the different features is needed.
The multi-center effect refers to the phenomenon that a model trained on data from center A generally performs worse when tested on data from center B, because differences in scanner equipment, acquisition protocols, and reconstruction parameters across centers lead to different data distributions. It is particularly pronounced for diagnostic models based on medical image classification.
Effectively fusing features of different types from different centers is therefore a difficult point.
At present, traditional methods for addressing the multi-center effect in medical image features mainly use data-processing methods such as Bayesian theory to eliminate differences in the mean, variance, and so on of multi-center radiomics data. Such methods mechanically adjust the mean and variance of features from different centers to the same level; in the process, the essential disease representation of the medical image is sometimes eliminated as well, degrading the performance of multi-center medical image diagnosis models.
Manifold learning has received a great deal of attention in recent years and has found many applications in machine learning fields such as computer vision, speech recognition, and recommendation systems. Its advantage is that it can use the structural information of the data and attends more to the overall distribution of the data and the variation trend of the features than to the absolute values of specific features; it therefore has the potential to weaken the multi-center effect and to focus on the essential disease representation of a medical image rather than on the disturbance caused by multiple centers. Developing an image feature fusion method based on manifold learning is thus of great significance.
Disclosure of Invention
The invention aims to provide an image feature fusion method and device based on manifold learning, and an image classification system, to address the deficiencies of prior-art feature fusion methods, in particular the multi-center effect of medical images.
The technical scheme adopted by the invention is as follows:
an image feature fusion method based on manifold learning comprises the following steps:
acquiring multiple classes of features from a plurality of images to construct a feature set; treating each class of features in the feature set as a view, and constructing a graph Laplacian matrix L_i under each single view, i = 1, …, K, where K denotes the number of feature classes;
constructing and solving a manifold learning-based multi-view feature selection and fusion model to obtain the fused image features; the manifold learning-based multi-view feature selection and fusion model is represented as follows:

min_{Y,α} Σ_{i=1}^{K} α_i (p − tr(Y Y^T Z_i Z_i^T)) + γ ‖α‖²,  s.t. Y^T Y = I, Σ_{i=1}^{K} α_i = 1, α_i ≥ 0

where Y is the fused image feature matrix, in which each row vector corresponds to the fused features of one image; Z_i is the low-dimensional representation of the i-th view features; p is the feature dimension after dimension reduction; α_i is the weight corresponding to the i-th view features; tr(·) denotes the trace of a matrix; and γ is a weight parameter.
Further, the graph Laplacian matrix L_i is obtained as:

L_i = D_i − W_i

where D_i is the N × N diagonal matrix under the i-th single view, whose diagonal elements are d_mm = Σ_n (W_i)_mn, with 1 ≤ m, n ≤ N and m ≠ n, and N represents the number of samples; W_i is the N × N correlation matrix under the i-th single view, and (W_i)_mn, the element in the m-th row and n-th column of W_i, represents the correlation between the m-th and n-th samples based on the i-th view features.
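As an illustrative sketch (not part of the patent itself), the construction L_i = D_i − W_i can be written in a few lines of NumPy; the toy 3-sample correlation matrix below is hypothetical:

```python
import numpy as np

def graph_laplacian(W):
    """Graph Laplacian L = D - W, where D is the diagonal degree matrix
    with d_mm = sum over n != m of W_mn."""
    W = np.asarray(W, dtype=float)
    np.fill_diagonal(W, 0.0)          # the definition excludes m == n
    D = np.diag(W.sum(axis=1))
    return D - W

# hypothetical correlation matrix for one view, N = 3 samples
W = np.array([[0.0, 0.8, 0.2],
              [0.8, 0.0, 0.5],
              [0.2, 0.5, 0.0]])
L = graph_laplacian(W)
print(L.sum(axis=1))   # rows of a graph Laplacian sum to zero
```

Row sums vanishing is a quick sanity check that the degree matrix was built from the same affinities it is subtracted against.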
Further, the image is a medical image, in particular a CT image, a PET image, an ultrasound image or an OCT image.
Further, the medical images are acquired at different centers.
Further, the feature classes include at least two of shape, texture, grayscale, gradient, and frequency-domain features.
An image feature fusion device based on manifold learning, comprising:
the data processing module is used for acquiring multiple classes of features from a plurality of images to construct a feature set, treating each class of features in the feature set as a view, and constructing a graph Laplacian matrix L_i under each single view, i = 1, …, K, where K denotes the number of feature classes;
the feature fusion module is used for constructing and solving a manifold learning-based multi-view feature selection and fusion model to obtain the fused image features; the manifold learning-based multi-view feature selection and fusion model is represented as follows:

min_{Y,α} Σ_{i=1}^{K} α_i (p − tr(Y Y^T Z_i Z_i^T)) + γ ‖α‖²,  s.t. Y^T Y = I, Σ_{i=1}^{K} α_i = 1, α_i ≥ 0

where Y is the fused image feature matrix, in which each row vector is the fused features of one image; Z_i is the low-dimensional representation of the i-th view features; p is the feature dimension after dimension reduction; α_i is the weight corresponding to the i-th view features; tr(·) denotes the trace of a matrix; and γ is a weight parameter.
An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the manifold learning-based image feature fusion method as described above when executing the computer program.
A storage medium containing computer executable instructions which, when executed by a computer processor, implement a manifold learning-based image feature fusion method as described above.
An image classification system comprising:
the image feature fusion device based on manifold learning;
and the classification module is used for classifying according to the image characteristics acquired by the image characteristic fusion device.
The invention has the following beneficial effects: the invention makes fuller use of the structural information of the data and attends more to the overall distribution of the data and the variation trend of the features than to the absolute values of specific features, so it has the potential to weaken the multi-center effect and focuses on the essential disease representation of the medical image rather than on the disturbance caused by multiple centers; meanwhile, the manifold learning-based image fusion device can effectively fuse the features of different views, improving the classification performance of medical images.
Drawings
FIG. 1 is a flow chart of an exemplary manifold learning-based image feature fusion method of the present invention;
FIG. 2 is a flowchart of an exemplary manifold learning-based CT image feature fusion method of the present invention;
FIG. 3 is a block diagram of an exemplary manifold learning-based image feature fusion apparatus according to the present invention;
FIG. 4 is a block diagram of an exemplary electronic device of the present invention;
FIG. 5 is a block diagram of an exemplary image classification system of the present invention.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application.
As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various kinds of information, such information should not be limited by these terms. These terms are only used to distinguish one kind of information from another. For example, first information may also be referred to as second information and, similarly, second information may also be referred to as first information, without departing from the scope of the present application. Depending on the context, the word "if" as used herein may be interpreted as "upon", "when", or "in response to determining".
Fig. 1 is a flowchart of an exemplary manifold learning-based image feature fusion method according to the present invention, and as shown in fig. 1, the manifold learning-based image feature fusion method according to the present invention includes:
acquiring multiple classes of features from a plurality of images to construct a feature set; treating each class of features in the feature set as a view, and constructing a graph Laplacian matrix L_i under each single view, i = 1, …, K, where K denotes the number of feature classes;
constructing and solving a manifold learning-based multi-view feature selection and fusion model to obtain fused image features;
the method comprises the following steps of constructing and solving a manifold learning-based multi-view feature selection and fusion model:
The construction and solution proceed from two considerations. First, although the features under different views are affected by noise and other factors, the Laplacian matrices of the different views should share a certain topological similarity. Second, since the Laplacian matrix is a high-dimensional representation of the correlations between samples, the consistency or similarity between different views can usually be measured through their corresponding low-dimensional representations.
For a single view, the low-dimensional representation Z_i can be obtained by Laplacian eigen-decomposition. Specifically, by spectral graph theory, eigenvalue decomposition is performed on the graph Laplacian matrix L_i, and the eigenvectors corresponding to its p smallest eigenvalues can serve as a characterization of the latent structure of the graph; that is, the following objective function is solved:

Z_i = argmin_{Z^T Z = I} tr(Z^T L_i Z)

where p is the feature dimension after dimension reduction, N denotes the number of samples, and tr(·) denotes the trace of a matrix; Z_i is therefore an N × p matrix with orthonormal columns.
Because each Z_i is orthogonal, the low-dimensional representations of the multiple views are distributed on a special Grassmann manifold; that is, the low-dimensional representation of each view is a point on the Grassmann manifold. Thus, the similarity or distance between different views can be accurately represented by their geodesic distance on the manifold, calculated in the following form:

d²(Z_i, Z_j) = p − tr(Z_i Z_i^T Z_j Z_j^T)

where 1 ≤ i, j ≤ K and i ≠ j, and p represents the feature dimension after dimension reduction. Considering that abnormal features caused by noise or other factors may exist, the feature selection vector needs to be constrained to a certain extent, so a weight α_i is assigned to the features of each view (Σ_i α_i = 1, α_i ≥ 0). Combining the above considerations, the manifold learning-based multi-view feature selection and fusion model is constructed as follows:
min_{Y,α} Σ_{i=1}^{K} α_i (p − tr(Y Y^T Z_i Z_i^T)) + γ ‖α‖²,  s.t. Y^T Y = I, Σ_{i=1}^{K} α_i = 1, α_i ≥ 0

where Z_i is the matrix formed by the eigenvectors corresponding to the p smallest eigenvalues in the eigenvalue decomposition of L_i, and p is the feature dimension after dimension reduction; Y is the fused image feature matrix, in which each row vector corresponds to the new, fused features of one image; γ is a weight parameter; and s.t. means "subject to".
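One common squared projection distance on the Grassmann manifold between orthonormal representations is d² = p − tr(Z_i Z_iᵀ Z_j Z_jᵀ), which appears to be the form intended here; an illustrative NumPy sketch (the random subspaces are hypothetical):

```python
import numpy as np

def grassmann_dist_sq(Zi, Zj):
    """Squared projection distance between two p-dimensional subspaces
    spanned by the orthonormal columns of Zi and Zj:
    d^2 = p - tr(Zi Zi^T Zj Zj^T)."""
    p = Zi.shape[1]
    return p - np.trace(Zi @ Zi.T @ Zj @ Zj.T)

rng = np.random.default_rng(0)
A = np.linalg.qr(rng.standard_normal((6, 2)))[0]   # orthonormal basis, 6 x 2
B = np.linalg.qr(rng.standard_normal((6, 2)))[0]
d_self = grassmann_dist_sq(A, A)   # distance of a subspace to itself is 0
d_ab = grassmann_dist_sq(A, B)     # lies in [0, p]
```

Since A has orthonormal columns, tr(A Aᵀ A Aᵀ) = tr(A Aᵀ) = p, so the self-distance is exactly zero; this also makes the distance depend only on the spanned subspace, not on the particular basis.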
The method can realize feature fusion, reduce the feature dimension, synthesize multi-view features, and effectively extract the essential disease representation of the image. Further, the images can be acquired by different centers/devices; since the method is based on manifold learning and attends more to the overall distribution of the data and the variation trend of the features than to the absolute values of specific features, it can effectively weaken the multi-center effect and resolve errors caused by acquisition at different centers/devices. Illustratively, when classification model training and testing need to be performed on data acquired at two centers A and B, feature fusion can be applied to a multi-class feature set constructed from the data of center A and the data of center B to obtain fused image features, which are then used to train the classifier and perform classification, yielding an image classification result in which the center effect is eliminated.
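The patent does not spell out a solver. One plausible sketch is an alternating scheme for an objective of the kind described above: weighted Grassmann projection distances between the fused representation Y and each view's Z_i, plus a γ-weighted squared penalty on the view weights α constrained to the probability simplex. Everything below (function names, the exact objective form, the toy data) is an assumption for illustration:

```python
import numpy as np

def simplex_project(v):
    """Euclidean projection of v onto {a : a_i >= 0, sum_i a_i = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    idx = np.arange(1, v.size + 1)
    rho = np.nonzero(u + (1.0 - css) / idx > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0.0)

def fuse_views(Z_list, p, gamma=1.0, n_iter=30):
    """Alternating minimization of
       sum_i alpha_i * (p - tr(Y Y^T Z_i Z_i^T)) + gamma * ||alpha||^2
       s.t. Y^T Y = I, alpha on the probability simplex."""
    K = len(Z_list)
    alpha = np.full(K, 1.0 / K)
    for _ in range(n_iter):
        # Y-step: with alpha fixed, maximize tr(Y^T M Y) where
        # M = sum_i alpha_i Z_i Z_i^T  ->  top-p eigenvectors of M
        M = sum(a * (Z @ Z.T) for a, Z in zip(alpha, Z_list))
        Y = np.linalg.eigh(M)[1][:, -p:]
        # alpha-step: objective is gamma * ||alpha + d/(2*gamma)||^2 + const,
        # so alpha is the simplex projection of -d/(2*gamma)
        d = np.array([p - np.trace(Y @ Y.T @ Z @ Z.T) for Z in Z_list])
        alpha = simplex_project(-d / (2.0 * gamma))
    return Y, alpha

# three hypothetical views: orthonormal bases of random 8 x 2 matrices
rng = np.random.default_rng(1)
Z_list = [np.linalg.qr(rng.standard_normal((8, 2)))[0] for _ in range(3)]
Y, alpha = fuse_views(Z_list, p=2, gamma=0.5)
```

Each sub-problem has a closed-form solution, so the loop monotonically decreases the assumed objective; larger γ drives the weights toward a uniform distribution over views.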
Further, the invention is particularly applicable to medical image classification; the medical image may be a CT image, a PET image, an ultrasound image, an OCT image, or the like. The feature classes include at least two of shape, texture, grayscale, gradient, and frequency-domain features (wavelet features, etc.).
In the following, the present invention is described in further detail by taking the fusion of radiomics features of CT images acquired at two centers (center A and center B) as an example:
the method firstly obtains the characteristics of the imagery omics from the CT images of the center A and the center B based on the method of the imagery omics. Secondly, an image correlation matrix representing a single visual angle is constructed according to the image group characteristics of a single category, and an image Laplace matrix is calculated and is used as the input of manifold learning, so that the overall data distribution of a whole batch of samples is concerned more, and the multi-center effect is eliminated; and finally, learning and extracting fused low-dimensional features from the multi-view graph Laplacian matrix by constructing a manifold learning method, and representing by using the low-dimensional features, thereby effectively integrating the multi-view features, being beneficial to removing disturbance caused by multiple centers and keeping the original characteristics of biological characteristics. Specifically, as shown in fig. 2, the method comprises the following steps:
(1) Radiomics-based feature extraction
Radiomics features such as shape, texture, and grayscale are extracted from the CT images of center A and center B to construct a multi-view radiomics feature set X:

X = {x_1, x_2, …, x_K}

where x_i is the radiomics feature of the i-th view, i.e., the i-th class (e.g., grayscale or texture), and K is the number of extracted views.
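A real radiomics pipeline extracts rich shape, texture, and grayscale descriptors; the following is a purely illustrative stand-in showing only the multi-view structure of X (a "grayscale" view and a "gradient" view computed with NumPy; all names and statistics are hypothetical):

```python
import numpy as np

def toy_multiview_features(images):
    """Illustrative stand-in for radiomics extraction: a 'grayscale' view
    (intensity statistics) and a 'gradient' view (finite-difference
    magnitude statistics) per image."""
    gray, grad = [], []
    for img in images:
        img = img.astype(float)
        gray.append([img.mean(), img.std(), np.median(img)])
        gy, gx = np.gradient(img)           # finite differences along rows/cols
        mag = np.hypot(gx, gy)
        grad.append([mag.mean(), mag.std(), mag.max()])
    return [np.array(gray), np.array(grad)]  # X = {x_1, x_2}, K = 2

rng = np.random.default_rng(4)
images = [rng.integers(0, 255, (16, 16)) for _ in range(5)]  # 5 toy "CT" slices
X = toy_multiview_features(images)
```

Each element of X is an N × d matrix (here 5 × 3), one row per sample, which is exactly the shape the single-view correlation-graph construction below consumes.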
(2) Construction of the graph Laplacian matrix under a single view
For each single-view feature x_i in the constructed multi-view feature set, a graph correlation matrix W_i is built; its element in the m-th row and n-th column, (W_i)_mn, is expressed as follows:

(W_i)_mn = Cov(x_im, x_in) / (σ_im σ_in) = E[(x_im − μ_im)(x_in − μ_in)] / (σ_im σ_in)

where x_im / x_in denote the feature vectors of the m-th / n-th samples under the i-th view, with 1 ≤ m, n ≤ N, m ≠ n, and N representing the number of samples; Cov(·,·) represents covariance; σ_im / σ_in represent the standard deviations of the feature vectors of the m-th / n-th samples under the i-th view; μ_im / μ_in represent the corresponding means; and E represents expectation.
Based on the constructed graph correlation matrix, the graph Laplacian matrix can be calculated as:

L_i = D_i − W_i

where D_i is the N × N diagonal matrix under the i-th single view, whose diagonal elements are d_mm = Σ_n (W_i)_mn.
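A minimal NumPy sketch of this sample-to-sample Pearson-correlation affinity for one view (illustrative; `np.corrcoef` computes exactly the Cov/(σ·σ) ratio between sample rows):

```python
import numpy as np

def correlation_graph(X):
    """Sample-sample affinity for one view: (W)_{mn} is the Pearson
    correlation between the feature vectors of samples m and n.
    X is an N x d matrix (N samples, d features of this view);
    the diagonal is zeroed to match the m != n definition."""
    W = np.corrcoef(X)        # N x N correlation between rows of X
    np.fill_diagonal(W, 0.0)
    return W

rng = np.random.default_rng(2)
X = rng.standard_normal((5, 20))   # 5 samples, 20 features of one view
W = correlation_graph(X)
```

Because Pearson correlation normalizes away each sample's mean and scale, this affinity depends on the pattern of a sample's features rather than their absolute magnitudes, which is in keeping with the multi-center argument above.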
(3) Feature selection and fusion based on manifold learning
The manifold learning-based multi-view feature selection and fusion model is constructed and solved to obtain the fused image features; the model is expressed as follows:

min_{Y,α} Σ_{i=1}^{K} α_i (p − tr(Y Y^T Z_i Z_i^T)) + γ ‖α‖²,  s.t. Y^T Y = I, Σ_{i=1}^{K} α_i = 1, α_i ≥ 0

where Z_i is the low-dimensional representation of the i-th view features, i.e., the matrix formed by the eigenvectors corresponding to the p smallest eigenvalues in the eigenvalue decomposition of L_i, and p is the feature dimension after dimension reduction; Y is the fused image feature matrix, in which each row vector is the new, fused features of one image; α_i is the weight corresponding to the i-th view features; tr(·) denotes the trace of a matrix; and γ is a weight parameter.
Corresponding to the embodiment of the image feature fusion method based on manifold learning, the invention also provides an embodiment of an image feature fusion device based on manifold learning.
Referring to fig. 3, an image feature fusion apparatus based on manifold learning according to an embodiment of the present invention includes:
the data processing module is used for acquiring multiple classes of features from a plurality of images to construct a feature set, treating each class of features in the feature set as a view, and constructing a graph Laplacian matrix L_i under each single view, i = 1, …, K, where K denotes the number of feature classes;
the feature fusion module is used for constructing and solving a manifold learning-based multi-view feature selection and fusion model to obtain the fused image features; the manifold learning-based multi-view feature selection and fusion model is represented as follows:

min_{Y,α} Σ_{i=1}^{K} α_i (p − tr(Y Y^T Z_i Z_i^T)) + γ ‖α‖²,  s.t. Y^T Y = I, Σ_{i=1}^{K} α_i = 1, α_i ≥ 0

where Z_i is the low-dimensional representation of the i-th view features, i.e., the matrix formed by the eigenvectors corresponding to the p smallest eigenvalues in the eigenvalue decomposition of L_i, and p is the feature dimension after dimension reduction; Y is the fused image feature matrix, in which each row vector is the new, fused features of one image; α_i is the weight corresponding to the i-th view features; tr(·) denotes the trace of a matrix; and γ is a weight parameter.
Further, the present invention also provides an electronic device, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor, and when the processor executes the computer program, the processor implements the image feature fusion method based on manifold learning in the above embodiments.
The electronic device of the present invention is any device having data processing capability, for example a computer or similar apparatus.
In terms of hardware, the apparatus in a logical sense is formed by the processor of the device with data processing capability reading the corresponding computer program instructions from non-volatile memory into memory and running them. As shown in fig. 4, besides the processor, memory, network interface, and non-volatile memory shown in fig. 4, the device with data processing capability in which the apparatus of the embodiment is located may generally include other hardware according to its actual functions.
The implementation process of the functions and actions of each unit in the electronic device is specifically described in the implementation process of the corresponding step in the method, and is not described herein again.
For the electronic device embodiment, since it basically corresponds to the method embodiment, reference may be made to the partial description of the method embodiment for relevant points. The above-described embodiments of the electronic device are merely illustrative, wherein the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the scheme of the invention. One of ordinary skill in the art can understand and implement it without inventive effort.
An embodiment of the present invention further provides a computer-readable storage medium, on which a program is stored, and when the program is executed by a processor, the method for fusing image features based on manifold learning in the foregoing embodiments is implemented.
The computer readable storage medium may be an internal storage unit, such as a hard disk or a memory, of any data processing device described in any previous embodiment. The computer readable storage medium can be any device with data processing capability, such as a plug-in hard disk, a Smart Media Card (SMC), an SD Card, a Flash memory Card (Flash Card), etc. provided on the device. Further, the computer readable storage medium may include both an internal storage unit and an external storage device of any data processing capable device. The computer-readable storage medium is used for storing the computer program and other programs and data required by the arbitrary data processing-capable device, and may also be used for temporarily storing data that has been output or is to be output.
In addition, the present invention also provides an image classification system, as shown in fig. 5, including:
the image feature fusion device based on manifold learning;
and the classification module is used for classifying according to the image characteristics acquired by the image characteristic fusion device.
The classification module may adopt a common support vector machine (SVM), a decision tree, a random forest, or the like; their usage is well known and is not described again here.
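Any off-the-shelf classifier (SVM, decision tree, random forest) can consume the rows of the fused feature matrix Y. As a dependency-free illustration of that interface only, a hypothetical nearest-centroid stand-in (the toy clusters below play the role of fused features):

```python
import numpy as np

def fit_centroids(Y, labels):
    """Per-class centroids in the fused feature space (rows of Y = images)."""
    classes = np.unique(labels)
    return classes, np.stack([Y[labels == c].mean(axis=0) for c in classes])

def predict(Y, classes, centroids):
    """Assign each row of Y to the class with the nearest centroid."""
    dists = np.linalg.norm(Y[:, None, :] - centroids[None, :, :], axis=2)
    return classes[dists.argmin(axis=1)]

# two well-separated toy clusters standing in for fused features Y
rng = np.random.default_rng(3)
Y = np.vstack([rng.normal(0, 0.1, (10, 4)), rng.normal(3, 0.1, (10, 4))])
labels = np.array([0] * 10 + [1] * 10)
classes, centroids = fit_centroids(Y, labels)
pred = predict(Y, classes, centroids)
```

In practice one would substitute an SVM or random-forest implementation here; the point is only that the fused rows of Y are an ordinary sample-by-feature matrix.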
It should be understood that the above examples are given only for clarity of illustration and are not intended to limit the embodiments. Other variations and modifications will be apparent to persons skilled in the art in light of the above description; it is neither necessary nor possible to enumerate all embodiments exhaustively here. Obvious variations or modifications remain within the protection scope of the invention.
Claims (9)
1. An image feature fusion method based on manifold learning is characterized by comprising the following steps:
acquiring multiple classes of features from a plurality of images to construct a feature set; treating each class of features in the feature set as a view, and constructing a graph Laplacian matrix L_i under each single view, i = 1, …, K, where K denotes the number of feature classes;
constructing and solving a manifold learning-based multi-view feature selection and fusion model to obtain the fused image features; the manifold learning-based multi-view feature selection and fusion model is represented as follows:

min_{Y,α} Σ_{i=1}^{K} α_i (p − tr(Y Y^T Z_i Z_i^T)) + γ ‖α‖²,  s.t. Y^T Y = I, Σ_{i=1}^{K} α_i = 1, α_i ≥ 0

where Y is the fused image feature matrix, in which each row vector corresponds to the fused features of one image; Z_i is the low-dimensional representation of the i-th view features; p is the feature dimension after dimension reduction; α_i is the weight corresponding to the i-th view features; tr(·) denotes the trace of a matrix; and γ is a weight parameter.
2. The method of claim 1, wherein the graph Laplacian matrix L_i is obtained as:

L_i = D_i − W_i

where D_i is the N × N diagonal matrix under the i-th single view, whose diagonal elements are d_mm = Σ_n (W_i)_mn, with 1 ≤ m, n ≤ N and m ≠ n, and N represents the number of samples; W_i is the N × N correlation matrix under the i-th single view, and (W_i)_mn, the element in the m-th row and n-th column of W_i, represents the correlation between the m-th and n-th samples based on the i-th view features.
3. The method according to claim 1, wherein the image is a medical image, in particular a CT image, a PET image, an ultrasound image, or an OCT image.
4. The method according to claim 3, wherein the medical images are acquired at different centers.
5. The method according to claim 1, wherein the feature classes comprise at least two of shape, texture, grayscale, gradient, and frequency-domain features.
6. An image feature fusion device based on manifold learning, comprising:
the data processing module is used for acquiring multiple classes of features from a plurality of images to construct a feature set, treating each class of features in the feature set as a view, and constructing a graph Laplacian matrix L_i under each single view, i = 1, …, K, where K denotes the number of feature classes;
the feature fusion module is used for constructing and solving a manifold learning-based multi-view feature selection and fusion model to obtain the fused image features; the manifold learning-based multi-view feature selection and fusion model is represented as follows:

min_{Y,α} Σ_{i=1}^{K} α_i (p − tr(Y Y^T Z_i Z_i^T)) + γ ‖α‖²,  s.t. Y^T Y = I, Σ_{i=1}^{K} α_i = 1, α_i ≥ 0

where Y is the fused image feature matrix, in which each row vector is the fused features of one image; Z_i is the low-dimensional representation of the i-th view features; p is the feature dimension after dimension reduction; α_i is the weight corresponding to the i-th view features; tr(·) denotes the trace of a matrix; and γ is a weight parameter.
7. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the manifold learning based image feature fusion method according to any one of claims 1 to 5 when executing the computer program.
8. A storage medium containing computer executable instructions which, when executed by a computer processor, implement the manifold learning based image feature fusion method according to any one of claims 1-5.
9. An image classification system, comprising:
the manifold learning-based image feature fusion apparatus as claimed in claim 6;
and the classification module is used for classifying according to the image characteristics acquired by the image characteristic fusion device.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210809090.3A CN114898193A (en) | 2022-07-11 | 2022-07-11 | Manifold learning-based image feature fusion method and device and image classification system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114898193A true CN114898193A (en) | 2022-08-12 |
Family
ID=82730404
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210809090.3A Pending CN114898193A (en) | 2022-07-11 | 2022-07-11 | Manifold learning-based image feature fusion method and device and image classification system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114898193A (en) |
Citations (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6226418B1 (en) * | 1997-11-07 | 2001-05-01 | Washington University | Rapid convolution based large deformation image matching via landmark and volume imagery |
US20130178952A1 (en) * | 2010-06-28 | 2013-07-11 | Precitec Itm Gmbh | Method for closed-loop controlling a laser processing operation and laser material processing head using the same |
CN106203511A (en) * | 2016-06-12 | 2016-12-07 | 湘潭大学 | An image similarity block evaluation method |
CN106778885A (en) * | 2016-12-26 | 2017-05-31 | 重庆大学 | Hyperspectral image classification method based on local manifold embedding |
CN107239777A (en) * | 2017-05-13 | 2017-10-10 | 大连理工大学 | A tableware detection and recognition method based on a multi-view graph model |
CN107885787A (en) * | 2017-10-18 | 2018-04-06 | 大连理工大学 | Image retrieval method based on spectral-embedding multi-view feature fusion |
CN111340768A (en) * | 2020-02-21 | 2020-06-26 | 之江实验室 | Multi-center effect compensation method based on PET/CT intelligent diagnosis system |
CN111738370A (en) * | 2020-08-25 | 2020-10-02 | 湖南大学 | Image feature fusion and clustering collaborative representation method and system preserving intrinsic manifold structure |
CN112801159A (en) * | 2021-01-21 | 2021-05-14 | 中国人民解放军国防科技大学 | Zero- and few-shot machine learning method and system fusing images and their text descriptions |
CN113288170A (en) * | 2021-05-13 | 2021-08-24 | 浙江大学 | Electroencephalogram signal calibration method based on fuzzy processing |
US20210406560A1 (en) * | 2020-06-25 | 2021-12-30 | Nvidia Corporation | Sensor fusion for autonomous machine applications using machine learning |
CN114004998A (en) * | 2021-11-03 | 2022-02-01 | 中国人民解放军国防科技大学 | Unsupervised polarization SAR image terrain classification method based on multi-view tensor product diffusion |
WO2022029218A1 (en) * | 2020-08-05 | 2022-02-10 | Katholieke Universiteit Leuven | Method for data fusion |
WO2022100497A1 (en) * | 2020-11-13 | 2022-05-19 | 上海健康医学院 | Method for determining mutation state of epidermal growth factor receptor, and medium and electronic device |
CN114528917A (en) * | 2022-01-14 | 2022-05-24 | 中山大学 | Dictionary learning algorithm for SPD data based on Riemannian manifold tangent space and local homeomorphism |
US20220215548A1 (en) * | 2020-05-09 | 2022-07-07 | Tencent Technology (Shenzhen) Company Limited | Method and device for identifying abnormal cell in to-be-detected sample, and storage medium |
Non-Patent Citations (5)
Title |
---|
DEFU YANG et al.: "Group-wise Hub Identification by Learning Common Graph Embeddings on Grassmannian Manifold", IEEE Transactions on Pattern Analysis and Machine Intelligence *
HAOYANG XUE et al.: "RGB-D saliency detection via mutual guided manifold ranking", 2015 IEEE International Conference on Image Processing (ICIP) *
JINGLIANG HU et al.: "A Topological Data Analysis Guided Fusion Algorithm: Mapper-Regularized Manifold Alignment", IGARSS 2019 - 2019 IEEE International Geoscience and Remote Sensing Symposium *
YUYUAN YANG et al.: "Manifold Learning of Dynamic Functional Connectivity Reliably Identifies Functionally Consistent Coupling Patterns in Human Brains", Brain Sciences *
LIN Xiaojia: "Research on a medical image classification system based on the improved AdaBoost M1 algorithm", Journal of Liaocheng University (Natural Science Edition) *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11593943B2 (en) | RECIST assessment of tumour progression | |
CN111310731B (en) | Video recommendation method, device, equipment and storage medium based on artificial intelligence | |
CN108108662B (en) | Deep neural network recognition model and recognition method | |
EP3333768A1 (en) | Method and apparatus for detecting target | |
US10853409B2 (en) | Systems and methods for image search | |
CN110659665B (en) | Model construction method of different-dimension characteristics and image recognition method and device | |
US9152926B2 (en) | Systems, methods, and media for updating a classifier | |
Lepsøy et al. | Statistical modelling of outliers for fast visual search | |
WO2021120961A1 (en) | Brain addiction structure map evaluation method and apparatus | |
CN109685830B (en) | Target tracking method, device and equipment and computer storage medium | |
CN112215119A (en) | Small target identification method, device and medium based on super-resolution reconstruction | |
CN111523578B (en) | Image classification method and device and neural network model training method and device | |
CN115205547A (en) | Target image detection method and device, electronic equipment and storage medium | |
CN111507288A (en) | Image detection method, image detection device, computer equipment and storage medium | |
CN117690128A (en) | Embryo cell multi-core target detection system, method and computer readable storage medium | |
CN113592769B (en) | Abnormal image detection and model training method, device, equipment and medium | |
CN115170401A (en) | Image completion method, device, equipment and storage medium | |
CN111311594A (en) | No-reference image quality evaluation method | |
CN111382791A (en) | Deep learning task processing method, image recognition task processing method and device | |
CN111414579B (en) | Method and system for acquiring brain region association information based on multi-angle association relation | |
WO2015052919A1 (en) | Medical image processing device and operation method therefore, and medical image processing program | |
CN112927235A (en) | Brain tumor image segmentation method based on multi-scale superpixel and nuclear low-rank representation | |
CN117435896A (en) | Verification aggregation method without segmentation under unbalanced classification scene | |
CN114898193A (en) | Manifold learning-based image feature fusion method and device and image classification system | |
CN114821205B (en) | Image processing method, device and equipment based on multi-dimensional features |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20220812 |