CN116246774B - Classification method, device and equipment based on information fusion - Google Patents

Classification method, device and equipment based on information fusion

Info

Publication number
CN116246774B
CN116246774B · CN202310252573.2A
Authority
CN
China
Prior art keywords
image data
meniscus
data
processed
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310252573.2A
Other languages
Chinese (zh)
Other versions
CN116246774A (en)
Inventor
孙安澜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Yizhun Intelligent Technology Co ltd
Original Assignee
Zhejiang Yizhun Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Yizhun Intelligent Technology Co., Ltd.
Priority to CN202310252573.2A
Publication of CN116246774A
Application granted
Publication of CN116246774B
Legal status: Active

Classifications

    • G16H 50/20: ICT specially adapted for medical diagnosis, medical simulation or medical data mining, for computer-aided diagnosis, e.g. based on medical expert systems
    • G06N 3/0464: Convolutional networks [CNN, ConvNet]
    • G06V 10/40: Extraction of image or video features
    • G06V 10/764: Image or video recognition or understanding using pattern recognition or machine learning, using classification, e.g. of video objects
    • G06V 10/765: Using rules for classification or partitioning the feature space
    • G06V 10/80: Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V 10/806: Fusion of extracted features
    • G06V 10/82: Using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Biomedical Technology (AREA)
  • Public Health (AREA)
  • Data Mining & Analysis (AREA)
  • Primary Health Care (AREA)
  • Epidemiology (AREA)
  • Pathology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the specification discloses a classification method, device and equipment based on information fusion. The classification method comprises the following steps: acquiring first image data, second image data and third image data to be processed; inputting the first image data, the second image data and the third image data to be processed into a preset first feature extraction model, a preset second feature extraction model and a preset third feature extraction model respectively, to obtain first feature data, second feature data and third feature data; splicing the first feature data, the second feature data and the third feature data to obtain spliced feature data; taking the spliced feature data as the input of a modified Transformer encoder to obtain fusion feature data output by the modified Transformer encoder; and inputting the fusion feature data output by the modified Transformer encoder into preset classification models respectively to obtain a diagnosis result.

Description

Classification method, device and equipment based on information fusion
Technical Field
The present disclosure relates to the field of medical imaging and computer technologies, and in particular, to a classification method, apparatus, and device based on information fusion.
Background
In clinical diagnosis, many diseases are associated with one another at the level of etiology and other factors, so their joint incidence is high, and this association forms clinical prior experience. A similar situation exists in knee MRI: conditions such as meniscal injury, bone bruise and anterior cruciate ligament injury are strongly correlated. This is because the most common knee injury is a sports injury; the knee joint bears a considerable part of the body weight while also being a moving joint, and most injuries occur during joint twisting, severe collision and the like. Such injuries are often accompanied by injuries to several structures at once, so clinicians observe other related structures while examining one injury.
In the prior art, diagnosis and analysis are generally carried out using the information of a single anatomical part, and information from multiple parts cannot be fused, so the accuracy and reliability of the diagnosis result are poor and its reference value is low.
Based on this, a classification method based on information fusion is required.
Disclosure of Invention
The embodiment of the specification provides a classification method, device and equipment based on information fusion, which are used to solve the following technical problem: in the prior art, information from multiple anatomical parts cannot be fused, so the accuracy of the diagnosis result is poor, the reliability is low, and the reference value is low.
In order to solve the above technical problems, the embodiments of the present specification are implemented as follows:
the embodiment of the specification provides a classification method based on information fusion, which comprises the following steps:
acquiring first image data to be processed, second image data to be processed and third image data to be processed, wherein the first image data is image data of a meniscus, the second image data is image data of an anterior cruciate ligament, and the third image data is image data of bones;
inputting the first image data to be processed, the second image data to be processed and the third image data to be processed into a preset first feature extraction model, a preset second feature extraction model and a preset third feature extraction model respectively, to obtain first feature data, second feature data and third feature data;
splicing the first characteristic data, the second characteristic data and the third characteristic data to obtain spliced characteristic data;
taking the spliced feature data as the input of a modified Transformer encoder to obtain fusion feature data output by the modified Transformer encoder;
and inputting the fusion feature data output by the modified Transformer encoder into preset classification models respectively to obtain a diagnosis result, wherein the diagnosis result comprises a meniscus injury classification result and an anterior cruciate ligament injury classification result.
The embodiment of the specification provides a classification device based on information fusion, and the classification device includes:
the acquisition module is used for acquiring first image data to be processed, second image data to be processed and third image data to be processed, wherein the first image data is image data of a meniscus, the second image data is image data of an anterior cruciate ligament, and the third image data is image data of bones;
the feature extraction module is used for inputting the first image data to be processed, the second image data to be processed and the third image data to be processed into a preset first feature extraction model, a preset second feature extraction model and a preset third feature extraction model respectively, to obtain first feature data, second feature data and third feature data;
the splicing module splices the first characteristic data, the second characteristic data and the third characteristic data to obtain spliced characteristic data;
the fusion module takes the spliced feature data as the input of the modified Transformer encoder to obtain the fusion feature data output by the modified Transformer encoder;
and the classification module is used for inputting the fusion feature data output by the modified Transformer encoder into preset classification models respectively to obtain a diagnosis result, wherein the diagnosis result comprises a meniscus injury classification result and an anterior cruciate ligament injury classification result.
The embodiment of the specification also provides an electronic device, including:
at least one processor; and,
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to:
acquiring first image data to be processed, second image data to be processed and third image data to be processed, wherein the first image data is image data of a meniscus, the second image data is image data of an anterior cruciate ligament, and the third image data is image data of bones;
inputting the first image data to be processed, the second image data to be processed and the third image data to be processed into a preset first feature extraction model, a preset second feature extraction model and a preset third feature extraction model respectively, to obtain first feature data, second feature data and third feature data;
splicing the first characteristic data, the second characteristic data and the third characteristic data to obtain spliced characteristic data;
taking the spliced feature data as the input of a modified Transformer encoder to obtain fusion feature data output by the modified Transformer encoder;
and inputting the fusion feature data output by the modified Transformer encoder into preset classification models respectively to obtain a diagnosis result, wherein the diagnosis result comprises a meniscus injury classification result and an anterior cruciate ligament injury classification result.
The at least one technical scheme adopted in the embodiments of the specification can achieve the following beneficial effects: the image data of the meniscus, the image data of the anterior cruciate ligament and the image data of the bone are respectively input into a preset first feature extraction model, a preset second feature extraction model and a preset third feature extraction model, and feature extraction is carried out respectively to obtain first feature data, second feature data and third feature data; the first feature data, the second feature data and the third feature data are then spliced to obtain spliced feature data; the spliced feature data is further taken as the input of the modified Transformer encoder to obtain fusion feature data output by the modified Transformer encoder; finally, the fusion feature data output by the modified Transformer encoder is input into preset classification models respectively to obtain a diagnosis result. Classification is thus carried out on the basis of the fused features of the meniscus image data, the anterior cruciate ligament image data and the bone image data, that is, on the basis of fusing information from multiple anatomical parts, which improves the accuracy and reliability of diagnosis.
Drawings
In order to more clearly illustrate the embodiments of the present description or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some of the embodiments described in the present description, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic diagram of a system architecture of a classification method based on information fusion according to an embodiment of the present disclosure;
FIG. 2 is a flow chart of a classification method based on information fusion according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of another classification method based on information fusion according to an embodiment of the present disclosure;
FIG. 4 is a block diagram of a classification method for information fusion according to an embodiment of the present disclosure;
fig. 5 is a schematic diagram of a classification device based on information fusion according to an embodiment of the present disclosure.
Detailed Description
In order to make the technical solutions in the present specification better understood by those skilled in the art, the technical solutions in the embodiments of the present specification will be clearly and completely described below with reference to the drawings in the embodiments of the present specification, and it is obvious that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, shall fall within the scope of the present application.
In the embodiment of the specification, the image data of the meniscus, the image data of the anterior cruciate ligament and the image data of the bone are respectively input into a preset first feature extraction model, a preset second feature extraction model and a preset third feature extraction model, and feature extraction is carried out respectively to obtain first feature data, second feature data and third feature data; the first feature data, the second feature data and the third feature data are then spliced to obtain spliced feature data; the spliced feature data is further taken as the input of the modified Transformer encoder to obtain fusion feature data output by the modified Transformer encoder; finally, the fusion feature data output by the modified Transformer encoder is input into preset classification models respectively to obtain a diagnosis result. In this way, classification is carried out on the basis of the fused features of the meniscus image data, the anterior cruciate ligament image data and the bone image data.
Fig. 1 is a schematic system architecture diagram of a classification method based on information fusion according to an embodiment of the present disclosure. As shown in fig. 1, a system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 is used as a medium to provide communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
The terminal devices 101, 102, 103 interact with the server 105 via the network 104 to receive or send messages or the like. Various client applications can be installed on the terminal devices 101, 102, 103. Such as a dedicated application having medical image display, classification result display, report generation, etc.
The terminal devices 101, 102, 103 may be hardware or software. When the terminal devices 101, 102, 103 are hardware, they may be a variety of special purpose or general purpose electronic devices including, but not limited to, smartphones, tablets, laptop and desktop computers, and the like. When the terminal devices 101, 102, 103 are software, they can be installed in the above-listed electronic devices. Which may be implemented as multiple software or software modules (e.g., multiple software or software modules for providing distributed services) or as a single software or software module.
The server 105 may be a server providing various services, such as a back-end server providing services for client applications installed on the terminal devices 101, 102, 103. For example, the server may train and run the classification models so that the results of automatic classification diagnosis are displayed on the terminal devices 101, 102, 103.
The server 105 may be hardware or software. When the server 105 is hardware, it may be implemented as a distributed server cluster formed by a plurality of servers, or may be implemented as a single server. When server 105 is software, it may be implemented as multiple software or software modules (e.g., multiple software or software modules for providing distributed services), or as a single software or software module.
The classification method provided in the embodiment of the present specification may be executed by the server 105, for example, or may be executed by the terminal devices 101, 102, 103. Alternatively, the image processing method of the embodiment of the present disclosure may be partially executed by the terminal apparatuses 101, 102, 103, and the other portions are executed by the server 105.
It should be understood that the number of terminal devices, networks and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Fig. 2 is a flowchart of a classification method based on information fusion according to an embodiment of the present disclosure. As shown in fig. 2, the classification method includes the steps of:
step S201: acquiring first image data to be processed, second image data to be processed and third image data to be processed, wherein the first image data is image data of meniscus, the second image data is image data of anterior cruciate ligament, and the third image data is image data of skeleton.
In the embodiment of the present disclosure, the first image data to be processed, the second image data to be processed and the third image data to be processed are image data from the same person to be examined, and the image data is MRI (Magnetic Resonance Imaging) image data of the knee joint.
In an embodiment of the present disclosure, acquiring first image data to be processed, second image data to be processed, and third image data to be processed further includes:
and adjusting the first image data to be processed, the second image data to be processed and the third image data to be processed to a preset image size.
In an embodiment, the predetermined image size may be (256,256,8).
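As an illustrative, non-limiting sketch (not part of the original disclosure), the resizing of the three sub-volumes to the preset size can be performed as follows; the use of PyTorch, trilinear interpolation, and the (H, W, D) tensor layout are assumptions made here for illustration only:

```python
# Illustrative sketch only: resize each knee-MRI sub-volume to the preset size (256, 256, 8).
import torch
import torch.nn.functional as F

def resize_to_preset(volume: torch.Tensor, size=(256, 256, 8)) -> torch.Tensor:
    """volume: a single-channel MRI crop of shape (H, W, D); returns a (256, 256, 8) tensor."""
    v = volume.float().unsqueeze(0).unsqueeze(0)                 # -> (1, 1, H, W, D)
    v = F.interpolate(v, size=size, mode="trilinear", align_corners=False)
    return v.squeeze(0).squeeze(0)                               # -> (256, 256, 8)

# Hypothetical raw crops standing in for the meniscus, anterior cruciate ligament and bone regions.
meniscus_vol = resize_to_preset(torch.randn(240, 240, 12))       # first image data
acl_vol = resize_to_preset(torch.randn(240, 240, 12))            # second image data
bone_vol = resize_to_preset(torch.randn(240, 240, 12))           # third image data
```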
Step S203: and respectively inputting the first image data to be processed, the second image data to be processed and the third image data to be processed into a preset first feature extraction model, a preset second feature extraction model and a preset third feature extraction model to obtain first feature data, preset second feature data and preset third feature data.
In this embodiment of the present disclosure, the first feature extraction model, the second feature extraction model and the third feature extraction model are all models built on the ResNet50 network, and the parameters of the first feature extraction model, the second feature extraction model and the third feature extraction model are not shared.
In a specific embodiment, the first image data to be processed, i.e. the meniscus image data, is input into the preset first feature extraction model, and feature extraction is performed to obtain the feature data of the meniscus, namely the first feature data.
Similarly, the second image data to be processed, i.e. the anterior cruciate ligament image data, is input into the preset second feature extraction model, which is used for feature extraction of anterior cruciate ligament image data, and feature extraction is performed to obtain the feature data of the anterior cruciate ligament, namely the second feature data.
The third image data to be processed, i.e. the bone image data, is input into the preset third feature extraction model, which is used for feature extraction of bone image data, and feature extraction is performed to obtain the feature data of the bone, namely the third feature data.
ResNet is short for Deep Residual Network, and ResNet50 is an important ResNet architecture that is widely used for feature extraction. The ResNet50 network includes five stages (Stage 0, Stage 1, ... Stage 4) with 50 layers in total. ResNet50 is prior art and is not described in detail here.
Continuing with the previous example, the predetermined image size is (256,256,8), and the dimension of each of the first feature data, the second feature data and the third feature data is 1024.
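As an illustrative, non-limiting sketch (not part of the original disclosure), the three non-shared feature extraction models could be built as follows. The disclosure only states that each model is generated based on the ResNet50 network and outputs a 1024-dimensional feature; treating the 8 slices of the (256,256,8) volume as input channels and truncating ResNet50 after Stage 3 (whose output has 1024 channels) are assumptions made here:

```python
# Illustrative sketch (assumptions noted above); PyTorch + torchvision.
import torch
import torch.nn as nn
import torchvision.models as models

class PartFeatureExtractor(nn.Module):
    """ResNet50-based extractor mapping one (8, 256, 256) sub-volume to a 1024-dim feature."""
    def __init__(self, in_slices: int = 8):
        super().__init__()
        backbone = models.resnet50(weights=None)
        # Assumption: the 8 slices are fed as input channels of the first convolution.
        backbone.conv1 = nn.Conv2d(in_slices, 64, kernel_size=7, stride=2, padding=3, bias=False)
        # Keep Stage 0 .. Stage 3; the Stage 3 output has 1024 channels, matching the 1024-dim feature.
        self.body = nn.Sequential(backbone.conv1, backbone.bn1, backbone.relu, backbone.maxpool,
                                  backbone.layer1, backbone.layer2, backbone.layer3)
        self.pool = nn.AdaptiveAvgPool2d(1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, 8, 256, 256) -> (B, 1024)
        return self.pool(self.body(x)).flatten(1)

# Parameters are not shared: three independent extractors, one per anatomical part.
meniscus_net, acl_net, bone_net = (PartFeatureExtractor() for _ in range(3))
feat = meniscus_net(torch.randn(2, 8, 256, 256))   # (2, 1024): first feature data for a batch of 2
```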
Step S205: and splicing the first characteristic data, the second characteristic data and the third characteristic data to obtain spliced characteristic data.
Continuing the previous example, the preset image size is (256,256,8), and after the first feature data, the second feature data and the third feature data are spliced, the dimension of the obtained spliced feature data is 1024 x 3.
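As an illustrative sketch (not part of the original disclosure), the splicing step can be realized by stacking the three 1024-dimensional features into a sequence of three tokens, which matches the stated 1024 x 3 size; the batch dimension and the token ordering are assumptions:

```python
import torch

# Hypothetical per-part features standing in for the extractor outputs (batch of 2).
f_acl, f_meniscus, f_bone = (torch.randn(2, 1024) for _ in range(3))
# Spliced feature data: one 1024-dim token per anatomical part -> (B, 3, 1024), i.e. 1024 x 3 per sample.
spliced = torch.stack([f_acl, f_meniscus, f_bone], dim=1)
```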
Step S207: taking the spliced feature data as the input of the modified Transformer encoder, and obtaining the fusion feature data output by the modified Transformer encoder.
The Transformer is a model that uses an attention mechanism to speed up model training and supports parallel computation. In the present embodiment, the Transformer encoder is modified to obtain the fusion feature data.
In the embodiment of the present specification, taking the spliced feature data as the input of the modified Transformer encoder to obtain the fusion feature data specifically comprises the following steps:
after receiving the spliced feature data, the modified Transformer encoder maps it to obtain the Q vectors of the meniscus and the anterior cruciate ligament, and the K vector and V vector shared by the meniscus, the anterior cruciate ligament and the bone;
and performing self-attention calculation based on the Q vectors, the K vector and the V vector to obtain the fusion feature data output by the modified Transformer encoder.
In the embodiment of the present specification, the Q vectors are calculated as:
Q_acl = wq_acl * f_acl
Q_meniscus = wq_meniscus * f_meniscus
the V vector is calculated as:
V = wv * [f_acl, f_meniscus, f_bone]
and the K vector is calculated as:
K = wk * [f_acl, f_meniscus, f_bone]
where Q_acl is the Q vector of the anterior cruciate ligament; wq_acl is a learnable weight matrix for the anterior cruciate ligament; f_acl is the input feature of the anterior cruciate ligament; Q_meniscus is the Q vector of the meniscus; wq_meniscus is a learnable weight matrix for the meniscus; f_meniscus is the input feature of the meniscus; f_bone is the input feature of the bone; and wv and wk are learnable weight matrices.
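As an illustrative, non-limiting sketch (not part of the original disclosure), the mapping above can be written as follows; modelling each learnable weight matrix as a bias-free 1024 -> 1024 nn.Linear layer is an assumption:

```python
import torch
import torch.nn as nn

dim = 1024
wq_acl = nn.Linear(dim, dim, bias=False)        # learnable weight matrix for the anterior cruciate ligament
wq_meniscus = nn.Linear(dim, dim, bias=False)   # learnable weight matrix for the meniscus
wk = nn.Linear(dim, dim, bias=False)            # shared learnable weight matrix for K
wv = nn.Linear(dim, dim, bias=False)            # shared learnable weight matrix for V

# Hypothetical input features (batch of 2) standing in for f_acl, f_meniscus, f_bone.
f_acl, f_meniscus, f_bone = (torch.randn(2, dim) for _ in range(3))

q_acl = wq_acl(f_acl)                                  # Q_acl = wq_acl * f_acl            -> (B, 1024)
q_meniscus = wq_meniscus(f_meniscus)                   # Q_meniscus = wq_meniscus * f_meniscus
tokens = torch.stack([f_acl, f_meniscus, f_bone], 1)   # [f_acl, f_meniscus, f_bone]       -> (B, 3, 1024)
k = wk(tokens)                                         # K = wk * [f_acl, f_meniscus, f_bone]
v = wv(tokens)                                         # V = wv * [f_acl, f_meniscus, f_bone]
```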
In this embodiment of the present disclosure, obtaining the fusion feature data output by the modified Transformer encoder based on the Q vectors, the K vector and the V vector specifically comprises:
performing self-attention calculation based on the Q vectors, the K vector and the V vector to obtain the feature of the anterior cruciate ligament output by the self-attention module and the feature of the meniscus output by the self-attention module;
and fusing the output feature of the anterior cruciate ligament and the output feature of the meniscus to obtain the fusion feature data output by the modified Transformer encoder.
In the embodiment of the present disclosure, the feature of the anterior cruciate ligament output by the self-attention module is calculated as:
f_acl' = softmax(Q_acl * K) * V + f_acl
the feature of the meniscus output by the self-attention module is calculated as:
f_meniscus' = softmax(Q_meniscus * K) * V + f_meniscus
and the fusion feature data output by the modified Transformer encoder is calculated as:
f_out = [f_acl', f_meniscus']
where f_acl' is the feature of the anterior cruciate ligament output by the self-attention module; f_meniscus' is the feature of the meniscus output by the self-attention module; softmax is the normalized exponential function; Q_meniscus is the Q vector of the meniscus; f_meniscus is the input feature of the meniscus; f_out is the fusion feature data output by the modified Transformer encoder; and [f_acl', f_meniscus'] denotes all the features of f_acl' and f_meniscus' taken together, i.e. their concatenation.
Continuing with the previous example, the predetermined image size is (256,256,8); the dimension of each Q vector is then 1024, the dimension of the K vector is 1024, and the dimension of the V vector is 1024.
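As an illustrative, non-limiting sketch (not part of the original disclosure), the residual self-attention step described above can be written as follows. The 1/sqrt(d) scaling inside the softmax and the exact matrix layout are assumptions; the disclosure itself only gives softmax(Q * K) * V + f and the concatenation [f_acl', f_meniscus']:

```python
import torch

def attend(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor, f: torch.Tensor) -> torch.Tensor:
    """q, f: (B, 1024); k, v: (B, 3, 1024). Returns softmax(q * K) * V + f, i.e. a residual output."""
    scores = q.unsqueeze(1) @ k.transpose(1, 2) / k.shape[-1] ** 0.5   # (B, 1, 3); scaling is an assumption
    return (torch.softmax(scores, dim=-1) @ v).squeeze(1) + f          # residual connection with the input feature

B, dim = 2, 1024
q_acl, q_meniscus, f_acl, f_meniscus = (torch.randn(B, dim) for _ in range(4))
k, v = torch.randn(B, 3, dim), torch.randn(B, 3, dim)

f_acl_out = attend(q_acl, k, v, f_acl)                  # f_acl'
f_meniscus_out = attend(q_meniscus, k, v, f_meniscus)   # f_meniscus'
f_out = torch.cat([f_acl_out, f_meniscus_out], dim=-1)  # [f_acl', f_meniscus'] -> (B, 2048)
```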
Step S209: inputting the fusion feature data output by the modified Transformer encoder into preset classification models respectively to obtain a diagnosis result, wherein the diagnosis result comprises a meniscus injury classification result and an anterior cruciate ligament injury classification result.
The preset classification models include a meniscus injury classification model and an anterior cruciate ligament injury classification model; the training methods of the meniscus injury classification model and the anterior cruciate ligament injury classification model do not limit the present application and are not described here.
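As an illustrative, non-limiting sketch (not part of the original disclosure), the two preset classification models fed with the fusion feature data could be as simple as two linear heads; the single-linear-layer architecture, the 2 x 1024 input size (from the concatenation [f_acl', f_meniscus']) and the class counts are assumptions:

```python
import torch
import torch.nn as nn

num_meniscus_classes, num_acl_classes = 3, 2           # hypothetical numbers of injury grades
meniscus_classifier = nn.Linear(2 * 1024, num_meniscus_classes)
acl_classifier = nn.Linear(2 * 1024, num_acl_classes)

f_out = torch.randn(2, 2 * 1024)                        # fusion feature data for a batch of 2
meniscus_logits = meniscus_classifier(f_out)            # meniscus injury classification result
acl_logits = acl_classifier(f_out)                      # anterior cruciate ligament injury classification result
```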
The embodiment of the present disclosure also provides a classification method based on information fusion, as shown in fig. 3. The classification method comprises the following steps:
step S301: acquiring first image data to be processed, second image data to be processed and third image data to be processed, wherein the first image data is image data of meniscus, the second image data is image data of anterior cruciate ligament, and the third image data is image data of skeleton.
Step S303: adjusting the first image data to be processed, the second image data to be processed and the third image data to be processed to the preset image size to obtain the preprocessed first image data, second image data and third image data.
Step S305: inputting the preprocessed first image data, the preprocessed second image data and the preprocessed third image data into a preset first feature extraction model, a preset second feature extraction model and a preset third feature extraction model respectively, to obtain first feature data, second feature data and third feature data.
Step S307: and splicing the first characteristic data, the second characteristic data and the third characteristic data to obtain spliced characteristic data.
Step S309: taking the spliced feature data as the input of the modified Transformer encoder, and obtaining the fusion feature data output by the modified Transformer encoder.
Step S311: inputting the fusion feature data output by the modified Transformer encoder into preset classification models respectively to obtain a diagnosis result, wherein the diagnosis result comprises a meniscus injury classification result and an anterior cruciate ligament injury classification result.
To further explain the information-fusion classification method provided in the embodiments of the present disclosure, the embodiments of the present disclosure also provide a framework diagram of the method, as shown in fig. 4. The first image data to be processed, the second image data to be processed and the third image data to be processed are respectively input into the corresponding feature extraction models for feature extraction, and the obtained first feature data, second feature data and third feature data are spliced to obtain spliced feature data. The spliced feature data is used as the input of the modified Transformer encoder, which outputs the fusion feature data. Finally, the fusion feature data is taken as the input of the meniscus injury classifier and of the anterior cruciate ligament injury classifier respectively for classification.
In the modified Transformer encoder, after the spliced feature data is received, it is mapped to obtain the Q vectors of the meniscus and the anterior cruciate ligament, and the K vector and V vector shared by the meniscus, the anterior cruciate ligament and the bone; self-attention calculation is then performed based on the Q vectors, the K vector and the V vector to obtain the fusion feature data output by the modified Transformer encoder.
The classification method provided by the embodiment of the specification classifies on the basis of the fused features of the meniscus image data, the anterior cruciate ligament image data and the bone image data, that is, on the basis of fusing information from multiple anatomical parts, which improves the accuracy and reliability of diagnosis.
The embodiment of the specification provides a classification method based on information fusion, and, based on the same idea, also provides a classification device based on information fusion. Fig. 5 is a schematic diagram of a classification device based on information fusion according to an embodiment of the present disclosure. As shown in fig. 5, the classification device includes:
the acquisition module 501 acquires first image data to be processed, second image data to be processed and third image data to be processed, wherein the first image data is image data of a meniscus, the second image data is image data of an anterior cruciate ligament, and the third image data is image data of bones;
the feature extraction module 505 inputs the first image data to be processed, the second image data to be processed and the third image data to be processed into a preset first feature extraction model, a preset second feature extraction model and a preset third feature extraction model respectively, to obtain first feature data, second feature data and third feature data;
the splicing module 507 splices the first feature data, the second feature data and the third feature data to obtain spliced feature data;
the fusion module 509 takes the spliced feature data as the input of the modified Transformer encoder to obtain the fusion feature data output by the modified Transformer encoder;
the classification module 511 inputs the fusion feature data output by the modified Transformer encoder into preset classification models respectively to obtain a diagnosis result, wherein the diagnosis result comprises a meniscus injury classification result and an anterior cruciate ligament injury classification result.
In an embodiment of the present disclosure, the acquiring the first image data to be processed, the second image data to be processed, and the third image data to be processed further includes:
the preprocessing module 503 adjusts the first image data to be processed, the second image data to be processed, and the third image data to be processed to a predetermined image size.
In this embodiment of the present disclosure, the first feature extraction model, the second feature extraction model and the third feature extraction model are all models built on the ResNet50 network, and the parameters of the first feature extraction model, the second feature extraction model and the third feature extraction model are not shared.
Taking the spliced feature data as the input of the modified Transformer encoder to obtain the fusion feature data output by the modified Transformer encoder specifically comprises the following steps:
after receiving the spliced feature data, the modified Transformer encoder maps it to obtain the Q vectors of the meniscus and the anterior cruciate ligament, and the K vector and V vector shared by the meniscus, the anterior cruciate ligament and the bone;
and performing self-attention calculation based on the Q vectors, the K vector and the V vector to obtain the fusion feature data output by the modified Transformer encoder.
In the embodiment of the present specification, the Q vectors are calculated as:
Q_acl = wq_acl * f_acl
Q_meniscus = wq_meniscus * f_meniscus
the V vector is calculated as:
V = wv * [f_acl, f_meniscus, f_bone]
and the K vector is calculated as:
K = wk * [f_acl, f_meniscus, f_bone]
where Q_acl is the Q vector of the anterior cruciate ligament; wq_acl is a learnable weight matrix for the anterior cruciate ligament; f_acl is the input feature of the anterior cruciate ligament; Q_meniscus is the Q vector of the meniscus; wq_meniscus is a learnable weight matrix for the meniscus; f_meniscus is the input feature of the meniscus; f_bone is the input feature of the bone; and wv and wk are learnable weight matrices.
In the embodiment of the present disclosure, obtaining the fusion feature data output by the modified Transformer encoder based on the Q vectors, the K vector and the V vector specifically comprises:
performing self-attention calculation based on the Q vectors, the K vector and the V vector to obtain the feature of the anterior cruciate ligament output by the self-attention module and the feature of the meniscus output by the self-attention module;
and fusing the output feature of the anterior cruciate ligament and the output feature of the meniscus to obtain the fusion feature data output by the modified Transformer encoder.
In the embodiment of the present disclosure, the feature of the anterior cruciate ligament output by the self-attention module is calculated as:
f_acl' = softmax(Q_acl * K) * V + f_acl
the feature of the meniscus output by the self-attention module is calculated as:
f_meniscus' = softmax(Q_meniscus * K) * V + f_meniscus
and the fusion feature data output by the modified Transformer encoder is calculated as:
f_out = [f_acl', f_meniscus']
where f_acl' is the feature of the anterior cruciate ligament output by the self-attention module; f_meniscus' is the feature of the meniscus output by the self-attention module; softmax is the normalized exponential function; Q_meniscus is the Q vector of the meniscus; f_meniscus is the input feature of the meniscus; f_out is the fusion feature data output by the modified Transformer encoder; and [f_acl', f_meniscus'] denotes all the features of f_acl' and f_meniscus' taken together, i.e. their concatenation.
In this embodiment of the present disclosure, the preset image size is (256,256,8); the dimension of each of the first feature data, the second feature data and the third feature data is 1024, the dimension of the spliced feature data is 1024 x 3, the dimension of each Q vector is 1024, the dimension of the K vector is 1024, and the dimension of the V vector is 1024.
The embodiment of the specification also provides an electronic device, including:
at least one processor; and,
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to:
acquiring first image data to be processed, second image data to be processed and third image data to be processed, wherein the first image data is image data of a meniscus, the second image data is image data of an anterior cruciate ligament, and the third image data is image data of bones;
inputting the first image data to be processed, the second image data to be processed and the third image data to be processed into a preset first feature extraction model, a preset second feature extraction model and a preset third feature extraction model respectively, to obtain first feature data, second feature data and third feature data;
splicing the first characteristic data, the second characteristic data and the third characteristic data to obtain spliced characteristic data;
taking the spliced feature data as the input of a modified Transformer encoder to obtain fusion feature data output by the modified Transformer encoder;
and inputting the fusion feature data output by the modified Transformer encoder into preset classification models respectively to obtain a diagnosis result, wherein the diagnosis result comprises a meniscus injury classification result and an anterior cruciate ligament injury classification result.
In an embodiment of the present disclosure, acquiring first image data to be processed, second image data to be processed, and third image data to be processed further includes:
and adjusting the first image data to be processed, the second image data to be processed and the third image data to be processed to a preset image size.
In this embodiment of the present disclosure, the first feature extraction model, the second feature extraction model and the third feature extraction model are all models built on the ResNet50 network, and the parameters of the first feature extraction model, the second feature extraction model and the third feature extraction model are not shared.
In the embodiment of the present specification, taking the spliced feature data as the input of the modified Transformer encoder to obtain the fusion feature data specifically comprises the following steps:
after receiving the spliced feature data, the modified Transformer encoder maps it to obtain the Q vectors of the meniscus and the anterior cruciate ligament, and the K vector and V vector shared by the meniscus, the anterior cruciate ligament and the bone;
and performing self-attention calculation based on the Q vectors, the K vector and the V vector to obtain the fusion feature data output by the modified Transformer encoder.
In the embodiment of the present specification, the Q vectors are calculated as:
Q_acl = wq_acl * f_acl
Q_meniscus = wq_meniscus * f_meniscus
the V vector is calculated as:
V = wv * [f_acl, f_meniscus, f_bone]
and the K vector is calculated as:
K = wk * [f_acl, f_meniscus, f_bone]
where Q_acl is the Q vector of the anterior cruciate ligament; wq_acl is a learnable weight matrix for the anterior cruciate ligament; f_acl is the input feature of the anterior cruciate ligament; Q_meniscus is the Q vector of the meniscus; wq_meniscus is a learnable weight matrix for the meniscus; f_meniscus is the input feature of the meniscus; f_bone is the input feature of the bone; and wv and wk are learnable weight matrices.
In this embodiment of the present disclosure, obtaining the fusion feature data output by the modified Transformer encoder based on the Q vectors, the K vector and the V vector specifically comprises:
performing self-attention calculation based on the Q vectors, the K vector and the V vector to obtain the feature of the anterior cruciate ligament output by the self-attention module and the feature of the meniscus output by the self-attention module;
and fusing the output feature of the anterior cruciate ligament and the output feature of the meniscus to obtain the fusion feature data output by the modified Transformer encoder.
In the embodiment of the present disclosure, the feature of the anterior cruciate ligament output by the self-attention module is calculated as:
f_acl' = softmax(Q_acl * K) * V + f_acl
the feature of the meniscus output by the self-attention module is calculated as:
f_meniscus' = softmax(Q_meniscus * K) * V + f_meniscus
and the fusion feature data output by the modified Transformer encoder is calculated as:
f_out = [f_acl', f_meniscus']
where f_acl' is the feature of the anterior cruciate ligament output by the self-attention module; f_meniscus' is the feature of the meniscus output by the self-attention module; softmax is the normalized exponential function; Q_meniscus is the Q vector of the meniscus; f_meniscus is the input feature of the meniscus; f_out is the fusion feature data output by the modified Transformer encoder; and [f_acl', f_meniscus'] denotes all the features of f_acl' and f_meniscus' taken together, i.e. their concatenation.
In this embodiment of the present disclosure, the preset image size is (256,256,8); the dimension of each of the first feature data, the second feature data and the third feature data is 1024, the dimension of the spliced feature data is 1024 x 3, the dimension of each Q vector is 1024, the dimension of the K vector is 1024, and the dimension of the V vector is 1024.
The foregoing describes specific embodiments of the present disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
In this specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments. In particular, for apparatus, electronic devices, non-volatile computer storage medium embodiments, since they are substantially similar to method embodiments, the description is relatively simple, as relevant to the description of the method embodiments.
The apparatus, the electronic device, the nonvolatile computer storage medium and the method provided in the embodiments of the present disclosure correspond to each other, and therefore, the apparatus, the electronic device, the nonvolatile computer storage medium also have similar beneficial technical effects as those of the corresponding method, and since the beneficial technical effects of the method have been described in detail above, the beneficial technical effects of the corresponding apparatus, the electronic device, the nonvolatile computer storage medium are not described here again.
In the 1990s, an improvement of a technology could clearly be distinguished as an improvement in hardware (e.g., an improvement of a circuit structure such as a diode, a transistor or a switch) or an improvement in software (an improvement of the method flow). However, with the development of technology, many improvements of method flows can now be regarded as direct improvements of hardware circuit structures. Designers almost always obtain the corresponding hardware circuit structure by programming the improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement of a method flow cannot be realized by a hardware entity module. For example, a programmable logic device (Programmable Logic Device, PLD) (e.g., a field programmable gate array (Field Programmable Gate Array, FPGA)) is an integrated circuit whose logic function is determined by the user's programming of the device. A designer programs to "integrate" a digital system onto a PLD without requiring the chip manufacturer to design and fabricate an application-specific integrated circuit chip. Moreover, nowadays, instead of manually manufacturing integrated circuit chips, such programming is mostly implemented with "logic compiler" software, which is similar to the software compiler used in program development; the original code before compilation must also be written in a specific programming language, called a hardware description language (Hardware Description Language, HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM and RHDL (Ruby Hardware Description Language); VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are currently the most commonly used. It will also be apparent to those skilled in the art that a hardware circuit implementing the logic method flow can be readily obtained merely by slightly logically programming the method flow into an integrated circuit using several of the hardware description languages described above.
The controller may be implemented in any suitable manner, for example, the controller may take the form of, for example, a microprocessor or processor and a computer readable medium storing computer readable program code (e.g., software or firmware) executable by the (micro) processor, logic gates, switches, application specific integrated circuits (Application Specific Integrated Circuit, ASIC), programmable logic controllers, and embedded microcontrollers, examples of which include, but are not limited to, the following microcontrollers: ARC625D, atmel AT91SAM, microchip PIC18F26K20, and Silicone Labs C8051F320, the memory controller may also be implemented as part of the control logic of the memory. Those skilled in the art will also appreciate that, in addition to implementing the controller in a pure computer readable program code, it is well possible to implement the same functionality by logically programming the method steps such that the controller is in the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers, etc. Such a controller may thus be regarded as a kind of hardware component, and means for performing various functions included therein may also be regarded as structures within the hardware component. Or even means for achieving the various functions may be regarded as either software modules implementing the methods or structures within hardware components.
The system, apparatus, module or unit set forth in the above embodiments may be implemented in particular by a computer chip or entity, or by a product having a certain function. One typical implementation is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being functionally divided into various units, respectively. Of course, the functionality of the units may be implemented in one or more software and/or hardware when implementing one or more embodiments of the present description.
It will be appreciated by those skilled in the art that the present description may be provided as a method, system, or computer program product. Accordingly, the present specification embodiments may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present description embodiments may take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
The present description is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the specification. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process, such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, random Access Memory (RAM) and/or nonvolatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of computer-readable media.
Computer readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of storage media for a computer include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information that can be accessed by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article or apparatus that comprises the element.
The description may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The specification may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
In this specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments. In particular, for system embodiments, since they are substantially similar to method embodiments, the description is relatively simple, as relevant to see a section of the description of method embodiments.
The foregoing description is by way of example only and is not intended to limit the application. Various modifications and variations of the present application will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the application shall fall within the scope of the claims of the present application.

Claims (8)

1. A classification method based on information fusion, characterized in that the classification method comprises:
acquiring first image data to be processed, second image data to be processed and third image data to be processed, wherein the first image data is image data of a meniscus, the second image data is image data of an anterior cruciate ligament, and the third image data is image data of bones;
inputting the first image data to be processed, the second image data to be processed and the third image data to be processed into a preset first feature extraction model, a preset second feature extraction model and a preset third feature extraction model respectively to obtain first feature data, second feature data and third feature data;
splicing the first feature data, the second feature data and the third feature data to obtain spliced feature data;
and taking the spliced feature data as the input of a modified Transformer encoder to obtain fusion feature data output by the modified Transformer encoder, which specifically comprises: the modified Transformer encoder receives the spliced feature data and maps it to obtain Q vectors of the meniscus and the anterior cruciate ligament, and K and V vectors shared by the meniscus, the anterior cruciate ligament and the bone; performing self-attention calculation based on the Q vectors, the K vector and the V vector to obtain the feature of the anterior cruciate ligament output by the self-attention module and the feature of the meniscus output by the self-attention module; and fusing the output feature of the anterior cruciate ligament and the output feature of the meniscus to obtain the fusion feature data output by the modified Transformer encoder;
and respectively inputting the fusion feature data output by the modified Transformer encoder into a preset classification model to obtain a diagnosis result, wherein the diagnosis result comprises a meniscus injury classification result and an anterior cruciate ligament injury classification result.
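To make the data flow of claim 1 concrete, the following is a minimal PyTorch sketch, assuming 2D inputs and torchvision's ResNet-50 as the backbone. A standard nn.TransformerEncoderLayer stands in here for the modified Transformer encoder; the modified Q/K/V computation itself is sketched after claims 4 and 5 below. All names (FusionClassifier, the classification heads, the class counts) are illustrative and are not taken from the patent.

```python
import torch
import torch.nn as nn
import torchvision

class FusionClassifier(nn.Module):
    """Illustrative sketch of claim 1: three extractors -> splice -> fusion -> two classifiers."""

    def __init__(self, dim: int = 1024, num_classes: int = 2):
        super().__init__()

        def backbone() -> nn.Module:
            # ResNet-50 feature extractor; the final fc is replaced to project 2048 -> dim
            net = torchvision.models.resnet50(weights=None)
            net.fc = nn.Linear(net.fc.in_features, dim)
            return net

        # three feature extractors with non-shared parameters (claim 3)
        self.extract_meniscus = backbone()
        self.extract_acl = backbone()
        self.extract_bone = backbone()
        # stand-in for the modified Transformer encoder of claims 4 and 5
        self.encoder = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.head_meniscus = nn.Linear(2 * dim, num_classes)  # meniscus injury classifier
        self.head_acl = nn.Linear(2 * dim, num_classes)       # ACL injury classifier

    def forward(self, img_meniscus, img_acl, img_bone):
        spliced = torch.stack(
            [self.extract_acl(img_acl),
             self.extract_meniscus(img_meniscus),
             self.extract_bone(img_bone)], dim=1)             # spliced feature data: (B, 3, dim)
        fused_tokens = self.encoder(spliced)                   # fusion over the three structure tokens
        fused = torch.cat([fused_tokens[:, 0], fused_tokens[:, 1]], dim=-1)  # keep ACL + meniscus tokens
        return self.head_meniscus(fused), self.head_acl(fused)

# illustrative usage with random tensors (the input size here is arbitrary)
model = FusionClassifier()
x = torch.randn(1, 3, 224, 224)
meniscus_logits, acl_logits = model(x, x, x)
```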
2. The classification method of claim 1, wherein the acquiring of the first image data to be processed, the second image data to be processed and the third image data to be processed further comprises:
adjusting the first image data to be processed, the second image data to be processed and the third image data to be processed to a preset image size.
3. The classification method of claim 1, wherein the first feature extraction model, the second feature extraction model, and the third feature extraction model are all models generated based on a Resnet50 network, and parameters of the first feature extraction model, the second feature extraction model, and the third feature extraction model are not shared.
4. The classification method of claim 1, wherein the Q vector is calculated as:
Q_acl = wq_acl * f_acl
Q_meniscus = wq_meniscus * f_meniscus
the calculation of the V vector is as follows:
V = wv * [f_acl, f_meniscus, f_bone]
the calculation of the K vector is as follows:
K = wk * [f_acl, f_meniscus, f_bone]
wherein,
Q_acl is the Q vector of the anterior cruciate ligament;
wq_acl is a learnable weight matrix for the anterior cruciate ligament;
f_acl is the feature of the input anterior cruciate ligament;
Q_meniscus is the Q vector of the meniscus;
wq_meniscus is a learnable weight matrix for the meniscus;
f_meniscus is the feature of the input meniscus;
f_bone is the feature of the input bone;
wv is a learnable weight matrix;
wk is a learnable weight matrix.
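Read literally, claim 4 uses per-structure query projections and key/value projections shared across the three structures. The sketch below follows that reading, assuming each structure is represented by a single 1024-dimensional feature token and that "*" denotes multiplication by a learnable weight matrix (implemented as nn.Linear without bias); the class and variable names are illustrative, not from the patent.

```python
import torch
import torch.nn as nn

class QKVProjection(nn.Module):
    """Illustrative Q/K/V mapping of claim 4, one token per anatomical structure."""

    def __init__(self, dim: int = 1024):
        super().__init__()
        self.wq_acl = nn.Linear(dim, dim, bias=False)        # wq_acl
        self.wq_meniscus = nn.Linear(dim, dim, bias=False)   # wq_meniscus
        self.wk = nn.Linear(dim, dim, bias=False)            # shared wk
        self.wv = nn.Linear(dim, dim, bias=False)            # shared wv

    def forward(self, f_acl, f_meniscus, f_bone):
        # f_acl, f_meniscus, f_bone: (batch, dim)
        q_acl = self.wq_acl(f_acl)                 # Q_acl = wq_acl * f_acl
        q_meniscus = self.wq_meniscus(f_meniscus)  # Q_meniscus = wq_meniscus * f_meniscus
        tokens = torch.stack([f_acl, f_meniscus, f_bone], dim=1)  # [f_acl, f_meniscus, f_bone]
        k = self.wk(tokens)                        # K = wk * [f_acl, f_meniscus, f_bone]
        v = self.wv(tokens)                        # V = wv * [f_acl, f_meniscus, f_bone]
        return q_acl, q_meniscus, k, v
```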
5. The classification method of claim 1, wherein the calculation formula of the feature of the anterior cruciate ligament output by the self-attention module is:
f_acl' = softmax(Q_acl * K) * V + f_acl
the calculation formula of the feature of the meniscus output by the self-attention module is:
f_meniscus' = softmax(Q_meniscus * K) * V + f_meniscus
the calculation formula of the fusion feature data output by the modified Transformer encoder is:
f_out = [f_acl', f_meniscus']
wherein,
f_acl' is the feature of the anterior cruciate ligament output by the self-attention module;
f_meniscus' is the feature of the meniscus output by the self-attention module;
softmax is the normalized exponential function;
Q_meniscus is the Q vector of the meniscus;
f_meniscus is the feature of the input meniscus;
f_out is the fusion feature data output by the modified Transformer encoder;
[f_acl', f_meniscus'] represents all the features of f_acl' and f_meniscus'.
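Under the same single-token-per-structure reading, the self-attention and fusion step of claim 5 can be sketched as follows. The softmax is taken over the three structure tokens, and the 1/sqrt(d) scaling of standard attention is omitted because the claim's formula does not include it; the function name and einsum layout are assumptions.

```python
import torch
import torch.nn.functional as F

def fuse(q_acl, q_meniscus, k, v, f_acl, f_meniscus):
    """Illustrative self-attention fusion of claim 5.

    q_acl, q_meniscus, f_acl, f_meniscus: (batch, dim); k, v: (batch, 3, dim).
    """
    attn_acl = F.softmax(torch.einsum("bd,btd->bt", q_acl, k), dim=-1)       # softmax(Q_acl * K)
    f_acl_out = torch.einsum("bt,btd->bd", attn_acl, v) + f_acl              # ... * V + f_acl
    attn_men = F.softmax(torch.einsum("bd,btd->bt", q_meniscus, k), dim=-1)  # softmax(Q_meniscus * K)
    f_men_out = torch.einsum("bt,btd->bd", attn_men, v) + f_meniscus         # ... * V + f_meniscus
    return torch.cat([f_acl_out, f_men_out], dim=-1)                         # f_out = [f_acl', f_meniscus']
```

Chaining this with the QKVProjection sketch above reproduces the fusion path described in claim 1.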
6. The classification method of claim 2, wherein the preset image size is (256, 8), the dimensions of the first feature data, the second feature data and the third feature data are 1024, the dimension of the spliced feature data is 1024 x 3, the dimension of the Q vector is 1024, the dimension of the K vector is 1024, and the dimension of the V vector is 1024.
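As a quick, self-contained check of the feature-level dimensions recited in claim 6 (the stated image size of (256, 8) is left as written and not interpreted here), the following snippet walks through the shapes under the single-token-per-structure reading used above:

```python
import torch
import torch.nn as nn

batch, dim = 1, 1024
f_acl, f_meniscus, f_bone = (torch.randn(batch, dim) for _ in range(3))  # feature data: dim 1024 each

spliced = torch.stack([f_acl, f_meniscus, f_bone], dim=1)                # spliced feature data: 1024 x 3
assert spliced.shape == (batch, 3, dim)

wq, wk, wv = (nn.Linear(dim, dim, bias=False) for _ in range(3))         # illustrative projections
q_acl = wq(f_acl)                                                        # Q vector: dim 1024
k, v = wk(spliced), wv(spliced)                                          # K, V: dim 1024 per structure token
assert q_acl.shape == (batch, dim)
assert k.shape == v.shape == (batch, 3, dim)
```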
7. A classification device based on information fusion, characterized in that the classification device comprises:
the acquisition module is used for acquiring first image data to be processed, second image data to be processed and third image data to be processed, wherein the first image data is image data of a meniscus, the second image data is image data of an anterior cruciate ligament, and the third image data is image data of bones;
the feature extraction module is used for inputting the first image data to be processed, the second image data to be processed and the third image data to be processed into a preset first feature extraction model, a preset second feature extraction model and a preset third feature extraction model respectively to obtain first feature data, second feature data and third feature data;
the splicing module is used for splicing the first feature data, the second feature data and the third feature data to obtain spliced feature data;
the fusion module is used for taking the spliced feature data as the input of a modified Transformer encoder to obtain fusion feature data output by the modified Transformer encoder, which specifically comprises: the modified Transformer encoder receives the spliced feature data and maps it to obtain Q vectors of the meniscus and the anterior cruciate ligament, and K and V vectors shared by the meniscus, the anterior cruciate ligament and the bone; performing self-attention calculation based on the Q vectors, the K vector and the V vector to obtain the feature of the anterior cruciate ligament output by the self-attention module and the feature of the meniscus output by the self-attention module; and fusing the output feature of the anterior cruciate ligament and the output feature of the meniscus to obtain the fusion feature data output by the modified Transformer encoder;
and the classification module is used for respectively inputting the fusion feature data output by the modified Transformer encoder into a preset classification model to obtain a diagnosis result, wherein the diagnosis result comprises a meniscus injury classification result and an anterior cruciate ligament injury classification result.
8. An electronic device, comprising:
at least one processor; and,
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to:
acquiring first image data to be processed, second image data to be processed and third image data to be processed, wherein the first image data is image data of a meniscus, the second image data is image data of an anterior cruciate ligament, and the third image data is image data of bones;
inputting the first image data to be processed, the second image data to be processed and the third image data to be processed into a preset first feature extraction model, a preset second feature extraction model and a preset third feature extraction model respectively to obtain first feature data, second feature data and third feature data;
splicing the first feature data, the second feature data and the third feature data to obtain spliced feature data;
and taking the spliced feature data as the input of a modified Transformer encoder to obtain fusion feature data output by the modified Transformer encoder, which specifically comprises: the modified Transformer encoder receives the spliced feature data and maps it to obtain Q vectors of the meniscus and the anterior cruciate ligament, and K and V vectors shared by the meniscus, the anterior cruciate ligament and the bone; performing self-attention calculation based on the Q vectors, the K vector and the V vector to obtain the feature of the anterior cruciate ligament output by the self-attention module and the feature of the meniscus output by the self-attention module; and fusing the output feature of the anterior cruciate ligament and the output feature of the meniscus to obtain the fusion feature data output by the modified Transformer encoder;
and respectively inputting the fusion feature data output by the modified Transformer encoder into a preset classification model to obtain a diagnosis result, wherein the diagnosis result comprises a meniscus injury classification result and an anterior cruciate ligament injury classification result.
CN202310252573.2A 2023-03-15 2023-03-15 Classification method, device and equipment based on information fusion Active CN116246774B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310252573.2A CN116246774B (en) 2023-03-15 2023-03-15 Classification method, device and equipment based on information fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310252573.2A CN116246774B (en) 2023-03-15 2023-03-15 Classification method, device and equipment based on information fusion

Publications (2)

Publication Number Publication Date
CN116246774A CN116246774A (en) 2023-06-09
CN116246774B true CN116246774B (en) 2023-11-24

Family

ID=86625992

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310252573.2A Active CN116246774B (en) 2023-03-15 2023-03-15 Classification method, device and equipment based on information fusion

Country Status (1)

Country Link
CN (1) CN116246774B (en)

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11234657B2 (en) * 2017-05-01 2022-02-01 Rhode Island Hospital Non-invasive measurement to predict post-surgery anterior cruciate ligament success

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106908340A (en) * 2017-01-06 2017-06-30 南京市六合区人民医院 The method of testing of tendon is transplanted in a kind of Cruciate ligament reconstruction
CN108209924A (en) * 2018-01-16 2018-06-29 北京大学第三医院 The analysis method of gait feature after a kind of Anterior Cruciate Ligament Ruptures
WO2022227294A1 (en) * 2021-04-30 2022-11-03 山东大学 Disease risk prediction method and system based on multi-modal fusion
CN113838048A (en) * 2021-10-12 2021-12-24 大连理工大学 Cruciate ligament preoperative insertion center positioning and ligament length calculating method
CN114882978A (en) * 2022-07-12 2022-08-09 紫东信息科技(苏州)有限公司 Stomach image processing method and system introducing picture translation information
CN115223715A (en) * 2022-07-15 2022-10-21 神州医疗科技股份有限公司 Cancer prediction method and system based on multi-modal information fusion
CN115331048A (en) * 2022-07-29 2022-11-11 北京百度网讯科技有限公司 Image classification method, device, equipment and storage medium
CN115423754A (en) * 2022-08-08 2022-12-02 深圳大学 Image classification method, device, equipment and storage medium
CN115409990A (en) * 2022-09-28 2022-11-29 北京医准智能科技有限公司 Medical image segmentation method, device, equipment and storage medium
CN115568860A (en) * 2022-09-30 2023-01-06 厦门大学 Automatic classification method of twelve-lead electrocardiosignals based on double-attention machine system
CN115631370A (en) * 2022-10-09 2023-01-20 北京医准智能科技有限公司 Identification method and device of MRI (magnetic resonance imaging) sequence category based on convolutional neural network
CN115578387A (en) * 2022-12-06 2023-01-06 中南大学 Multimodal-based Alzheimer disease medical image classification method and system

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Addressing Class Imbalance for Transformer Based Knee MRI Classification; Gokay Sezen et al.; 2022 7th International Conference on Computer Science and Engineering (UBMK); 235-238 *
Transformers Improve Breast Cancer Diagnosis from Unregistered Multi-View Mammograms; Xuxin Chen et al.; Diagnostics; Vol. 12; 1-14 *
TransMed: Transformers Advance Multi-Modal Medical Image Classification; Yin Dai et al.; Diagnostics; Vol. 11; 1-15 *
Knee joint injury prediction based on multi-modal fusion; Lu Lixia et al.; Computer Engineering and Applications; Vol. 57, No. 9; 225-232 *

Also Published As

Publication number Publication date
CN116246774A (en) 2023-06-09

Similar Documents

Publication Publication Date Title
CN113095124B (en) Face living body detection method and device and electronic equipment
CN109034183B (en) Target detection method, device and equipment
CN112036236B (en) Image detection method, device and medium based on GhostNet
CN115981870B (en) Data processing method and device, storage medium and electronic equipment
CN115828162B (en) Classification model training method and device, storage medium and electronic equipment
CN117635822A (en) Model training method and device, storage medium and electronic equipment
CN117197781B (en) Traffic sign recognition method and device, storage medium and electronic equipment
CN117409466A (en) Three-dimensional dynamic expression generation method and device based on multi-label control
CN116246774B (en) Classification method, device and equipment based on information fusion
CN116030247B (en) Medical image sample generation method and device, storage medium and electronic equipment
CN116402113B (en) Task execution method and device, storage medium and electronic equipment
CN117173002A (en) Model training, image generation and information extraction methods and devices and electronic equipment
CN116167431B (en) Service processing method and device based on hybrid precision model acceleration
CN117194992A (en) Model training and task execution method and device, storage medium and equipment
CN116630480A (en) Interactive text-driven image editing method and device and electronic equipment
CN117133310A (en) Audio drive video generation method and device, storage medium and electronic equipment
CN116091895A (en) Model training method and device oriented to multitask knowledge fusion
CN112307371B (en) Applet sub-service identification method, device, equipment and storage medium
CN117726760B (en) Training method and device for three-dimensional human body reconstruction model of video
CN116363056B (en) Chest CT fracture detection optimization method, device and equipment
CN117726907B (en) Training method of modeling model, three-dimensional human modeling method and device
CN117132806A (en) Model training method and device, storage medium and electronic equipment
CN113642603B (en) Data matching method and device, storage medium and electronic equipment
CN117057442A (en) Model training method, device and equipment based on federal multitask learning
CN117077817B (en) Personalized federal learning model training method and device based on label distribution

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Room 3011, 2nd Floor, Building A, No. 1092 Jiangnan Road, Nanmingshan Street, Liandu District, Lishui City, Zhejiang Province, 323000

Applicant after: Zhejiang Yizhun Intelligent Technology Co.,Ltd.

Address before: No. 301, 3rd Floor, Zhizhen Building, No. 7 Zhichun Road, Haidian District, Beijing, 100000

Applicant before: Beijing Yizhun Intelligent Technology Co.,Ltd.

GR01 Patent grant