CN115690556A - Image recognition method and system based on multi-modal iconography characteristics - Google Patents


Info

Publication number
CN115690556A
CN115690556A (application CN202211392434.1A)
Authority
CN
China
Prior art keywords: modal, abnormal, image, model, training
Prior art date
Legal status (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed): Granted
Application number
CN202211392434.1A
Other languages
Chinese (zh)
Other versions
CN115690556B (en)
Inventor
王立坤
温德惠
阴彦林
张培楠
Current Assignee
First Affiliated Hospital Of Hebei North University
Original Assignee
First Affiliated Hospital Of Hebei North University
Priority date
Filing date
Publication date
Application filed by First Affiliated Hospital Of Hebei North University filed Critical First Affiliated Hospital Of Hebei North University
Priority to CN202211392434.1A priority Critical patent/CN115690556B/en
Publication of CN115690556A publication Critical patent/CN115690556A/en
Application granted granted Critical
Publication of CN115690556B publication Critical patent/CN115690556B/en
Legal status: Active

Landscapes

  • Apparatus For Radiation Diagnosis (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)

Abstract

The application provides an image recognition method and system based on multi-modal imagery features. The method comprises the following steps: inputting a pre-acquired multi-modal training image set into a pre-constructed convolutional neural network base model, and training a multi-modal image anomaly recognition model; recognizing the multi-modal images to be detected based on the multi-modal image anomaly recognition model, and outputting abnormal region data in each modal image; registering the multi-modal images of the same detection site within the three-dimensional frame model of the corresponding site, and marking the abnormal regions in each modal image; and fusing the associated abnormal regions together to form abnormal blocks. The present application performs relevance analysis on abnormal regions in the multi-modal images and displays the information from the multi-modal images through a three-dimensional model, which improves the visual display effect of the abnormal regions and presents more data information.

Description

Image recognition method and system based on multi-modal iconography characteristics
Technical Field
The present application relates to the field of image recognition technologies, and in particular, to an image recognition method and system based on multi-modal imagery features.
Background
In recent years, medical imaging techniques have developed rapidly and are widely used in clinical diagnosis and treatment. Medical imaging has many modalities, such as Magnetic Resonance Imaging (MRI), ultrasound, Digital Subtraction Angiography (DSA), and CT, which can be combined to comprehensively use information from multiple imaging modalities and devices, compensate for defects such as incomplete information, and make clinical diagnosis and treatment more accurate. Single-modality data has its own scope of application and limitations; compared with single-modality data, multi-modal image data can provide more information. In the prior art, medical images of multiple modalities are registered so that multiple aspects of information from the human body are expressed simultaneously on one image. The existing defects are as follows: no relevance analysis is performed on the abnormal regions in the multiple modal images, and the information reflected by two-dimensional images is not intuitive enough and carries less information.
Therefore, the technical problems to be solved at present are: how to perform relevance analysis on abnormal areas in the multi-modal images and display information in the multi-modal images through a three-dimensional model, so that the visual display effect of the abnormal areas is improved and more data information is displayed.
Disclosure of Invention
The application aims to provide an image recognition method and system based on multi-modal iconography characteristics, which are used for performing relevance analysis on abnormal regions in a plurality of modal images, displaying information in the multi-modal images through a three-dimensional model, improving the visual display effect of the abnormal regions and displaying more data information.
In order to achieve the above object, the present application provides an image recognition method based on multi-modal imagery features, the method comprising the following steps: inputting a pre-acquired multi-modal training image set into a pre-constructed convolutional neural network base model, and training a multi-modal image anomaly recognition model; recognizing the multi-modal images to be detected based on the multi-modal image anomaly recognition model, and outputting abnormal region data in each modal image; registering the multi-modal images of the same detection site within the three-dimensional frame model of the corresponding site, and marking an abnormal region in each modal image; and fusing the associated abnormal regions together to form an abnormal block.
The image recognition method based on multi-modal imagery features as described above further comprises the following steps: calculating the matching degree between the multi-modal image to be detected and the preset disease types according to the feature information of the abnormal block and the abnormal attribute feature data corresponding to the preset disease types; and sorting the matched disease types from high to low according to the calculated matching degree.
The image recognition method based on the multi-modal imagery features, wherein the pre-constructed convolutional neural network basic model comprises a convolutional layer, a pooling layer, a full connection layer and a Softmax classifier which are constructed in sequence.
The image recognition method based on the multi-modal imagery features, wherein the method for training the multi-modal imagery anomaly recognition model comprises the following steps: acquiring a multi-mode training image set; and inputting the multi-modal training image set into a convolution neural network basic model, and training the abnormal recognition model of each mode.
The image recognition method based on multi-modal imagery features as described above, wherein the method for registering multi-modal imagery of the same detected region within the three-dimensional frame model of the corresponding region comprises the following sub-steps: extracting edge contour and shape characteristics of a detection part in the multi-modal image; displaying the multi-mode image in a three-dimensional frame model in a matching way according to the extracted edge contour and shape characteristics; and marking abnormal areas in the multi-mode image according to the abnormal area data.
The image recognition method based on multi-modal imagery features as described above, wherein the method for fusing the associated abnormal regions together to form an abnormal block includes the following sub-steps: acquiring associated feature data between the first abnormal area and the second abnormal area; calculating a relevance value of the first abnormal area and the second abnormal area according to the relevance characteristic data; and comparing the relevance value with a preset threshold value, and if the relevance value is larger than the preset threshold value, fusing the first abnormal area and the second abnormal area together to form an abnormal block.
In the image recognition method based on multi-modal imagery features, the feature information of the abnormal area is obtained, and the feature information is displayed in the corresponding abnormal area.
The present application further provides an image recognition system based on multi-modal imagery features, the system comprising: the training module is used for inputting a multi-mode training image set acquired in advance into a convolution neural network basic model constructed in advance and training a multi-mode image anomaly recognition model; the abnormality recognition module is used for recognizing the multi-modal images to be detected based on the multi-modal image abnormality recognition model and outputting abnormal region data in each modal image; the registration module is used for registering the multi-modal images of the same detection part in the three-dimensional frame model of the corresponding part and marking an abnormal area in each modal image; and the fusion module is used for fusing the related abnormal areas together to form an abnormal block.
The image recognition system based on multi-modal imagery features as described above further includes: the data processor is used for calculating the matching degree of the multi-mode image to be detected and the preset disease type according to the characteristic information of the abnormal block and the abnormal attribute characteristic data corresponding to the preset disease type; and the recommendation module is used for sequencing the matched disease types according to the calculated matching degree of the multimode image to be detected and the preset disease types and the sequence from high matching degree to low matching degree to obtain a disease matching list.
The image recognition system based on multi-modal imagery features as described above further comprises: and the model building module is used for building a convolution neural network basic model in advance.
The beneficial effects realized by the present application are as follows:
(1) According to the present application, relevance analysis is performed on the abnormal regions in the multi-modal images, abnormal regions with strong relevance are fused together to form abnormal blocks, and the abnormal block information in the multi-modal images is displayed through the three-dimensional model, which improves the visual display effect of the abnormal regions and displays more data information.
(2) According to the present application, the matching degree between the multi-modal image to be detected and the preset disease types is calculated according to the feature information of the abnormal blocks and the abnormal attribute feature data corresponding to the preset disease types, and the matched disease types are sorted from high to low according to the calculated matching degree.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings described below are only some embodiments of the present application, and those skilled in the art can obtain other drawings according to these drawings.
Fig. 1 is a flowchart of an image recognition method based on multi-modal imagery features according to an embodiment of the present disclosure.
Fig. 2 is a flowchart of a method for training a multi-modal image anomaly recognition model according to an embodiment of the present disclosure.
FIG. 3 is a flowchart of a method for registering multi-modal images within a three-dimensional frame model according to an embodiment of the present application.
Fig. 4 is a flowchart of a method for fusing a first abnormal region and a second abnormal region associated with each other according to an embodiment of the present application.
Fig. 5 is a schematic structural diagram of an image recognition system based on multi-modal imagery features according to an embodiment of the present disclosure.
Reference numerals are as follows: 10-a model building module; 20-a training module; 30-an anomaly identification module; 40-a registration module; 50-a fusion module; 60-a data processor; 70-a recommendation module; 100-image recognition system.
Detailed Description
The technical solutions in the embodiments of the present application are clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments obtained by a person skilled in the art based on the embodiments in the present application without making any creative effort belong to the protection scope of the present application.
Example one
As shown in fig. 1, the present application provides an image recognition method based on multi-modal imagery features, which includes the following sub-steps:
and S1, constructing a basic model of the convolutional neural network in advance.
Specifically, the convolutional neural network base model comprises a convolutional layer, a pooling layer, a full connection layer and a Softmax classifier. The convolutional layer uses convolution kernels of size 3 × 3 and depth 16; the filter size is preset to 3 × 3 and the stride to 1 × 1. The pooling layer is connected to the convolutional layer; using a pooling layer not only accelerates calculation but also prevents overfitting to a certain extent. One end of the full connection layer is connected to the pooling layer, and the other end is connected to the Softmax classifier, which classifies the input feature matrix.
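The layer stack described above (a 3 × 3 convolutional layer, a pooling layer, a full connection layer and a Softmax classifier) can be sketched as a minimal NumPy forward pass. This is an illustrative reconstruction, not the patent's implementation: the single-channel input, ReLU activation and layer sizes are assumptions.

```python
import numpy as np

def conv2d(x, kernels, stride=1):
    # "valid" convolution with ReLU: x is (H, W), kernels is (n, k, k)
    n, k, _ = kernels.shape
    H, W = x.shape
    out = np.zeros((n, (H - k) // stride + 1, (W - k) // stride + 1))
    for c in range(n):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                patch = x[i * stride:i * stride + k, j * stride:j * stride + k]
                out[c, i, j] = np.maximum(0.0, np.sum(patch * kernels[c]))
    return out

def max_pool(x, size=2):
    # non-overlapping max pooling over each feature map
    n, H, W = x.shape
    out = x[:, :H - H % size, :W - W % size]
    return out.reshape(n, H // size, size, W // size, size).max(axis=(2, 4))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def forward(image, kernels, fc_weights):
    feat = max_pool(conv2d(image, kernels))  # convolution + pooling
    flat = feat.reshape(-1)                  # flatten for the full connection layer
    return softmax(fc_weights @ flat)        # class probabilities
```

A 10 × 10 input with 16 kernels of 3 × 3 yields 16 maps of 8 × 8, pooled to 4 × 4, i.e. a 256-dimensional vector entering the full connection layer.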
And S2, inputting a multi-modal training image set acquired in advance into a pre-constructed convolutional neural network basic model, and training a multi-modal image anomaly recognition model.
The features of abnormal regions in the multi-modal training image set are trained and learned according to the pre-constructed convolutional neural network base model, and recognition models for different regions in each modality are trained separately using the multi-modal training image set. Images of different modalities are obtained by different equipment under different imaging modes. Modality types include Magnetic Resonance Imaging (MRI), ultrasound, CT, etc.; abnormal region types include edematous tissue, effusion, and tumor; and the features of the abnormal regions include shape features, grayscale features, and texture features.
As shown in fig. 2, step S2 includes the following sub-steps:
and step S210, acquiring a multi-modal training image set.
The multi-modal training image set comprises training image sets of multiple modalities, and the multi-modal images include multi-sequence tomographic images such as ultrasound or CT images.
Wherein, the abnormal region in the training image set is marked in advance. Abnormal areas include regions of edematous tissue, fluid accumulation, and tumors.
And S220, inputting the multi-mode training image set into a convolutional neural network basic model, and training a multi-mode image anomaly recognition model.
Specifically, the training image set is input into the convolutional neural network, and the features of the abnormal regions are trained and learned through the convolutional layer, pooling layer, full connection layer and Softmax classifier in sequence to obtain an anomaly recognition model for each modality. The anomaly recognition model of each modality is used to recognize the type of an abnormal region according to its features.
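Step S220 can be illustrated with a drastically simplified stand-in. Training a full CNN is out of scope for a short sketch, so the snippet below trains one linear softmax classifier per modality on pre-extracted feature vectors; the "one anomaly recognition model per modality" structure comes from the text, everything else is an assumption.

```python
import numpy as np

def train_softmax_classifier(X, y, n_classes, lr=0.1, epochs=200):
    # minimal stand-in for the CNN training stage: a linear softmax
    # classifier fitted with cross-entropy via gradient descent
    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.01, size=(n_classes, X.shape[1]))
    for _ in range(epochs):
        z = X @ W.T
        z -= z.max(axis=1, keepdims=True)        # numerical stability
        p = np.exp(z)
        p /= p.sum(axis=1, keepdims=True)
        p[np.arange(len(y)), y] -= 1.0           # dL/dz for cross-entropy
        W -= lr * (p.T @ X) / len(y)
    return W

def train_per_modality(datasets, n_classes):
    # one anomaly-recognition model per modality, as in step S220;
    # datasets maps a modality name ("CT", "MRI", ...) to (features, labels)
    return {m: train_softmax_classifier(X, y, n_classes)
            for m, (X, y) in datasets.items()}
```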
And S3, identifying the multi-modal image to be detected based on the multi-modal image abnormity identification model, and outputting abnormal region data in each modal image.
Specifically, based on the abnormality recognition model of each modal image, the multi-modal image to be detected is recognized and classified, so that the type, edge contour and position information of an abnormal region in each modal image are recognized. The abnormal area data includes the kind, edge profile, and position information of the abnormal area.
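A toy version of step S3's output (abnormal region data as kind, edge contour and position) might look as follows, assuming the anomaly model has already produced a per-pixel label mask; the flood-fill extraction and the dictionary layout are illustrative assumptions.

```python
import numpy as np

def extract_abnormal_regions(mask, kinds):
    # mask: integer label image, 0 = normal, >0 = abnormal class id
    # kinds: maps class id -> kind name, e.g. {1: "tumor"}
    # returns per-region data: kind, edge contour pixels, centroid position
    regions = []
    visited = np.zeros_like(mask, dtype=bool)
    H, W = mask.shape
    for si in range(H):
        for sj in range(W):
            if mask[si, sj] == 0 or visited[si, sj]:
                continue
            kind_id = mask[si, sj]
            stack, pixels = [(si, sj)], []
            visited[si, sj] = True
            while stack:  # flood fill one 4-connected region
                i, j = stack.pop()
                pixels.append((i, j))
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < H and 0 <= nj < W and not visited[ni, nj] \
                            and mask[ni, nj] == kind_id:
                        visited[ni, nj] = True
                        stack.append((ni, nj))
            # a pixel is on the edge contour if any neighbour leaves the region
            contour = [(i, j) for (i, j) in pixels
                       if any(not (0 <= i + di < H and 0 <= j + dj < W)
                              or mask[i + di, j + dj] != kind_id
                              for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)))]
            ci = sum(p[0] for p in pixels) / len(pixels)
            cj = sum(p[1] for p in pixels) / len(pixels)
            regions.append({"kind": kinds[kind_id],
                            "contour": contour,
                            "position": (ci, cj)})
    return regions
```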
And S4, registering the multi-modal images of the same detection part in the three-dimensional frame model of the corresponding part, and marking an abnormal region in each modal image.
Specifically, the detection site includes a head, a chest, a lung, or the like. The multi-modality images include multi-sequence tomographic images of ultrasound or CT.
As shown in fig. 3, step S4 includes the following sub-steps:
in step S410, edge contours and shape features of the detected region in the multi-modal image are extracted.
And step S420, matching and displaying the multi-mode image in the three-dimensional frame model according to the extracted edge contour and the shape feature.
Specifically, the images of the multiple modalities are matched and fused into the three-dimensional frame model in sequence. In the process of matching a modal image with the three-dimensional frame model, a corresponding three-dimensional frame model is first selected automatically according to the extracted shape features: if the shape feature is the shape of a heart, a three-dimensional frame model of the heart is selected; if it is the shape of a lung, a three-dimensional frame model of the lung is selected; and if it is the shape of a head, a three-dimensional frame model of the head is selected. Then, according to the edge contour extracted from the modal image (for example, a heart, lung or head edge contour), the modal image is placed at the corresponding position of the three-dimensional frame model for matching, so that the edge contour of the modal image coincides with the contour of the three-dimensional frame model. During matching, the edge of the three-dimensional frame model is automatically adjusted to fuse with the edge contour extracted from the modal image.
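The contour-driven matching described above could be approximated by aligning centroids and mean radii; this is a simplified stand-in (translation plus isotropic scaling only), not the patent's registration algorithm.

```python
import numpy as np

def register_contour(image_contour, model_contour):
    # rigid-lite registration: translate and isotropically scale the
    # image contour so that it overlaps the frame model's contour
    src = np.asarray(image_contour, dtype=float)
    dst = np.asarray(model_contour, dtype=float)
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    # scale chosen so the mean radius about the centroid matches
    s = (np.linalg.norm(dst - dst_c, axis=1).mean()
         / np.linalg.norm(src - src_c, axis=1).mean())
    return (src - src_c) * s + dst_c
```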
In step S430, abnormal regions are marked in the multi-modal image according to the abnormal region data.
As a specific embodiment of the present invention, the marking method may be a method of highlighting the abnormal region or displaying an edge contour line.
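A minimal sketch of the contour-line marking variant mentioned above (drawing the region's edge pixels in a highlight color on an RGB slice); the color and pixel-list format are assumptions.

```python
import numpy as np

def mark_contour(image_rgb, contour, color=(255, 0, 0)):
    # one of the marking methods from the text: draw the edge contour
    # of an abnormal region onto an RGB rendering of the modal image
    out = image_rgb.copy()
    for i, j in contour:
        out[i, j] = color
    return out
```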
The present application automatically identifies abnormal regions (lesions) in medical images, which reduces the workload of doctors and speeds up identification. The abnormal regions are displayed at the corresponding positions in the three-dimensional frame model, and the multi-modal images are displayed within the three-dimensional frame model, so that the doctor can observe the model intuitively, facilitating diagnosis.
And S5, fusing the related abnormal areas together to form an abnormal block.
As shown in fig. 4, step S5 includes the following sub-steps:
step S510, acquiring associated feature data between the first abnormal region and the second abnormal region.
Specifically, the associated feature data between the first abnormal region and the second abnormal region include: the abnormal region types; the mapped overlap area of two non-intersecting abnormal regions displayed in the three-dimensional frame model along the same direction (perpendicular to the abnormal regions); the intersecting area of the two abnormal regions; the distance between the nearest points of two non-intersecting abnormal regions; and the pixel mean values.
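The associated feature data listed above can be grouped into a small record type; the field names are illustrative assumptions mapping onto the Kind, SB, SJ, DB, X1/X2 and S1/S2 variables of step S520.

```python
from dataclasses import dataclass

@dataclass
class AssociationFeatures:
    same_kind: bool          # abnormal-region kinds agree (Kind factor)
    overlap_area: float      # SB: mapped overlap of two non-intersecting regions
    intersect_area: float    # SJ: intersecting area of the two regions
    nearest_distance: float  # DB: distance between nearest points
    mean_pixel_a: float      # X1: pixel mean of the first region
    mean_pixel_b: float      # X2: pixel mean of the second region
    area_a: float            # S1: area of the first region
    area_b: float            # S2: area of the second region
```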
Step S520, calculating a relevance value between the first abnormal region and the second abnormal region according to the relevance feature data.
The calculation formula of the relevance value of the first abnormal area and the second abnormal area is as follows:
[Formula image not reproduced in this text; the relevance value L is computed from the variables defined below.]
wherein L represents the relevance value of the first abnormal region and the second abnormal region; Kind represents the abnormal region type consistency factor: if the first abnormal region and the second abnormal region are of the same type, Kind = 1, otherwise Kind = 0; α represents the relevance influence factor of the mapped overlap area; SB represents the mapped overlap area of the non-intersecting first and second abnormal regions; DB represents the distance between the nearest points of the two non-intersecting abnormal regions; β represents the relevance influence factor of the intersecting area of the two abnormal regions; SJ represents the area where the two abnormal regions intersect; S1 represents the area of the first abnormal region; S2 represents the area of the second abnormal region; γ represents the influence factor of the two regions' pixel means on the relevance value: when the two regions intersect, γ = 1 − β, and when they do not intersect, γ = 1 − α; X1 represents the pixel mean of the first abnormal region; and X2 represents the pixel mean of the second abnormal region.
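The formula image for L is not reproduced in this text, so the snippet below implements one plausible combination of the variables just defined (a Kind gate, a spatial term built from SB/DB or SJ, and a γ-weighted pixel-mean similarity). The exact weighting is an assumption for illustration only, not the patent's formula.

```python
def relevance(kind_same, S1, S2, X1, X2,
              SJ=0.0, SB=0.0, DB=0.0, alpha=0.5, beta=0.5):
    # Kind factor: regions of different kinds get relevance 0
    kind = 1.0 if kind_same else 0.0
    # similarity of the two regions' pixel means, in [0, 1]
    pixel_sim = 1.0 - abs(X1 - X2) / max(X1, X2, 1e-9)
    if SJ > 0:   # the two regions intersect: use the intersecting area SJ
        gamma = 1.0 - beta
        spatial = beta * SJ / min(S1, S2)
    else:        # disjoint: use mapped overlap SB, damped by distance DB
        gamma = 1.0 - alpha
        spatial = alpha * SB / (min(S1, S2) * (1.0 + DB))
    return kind * (spatial + gamma * pixel_sim)
```

Two identical, fully intersecting regions of the same kind score 1.0; regions of different kinds score 0 regardless of geometry, matching the Kind definition above.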
Step S530, comparing the relevance value with a preset threshold, and if the relevance value is greater than the preset threshold, fusing the first abnormal area and the second abnormal area together to form an abnormal block.
The method for fusing the first abnormal region and the second abnormal region is as follows: the boundary contours of the first abnormal region and the second abnormal region are smoothly connected together by a curved surface.
In step S540, the feature information of the abnormal block is obtained and displayed in the corresponding abnormal block.
The characteristic information includes volume, abnormal region type, gray level mean value and the like.
And S6, calculating the matching degree of the multi-modal image to be detected and the preset disease type according to the characteristic information of the abnormal block and the abnormal attribute characteristic data corresponding to the preset disease type.
The abnormal attribute feature data corresponding to each kind of diseases are preset and stored in the database, and the abnormal attribute feature data comprise: the type of the abnormal region, the volume range value, and the grayscale range value.
Specifically, the calculation formula of the matching degree between the multimodal image to be detected and the preset disease category is as follows:
[Formula image not reproduced in this text; the matching degree P is computed from the variables defined below.]
wherein P represents the matching degree between the multi-modal image to be detected and the preset disease type; Nx represents the number of abnormal blocks belonging to the abnormal region types corresponding to the preset disease type; Nb represents the number of abnormal blocks not belonging to the abnormal region types corresponding to the preset disease type; Q represents the total number of abnormal blocks; g1 represents the volume influence factor; g2 represents the grayscale influence factor; Wi represents the volume matching factor of the i-th abnormal block: if the volume of the i-th abnormal block is within the volume range value corresponding to the preset disease type, Wi = 1, otherwise Wi = 0; X̄i represents the mean gray value of the i-th abnormal block; Hmin represents the minimum of the grayscale range value; Hmax represents the maximum of the grayscale range value; and e ≈ 2.718.
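The formula image for P is likewise not reproduced, so the sketch below combines the variables just defined in one plausible way: each matching block contributes g1·Wi plus a g2-weighted e^(−d) term rewarding mean gray values near [Hmin, Hmax], averaged over all Q blocks so that non-matching blocks (Nb of them) pull P down. The exact form is an assumption, not the patent's formula.

```python
import math

def matching_degree(blocks, disease, g1=0.5, g2=0.5):
    # blocks: list of abnormal blocks with "kind", "volume", "mean_gray"
    # disease: preset disease with "kinds", "volume_range", "gray_range"
    Q = len(blocks)
    if Q == 0:
        return 0.0
    v_lo, v_hi = disease["volume_range"]
    h_lo, h_hi = disease["gray_range"]
    score = 0.0
    for b in blocks:
        if b["kind"] not in disease["kinds"]:
            continue  # non-matching blocks (the Nb of the text) add nothing
        W = 1.0 if v_lo <= b["volume"] <= v_hi else 0.0  # volume factor Wi
        g = b["mean_gray"]
        # normalized distance of the block's mean gray from [Hmin, Hmax]
        d = 0.0 if h_lo <= g <= h_hi else \
            min(abs(g - h_lo), abs(g - h_hi)) / (h_hi - h_lo)
        score += g1 * W + g2 * math.exp(-d)  # e^(-d): closer gray, higher score
    return score / Q
```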
And S7, sorting the matched disease types from high to low according to the calculated matching degree between the multi-modal image to be detected and the preset disease types, so as to obtain a disease matching list. The disease matching list aids the physician in diagnosis.
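Step S7 reduces to a descending sort over the computed matching degrees:

```python
def disease_match_list(match_degrees):
    # match_degrees: {disease name: matching degree P}
    # returns (disease, degree) pairs sorted from high to low, as in step S7
    return sorted(match_degrees.items(), key=lambda kv: kv[1], reverse=True)
```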
Example two
As shown in fig. 5, the present application provides an image recognition system 100 based on multi-modal imagery features, the system comprising:
and the model building module 10 is used for building a convolution neural network basic model in advance.
The training module 20 is configured to input a pre-acquired multi-modal training image set into a pre-constructed convolutional neural network base model, and train a multi-modal image anomaly identification model.
The anomaly identification module 30 is configured to identify the multi-modal images to be detected based on the multi-modal image anomaly identification model, and output the abnormal region data in each modal image.
And the registration module 40 is used for registering the multi-modal images of the same detected part in the three-dimensional frame model of the corresponding part and marking the abnormal region in each modal image.
And a fusion module 50, configured to fuse the associated abnormal regions together to form an abnormal block.
And the data processor 60 is configured to calculate a matching degree between the multi-modal image to be detected and the preset disease type according to the feature information of the abnormal block and the abnormal attribute feature data corresponding to the preset disease type.
And the recommending module 70 is configured to sort the matched disease types according to the calculated matching degree between the multimodal image to be detected and the preset disease type and the sequence from high matching degree to low matching degree, so as to obtain a disease matching list.
The fusion module 50 includes:
and the acquisition module is used for acquiring the associated characteristic data between the first abnormal area and the second abnormal area.
And the calculation processor is used for calculating the relevance value of the first abnormal area and the second abnormal area according to the relevance characteristic data.
And the comparator is used for comparing the relevance value with a preset threshold value, and fusing the first abnormal area and the second abnormal area together to form an abnormal block if the relevance value is larger than the preset threshold value.
And the display module is used for acquiring the characteristic information of the abnormal block and displaying the characteristic information in the corresponding abnormal block.
The calculation formula of the relevance value of the first abnormal area and the second abnormal area is as follows:
[Formula image not reproduced in this text; the relevance value L is computed from the variables defined below.]
wherein L represents the relevance value of the first abnormal region and the second abnormal region; Kind represents the abnormal region type consistency factor: if the first abnormal region and the second abnormal region are of the same type, Kind = 1, otherwise Kind = 0; α represents the relevance influence factor of the mapped overlap area; SB represents the mapped overlap area of the non-intersecting first and second abnormal regions; DB represents the distance between the nearest points of the two non-intersecting abnormal regions; β represents the relevance influence factor of the intersecting area of the two abnormal regions; SJ represents the area where the two abnormal regions intersect; S1 represents the area of the first abnormal region; S2 represents the area of the second abnormal region; γ represents the influence factor of the two regions' pixel means on the relevance value: when the two regions intersect, γ = 1 − β, and when they do not intersect, γ = 1 − α; X1 represents the pixel mean of the first abnormal region; and X2 represents the pixel mean of the second abnormal region.
The calculation formula of the matching degree of the multi-modal image to be detected and the preset disease category is as follows:
[Formula image not reproduced in this text; the matching degree P is computed from the variables defined below.]
wherein P represents the matching degree between the multi-modal image to be detected and the preset disease type; Nx represents the number of abnormal blocks belonging to the abnormal region types corresponding to the preset disease type; Nb represents the number of abnormal blocks not belonging to the abnormal region types corresponding to the preset disease type; Q represents the total number of abnormal blocks; g1 represents the volume influence factor; g2 represents the grayscale influence factor; Wi represents the volume matching factor of the i-th abnormal block: if the volume of the i-th abnormal block is within the volume range value corresponding to the preset disease type, Wi = 1, otherwise Wi = 0; X̄i represents the mean gray value of the i-th abnormal block; Hmin represents the minimum of the grayscale range value; and Hmax represents the maximum of the grayscale range value.
The beneficial effects realized by the present application are as follows:
(1) According to the present application, relevance analysis is performed on the abnormal regions in the multi-modal images, abnormal regions with strong relevance are fused together to form abnormal blocks, and the abnormal block information in the multi-modal images is displayed through the three-dimensional model, which improves the visual display effect of the abnormal regions and displays more data information.
(2) According to the present application, the matching degree between the multi-modal image to be detected and the preset disease types is calculated according to the feature information of the abnormal blocks and the abnormal attribute feature data corresponding to the preset disease types, and the matched disease types are sorted from high to low according to the calculated matching degree.
The above description is only an embodiment of the present invention, and is not intended to limit the present invention. Various modifications and alterations to this invention will become apparent to those skilled in the art. Any modification, equivalent replacement, improvement or the like made within the spirit and principle of the present invention should be included in the scope of the claims of the present invention.

Claims (10)

1. An image recognition method based on multi-modal imagery features, characterized in that the method comprises the following sub-steps:
inputting a multi-mode training image set acquired in advance into a convolutional neural network basic model constructed in advance, and training a multi-mode image anomaly recognition model;
based on the multi-modal image anomaly identification model, identifying the multi-modal images to be detected, and outputting anomaly region data in each modal image;
registering the multi-modal images of the same detection part in the three-dimensional frame model of the corresponding part, and marking an abnormal region in each modal image;
and fusing the related abnormal areas together to form an abnormal block.
2. The image recognition method based on multi-modal imagery features according to claim 1, characterized in that the method further comprises the following steps:
calculating the degree of match between the multi-modal images to be examined and each preset disease type according to the feature information of the abnormal block and the abnormal-attribute feature data corresponding to that disease type; and
ranking the matched disease types from the highest calculated degree of match to the lowest.
3. The image recognition method based on multi-modal imagery features according to claim 1, characterized in that the pre-constructed convolutional neural network base model comprises a convolutional layer, a pooling layer, a fully connected layer and a Softmax classifier constructed in sequence.
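As a rough illustration of the layer order named in claim 3 (convolution, pooling, fully connected, Softmax), the forward pass below runs on toy NumPy arrays; all shapes, weights and the single-filter convolution are arbitrary assumptions, not the model actually trained by this application.

```python
# Minimal NumPy sketch of the base model's layer order
# (conv -> pool -> fully connected -> softmax); illustrative only.
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((8, 8))          # toy single-channel input
kernel = rng.random((3, 3))         # one 3x3 convolution filter

# Convolutional layer (valid convolution, single filter)
conv = np.array([[np.sum(image[i:i+3, j:j+3] * kernel)
                  for j in range(6)] for i in range(6)])

# Pooling layer (2x2 max pooling)
pooled = conv.reshape(3, 2, 3, 2).max(axis=(1, 3))

# Fully connected layer (4 hypothetical output classes)
w = rng.random((4, 9))
logits = w @ pooled.flatten()

# Softmax classifier (numerically stabilized)
probs = np.exp(logits - logits.max())
probs /= probs.sum()
```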
4. The image recognition method based on multi-modal imagery features according to claim 1, characterized in that the method for training the multi-modal image anomaly recognition model comprises the following steps:
acquiring a multi-modal training image set; and
inputting the multi-modal training image set into the convolutional neural network base model, and training an anomaly recognition model for each modality.
5. The image recognition method based on multi-modal imagery features according to claim 1, characterized in that the method for registering the multi-modal images of the same detection site within the three-dimensional frame model of the corresponding site comprises the following sub-steps:
extracting the edge contour and shape features of the detection site in the multi-modal images;
matching and displaying the multi-modal images within the three-dimensional frame model according to the extracted edge contour and shape features; and
marking the abnormal regions in the multi-modal images according to the abnormal region data.
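The registration sub-steps of claim 5 might be sketched, under simplifying assumptions, as gradient-based edge extraction followed by a rigid centroid alignment to a reference slice of the three-dimensional frame model. The gradient threshold and the translation-only alignment are illustrative choices, not the application's prescribed registration method.

```python
# Illustrative sketch: extract a binary edge contour by thresholding the
# finite-difference gradient magnitude, then align the moving image's
# contour centroid to the reference contour centroid (rigid translation).
import numpy as np

def edge_contour(img, thresh=0.4):
    """Binary edge map from finite-difference gradient magnitude."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy) > thresh

def centroid(mask):
    """Mean (row, col) position of the True pixels in a binary mask."""
    ys, xs = np.nonzero(mask)
    return ys.mean(), xs.mean()

def register_offset(moving_edges, reference_edges):
    """Translation aligning the moving contour's centroid to the reference."""
    my, mx = centroid(moving_edges)
    ry, rx = centroid(reference_edges)
    return ry - my, rx - mx
```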
6. The image recognition method based on multi-modal imagery features according to claim 1, characterized in that the method for fusing the associated abnormal regions together to form an abnormal block comprises the following sub-steps:
acquiring association feature data between a first abnormal region and a second abnormal region;
calculating an association value of the first abnormal region and the second abnormal region according to the association feature data; and
comparing the association value with a preset threshold, and if the association value is greater than the preset threshold, fusing the first abnormal region and the second abnormal region together to form an abnormal block.
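A minimal sketch of the fusion logic in claim 6, assuming abnormal regions are sets of pixel coordinates and taking intersection-over-union as the association value; the actual association feature data and the threshold are left open by the claim and are assumptions here.

```python
# Illustrative sketch of fusing two associated abnormal regions; the
# association score (intersection-over-union) and the preset threshold
# are assumptions for the example, not the patent's prescribed measure.

def association_value(region_a, region_b):
    """Score two regions (sets of (y, x) pixels) by normalized overlap."""
    inter = len(region_a & region_b)
    union = len(region_a | region_b)
    return inter / union if union else 0.0

def fuse_if_associated(region_a, region_b, threshold=0.1):
    """Merge the regions into one abnormal block when the association
    value exceeds the preset threshold; otherwise keep them separate."""
    if association_value(region_a, region_b) > threshold:
        return [region_a | region_b]
    return [region_a, region_b]

# Two overlapping 2x2 regions sharing one pixel.
a = {(0, 0), (0, 1), (1, 0), (1, 1)}
b = {(1, 1), (1, 2), (2, 1), (2, 2)}
blocks = fuse_if_associated(a, b)
```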
7. The image recognition method based on multi-modal imagery features according to claim 6, characterized in that feature information of each abnormal region is acquired and displayed over the corresponding abnormal region.
8. An image recognition system based on multi-modal imagery features, characterized in that the system comprises:
a training module, configured to input a pre-acquired multi-modal training image set into a pre-constructed convolutional neural network base model and train a multi-modal image anomaly recognition model;
an anomaly recognition module, configured to recognize the multi-modal images to be examined based on the multi-modal image anomaly recognition model and output abnormal region data for each modal image;
a registration module, configured to register the multi-modal images of the same detection site within a three-dimensional frame model of the corresponding site and mark the abnormal regions in each modal image; and
a fusion module, configured to fuse the associated abnormal regions together to form an abnormal block.
9. The image recognition system based on multi-modal imagery features according to claim 8, characterized in that the system further comprises:
a data processor, configured to calculate the degree of match between the multi-modal images to be examined and each preset disease type according to the feature information of the abnormal block and the abnormal-attribute feature data corresponding to that disease type; and
a recommendation module, configured to rank the matched disease types from the highest calculated degree of match to the lowest, obtaining a disease matching list.
10. The image recognition system based on multi-modal imagery features according to claim 9, characterized in that the system further comprises: a model building module, configured to pre-construct the convolutional neural network base model.
CN202211392434.1A 2022-11-08 2022-11-08 Image recognition method and system based on multi-mode imaging features Active CN115690556B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211392434.1A CN115690556B (en) 2022-11-08 2022-11-08 Image recognition method and system based on multi-mode imaging features


Publications (2)

Publication Number Publication Date
CN115690556A true CN115690556A (en) 2023-02-03
CN115690556B CN115690556B (en) 2023-06-27

Family

ID=85049191

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211392434.1A Active CN115690556B (en) 2022-11-08 2022-11-08 Image recognition method and system based on multi-mode imaging features

Country Status (1)

Country Link
CN (1) CN115690556B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117689567A (en) * 2024-01-31 2024-03-12 广州索诺康医疗科技有限公司 Ultrasonic image scanning method and device

Citations (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6549646B1 (en) * 2000-02-15 2003-04-15 Deus Technologies, Llc Divide-and-conquer method and system for the detection of lung nodule in radiological images
JP2006255065A (en) * 2005-03-16 2006-09-28 Fuji Photo Film Co Ltd Image output method, image output device and program
US20140250120A1 (en) * 2011-11-24 2014-09-04 Microsoft Corporation Interactive Multi-Modal Image Search
US8913807B1 (en) * 2010-12-30 2014-12-16 Given Imaging Ltd. System and method for detecting anomalies in a tissue imaged in-vivo
US20190057778A1 (en) * 2017-08-16 2019-02-21 The Johns Hopkins University Abnormal Tissue Detection Via Modal Upstream Data Fusion
CN109741343A (en) * 2018-12-28 2019-05-10 浙江工业大学 A kind of T1WI-fMRI image tumour collaboration dividing method divided based on 3D-Unet and graph theory
CN111445479A (en) * 2020-03-20 2020-07-24 东软医疗系统股份有限公司 Method and device for segmenting interest region in medical image
CN111553883A (en) * 2020-03-31 2020-08-18 杭州依图医疗技术有限公司 Medical image processing method and device, computer equipment and storage medium
CN111798465A (en) * 2020-07-02 2020-10-20 中国人民解放军空军军医大学 Medical image-based heterogeneous tumor high-risk area detection method and system
CN112215844A (en) * 2020-11-26 2021-01-12 南京信息工程大学 MRI (magnetic resonance imaging) multi-mode image segmentation method and system based on ACU-Net
CN112396218A (en) * 2020-11-06 2021-02-23 南京航空航天大学 Crowd flow prediction method based on urban area multi-mode fusion
CN112419247A (en) * 2020-11-12 2021-02-26 复旦大学 MR image brain tumor detection method and system based on machine learning
CN113298742A (en) * 2021-05-20 2021-08-24 广东省人民医院 Multi-modal retinal image fusion method and system based on image registration
US20210264593A1 (en) * 2018-06-14 2021-08-26 Fuel 3D Technologies Limited Deformity edge detection
US11120276B1 (en) * 2020-07-30 2021-09-14 Tsinghua University Deep multimodal cross-layer intersecting fusion method, terminal device, and storage medium
CN113450294A (en) * 2021-06-07 2021-09-28 刘星宇 Multi-modal medical image registration and fusion method and device and electronic equipment
US20210304002A1 (en) * 2020-03-27 2021-09-30 Battelle Memorial Institute Data handling and machine learning
CN114283471A (en) * 2021-12-16 2022-04-05 武汉大学 Multi-modal sequencing optimization method for heterogeneous face image re-recognition
CN114387201A (en) * 2021-04-08 2022-04-22 透彻影像科技(南京)有限公司 Cytopathic image auxiliary diagnosis system based on deep learning and reinforcement learning
CN114444561A (en) * 2021-08-23 2022-05-06 感知集团有限公司 PM2.5 prediction method based on CNNs-GRU fusion deep learning model
CN114897779A (en) * 2022-04-12 2022-08-12 华南理工大学 Cervical cytology image abnormal area positioning method and device based on fusion attention
CN115132376A (en) * 2021-03-27 2022-09-30 王宏宇 Cardiovascular and cerebrovascular disease collaborative diagnosis model system based on multivariate heterogeneous medical data

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
DAPENG LI et al.: "TAUNet: a triple-attention-based multi-modality MRI fusion U-Net for cardiac pathology segmentation", Complex & Intelligent Systems, p. 2489 *
LIU Xiaoming: "Research on key technologies of deep learning in medical image segmentation and classification", Chinese Master's Theses Electronic Journal, pp. 53-70 *
WEN Dehui et al.: "The diagnostic value of multimodal ultrasound in differentiating moderately and highly suspicious malignant thyroid nodules", Clinical Research, pp. 69-73 *


Also Published As

Publication number Publication date
CN115690556B (en) 2023-06-27

Similar Documents

Publication Publication Date Title
CN109035187B (en) Medical image labeling method and device
CN108520519B (en) Image processing method and device and computer readable storage medium
CN110338844B (en) Three-dimensional imaging data display processing method and three-dimensional ultrasonic imaging method and system
CN102890823B (en) Motion object outline is extracted and left ventricle image partition method and device
CN107909622B (en) Model generation method, medical imaging scanning planning method and medical imaging system
CN106682435A (en) System and method for automatically detecting lesions in medical image through multi-model fusion
Nurmaini et al. Accurate detection of septal defects with fetal ultrasonography images using deep learning-based multiclass instance segmentation
CN108109151B (en) Method and device for segmenting ventricle of echocardiogram based on deep learning and deformation model
CN111462049B (en) Automatic lesion area form labeling method in mammary gland ultrasonic radiography video
US20210271914A1 (en) Image processing apparatus, image processing method, and program
AU2020340234A1 (en) System and method for identification, labeling, and tracking of a medical instrument
CN115429325A (en) Ultrasonic imaging method and ultrasonic imaging equipment
CN115690556B (en) Image recognition method and system based on multi-mode imaging features
CN115954101A (en) Health degree management system and management method based on AI tongue diagnosis image processing
CN111528907A (en) Ultrasonic image pneumonia auxiliary diagnosis method and system
Saleh et al. A deep learning localization method for measuring abdominal muscle dimensions in ultrasound images
Mohd Noor et al. Segmentation of the lung anatomy for high resolution computed tomography (HRCT) thorax images
CN113688942A (en) Method and device for automatically evaluating cephalic and lateral adenoid body images based on deep learning
CN113222996A (en) Heart segmentation quality evaluation method, device, equipment and storage medium
Wei et al. Automatic recognition of major fissures in human lungs
US20220172367A1 (en) Visualization of sub-pleural regions
Mangin et al. Object-based strategy for morphometry of the cerebral cortex
EP4076207B1 (en) A method and system for improved ultrasound plane acquisition
WO2023133929A1 (en) Ultrasound-based human tissue symmetry detection and analysis method
Shaaf et al. A Convolutional Neural Network Model to Segment Myocardial Infarction from MRI Images.

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant