CN116758404A - Method and device for intelligently detecting accuracy of medical image - Google Patents
- Publication number
- CN116758404A (application CN202310537520.5A)
- Authority
- CN
- China
- Prior art keywords
- medical image
- information
- accuracy
- characteristic information
- deep learning
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/98—Detection or correction of errors, e.g. by rescanning the pattern or by human intervention; Evaluation of the quality of the acquired patterns
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/044—Recurrent networks, e.g. Hopfield networks
- G06N3/0442—Recurrent networks, e.g. Hopfield networks characterised by memory or gating, e.g. long short-term memory [LSTM] or gated recurrent units [GRU]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
- G06N3/0455—Auto-encoder networks; Encoder-decoder networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/09—Supervised learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30168—Image quality inspection
Abstract
The invention provides a method and a device for intelligently detecting the accuracy of medical images. The method comprises the following steps: acquiring a medical image to be identified; inputting the medical image into a trained deep learning model and extracting characteristic information of the medical image; performing feature processing on the characteristic information to obtain target characteristic information; and judging the target characteristic information and outputting whether the medical image is authentic or forged. By judging the processed target characteristic information, the method can identify the authenticity of the medical image. When a medical image has been forged, this prevents a doctor from basing a diagnosis on the forged image, thereby avoiding misdiagnosis and the serious accidents it could cause.
Description
Technical Field
The invention relates to the field of medicine, and in particular to a method and a device for intelligently detecting the accuracy of medical images.
Background
Medical images (e.g., X-ray and CT images) are, as is well known, an important means that doctors rely on in their everyday work. Through medical images, doctors can detect various diseases and abnormalities, plan treatments such as surgery and radiotherapy, navigate during surgical procedures, and conduct medical research into the progression of diseases and lesions. The importance of medical images is therefore self-evident.
However, with the continuous development of technology, some lawbreakers attempt to tamper with medical images. This poses great challenges for doctors' diagnoses and can easily lead a doctor to misdiagnose a patient, with serious consequences.
How to solve the above problems is therefore a question worth considering.
Disclosure of Invention
The invention provides a method and a device for intelligently detecting the accuracy of medical images, which are used for solving the problems.
In a first aspect, the present invention provides a method for intelligently detecting the accuracy of a medical image, comprising:
acquiring a medical image to be identified;
inputting the medical image into a trained deep learning model, and extracting characteristic information of the medical image;
performing feature processing on the feature information to obtain target feature information;
and judging the target characteristic information and outputting whether the medical image is authentic or forged.
Optionally, the extracting feature information of the medical image includes:
and extracting global information in the medical image with a Swin-Transformer module of the deep learning model, and extracting periodic information in the medical image with an LSTM module of the deep learning model, wherein the global information and the periodic information constitute the characteristic information.
Optionally, the performing feature processing on the feature information to obtain target feature information includes:
and carrying out a fusion operation on the global information and the periodic information to obtain the target characteristic information.
Optionally, the judging of the target characteristic information and the outputting of whether the medical image is authentic comprise:
if the target characteristic information is judged to meet the preset requirement, outputting a result indicating that the medical image is authentic;
and if the target characteristic information is judged not to meet the preset requirement, outputting a result indicating that the medical image is forged.
Optionally, the deep learning model comprises an encoding part and a decoding part, each of which comprises a plurality of Swin-Transformer modules; after the medical image to be identified is input into the deep learning model, it is divided into a plurality of minimum units;
the Swin-Transformer modules extract local information from each minimum unit to obtain a plurality of pieces of local information, and these pieces of local information together form the global information.
Optionally, the Swin-Transformer module further comprises a multi-scale attention mechanism unit, which performs feature processing on the medical image at different scales.
Optionally, the medical image is in the DICOM format and complies with the DICOM 3.0 standard.
In a second aspect, the present invention provides an apparatus for intelligently detecting the accuracy of a medical image, comprising:
the acquisition module is used for acquiring the medical image to be identified;
the extraction module is used for inputting the medical image into a trained deep learning model and extracting the characteristic information of the medical image;
the processing module is used for carrying out feature processing on the feature information to obtain target feature information;
and the processing module is further used for judging the target characteristic information and outputting whether the medical image is authentic or forged.
In a third aspect, the invention provides an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing a method of intelligently detecting medical image accuracy as described above when executing the program.
In a fourth aspect, the present invention provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a method of intelligently detecting medical image accuracy as described above.
The technical scheme of the invention has at least the following beneficial effects:
according to the method for intelligently detecting the accuracy of the medical image, the trained deep learning model is used for extracting the characteristic information of the medical image and further carrying out characteristic processing on the characteristic information, so that the obtained target characteristic information is comprehensive and accurate. The authenticity of the medical image can be identified by judging the processed target characteristic information. Under the condition that the medical image is forged, a doctor can be prevented from diagnosing by using the forged medical image, so that misdiagnosis of the doctor is avoided, and serious accidents are caused.
Drawings
In order to illustrate the technical solutions of the invention or of the prior art more clearly, the drawings used in the embodiments or in the description of the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the invention; a person skilled in the art could obtain other drawings from them without inventive effort.
FIG. 1 is a flow chart of a method for intelligently detecting medical image accuracy provided by the invention;
FIG. 2 is a schematic structural diagram of a deep learning model according to the present invention;
FIG. 3 is a schematic structural diagram of a Swin-Transformer module according to the present invention;
FIG. 4 is a schematic block diagram of an apparatus for intelligently detecting the accuracy of medical images according to the present invention;
fig. 5 is a schematic structural diagram of an electronic device according to the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The terms "first," "second," "third," "fourth" and the like in the description and in the claims and in the above drawings, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein.
It should be understood that, in the various embodiments of the present invention, the sequence numbers of the processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and the sequence numbers should not constitute any limitation on the implementation of the embodiments of the present invention.
It should be understood that in the present invention, "comprising" and "having" and any variations thereof are intended to cover non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements that are expressly listed or inherent to such process, method, article, or apparatus.
It should be understood that in the present invention, "plurality" means two or more. "And/or" merely describes an association between objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist together, or B exists alone. The character "/" generally indicates that the associated objects are in an "or" relationship. "Comprising A, B and C" and "comprising A, B, C" mean that all three of A, B and C are comprised; "comprising A, B or C" means that one of A, B and C is comprised; and "comprising A, B and/or C" means that any one, any two, or all three of A, B and C are comprised.
It should be understood that in the present invention, "B corresponding to A" means that B is associated with A and that B can be determined from A. However, determining B from A does not mean determining B from A alone; B may also be determined from A and/or other information. A and B match when the similarity between A and B is greater than or equal to a preset threshold.
As used herein, "if" may be interpreted as "when", "upon", "in response to determining" or "in response to detecting", depending on the context.
The technical scheme of the invention is described in detail below by specific examples. The following embodiments may be combined with each other, and some embodiments may not be repeated for the same or similar concepts or processes.
Referring to fig. 1, a flow chart of a method for intelligently detecting accuracy of medical images provided by the invention is shown, and the method comprises the following steps:
s11: a medical image to be identified is acquired.
It should be noted that the medical image may be an X-ray image, a CT image, or an MRI image; this is not limited here. Moreover, the medical image to be identified may be, for example, a medical image of a hip joint, a knee joint, or a spinal region, etc.
Optionally, CT (Computed Tomography) is a medical imaging technique that uses computer technology to generate three-dimensional images of the internal organs of the human body from multiple X-ray scans. CT images have a wide range of applications in the medical field; the following are some common ones:
Diagnosis: CT images can help doctors detect various diseases and abnormalities, such as lung cancer, heart disease, and stroke. They can display details of organs, blood vessels, and bone structures, allowing doctors to make more accurate diagnoses.
Treatment planning: the CT images may help doctors plan treatment plans such as surgery and radiation therapy. Through analysis of the images, the physician can determine the extent, location and intensity of the treatment, reducing the impact of the treatment on healthy tissue.
Surgical navigation: CT images can be used as navigation tools during surgery. The doctor can know the condition of the operation area in real time through the image so as to ensure the accuracy and the safety of the operation.
Disease study: CT images can be used in medical research to help scientists understand the progression of disease and the course of lesions. Through analysis of the images, researchers can determine the impact and characteristics of the disease and develop new treatments.
S12: and inputting the medical image into a trained deep learning model, and extracting characteristic information of the medical image.
It should be noted that the training process of the deep learning model is as follows:
the medical images are divided into a training set, a test set and a validation set. And performing model training on the basis of the training sets and the deep learning models trained by the training sets, testing the deep learning models trained by the training sets on the basis of the testing sets, and after the testing conditions are met, for example, the accuracy of testing the deep learning models reaches more than 95%, verifying the deep learning models tested by the testing sets on the basis of the verification sets, and adjusting the weights of the parameters of the deep learning models according to the verification results so as to finally obtain the trained deep learning models.
S13: and carrying out feature processing on the feature information to obtain target feature information.
It should be noted that the feature processing of the characteristic information involves multiple layers, so that the resulting target characteristic information is comparatively comprehensive and accurate.
S14: and judging the target characteristic information and outputting the true or false condition of the medical image.
According to the method for intelligently detecting the accuracy of medical images provided by the invention, the trained deep learning model extracts the characteristic information of the medical image and then performs feature processing on it, so that the resulting target characteristic information is comprehensive and accurate. The authenticity of the medical image can be identified by judging the processed target characteristic information. When a medical image has been forged, this prevents a doctor from basing a diagnosis on the forged image, thereby avoiding misdiagnosis and the serious accidents it could cause.
When the medical image is determined to be authentic, the doctor can confidently perform analysis and diagnosis based on it.
Illustratively, the extracting feature information of the medical image includes:
and extracting global information in the medical image with a Swin-Transformer module of the deep learning model, and extracting periodic information in the medical image with an LSTM module of the deep learning model, wherein the global information and the periodic information constitute the characteristic information.
It should be noted that extracting the global information of the medical image yields richer geometric information with a correspondingly smaller receptive field. This also helps to segment smaller objects and improves segmentation accuracy. By extracting the periodic information of the medical image, the periodicity of the medical image can be processed and judged. The global information and the periodic information together constitute the characteristic information.
For example, the performing feature processing on the feature information to obtain target feature information includes:
and carrying out a fusion operation on the global information and the periodic information to obtain the target characteristic information.
It should be noted that by fusing the global information of the medical image with its periodic features, the resulting target characteristic information can contain more feature information. The result obtained from such target characteristic information is also more accurate than one based on a single kind of information alone.
Alternatively, the fusion operation may be, for example, an addition operation or a dot product operation.
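The two fusion operations named above can be sketched as follows (a minimal plain-Python illustration with feature vectors represented as lists; in practice these would be the tensors produced by the Swin-Transformer and LSTM branches):

```python
def fuse_add(global_info, periodic_info):
    """Element-wise addition of the two feature vectors, one of the
    fusion operations named in the text."""
    assert len(global_info) == len(periodic_info)
    return [g + p for g, p in zip(global_info, periodic_info)]

def fuse_dot(global_info, periodic_info):
    """Dot product of the two feature vectors, the other fusion
    operation named in the text; it yields a single scalar."""
    assert len(global_info) == len(periodic_info)
    return sum(g * p for g, p in zip(global_info, periodic_info))
```

Addition preserves the feature dimension, whereas the dot product collapses the pair into one scalar; which suits the downstream judgment step depends on the rest of the network.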
For example, the judging of the target characteristic information and the outputting of whether the medical image is authentic comprise:
if the target characteristic information is judged to meet the preset requirement, outputting a result indicating that the medical image is authentic;
and if the target characteristic information is judged not to meet the preset requirement, outputting a result indicating that the medical image is forged.
When the deep learning model judges the target characteristic information, it actually computes a numerical value from the target characteristic information and outputs it. The preset requirement may be, for example, a preset value such as 0.618, 0.6, or 0.7. If the value computed from the target characteristic information is less than or equal to the preset value, the medical image is confirmed to be authentic; if the value is greater than the preset value, the medical image is confirmed to be forged. That is, if the target characteristic information meets the preset requirement, the medical image is output as authentic; if it does not, the medical image is output as forged.
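The decision rule described above can be sketched as follows (a minimal sketch; the 0.618 threshold is one of the example preset values given in the text):

```python
def judge_authenticity(score, threshold=0.618):
    """Return 'real' if the value computed from the target feature
    information is at or below the preset value, and 'forged' otherwise,
    following the rule stated in the text."""
    return "real" if score <= threshold else "forged"
```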
Referring next to fig. 2, a schematic structural diagram of a deep learning model according to the present invention is provided. Illustratively, the deep learning model comprises an encoding part and a decoding part, each of which comprises a plurality of Swin-Transformer modules; after the medical image to be identified is input into the deep learning model, it is divided into a plurality of minimum units;
the Swin-Transformer modules extract local information from each minimum unit to obtain a plurality of pieces of local information, and these pieces of local information together form the global information.
Optionally, the medical image is processed first by the encoding part of the deep learning model and then by the decoding part, so as to extract the global information of the medical image. The global information can then be combined with the periodic information extracted by the LSTM module by means of addition, and the authenticity of the medical image is finally output through a fully connected layer.
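The combination step, addition of the two branch outputs followed by a fully connected layer, can be sketched as follows. The feature dimension, the weights, and the single sigmoid output unit are illustrative assumptions; the text does not specify concrete layer sizes:

```python
import math

def fully_connected(features, weights, bias):
    """One fully connected output unit followed by a sigmoid, producing
    a score in (0, 1) that the judgment step can threshold."""
    z = sum(f * w for f, w in zip(features, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def authenticity_score(global_info, periodic_info, weights, bias=0.0):
    """Add the Swin-Transformer branch output to the LSTM branch output
    (the combination by addition described in the text), then map the
    fused vector to a scalar score through the fully connected layer."""
    fused = [g + p for g, p in zip(global_info, periodic_info)]
    return fully_connected(fused, weights, bias)
```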
Referring next to fig. 3, a schematic structural diagram of a Swin-Transformer module according to the present invention is shown. The Swin-Transformer module further comprises a multi-scale attention mechanism unit for performing feature processing on medical images at different scales.
Optionally, the flow and features of the Swin-Transformer module are as follows:
input processing: first, an input medical image is divided into a plurality of non-overlapping blocks, each block containing a number of pixels. These blocks are the basic processing units of the Swim-transducer.
Extracting local features: for each block, a small local transducer network is used to extract local features. This local transducer consists of multiple transducer encoder layers for learning the feature representation of each block.
Cross-block information interaction: the local features extracted in each block are integrated into a global feature. To achieve this, swin Transformer uses a trans-former network of cross-blocks for interacting information between different blocks. In this trans-block network, each block has access to information from other blocks so that global features can be built and integrated.
Multiscale processing: swin transducer also uses a multi-scale attention mechanism for handling features at different scales. This multi-scale attention mechanism may help the model better handle objects and scenes of different sizes.
And (3) outputting: finally, the integrated global features are transferred to a linear classifier for prediction of tasks such as medical image classification or object detection.
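The input-processing step above (dividing the image into non-overlapping blocks) can be sketched as follows; the 4x4 block size is an illustrative assumption, and the attention layers themselves are omitted:

```python
def partition_blocks(image, block=4):
    """Divide a 2-D image (a list of pixel rows) into non-overlapping
    block x block patches, the basic processing units described for the
    Swin-Transformer.  Assumes the image height and width are multiples
    of the block size."""
    h, w = len(image), len(image[0])
    assert h % block == 0 and w % block == 0
    blocks = []
    for i in range(0, h, block):
        for j in range(0, w, block):
            blocks.append([row[j:j + block] for row in image[i:i + block]])
    return blocks
```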
Continuing to refer to FIG. 2, the LSTM module is now described.
Optionally, the flow and characteristics of the LSTM module are as follows:
input Gate (Input Gate): the input sequence data x and the hidden state h (t-1) at the previous moment are passed through a fully connected layer to generate a value i (t) between 0 and 1. i (t) indicates how much information needs to be added to the cell state C (t) at the current time, which can control the forgotten and updated operation mentioned earlier.
Forget Gate (Forget Gate): the input sequence data x and the hidden state h (t-1) at the previous moment are passed through a fully connected layer to generate a value f (t) between 0 and 1. f (t) represents how much information the cell state C (t-1) at the previous time needs to be forgotten.
Cell State (Cell State) update: the cell state C (t) at the current time is calculated from the values of the input gate and the forget gate, and the cell state C (t-1) at the previous time and the input x (t) at the current time.
Output Gate (Output Gate): the input sequence data x and the hidden state h (t-1) at the previous moment are passed through a fully connected layer to generate a value o (t) between 0 and 1. o (t) represents how much information needs to be output from the cell state C (t) at the current time.
Hidden state update: the hidden state h (t) at the current time is calculated from the cell state C (t) at the current time and the value o (t) of the output gate.
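The gate equations above can be written out as a single LSTM time step (a plain-Python sketch with scalar states; the scalar weights in W are illustrative stand-ins for the fully connected layers described in the text):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W):
    """One LSTM time step for scalar states, following the gates in the
    text: input gate i(t), forget gate f(t), cell state update C(t),
    output gate o(t), hidden state update h(t).  W maps each gate name
    to a (w_x, w_h, b) triple."""
    i_t = sigmoid(W["i"][0] * x_t + W["i"][1] * h_prev + W["i"][2])
    f_t = sigmoid(W["f"][0] * x_t + W["f"][1] * h_prev + W["f"][2])
    g_t = math.tanh(W["g"][0] * x_t + W["g"][1] * h_prev + W["g"][2])  # candidate values
    o_t = sigmoid(W["o"][0] * x_t + W["o"][1] * h_prev + W["o"][2])
    c_t = f_t * c_prev + i_t * g_t        # cell state update
    h_t = o_t * math.tanh(c_t)            # hidden state update
    return h_t, c_t
```

Iterating `lstm_step` over a sequence of inputs is what lets the module capture the periodic information described above.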
By way of example, the medical image is in the DICOM format and complies with the DICOM 3.0 standard.
Next, referring to fig. 4, and based on the same technical concept as the above method for intelligently detecting the accuracy of a medical image, the present invention further provides a device for intelligently detecting the accuracy of a medical image. The device has the same functions as the method, and these are not described again here.
The device for intelligently detecting the accuracy of the medical image comprises:
an acquisition module 41 for acquiring a medical image to be identified;
an extracting module 42, configured to input the medical image into a trained deep learning model, and extract feature information of the medical image;
a processing module 43, configured to perform feature processing on the feature information to obtain target feature information;
the processing module 43 is further configured to judge the target characteristic information and output whether the medical image is authentic or forged.
Optionally, the extracting module 42 is specifically configured to, when extracting the feature information of the medical image:
and extracting global information in the medical image with a Swin-Transformer module of the deep learning model, and extracting periodic information in the medical image with an LSTM module of the deep learning model, wherein the global information and the periodic information constitute the characteristic information.
Optionally, the processing module 43 is specifically configured to, when performing feature processing on the feature information to obtain target feature information:
and performing a fusion operation on the global information and the periodic information to obtain the target characteristic information.
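The disclosure does not fix the form of the fusion operation. As an illustrative sketch only, one common choice is to normalize the feature vector of each branch and concatenate the two into a single target feature vector:

```python
import numpy as np

def fuse_features(global_feat, periodic_feat):
    """Illustrative fusion of the Swin-Transformer (global) branch and
    the LSTM (periodic) branch: L2-normalize each branch so neither
    dominates by scale, then concatenate. Concatenation is an assumed
    operator; the patent leaves the fusion unspecified."""
    g = global_feat / (np.linalg.norm(global_feat) + 1e-8)
    p = periodic_feat / (np.linalg.norm(periodic_feat) + 1e-8)
    return np.concatenate([g, p])
```

The normalization constant 1e-8 simply guards against division by zero for degenerate feature vectors.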
Optionally, the processing module 43 is specifically configured to, when performing judgment processing on the target feature information and outputting the authenticity of the medical image:
if the target characteristic information is judged to meet the preset requirement, outputting a result indicating that the medical image is real;
and if the target characteristic information is judged not to meet the preset requirement, outputting a result indicating that the medical image is forged.
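The "preset requirement" is left open by the disclosure. If it is modeled, purely for illustration, as a scalar authenticity score compared against a threshold, the judgment step reduces to the following sketch (the threshold of 0.5 is a hypothetical value):

```python
def judge_authenticity(target_score: float, threshold: float = 0.5) -> str:
    """Hypothetical decision rule for the judgment step: a score derived
    from the target feature information is compared against a preset
    threshold. Meeting the requirement yields 'real', otherwise 'forged'."""
    return "real" if target_score >= threshold else "forged"
```

In practice such a score would typically come from a final sigmoid-activated classification head over the fused target features; that head is an assumption, not something the patent specifies.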
Optionally, the deep learning model includes an encoding part and a decoding part, the encoding part and the decoding part each include a plurality of Swin-Transformer modules, and the medical image to be identified is divided into a plurality of minimum units after being input into the deep learning model;
the Swin-Transformer modules are used for extracting local information from each minimum unit to obtain a plurality of pieces of local information, and the plurality of pieces of local information form the global information.
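The division into minimum units corresponds to the patch-partition step that precedes Swin-Transformer blocks. A minimal sketch for a single-channel image follows; the patch size of 4 is an assumption matching common Swin-Transformer configurations, and H and W are assumed divisible by it:

```python
import numpy as np

def partition_patches(image, patch=4):
    """Split an H x W image into non-overlapping patch x patch
    'minimum units'. Each unit is later processed locally; the
    collection of per-unit features forms the global information."""
    h, w = image.shape
    units = (image.reshape(h // patch, patch, w // patch, patch)
                  .transpose(0, 2, 1, 3)        # group rows/cols per unit
                  .reshape(-1, patch, patch))   # one entry per minimum unit
    return units
```

For an 8 x 8 image with patch size 4 this yields four 4 x 4 units, the first being the top-left block of the image.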
Optionally, the Swin-Transformer module further comprises a multi-scale attention mechanism unit, wherein the multi-scale attention mechanism unit is used for performing feature processing on medical images at different scales.
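The exact structure of the multi-scale attention mechanism unit is not specified in the disclosure. As an illustration only, the multi-scale aspect can be approximated by pooling a feature map at several window sizes and combining the results; the scales (1, 2, 4) are assumed values:

```python
import numpy as np

def multiscale_features(fmap, scales=(1, 2, 4)):
    """Illustrative stand-in for multi-scale feature processing:
    average-pool the 2-D feature map with several window sizes and
    concatenate the flattened results, so both fine and coarse
    structure are represented."""
    feats = []
    for s in scales:
        h, w = fmap.shape[0] // s, fmap.shape[1] // s
        # crop to a multiple of the window, then average each s x s window
        pooled = fmap[:h * s, :w * s].reshape(h, s, w, s).mean(axis=(1, 3))
        feats.append(pooled.ravel())
    return np.concatenate(feats)
```

A real multi-scale attention unit would weight these scales adaptively rather than pool them uniformly; this sketch only shows the multi-resolution decomposition.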
Alternatively, the medical image is in a DICOM format and the medical image complies with the DICOM 3.0 standard.
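Conformance to the DICOM 3.0 file format (PS3.10) can be checked cheaply from the file header: a conforming file begins with a 128-byte preamble followed by the magic bytes "DICM". A sketch of such a pre-check, which an ingestion pipeline might run before passing the image to the model:

```python
def looks_like_dicom(raw: bytes) -> bool:
    """Return True if the byte stream starts with a DICOM Part 10
    header: a 128-byte preamble followed by the magic bytes b'DICM'."""
    return len(raw) >= 132 and raw[128:132] == b"DICM"
```

This only validates the file header; full conformance (transfer syntax, mandatory data elements) would require a DICOM parser.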
Referring next to fig. 5, a schematic structural diagram of an electronic device according to the present invention is provided.
The electronic device may include: a processor 510, a communication interface (Communications Interface) 520, a memory 530 and a communication bus 540, wherein the processor 510, the communication interface 520 and the memory 530 communicate with each other through the communication bus 540. The processor 510 may invoke logic instructions in the memory 530 to perform the method of intelligently detecting medical image accuracy provided above.
Further, the logic instructions in the memory 530 may be implemented in the form of software functional units and stored in a computer-readable storage medium when sold or used as a stand-alone product. Based on this understanding, the technical solution of the present invention, or the part of it that contributes to the prior art, may be embodied in the form of a software product stored in a storage medium and comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods of the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.
Another embodiment of the invention provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a method of intelligently detecting medical image accuracy as described above.
The computer readable storage medium may be a tangible device that can hold and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium include: a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), a memory stick, a floppy disk, a mechanical encoding device such as a punch card or a raised structure in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as a transitory signal per se, such as a radio wave or other freely propagating electromagnetic wave, an electromagnetic wave propagating through a waveguide or other transmission medium (e.g., a light pulse through a fiber optic cable), or an electrical signal transmitted through a wire.
The computer readable program instructions described herein may be downloaded from a computer readable storage medium to a respective computing/processing device or to an external computer or external storage device over a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmissions, wireless transmissions, routers, firewalls, switches, gateway computers and/or edge servers. The network interface card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium in the respective computing/processing device.
Computer program instructions for carrying out operations of the present invention may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the latter case, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, aspects of the present invention are implemented by personalizing electronic circuitry, such as programmable logic circuitry, field-programmable gate arrays (FPGAs) or programmable logic arrays (PLAs), with state information of the computer readable program instructions, the electronic circuitry being able to execute the computer readable program instructions.
Various aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer readable program instructions may be provided to a processing unit of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processing unit of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable medium having the instructions stored therein includes an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Note that all features disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise. Thus, unless expressly stated otherwise, each feature disclosed is only one example of a generic series of equivalent or similar features. Where the words "further", "preferably", "still further" or "more preferably" are used, the description that follows is given on the basis of the preceding embodiment, and the content following the word combines with that preceding embodiment to form the complete construction of another embodiment. Several such "further", "preferably", "still further" or "more preferably" additions to the same embodiment may be combined arbitrarily.
It will be appreciated by persons skilled in the art that the embodiments of the invention described above and shown in the drawings are by way of example only and are not limiting. The objects of the present invention have been fully and effectively achieved. The functional and structural principles of the present invention have been shown and described in the examples, and the embodiments of the invention may be modified or practiced without departing from those principles.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present disclosure, not for limiting it; although the present disclosure has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that the technical schemes described in the foregoing embodiments can still be modified, or some or all of their technical features can be replaced by equivalents, and such modifications and substitutions do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present disclosure.
Claims (10)
1. A method for intelligently detecting the accuracy of a medical image, comprising:
acquiring a medical image to be identified;
inputting the medical image into a trained deep learning model, and extracting characteristic information of the medical image;
performing feature processing on the feature information to obtain target feature information;
and performing judgment processing on the target characteristic information and outputting the authenticity of the medical image.
2. The method of intelligently detecting medical image accuracy according to claim 1, wherein the extracting feature information of the medical image comprises:
and extracting global information in the medical image based on a Swin-Transformer module of the deep learning model, and extracting periodic information in the medical image based on an LSTM module of the deep learning model, wherein the global information and the periodic information constitute the characteristic information.
3. The method for intelligently detecting the accuracy of a medical image according to claim 2, wherein the performing feature processing on the feature information to obtain target feature information includes:
and performing a fusion operation on the global information and the periodic information to obtain the target characteristic information.
4. The method for intelligently detecting the accuracy of a medical image according to claim 1, wherein the performing judgment processing on the target feature information and outputting the authenticity of the medical image comprises:
if the target characteristic information is judged to meet the preset requirement, outputting a result indicating that the medical image is real;
and if the target characteristic information is judged not to meet the preset requirement, outputting a result indicating that the medical image is forged.
5. The method for intelligently detecting medical image accuracy according to claim 2, wherein the deep learning model comprises an encoding part and a decoding part, the encoding part and the decoding part each comprise a plurality of Swin-Transformer modules, and the medical image to be identified is divided into a plurality of minimum units after being input into the deep learning model;
the Swin-Transformer modules are used for extracting local information from each minimum unit to obtain a plurality of pieces of local information, and the plurality of pieces of local information form the global information.
6. The method for intelligently detecting medical image accuracy according to claim 2, wherein the Swin-Transformer module further comprises a multi-scale attention mechanism unit for performing feature processing on medical images at different scales.
7. The method of any one of claims 1 to 6, wherein the medical image is in DICOM format and the medical image complies with the DICOM 3.0 standard.
8. An apparatus for intelligently detecting the accuracy of a medical image, comprising:
the acquisition module is used for acquiring the medical image to be identified;
the extraction module is used for inputting the medical image into a trained deep learning model and extracting the characteristic information of the medical image;
the processing module is used for carrying out feature processing on the feature information to obtain target feature information;
and the processing module is further used for performing judgment processing on the target characteristic information and outputting the authenticity of the medical image.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the method of intelligently detecting medical image accuracy according to any one of claims 1 to 7 when executing the program.
10. A non-transitory computer readable storage medium having stored thereon a computer program, which when executed by a processor implements the method of intelligently detecting medical image accuracy according to any of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310537520.5A CN116758404A (en) | 2023-05-12 | 2023-05-12 | Method and device for intelligently detecting accuracy of medical image |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116758404A true CN116758404A (en) | 2023-09-15 |
Family
ID=87950376
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310537520.5A Pending CN116758404A (en) | 2023-05-12 | 2023-05-12 | Method and device for intelligently detecting accuracy of medical image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116758404A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2019204293A (en) * | 2018-05-23 | 2019-11-28 | NEC Solution Innovators, Ltd. | Forgery determining method, program, recording medium and forgery determining device |
CN113869419A (en) * | 2021-09-29 | 2021-12-31 | 上海识装信息科技有限公司 | Method, device and equipment for identifying forged image and storage medium |
CN114444565A (en) * | 2021-12-15 | 2022-05-06 | 厦门市美亚柏科信息股份有限公司 | Image tampering detection method, terminal device and storage medium |
CN114663952A (en) * | 2022-03-28 | 2022-06-24 | 北京百度网讯科技有限公司 | Object classification method, deep learning model training method, device and equipment |
CN114694220A (en) * | 2022-03-25 | 2022-07-01 | 上海大学 | Double-flow face counterfeiting detection method based on Swin transform |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11127137B2 (en) | Malignancy assessment for tumors | |
Anis et al. | An overview of deep learning approaches in chest radiograph | |
US11080895B2 (en) | Generating simulated body parts for images | |
EP2693951A2 (en) | Image analysis for specific objects | |
CN108369824A (en) | For by medical image system and method associated with patient | |
EP2575067B1 (en) | Automatic treatment planning method using retrospective patient data | |
EP3799662B1 (en) | Anonymisation of medical patient images using an atlas | |
Verma et al. | Advancement of machine intelligence in interactive medical image analysis | |
WO2014013285A1 (en) | Apparatus and method for determining optimal positions of a hifu probe | |
Prabha et al. | A big wave of deep learning in medical imaging-analysis of theory and applications | |
Alidoost et al. | Model utility of a deep learning-based segmentation is not Dice coefficient dependent: A case study in volumetric brain blood vessel segmentation | |
US20240074738A1 (en) | Ultrasound image-based identification of anatomical scan window, probe orientation, and/or patient position | |
CN116758404A (en) | Method and device for intelligently detecting accuracy of medical image | |
Singh et al. | Semantic segmentation of bone structures in chest X-rays including unhealthy radiographs: A robust and accurate approach | |
EP3965117A1 (en) | Multi-modal computer-aided diagnosis systems and methods for prostate cancer | |
Al-Battal et al. | Object detection and tracking in ultrasound scans using an optical flow and semantic segmentation framework based on convolutional neural networks | |
EP3759685B1 (en) | System and method for an accelerated clinical workflow | |
CN116740015A (en) | Medical image intelligent detection method and device based on deep learning and electronic equipment | |
Anwar | AIM and explainable methods in medical imaging and diagnostics | |
US20230274424A1 (en) | Appartus and method for quantifying lesion in biometric image | |
Klinwichit et al. | The Radiographic view classification and localization of Lumbar spine using Deep Learning Models | |
Pandey et al. | A Framework for Mathematical Methods in Medical Image Processing | |
US11918374B2 (en) | Apparatus for monitoring treatment side effects | |
US20160066891A1 (en) | Image representation set | |
Swathi et al. | Enhancing Lung Segmentation Through Preprocessing of Medical Data Using Convolutional Neural Networks |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||