WO2022062590A1 - Image recognition method and apparatus, device, storage medium and program - Google Patents
- Publication number
- WO2022062590A1 (PCT/CN2021/106479)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- network
- medical images
- sample
- recognized
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2415—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
- G06V10/267—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/56—Extraction of image or video features relating to colour
Definitions
- the present disclosure relates to the technical field of artificial intelligence, and relates to, but is not limited to, an image recognition method and apparatus, electronic device, computer storage medium and computer program.
- CT Computed Tomography
- MRI Magnetic Resonance Imaging
- the scan image categories often include time series-related pre-contrast scan, early arterial phase, late arterial phase, portal venous phase, delayed phase, etc.
- the scan image category can also include categories related to scan parameters, such as T1-weighted opposed-phase imaging, T1-weighted in-phase imaging, T2-weighted imaging, diffusion-weighted imaging, apparent diffusion coefficient (ADC) imaging, and more.
- a radiologist is usually required to identify the scanned image category of each scanned medical image to ensure that the required medical images are obtained; or, during inpatient or outpatient diagnosis and treatment, a doctor is usually required to examine each scanned medical image, judge its scanned image category, and then read the image.
- Embodiments of the present application provide an image recognition method and apparatus, an electronic device, a computer storage medium, and a computer program.
- the embodiment of the present application provides an image recognition method, including: acquiring a plurality of medical images to be recognized; extracting a style feature representation of each medical image to be recognized; and classifying the style feature representations of the plurality of medical images to be recognized to obtain the scanned image category of each medical image to be recognized.
- the style feature representation of the plurality of medical images to be recognized can be classified and processed.
- the classification takes into account the differences between the style features of the medical images to be recognized, which further improves the accuracy of the recognized scanned image categories; and because the style feature representations of multiple medical images to be recognized are classified together to obtain the scanned image category of each, the scanned image categories of multiple medical images can be obtained at one time, thereby improving the efficiency of image recognition. Therefore, the above solution can improve both the efficiency and the accuracy of image recognition.
- classifying the style feature representations of a plurality of medical images to be recognized and obtaining the scanned image category of each medical image to be recognized includes: performing a first fusion processing on the style feature representations of the plurality of medical images to be recognized to obtain a final style feature representation; and classifying the final style feature representation to obtain the scanned image category of each medical image to be recognized.
- the first fusion processing is performed on the style feature representations of the multiple medical images to be recognized to obtain the final style feature representation, so the final style feature representation can represent the difference between the style feature representation of each medical image to be recognized and the style feature representations of the other medical images to be recognized; therefore, using the final style feature representation for classification processing can improve the accuracy of the recognized scanned image category.
- the image recognition method further includes at least one of the following: sorting the medical images to be recognized according to their scanned image categories; displaying, on the same screen, at least one medical image to be recognized sorted according to the scanned image categories; if the scanned image categories of the medical images to be recognized are duplicated, outputting first warning information to prompt the scanning personnel; if a preset scanned image category is absent from the scanned image categories of the medical images to be recognized, outputting second warning information to prompt the scanning personnel; and if the classification confidence of the scanned image category of a medical image to be recognized is less than a preset confidence threshold, outputting third warning information to prompt the scanning personnel.
- the at least one medical image to be recognized is sorted according to its scanned image category, which can improve the convenience of the doctor's reading;
- the at least one sorted medical image to be recognized is displayed on the same screen, which avoids back-and-forth comparison when the doctor reads the medical images, thereby improving the efficiency of the doctor's reading.
- when the scanned image categories of the medical images to be recognized are duplicated, first warning information is output to prompt the scanning personnel; when a preset scanned image category is absent, second warning information is output; and when the classification confidence is below the preset threshold, third warning information is output. In this way, image quality control can be realized during the scanning process, so that errors can be corrected in time when the result is inconsistent with the actual situation, avoiding a second registration for the patient.
- before extracting the style feature representation of each medical image to be recognized, the above method further includes: preprocessing each medical image to be recognized, wherein the preprocessing includes at least one of the following: adjusting the image size of the medical image to be recognized to a preset size, and normalizing the image intensity of the medical image to be recognized to a preset range.
- the image data of each target area is preprocessed, the preprocessing including at least one of the following: adjusting the image size of the target area to a preset size, and normalizing the image intensity of the target area to a preset range, which can help improve the accuracy of subsequent image recognition.
- the image recognition method further includes: extracting a content feature representation of each medical image to be recognized; and performing lesion identification on the content feature representations of the plurality of medical images to be recognized to obtain the lesion area in each medical image to be recognized.
- the lesion area in each medical image to be recognized can be obtained, together with the scanned image category of each medical image to be recognized.
- the lesion area is determined while the scanned image category of the medical image is obtained, which can help improve overall reading efficiency and eliminate the interference caused by the lesion to scanned-image category recognition, thereby improving the accuracy of image recognition.
- performing lesion identification on the content feature representations of multiple medical images to be recognized and obtaining the lesion area in each medical image to be recognized includes: performing a second fusion processing on the content feature representations of the multiple medical images to be recognized to obtain a final content feature representation; and performing lesion identification on the final content feature representation to obtain the lesion area in each medical image to be recognized.
- the second fusion processing is performed on the content feature representations of multiple medical images to be recognized to obtain the final content feature representation, which can help to make the final content feature representation compensate for inconspicuous lesions or motion interference that may exist in a single medical image to be recognized. Therefore, when using the final content feature representation for lesion identification, the accuracy of lesion identification can be improved.
- the image recognition method further includes: prompting the lesion area in the currently displayed medical image to be recognized.
- performing the second fusion processing on the content feature representations of the plurality of medical images to be recognized includes any one of the following: splicing the content feature representations of the plurality of medical images to be recognized; or adding the content feature representations of the plurality of medical images to be recognized, wherein the final content feature representation has the same dimension as the content feature representations of the multiple medical images to be recognized.
- a final content feature representation is obtained, and since the final content feature representation has the same dimension as the content feature representations of the multiple medical images to be recognized and can be obtained in various ways, the robustness of image recognition is improved.
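The two fusion variants just described can be sketched as follows (a minimal NumPy illustration; the feature-map shapes are assumptions for demonstration, not taken from the patent):

```python
import numpy as np

def second_fusion(content_feats, mode="add"):
    """Fuse per-image content feature maps.

    "concat" splices along the channel axis; "add" sums element-wise, so the
    result keeps the same dimension as each individual content representation.
    """
    if mode == "concat":
        return np.concatenate(content_feats, axis=0)
    return np.stack(content_feats).sum(axis=0)

# Three hypothetical images, each with a 4-channel 16x16 content feature map.
feats = [np.ones((4, 16, 16)) for _ in range(3)]
added = second_fusion(feats, mode="add")        # shape (4, 16, 16) -- dimension unchanged
spliced = second_fusion(feats, mode="concat")   # shape (12, 16, 16)
```

Note how only the additive variant preserves the per-image dimension, which matches the constraint stated above for the final content feature representation.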
- extracting the style feature representation of each medical image to be recognized includes: using the style coding sub-network of the recognition network to extract the style feature representation of each medical image to be recognized;
- classifying the style feature representations to obtain the scanned image category of each medical image to be recognized includes: using the classification processing sub-network of the recognition network to classify the style feature representations of the plurality of medical images to be recognized to obtain the scanned image category of each medical image to be recognized.
- the style coding sub-network of the recognition network is used to extract the style feature representation of each medical image to be recognized, the classification processing sub-network is used to classify these style feature representations to obtain the scanned image category of each medical image to be recognized, the content coding sub-network is used to extract the content feature representation of each medical image to be recognized, and the region segmentation sub-network is used to perform lesion identification on the content feature representations to obtain the lesion area in each medical image to be recognized. Since one recognition network performs style feature extraction, classification processing, content feature extraction, and lesion identification, this can help improve the efficiency of image recognition.
- before extracting the style feature representation of each medical image to be recognized, the image recognition method further includes: acquiring a plurality of sample medical images, wherein the plurality of sample medical images are marked with their real scanned image categories and real lesion areas; using the style coding sub-network to extract the sample style feature representation of each sample medical image, and using the content coding sub-network to extract the sample content feature representation of each sample medical image; using the classification processing sub-network to classify the sample style feature representations of the multiple sample medical images to obtain the predicted scanned image category of each sample medical image, and using the region segmentation sub-network to identify lesions from the sample content feature representations to obtain the predicted lesion area in each sample medical image; and using the difference between the real and predicted scanned image categories to adjust the network parameters of the style coding sub-network and the classification processing sub-network, and the difference between the real and predicted lesion areas to adjust the network parameters of the content coding sub-network and the region segmentation sub-network.
- training of the content coding sub-network and the region segmentation sub-network can thus be added, improving the lesion identification ability of the region segmentation sub-network and, at the same time, the degree to which the content coding sub-network captures lesion-related content features. This can help make the style coding sub-network insensitive to lesion-related features, so that subsequent classification is not affected by them, thereby improving the robustness of image recognition.
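One plausible reading of this joint training objective is a classification loss plus a segmentation loss, e.g. cross-entropy for the scanned-image category difference and a Dice loss for the lesion-area difference. The specific loss functions below are assumptions; the patent only states that the two differences are used to adjust the respective sub-networks:

```python
import numpy as np

def cross_entropy(pred_probs, true_label):
    """Classification loss between predicted category probabilities and the real category."""
    return -np.log(pred_probs[true_label] + 1e-8)

def dice_loss(pred_mask, true_mask, eps=1e-8):
    """Segmentation loss between the predicted and the real lesion areas."""
    inter = (pred_mask * true_mask).sum()
    return 1.0 - (2.0 * inter + eps) / (pred_mask.sum() + true_mask.sum() + eps)

probs = np.array([0.1, 0.7, 0.2])                 # hypothetical category distribution
mask_pred = np.zeros((8, 8)); mask_pred[2:6, 2:6] = 1.0
mask_true = np.zeros((8, 8)); mask_true[2:6, 2:6] = 1.0
total = cross_entropy(probs, 1) + dice_loss(mask_pred, mask_true)
```

A perfect lesion mask drives the Dice term to zero, leaving only the category loss; in practice each term would be back-propagated to its own pair of sub-networks, as the passage above describes.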
- the image recognition method further includes: acquiring the sample data distribution represented by the sample style feature of each sample medical image; and adjusting the network parameters of the style coding sub-network by using the difference between the sample data distributions.
- the sample data distributions are obtained, and the difference between them is used to adjust the network parameters of the style coding sub-network, which can help make the subsequently extracted style feature representations independent of each other, thereby improving the accuracy of the recognized scanned image category.
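One way such a distribution difference could be measured is the KL divergence between diagonal Gaussians fitted to the sample style features. Both the Gaussian assumption and the KL measure are illustrative choices, not specified by the patent:

```python
import numpy as np

def gaussian_kl(mu1, var1, mu2, var2):
    """KL divergence KL(N1 || N2) between two diagonal Gaussians,
    usable as a distribution-difference penalty on style features."""
    return 0.5 * np.sum(np.log(var2 / var1) + (var1 + (mu1 - mu2) ** 2) / var2 - 1.0)

mu, var = np.zeros(8), np.ones(8)                 # 8-dim style distribution
same = gaussian_kl(mu, var, mu, var)              # identical distributions -> 0
shifted = gaussian_kl(mu, var, mu + 1.0, var)     # shifted mean -> positive penalty
```

Minimizing or maximizing such a term against a reference distribution is one concrete way the "difference between sample data distributions" could drive the style coding sub-network's parameters.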
- the image recognition method further includes: using a sample style feature representation and a content feature representation to construct a reconstructed image corresponding to the sample style feature representation; and using the difference between the reconstructed image and the sample medical image to which the corresponding sample style feature representation belongs to adjust the network parameters of the style coding sub-network and the content coding sub-network.
- a sample style feature representation and a content feature representation are used to construct a reconstructed image corresponding to the sample style feature representation, and the difference between the reconstructed image and the sample medical image to which the corresponding sample style feature representation belongs is used to adjust the network parameters of the style coding sub-network and the content coding sub-network, so that the style coding sub-network can extract style features that are as complete and accurate as possible, while the content coding sub-network can extract content features that are as complete and accurate as possible, which can help improve the accuracy of subsequent scanned-image classification and lesion identification.
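A minimal sketch of such a reconstruction objective, assuming a hypothetical decoder that modulates a content representation with a style representation and an L1 pixel loss (both assumptions; the patent specifies neither the decoder nor the distance measure):

```python
import numpy as np

def decode(style_vec, content_map):
    """Hypothetical decoder: scale each content channel by a per-channel style factor."""
    return content_map * style_vec[:, None, None]

def reconstruction_loss(reconstructed, original):
    """L1 difference between the reconstructed image and the source sample image."""
    return np.abs(reconstructed - original).mean()

content = np.ones((4, 16, 16))       # toy content feature map
style = np.ones(4)                   # toy style vector
recon = decode(style, content)
loss = reconstruction_loss(recon, content)   # identity-style decoding -> zero loss
```

Driving this loss down pushes the style and content representations to jointly carry enough information to re-synthesize the sample image, which is the intuition stated in the passage above.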
- the style encoding sub-network includes: a sequentially connected downsampling layer and a global pooling layer; and/or, the content encoding sub-network includes any one of the following: a sequentially connected downsampling layer and a residual block , sequentially connected convolutional and pooling layers.
- by setting the style encoding sub-network to include sequentially connected downsampling layers and a global pooling layer, network training is facilitated while the network structure is simplified; by setting the content encoding sub-network to include either sequentially connected downsampling layers and residual blocks, or sequentially connected convolutional and pooling layers, network training can likewise be simplified along with the network structure.
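The global pooling step of the style encoding sub-network can be sketched as a global average pool that collapses the spatial dimensions of the downsampled feature map into a fixed-length style vector. The channel count of 8 here echoes the 8-dimensional style vector the description mentions, but the concrete shapes are otherwise assumptions:

```python
import numpy as np

def global_avg_pool(feature_map):
    """Collapse the spatial (D, H, W) axes of a (C, D, H, W) feature map into a C-dim vector."""
    return feature_map.mean(axis=(1, 2, 3))

# Toy output of the downsampling layers for one 3-D medical image.
downsampled = np.random.default_rng(1).normal(size=(8, 4, 16, 16))
style_vec = global_avg_pool(downsampled)   # shape (8,): one style value per channel
```

Because the pool discards spatial layout entirely, the resulting vector depends on global intensity statistics rather than anatomy, which is consistent with its use as a "style" rather than "content" code.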
- the embodiment of the present application also provides an image recognition device, including an image acquisition module, a style extraction module, and a classification processing module.
- the image acquisition module is configured to acquire a plurality of medical images to be recognized;
- the style extraction module is configured to extract the style feature representation of each medical image to be recognized;
- the classification processing module is configured to classify and process the style feature representation of a plurality of to-be-recognized medical images to obtain the scanned image category of each to-be-recognized medical image.
- An embodiment of the present application further provides an electronic device, including a memory and a processor coupled to each other, and the processor is configured to execute program instructions stored in the memory, so as to implement any one of the above image recognition methods.
- the embodiments of the present application also provide a computer-readable storage medium, which stores program instructions, and when the program instructions are executed by a processor, any one of the above image recognition methods is implemented.
- Embodiments of the present application further provide a computer program, including computer-readable codes; when the codes run in an electronic device, a processor in the electronic device executes any one of the image recognition methods described above.
- the style feature representations of the plurality of medical images to be recognized are classified, taking into account the differences between the style features of the multiple medical images to be recognized, thereby improving the accuracy of the recognized scanned image categories; and because the style feature representations of multiple medical images to be recognized are classified together to obtain the scanned image category of each, the scanned image categories of multiple medical images can be obtained at one time, thereby improving the efficiency of image recognition. Therefore, the embodiments of the present application can improve both the efficiency and the accuracy of image recognition.
- FIG. 1 is a schematic flowchart of an embodiment of an image recognition method of the present application
- FIG. 2 is a schematic flowchart of an embodiment of a training recognition network
- FIG. 3 is a schematic state diagram of an embodiment of a training recognition network
- FIG. 4 is a schematic diagram of a framework of an embodiment of an image recognition apparatus of the present application.
- FIG. 5 is a schematic diagram of a framework of an embodiment of an electronic device of the present application.
- FIG. 6 is a schematic diagram of a framework of an embodiment of a computer-readable storage medium of the present application.
- "system" and "network" are often used interchangeably herein.
- the term "and/or" in this article merely describes an association relationship between associated objects, indicating that three kinds of relationships can exist; for example, "A and/or B" can mean: A alone exists, A and B both exist, or B alone exists.
- the character "/” in this document generally indicates that the related objects are an “or” relationship.
- “multiple” herein means two or more than two.
- FIG. 1 is a schematic flowchart of an embodiment of an image recognition method of the present application. Specifically, the following steps can be included:
- Step S11 Acquire a plurality of medical images to be recognized.
- the medical images to be identified may include CT images and MR images, which are not limited herein.
- the medical image to be recognized may be obtained by scanning regions such as the abdomen or chest, which may be set according to the actual application and is not limited herein. For example, when the liver, spleen, and kidneys are the organs that need diagnosis and treatment, the abdomen can be scanned to obtain the medical image to be recognized; other situations can be deduced by analogy, and no further examples are given here.
- the scanning mode may be a flat scanning, an enhanced scanning, or the like, which is not limited herein.
- the medical image to be recognized may be a three-dimensional image, which is not limited herein.
- multiple medical images to be recognized may be obtained by scanning the same object.
- Step S12 Extract the style feature representation of each medical image to be recognized, respectively.
- the style feature representation is used to describe the style of the medical image to be recognized, e.g., the degree of enhancement, represented in the medical image, of a contrast agent injected into a blood vessel.
- the degree of contrast enhancement of blood vessels such as the hepatic veins and portal vein differs among medical images of different scanned image categories.
- in a medical image to be recognized whose scanned image category is the late arterial phase, the portal vein has been enhanced; in one whose category is the portal venous phase, the portal vein has been fully enhanced, the liver vessels have been enhanced by forward blood flow, and the liver parenchyma has reached its peak enhancement under the contrast agent; in one whose category is the delayed phase, the portal vein and arteries are in an enhanced state weaker than in the portal venous phase, and the liver parenchyma is in an enhanced state weaker than in the portal venous phase.
- other scanned image categories are not listed one by one here.
- the style feature representation may be represented by a vector, and the size of the vector may be set according to the actual situation.
- the size of the vector may be set to a length of 8 (i.e., an 8-dimensional vector), which is not limited herein.
- in order to improve the convenience of extracting style feature representations, a recognition network can be pre-trained; the recognition network includes a style coding sub-network, and the style coding sub-network is used to extract the style feature representation of each medical image to be recognized.
- the style encoding sub-network may include sequentially connected downsampling layers and a global pooling layer, so that after the medical image to be recognized is downsampled, the global pooling layer performs pooling processing to obtain the style feature representation.
- each medical image to be recognized may also be preprocessed.
- the image size of the medical image to be recognized is adjusted to a preset size (e.g., 32×256×256).
- the preprocessing may further include normalizing the image intensity of the medical image to be recognized to a preset range (for example, a range of 0 to 1).
- the gray value corresponding to a preset percentile (for example, 99.9%) may be used as the clamping value for normalization, so that the contrast of the medical image to be recognized can be enhanced and the accuracy of subsequent image recognition improved.
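The percentile-clamped intensity normalization described here can be sketched as follows (a minimal NumPy version; the use of the volume minimum as the lower bound is an assumption — the passage only names the 99.9% clamping value and the 0-to-1 target range):

```python
import numpy as np

def normalize_intensity(volume, clamp_pct=99.9):
    """Clamp the volume at the given upper percentile, then scale intensities to [0, 1]."""
    hi = np.percentile(volume, clamp_pct)   # e.g. the 99.9% gray value as clamping value
    lo = volume.min()
    clipped = np.clip(volume, lo, hi)
    return (clipped - lo) / (hi - lo + 1e-8)

# Toy 3-D scan with arbitrary intensity units.
vol = np.random.default_rng(0).random((32, 64, 64)) * 1000.0
norm = normalize_intensity(vol)
```

Clamping at a high percentile rather than the absolute maximum keeps a few extreme voxels (e.g. metal or vessel hot spots) from compressing the useful intensity range, which is the contrast-enhancement effect the passage refers to.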
- Step S13 Perform classification processing on the style feature representations of a plurality of medical images to be recognized, to obtain a scanned image category of each medical image to be recognized.
- the scanned image category can be set according to the actual situation.
- the scan image category may include the time-series-related pre-contrast scan, early arterial phase, late arterial phase, portal venous phase, and delayed phase; or, the scan image category may include T1-weighted opposed-phase imaging, T1-weighted in-phase imaging, T2-weighted imaging, diffusion-weighted imaging, and apparent diffusion coefficient (ADC) imaging related to scanning parameters.
- the early arterial phase may indicate that the portal vein has not been enhanced
- the late arterial phase may indicate that the portal vein has been enhanced
- the portal venous phase may indicate that the portal vein is sufficiently enhanced, the liver vessels have been enhanced by forward blood flow, and the liver parenchyma has reached its peak enhancement under the contrast agent;
- the delayed phase may indicate that the portal vein and arteries are in an enhanced state weaker than in the portal venous phase, and the liver parenchyma is in an enhanced state weaker than in the portal venous phase.
- Other scan image categories are not listed here.
- when the medical image to be recognized is obtained by scanning other organs, the categories can be deduced by analogy, and no further examples are given here.
- the style feature representations of the respective medical images to be recognized are classified to obtain the scanned image category of each medical image to be recognized.
- the above recognition network further includes a classification processing sub-network, so that the classification processing sub-network can be used to classify the style feature representations of multiple medical images to be recognized and obtain the scanned image category of each. Since the scanned image category to which a medical image belongs can be obtained by the classification processing sub-network classifying its style feature representation, the convenience of the classification processing is improved.
- the classification processing sub-network may include sequentially connected fully connected layers and softmax layers, which are not limited herein.
- the style feature representations of multiple medical images to be recognized can also be subjected to a first fusion process to obtain a final style feature representation, and the final style feature representation is classified to obtain each A scanned image class of the medical image to be identified.
- the operation of the first fusion process may be to splicing the style feature representations of multiple medical images to be recognized, so as to obtain the final style feature representation; or, the operation of the first fusion process may also be to combine the multiple The style feature representations of the medical images are stacked to obtain the final style feature representation, which is not limited here.
- the final style feature representation obtained by performing the first fusion process on the style feature representations of multiple medical images to be recognized can represent the relationship between the style feature representation of each medical image to be recognized and those of the other medical images to be recognized. Therefore, using the final style feature representation for classification can improve the accuracy of the recognized scan image categories.
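The splicing and stacking operations described above can be sketched as follows (the number of images and the feature size are hypothetical assumptions for illustration only):

```python
import numpy as np

# Hypothetical style feature representations for 3 images, 8 features each.
styles = [np.ones(8) * i for i in range(3)]

# Splicing (concatenation): one long vector carrying every image's style,
# so the classifier sees all images' styles side by side.
spliced = np.concatenate(styles)   # shape (24,)

# Stacking: a 2-D tensor with one row per medical image to be recognized.
stacked = np.stack(styles)         # shape (3, 8)

print(spliced.shape, stacked.shape)  # (24,) (3, 8)
```

Either result can serve as the final style feature representation fed to the classification processing sub-network.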
- the content feature representation of each to-be-recognized medical image can also be extracted, and the content feature representation of multiple to-be-recognized medical images can be used for lesion identification.
- the lesion area can be determined while obtaining the scan image category of each medical image to be recognized, which helps improve the overall reading performance and, at the same time, helps eliminate the interference caused by lesions to scan image category recognition, thereby improving the accuracy of image recognition.
- the content feature represents the content in the medical image to be recognized, for example, the anatomical features of an organ in the medical image to be recognized.
- the content feature representation can describe the physiological positional relationship between the liver and its adjacent organs (such as the spleen and kidneys), the shape of the liver, its texture (e.g., soft or hard), and its composition (e.g., water content, fat content), etc., which are not limited here.
- the lesions may include tumors, thrombi, nodules, etc., which may be set according to the actual situation and are not limited herein.
- the recognition network may further include a content coding sub-network, so that the content feature representation of each medical image to be recognized can be extracted separately by using the content coding sub-network of the recognition network.
- the content coding sub-network can use sequentially connected downsampling layers and residual blocks (resblocks); the number of residual blocks can be set according to the actual situation, and setting residual blocks increases the network depth.
- the content coding sub-network can also use sequentially connected convolutional layers and pooling layers, and the number of groups of convolutional and pooling layers can be set according to the actual situation: for example, one, two, or three groups of sequentially connected convolutional and pooling layers can be used, which is not limited here.
- the identification network may further include a region segmentation sub-network, so that the region segmentation sub-network can be used to perform lesion identification on the content feature representations of multiple medical images to be identified, obtaining the lesion area in each medical image to be identified.
- the area segmentation sub-network may adopt Unet, Vnet, etc., which is not limited herein.
- the lesion area may include an area surrounding the lesion, for example, the outline of the lesion, etc., which is not limited herein.
- the content feature representation may be represented by a tensor, for example, the content feature representation may be represented by a low-resolution tensor, which is not limited herein.
- a second fusion process may be performed on the content feature representations of a plurality of medical images to be identified to obtain a final content feature representation, and lesion identification may be performed on the final content feature representation to obtain the lesion area in each medical image to be identified. The final content feature representation can thus compensate for problems such as inconspicuous lesions or motion-induced artifacts that may exist in a single medical image to be recognized, so performing lesion identification on it can improve the accuracy of lesion identification.
- the second fusion process may specifically be performing a concatenation process on the content feature representations of a plurality of medical images to be recognized to implement a fusion process on the content feature representations.
- the final content feature representation can take several forms: the content feature representations of the multiple medical images to be recognized may be spliced and then passed through several simple convolutional layers without pooling (the splicing can be regarded as a tensor concatenation operation); or the content feature representations may be added element-wise (add), which can be regarded as a tensor summation operation; or a convolution kernel such as 1*1 may perform a convolution operation on the stacked content feature representations to achieve the fusion; or the content feature representations of the multiple medical images to be recognized may be combined with weights, which is not limited here.
- the final content feature representation and the content feature representation of the plurality of medical images to be recognized have the same dimension.
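To illustrate how these fusion operations can keep the final content feature representation at the same dimensions as each input representation, here is a sketch under assumed, hypothetical tensor shapes (not the actual network):

```python
import numpy as np

# Hypothetical content feature maps: 3 images, each C=4 channels of 6x6.
rng = np.random.default_rng(1)
feats = [rng.standard_normal((4, 6, 6)) for _ in range(3)]

# (a) Addition: a tensor sum, dimensions unchanged.
fused_add = sum(feats)                              # (4, 6, 6)

# (b) Stack then 1x1 convolution: a 1x1 kernel is just a weighted sum
#     over the stacked channel axis, restoring the original channel count.
stacked = np.concatenate(feats, axis=0)             # (12, 6, 6)
w = rng.standard_normal((4, 12))                    # hypothetical 1x1 kernel
fused_conv = np.einsum('oc,chw->ohw', w, stacked)   # (4, 6, 6)

print(fused_add.shape == feats[0].shape, fused_conv.shape == feats[0].shape)
```

Both variants yield a fused tensor with the same dimensions as a single image's content feature representation, matching the requirement stated above.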
- the lesion area of the currently displayed medical image to be recognized may be indicated as a prompt.
- preset lines (e.g., bold lines, dot-dash lines, double solid lines), preset colors (e.g., yellow, red, green), or symbols (e.g., arrows pointing to the lesion area) may be used to indicate the lesion area, which can be set according to the actual situation and is not limited here.
- the above-mentioned trained identification network can be deployed in image post-processing workstations, imaging workstations, computer-aided reading systems, telemedicine diagnosis scenarios, cloud-platform-assisted intelligent diagnosis scenarios, etc., so as to realize automatic recognition of medical images and improve recognition efficiency.
- At least one medical image to be recognized is obtained by scanning the same object, so in order to facilitate the doctor's reading, after obtaining the scan image category to which each medical image to be recognized belongs, the at least one medical image to be recognized can also be sorted by its scan image category, for example in the preset order: T1-weighted opposed-phase imaging, T1-weighted in-phase imaging, pre-contrast, early arterial, late arterial, portal venous phase, delayed phase, T2-weighted imaging, diffusion-weighted imaging, apparent diffusion coefficient imaging. The preset order can also be set according to the doctor's habits, which is not limited here, so as to improve the convenience of the doctor's reading.
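Sorting by a preset scan-image-category order might be sketched as follows; the category names, the order, and the image identifiers below are illustrative assumptions:

```python
# Hypothetical preset reading order for liver scan-image categories.
PRESET_ORDER = [
    "T1-weighted opposed-phase", "T1-weighted in-phase", "pre-contrast",
    "early arterial", "late arterial", "portal venous", "delayed",
    "T2-weighted", "diffusion-weighted", "apparent diffusion coefficient",
]
RANK = {cat: i for i, cat in enumerate(PRESET_ORDER)}

def sort_by_category(images):
    """images: list of (image_id, recognized_scan_image_category) pairs."""
    return sorted(images, key=lambda item: RANK[item[1]])

recognized = [("img3", "delayed"), ("img1", "pre-contrast"),
              ("img2", "portal venous")]
print(sort_by_category(recognized))
# [('img1', 'pre-contrast'), ('img2', 'portal venous'), ('img3', 'delayed')]
```

The sorted list can then drive the same-screen display described below, one window per image.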
- At least one medical image to be recognized sorted according to the scanned image category can also be displayed on the same screen.
- For example, if the number of medical images to be recognized is 5, they can be displayed in five display windows respectively, which reduces the time doctors spend switching back and forth among multiple medical images to be recognized and improves reading efficiency.
- At least one medical image to be recognized is obtained by scanning the same object. Therefore, in order to perform quality control during the scanning process, after obtaining the scan image category to which each medical image to be recognized belongs, it is also possible to determine whether the scan image categories of the medical images to be recognized are duplicated, and if so, output first warning information to prompt the scanning personnel. For example, if two medical images to be identified both have the scan image category "delayed phase", the scanning quality can be considered non-compliant; to remind the scanning personnel, the first warning information can be output, and optionally the cause of the warning can be output as well (for example, that medical images with duplicate scan image categories exist).
- For example, if the preset scan image category is "portal venous phase" and no image with that category exists among the at least one medical image to be identified, the scanning quality can be considered non-compliant during the scanning process; to prompt the scanning personnel, the second warning information can be output, for example together with the warning reason (e.g., that no portal venous phase image exists among the medical images to be identified).
- the classification confidence may be predicted by the classification processing sub-network when the classification processing is performed. For example, when the classification processing sub-network performs classification processing, it predicts the scanned image category and the corresponding classification confidence of each medical image to be recognized.
- the third warning information may be output, for example together with the cause of the warning (for example, that the medical image to be recognized may suffer from poor scanning quality). In this way, image quality control can be realized during the scanning process, so that when the result contradicts the actual situation, the error can be corrected in time and the patient is spared a second registration.
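The three warning checks described above (duplicate categories, a missing preset category, and low classification confidence) could be sketched together as follows; the required category, confidence threshold, and message texts are hypothetical:

```python
from collections import Counter

def quality_warnings(results, required="portal venous", conf_threshold=0.5):
    """results: list of (scan_image_category, classification_confidence)
    pairs for the recognized images. Returns warning strings mirroring
    the first, second, and third warning information."""
    warnings = []
    counts = Counter(cat for cat, _ in results)
    for cat, n in counts.items():
        if n > 1:                        # first warning: duplicates
            warnings.append(f"duplicate scan image category: {cat}")
    if required not in counts:           # second warning: missing category
        warnings.append(f"missing preset scan image category: {required}")
    for cat, conf in results:            # third warning: low confidence
        if conf < conf_threshold:
            warnings.append(f"low classification confidence for {cat}: {conf}")
    return warnings

print(quality_warnings([("delayed", 0.9), ("delayed", 0.8),
                        ("pre-contrast", 0.3)]))
```

Here all three checks fire at once, illustrating how a scan session can be flagged before the patient leaves.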
- the style feature representations of the plurality of medical images to be recognized are classified jointly, so the differences among the style features of the medical images to be recognized can be taken into account, which improves the accuracy of the recognized scan image categories; and because the classification yields the scan image category of every medical image to be recognized, multiple scan image categories are obtained at one time, improving the efficiency of image recognition. Therefore, the above solution can improve both the efficiency and the accuracy of image recognition.
- the identification network includes the content coding sub-network, the style coding sub-network, the classification processing sub-network and the region segmentation sub-network in the foregoing embodiments, and the specific process is as follows:
- Step S21: acquire multiple sample medical images, wherein the multiple sample medical images are annotated with their real scan image categories and real lesion areas.
- the multiple sample medical images used in a training process can be obtained by scanning the same object.
- the sample medical image used in a certain training may be obtained by scanning object A
- the sample medical image used in another training may be obtained by scanning object B.
- the sample medical images may also include CT images and MR images, which are not limited herein.
- the real scanned image category and real lesion area marked by the sample medical image can be marked by clinicians, radiologists and other personnel with medical imaging knowledge.
- the scan image category can be set according to the actual situation. For example, if the sample medical image is obtained by scanning the liver, the scan image categories can include the timing-related pre-contrast, early arterial, late arterial, portal venous phase, and delayed phase; alternatively, the scan image categories may also include the scanning-parameter-related T1-weighted opposed-phase imaging, T1-weighted in-phase imaging, T2-weighted imaging, diffusion-weighted imaging, and apparent diffusion coefficient imaging. For details, please refer to the relevant steps in the foregoing embodiments, which will not be repeated here.
- the real lesion area may be marked with polygons, for example, the contour of the lesion may be marked with polygons, etc., which is not limited herein.
- sample medical image 1, sample medical image 2, sample medical image 3, ..., sample medical image n can be obtained, where the value of n can be set according to the actual situation. For example, when the scan image categories include the timing-related pre-contrast, early arterial, late arterial, portal venous phase, and delayed phase, n can be set to an integer less than or equal to 5 (e.g., 5, 4, 3); or, when the scan image categories include both the scanning-parameter-related T1-weighted opposed-phase imaging, T1-weighted in-phase imaging, T2-weighted imaging, diffusion-weighted imaging, and apparent diffusion coefficient imaging, and the timing-related pre-contrast, early arterial, late arterial, portal venous phase, and delayed phase, n can be set to an integer less than or equal to 10 (e.g., 10, 9, 8). The value can be set according to the actual situation, which is not limited here.
- Step S22: use the style encoding sub-network to extract the sample style feature representation of each sample medical image, and use the content encoding sub-network to extract the sample content feature representation of each sample medical image.
- As shown in Fig. 3, after style feature extraction by the style encoding sub-network, sample medical image 1, sample medical image 2, sample medical image 3, ..., sample medical image n respectively yield sample style feature representation 1, sample style feature representation 2, sample style feature representation 3, ..., sample style feature representation n; after content feature extraction by the content encoding sub-network, sample content feature representation 1, sample content feature representation 2, sample content feature representation 3, ..., sample content feature representation n can be obtained respectively.
- For the sample style feature representation and the sample content feature representation, reference may be made to the style feature representation and the content feature representation in the foregoing embodiments, which will not be repeated here.
- Step S23: use the classification processing sub-network to classify the sample style feature representations of the multiple sample medical images to obtain the predicted scan image category of each sample medical image, and use the region segmentation sub-network to perform lesion identification on the sample content feature representations of the multiple sample medical images to obtain the predicted lesion area in each sample medical image.
- the sample style feature representations of multiple sample medical images can be spliced to obtain a final sample style feature representation, and the classification processing sub-network can classify the final sample style feature representation to obtain the predicted scan image category of each sample medical image. The final sample style feature representation can represent the difference between the sample style feature representation of each sample medical image and those of the other sample medical images, so using the classification processing sub-network to classify the final sample style feature representation can improve the accuracy of the classification processing.
- the sample style feature representations of sample medical image 1, sample medical image 2, sample medical image 3, ..., sample medical image n are subjected to splicing processing (or stacking processing or other processing methods; for details, please refer to the aforementioned disclosed embodiments, which will not be repeated here) to obtain the final sample style feature representation, and the classification processing sub-network classifies the final sample style feature representation to obtain the predicted scan image categories of sample medical image 1, sample medical image 2, sample medical image 3, ..., and sample medical image n.
- the sample content feature representations of multiple sample medical images can be fused to obtain a final sample content feature representation, and the region segmentation sub-network can be used to perform lesion identification on the final sample content feature representation to obtain the predicted lesion area in each sample medical image. The final sample content feature representation can compensate for problems such as inconspicuous lesions or motion-induced artifacts that may exist in a single sample medical image, so performing lesion identification on it can improve the accuracy of lesion identification.
- the operation of the above fusion processing may include any of the following: splicing the sample content feature representations of multiple sample medical images, or using at least one convolutional layer to perform feature extraction on the sample content feature representations of the multiple sample medical images; the final sample content feature representation has the same dimensions as the content feature representation of each sample medical image. For details, reference may be made to the relevant steps in the foregoing embodiments, which will not be repeated here.
- when using the region segmentation sub-network to perform lesion identification on the sample content feature representations of sample medical image 1, sample medical image 2, sample medical image 3, ..., sample medical image n, the sample content feature representations of the above sample medical images can be fused (e.g., by splicing processing or addition processing; for details, please refer to the aforementioned disclosed embodiments, which will not be repeated here) to obtain the final sample content feature representation, and lesion identification is performed on the final sample content feature representation to obtain the predicted lesion areas in sample medical image 1, sample medical image 2, sample medical image 3, ..., and sample medical image n.
- Step S24: use the difference between the real scan image category and the predicted scan image category to adjust the network parameters of the style encoding sub-network and the classification processing sub-network, and use the difference between the real lesion area and the predicted lesion area to adjust the network parameters of the content encoding sub-network and the region segmentation sub-network.
- the real scan image category and the predicted scan image category can be used to calculate a first loss value for the style encoding sub-network and the classification processing sub-network, and the first loss value can be used to adjust the network parameters of the style encoding sub-network and the classification processing sub-network. In an implementation scenario, a cross-entropy loss or a softmax logistic loss, among others, may be used to calculate the first loss value, which is not limited herein.
- the real lesion area and the predicted lesion area may be used to obtain a second loss value for the content encoding sub-network and the region segmentation sub-network, and the second loss value may be used to adjust the network parameters of the content encoding sub-network and the region segmentation sub-network. A binary cross-entropy loss or a dice coefficient loss may be used to calculate the second loss value, which is not limited herein.
- the dice coefficient loss is based on a set similarity measure, usually used to calculate the similarity between two samples (the range is 0 to 1), and can be calculated by formula (1):

  loss_dice = 1 - 2|X∩Y| / (|X| + |Y|)  (1)

  where loss_dice represents the second loss value calculated by the dice coefficient loss, X represents the real lesion area, Y represents the predicted lesion area, and X∩Y represents the intersection of the real lesion area and the predicted lesion area.
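A minimal NumPy sketch of the dice coefficient loss of formula (1), assuming binary masks for the real lesion area X and the predicted lesion area Y (the small epsilon guarding against empty masks is an added assumption, not part of the formula):

```python
import numpy as np

def dice_loss(pred_mask, true_mask, eps=1e-6):
    """Dice coefficient loss over binary masks:
    loss = 1 - 2|X intersect Y| / (|X| + |Y|)."""
    intersection = np.logical_and(pred_mask, true_mask).sum()
    total = pred_mask.sum() + true_mask.sum()
    return 1.0 - 2.0 * intersection / (total + eps)

# Toy 4x4 masks: the true lesion covers 4 pixels, the prediction covers 6,
# and they overlap on 4 pixels.
true = np.zeros((4, 4), dtype=bool); true[1:3, 1:3] = True
pred = np.zeros((4, 4), dtype=bool); pred[1:3, 1:4] = True
print(round(dice_loss(pred, true), 3))  # 1 - 2*4/(6+4) = 0.2
```

A perfect prediction drives the loss toward 0; disjoint masks drive it toward 1, matching the stated 0-to-1 range of the similarity measure.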
- the sample data distribution represented by the sample style feature representation of each sample medical image can also be obtained, and the differences between the sample data distributions can be used to adjust the network parameters of the style encoding sub-network, which helps make the subsequently extracted style feature representations independent of each other and therefore helps improve the accuracy of the identified scan image categories.
- the KL divergence can be used to measure the difference between the distributions of the sample data, and use it as the third loss value, so as to constrain the distribution of the style feature representation.
- KL divergence (Kullback–Leibler divergence), also known as relative entropy, measures the difference between two probability distributions.
- with P(X) and Q(X) respectively denoting the sample data distributions represented by two sample style feature representations, the divergence can be written as D_KL(P‖Q) = E_P[log(P(x)/Q(x))], where E_P represents the expectation under the distribution represented by one of the sample style features, and P(x) and Q(x) respectively represent the probabilities of the element x under the two sample data distributions.
- a Gaussian distribution function can be used to obtain the sample data distribution represented by the sample style features, so that through the above training, the style feature representations subsequently extracted by the style encoding sub-network follow anisotropic Gaussian distributions with the same center, which in turn helps improve the accuracy of the recognized scan image category.
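As a minimal sketch of computing the KL divergence between two discrete sample-data distributions (the probability vectors below are hypothetical, chosen only to illustrate the computation):

```python
import numpy as np

def kl_divergence(p, q):
    """D_KL(P || Q) = E_P[log(P(x)/Q(x))] for discrete distributions
    with matching support (all entries strictly positive)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(p * np.log(p / q)))

p = np.array([0.6, 0.3, 0.1])
q = np.array([0.5, 0.3, 0.2])
d = kl_divergence(p, q)
print(d >= 0.0, round(kl_divergence(p, p), 6))  # True 0.0
```

A third loss value of this shape is zero only when the two style-feature distributions coincide, which is what lets it serve as a constraint pulling the distributions toward the desired form.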
- in order to enable the style encoding sub-network to extract style features as completely and accurately as possible, and the content encoding sub-network to extract content features as completely and accurately as possible, a sample style feature representation and a sample content feature representation can also be used to construct a reconstructed image corresponding to the sample style feature representation, and the difference between the reconstructed image and the sample medical image to which the corresponding sample style feature representation belongs can be used to adjust the network parameters of the style encoding sub-network and the content encoding sub-network. During training this drives the reconstructed image and the corresponding sample medical image to be as identical as possible, so that the style encoding sub-network extracts style features as completely and accurately as possible, and likewise for the content encoding sub-network. The difference between the reconstructed image and the sample medical image to which the corresponding sample style feature representation belongs can be used as the fourth loss value.
- the sample style feature representation and the sample content feature representation of each sample medical image can be used to obtain an intra-domain reconstructed image of the corresponding sample medical image, and the difference between each sample medical image and its intra-domain reconstructed image can be used to adjust the network parameters of the style encoding sub-network and the content encoding sub-network, so as to ensure that the decomposed sample content feature representation and sample style feature representation can stably reconstruct the original image without drifting during training.
- the sample style feature representation of each sample medical image and the sample content feature representation (or final sample content feature representation) of any other sample medical image can also be used to obtain a cross-domain reconstructed image of the corresponding sample medical image, and the differences between each sample medical image and its cross-domain reconstructed images can be used to adjust the network parameters of the style encoding sub-network and the content encoding sub-network, so as to ensure that the extracted content feature representation captures the features genuinely shared among the medical images.
- a generator can be used to perform the reconstruction, and a discriminator can be used to judge whether an image is a real sample medical image or a reconstructed one; accordingly, a generative adversarial network loss (GAN loss) can be used to measure the loss value of the above-mentioned cross-domain reconstruction, and an L1-norm loss can be used to measure the loss value of the above-mentioned intra-domain reconstruction, and details are not repeated here.
- the sample style feature representation 1, sample style feature representation 2, sample style feature representation 3, ..., sample style feature representation n corresponding to sample medical images 1 to n and the final sample content feature representation are used for reconstruction to obtain reconstructed image 1, reconstructed image 2, reconstructed image 3, ..., reconstructed image n, thereby realizing intra-domain reconstruction, with the L1-norm loss used to measure the loss value of the intra-domain reconstruction.
- the sample style feature representation 1 corresponding to sample medical image 1 and the sample content feature representations corresponding to other sample medical images can also be used for cross-domain reconstruction, and likewise for the other sample medical images, with the generative adversarial network loss (GAN loss) used to measure the loss value of the above cross-domain reconstruction.
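As a sketch of the intra-domain reconstruction loss, the L1-norm loss is simply the mean absolute difference between a sample medical image and its reconstruction (the toy arrays below are hypothetical; the GAN loss for cross-domain reconstruction requires a discriminator and is not sketched here):

```python
import numpy as np

def l1_reconstruction_loss(image, reconstructed):
    """Mean absolute difference, used to score intra-domain reconstruction."""
    return float(np.mean(np.abs(image - reconstructed)))

# Toy 2x2 "images" with normalized intensities.
image = np.full((2, 2), 0.5)
recon = np.full((2, 2), 0.75)
print(l1_reconstruction_loss(image, recon))  # 0.25
```

Driving this value toward zero during training is what forces the decomposed style and content feature representations to stably reconstruct the original sample medical image.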
- the above-mentioned first loss value, second loss value, third loss value and fourth loss value can also be calculated at the same time, and the network parameters of the identification network can be adjusted according to these loss values, so as to improve the content encoding sub-network's capture of lesion-related content features, keep the style encoding sub-network from responding to lesion-related features, improve the robustness of image recognition, make the subsequently extracted style feature representations independent of each other, and enable the style encoding sub-network to extract complete and accurate style feature representations and the content encoding sub-network to extract complete and accurate content feature representations, thereby improving the accuracy of image recognition.
- the training of the content coding sub-network and the region segmentation sub-network is added, so that the lesion identification ability of the region segmentation sub-network can be improved at the same time.
- improving the content encoding sub-network's capture of lesion-related content features helps keep the style encoding sub-network from responding to lesion-related features, so that subsequent classification is not affected by them, which improves the robustness of image recognition.
- FIG. 4 is a schematic frame diagram of an embodiment of an image recognition apparatus 40 of the present application.
- the image recognition device 40 includes an image acquisition module 41, a style extraction module 42 and a classification processing module 43.
- the image acquisition module 41 is configured to acquire a plurality of medical images to be recognized;
- the style extraction module 42 is configured to extract the style feature representation of each medical image to be recognized respectively;
- the classification processing module 43 is configured to perform classification processing on the style feature representations of a plurality of medical images to be recognized, so as to obtain the scanned image category of each medical image to be recognized.
- the style feature representations of the plurality of medical images to be recognized are classified jointly, so the differences among the style features of the medical images to be recognized can be taken into account, which improves the accuracy of the recognized scan image categories; and because the classification yields the scan image category of every medical image to be recognized, multiple scan image categories are obtained at one time, improving the efficiency of image recognition. Therefore, the above solution can improve both the efficiency and the accuracy of image recognition.
- the classification processing module 43 includes a first fusion processing sub-module, which is configured to perform a first fusion process on the style feature representations of a plurality of medical images to be recognized to obtain a final style feature representation; the classification processing module 43 also includes a classification processing sub-module, which is configured to classify the final style feature representation to obtain the scan image category of each medical image to be recognized.
- before the style feature representations of multiple medical images to be recognized are classified, they are subjected to the first fusion process to obtain the final style feature representation. The final style feature representation can represent the difference between the style feature representation of each medical image to be recognized and those of the other medical images to be recognized, so using the final style feature representation for classification can improve the accuracy of the recognized scan image category.
- the image recognition device 40 further includes at least one of an image exclusion module, an image display module, a first early warning module, a second early warning module and a third early warning module;
- the image exclusion module is configured to sort a plurality of medical images to be identified according to their scan image categories; the image display module is configured to display on the same screen at least one medical image to be identified sorted according to its scan image category;
- the first warning module is configured to output first warning information to prompt the scanning personnel when there are repetitions of the scanned image categories of the medical images to be identified;
- the second warning module is configured to output the second warning information to prompt the scanning personnel when the preset scan image category does not exist among the scan image categories of the multiple medical images to be identified; the third warning module is configured to output the third warning information to prompt the scanning personnel when the classification confidence of the scan image category of a medical image to be recognized is less than a preset confidence threshold.
- Sorting at least one medical image to be recognized according to its scanned image category improves the convenience of the doctor's image reading; displaying at least one sorted medical image to be recognized on the same screen avoids back-and-forth comparison of the medical images and thereby improves the efficiency of the doctor's image reading.
- When the scanned image categories of the medical images to be recognized contain duplicates, or a preset scanned image category is missing, outputting the corresponding warning information to remind the scanning personnel enables image quality control during the scanning process, so that errors can be corrected in time and a second registration of the patient can be avoided.
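The sorting-and-warning logic above can be sketched as follows. This is a purely illustrative sketch: the function name, the category names (chosen to resemble the scan phases mentioned in the description, such as portal venous and delayed phases), and the confidence threshold are all hypothetical, not taken from the patent.

```python
from collections import Counter

# Hypothetical preset scanned image categories (illustrative names).
PRESET_CATEGORIES = {"plain", "arterial", "portal_venous", "delayed"}

def check_scan_series(predicted, confidences, threshold=0.5):
    """Sort predictions by scanned image category and collect warnings.

    Returns the display order (indices sorted by category) and a list of
    warning messages covering duplicate categories, missing preset
    categories, and low classification confidence.
    """
    warnings = []
    counts = Counter(predicted)
    # First warning: a category appears more than once.
    for cat, n in counts.items():
        if n > 1:
            warnings.append(f"duplicate category: {cat}")
    # Second warning: a preset category is missing from the series.
    for cat in sorted(PRESET_CATEGORIES - set(predicted)):
        warnings.append(f"missing category: {cat}")
    # Third warning: classification confidence below the threshold.
    for cat, conf in zip(predicted, confidences):
        if conf < threshold:
            warnings.append(f"low confidence ({conf:.2f}) for: {cat}")
    ordered = sorted(range(len(predicted)), key=lambda i: predicted[i])
    return ordered, warnings
```

In practice the returned order would drive the same-screen display, and each warning string would be surfaced to the scanning personnel as the corresponding warning information.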
- the image recognition apparatus 40 further includes a preprocessing module configured to preprocess each medical image to be recognized, where the preprocessing includes at least one of the following: adjusting the image size of the medical image to be recognized to a preset size, and normalizing the image intensity of the medical image to be recognized to a preset range.
- The image data of each target area is preprocessed, where the preprocessing includes at least one of the following: adjusting the image size of the target area to a preset size, and normalizing the image intensity of the target area to a preset range. This can help improve the accuracy of subsequent image recognition.
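A minimal sketch of such a preprocessing step, assuming 2-D numpy arrays, nearest-neighbour resampling, and min-max normalization. These are illustrative choices; the patent fixes only the goals (a preset size and a preset intensity range), not the interpolation or normalization method.

```python
import numpy as np

def preprocess(image: np.ndarray, preset_size=(256, 256),
               preset_range=(0.0, 1.0)) -> np.ndarray:
    """Resize a 2-D image to a preset size and normalize its
    intensities into a preset range (illustrative implementation)."""
    h, w = image.shape
    th, tw = preset_size
    # Nearest-neighbour resampling: map each target pixel to a source pixel.
    rows = np.arange(th) * h // th
    cols = np.arange(tw) * w // tw
    resized = image[np.ix_(rows, cols)].astype(np.float64)
    # Min-max normalization into [lo, hi].
    lo, hi = preset_range
    rng = resized.max() - resized.min()
    if rng == 0:
        return np.full_like(resized, lo)  # flat image: clamp to range floor
    return lo + (resized - resized.min()) * (hi - lo) / rng
```

A real pipeline would typically use a higher-order interpolation and possibly per-dataset intensity statistics, but the structure (resize, then normalize) is the same.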
- the image recognition device 40 further includes a content extraction module configured to extract the content feature representation of each medical image to be recognized; the image recognition device 40 further includes a lesion recognition module configured to perform lesion recognition on the content feature representations of the multiple medical images to be recognized to obtain the lesion area in each medical image to be recognized.
- In this way, the lesion area in each medical image to be recognized can be determined, which helps improve the overall reading performance and also helps eliminate the interference that lesions cause in scanned image category recognition, thereby improving the accuracy of image recognition.
- the lesion recognition module includes a second fusion processing sub-module configured to perform a second fusion process on the content feature representations of the multiple medical images to be recognized to obtain a final content feature representation; the lesion recognition module further includes a lesion recognition sub-module configured to perform lesion recognition on the final content feature representation to obtain the lesion area in each medical image to be recognized.
- Performing the second fusion process on the content feature representations of the multiple medical images to be recognized to obtain a final content feature representation helps the final content feature representation compensate for inconspicuous lesions, or artifacts caused by motion interference, that may exist in a single medical image to be recognized, so that using the final content feature representation for lesion recognition can improve the accuracy of lesion recognition.
- the lesion identification module further includes a lesion prompting sub-module configured to prompt the lesion area of the currently displayed medical image to be identified.
- the doctor's reading experience can be improved.
- the second fusion processing sub-module is specifically configured to perform any one of the following: splicing the content feature representations of the multiple medical images to be recognized, or adding the content feature representations of the multiple medical images to be recognized; the final content feature representation has the same dimensions as the content feature representations of the multiple medical images to be recognized.
- The final content feature representation is obtained by splicing the content feature representations of the multiple medical images to be recognized or by adding them together.
- Since the final content feature representation has the same dimensions as the content feature representations of the multiple medical images to be recognized and can be obtained in several ways, the robustness of image recognition is improved.
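As a purely illustrative sketch (not the patent's implementation), the two fusion options can be expressed over per-image feature arrays as follows; `fuse` and its argument names are hypothetical:

```python
import numpy as np

def fuse(features: list, mode: str = "add") -> np.ndarray:
    """Fuse per-image content feature representations.

    mode="add"    : element-wise sum; the result keeps the same
                    dimensions as each input representation.
    mode="splice" : concatenation along the channel axis.
    """
    stacked = np.stack(features)                  # (num_images, C, H, W)
    if mode == "add":
        return stacked.sum(axis=0)                # (C, H, W)
    if mode == "splice":
        return np.concatenate(features, axis=0)   # (num_images*C, H, W)
    raise ValueError(f"unknown fusion mode: {mode}")
```

Note that only the addition variant keeps the channel count unchanged by itself; after splicing, a projection (for example a 1x1 convolution) would be needed if the fused representation must match the input dimensions exactly.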
- the style extraction module 42 is specifically configured to use the style encoding sub-network of the recognition network to extract the style feature representation of each medical image to be recognized;
- the classification processing module 43 is specifically configured to use the classification processing sub-network of the recognition network to classify the style feature representations of the multiple medical images to be recognized to obtain the scanned image category of each medical image to be recognized;
- the content extraction module is specifically configured to use the content encoding sub-network of the recognition network to extract the content feature representation of each medical image to be recognized;
- the lesion recognition module is specifically configured to use the region segmentation sub-network of the recognition network to perform lesion recognition on the content feature representations of the multiple medical images to be recognized to obtain the lesion area in each medical image to be recognized.
- The style feature representation of each medical image to be recognized is extracted by the style encoding sub-network of the recognition network, and the style feature representations of the multiple medical images to be recognized are classified by the classification processing sub-network of the recognition network to obtain the scanned image category of each medical image to be recognized; the content encoding sub-network of the recognition network is used to extract the content feature representation of each medical image to be recognized, and the region segmentation sub-network of the recognition network is used to perform lesion recognition on the content feature representations of the multiple medical images to be recognized to obtain the lesion area in each medical image to be recognized.
- In other words, a single recognition network can perform the extraction of style feature representations, classification processing, the extraction of content feature representations, and lesion recognition, which helps improve the efficiency of image recognition.
- the image recognition apparatus 40 further includes a sample acquisition module configured to acquire multiple sample medical images, where the multiple sample medical images are annotated with their real scanned image categories and real lesion areas; the image recognition apparatus 40 further includes a feature extraction module configured to use the style encoding sub-network to extract the sample style feature representation of each sample medical image and use the content encoding sub-network to extract the sample content feature representation of each sample medical image; the image recognition apparatus 40 further includes a recognition processing module configured to use the classification processing sub-network to classify the sample style feature representations of the multiple sample medical images to obtain the predicted scanned image category of each sample medical image, and to use the region segmentation sub-network to perform lesion recognition on the sample content feature representations of the multiple sample medical images to obtain the predicted lesion area in each sample medical image;
- the image recognition apparatus 40 further includes a first adjustment module configured to adjust the network parameters of the style encoding sub-network and the classification processing sub-network by using the difference between the real scanned image category and the predicted scanned image category, and to adjust the network parameters of the content encoding sub-network and the region segmentation sub-network by using the difference between the real lesion area and the predicted lesion area.
- On the basis of training the style encoding sub-network and the classification processing sub-network, training of the content encoding sub-network and the region segmentation sub-network is added, so the lesion recognition ability of the region segmentation sub-network can be improved at the same time.
- Improving the content encoding sub-network's capture of lesion-related content features helps keep the style encoding sub-network from responding to lesion-related features, so that subsequent classification is unaffected by them, thereby improving the robustness of image recognition.
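The two adjustment signals described above are commonly realized as a classification loss on the predicted scanned image category and a segmentation loss on the predicted lesion area. A numpy sketch, with cross-entropy and a soft Dice loss standing in for whatever losses an actual implementation would use (the patent specifies only that the category difference and the lesion-area difference drive the respective sub-networks):

```python
import numpy as np

def cross_entropy(pred_probs: np.ndarray, true_class: int) -> float:
    """Classification loss between predicted class probabilities and the
    real scanned image category (drives the style encoding and
    classification processing sub-networks)."""
    return float(-np.log(pred_probs[true_class] + 1e-12))

def dice_loss(pred_mask: np.ndarray, true_mask: np.ndarray) -> float:
    """Soft Dice loss between the predicted and real lesion areas
    (drives the content encoding and region segmentation sub-networks)."""
    inter = (pred_mask * true_mask).sum()
    denom = pred_mask.sum() + true_mask.sum()
    return float(1.0 - 2.0 * inter / (denom + 1e-12))
```

During training the two losses would be combined (for example summed with weights) and back-propagated only into the sub-networks each one supervises.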
- the image recognition apparatus 40 further includes a distribution acquisition module configured to acquire the sample data distribution of the sample style feature representation of each sample medical image; the image recognition apparatus 40 further includes a second adjustment module configured to adjust the network parameters of the style encoding sub-network by using the differences between the sample data distributions.
- The sample data distributions are acquired during training, and the differences between them are used to adjust the network parameters of the style encoding sub-network, which helps make the subsequently extracted style feature representations independent of each other and thus helps improve the accuracy of the recognized scanned image categories.
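The patent does not specify how the difference between sample data distributions is measured; one plausible, purely illustrative choice is a symmetric KL divergence between Gaussians fitted to the style features of two samples (all names below are hypothetical):

```python
import numpy as np

def gaussian_kl(mu1, var1, mu2, var2) -> float:
    """KL(N(mu1, var1) || N(mu2, var2)) for one-dimensional Gaussians."""
    return float(np.log(np.sqrt(var2 / var1))
                 + (var1 + (mu1 - mu2) ** 2) / (2.0 * var2) - 0.5)

def style_distribution_gap(feats_a: np.ndarray, feats_b: np.ndarray) -> float:
    """Symmetric KL divergence between Gaussians fitted to two sample
    style feature representations; larger means more dissimilar."""
    mu1, var1 = feats_a.mean(), feats_a.var() + 1e-8
    mu2, var2 = feats_b.mean(), feats_b.var() + 1e-8
    return gaussian_kl(mu1, var1, mu2, var2) + gaussian_kl(mu2, var2, mu1, var1)
```

A training loop could, for instance, maximize this gap between samples of different scanned image categories, pushing the style encoding sub-network toward category-separable style codes.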
- the image recognition apparatus 40 further includes an image reconstruction module configured to use a sample style feature representation and a content feature representation to construct a reconstructed image corresponding to the sample style feature representation; the image recognition apparatus 40 further includes a third adjustment module configured to adjust the network parameters of the style encoding sub-network and the content encoding sub-network by using the difference between the reconstructed image and the sample medical image to which the corresponding sample style feature representation belongs.
- A sample style feature representation and a content feature representation are used to construct a reconstructed image corresponding to the sample style feature representation, and the difference between the reconstructed image and the sample medical image to which the corresponding sample style feature representation belongs is used to adjust the network parameters of the style encoding sub-network and the content encoding sub-network. This enables the style encoding sub-network to extract style features that are as complete and accurate as possible, and the content encoding sub-network to extract content features that are as complete and accurate as possible, which helps improve the accuracy of the subsequent scanned image classification and lesion recognition.
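Whatever decoder produces the reconstructed image from a (style, content) pair, the adjustment signal described above is an image-space difference; a minimal sketch using an L1 reconstruction loss (an illustrative choice, not specified by the patent):

```python
import numpy as np

def l1_reconstruction_loss(reconstructed: np.ndarray,
                           original: np.ndarray) -> float:
    """Mean absolute difference between the reconstructed image and the
    sample medical image the style feature representation belongs to.
    The gradient of this quantity would flow back through the decoder
    into both the style and content encoding sub-networks."""
    return float(np.mean(np.abs(reconstructed - original)))
```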
- the style encoding sub-network includes sequentially connected downsampling layers and a global pooling layer; and/or the content encoding sub-network includes any one of the following: sequentially connected downsampling layers and residual blocks, or sequentially connected convolutional layers and pooling layers.
- Setting the style encoding sub-network to include sequentially connected downsampling layers and a global pooling layer simplifies the network structure while facilitating network training; likewise, setting the content encoding sub-network to include either sequentially connected downsampling layers and residual blocks or sequentially connected convolutional layers and pooling layers simplifies the network structure while facilitating network training.
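A toy numpy sketch of these structural options, with 2x2 average pooling standing in for a learned downsampling layer and a scaled identity standing in for a residual block's transform. Both are illustrative simplifications; the patent fixes only the layer ordering, not the layer internals.

```python
import numpy as np

def downsample2x(x: np.ndarray) -> np.ndarray:
    """2x2 average pooling over an (H, W) array with even H, W."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def style_encoder(x: np.ndarray, num_down: int = 2) -> float:
    """Sequentially connected downsampling layers followed by a global
    pooling layer, yielding a spatial-free style code."""
    for _ in range(num_down):
        x = downsample2x(x)
    return float(x.mean())  # global average pooling

def content_encoder(x: np.ndarray, num_down: int = 2) -> np.ndarray:
    """Sequentially connected downsampling layers plus a residual block
    (toy version: x + F(x) with F a fixed scaling), keeping a spatial
    map suitable for region segmentation."""
    for _ in range(num_down):
        x = downsample2x(x)
    return x + 0.1 * x  # residual connection: x + F(x)
```

The key contrast is that the style path ends in global pooling (no spatial layout survives), while the content path preserves a spatial map that the region segmentation sub-network can consume.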
- FIG. 5 is a schematic diagram of a framework of an embodiment of an electronic device 50 of the present application.
- the electronic device 50 includes a memory 51 and a processor 52 coupled to each other, and the processor 52 is configured to execute program instructions stored in the memory 51 to implement the steps of any of the image recognition method embodiments described above.
- the electronic device 50 may include, but is not limited to, a microcomputer and a server.
- the electronic device 50 may also include mobile devices such as a notebook computer and a tablet computer, which are not limited herein.
- the processor 52 is configured to control itself and the memory 51 to implement the steps of any of the image recognition method embodiments described above.
- the processor 52 may also be referred to as a central processing unit (Central Processing Unit, CPU).
- the processor 52 may be an integrated circuit chip with signal processing capability.
- the processor 52 may also be a general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field programmable gate array (Field-Programmable Gate Array, FPGA) or other Programmable logic devices, discrete gate or transistor logic devices, discrete hardware components.
- a general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
- In addition, the processor 52 may be implemented jointly by multiple integrated circuit chips.
- the above solution can improve the efficiency and accuracy of image recognition.
- FIG. 6 is a schematic diagram of a framework of an embodiment of a computer-readable storage medium 60 of the present application.
- the computer-readable storage medium 60 stores program instructions 601 that can be executed by the processor, and the program instructions 601 are used to implement any of the above-mentioned image recognition methods.
- the above solution can improve the efficiency and accuracy of image recognition.
- an embodiment of the present application further provides a computer program including computer-readable codes; when the computer-readable codes are run in an electronic device, a processor in the electronic device executes any one of the above-mentioned image recognition methods.
- the disclosed method and apparatus may be implemented in other manners.
- the device implementations described above are only illustrative.
- the division of modules or units is only a logical function division; in actual implementation there may be other division manners.
- For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
- the shown or discussed mutual coupling or direct coupling or communication connection may be through some interfaces, indirect coupling or communication connection of devices or units, which may be in electrical, mechanical or other forms.
- Units described as separate components may or may not be physically separated, and components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this implementation manner.
- each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
- the above-mentioned integrated units may be implemented in the form of hardware, or may be implemented in the form of software functional units.
- the integrated unit if implemented as a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium.
- the technical solutions of the present application may be embodied in the form of a software product in essence, or the part that contributes to the prior art, or all or part of the technical solution may be so embodied. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the methods of the various embodiments of the present application.
- the aforementioned storage medium includes various media that can store program codes, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
- the embodiments of the present application disclose an image recognition method and device, electronic equipment, computer storage medium and computer program.
- the image recognition method includes: acquiring multiple medical images to be recognized; separately extracting a style feature representation of each of the medical images to be recognized; and classifying the style feature representations of the multiple medical images to be recognized to obtain a scanned image category of each of the medical images to be recognized.
- the above solution can improve the efficiency and accuracy of image recognition.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Multimedia (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Computing Systems (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Computational Linguistics (AREA)
- Biophysics (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Biomedical Technology (AREA)
- Molecular Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Biology (AREA)
- Probability & Statistics with Applications (AREA)
- Measuring And Recording Apparatus For Diagnosis (AREA)
- Image Analysis (AREA)
- Medical Treatment And Welfare Office Work (AREA)
Abstract
Description
Claims (27)
- An image recognition method, comprising: acquiring a plurality of medical images to be recognized; separately extracting a style feature representation of each of the medical images to be recognized; and classifying the style feature representations of the plurality of medical images to be recognized to obtain a scanned image category of each of the medical images to be recognized.
- The image recognition method according to claim 1, wherein classifying the style feature representations of the plurality of medical images to be recognized to obtain the scanned image category of each of the medical images to be recognized comprises: performing a first fusion process on the style feature representations of the plurality of medical images to be recognized to obtain a final style feature representation; and classifying the final style feature representation to obtain the scanned image category of each of the medical images to be recognized.
- The image recognition method according to claim 1 or 2, wherein after classifying the style feature representations of the plurality of medical images to be recognized to obtain the scanned image category of each of the medical images to be recognized, the image recognition method further comprises at least one of the following: sorting the plurality of medical images to be recognized according to their scanned image categories; displaying, on the same screen, at least one of the medical images to be recognized sorted according to the scanned image categories; outputting first warning information to prompt scanning personnel if the scanned image categories of the medical images to be recognized contain duplicates; outputting second warning information to prompt the scanning personnel if a preset scanned image category does not exist among the scanned image categories of the plurality of medical images to be recognized; and outputting third warning information to prompt the scanning personnel if a classification confidence of the scanned image category of a medical image to be recognized is less than a preset confidence threshold.
- The image recognition method according to any one of claims 1 to 3, wherein before separately extracting the style feature representation of each of the medical images to be recognized, the method further comprises: preprocessing each of the medical images to be recognized, wherein the preprocessing comprises at least one of the following: adjusting an image size of the medical image to be recognized to a preset size, and normalizing an image intensity of the medical image to be recognized to a preset range.
- The image recognition method according to any one of claims 1 to 4, wherein the method further comprises: separately extracting a content feature representation of each of the medical images to be recognized; and performing lesion recognition on the content feature representations of the plurality of medical images to be recognized to obtain a lesion area in each of the medical images to be recognized.
- The image recognition method according to claim 5, wherein performing lesion recognition on the content feature representations of the plurality of medical images to be recognized to obtain the lesion area in each of the medical images to be recognized comprises: performing a second fusion process on the content feature representations of the plurality of medical images to be recognized to obtain a final content feature representation; and performing lesion recognition on the final content feature representation to obtain the lesion area in each of the medical images to be recognized; and/or the method further comprises: indicating the lesion area of the currently displayed medical image to be recognized.
- The image recognition method according to claim 6, wherein performing the second fusion process on the content feature representations of the plurality of medical images to be recognized comprises any one of the following: splicing the content feature representations of the plurality of medical images to be recognized; and adding the content feature representations of the plurality of medical images to be recognized; wherein the final content feature representation has the same dimensions as the content feature representations of the plurality of medical images to be recognized.
- The image recognition method according to any one of claims 5 to 7, wherein separately extracting the style feature representation of each of the medical images to be recognized comprises: using a style encoding sub-network of a recognition network to separately extract the style feature representation of each of the medical images to be recognized; classifying the style feature representations of the plurality of medical images to be recognized to obtain the scanned image category of each of the medical images to be recognized comprises: using a classification processing sub-network of the recognition network to classify the style feature representations of the plurality of medical images to be recognized to obtain the scanned image category of each of the medical images to be recognized; separately extracting the content feature representation of each of the medical images to be recognized comprises: using a content encoding sub-network of the recognition network to separately extract the content feature representation of each of the medical images to be recognized; and performing lesion recognition on the content feature representations of the plurality of medical images to be recognized to obtain the lesion area in each of the medical images to be recognized comprises: using a region segmentation sub-network of the recognition network to perform lesion recognition on the content feature representations of the plurality of medical images to be recognized to obtain the lesion area in each of the medical images to be recognized.
- The image recognition method according to claim 8, wherein before separately extracting the style feature representation of each of the medical images to be recognized, the image recognition method further comprises: acquiring a plurality of sample medical images, wherein the plurality of sample medical images are annotated with their real scanned image categories and real lesion areas; using the style encoding sub-network to separately extract a sample style feature representation of each of the sample medical images, and using the content encoding sub-network to separately extract a sample content feature representation of each of the sample medical images; using the classification processing sub-network to classify the sample style feature representations of the plurality of sample medical images to obtain a predicted scanned image category of each of the sample medical images, and using the region segmentation sub-network to perform lesion recognition on the sample content feature representations of the plurality of sample medical images to obtain a predicted lesion area in each of the sample medical images; and adjusting network parameters of the style encoding sub-network and the classification processing sub-network by using a difference between the real scanned image category and the predicted scanned image category, and adjusting network parameters of the content encoding sub-network and the region segmentation sub-network by using a difference between the real lesion area and the predicted lesion area.
- The image recognition method according to claim 9, wherein the image recognition method further comprises: acquiring a sample data distribution of the sample style feature representation of each of the sample medical images; and adjusting the network parameters of the style encoding sub-network by using differences between the sample data distributions.
- The image recognition method according to claim 9, wherein the image recognition method further comprises: constructing, by using one of the sample style feature representations and one of the content feature representations, a reconstructed image corresponding to the sample style feature representation; and adjusting the network parameters of the style encoding sub-network and the content encoding sub-network by using a difference between the reconstructed image and the sample medical image to which the corresponding sample style feature representation belongs.
- The image recognition method according to claim 8, wherein the style encoding sub-network comprises sequentially connected downsampling layers and a global pooling layer; and/or the content encoding sub-network comprises any one of the following: sequentially connected downsampling layers and residual blocks, or sequentially connected convolutional layers and pooling layers.
- An image recognition apparatus, comprising: an image acquisition module configured to acquire a plurality of medical images to be recognized; a style extraction module configured to separately extract a style feature representation of each of the medical images to be recognized; and a classification processing module configured to classify the style feature representations of the plurality of medical images to be recognized to obtain a scanned image category of each of the medical images to be recognized.
- The apparatus according to claim 13, wherein the classification processing module comprises a first fusion processing sub-module configured to perform a first fusion process on the style feature representations of the plurality of medical images to be recognized to obtain a final style feature representation; and the classification processing module further comprises a classification processing sub-module configured to classify the final style feature representation to obtain the scanned image category of each of the medical images to be recognized.
- The apparatus according to claim 13 or 14, wherein the apparatus further comprises at least one of an image exclusion module, an image display module, a first warning module, a second warning module, and a third warning module; the image exclusion module is configured to sort the plurality of medical images to be recognized according to their scanned image categories; the image display module is configured to display, on the same screen, at least one of the medical images to be recognized sorted according to the scanned image categories; the first warning module is configured to output first warning information to prompt scanning personnel when the scanned image categories of the medical images to be recognized contain duplicates; the second warning module is configured to output second warning information to prompt the scanning personnel when a preset scanned image category does not exist among the scanned image categories of the plurality of medical images to be recognized; and the third warning module is configured to output third warning information to prompt the scanning personnel when a classification confidence of the scanned image category of a medical image to be recognized is less than a preset confidence threshold.
- The apparatus according to any one of claims 13 to 15, wherein the apparatus further comprises a preprocessing module configured to preprocess each medical image to be recognized, wherein the preprocessing comprises at least one of the following: adjusting an image size of the medical image to be recognized to a preset size, and normalizing an image intensity of the medical image to be recognized to a preset range.
- The apparatus according to any one of claims 13 to 16, wherein the apparatus further comprises a content extraction module configured to separately extract a content feature representation of each medical image to be recognized; and the apparatus further comprises a lesion recognition module configured to perform lesion recognition on the content feature representations of the plurality of medical images to be recognized to obtain a lesion area in each medical image to be recognized.
- The apparatus according to claim 17, wherein the lesion recognition module comprises a second fusion processing sub-module configured to perform a second fusion process on the content feature representations of the plurality of medical images to be recognized to obtain a final content feature representation, and a lesion recognition sub-module configured to perform lesion recognition on the final content feature representation to obtain the lesion area in each medical image to be recognized; and/or the lesion recognition module comprises a lesion prompting sub-module configured to indicate the lesion area of the currently displayed medical image to be recognized.
- The apparatus according to claim 18, wherein the second fusion processing sub-module is specifically configured to perform any one of the following: splicing the content feature representations of the plurality of medical images to be recognized; and adding the content feature representations of the plurality of medical images to be recognized; wherein the final content feature representation has the same dimensions as the content feature representations of the plurality of medical images to be recognized.
- The apparatus according to any one of claims 17 to 19, wherein the style extraction module is specifically configured to use a style encoding sub-network of a recognition network to separately extract the style feature representation of each medical image to be recognized; the classification processing module is specifically configured to use a classification processing sub-network of the recognition network to classify the style feature representations of the plurality of medical images to be recognized to obtain the scanned image category of each medical image to be recognized; the content extraction module is specifically configured to use a content encoding sub-network of the recognition network to separately extract the content feature representation of each medical image to be recognized; and the lesion recognition module is specifically configured to use a region segmentation sub-network of the recognition network to perform lesion recognition on the content feature representations of the plurality of medical images to be recognized to obtain the lesion area in each medical image to be recognized.
- The apparatus according to claim 20, wherein the apparatus further comprises a sample acquisition module configured to acquire a plurality of sample medical images, wherein the plurality of sample medical images are annotated with their real scanned image categories and real lesion areas; the apparatus further comprises a feature extraction module configured to use the style encoding sub-network to separately extract a sample style feature representation of each sample medical image, and use the content encoding sub-network to separately extract a sample content feature representation of each sample medical image; the apparatus further comprises a recognition processing module configured to use the classification processing sub-network to classify the sample style feature representations of the plurality of sample medical images to obtain a predicted scanned image category of each sample medical image, and use the region segmentation sub-network to perform lesion recognition on the sample content feature representations of the plurality of sample medical images to obtain a predicted lesion area in each sample medical image; and the apparatus further comprises a first adjustment module configured to adjust network parameters of the style encoding sub-network and the classification processing sub-network by using a difference between the real scanned image category and the predicted scanned image category, and to adjust network parameters of the content encoding sub-network and the region segmentation sub-network by using a difference between the real lesion area and the predicted lesion area.
- The apparatus according to claim 21, wherein the apparatus further comprises a distribution acquisition module configured to acquire a sample data distribution of the sample style feature representation of each sample medical image; and the apparatus further comprises a second adjustment module configured to adjust the network parameters of the style encoding sub-network by using differences between the sample data distributions.
- The apparatus according to claim 21, wherein the apparatus further comprises an image reconstruction module configured to construct, by using a sample style feature representation and a content feature representation, a reconstructed image corresponding to the sample style feature representation; and the apparatus further comprises a third adjustment module configured to adjust the network parameters of the style encoding sub-network and the content encoding sub-network by using a difference between the reconstructed image and the sample medical image to which the corresponding sample style feature representation belongs.
- The apparatus according to claim 20, wherein the style encoding sub-network comprises sequentially connected downsampling layers and a global pooling layer; and/or the content encoding sub-network comprises any one of the following: sequentially connected downsampling layers and residual blocks, or sequentially connected convolutional layers and pooling layers.
- An electronic device, comprising a memory and a processor coupled to each other, wherein the processor is configured to execute program instructions stored in the memory to implement the image recognition method according to any one of claims 1 to 12.
- A computer-readable storage medium having program instructions stored thereon, wherein the program instructions, when executed by a processor, implement the image recognition method according to any one of claims 1 to 12.
- A computer program, comprising computer-readable codes, wherein when the computer-readable codes are run in an electronic device, a processor in the electronic device implements the image recognition method according to any one of claims 1 to 12.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011018760.7 | 2020-09-24 | ||
CN202011018760.7A CN112036506A (zh) | 2020-09-24 | 2020-09-24 | Image recognition method and related apparatus and device
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022062590A1 (zh) | 2022-03-31 |
Family
ID=73574303
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/106479 WO2022062590A1 (zh) | 2021-07-15 | Image recognition method and apparatus, device, storage medium and program |
Country Status (3)
Country | Link |
---|---|
CN (1) | CN112036506A (zh) |
TW (1) | TW202221568A (zh) |
WO (1) | WO2022062590A1 (zh) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112036506A (zh) * | 2020-09-24 | 2020-12-04 | 上海商汤智能科技有限公司 | Image recognition method and related apparatus and device |
CN113516757A (zh) * | 2021-07-07 | 2021-10-19 | 上海商汤智能科技有限公司 | Image display method, related apparatus, electronic device and storage medium |
CN113516758B (zh) * | 2021-07-07 | 2024-10-29 | 上海商汤善萃医疗科技有限公司 | Image display method, related apparatus, electronic device and storage medium |
CN114663715B (zh) * | 2022-05-26 | 2022-08-26 | 浙江太美医疗科技股份有限公司 | Medical image quality control and classification model training method, apparatus and computer device |
CN115294110B (zh) * | 2022-09-30 | 2023-01-06 | 杭州太美星程医药科技有限公司 | Scanning phase recognition method and apparatus, electronic device and storage medium |
CN117351197A (zh) * | 2023-12-04 | 2024-01-05 | 北京联影智能影像技术研究院 | Image segmentation method and apparatus, computer device and storage medium |
CN118429734B (zh) * | 2024-07-05 | 2024-10-01 | 之江实验室 | Magnetic resonance data classification system |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109447966A (zh) * | 2018-10-26 | 2019-03-08 | 科大讯飞股份有限公司 | Lesion localization and recognition method, apparatus, device and storage medium for medical images |
CN110504029A (zh) * | 2019-08-29 | 2019-11-26 | 腾讯医疗健康(深圳)有限公司 | Medical image processing method, medical image recognition method and apparatus |
CN111340083A (zh) * | 2020-02-20 | 2020-06-26 | 京东方科技集团股份有限公司 | Medical image processing method, apparatus, device and storage medium |
CN112036506A (zh) * | 2020-09-24 | 2020-12-04 | 上海商汤智能科技有限公司 | Image recognition method and related apparatus and device |
- 2020
- 2020-09-24 CN CN202011018760.7A patent/CN112036506A/zh not_active Withdrawn
- 2021
- 2021-07-15 WO PCT/CN2021/106479 patent/WO2022062590A1/zh active Application Filing
- 2021-08-24 TW TW110131345A patent/TW202221568A/zh unknown
Also Published As
Publication number | Publication date |
---|---|
CN112036506A (zh) | 2020-12-04 |
TW202221568A (zh) | 2022-06-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2022062590A1 (zh) | Image recognition method and apparatus, device, storage medium and program | |
TWI776426B (zh) | Image recognition method, electronic device and computer-readable storage medium | |
Singh et al. | Shallow 3D CNN for detecting acute brain hemorrhage from medical imaging sensors | |
CN110506278B (zh) | 隐空间中的目标检测 | |
Wang et al. | Robust content-adaptive global registration for multimodal retinal images using weakly supervised deep-learning framework | |
CN115496771A (zh) | 一种基于脑部三维mri图像设计的脑肿瘤分割方法 | |
CN114494296A (zh) | 一种基于Unet和Transformer相融合的脑部胶质瘤分割方法与系统 | |
Wang et al. | Sk-unet: An improved u-net model with selective kernel for the segmentation of lge cardiac mr images | |
WO2022095258A1 (zh) | 图像目标分类方法、装置、设备、存储介质及程序 | |
Tang et al. | Automatic lumbar spinal CT image segmentation with a dual densely connected U-Net | |
CN110751629A (zh) | 一种心肌影像分析装置及设备 | |
CN118247284B (zh) | Training method for an image processing model, and image processing method | |
Yu et al. | Cardiac LGE MRI segmentation with cross-modality image augmentation and improved U-Net | |
Kascenas et al. | Anomaly detection via context and local feature matching | |
Rajeshkumar et al. | Convolutional Neural Networks (CNN) based Brain Tumor Detection in MRI Images | |
Dikici et al. | Constrained generative adversarial network ensembles for sharable synthetic data generation | |
Pavarut et al. | Improving Kidney Tumor Classification With Multi-Modal Medical Images Recovered Partially by Conditional CycleGAN | |
Balagalla et al. | Automated segmentation of standard scanning planes to measure biometric parameters in foetal ultrasound images–a survey | |
CN111598870A (zh) | Method for computing the coronary artery calcification ratio based on end-to-end convolutional neural network inference | |
CN115294110B (zh) | Scanning phase recognition method and apparatus, electronic device and storage medium | |
Murthy et al. | Deep Learning and MRI Improve Carotid Arterial Tree Reconstruction | |
US20240153089A1 (en) | Systems and methods for processing real-time cardiac mri images | |
US20240144469A1 (en) | Systems and methods for automatic cardiac image analysis | |
Gia et al. | A Computer-Aided Detection to Intracranial Hemorrhage by Using Deep Learning: A Case Study | |
Tahir et al. | A Methodical Review on the Segmentation Types and Techniques of Medical Images |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21870968 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 21870968 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 25/09/2023) |
|