CN112288718B - Image processing method and apparatus, electronic device, and computer-readable storage medium - Google Patents

Image processing method and apparatus, electronic device, and computer-readable storage medium

Info

Publication number
CN112288718B
CN112288718B (application CN202011180299.5A)
Authority
CN
China
Prior art keywords
growth
medical image
ith
pixel points
tubular structure
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011180299.5A
Other languages
Chinese (zh)
Other versions
CN112288718A (en
Inventor
刘恩佑
张欢
王瑜
李新阳
王少康
陈宽
Current Assignee
Infervision Medical Technology Co Ltd
Original Assignee
Infervision Medical Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Infervision Medical Technology Co Ltd filed Critical Infervision Medical Technology Co Ltd
Priority to CN202011180299.5A priority Critical patent/CN112288718B/en
Publication of CN112288718A publication Critical patent/CN112288718A/en
Application granted granted Critical
Publication of CN112288718B publication Critical patent/CN112288718B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/187Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/62Analysis of geometric attributes of area, perimeter, diameter or volume
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10081Computed x-ray tomography [CT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30101Blood vessel; Artery; Vein; Vascular

Abstract

The application discloses an image processing method and apparatus, an electronic device, and a computer-readable storage medium. The method includes: performing distance transformation on a first medical image having a tubular structure to determine a distance transformation result for the pixels in the tubular structure; and obtaining a third medical image through a region growing algorithm according to the distance transformation result and a second medical image, so that cross-coloring of the extracted tubular structure at adhesion points can be avoided.

Description

Image processing method and apparatus, electronic device, and computer-readable storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image processing method and apparatus, an electronic device, and a computer-readable storage medium.
Background
Image segmentation is useful in imaging diagnosis. For a medical image with a tubular structure, the tubular structure is conventionally segmented with a traditional region growing method, in which all pixels in the tubular structure are region-grown simultaneously; as a result, the colors become confused (cross-colored) where parts of the tubular structure adhere to each other.
Disclosure of Invention
In view of the above, embodiments of the present application are directed to providing an image processing method and apparatus, an electronic device, and a computer-readable storage medium, which can avoid cross-coloring at adhesion points of the extracted tubular structure.
According to a first aspect of embodiments of the present application, there is provided an image processing method, including: performing distance transformation on a first medical image with a tubular structure to determine a distance transformation result of pixel points in the tubular structure; and obtaining a third medical image through a region growing algorithm according to the distance transformation result and the second medical image.
In one embodiment, the distance transforming a first medical image having a tubular structure to determine a distance transformation result of pixel points in the tubular structure includes: calculating a distance value of a pixel point in the tubular structure from a specific pixel point in the first medical image; and dividing the pixel points in the tubular structure according to the distance value so as to divide the pixel points in the tubular structure into a plurality of groups.
In one embodiment, the dividing the pixel points in the tubular structure according to the distance value includes: and comparing the distance value with a preset threshold value so as to divide the pixel points in the tubular structure.
In one embodiment, obtaining a third medical image by a region growing algorithm according to the distance transformation result and the second medical image includes: and taking the pixel points in the second medical image as growth starting points and the pixel points of each group in the plurality of groups as growth tracks, and carrying out hierarchical region growth to obtain the third medical image.
In one embodiment, performing hierarchical region growing with the pixels in the second medical image as a growth starting point and the pixels of each of the plurality of groups as a growth track to obtain the third medical image includes: a) performing first-level region growing with the pixels in the second medical image as a first growth starting point and the pixels of a first group of the plurality of groups as a first growth track, to obtain a first growth result, where the pixels of the first group are located at the center of the tubular structure; b) performing i-th-level region growing with the pixels corresponding to the (i-1)-th growth result as an i-th growth starting point and the pixels of the i-th group of the plurality of groups as an i-th growth track, to obtain an i-th growth result, where the pixels of the i-th group lie outside the pixels of the (i-1)-th group, and i is an integer greater than or equal to 2; c) iteratively performing step b) until all pixels in the tubular structure have completed region growing, and stopping the iteration to obtain the third medical image.
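Steps a) to c) above can be sketched as follows. This is an illustration only, not the patent's implementation: it assumes a 2-D grid with 4-connectivity, the seed labels stand in for the classified second medical image, and the groups stand in for the distance-transform tracks (in the patent, the first group lies at the center of the tube and later groups lie farther out).

```python
from collections import deque

def neighbors(p):
    r, c = p
    return ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1))

def hierarchical_growing(seed_labels, groups):
    """Level-wise region growing: level i uses the pixels labelled so far as
    growth starting points and admits only pixels of the i-th group (the i-th
    growth track); the first label to reach a pixel becomes its final label."""
    labels = dict(seed_labels)               # pixel -> class, e.g. 'artery'/'vein'
    for track in groups:                     # step b), iterated as in step c)
        track = set(track)
        # starting points: already-labelled pixels adjacent to this track
        queue = deque(p for p in labels if any(n in track for n in neighbors(p)))
        while queue:
            p = queue.popleft()
            for n in neighbors(p):
                if n in track and n not in labels:
                    labels[n] = labels[p]    # first arrival wins at adhesions
                    queue.append(n)
    return labels

# toy case: seeds at both ends of a 1-pixel-wide tube, two growth tracks
seeds = {(0, 0): 'artery', (0, 4): 'vein'}
groups = [{(0, 1), (0, 3)}, {(0, 2)}]
labels = hierarchical_growing(seeds, groups)
```

Because each level only admits pixels of its own track, inner pixels are labelled before outer ones, which is what prevents one class from racing ahead across an adhesion.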
In an embodiment, performing i-th-level region growing with the pixels corresponding to the (i-1)-th growth result as the i-th growth starting point and the pixels of the i-th group of the plurality of groups as the i-th growth track, to obtain the i-th growth result, includes: performing a plurality of i-th-level region growings, each taking one of the plurality of pixels corresponding to the (i-1)-th growth result as an i-th growth starting point and the pixels of the i-th group as the i-th growth track; and performing competitive growth on the pixels at the edge of the tubular structure according to the growth speed and/or number of growth rounds of each of the plurality of i-th-level region growings, to obtain the i-th growth result.
In one embodiment, the method further comprises: binary segmentation is performed on an original medical image through a first network model to obtain the first medical image.
In one embodiment, the method further comprises: and performing semantic segmentation on the original medical image through a second network model to obtain the second medical image.
In one embodiment, the first medical image is a vessel image of an extension region and the second medical image is a vessel image of a mediastinal region.
According to a second aspect of embodiments of the present application, there is provided an apparatus for image processing, comprising: a distance transformation module configured to perform distance transformation on a first medical image having a tubular structure to determine a distance transformation result of a pixel point in the tubular structure; and the region growing module is configured to obtain a third medical image through a region growing algorithm according to the distance transformation result and the second medical image.
In one embodiment, the apparatus further comprises: a module for performing each step in the method of image processing mentioned in the above embodiments.
According to a third aspect of embodiments of the present application, there is provided an electronic apparatus, including: a processor; a memory for storing the processor-executable instructions; the processor is configured to perform the method of image processing according to any of the above embodiments.
According to a fourth aspect of embodiments of the present application, there is provided a computer-readable storage medium storing a computer program for executing the method of image processing according to any one of the above embodiments.
According to the image processing method provided by the embodiments of the present application, distance transformation is performed on a first medical image having a tubular structure to determine a distance transformation result for the pixels in the tubular structure, and a third medical image is then obtained through a region growing algorithm according to the distance transformation result and a second medical image, so that cross-coloring of the extracted tubular structure at adhesion points can be avoided.
Drawings
The above and other objects, features and advantages of the present application will become more apparent by describing in more detail embodiments of the present application with reference to the attached drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the principles of the application. In the drawings, like reference numbers generally represent like parts or steps.
FIG. 1 is a schematic diagram of an implementation environment provided by an embodiment of the present application.
FIG. 2 is a block diagram of a system for image processing provided by an embodiment of the present application.
Fig. 3 is a flowchart illustrating a method of image processing according to an embodiment of the present application.
Fig. 4 is a schematic diagram of a first medical image provided by an embodiment of the present application.
Fig. 5 is a schematic diagram of a second medical image provided by an embodiment of the present application.
Fig. 6 is a flowchart illustrating a method of image processing according to another embodiment of the present application.
Fig. 7 is a diagram illustrating a distance transformation result according to an embodiment of the present application.
Fig. 8 is a flowchart illustrating a method of image processing according to another embodiment of the present application.
FIG. 9 is a schematic view of the interface where the tubular structures are bonded as provided by one embodiment of the present application.
Fig. 10 is a block diagram of an apparatus for image processing according to an embodiment of the present application.
Fig. 11 is a block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Summary of the application
Deep learning implements artificial intelligence in computing systems by building artificial neural networks with hierarchical structures. Because such a hierarchical network can extract and filter input information layer by layer, deep learning has representation-learning capability and can realize end-to-end supervised and unsupervised learning. The hierarchical networks used for deep learning take many forms; the complexity of the hierarchy is commonly called the "depth", and by structure type the forms of deep learning include multilayer perceptrons, convolutional neural networks, recurrent neural networks, deep belief networks, and other hybrid structures. Deep learning updates the parameters in its construction using data so as to approach a training target, a process commonly called "learning". Deep learning thus offers a way for computers to learn pattern features automatically, integrating feature learning into model building and thereby reducing the incompleteness caused by hand-designed features.
A neural network is a computational model formed by a large number of interconnected nodes (or neurons). Each node corresponds to a policy function, and the connection between every two nodes carries a weighted value, called a weight, for the signal passing through it. A neural network generally comprises several cascaded layers: the output of the i-th layer is connected to the input of the (i+1)-th layer, the output of the (i+1)-th layer to the input of the (i+2)-th layer, and so on. After a training sample is fed into the cascaded layers, each layer produces an output that serves as the input of the next, so a final output is obtained after passing through all layers. The prediction of the output layer is compared with the true target value, and the weight matrix and policy function of each layer are adjusted according to the difference between them. The network repeatedly undergoes this adjustment with training samples, tuning parameters such as the weights, until its predictions agree with the true targets; this process is called the training of the neural network. Once trained, the network yields a neural network model.
CT (computed tomography) scans cross sections of the human body one by one using precisely collimated X-ray beams, gamma rays, ultrasonic waves, etc., together with detectors of extremely high sensitivity. It features fast scanning and clear images and can be used to examine a variety of diseases.
At present, the most common way to obtain a CT image with a tubular structure (for example, a blood vessel image) is a deep-learning-based segmentation model that performs semantic segmentation on the CT image (that is, it performs both binary segmentation and classification of the vessels). However, existing methods perform poorly on fine-grained classification and segmentation of pulmonary vessels: on the one hand, the fine vessels in the lung are relatively far from the mediastinum, making arterial and venous branches hard to distinguish; on the other hand, the vessels are relatively thin, making their features hard to learn.
To address the segmentation and coloring problem for small vessels in the lung, an improvement can be made based on region growing over the vessels' HU values. In the traditional region growing method, however, all pixels in a vessel are region-grown simultaneously, and the final coloring of a vessel follows a first-come rule: the color that reaches a pixel first becomes its final color. Hence, for an adherent segment of vessel, if region growing started from artery pixels reaches the adhesion before growth from the vein does, that segment is dyed the artery's color even though it actually belongs to the vein and should be dyed the vein's color, causing cross-coloring at the vessel adhesion.
Having described the general principles of the present application, various non-limiting embodiments of the present application will now be described with reference to the accompanying drawings.
Exemplary System
FIG. 1 is a schematic diagram of an implementation environment provided by an embodiment of the present application. The implementation environment includes a CT scanner 130, a server 120, and a computer device 110. The computer device 110 may acquire CT images from the CT scanner 130, and the computer device 110 may be connected to the server 120 via a communication network. Optionally, the communication network is a wired network or a wireless network.
The CT scanner 130 is used for performing X-ray scanning on the human tissue to obtain a CT image of the human tissue. In one embodiment, the chest X-ray positive slice, i.e. the original medical image in the present application, can be obtained by scanning the chest with the CT scanner 130.
The computer device 110 may be a general-purpose computer or a computer device built from application-specific integrated circuits, which is not limited in this embodiment. For example, the computer device 110 may be a mobile terminal such as a tablet computer, or a personal computer (PC) such as a laptop or desktop computer. Those skilled in the art will appreciate that there may be one or more computer devices 110, of the same or different types. For example, there may be a single computer device 110, or several tens or hundreds of them, or more. The embodiments of the present application do not limit the number or type of the computer devices 110.
A first network model may be deployed on the computer device 110 for performing binary segmentation on a medical image, separating the tubular structure from the background; a second network model may also be deployed for performing semantic segmentation on a medical image, separating the tubular structure from the background and classifying the different tissues within it. The computer device 110 performs binary segmentation on the original medical image acquired from the CT scanner 130 using the first network model to obtain a first medical image with a tubular structure, and performs semantic segmentation on the same original medical image using the second network model to obtain a second medical image with a classification result. The computer device 110 then performs distance transformation on the first medical image to obtain a distance transformation result for the pixels in the tubular structure, and finally obtains a third medical image through a region growing algorithm according to the distance transformation result and the second medical image. The tubular structure in the third medical image obtained in this way exhibits no cross-coloring at adhesions, which helps doctors diagnose lesions in time.
The server 120 is a single server, a cluster of servers, a virtualization platform, or a cloud computing service center. In some alternative embodiments, the server 120 receives training images collected by the computer device 110 and trains neural networks on them to obtain the first network model and the second network model. The computer device 110 may send an original medical image acquired from the CT scanner 130 to the server 120. The server 120 performs binary segmentation on the original medical image using the trained first network model to obtain a first medical image with a tubular structure, performs semantic segmentation on it using the trained second network model to obtain a second medical image with a classification result, performs distance transformation on the first medical image to obtain a distance transformation result for the pixels in the tubular structure, and obtains a third medical image from the distance transformation result and the second medical image through a region growing algorithm. Finally, the server 120 sends the third medical image to the computer device 110 for a doctor to view. The tubular structure in the third medical image obtained in this way exhibits no cross-coloring at adhesions, which helps doctors diagnose lesions in time.
FIG. 2 is a block diagram of a system for image processing provided by an embodiment of the present application. As shown in fig. 2, the system includes:
a first network model 21, configured to perform binary segmentation on pixel points of an extension region of an original medical image a to obtain the first medical image B having the tubular structure;
the second network model 22 is configured to perform semantic segmentation on pixel points in a mediastinal region of the original medical image a to obtain a second medical image C;
the distance transformation module 23 is configured to perform distance transformation on the first medical image B having a tubular structure to determine a distance transformation result D of a pixel point in the tubular structure;
and the region growing module 24 is configured to obtain a third medical image E through a region growing algorithm according to the distance transformation result D and the second medical image C.
The third medical image E in this embodiment is obtained following the data flow indicated by the solid arrows in fig. 2.
Exemplary method
Fig. 3 is a flowchart illustrating a method of image processing according to an embodiment of the present application. The method described in fig. 3 is performed by a computing device (e.g., a server), but the embodiments of the present application are not limited thereto. The server may be one server, or may be composed of a plurality of servers, or may be a virtualization platform, or a cloud computing service center, which is not limited in this embodiment of the present application. As shown in fig. 3, the method includes the following.
S310: performing distance transformation on a first medical image with a tubular structure to determine a distance transformation result of pixel points in the tubular structure.
In an embodiment, the medical image may be a medical image such as Computed Tomography (CT), Magnetic Resonance Imaging (MRI), Computed Radiography (CR), or Digital Radiography (DR), which is not limited in this embodiment.
In an embodiment, the medical image may be a lung medical image, but this is not particularly limited in this embodiment of the present application, and the medical image may also be a medical image of other tissues as long as the medical image has a tubular structure, for example, the medical image may also be a rib medical image. The embodiment of the present application also does not limit the specific form of the medical image, and may be an original medical image, a preprocessed medical image, or a part of the original medical image.
In an embodiment, when the medical image is a pulmonary medical image, the first medical image may be an image of the vessels of the extension region, the vessels of the extension region being tubular structures, as shown in fig. 4. The extension region refers to the area of the lung other than the mediastinal region; the mediastinal region refers to the area between the left and right mediastinal pleurae, containing the heart, the large vessels entering and leaving the heart, the esophagus, the trachea, the thymus, nerves, and lymphatic tissue.
For convenience of description, the first medical image is described below with a blood vessel image of an extension region as an example.
In an embodiment, the distance transformation, also referred to as a distance function or chamfer algorithm, is an application of the concept of distance, and a number of image processing algorithms are based on it. A distance transformation mainly describes the distance of each pixel relative to certain specific pixels.
The distance transformation of the first medical image is actually the distance transformation of the tubular structure in the first medical image, and the distance transformation result of the pixel points in the tubular structure, that is, the distance between the pixel points in the tubular structure and some specific pixel points, is determined.
However, the embodiment of the present application does not specifically limit the distance transformation result, and those skilled in the art may make different selections according to actual requirements to obtain different distance transformation results.
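As one concrete illustration of the idea (not the patent's implementation, which does not fix a particular algorithm), the following sketch computes a simple 4-connected distance transform of a binary mask with a multi-source breadth-first search, taking the "specific pixel points" to be the background pixels; a real system would more likely use an exact Euclidean distance transform.

```python
from collections import deque

def distance_transform(mask):
    """4-connected distance transform: for each foreground pixel (1), the
    number of grid steps to the nearest background pixel (0), found by a
    breadth-first search started simultaneously from all background pixels."""
    rows, cols = len(mask), len(mask[0])
    dist = [[None] * cols for _ in range(rows)]
    queue = deque()
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] == 0:            # background pixels are the reference set
                dist[r][c] = 0
                queue.append((r, c))
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and dist[nr][nc] is None:
                dist[nr][nc] = dist[r][c] + 1
                queue.append((nr, nc))
    return dist

# toy binary cross-section of a tube: 1 = tubular structure, 0 = background
mask = [[0, 0, 0, 0, 0],
        [0, 1, 1, 1, 0],
        [0, 1, 1, 1, 0],
        [0, 1, 1, 1, 0],
        [0, 0, 0, 0, 0]]
dist = distance_transform(mask)
```

The center of the tube receives the largest distance value, which is exactly the property the grouping in the next steps relies on.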
S320: and obtaining a third medical image through a region growing algorithm according to the distance transformation result and the second medical image.
In an embodiment, when the medical image is a pulmonary medical image, the second medical image may be a blood vessel image of the mediastinal region, as shown in fig. 5. The first medical image and the second medical image are combined to constitute a third medical image.
For convenience of description, the second medical image is described below by taking the blood vessel image of the mediastinal region as an example.
Because the blood vessels in the extension region are thin and interlaced, directly performing semantic segmentation on them would be neither efficient nor accurate. The vessel image of the extension region is therefore not classified into artery and vein (that is, the first medical image carries no classification result), and the resulting vessel segmentation has a finer granularity. The vessels in the mediastinal region, by contrast, are thicker and have clearer boundaries, so semantic segmentation can be applied to them directly; the vessel image of the mediastinal region is thus classified into artery and vein, that is, the second medical image carries a classification result.
It should be understood that a region growing algorithm groups similar pixels together to form the final region. First, a seed pixel is found in each region to be segmented as the starting point of growth; then the pixels in the neighborhood of the seed that have the same or similar properties (determined by a predefined growth or similarity criterion) are merged into the seed's region. The newly added pixels then act as seeds and continue growing outward until no more qualifying pixels can be included, completing one region growing.
In an embodiment, the distance transformation result obtained from the vessel image of the extension region can be understood as track data, and the vessel image of the mediastinal region as seed data. The two images have an overlapping region. When region growing is performed with the pixels of the mediastinal vessel image as growth starting points and the distance transformation result of the extension-region vessel image as the growth track, the growth extends from the mediastinal vessels further into the extension region. Since the mediastinal vessel image is classified into artery and vein, the extension-region vessel image correspondingly acquires the artery/vein classification. The third medical image thus obtained achieves not only segmentation of the vessels but also their classification.
Meanwhile, the region growing algorithm is iterative, and the track data are a distance transformation result: some pixels in the vessel undergo iterative region growing first, and the remaining pixels afterwards, so the pixels in the vessel are region-grown in an accurate order, and cross-coloring of the vessel at adhesions can be avoided.
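For contrast with the hierarchical scheme, the classic single-pass region growing described above can be sketched as follows. This is a toy 2-D version; the intensity-difference criterion and 4-connectivity are illustrative assumptions, not specified by the patent.

```python
from collections import deque

def region_grow(image, seed, tol):
    """Classic region growing: start from `seed` and absorb 4-connected
    neighbours whose intensity differs from the seed intensity by at most
    `tol`, until no qualifying pixel remains."""
    rows, cols = len(image), len(image[0])
    base = image[seed[0]][seed[1]]
    region = {seed}
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in region
                    and abs(image[nr][nc] - base) <= tol):
                region.add((nr, nc))
                queue.append((nr, nc))
    return region

# toy image: a bright 2x2 patch (intensity 10) beside background (50)
image = [[10, 10, 50],
         [10, 10, 50],
         [50, 50, 50]]
region = region_grow(image, seed=(0, 0), tol=5)
```

When two regions grown this way touch, whichever growth front arrives first claims the contested pixels, which is precisely the source of the cross-coloring problem the hierarchical variant avoids.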
In another embodiment of the present application, the method shown in fig. 6 is an example of the method shown in fig. 3, and the method shown in fig. 6 further includes the following.
S610: calculating a distance value of a pixel point in the tubular structure from a specific pixel point in the first medical image.
In an embodiment, the specific pixel point may be a background pixel point, which is not specifically limited in this embodiment of the present application, and a person skilled in the art may select different pixel points according to actual requirements.
In an embodiment, the pixel points in the tubular structure may refer to all pixel points in the tubular structure, or may refer to a part of the pixel points in the tubular structure.
As shown in fig. 7, the distance between every pixel in the tubular structure and the nearest background pixel can be calculated: the pixels in the diagonal-line boxes have the largest distance from the background pixels, the pixels in the white boxes a moderate distance, and the pixels in the black boxes the smallest distance.
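As a minimal sketch of step S610, the distance of every tubular-structure pixel to the nearest background pixel can be computed with a Euclidean distance transform. The `scipy.ndimage.distance_transform_edt` call and the toy mask below are illustrative assumptions, not part of the patent:

```python
import numpy as np
from scipy import ndimage

# Toy binary first medical image: 1 = tubular-structure pixel, 0 = background.
mask = np.array([
    [0, 0, 0, 0, 0, 0, 0],
    [0, 1, 1, 1, 1, 1, 0],
    [0, 1, 1, 1, 1, 1, 0],
    [0, 1, 1, 1, 1, 1, 0],
    [0, 0, 0, 0, 0, 0, 0],
], dtype=np.uint8)

# Euclidean distance from each foreground pixel to the nearest background
# pixel (the "specific pixel point" of S610 taken to be a background pixel).
dist = ndimage.distance_transform_edt(mask)

# Pixels on the centerline of the tube are farthest from the background,
# matching the description of fig. 7.
print(dist[2].astype(int))  # centerline row: [0 1 2 2 2 1 0]
```

Pixels at the tube's center get the largest values and pixels touching the background the smallest, which is exactly the ordering the grouping of S620 relies on.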
S620: and dividing the pixel points in the tubular structure according to the distance value so as to divide the pixel points in the tubular structure into a plurality of groups.
With continued reference to fig. 7, the pixels in the diagonal-line boxes are divided into one group, the pixels in the white boxes into a second group, and the pixels in the black boxes into a third group.
It should be noted, however, that the embodiment of the present application does not specifically limit the number of groups: the pixels may be divided into three groups as shown in fig. 7, into four groups, or into two groups, and those skilled in the art may choose differently according to actual requirements.
S630: and taking the pixel points in the second medical image as growth starting points and the pixel points of each group in the plurality of groups as growth tracks, and carrying out hierarchical region growth to obtain the third medical image.
In an embodiment, when pixel points in the second medical image are used as growth starting points and the pixel points of each of the plurality of groups are used as growth tracks, hierarchical region growing first determines the priority level of each group, that is, which group of pixel points undergoes region growing first and which afterwards. One level of region growing corresponds to one group.
By determining the priority level at which each group of pixel points undergoes region growing, the pixel points in the tubular structure can be grown in an orderly, accurate sequence, avoiding the problem of color crossover where tubular structures adhere.
In another embodiment of the present application, the dividing the pixel points in the tubular structure according to the distance value includes: and comparing the distance value with a preset threshold value so as to divide the pixel points in the tubular structure.
With reference to fig. 7, each pixel point in the tubular structure has a different distance value from the background pixel points; for example, the pixel points at the center of the tubular structure have the largest distance value and those at the outermost edge the smallest, so the pixel points in the tubular structure can be divided by setting preset thresholds.
In an embodiment, pixel points whose distance value is greater than or equal to a first preset threshold are divided into a first group (for example, the pixel points in the diagonal-line boxes); pixel points whose distance value is greater than or equal to a second preset threshold and less than the first preset threshold into a second group (the pixel points in the white boxes); and pixel points whose distance value is greater than or equal to a third preset threshold and less than the second preset threshold into a third group (the pixel points in the black boxes).
For example, the pixels in the tubular structure with the distance value from the background pixel point being greater than or equal to 3 pixels are divided into a first group, the pixels in the tubular structure with the distance value from the background pixel point being greater than or equal to 2 pixels and less than 3 pixels are divided into a second group, and the pixels in the tubular structure with the distance value from the background pixel point being greater than or equal to 1 pixel and less than 2 pixels are divided into a third group.
In an embodiment, once the first, second, and third groups are obtained, the pixels in the same group may be marked with the same value and pixels in different groups with different values; for example, the pixels in the first group may be marked 3, those in the second group 2, and those in the third group 1. This is not a limitation of the embodiments of the present application: the marked values merely distinguish the different groups and facilitate subsequently determining the priority level of region growing for each group, for example growing the pixels marked 3 first, then those marked 2, and finally those marked 1. The embodiment of the present application does not specifically limit the order in which the different groups undergo region growing.
It should be noted, however, that the embodiment of the present application does not specifically limit the number of preset thresholds; it may be chosen according to the number of groups, and the more groups there are, the more preset thresholds are needed.
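A minimal sketch of the grouping of S620, using the example thresholds above (3, 2, and 1 pixels) and the group marks 3/2/1; the concrete distance values are made up for illustration:

```python
import numpy as np

# Hypothetical distance values for a few pixels (0 = background pixel).
dist = np.array([0.0, 1.0, 1.4, 2.0, 2.2, 3.0, 3.6])

# Mark groups as in the example: 3 for dist >= 3 (center), 2 for
# 2 <= dist < 3, 1 for 1 <= dist < 2; background stays 0.
labels = np.zeros(dist.shape, dtype=np.int32)
labels[(dist >= 1) & (dist < 2)] = 1
labels[(dist >= 2) & (dist < 3)] = 2
labels[dist >= 3] = 3

print(labels)  # [0 1 1 2 2 3 3]
```

Adding more groups only means adding more threshold bands, consistent with the note that the number of preset thresholds grows with the number of groups.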
In another embodiment of the present application, the method shown in fig. 8 is an example of step S630 in the method shown in fig. 6, and the method shown in fig. 8 further includes the following.
S810: and taking the pixel points in the second medical image as a first growth starting point, taking the pixel points of a first group in the plurality of groups as a first growth track, and performing first-stage regional growth to obtain a first growth result, wherein the pixel points of the first group are positioned in the central position of the tubular structure.
In an embodiment, the number of the first growth starting points may be one (i.e., a single pixel), or may be multiple (i.e., a pixel of one region), which is not specifically limited in this embodiment of the application.
In one embodiment, the pixel points of the first group are located at the center of the tubular structure, corresponding to the pixel points in the diagonal-line boxes shown in fig. 7. First-level region growing is performed on the first growth starting point together with the adjacent pixel points in the first group that have similar properties to it, so as to obtain a first growth result.
The coloring of a blood vessel in fact follows a first-come-first-served rule: the color of the pixel points grown first becomes the color of the vessel. Because the first group of pixel points, at the center of the vessel, belongs to the vessel with high confidence, performing first-level region growing on it first ensures that the vessel color is correct; even if an edge pixel point is wrongly colored in a later level of region growing, the overall color of the vessel is unaffected, so color crossover at vessel adhesions is avoided.
S820: and taking the pixel point corresponding to the ith-1 growth result as an ith growth starting point, taking the pixel point of the ith group in the multiple groups as an ith growth track, and performing ith-level region growth to obtain an ith growth result, wherein the pixel point of the ith group is positioned outside the pixel point of the ith-1 group, and i is an integer greater than or equal to 2.
In an embodiment, the pixel point of the ith grouping is located outside the pixel point of the (i-1) th grouping, that is, the pixel point of the second grouping is located outside the pixel point of the first grouping, and the pixel point of the third grouping is located outside the pixel point of the second grouping.
In one embodiment, the (i-1)-th growth result can be understood as the image formed by the pixel points in the tubular structure that have completed region growing so far.
In an embodiment, after the first growth result is obtained, the pixel point corresponding to the first growth result may be used as a new growth start point (i.e., a second growth start point), and then the pixel point of the second grouping of the plurality of groupings may be used as a second growth track (i.e., a pixel point in the white frame shown in fig. 7), so as to perform the second-level region growth to obtain a second growth result.
In an embodiment, the number of the ith growth starting point may be one (i.e., a single pixel), or may be multiple (i.e., a pixel in one region), which is not specifically limited in this embodiment of the application.
Because the first group of pixel points at the center of the blood vessel belongs to the vessel with high confidence, performing first-level region growing on it first ensures the correctness of the first growth result, that is, the vessel is colored correctly, which in turn ensures the correctness of the second growth result obtained by second-level region growing.
S830: iteratively execute step S820 until all pixel points in the tubular structure have completed region growing, then stop the iteration to obtain the third medical image.
In an embodiment, after the second growth result is obtained, the pixel points corresponding to the second growth result may be used as new growth starting points (i.e., third growth starting points), and the pixel points of a third group of the plurality of groups may then be used as a third growth track (i.e., the pixel points in the black boxes shown in fig. 7) to perform third-level region growing and obtain a third growth result.
Step S820 is executed iteratively until all the pixel points in the tubular structure have completed region growing, and the iteration then stops to yield the third medical image.
As described above, the pixel points in the first group are marked 3, those in the second group 2, and those in the third group 1, so the pixel points marked 3 can serve as the first growth track during first-level region growing, those marked 2 as the second growth track during second-level region growing, and those marked 1 as the third growth track during third-level region growing. Marking the different groups in the distance transformation result with different values makes the different levels of region growing convenient to carry out.
During region growing, the tubular structure is thus grown in sequence from its center outward, which effectively avoids the problem of color crossover where tubular structures adhere.
In an embodiment, at each level of region growing it can also be determined whether all pixel points in the corresponding group have completed growing: if so, the next level of region growing begins; otherwise the current level continues. This guarantees that every group completes region growing and that no pixel point in the tubular structure is skipped, making the tubular structure in the third medical image more accurate and complete.
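The hierarchical growing of steps S810 to S830 can be sketched as a breadth-first growth restricted to one group per level, processed from the center group (marked 3) outward; the function name and the toy arrays below are illustrative assumptions:

```python
import numpy as np
from collections import deque

def hierarchical_region_growing(group, seeds):
    """Grow class labels level by level (a sketch of S810-S830).

    group: int array; 0 = background, 3 = center pixels, 2 and 1 = outer shells.
    seeds: int array; nonzero entries are class labels (e.g. 1 = artery,
           2 = vein) taken from the second medical image.
    """
    out = seeds.copy()
    h, w = group.shape
    for level in (3, 2, 1):                   # center first, edge last
        # Every pixel grown so far seeds the next level of growing.
        queue = deque(zip(*np.nonzero(out)))
        while queue:
            y, x = queue.popleft()
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < h and 0 <= nx < w
                        and out[ny, nx] == 0 and group[ny, nx] == level):
                    out[ny, nx] = out[y, x]   # inherit the class of the front
                    queue.append((ny, nx))
    return out

# One tube, grouped from edge (1) to center (3), with an artery seed (class 1).
group = np.array([[0, 1, 2, 3, 3, 2, 1, 0]])
seeds = np.array([[0, 0, 0, 1, 0, 0, 0, 0]])
print(hierarchical_region_growing(group, seeds))  # [[0 1 1 1 1 1 1 0]]
```

Each level finishes before the next begins, so the center pixels are classified first and the edge pixels last, as the text describes.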
In another embodiment of the present application, the performing ith level region growing by using the pixel point corresponding to the ith-1 growth result as an ith growth starting point and using the pixel point of the ith group in the plurality of groups as an ith growth track to obtain an ith growth result includes: taking each pixel point in the multiple pixel points corresponding to the ith-1 growth result as an ith growth starting point, taking the pixel point of the ith group in the multiple groups as an ith growth track, and performing multiple ith-level region growth; and performing competitive growth on pixel points at the edge of the tubular structure according to the growth speed and/or the growth round number of each ith level region in the plurality of ith level region growths to obtain the ith growth result.
In an embodiment, the pixel points on the edge of the tubular structure can be understood as the pixel points at the adhesion of the tubular structures. The i-th-level region growing mentioned in this embodiment may be understood as the last level of region growing, corresponding to the pixel points in the tubular structure that are close to the background pixel points.
In an embodiment, when a plurality of pixel points corresponding to the (i-1)-th growth result are used as i-th growth starting points, a plurality of i-th-level region growings can be performed simultaneously, one i-th-level region growing per i-th growth starting point.
Even if the class of the tubular-structure edge cannot be determined in advance (for adhered vessel edges, it cannot be known whether a pixel belongs to an artery or a vein), the growth speed and the number of growth rounds of each i-th-level region growing can differ, so the pixel points on the edge can grow competitively under the first-come-first-served rule: the class of the growth front that reaches an edge pixel first is taken as the class of that edge. The class of the tubular-structure edge can thus be determined, and a clear interface, as shown by the arrow in fig. 9, forms at the adhesion of the tubular structures.
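The competitive growing at adhered edges can be sketched as a simultaneous breadth-first growth from several class-labeled starting points, where a contested pixel keeps the label of whichever front arrives first. This first-come-first-served sketch is an assumption consistent with the rule described above; the function name and arrays are illustrative:

```python
import numpy as np
from collections import deque

def competitive_growing(mask, seeds):
    """Grow all labeled fronts at once; first arrival claims each pixel."""
    out = seeds.copy()
    h, w = mask.shape
    queue = deque(zip(*np.nonzero(out)))      # all fronts advance in rounds
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < h and 0 <= nx < w
                    and mask[ny, nx] and out[ny, nx] == 0):
                out[ny, nx] = out[y, x]       # first front to arrive wins
                queue.append((ny, nx))
    return out

# Two vessels (1 = artery, 2 = vein) adhered along one row of pixels: the
# fronts meet in the middle and a clear interface forms there.
mask = np.ones((1, 7), dtype=bool)
seeds = np.array([[1, 0, 0, 0, 0, 0, 2]])
print(competitive_growing(mask, seeds))  # [[1 1 1 1 2 2 2]]
```

Ties at the meeting point go to whichever front was enqueued first, which is one simple way the "first-come-first-served" rule can decide the class of an adhered edge pixel.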
In another embodiment of the present application, the method further comprises: binary segmentation is performed on an original medical image through a first network model to obtain the first medical image.
It should be understood that binary segmentation means segmenting the pixel points of the tubular structure in the original medical image from the background: the tubular-structure pixel points are 1, the background pixel points are 0, and the tubular-structure pixel points are not further classified. This greatly simplifies the first network model, so simple model structures can be adopted to reduce video memory usage and accelerate prediction, meeting the real-time and resource-scheduling requirements of online products, while allowing the vessel segmentation granularity of the first medical image to be finer.
In an embodiment, the original medical image is input into a first network model to perform a binary segmentation of the original medical image to obtain a first medical image.
In another embodiment of the present application, the method further comprises: and performing semantic segmentation on the original medical image through a second network model to obtain the second medical image.
It should be understood that semantic segmentation means that not only the pixel points in a specific region of the original medical image are segmented from the background, but also the pixel points in the specific region are classified, for example, taking a lung medical image as an example, the pixel points in an artery are 1, the pixel points in a vein are 2, and the pixel points in the background are 0.
The above-mentioned specific region of the original medical image is a region where tissue boundaries are relatively clear, for example the blood vessels of the mediastinal region in the lung. Because the tissue of the specific region is clearly delineated, the requirements on the second network model can be greatly reduced.
In an embodiment, the original medical image is input into the second network model to semantically segment the original medical image to obtain the second medical image.
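The difference between the two model outputs can be illustrated with made-up per-pixel scores: the binary model behind the first medical image yields a vessel/background mask by thresholding, while the semantic model behind the second medical image yields artery/vein labels by taking the highest-scoring class. The probability values and the 0.5 threshold below are assumptions for illustration only:

```python
import numpy as np

# Hypothetical foreground probabilities from the first (binary) model.
p_binary = np.array([[0.9, 0.2, 0.7],
                     [0.1, 0.8, 0.4]])
first_medical_image = (p_binary > 0.5).astype(np.uint8)  # 1 = vessel, 0 = background

# Hypothetical per-class scores from the second (semantic) model,
# classes: 0 = background, 1 = artery, 2 = vein.
p_semantic = np.array([[[0.1, 0.7, 0.2], [0.8, 0.1, 0.1], [0.2, 0.2, 0.6]],
                       [[0.9, 0.05, 0.05], [0.1, 0.3, 0.6], [0.5, 0.3, 0.2]]])
second_medical_image = p_semantic.argmax(axis=-1)        # per-pixel class label

print(first_medical_image)   # [[1 0 1]
                             #  [0 1 0]]
print(second_medical_image)  # [[1 0 2]
                             #  [0 2 0]]
```

The binary mask carries no artery/vein information, which is exactly why the region growing step transfers the classes of the second image onto the finer structures of the first.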
The embodiments of the present application do not limit the specific types of the first network model and the second network model. They may be shallow models obtained through machine learning, such as an SVM classifier or a linear-regression classifier; such models enable fast image segmentation and improve segmentation efficiency. They may also be deep models obtained through deep learning, built from any type of neural network with ResNet, ResNeXt, DenseNet, or the like as the backbone; deep models can improve segmentation accuracy. Alternatively, the first network model and the second network model may be a Convolutional Neural Network (CNN), a Deep Neural Network (DNN), a Recurrent Neural Network (RNN), or the like, and may include neural network layers such as an input layer, convolutional layers, pooling layers, and fully connected layers, which the embodiments of the present application do not specifically limit. The number of each kind of neural network layer is likewise not limited.
Exemplary devices
The apparatus embodiments can be used to execute the method embodiments. For details not disclosed in the apparatus embodiments of the present application, reference is made to the method embodiments of the present application.
Fig. 10 is a block diagram illustrating an apparatus for image processing according to an embodiment of the present application. As shown in fig. 10, the apparatus 1000 includes:
a distance transformation module 1010 configured to perform distance transformation on a first medical image having a tubular structure to determine a distance transformation result of a pixel point in the tubular structure;
and a region growing module 1020 configured to obtain a third medical image through a region growing algorithm according to the distance transformation result and the second medical image.
In one embodiment, the apparatus 1000 further comprises: a module for performing each step in the method of image processing mentioned in the above embodiments.
Exemplary electronic device
Next, an electronic apparatus according to an embodiment of the present application is described with reference to fig. 11. FIG. 11 illustrates a block diagram of an electronic device in accordance with an embodiment of the present application.
As shown in fig. 11, electronic device 1100 includes one or more processors 1110 and memory 1120.
The processor 1110 may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device 1100 to perform desired functions.
The memory 1120 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM), cache memory (cache), and/or the like. The non-volatile memory may include, for example, Read Only Memory (ROM), hard disk, flash memory, etc. One or more computer program instructions may be stored on the computer-readable storage medium and executed by processor 1110 to implement the methods of image processing of the various embodiments of the present application described above and/or other desired functions. Various contents such as an input signal, a signal component, a noise component, etc. may also be stored in the computer-readable storage medium.
In one example, the electronic device 1100 may further include: an input device 1130 and an output device 1140, which are interconnected by a bus system and/or other form of connection mechanism (not shown).
For example, the input device 1130 may be a microphone or microphone array as described above for capturing an input signal of a sound source. When the electronic device is a stand-alone device, the input device 1130 may be a communication network connector.
The input devices 1130 may also include, for example, a keyboard, a mouse, and the like.
The output device 1140 may output various information including a third medical image and the like to the outside. The output devices 1140 may include, for example, a display, speakers, a printer, and a communication network and its connected remote output devices, among others.
Of course, for simplicity, only some of the components of the electronic device 1100 relevant to the present application are shown in fig. 11, and components such as buses, input/output interfaces, and the like are omitted. In addition, electronic device 1100 may include any other suitable components depending on the particular application.
Exemplary computer program product and computer-readable storage Medium
In addition to the above-described methods and apparatus, embodiments of the present application may also be a computer program product comprising computer program instructions that, when executed by a processor, cause the processor to perform the steps in the method of image processing according to various embodiments of the present application described in the "exemplary methods" section of this specification, supra.
The computer program product may be written with program code for performing the operations of embodiments of the present application in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++ and conventional procedural programming languages such as the "C" language. The program code may execute entirely on the user's computing device, partly on the user's device as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present application may also be a computer-readable storage medium having stored thereon computer program instructions that, when executed by a processor, cause the processor to perform steps in a method of image processing according to various embodiments of the present application described in the "exemplary methods" section above in this specification.
The computer-readable storage medium may take any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may include, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing description has been presented for purposes of illustration and description. Furthermore, the description is not intended to limit embodiments of the application to the form disclosed herein. While a number of example aspects and embodiments have been discussed above, those of skill in the art will recognize certain variations, modifications, alterations, additions and sub-combinations thereof.

Claims (9)

1. A method of image processing, comprising:
performing distance transformation on a first medical image with a tubular structure to determine a distance transformation result of pixel points in the tubular structure;
obtaining a third medical image by a region growing algorithm according to the distance transformation result and the second medical image,
wherein the distance transforming a first medical image having a tubular structure to determine a distance transformation result of pixel points in the tubular structure comprises:
calculating a distance value of a pixel point in the tubular structure from a specific pixel point in the first medical image;
dividing the pixel points in the tubular structure according to the distance values to divide the pixel points in the tubular structure into a plurality of groups,
wherein the obtaining a third medical image through a region growing algorithm according to the distance transformation result and the second medical image comprises:
determining the priority level of region growing of the pixel points of each group in the plurality of groups so as to determine which group in the plurality of groups is subjected to region growing firstly and which group is subjected to region growing later;
taking pixel points in the second medical image as growth starting points, and taking the grouped pixel points as growth tracks according to the priority levels to carry out hierarchical region growth so as to obtain a third medical image,
wherein the using pixel points in the second medical image as growth starting points and using the grouped pixel points as growth tracks according to the priority levels to perform hierarchical region growing so as to obtain the third medical image comprises:
a) taking pixel points in the second medical image as a first growth starting point, taking pixel points of a first group in the plurality of groups as a first growth orbit, and performing first-stage regional growth to obtain a first growth result, wherein the pixel points of the first group are located at the central position of the tubular structure;
b) taking the pixel point corresponding to the ith-1 growth result as an ith growth starting point, taking the pixel point of the ith group in the multiple groups as an ith growth track, and performing ith level region growth to obtain an ith growth result, wherein the pixel point of the ith group is positioned at the outer side of the pixel point of the ith-1 group, and i is an integer greater than or equal to 2;
c) iteratively executing step b) until all pixel points in the tubular structure complete region growing, and stopping the iteration to obtain the third medical image.
2. The method of claim 1, wherein said dividing pixel points in said tubular structure according to said distance values comprises:
and comparing the distance value with a preset threshold value so as to divide the pixel points in the tubular structure.
3. The method according to claim 1, wherein the performing ith level region growing by using the pixel point corresponding to the ith-1 growth result as an ith growth starting point and using the pixel point of the ith group in the plurality of groups as an ith growth track to obtain an ith growth result comprises:
taking each pixel point in the multiple pixel points corresponding to the ith-1 growth result as an ith growth starting point, taking the pixel point of the ith group in the multiple groups as an ith growth track, and performing multiple ith-level region growth;
and performing competitive growth on pixel points at the edge of the tubular structure according to the growth speed and/or the growth round number of each ith level region in the plurality of ith level region growths to obtain the ith growth result.
4. The method of any of claims 1 to 3, further comprising:
binary segmentation is performed on an original medical image through a first network model to obtain the first medical image.
5. The method of any of claims 1 to 3, further comprising:
and performing semantic segmentation on the original medical image through a second network model to obtain the second medical image.
6. The method according to any one of claims 1 to 3, wherein the first medical image is a vessel image of an epitaxial region and the second medical image is a vessel image of a mediastinal region.
7. An apparatus for image processing, comprising:
a distance transformation module configured to perform distance transformation on a first medical image having a tubular structure to determine a distance transformation result of a pixel point in the tubular structure;
a region growing module configured to obtain a third medical image by a region growing algorithm based on the distance transformation result and the second medical image,
wherein the distance transformation module is further configured to: calculating a distance value of a pixel point in the tubular structure from a specific pixel point in the first medical image; dividing the pixel points in the tubular structure according to the distance values to divide the pixel points in the tubular structure into a plurality of groups,
wherein the region growing module is further configured to: determining the priority level of region growing of the pixel points of each group in the plurality of groups so as to determine which group in the plurality of groups is subjected to region growing firstly and which group is subjected to region growing later; taking pixel points in the second medical image as growth starting points, and taking the grouped pixel points as growth tracks according to the priority levels to carry out hierarchical region growth so as to obtain a third medical image,
the region growing module is further configured to, when performing hierarchical region growing by using the pixel points in the second medical image as a growth starting point and using the grouped pixel points as growth tracks according to the priority, perform:
a) taking pixel points in the second medical image as a first growth starting point, taking pixel points of a first group in the plurality of groups as a first growth orbit, and performing first-stage regional growth to obtain a first growth result, wherein the pixel points of the first group are located at the central position of the tubular structure;
b) taking the pixel point corresponding to the ith-1 growth result as an ith growth starting point, taking the pixel point of the ith group in the multiple groups as an ith growth track, and performing ith level region growth to obtain an ith growth result, wherein the pixel point of the ith group is positioned at the outer side of the pixel point of the ith-1 group, and i is an integer greater than or equal to 2;
c) iteratively executing step b) until all pixel points in the tubular structure complete region growing, and stopping the iteration to obtain the third medical image.
8. An electronic device, comprising:
a processor;
a memory for storing instructions executable by the processor;
wherein the processor is configured to perform the method of any one of claims 1 to 6.
9. A computer-readable storage medium storing a computer program for executing the method of any one of claims 1 to 6.
CN202011180299.5A 2020-10-29 2020-10-29 Image processing method and apparatus, electronic device, and computer-readable storage medium Active CN112288718B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011180299.5A CN112288718B (en) 2020-10-29 2020-10-29 Image processing method and apparatus, electronic device, and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011180299.5A CN112288718B (en) 2020-10-29 2020-10-29 Image processing method and apparatus, electronic device, and computer-readable storage medium

Publications (2)

Publication Number Publication Date
CN112288718A CN112288718A (en) 2021-01-29
CN112288718B CN112288718B (en) 2021-11-02

Family

ID=74372759

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011180299.5A Active CN112288718B (en) 2020-10-29 2020-10-29 Image processing method and apparatus, electronic device, and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN112288718B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113516677B (en) * 2021-04-13 2022-02-22 推想医疗科技股份有限公司 Method and device for structuring hierarchical tubular structure blood vessel and electronic equipment

Citations (3)

Publication number Priority date Publication date Assignee Title
EP2357609A1 (en) * 2009-12-23 2011-08-17 Intrasense Adaptative hit-or-miss region growing for vessel segmentation in medical imaging
AU2015238846A1 (en) * 2010-02-01 2015-10-29 Covidien Lp Region-growing algorithm
CN110992377A (en) * 2019-12-02 2020-04-10 北京推想科技有限公司 Image segmentation method, device, computer-readable storage medium and equipment

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
US8532356B2 (en) * 2006-07-10 2013-09-10 Siemens Medical Solutions Usa, Inc. Method for automatic separation of segmented tubular and circular objects
JP6225636B2 (en) * 2013-10-22 2017-11-08 コニカミノルタ株式会社 Medical image processing apparatus and program
CN110321920B (en) * 2019-05-08 2021-10-22 腾讯科技(深圳)有限公司 Image classification method and device, computer readable storage medium and computer equipment
CN110751605B (en) * 2019-10-16 2022-12-23 深圳开立生物医疗科技股份有限公司 Image processing method and device, electronic equipment and readable storage medium

Patent Citations (3)

Publication number Priority date Publication date Assignee Title
EP2357609A1 (en) * 2009-12-23 2011-08-17 Intrasense Adaptative hit-or-miss region growing for vessel segmentation in medical imaging
AU2015238846A1 (en) * 2010-02-01 2015-10-29 Covidien Lp Region-growing algorithm
CN110992377A (en) * 2019-12-02 2020-04-10 北京推想科技有限公司 Image segmentation method, device, computer-readable storage medium and equipment

Also Published As

Publication number Publication date
CN112288718A (en) 2021-01-29

Similar Documents

Publication Publication Date Title
CN111899245B (en) Image segmentation method, image segmentation device, model training method, model training device, electronic equipment and storage medium
Zhao et al. Dsal: Deeply supervised active learning from strong and weak labelers for biomedical image segmentation
US11151721B2 (en) System and method for automatic detection, localization, and semantic segmentation of anatomical objects
US20220036561A1 (en) Method for image segmentation, method for training image segmentation model
CN113066090B (en) Training method and device, application method and device of blood vessel segmentation model
KR102250954B1 (en) Apparatus and method for predicting dementia by dividing brain mri by brain region
CN112132815B (en) Pulmonary nodule detection model training method, detection method and device
CN115699208A (en) Artificial Intelligence (AI) method for cleaning data to train AI models
WO2020026852A1 (en) Information processing device, information processing method, and program
US11790492B1 (en) Method of and system for customized image denoising with model interpretations
WO2022105735A1 (en) Coronary artery segmentation method and apparatus, electronic device, and computer-readable storage medium
Xie et al. Optic disc and cup image segmentation utilizing contour-based transformation and sequence labeling networks
Chagas et al. A new approach for the detection of pneumonia in children using CXR images based on a real-time IoT system
Liu et al. Self-supervised attention mechanism for pediatric bone age assessment with efficient weak annotation
CN113077441A (en) Coronary artery calcified plaque segmentation method and method for calculating coronary artery calcified score
CN112288718B (en) Image processing method and apparatus, electronic device, and computer-readable storage medium
WO2023108873A1 (en) Brain network and brain addiction connection calculation method and apparatus
CN111524109A (en) Head medical image scoring method and device, electronic equipment and storage medium
Kumar et al. A multi-objective randomly updated beetle swarm and multi-verse optimization for brain tumor segmentation and classification
CN111445456B (en) Classification model, training method and device of network model, and recognition method and device
Bagher-Ebadian et al. Neural network and fuzzy clustering approach for automatic diagnosis of coronary artery disease in nuclear medicine
CN115803751A (en) Training models for performing tasks on medical data
Droste et al. Towards capturing sonographic experience: cognition-inspired ultrasound video saliency prediction
Kumar et al. Smart healthcare: disease prediction using the cuckoo-enabled deep classifier in IoT framework
Zhou et al. Domain adaptation for medical image classification without source data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant