CN114283110A - Image processing method, device, equipment and storage medium for medical image

Image processing method, device, equipment and storage medium for medical image

Info

Publication number
CN114283110A
Authority
CN
China
Prior art keywords
image
sample image
channel network
result
sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110976657.1A
Other languages
Chinese (zh)
Inventor
熊俊峰
伍健荣
李卓琦
钱天翼
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202110976657.1A priority Critical patent/CN114283110A/en
Publication of CN114283110A publication Critical patent/CN114283110A/en
Pending legal-status Critical Current


Abstract

The application relates to an image processing method, apparatus, device and storage medium for medical images, and belongs to the field of medical technology. The method includes the following steps: calling a first channel network to process a first sample image to obtain a prediction type result and image features corresponding to the first sample image; calling a second channel network to process a second sample image together with the image features of the first sample image to obtain a prediction classification result; training the first channel network and the second channel network based on the prediction type result, the prediction classification result, the type label of the first sample image and the type label of the second sample image; and constructing an image classification model from the channel network obtained after the training of the first channel network is completed. The method avoids the overfitting caused by a limited number of samples in a small-sample learning scenario, improves the training effect of the model, and improves the classification accuracy of the resulting image classification model.

Description

Image processing method, device, equipment and storage medium for medical image
Technical Field
The present application relates to the field of medical technology, and in particular, to an image processing method, apparatus, device, and storage medium for medical images.
Background
With the development of scientific technology, machine learning algorithms are introduced into medical image processing in order to facilitate the extraction of information from medical images.
In the related art, a convolutional neural network may be trained on a large number of medical sample images and their corresponding type labels, so that the trained neural network model can classify input medical images.
However, the classification accuracy of the neural network model in the above technique depends on the number and quality of the medical sample images. When few medical sample images are available, that is, in a small-sample learning setting, overfitting easily occurs, so that the application range of the trained neural network model is limited and its accuracy is low.
Disclosure of Invention
The embodiment of the application provides an image processing method, device and equipment for medical images and a storage medium, which can improve the training effect of a model and improve the classification accuracy of an obtained image classification model. The technical scheme is as follows:
in one aspect, an image processing method for medical images is provided, the method comprising:
calling a first channel network, processing a first sample image, and obtaining a prediction type result corresponding to the first sample image and image characteristics of the first sample image; the prediction type result is used for indicating the type of the first sample image;
calling a second channel network, processing image characteristics of a second sample image and the first sample image, and obtaining a prediction classification result of the second sample image compared with the first sample image; the prediction classification result is used for indicating whether the second sample image and the first sample image are the same type of image;
training the first channel network and the second channel network based on the prediction type result, the prediction classification result, the type label of the first sample image and the type label of the second sample image;
the first channel network is used for constructing an image classification model, and the image classification model is used for predicting the type of the target medical image.
In another aspect, an image processing apparatus for medical images is provided, the apparatus comprising:
the first processing module is used for calling a first channel network, processing a first sample image and obtaining a prediction type result corresponding to the first sample image and image characteristics of the first sample image; the prediction type result is used for indicating the type of the first sample image;
the second processing module is used for calling a second channel network, processing a second sample image and the image characteristics of the first sample image and obtaining a prediction classification result of the second sample image compared with the first sample image; the prediction classification result is used for indicating whether the second sample image and the first sample image are the same type of image;
a network training module, configured to train the first channel network and the second channel network based on the prediction type result, the prediction classification result, the type label of the first sample image, and the type label of the second sample image;
and the channel network obtained after the training of the first channel network is completed is used for constructing an image classification model, and the image classification model is used for predicting the type of the target medical image.
In one possible implementation manner, the second channel network includes n feature extraction layers connected layer by layer, and the first channel network includes m feature extraction layers connected layer by layer, where 2 ≤ n ≤ m and n and m are positive integers;

in response to the first target feature extraction layer being the first feature extraction layer of the second channel network, the input of the first target feature extraction layer is the second sample image;

in response to the first target feature extraction layer being the i-th feature extraction layer of the second channel network, the input of the first target feature extraction layer includes: the output result of the (i-1)-th feature extraction layer of the second channel network and the output result of the (i-1)-th feature extraction layer of the first channel network, where 2 ≤ i ≤ n.
In one possible implementation, in response to the second target feature extraction layer being the first feature extraction layer of the first channel network, the input of the second target feature extraction layer is the first sample image;

in response to the second target feature extraction layer being the l-th feature extraction layer of the first channel network, the input of the second target feature extraction layer includes the output result of the (l-1)-th feature extraction layer, where 2 ≤ l ≤ m.
In one possible implementation, the network training module includes:
a first parameter updating sub-module, configured to perform parameter updating on the second channel network based on a difference between the predicted classification result and a classification result tag, where the classification result tag is a classification result determined based on a type tag of the first sample image and a type tag of the second sample image;
and the second parameter updating sub-module is used for updating the parameters of the first channel network based on the difference between the prediction type result and the type label of the first sample image and the difference between the prediction classification result and the classification result label.
In a possible implementation, the first parameter updating sub-module is configured to calculate a function value of a first loss function based on a difference between the predicted classification result and the classification result label;
and updating parameters of the second channel network based on the function value of the first loss function.
In a possible implementation, the second parameter updating sub-module is configured to calculate a function value of the first loss function based on a difference between the predicted classification result and the classification result label;
calculating a function value of a second loss function based on a difference between the prediction type result and a type label of the first sample image;
updating parameters of the first channel network based on the function value of the first loss function and the function value of the second loss function.
In a possible implementation manner, the second parameter updating sub-module is configured to perform weighted summation on the function value of the first loss function and the function value of the second loss function, so as to obtain a weighted summation result;
and updating parameters of the first channel network based on the weighted summation result.
In one possible implementation, the apparatus further includes:
the first data enhancement module is used for performing data enhancement on the first sample image to obtain the first sample image after data enhancement;
the first processing module is configured to invoke a first channel network, process the data-enhanced first sample image, and obtain the prediction type result corresponding to the first sample image.
In one possible implementation, the apparatus further includes:
the second data enhancement module is used for performing data enhancement on the second sample image to obtain the second sample image after data enhancement;
the second processing module is configured to invoke a second channel network, process the second sample image after data enhancement and the image feature of the first sample image, and obtain the prediction classification result of the second sample image compared with the first sample image.
In one possible implementation, the data enhancement includes at least one of a random rotation and a random translation.
In one possible implementation, the apparatus further includes:
the third data enhancement module is used for performing data enhancement on the target medical image to obtain x data enhancement results corresponding to the target medical image;

the first obtaining module is used for sequentially inputting the x data enhancement results into the image classification model to obtain the prediction type results respectively corresponding to the x data enhancement results;

a second obtaining module, configured to obtain the prediction type result of the target medical image based on the prediction type results respectively corresponding to the x data enhancement results.
In another aspect, a computer device is provided, comprising a processor and a memory, in which at least one computer program is stored, which is loaded and executed by the processor to implement the above-mentioned image processing method for medical images.
In another aspect, a computer-readable storage medium is provided, in which at least one computer program is stored, which is loaded and executed by a processor to implement the above-mentioned image processing method for medical images.
In another aspect, a computer program product or computer program is provided, the computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the image processing method for medical images provided in the various alternative implementations described above.
The technical scheme provided by the application can comprise the following beneficial effects:
according to the image processing method for the medical image, the first channel network and the second channel network are trained by utilizing the first sample image and the second sample image, so that the training of the first channel network can be assisted by a prediction classification result obtained based on the second channel network, and the classification accuracy of an image classification model obtained based on the trained first channel network is improved;
meanwhile, since network training is performed based on two samples at a time, the number of sample image combinations that can be learned is exponentially increased, so that the overfitting caused by a limited number of samples in a small-sample learning scenario is avoided and the training effect of the model is improved.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
Fig. 1 shows a schematic diagram of a system architecture for an image processing method of a medical image provided by an exemplary embodiment of the present application;
FIG. 2 illustrates a flow chart of an image processing method for medical images provided by an exemplary embodiment of the present application;
FIG. 3 is a block diagram illustrating image classification model generation and image classification according to an exemplary embodiment;
FIG. 4 illustrates a flow chart of an image processing method for medical images provided by an exemplary embodiment of the present application;
FIG. 5 illustrates a schematic diagram of a first channel network and a second channel network shown in an exemplary embodiment of the present application;
FIG. 6 illustrates a schematic diagram of a backbone network shown in an exemplary embodiment of the present application;
FIG. 7 shows a block diagram of an image processing apparatus for medical images shown in an exemplary embodiment of the present application;
FIG. 8 illustrates a block diagram of a computer device shown in an exemplary embodiment of the present application;
fig. 9 shows a block diagram of a computer device according to an exemplary embodiment of the present application.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
It should be understood that reference to "a plurality" herein means two or more. "And/or" describes the association relationship of the associated objects and means that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
The embodiment of the present application provides an image processing method for medical images, which can improve image classification accuracy. The present application relates to artificial intelligence technology and machine learning technology.
Artificial Intelligence (AI) is a theory, method, technology and application system that uses a digital computer or a machine controlled by a digital computer to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge, and use that knowledge to obtain the best results. In other words, artificial intelligence is a comprehensive technology of computer science that attempts to understand the essence of intelligence and produce a new kind of intelligent machine that can react in a manner similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that the machines have the functions of perception, reasoning and decision-making.
Artificial intelligence is a comprehensive discipline covering a wide range of fields, including both hardware-level and software-level technologies. The artificial intelligence infrastructure generally includes technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, and mechatronics. Artificial intelligence software technology mainly includes computer vision technology, speech processing technology, natural language processing technology, and machine learning/deep learning. The solution provided in the embodiments of the present application mainly relates to computer vision technology and machine learning/deep learning.
Machine Learning (ML) is a multi-domain interdisciplinary subject involving probability theory, statistics, approximation theory, convex analysis, algorithm complexity theory and other disciplines. It specializes in studying how a computer can simulate or implement human learning behavior to acquire new knowledge or skills and reorganize existing knowledge structures to continuously improve its own performance. Machine learning is the core of artificial intelligence and the fundamental way to make computers intelligent, and it is applied in all fields of artificial intelligence. Machine learning and deep learning generally include techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, inductive learning, and learning from instruction.
Fig. 1 shows a schematic diagram of a system architecture of an image processing method for medical images provided by an exemplary embodiment of the present application, and as shown in fig. 1, the system includes: a computer device 110 and a medical image acquisition device 120.
When the computer device 110 is implemented as a server, the computer device 110 may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN (Content Delivery Network), big data, and artificial intelligence platforms. When the computer device 110 is implemented as a terminal, the computer device 110 may be a smart phone, a tablet computer, a laptop computer, a desktop computer, and the like.
The medical image capturing apparatus 120 is an apparatus having a medical image capturing function; for example, it may be a CT (Computed Tomography) detector for medical examination, a nuclear magnetic resonance apparatus, a positron emission computed tomography apparatus, a cardiac magnetic resonance apparatus, or another apparatus having an image capturing device.
Optionally, the system comprises one or more computer devices 110 and one or more medical image acquisition devices 120. The number of the computer device 110 and the medical image acquisition device 120 is not limited in the embodiment of the present application.
The medical image acquisition device 120 and the computer device 110 are connected via a communication network. Optionally, the communication network is a wired network or a wireless network.
Optionally, the wireless network or wired network described above uses standard communication techniques and/or protocols. The network is typically the Internet, but may be any network, including but not limited to a Local Area Network (LAN), a Metropolitan Area Network (MAN), a Wide Area Network (WAN), a mobile, wired or wireless network, a private network, or any combination of virtual private networks. In some embodiments, data exchanged over the network is represented using techniques and/or formats such as Hypertext Markup Language (HTML) and Extensible Markup Language (XML). All or some of the links may also be encrypted using conventional encryption techniques such as Secure Socket Layer (SSL), Transport Layer Security (TLS), Virtual Private Network (VPN), and Internet Protocol Security (IPsec). In other embodiments, custom and/or dedicated data communication techniques may also be used in place of, or in addition to, the data communication techniques described above. The application is not limited thereto.
Fig. 2 shows a flowchart of an image processing method for medical images provided by an exemplary embodiment of the present application, which may be executed by a computing device, which may be implemented as a server or a terminal as shown in fig. 1, and as shown in fig. 2, the image processing method for medical images may include the following steps:
step 210, calling a first channel network, processing the first sample image, and obtaining a prediction type result corresponding to the first sample image and an image feature of the first sample image; the prediction type result is used to indicate the type of the first sample image.
Step 220, calling a second channel network, processing the image characteristics of the second sample image and the first sample image, and obtaining a prediction classification result of the second sample image compared with the first sample image; the prediction classification result is used to indicate whether the second sample image and the first sample image are the same type of image.
In the embodiment of the present application, the sample image set includes a plurality of sample images, any two of which can form a sample image pair; the first sample image and the second sample image are the two sample images in the same sample image pair. Each sample image has a corresponding type label, and the type label of each sample image is used to indicate the type to which that sample image belongs. The sample images in the sample image set may be medical sample images, each corresponding to the same disease, and the type of each sample image can be used to help medical staff determine the image type and assist in disease diagnosis, such as judging recurrence or disease risk. For example, the type may indicate recurrence of the disease or no recurrence of the disease, or the type may indicate the recurrence risk of the disease, such as no recurrence risk, low recurrence risk, medium recurrence risk, or high recurrence risk.
Step 230, training the first channel network and the second channel network based on the prediction type result, the prediction classification result, the type label of the first sample image, and the type label of the second sample image.
The image processing method for medical images provided in the embodiments of the present application can be applied to small-sample learning scenarios. Small-sample learning (few-shot learning) is a branch of machine learning that aims to solve machine learning tasks with limited data. Because few samples are available in small-sample learning, using traditional machine learning methods easily causes overfitting and reduces the generalization performance of the model. Small-sample learning aims to use as few samples as possible while ensuring good performance.
In the embodiment of the present application, one round of training requires two different sample images from the sample image set as input. Therefore, the images in the sample image set may first be randomly grouped to obtain at least two first sample image pairs, and the first channel network and the second channel network are iteratively trained based on the at least two first sample image pairs according to the training process of steps 210 to 230. If the training completion condition has not been reached when this iterative training ends, the images in the sample image set may be randomly grouped again to obtain at least two second sample image pairs, and the first channel network and the second channel network are iteratively trained again based on the at least two second sample image pairs according to the training process of steps 210 to 230. The above process is repeated until a training completion condition is reached, where the training completion condition includes at least one of the following: the first channel network converges; both the first channel network and the second channel network converge; or the number of iterations reaches a threshold. Illustratively, if the sample image set includes N sample images, the first random grouping yields N/2 first sample image pairs, and the first channel network and the second channel network are trained based on the sample images in each first sample image pair. If the training completion condition is still not reached after training on the N/2 first sample image pairs, the N sample images are randomly grouped a second time to obtain N/2 second sample image pairs, the two networks are trained based on the sample images in each second sample image pair, and this process is repeated until the training completion condition is reached. Across this process, random grouping can produce up to N x N different sample image pair combinations, thereby achieving an exponential increase in the number of sample pairings that can be learned.
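As an illustrative aid, the random grouping described above can be sketched in Python as follows; this is a minimal sketch under assumed data representations (parallel lists of images and labels), not code from the patent:

import random

def make_sample_pairs(sample_images, labels):
    # Randomly group the sample set into disjoint (first, second) pairs.
    # With N samples this yields N/2 pairs per round; re-shuffling between
    # rounds lets training eventually cover up to N x N pair combinations.
    indices = list(range(len(sample_images)))
    random.shuffle(indices)
    pairs = []
    for i in range(0, len(indices) - 1, 2):
        a, b = indices[i], indices[i + 1]
        pairs.append(((sample_images[a], labels[a]),
                      (sample_images[b], labels[b])))
    return pairs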
In the embodiment of the present application, the channel network obtained after the training of the first channel network is completed is used to construct an image classification model, and the image classification model is used to predict the type of a target medical image.
Illustratively, the channel network obtained after the training of the first channel network is completed may be used directly as the image classification model, or a model may be reconstructed based on the structure and parameters of that channel network to obtain the image classification model. The process of training the first channel network and the second channel network may be executed by a terminal or a server. If the process is executed by a server, then after the trained first channel network is obtained, the server may deploy it as the image classification model, or may issue its structure and parameters to a deployment device so that the deployment device constructs the image classification model based on them; the deployment device may be implemented as a terminal or a server. Alternatively, related personnel may construct the image classification model based on the obtained structure and parameters of the first channel network and deploy it.
In summary, according to the image processing method for medical images provided in the embodiment of the present application, the first channel network and the second channel network are trained by using the first sample image and the second sample image, so that the prediction classification result obtained based on the second channel network can assist the training of the first channel network, and the classification accuracy of the image classification model obtained based on the trained first channel network is improved;
meanwhile, since network training is performed based on two samples at a time, the number of sample image combinations that can be learned is exponentially increased, so that the overfitting caused by a limited number of samples in a small-sample learning scenario is avoided and the training effect of the model is improved.
Application scenarios in the scheme described in the embodiments of the present application include, but are not limited to, the following scenarios:
1) Assisting medical staff in esophageal cancer recurrence detection:
the esophagus cancer is malignant tumor which is generated in the esophagus and is derived from esophageal epithelial cells, in clinical application, the estimation of the recurrence of the esophagus cancer influences clinical decision, and different treatment means can be adopted for high-risk/low-risk people; in clinical application, a corresponding medical image can be acquired by means of a medical image acquisition device to judge the recurrence risk of esophageal cancer, in order to improve the accuracy of prediction of the recurrence risk of esophageal cancer, the medical image of esophageal cancer acquired by the medical image acquisition device can be input into an image classification model through an image classification model obtained based on the image processing method for medical images provided by the embodiment of the application, a prediction type result output by the medical image of esophageal cancer output by the image classification model is obtained, and the prediction type result is used for indicating the recurrence risk of esophageal cancer, such as no recurrence risk, low recurrence risk, medium recurrence risk, high recurrence risk and the like.
2) Assisting medical staff in judging lesions in medical images:
In the medical field, medical staff often judge the possibility that a lesion exists in an organ through medical images acquired by medical image acquisition equipment, for example, lesion examination of the stomach, lung tumor examination, brain tumor examination, and the like. In the above scenarios, image classification models corresponding to these scenarios can be obtained through the image processing method for medical images provided by the present application to determine the possibility of a lesion existing in an organ, so that medical staff can reasonably allocate medical resources based on that possibility. Therefore, based on the image classification model obtained by the image processing method for medical images, the accuracy of classifying medical images can be improved, and the accuracy of judging the possibility of a lesion can be further improved, thereby realizing reasonable allocation of medical resources.
The scheme of the present application includes an image classification model generation stage and an image classification stage. Fig. 3 is a framework diagram illustrating image classification model generation and image classification according to an exemplary embodiment. As shown in fig. 3, in the image classification model generation stage, the image classification model generation device 310 trains the first channel network and the second channel network through a preset training sample data set (including different sample images and the type labels corresponding to the sample images) to obtain a trained first channel network, and then generates an image classification model based on the channel network obtained after the training of the first channel network is completed. In the image classification stage, the image classification device 320 processes an input target medical image based on the image classification model to obtain an image classification result of the target medical image, where the image classification result is used to indicate the prediction type result corresponding to the target medical image, for example, to determine the recurrence risk of a specified medical condition corresponding to the target medical image.
The image classification model generation device 310 and the image classification device 320 may be computer devices, for example, the computer devices may be stationary computer devices such as a personal computer and a server, or the computer devices may also be mobile computer devices such as a tablet computer and an e-book reader.
Alternatively, the image classification model generation device 310 and the image classification device 320 may be the same device, or the image classification model generation device 310 and the image classification device 320 may be different devices. Also, when the image classification model generation device 310 and the image classification device 320 are different devices, the image classification model generation device 310 and the image classification device 320 may be the same type of device, such as the image classification model generation device 310 and the image classification device 320 may both be servers; or the image classification model generation device 310 and the image classification device 320 may be different types of devices, for example, the image classification device 320 may be a personal computer or a terminal, and the image classification model generation device 310 may be a server or the like. The embodiment of the present application does not limit the specific types of the image classification model generation device 310 and the image classification device 320.
Fig. 4 shows a flowchart of an image processing method for medical images provided by an exemplary embodiment of the present application. The method may be executed by a computer device, which may be implemented as the server shown in fig. 1 or jointly as the server and the terminal. As shown in fig. 4, the image processing method for medical images includes the following steps:
step 410, calling a first channel network, processing the first sample image, and obtaining a prediction type result corresponding to the first sample image and an image feature of the first sample image; the prediction type result is used to indicate the type of the first sample image.
In the embodiment of the present application, the first channel network includes m feature extraction layers connected layer by layer, where m ≥ 2 and m is a positive integer;

in response to the second target feature extraction layer being the first feature extraction layer of the first channel network, the input of the second target feature extraction layer is the first sample image;

in response to the second target feature extraction layer being the l-th feature extraction layer of the first channel network, the input of the second target feature extraction layer includes the output result of the (l-1)-th feature extraction layer, where 2 ≤ l ≤ m.
That is, for the first feature extraction layer of the first channel network, the corresponding input is the first sample image, while the input of each subsequent feature extraction layer is the output result of the previous feature extraction layer.
In a possible implementation manner, in order to further reduce the overfitting in small-sample learning caused by a small number of samples, before the first channel network is called to process the first sample image and obtain the prediction type result corresponding to the first sample image, the method further includes:
performing data enhancement on the first sample image to obtain a data-enhanced first sample image;
and then, calling a first channel network, processing the data-enhanced first sample image, and obtaining a prediction type result corresponding to the first sample image.
The data enhancement includes at least one of cropping, random rotation, and random translation.
The first sample image and the second sample image are three-dimensional images, which may be, illustratively, Positron Emission Tomography (PET) images or Computed Tomography (CT) images; accordingly, data enhancement includes random three-dimensional rotation, random translation, and cropping. Random rotation means that the three-dimensional image is randomly rotated by 0 to 360 degrees in the x, y, and z directions; random translation means that the three-dimensional image is randomly shifted by some number of pixels along the x, y, and z directions, for example, randomly shifted by 0 to 15 pixels along each of the x, y, and z directions.
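As a minimal sketch of this augmentation, assuming NumPy volumes and using scipy.ndimage (an implementation choice not specified in the patent):

import numpy as np
from scipy.ndimage import rotate, shift

def augment_volume(volume, max_shift=15):
    # Random 0-360 degree rotation in each of the three coordinate planes,
    # followed by a random translation of 0-15 voxels along x, y and z.
    # The interpolation order and boundary mode are assumptions.
    for axes in ((0, 1), (0, 2), (1, 2)):
        angle = np.random.uniform(0.0, 360.0)
        volume = rotate(volume, angle, axes=axes, reshape=False,
                        order=1, mode="nearest")
    offsets = np.random.randint(0, max_shift + 1, size=3)
    volume = shift(volume, offsets, order=1, mode="nearest")
    return volume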
Step 420, calling a second channel network, processing the image characteristics of the second sample image and the first sample image, and obtaining a prediction classification result of the second sample image compared with the first sample image; the prediction classification result is used to indicate whether the second sample image and the first sample image are the same type of image.
In the embodiment of the present application, the second channel network includes n feature extraction layers connected layer by layer, where 2 ≤ n ≤ m and n is a positive integer.
In this embodiment of the application, the image feature of the first sample image may be an intermediate feature obtained by processing the first sample image through the first channel network, or an intermediate feature obtained by processing the data-enhanced first sample image through the first channel network.
Since the first channel network includes m feature extraction layers, an intermediate feature of the first sample image can be obtained from the processing of each feature extraction layer. Therefore, when the prediction classification result of the second sample image compared with the first sample image is obtained through the second channel network, the image features of the first sample image can be input in turn into the corresponding feature extraction layers of the second channel network, so that the second channel network can obtain the prediction classification result based on the extracted image features of the second sample image and the obtained image features of the first sample image.
That is, in response to the first target feature extraction layer being the first feature extraction layer of the second channel network, the input of the first target feature extraction layer is the second sample image;

in response to the first target feature extraction layer being the i-th feature extraction layer of the second channel network, the input of the first target feature extraction layer includes: the output result of the (i-1)-th feature extraction layer of the second channel network and the output result of the (i-1)-th feature extraction layer of the first channel network, where 2 ≤ i ≤ n.
In a possible implementation manner, data enhancement is performed on the second sample image to obtain a data-enhanced second sample image;
and then, a second channel network is called to process the data-enhanced second sample image together with the image features of the first sample image, and the prediction classification result of the second sample image compared with the first sample image is obtained.
Taking m = n as an example, fig. 5 shows a schematic diagram of a first channel network and a second channel network according to an exemplary embodiment of the present application. As shown in fig. 5, the backbone network in each of the first channel network 510 and the second channel network 520 includes n feature extraction layers. In the embodiment of the present application, the backbone network may be implemented as a residual network (ResNet), and the n feature extraction layers may be implemented as the n residual blocks (ResNet Blocks) in the residual network. The first channel network 510 is the main channel, and the classification result it finally outputs is the prediction type result of the first sample image 511; the second channel network 520 is the cooperative channel, and the prediction classification result it finally outputs is used to indicate whether the first sample image 511 and the second sample image 521 are images of the same type. In the training stage, taking the case where the first sample image and the second sample image are medical sample images of esophageal cancer and the label of each medical sample image is esophageal cancer recurrence or no recurrence as an example, the first block of the first channel network 510 receives the data-enhanced first sample image as input, and the first block of the second channel network receives the data-enhanced second sample image as input. Starting from the second block, for the first channel network 510, the input of each block is the output of the previous block of the first channel network 510; for the second channel network, the input of each block is the sum of the output of the previous block of the first channel network 510 and the output of the previous block of the second channel network 520. In this way, the first channel network and the second channel network can each concentrate on learning different content: the first channel network concentrates on predicting the type of the sample image, while the second channel network concentrates on distinguishing the degree of difference between images of different types, and this degree of difference is fed back to the first channel network during training to assist its training.
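A minimal PyTorch sketch of this two-channel forward pass follows; the stand-in blocks, dimensions and output heads are illustrative assumptions rather than the patent's exact configuration:

import torch
import torch.nn as nn

def make_block(dim):
    # Stand-in for one feature extraction layer (a residual block in Fig. 5);
    # a real model would use 3D convolutional residual blocks.
    return nn.Sequential(nn.Linear(dim, dim), nn.ReLU())

class DualChannelNet(nn.Module):
    # First (main) channel outputs the prediction type result; second
    # (cooperative) channel outputs the same-type prediction classification result.
    def __init__(self, dim=64, num_blocks=4, num_types=2):
        super().__init__()
        self.blocks_a = nn.ModuleList(make_block(dim) for _ in range(num_blocks))
        self.blocks_b = nn.ModuleList(make_block(dim) for _ in range(num_blocks))
        self.type_head = nn.Linear(dim, num_types)  # prediction type result
        self.same_head = nn.Linear(dim, 2)          # same type / not same type

    def forward(self, x1, x2):
        feat_a, feat_b = x1, x2
        for i, (blk_a, blk_b) in enumerate(zip(self.blocks_a, self.blocks_b)):
            # From the second block on, the cooperative channel receives the
            # sum of both channels' previous-block outputs.
            in_b = feat_b if i == 0 else feat_b + feat_a
            feat_a = blk_a(feat_a)
            feat_b = blk_b(in_b)
        return self.type_head(feat_a), self.same_head(feat_b)

In this sketch the cooperative channel sees both samples' features at every layer, matching the description that the degree of difference it learns is fed back to assist the main channel during training.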
Fig. 6 shows a schematic diagram of a backbone network according to an exemplary embodiment of the present application. As shown in fig. 6, in the embodiment of the present application, a data-enhanced sample image 610 is used as the input of the residual network, and the image features corresponding to the data-enhanced sample image are obtained through the processing of at least two residual blocks in the residual network; as shown in fig. 6, the at least two residual blocks may be implemented as at least two bottleneck (Bottleneck) layers.
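If the Fig. 6 backbone is adopted, the stand-in blocks in the sketch above could be replaced by 3D bottleneck residual blocks along the following lines; the channel sizes and layer choices are assumptions:

import torch.nn as nn

class Bottleneck3D(nn.Module):
    # One bottleneck residual block for a three-dimensional medical volume:
    # 1x1x1 reduce, 3x3x3 transform, 1x1x1 restore, plus a skip connection.
    def __init__(self, channels, mid_channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(channels, mid_channels, kernel_size=1, bias=False),
            nn.BatchNorm3d(mid_channels), nn.ReLU(inplace=True),
            nn.Conv3d(mid_channels, mid_channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm3d(mid_channels), nn.ReLU(inplace=True),
            nn.Conv3d(mid_channels, channels, kernel_size=1, bias=False),
            nn.BatchNorm3d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(x + self.body(x))  # residual connection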
The training process of the image processing model is mainly divided into a forward propagation stage and a backward feedback stage. The forward propagation stage is shown in fig. 5: data enhancement is performed on the first sample image and the second sample image respectively, the respective image features of the first sample image and the second sample image are extracted, and the prediction classification result and the prediction type result of the first sample image are obtained based on the respective image features of the two sample images;
the backward feedback stage is the process of updating the parameters of the first channel network and the second channel network according to the prediction classification result and the prediction type result of the first sample image obtained in the forward propagation stage, together with the type label corresponding to the first sample image and the type label corresponding to the second sample image.
Step 430, updating parameters of the second channel network based on the difference between the predicted classification result and a classification result label, where the classification result label is the classification result determined based on the type label of the first sample image and the type label of the second sample image.
Since each sample image carries its own type label, whether the sample images respectively input into the first channel network and the second channel network are the same type of image can be determined from the type labels of the two sample images, and the classification result label is thereby determined.
The process of updating the parameters of the second channel network may be implemented as follows:
calculating a function value of the first loss function based on a difference between the predicted classification result and the classification result label;
and updating parameters of the second channel network based on the function value of the first loss function.
Step 440, updating parameters of the first channel network based on the difference between the predicted type result and the type label of the first sample image, and the difference between the predicted classification result and the classification result label.
In the embodiment of the present application, based on the difference between the predicted classification result and the classification result label, the parameter update may be performed on the first channel network and the second channel network, and based on the difference between the predicted type result and the type label of the first sample image, the parameter update may be further performed on the first channel network.
The process of updating the parameter of the first channel network may be implemented as follows:
calculating a function value of the first loss function based on a difference between the predicted classification result and the classification result label;
calculating a function value of a second loss function based on a difference between the prediction type result and the type label of the first sample image;
and updating parameters of the first channel network based on the function value of the first loss function and the function value of the second loss function.
In the embodiment of the present application, the first loss function and the second loss function may be implemented as classification loss functions, such as one or more of a cross-entropy loss function, a mean square error loss function, an exponential loss function, a negative log-likelihood loss, and the like.
In this embodiment of the present application, when updating the parameters of the first channel network, the function value of the first loss function and the function value of the second loss function may be subjected to weighted summation to obtain a weighted summation result;
and updating parameters of the first channel network based on the weighted summation result.
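Continuing the DualChannelNet sketch above, the two parameter updates of steps 430 and 440 can be sketched as one plain-SGD training round; the parameter split, learning rate and loss weights w1/w2 are illustrative assumptions, since the patent only specifies which losses drive which channel's update:

import torch
import torch.nn.functional as F

def train_step(model, img1, img2, label1, label2, lr=1e-3, w1=1.0, w2=1.0):
    pred_type, pred_same = model(img1, img2)
    same_label = (label1 == label2).long()          # classification result label
    loss1 = F.cross_entropy(pred_same, same_label)  # first loss function
    loss2 = F.cross_entropy(pred_type, label1)      # second loss function

    coop_params = list(model.blocks_b.parameters()) + list(model.same_head.parameters())
    main_params = list(model.blocks_a.parameters()) + list(model.type_head.parameters())

    # Second (cooperative) channel: update from the first loss alone.
    g_coop = torch.autograd.grad(loss1, coop_params, retain_graph=True)
    # First (main) channel: update from the weighted sum of both losses.
    g_main = torch.autograd.grad(w1 * loss1 + w2 * loss2, main_params)

    with torch.no_grad():
        for p, g in zip(coop_params, g_coop):
            p -= lr * g
        for p, g in zip(main_params, g_main):
            p -= lr * g
    return loss1.item(), loss2.item()

With w1 = w2 = 1, this reduces to updating the cooperative channel from the first loss and the main channel from the plain sum of the two losses.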
Steps 410 to 440 are executed iteratively based on the first sample images and the second sample images in different sample image pairs until a training completion condition is reached, and the trained first channel network and the trained second channel network are obtained.
Step 450, constructing an image classification model based on the channel network obtained after the training of the first channel network is completed, where the image classification model is used to predict the type of the target medical image.
The target medical image is a medical image input into the image classification model.
After the image classification model is obtained, the image classification model may be deployed on a deployment device, which may be implemented as a terminal or a server.
When the image classification model is applied, the prediction type result of the target medical image output by the image classification model can be obtained by inputting the target medical image into the image classification model.
Or, in a possible implementation manner, in order to enhance the robustness of the prediction result, random data enhancement may be performed on the target medical image to obtain x random data enhancement results corresponding to the target medical image;
sequentially inputting the x random data enhancement results into an image classification model to obtain prediction type results corresponding to the x random data enhancement results respectively;
and acquiring the prediction type result of the target medical image based on the prediction type results respectively corresponding to the x random data enhancement results.
Optionally, the x random data enhancement results are different images obtained after different rotations, translations or cropping based on the target medical image.
Schematically, the average value of the prediction type results corresponding to the x random data enhancement results is obtained as the prediction type result of the target medical image, and the corresponding formula is as follows:

p = (1/x) · Σ_{i=1}^{x} p_i

where p_i denotes the prediction type result corresponding to the i-th random data enhancement result among the x prediction type results, and p denotes the prediction type result of the target medical image.
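As a sketch of this test-time averaging (reusing the augment_volume helper above; the deployed image classification model is assumed to map a batched volume to class probabilities):

import torch

def predict_with_tta(classifier, volume, x=8):
    # Average the prediction type results of x randomly augmented copies:
    # p = (1/x) * sum(p_i for i in 1..x).
    preds = []
    with torch.no_grad():
        for _ in range(x):
            aug = augment_volume(volume)                      # random rotation/translation
            inp = torch.from_numpy(aug).float().unsqueeze(0)  # add batch dimension
            preds.append(classifier(inp))
    return torch.stack(preds).mean(dim=0)                     # prediction type result p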
In summary, according to the image processing method for medical images provided in the embodiment of the present application, the first channel network and the second channel network are trained by using the first sample image and the second sample image, so that the prediction classification result obtained based on the second channel network can assist the training of the first channel network, and the classification accuracy of the image classification model obtained based on the trained first channel network is improved;
meanwhile, since network training is performed based on two samples at a time, the number of sample image combinations that can be learned is exponentially increased, so that the overfitting caused by a limited number of samples in a small-sample learning scenario is avoided and the training effect of the model is improved.
Fig. 7 shows a block diagram of an image processing apparatus for medical images according to an exemplary embodiment of the present application, which includes, as shown in fig. 7:
a first processing module 710, configured to invoke a first channel network, process a first sample image, and obtain a prediction type result corresponding to the first sample image and an image feature of the first sample image; the prediction type result is used for indicating the type of the first sample image;
a second processing module 720, configured to invoke a second channel network, process a second sample image and image features of the first sample image, and obtain a prediction classification result of the second sample image compared with the first sample image; the prediction classification result is used for indicating whether the second sample image and the first sample image are the same type of image;
a network training module 730, configured to train the first channel network and the second channel network based on the prediction type result, the prediction classification result, the type label of the first sample image, and the type label of the second sample image;
and the channel network obtained after the training of the first channel network is completed is used for constructing an image classification model, and the image classification model is used for predicting the type of the target medical image.
In one possible implementation manner, the second channel network includes n feature extraction layers connected layer by layer, and the first channel network includes m feature extraction layers connected layer by layer, where 2 ≤ n ≤ m and n and m are positive integers;

in response to the first target feature extraction layer being the first feature extraction layer of the second channel network, the input of the first target feature extraction layer is the second sample image;

in response to the first target feature extraction layer being the i-th feature extraction layer of the second channel network, the input of the first target feature extraction layer includes: the output result of the (i-1)-th feature extraction layer of the second channel network and the output result of the (i-1)-th feature extraction layer of the first channel network, where 2 ≤ i ≤ n.
In one possible implementation, in response to the second target feature extraction layer being the first feature extraction layer of the first channel network, the input of the second target feature extraction layer is the first sample image;

in response to the second target feature extraction layer being the l-th feature extraction layer of the first channel network, the input of the second target feature extraction layer includes the output result of the (l-1)-th feature extraction layer, where 2 ≤ l ≤ m.
In one possible implementation manner, the network training module 730 includes:
a first parameter updating sub-module, configured to perform parameter updating on the second channel network based on a difference between the predicted classification result and a classification result tag, where the classification result tag is a classification result determined based on a type tag of the first sample image and a type tag of the second sample image;
and the second parameter updating sub-module is used for updating the parameters of the first channel network based on the difference between the prediction type result and the type label of the first sample image and the difference between the prediction classification result and the classification result label.
In a possible implementation, the first parameter updating sub-module is configured to calculate a function value of a first loss function based on a difference between the predicted classification result and the classification result label;
and updating parameters of the second channel network based on the function value of the first loss function.
In a possible implementation, the second parameter updating sub-module is configured to calculate a function value of the first loss function based on a difference between the predicted classification result and the classification result label;
calculating a function value of a second loss function based on a difference between the prediction type result and a type label of the first sample image;
updating parameters of the first channel network based on the function value of the first loss function and the function value of the second loss function.
In a possible implementation manner, the second parameter updating sub-module is configured to perform weighted summation on the function value of the first loss function and the function value of the second loss function, so as to obtain a weighted summation result;
and updating parameters of the first channel network based on the weighted summation result.
In one possible implementation, the apparatus further includes:
the first data enhancement module is used for performing data enhancement on the first sample image to obtain the first sample image after data enhancement;
the first processing module 710 is configured to invoke a first channel network, process the data-enhanced first sample image, and obtain the prediction type result corresponding to the first sample image.
In one possible implementation, the apparatus further includes:
the second data enhancement module is used for performing data enhancement on the second sample image to obtain the second sample image after data enhancement;
the second processing module 720 is configured to invoke a second channel network, process the data-enhanced second sample image and the image features of the first sample image, and obtain the prediction classification result of the second sample image compared with the first sample image.
In one possible implementation, the data enhancement includes at least one of a random rotation and a random translation.
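As a sketch of what such enhancement might look like with an off-the-shelf toolchain (torchvision is an assumption here, as are the rotation and translation ranges):

```python
from torchvision import transforms

# Random rotation within +/-15 degrees and random translation of up to
# 10% of the image size on each axis; both ranges are illustrative.
augment = transforms.RandomAffine(degrees=15, translate=(0.1, 0.1))

# enhanced_image = augment(sample_image)  # e.g. a PIL image or CHW tensor
```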
In one possible implementation, the apparatus further includes:
a third data enhancement module, configured to perform random data enhancement on the target medical image to obtain x random data enhancement results corresponding to the target medical image;
a first obtaining module, configured to sequentially input the x random data enhancement results into the image classification model to obtain the prediction type results corresponding to the x random data enhancement results respectively;
and a second obtaining module, configured to obtain the prediction type result of the target medical image based on the prediction type results corresponding to the x random data enhancement results respectively.
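A minimal sketch of this test-time aggregation follows; averaging the softmax outputs is one plausible way to combine the x prediction type results, chosen here as an assumption since the embodiments do not fix the aggregation rule:

```python
import torch

@torch.no_grad()
def predict_with_enhancement(model, image, augment, x=8):
    """Average the model's softmax outputs over x random data
    enhancement results and return the most probable type."""
    probs = [torch.softmax(model(augment(image).unsqueeze(0)), dim=1)
             for _ in range(x)]
    return torch.stack(probs).mean(dim=0).argmax(dim=1)
```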
In summary, according to the image processing method for medical images provided in the embodiments of the present application, the first channel network and the second channel network are trained with the first sample image and the second sample image, so that the prediction classification result produced by the second channel network assists the training of the first channel network, improving the classification accuracy of the image classification model constructed from the trained first channel network;
meanwhile, because the networks are trained on pairs of samples, the number of learnable sample pairs grows roughly quadratically with the number of available sample images, which mitigates the overfitting caused by a limited sample count in small-sample learning scenarios and improves the training effect of the model.
Fig. 8 illustrates a block diagram of a computer device 800 according to an exemplary embodiment of the present application. The computer device may be implemented as the server in the above aspects of the present application. The computer device 800 includes a Central Processing Unit (CPU) 801, a system Memory 804 including a Random Access Memory (RAM) 802 and a Read-Only Memory (ROM) 803, and a system bus 805 connecting the system Memory 804 and the CPU 801. The computer device 800 also includes a mass storage device 806 for storing an operating system 809, application programs 810, and other program modules 811.
The mass storage device 806 is connected to the central processing unit 801 through a mass storage controller (not shown) connected to the system bus 805. The mass storage device 806 and its associated computer-readable media provide non-volatile storage for the computer device 800. That is, the mass storage device 806 may include a computer-readable medium (not shown) such as a hard disk or Compact Disc-Only Memory (CD-ROM) drive.
Without loss of generality, the computer-readable media may comprise computer storage media and communication media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes RAM, ROM, Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash Memory or other solid state Memory technology, CD-ROM, Digital Versatile Disks (DVD), or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage, or other magnetic storage devices. Of course, those skilled in the art will appreciate that the computer storage media is not limited to the foregoing. The system memory 804 and mass storage device 806 as described above may be collectively referred to as memory.
According to various embodiments of the present application, the computer device 800 may also operate by connecting to a remote computer over a network, such as the Internet. That is, the computer device 800 may be connected to the network 808 through the network interface unit 807 attached to the system bus 805, or may be connected to another type of network or a remote computer system (not shown) using the network interface unit 807.
The memory further stores at least one instruction, at least one program, a code set, or an instruction set, and the central processing unit 801 implements all or part of the steps of the image processing method for medical images shown in the above embodiments by executing the at least one instruction, the at least one program, the code set, or the instruction set.
Fig. 9 shows a block diagram of a computer device 900 provided in an exemplary embodiment of the present application. The computer device 900 may be implemented as the terminal described above, such as: a smartphone, a tablet, a laptop, or a desktop computer. Computer device 900 may also be referred to by other names such as user equipment, portable terminals, laptop terminals, desktop terminals, and the like.
Generally, computer device 900 includes: a processor 901 and a memory 902.
Processor 901 may include one or more processing cores, such as a 4-core processor, a 9-core processor, and so forth. The processor 901 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 901 may also include a main processor and a coprocessor, where the main processor is a processor for Processing data in an awake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 901 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content required to be displayed by the display screen. In some embodiments, the processor 901 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 902 may include one or more computer-readable storage media, which may be non-transitory. The memory 902 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in the memory 902 is used to store at least one instruction for execution by the processor 901 to implement all or part of the steps in the image processing method for medical images provided by the method embodiments in the present application.
In some embodiments, computer device 900 may also optionally include: a peripheral interface 903 and at least one peripheral. The processor 901, memory 902, and peripheral interface 903 may be connected by buses or signal lines. Various peripheral devices may be connected to the peripheral interface 903 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of a radio frequency circuit 904, a display screen 905, a camera assembly 906, an audio circuit 907, a positioning assembly 908, and a power supply 909.
The peripheral interface 903 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 901 and the memory 902. In some embodiments, the processor 901, memory 902, and peripheral interface 903 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 901, the memory 902 and the peripheral interface 903 may be implemented on a separate chip or circuit board, which is not limited by this embodiment.
In some embodiments, computer device 900 also includes one or more sensors 910. The one or more sensors 910 include, but are not limited to: acceleration sensor 911, gyro sensor 912, pressure sensor 913, fingerprint sensor 914, optical sensor 915, and proximity sensor 916.
Those skilled in the art will appreciate that the configuration illustrated in Fig. 9 does not limit the computer device 900, which may include more or fewer components than those illustrated, combine some components, or employ a different arrangement of components.
In an exemplary embodiment, a computer readable storage medium is also provided, for storing at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by a processor to implement all or part of the steps of the above-mentioned image processing method for medical images. For example, the computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a Compact Disc Read-Only Memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, a computer program product or a computer program is also provided, which comprises computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, causing the computer device to perform all or part of the steps of the method shown in any of the embodiments of Fig. 2 or Fig. 4.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (15)

1. An image processing method for medical images, the method comprising:
calling a first channel network, processing a first sample image, and obtaining a prediction type result corresponding to the first sample image and image characteristics of the first sample image; the prediction type result is used for indicating the type of the first sample image;
calling a second channel network, and processing a second sample image and the image features of the first sample image to obtain a prediction classification result of the second sample image compared with the first sample image; the prediction classification result is used for indicating whether the second sample image and the first sample image are the same type of image;
training the first channel network and the second channel network based on the prediction type result, the prediction classification result, the type label of the first sample image, and the type label of the second sample image;
and the channel network obtained after the training of the first channel network is completed is used for constructing an image classification model, and the image classification model is used for predicting the type of the target medical image.
2. The method of claim 1, wherein the second channel network comprises n feature extraction layers connected layer by layer, and the first channel network comprises m feature extraction layers connected layer by layer; 2 ≤ n ≤ m, and n and m are positive integers;
responding to a first target feature extraction layer as a first feature extraction layer of the second channel network, wherein the input of the first target feature extraction layer is the second sample image;
in response to the first target feature extraction layer being the i-th feature extraction layer of the second channel network, the input of the first target feature extraction layer comprises: the output result of the (i-1)-th feature extraction layer of the second channel network and the output result of the (i-1)-th feature extraction layer of the first channel network, wherein 2 ≤ i ≤ n.
3. The method of claim 2, wherein in response to the second target feature extraction layer being the first feature extraction layer of the first channel network, the input of the second target feature extraction layer is the first sample image;
and in response to the second target feature extraction layer being the l-th feature extraction layer of the first channel network, the input of the second target feature extraction layer comprises the output result of the (l-1)-th feature extraction layer of the first channel network, wherein 2 ≤ l ≤ m.
4. The method of claim 1, wherein training the first channel network and the second channel network based on the prediction type result, the prediction classification result, the type label of the first sample image, and the type label of the second sample image comprises:
updating parameters of the second channel network based on a difference between the predicted classification result and a classification result label indicating a classification result determined based on a type label of the first sample image and a type label of the second sample image;
updating parameters of the first channel network based on a difference between the prediction type result and a type label of the first sample image, and a difference between the prediction classification result and the classification result label.
5. The method of claim 4, wherein the updating parameters of the second channel network based on the difference between the predicted classification result and the classification result tag comprises:
calculating a function value of a first loss function based on a difference between the predicted classification result and the classification result label;
and updating parameters of the second channel network based on the function value of the first loss function.
6. The method of claim 4, wherein the updating the parameters of the first channel network based on the difference between the predicted type result and the type label of the first sample image and the difference between the predicted classification result and the classification result label comprises:
calculating a function value of a first loss function based on a difference between the predicted classification result and a classification result label;
calculating a function value of a second loss function based on a difference between the prediction type result and the type label of the first sample image;
updating parameters of the first channel network based on the function value of the first loss function and the function value of the second loss function.
7. The method of claim 6, wherein the updating the parameters of the first channel network based on the function value of the first loss function and the function value of the second loss function comprises:
carrying out weighted summation on the function value of the first loss function and the function value of the second loss function to obtain a weighted summation result;
and updating the parameters of the first channel network based on the weighted summation result.
8. The method of claim 1, wherein before invoking the first channel network to process the first sample image and obtain the prediction type result corresponding to the first sample image, the method further comprises:
performing data enhancement on the first sample image to obtain the first sample image after data enhancement;
the calling a first channel network to process a first sample image and obtain a prediction type result corresponding to the first sample image comprises:
and calling a first channel network, processing the first sample image after data enhancement, and obtaining the prediction type result corresponding to the first sample image.
9. The method of claim 1, wherein before invoking a second channel network to process a second sample image and image features of the first sample image and obtain a result of predictive classification of the second sample image as compared to the first sample image, the method further comprises:
performing data enhancement on the second sample image to obtain the second sample image after data enhancement;
the calling a second channel network to process a second sample image and the image features of the first sample image to obtain a prediction classification result of the second sample image compared with the first sample image comprises:
calling the second channel network, and processing the data-enhanced second sample image and the image features of the first sample image to obtain the prediction classification result of the second sample image compared with the first sample image.
10. The method of claim 8 or 9, wherein the data enhancement comprises at least one of cropping, random rotation, and random translation.
11. The method of claim 1, wherein after obtaining the image classification model, the method further comprises:
performing random data enhancement on the target medical image to obtain x random data enhancement results corresponding to the target medical image;
sequentially inputting the x random data enhancement results into the image classification model to obtain the prediction type results corresponding to the x random data enhancement results respectively;
and acquiring the prediction type result of the target medical image based on the prediction type results respectively corresponding to the x random data enhancement results.
12. The method of any one of claims 1 to 11, wherein the first sample image and the second sample image are three-dimensional images.
13. An image processing apparatus for medical images, characterized in that the apparatus comprises:
the first processing module is used for calling a first channel network, processing a first sample image and obtaining a prediction type result corresponding to the first sample image and image characteristics of the first sample image; the prediction type result is used for indicating the type of the first sample image;
the second processing module is used for calling a second channel network, processing a second sample image and the image characteristics of the first sample image and obtaining a prediction classification result of the second sample image compared with the first sample image; the prediction classification result is used for indicating whether the second sample image and the first sample image are the same type of image;
a network training module, configured to train the first channel network and the second channel network based on the prediction type result, the prediction classification result, the type label of the first sample image, and the type label of the second sample image;
and the channel network obtained after the training of the first channel network is completed is used for constructing an image classification model, and the image classification model is used for predicting the type of the target medical image.
14. A computer device, characterized in that the computer device comprises a processor and a memory, the memory storing at least one computer program which is loaded and executed by the processor to implement the image processing method for medical images according to any one of claims 1 to 12.
15. A computer-readable storage medium, in which at least one computer program is stored which is loaded and executed by a processor to implement the image processing method for medical images according to any one of claims 1 to 12.
CN202110976657.1A 2021-08-24 2021-08-24 Image processing method, device, equipment and storage medium for medical image Pending CN114283110A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110976657.1A CN114283110A (en) 2021-08-24 2021-08-24 Image processing method, device, equipment and storage medium for medical image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110976657.1A CN114283110A (en) 2021-08-24 2021-08-24 Image processing method, device, equipment and storage medium for medical image

Publications (1)

Publication Number Publication Date
CN114283110A true CN114283110A (en) 2022-04-05

Family

ID=80868459

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110976657.1A Pending CN114283110A (en) 2021-08-24 2021-08-24 Image processing method, device, equipment and storage medium for medical image

Country Status (1)

Country Link
CN (1) CN114283110A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115147314A (en) * 2022-09-02 2022-10-04 Tencent Technology (Shenzhen) Co., Ltd. Image processing method, device, equipment and storage medium
CN115690592A (en) * 2023-01-05 2023-02-03 Alibaba (China) Co., Ltd. Image processing method and model training method
CN115690592B (en) * 2023-01-05 2023-04-25 Alibaba (China) Co., Ltd. Image processing method and model training method

Similar Documents

Publication Publication Date Title
US10452899B2 (en) Unsupervised deep representation learning for fine-grained body part recognition
CN109166130B (en) Image processing method and image processing device
WO2020006961A1 (en) Image extraction method and device
Choi et al. Convolutional neural network technology in endoscopic imaging: artificial intelligence for endoscopy
US20180260957A1 (en) Automatic Liver Segmentation Using Adversarial Image-to-Image Network
WO2022001623A1 (en) Image processing method and apparatus based on artificial intelligence, and device and storage medium
WO2019167884A1 (en) Machine learning method and device, program, learned model, and identification device
CN110276741B (en) Method and device for nodule detection and model training thereof and electronic equipment
Rahman et al. A new method for lung nodule detection using deep neural networks for CT images
CN114283151A (en) Image processing method, device, equipment and storage medium for medical image
CN111932529B (en) Image classification and segmentation method, device and system
Zhao et al. Versatile framework for medical image processing and analysis with application to automatic bone age assessment
WO2021016087A1 (en) Systems for the generation of source models for transfer learning to application specific models
CN113256592B (en) Training method, system and device of image feature extraction model
Kumar et al. MobiHisNet: a lightweight CNN in mobile edge computing for histopathological image classification
CN114283110A (en) Image processing method, device, equipment and storage medium for medical image
CN108491812B (en) Method and device for generating face recognition model
Maity et al. Automatic lung parenchyma segmentation using a deep convolutional neural network from chest X-rays
CN111091010A (en) Similarity determination method, similarity determination device, network training device, network searching device and storage medium
Cui et al. Supervised machine learning for coronary artery lumen segmentation in intravascular ultrasound images
Cui et al. Collaborative learning of cross-channel clinical attention for radiotherapy-related esophageal fistula prediction from ct
CN113724185A (en) Model processing method and device for image classification and storage medium
CN113850796A (en) Lung disease identification method and device based on CT data, medium and electronic equipment
Belhadi et al. BIoMT-ISeg: Blockchain internet of medical things for intelligent segmentation
Zhu et al. Functional-realistic CT image super-resolution for early-stage pulmonary nodule detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination