CN111127430A - Method and device for determining medical image display parameters

Info

Publication number
CN111127430A
Authority
CN
China
Prior art keywords
medical image
image
determining
window
target
Prior art date
Legal status
Pending
Application number
CN201911348430.1A
Other languages
Chinese (zh)
Inventor
周越
邹彤
王瑜
孙岩峰
张金
赵朝炜
李新阳
陈宽
王少康
Current Assignee
Beijing Tuoxiang Technology Co ltd
Original Assignee
Beijing Tuoxiang Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Tuoxiang Technology Co ltd filed Critical Beijing Tuoxiang Technology Co ltd
Priority to CN201911348430.1A
Publication of CN111127430A

Classifications

    • G06T 7/0012 Image analysis: biomedical image inspection
    • G06T 7/11 Image analysis: region-based segmentation
    • G06T 7/90 Image analysis: determination of colour characteristics
    • G06T 2207/10081 Image acquisition modality: computed x-ray tomography [CT]
    • G06T 2207/20081 Special algorithmic details: training; learning
    • G06T 2207/20084 Special algorithmic details: artificial neural networks [ANN]
    • G06T 2207/30004 Subject of image: biomedical image processing

Abstract

The invention provides a method and a device for determining medical image display parameters. The method includes: acquiring medical image data, where the medical image data includes a medical image and an overall pixel value set; determining a target pixel value set for a target region in the medical image based on the medical image data; and determining display parameters based on the target pixel value set using a preset rule, where the display parameters include a window width and a window level. The technical scheme of the invention can dynamically adjust the display parameters and achieve clear display of the detail portions in the target region.

Description

Method and device for determining medical image display parameters
Technical Field
The invention relates to the field of medical artificial intelligence, in particular to a method and a device for determining medical image display parameters.
Background
Medical imaging devices are instruments that reproduce the internal structure of the human body as an image, and a clear medical image helps a doctor diagnose a patient's disease. Obtaining a clear medical image requires adjusting the display parameters of the medical imaging equipment; however, the display parameters of existing medical imaging equipment are mostly preset, making it difficult to display different images of different patients clearly.
Disclosure of Invention
In view of this, embodiments of the present invention provide a method and an apparatus for determining medical image display parameters, which can dynamically adjust display parameters to achieve clear display of detailed portions in a target region.
In a first aspect, an embodiment of the present invention provides a method for determining medical image display parameters, including: acquiring medical image data, where the medical image data includes a medical image and an overall pixel value set; determining a target pixel value set of a target region in the medical image based on the medical image data; and determining display parameters based on the target pixel value set using a preset rule, where the display parameters include a window width and a window level.
In some embodiments of the invention, determining a set of target pixel values for a target region in a medical image based on medical image data comprises: a target set of pixel values is determined based on the medical image and the overall set of pixel values using a deep learning model.
In some embodiments of the invention, determining a set of target pixel values for a target region in a medical image based on medical image data comprises: determining a probability image based on the medical image by utilizing a semantic segmentation model, wherein the size of the probability image is consistent with that of the medical image, and probability information corresponding to each pixel point in the probability image represents the probability that a region on the medical image corresponding to each pixel point in the probability image belongs to a target region; a target set of pixel values is determined from the probability image and the overall set of pixel values.
In some embodiments of the invention, the semantic segmentation model comprises an improved U-net network model, wherein determining the probabilistic image based on the medical image using the semantic segmentation model comprises: when a semantic segmentation model is used for processing a medical image, before convolution operation is carried out on each convolution layer, image expansion processing is carried out on an input image of the convolution layer to obtain a first image, wherein the size of the first image is larger than that of the input image; performing convolution processing on the first image by using the convolution layer to obtain a second image, wherein the size of the second image is equal to that of the input image; and determining a probability image by using a second image output by the last convolutional layer of the semantic segmentation model.
In some embodiments of the present invention, determining a target set of pixel values from the probabilistic image and the global set of pixel values comprises: determining a binary image according to the probability image, wherein a numerical value corresponding to each pixel point in the binary image indicates that a region on the medical image corresponding to each pixel point in the binary image belongs to a background region or a target region; a target set of pixel values is determined from the binary image and the global set of pixel values.
In some embodiments of the present invention, determining display parameters based on the target pixel value set using a preset rule comprises: determining a pixel value standard deviation according to each pixel value in the target pixel value set, and determining a window width according to the formula w = N*A, wherein the pixel value standard deviation is represented by A, the window width is represented by w, and N is a positive number; and determining a pixel value mean according to each pixel value in the target pixel value set, and determining a window level according to the formula c = B - M*w, wherein the pixel value mean is represented by B, the window level is represented by c, and M is a coefficient.
In some embodiments of the present invention, N is 5.5 to 6.5, and M is 0.03 to 0.07.
In some embodiments of the present invention, the display parameters further include an actual window upper limit and an actual window lower limit, wherein the determining the display parameters based on the target pixel value set by using a preset rule further includes: determining a theoretical window upper limit according to the sum of the half of the window width and the window level, and determining a theoretical window lower limit according to the difference between the window level and the half of the window width; when the maximum pixel value in the whole pixel value set is greater than or equal to the upper limit of the theoretical window, determining the upper limit of the theoretical window as the upper limit of the actual window, and when the maximum pixel value is less than the upper limit of the theoretical window, determining the maximum pixel value as the upper limit of the actual window; and when the minimum pixel value in the whole pixel value set is less than or equal to the lower limit of the theoretical window, determining the lower limit of the theoretical window as the lower limit of the actual window, and when the minimum pixel value is greater than the lower limit of the theoretical window, determining the minimum pixel value as the lower limit of the actual window.
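For concreteness, the following is a minimal sketch, not taken from the patent, of how the window width, window level and actual window limits described above could be computed with NumPy; the function and variable names are illustrative assumptions only.

```python
# Illustrative sketch only (not the patent's implementation); names are assumed.
import numpy as np

def compute_display_parameters(target_pixels, all_pixels, N=6.0, M=0.05):
    """Return (window width w, window level c, actual upper limit, actual lower limit)."""
    A = np.std(target_pixels)    # pixel value standard deviation of the target region
    B = np.mean(target_pixels)   # pixel value mean of the target region
    w = N * A                    # window width:  w = N * A
    c = B - M * w                # window level:  c = B - M * w

    wmax = c + w / 2.0           # theoretical window upper limit
    wmin = c - w / 2.0           # theoretical window lower limit

    # Clamp the theoretical limits against the extremes of the overall pixel value set.
    actual_upper = wmax if all_pixels.max() >= wmax else float(all_pixels.max())
    actual_lower = wmin if all_pixels.min() <= wmin else float(all_pixels.min())
    return w, c, actual_upper, actual_lower
```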
In some embodiments of the invention, the medical image data comprises computed tomography image data, molybdenum target image data, magnetic resonance image data, or ultrasound image data, the format of the medical image comprises a medical digital imaging and communications standard format, the medical image comprises a molybdenum target image of the breast, and the target region is a breast distribution region.
In a second aspect, an embodiment of the present invention provides a training method for a semantic segmentation model, including: acquiring a plurality of sample medical image data; and training a deep learning model by using the plurality of sample medical image data and a preset loss function to obtain the semantic segmentation model, where each sample medical image data in the plurality of sample medical image data includes a sample medical image and a sample label set, and each label in the sample label set indicates whether the region where the corresponding pixel point in the sample medical image is located belongs to the background region or the target region.
In some embodiments of the present invention, the preset loss function is: FL(p_t) = -(1 - p_t)^2 log(p_t), wherein the loss function is represented by FL; in the target region, p_t represents the probability, computed by the deep learning model, that a pixel point in the target region belongs to the target region, and in the background region, p_t represents the probability, computed by the deep learning model, that a pixel point in the background region belongs to the background region.
In a third aspect, an embodiment of the present invention provides an apparatus for determining medical image display parameters, including: an acquisition module for acquiring medical image data, the medical image data comprising a medical image and an overall set of pixel values; the determining module is used for determining a target pixel value set of a target area in the medical image based on the medical image data, and determining display parameters based on the target pixel value set by using a preset rule, wherein the display parameters comprise a window width and a window level.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, where the storage medium stores a computer program for executing the method for determining display parameters of medical images according to the first aspect or executing the method for training semantic segmentation models according to the second aspect.
In a fifth aspect, an embodiment of the present invention provides an electronic device, including: a processor; a memory for storing processor executable instructions, wherein the processor is configured to perform the method for determining medical image display parameters according to the first aspect or the method for training the semantic segmentation model according to the second aspect.
The embodiment of the invention provides a method and a device for determining medical image display parameters.
Drawings
Fig. 1 is a flowchart illustrating a method for determining display parameters of medical images according to an exemplary embodiment of the present invention.
Fig. 2 is an improved U-net network model provided by an exemplary embodiment of the present invention.
Fig. 3(a) is a set of display effect diagrams obtained without (left side) and with (right side) the method for determining medical image display parameters provided by an exemplary embodiment of the present invention.
Fig. 3(b) is another set of display effect diagrams obtained without (left side) and with (right side) the method for determining medical image display parameters provided by an exemplary embodiment of the present invention.
Fig. 4 is a flowchart illustrating a method for determining display parameters of medical images according to another exemplary embodiment of the present invention.
Fig. 5 is a flowchart illustrating a training method of a semantic segmentation model according to an exemplary embodiment of the present invention.
Fig. 6 is a schematic structural diagram of an apparatus for determining medical image display parameters according to an exemplary embodiment of the present invention.
Fig. 7 is a block diagram of an electronic device for determining medical image display parameters or training a semantic segmentation model according to an exemplary embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
To help diagnose whether an internal structure or tissue of the human body is diseased, medical images of the human body are often acquired by medical equipment such as X-ray machines. These medical images cannot be displayed directly on display equipment such as a computer; they must first be converted into a format that the display equipment supports.
For example, Digital Imaging and Communications in Medicine (DICOM) is an international standard for medical images and related information that defines a medical image format usable for data exchange with quality that meets clinical needs. DICOM is widely used in radiology, cardiovascular imaging and radiodiagnostic equipment, and is increasingly used in other medical fields such as ophthalmology and dentistry. A general-purpose computer cannot directly display DICOM medical images acquired by medical devices, so the DICOM medical image must be converted into an image format that the computer can display, such as a Device Independent Bitmap (DIB). Once the bitmap information header, the color table and the image data are stored in the computer's memory, the bitmap can be displayed.
The DICOM medical image includes a background region and a target region, which may be a region to be diagnosed in a human body, and in order to make the display of the target region clearer and facilitate the diagnosis of a doctor, display parameters, such as a window width and a window level, of the DICOM medical image need to be adjusted. In general, when the medical imaging device displays the DICOM medical image with the converted format, the window width and the window level are default values provided by a device manufacturer, and it is difficult to provide a clear and fine display effect under the conditions of different patients, different human body parts, different lesions, different shooting projection light intensities, and the like.
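As an aside (not part of the patent text), a DICOM medical image can be loaded in Python roughly as follows before any windowing is applied; the use of the pydicom package and the file name are assumptions for illustration.

```python
# Sketch only: load a DICOM file and obtain its pixel values; pydicom and the
# file path "example.dcm" are assumptions, not part of the patent.
import numpy as np
import pydicom

ds = pydicom.dcmread("example.dcm")          # hypothetical DICOM medical image
pixels = ds.pixel_array.astype(np.int32)     # e.g. values in [0, 4095] for 12-bit data
overall_pixel_values = pixels.ravel()        # the "overall pixel value set"
```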
Fig. 1 is a flowchart illustrating a method for determining display parameters of medical images according to an exemplary embodiment of the present invention. As shown in fig. 1, the method includes the following.
110: medical image data is acquired, the medical image data including a medical image and a set of global pixel values.
The medical image data may be medical image data satisfying international standards for digital medical imaging and communications, and the format of the medical image may be a standard format for digital medical imaging and communications. The medical image data may be referred to herein as DICOM medical image data and the medical image may be referred to as DICOM medical image.
Optionally, the medical image data in the embodiment of the present application may also be medical image data that meets other criteria, as long as the medical image in the medical image data needs to be displayed under the condition that the window width and the window level parameters are determined.
For convenience of description, the method for determining the medical image display parameters provided in the embodiments of the present application is described in detail by taking DICOM medical image data as an example.
The overall set of pixel values is a set of pixel values corresponding to individual pixel points on the medical image. Here, the pixel value may represent luminance information of a region corresponding to the pixel point. Due to different densities of different tissues or pathological changes of a human body, the absorption rates of light rays are different, and therefore pixel values (brightness information) of all pixel points are different. For example, the larger the pixel value of a pixel point is, the brighter the pixel point is; the smaller the pixel value of a pixel point is, the darker the pixel point is. The pixel values in the overall set of pixel values may or may not be contiguous. Similarly, the pixel values in the target set of pixel values may be continuous or discontinuous.
120: a set of target pixel values for a target region in a medical image is determined based on medical image data.
The medical image data may be Computed Tomography (CT) image data, Digital Radiography (DR) image data, magnetic resonance image data, ultrasound image data, or the like.
In an embodiment, the medical image data is molybdenum target image data, the medical image includes a breast molybdenum target image, and the target region in the medical image is a breast distribution region.
130: and determining display parameters based on the target pixel value set by using a preset rule, wherein the display parameters comprise window width and window level.
In an embodiment, the medical image may be a grayscale image. The maximum gray level of the DICOM medical image may be 4096 levels; after the display parameters are adjusted and the image format is converted, each pixel is described by its red, green and blue color components in the color table, and the maximum gray level of the image displayed via the display device may be 256 levels. Alternatively, the maximum gray level of the DICOM medical image and the maximum gray level of the image displayed by the display device may be other suitable values, which is not limited by the embodiment of the present application.
When the gray scale is 4096, the gray scale value may be any one of 0 to 4095. When the gray value of a pixel point is larger, the color of the pixel point is closer to white (or the pixel point is brighter); when the gray value of a pixel point is smaller, the color of the pixel point is closer to black (or the pixel point is darker).
In an embodiment, the gray scale value may be positively correlated with the pixel value, or the gray scale value may be the pixel value described above. For convenience of description, the following description will use gray scale values as pixel values to describe the technical solution of the present application. That is, the DICOM medical image has a pixel value range of [0, 4095], and the display image of the display apparatus has a pixel value range of [0, 255 ].
Since the dynamic range of pixel values of the medical image acquired by the medical device is large, e.g., the dynamic range of pixel values of the medical image is [0, 4095], it is difficult for the display device to provide such a high dynamic range of pixel values to display all the detailed information of the entire medical image at once. In order to realistically display the full detail information of the medical image, the pixel value dynamic range of the medical image may be subjected to a window transform (windowing). The range of image pixel values of the window area is linearly translated into the maximum display range of the display device through a data viewing window. Specifically, when the window region corresponds to a pixel value range of [20, 80], an image pixel value higher than the upper limit of the window may be set to be the brightest pixel value, such as 255, and an image pixel value lower than the lower limit of the window may be set to be the darkest pixel value, such as 0, and other pixel values in the window region may correspond to the [0, 255] range according to a linear mapping (the slope of the window transform is 255/w), so as to implement the display of the format-converted medical image by the display device.
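A minimal sketch of the linear window transform just described, assuming NumPy arrays; values below the window map to the darkest value 0, values above it to the brightest value 255, and values inside it are scaled with slope 255/w. The function name is an assumption for illustration.

```python
# Sketch of the window transform (windowing) described above; names are illustrative.
import numpy as np

def apply_window(pixels, window_width, window_level):
    lower = window_level - window_width / 2.0      # window lower limit
    # Values below the lower limit clip to 0, values above the upper limit clip to 255,
    # and values inside the window are mapped linearly with slope 255 / window_width.
    out = (pixels.astype(np.float64) - lower) * (255.0 / window_width)
    return np.clip(out, 0, 255).astype(np.uint8)
```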
The extent of the window area is defined by the window width and level.
The window width, denoted by w, represents the pixel value range of the window area; regions of the medical image above the window area are all displayed as white, and regions below the window area are all displayed as black. Increasing the window width includes more tissue structures of different densities in the image finally displayed by the display device, but the contrast between the structures is low and details in the image are difficult to observe; decreasing the window width includes fewer tissue structures of different densities, but the contrast between the structures is high and details in the image can be observed clearly.
The window level may be denoted by c for representing the pixel value of the central position of the window area. In the case of a certain window width, the window levels are different, and the specific pixel value ranges of the window regions are also different. For example, the window width is 60, and when the window level is 40, the pixel value range of the window area is [10, 70 ]; when the window level is 50, the pixel value range of the window region is [20, 80 ]. The pixel value range of the window region is only exemplary, and is for explaining the technical solution of the present application, and in practical use, the pixel value range of the window region may be selected according to an actual situation.
In one embodiment, the pixel value of the location where a tissue structure or lesion to be observed is located may be selected as the window level. For example, to observe whether a breast area is diseased, the width of the pixel value range of the breast distribution area may be used as a window width, and the pixel value of the center position of the breast distribution area may be used as a window level, so that the display device can display a clear image of the breast area, thereby facilitating the diagnosis process of the disease.
In another embodiment, the acquired target pixel value set of the breast region may be processed according to a preset rule to determine a window width and a window level, so as to realize clear image display of the breast region and facilitate a diagnosis process of a disease.
In an embodiment, when the medical image is a CT image, the larger the CT value of the human tissue corresponding to the pixel point is, the closer the color of the pixel point on the CT image is to white (or the brighter the pixel point is); the smaller the CT value of the human tissue corresponding to the pixel point is, the closer the color of the pixel point on the CT image is to black (or the darker the pixel point is).
In one embodiment, the pixel values may be positively correlated with the CT values.
The embodiment of the invention provides a method for determining medical image display parameters, which can realize clear display of detail parts in a target region by acquiring a target pixel value set of the target region in a medical image and dynamically determining the display parameters based on the target pixel value set, and is convenient for improving the accuracy of the subsequent diagnosis process. In addition, the target pixel value set of the target area is determined firstly, and then the display parameters are determined, so that the method can meet the dynamic determination of the display parameters under the conditions of different patients, different human body parts, different pathological changes, different shooting projection light intensities and the like, and the adaptability of the method is improved.
According to an embodiment of the present invention, 120 comprises: a target set of pixel values is determined based on the medical image and the overall set of pixel values using a deep learning model.
In this embodiment, the medical image and the set of global pixel values may be input to a deep learning model that operates to output a set of target pixel values.
Specifically, the deep learning model may be formed by at least one of a back-propagation neural network, a convolutional neural network, a recurrent neural network, a deep neural network, and other network structures. The deep learning model may be obtained after training with a plurality of sample data. Each sample data may include a sample medical image, a sample overall pixel value set, and a sample target pixel value set. During training, the sample image and the sample overall pixel value set are input into the deep learning model, the deep learning model outputs a predicted target pixel value set through its operations, and the parameters of the deep learning model are continuously adjusted according to the difference between the predicted target pixel value set and the sample target pixel value set, so as to realize the training process of the deep learning model.
The trained deep learning model can be used for determining a target pixel value set corresponding to a target region in a medical image to be detected.
Optionally, according to an embodiment of the present invention, 120 includes: determining a probability image based on the medical image by utilizing a semantic segmentation model, wherein the size of the probability image is consistent with that of the medical image, and probability information corresponding to each pixel point in the probability image represents the probability that a region on the medical image corresponding to each pixel point in the probability image belongs to a target region; a target set of pixel values is determined from the probability image and the overall set of pixel values.
In this embodiment, the medical image may be input to a semantic segmentation model that is operated on to output another image, i.e. a probabilistic image. The size of the probability image is consistent with that of the medical image, in other words, the pixel points on the medical image and the pixel points on the probability image are in one-to-one correspondence.
Specifically, the probability information corresponding to each pixel point on the probability image may include a parameter, a numerical value (or called probability value) of the parameter may be in a range of 0 to 1, and the parameter is used to indicate a probability that a region on the medical image corresponding to the pixel point belongs to the target region. When the probability value of a certain pixel point is greater than 0.5, the region on the medical image corresponding to the pixel point is considered to belong to a target region; when the probability value of a certain pixel point is less than 0.5, the region on the medical image corresponding to the pixel point is considered to belong to the background region; when the probability value of a certain pixel point is equal to 0.5, the region on the medical image corresponding to the pixel point is considered to belong to a target region or a background region, which can be determined according to the parameter setting of the semantic segmentation model.
Optionally, the probability information corresponding to each pixel point on the probability image may include two parameters, one is used to represent the probability that the region on the medical image corresponding to the pixel point belongs to the target region, and the other is used to represent the probability that the region on the medical image corresponding to the pixel point belongs to the background region, and the sum of the two is 1. For example, the probability information is (0.88, 0.12), where 0.88 represents the probability that the region on the medical image corresponding to the pixel point belongs to the target region, 0.12 represents the probability that the region on the medical image corresponding to the pixel point belongs to the background region, and since 0.88 is greater than 0.12, the region on the medical image corresponding to the pixel point belongs to the target region.
According to the probability information of each pixel point on the probability image, the position of a target region on the medical image can be determined, and then a target pixel value set corresponding to the target region can be determined by combining the whole pixel value set.
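The following is a minimal sketch, assuming NumPy arrays of equal shape and a 0.5 decision threshold, of how a target pixel value set could be taken from the probability image and the overall pixel value set; the function name is an illustrative assumption.

```python
# Sketch only: derive the binary mask and the target pixel value set; the
# threshold of 0.5 follows the description above.
import numpy as np

def extract_target_pixels(medical_image, probability_image, threshold=0.5):
    mask = probability_image > threshold    # binary image: True marks the target region
    return medical_image[mask]              # pixel values belonging to the target region
```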
According to the method for determining the medical image display parameters, provided by the embodiment of the invention, the medical image is analyzed through the semantic segmentation model, and the probability image with the size consistent with that of the medical image is output, so that the pixel values corresponding to all pixel points in the target area on the medical image can be obtained, the omission of information is avoided, and a foundation is laid for the subsequent determination of the proper display parameters.
In an embodiment, the semantic segmentation model includes a full convolution neural network model, and the full convolution neural network model may segment a background region and a target region of the medical image to determine a target pixel value set corresponding to the target region.
In another embodiment, the semantic segmentation model comprises an improved U-net network model, wherein determining the probabilistic image based on the medical image using the semantic segmentation model comprises: when a semantic segmentation model is used for processing a medical image, before convolution operation is carried out on each convolution layer, image expansion processing is carried out on an input image of the convolution layer to obtain a first image, wherein the size of the first image is larger than that of the input image; performing convolution processing on the first image by using the convolution layer to obtain a second image, wherein the size of the second image is equal to that of the input image; and determining a probability image by using a second image output by the last convolutional layer of the semantic segmentation model.
After the convolution processing is performed on the input image by the general U-net network model, the size of the output image is smaller than that of the input image, so that all pixel value information corresponding to the target area on the medical image is difficult to obtain. In the embodiment of the application, the medical image is processed by adopting the improved U-net network model, and the probability image with the size consistent with that of the medical image can be output.
As shown in fig. 2, the improved U-net network model includes a down-sampling process on the left side and an up-sampling process on the right side. The down-sampling process on the left side comprises five convolution processes and four pooling processes; the up-sampling process on the right side comprises four deconvolution processes and four convolution processes. Each convolution process may be one or more convolution operations. Taking any convolution operation as an example, assuming that the size of the input image of the convolution operation is 128 x 128, before performing the convolution operation, an image expansion process (or padding) is performed on the input image to obtain a first image, where the size of the first image is 130 x 130, larger than the size of the input image. Obtaining the 130 x 130 first image from the 128 x 128 input image can be regarded as adding a circle of pixels around the input image, and the pixel value of this circle of pixels may be consistent with the pixel value of the background pixels, for example, all 0. Then, a convolution operation, for example a 3 x 3 convolution, is performed on the first image to obtain a second image, and the size of the second image is 128 x 128, that is, consistent with the size of the input image. Performing the image expansion processing before each convolution operation ensures that the final output image size of the improved U-net network model is consistent with the input image size.
In the down-sampling process, the image (or feature map) size becomes smaller as the pooling proceeds. Through alternating convolution and pooling, the image goes from high resolution (shallow features) to low resolution (deep features). In the up-sampling process, the size of the image continuously increases as the deconvolution proceeds, and the images output by the right-side deconvolution operation and the left-side convolution operation on the same layer have the same size. In one embodiment, the convolution result on the left side can be combined with the deconvolution result on the same layer on the right side by the concat function to obtain an image with unchanged size and an increased number of channels. The deep features (which can be regarded as the result of the deconvolution operation on the right side) contain rich semantic information, while the shallow features (which can be regarded as the result of the convolution operation on the left side) have higher resolution, and the up-sampling process combines the deep features and the shallow features to obtain a clearer and more precise segmentation result.
The 128 x 128 input images described above are merely exemplary, and the actual input images may not be 128 x 128 in size, with the particular size of the input images being dependent on the size of the medical image. The number of convolution processes and pooling processes in fig. 2, and specific parameters of convolution operation, etc. may be set according to actual situations, and the embodiment of the present invention is not limited thereto.
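A sketch of the two ideas above, assuming a PyTorch implementation (the patent does not name a framework): a 3 x 3 convolution preceded by one ring of zero padding so the output keeps the input size, and the concatenation of same-size shallow and upsampled deep feature maps. Layer sizes and channel counts are assumptions.

```python
# Illustrative PyTorch sketch; layer sizes and channel counts are assumptions.
import torch
import torch.nn as nn

class SameSizeConv(nn.Module):
    """3x3 convolution with one ring of zero padding, so a 128x128 input stays 128x128."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)  # padding = image expansion
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.conv(x))

# Skip connection in the up-sampling path: the deep feature map is upsampled by a
# deconvolution and concatenated with the shallow feature map of the same size.
shallow = torch.randn(1, 64, 128, 128)                              # left-side convolution result
deep = torch.randn(1, 128, 64, 64)                                  # deeper, lower-resolution features
up = nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2)(deep)     # 64x64 -> 128x128
merged = torch.cat([shallow, up], dim=1)                            # size unchanged, channels added
```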
According to one embodiment of the invention, determining a set of target pixel values from the probabilistic image and the set of global pixel values comprises: determining a binary image according to the probability image, wherein a numerical value corresponding to each pixel point in the binary image indicates that a region on the medical image corresponding to each pixel point in the binary image belongs to a background region or a target region; a target set of pixel values is determined from the binary image and the global set of pixel values.
For example, according to the probability information corresponding to any pixel point on the probability image, it is determined whether the region on the medical image corresponding to that pixel point belongs to the target region. If it does not, the corresponding pixel point on the binary image is represented by 0; if it does, the corresponding pixel point on the binary image is represented by 1. A binary image can thereby be obtained. Of course, other numerical values may be used on the binary image to represent the region to which each pixel point belongs.
In one embodiment, the improved U-net network model can directly output a binary image. For example, when the probability information corresponding to any pixel point in the probability image is represented by only 0 or 1, the probability image is a binary image.
According to an embodiment of the present invention, 130 includes: determining a pixel value standard deviation according to each pixel value in the target pixel value set, and determining the window width according to the formula w = N*A, wherein the pixel value standard deviation is represented by A, the window width is represented by w, and N is a positive number; and determining a pixel value mean according to each pixel value in the target pixel value set, and determining the window level according to the formula c = B - M*w, wherein the pixel value mean is represented by B, the window level is represented by c, and M is a coefficient.
The window width and the window level greatly affect the display effect of the image finally displayed by the display device. If the window level is too small, the overall display effect of the image will be too bright, and if the window level is too large, the overall display effect of the image will be too dark. If the window width is too small, the effective information in the original medical image may be lost, and if the window width is too large, the contrast of the image is low, and the details cannot be observed.
In this embodiment, the pixel value standard deviation and the pixel value mean both utilize each pixel value in the target pixel value set, and therefore, the window width obtained based on the pixel value standard deviation and the window level obtained based on the pixel value mean can optimize the display effect of the display device, so that the display of the target area is clearer.
Further, N is 5.5 to 6.5, and M is 0.03 to 0.07. For example, N may be 5.5, 5.6, 5.7, 5.8, 5.9, 6, 6.1, 6.2, 6.3, 6.4, or 6.5, etc. M may be 0.03, 0.04, 0.05, 0.06, 0.07, or the like.
In one embodiment, N is 6 and M is 0.05.
Fig. 3(a) and 3(b) are display effect diagrams of two sets of breast molybdenum target images. The left side of each is the display effect obtained with the default window width and window level provided by the equipment manufacturer, and the right side is the display effect obtained with the window width and window level determined by the method provided by the embodiment of the invention. Comparing the left and right images shows that the method for determining medical image display parameters provided by the embodiment of the application yields a better window width and window level and a clearer, finer display effect.
According to an embodiment of the present invention, the display parameters further include an upper actual window limit and a lower actual window limit, wherein 130 further includes: determining a theoretical window upper limit according to the sum of the half of the window width and the window level, and determining a theoretical window lower limit according to the difference between the window level and the half of the window width; when the maximum pixel value in the whole pixel value set is greater than or equal to the upper limit of the theoretical window, determining the upper limit of the theoretical window as the upper limit of the actual window, and when the maximum pixel value is less than the upper limit of the theoretical window, determining the maximum pixel value as the upper limit of the actual window; and when the minimum pixel value in the whole pixel value set is less than or equal to the lower limit of the theoretical window, determining the lower limit of the theoretical window as the lower limit of the actual window, and when the minimum pixel value is greater than the lower limit of the theoretical window, determining the minimum pixel value as the lower limit of the actual window.
Specifically, the upper limit of the theoretical window is denoted by wmax, and the lower limit of the theoretical window is denoted by wmin.
wmax = c + w/2, wmin = c - w/2.
The overall set of pixel values includes pixel values for each pixel point on the medical image. When the maximum pixel value in the whole pixel value set is greater than or equal to the upper limit of the theoretical window, the upper limit of the theoretical window is the upper limit of the actual window; and when the maximum pixel value is smaller than the theoretical window upper limit, the maximum pixel value is the actual window upper limit. Therefore, when the theoretical window upper limit is larger than the maximum pixel value, the maximum pixel value is set as the actual window upper limit, the display parameters can be further optimized, the image contrast is further improved, and the display effect is improved. For similar reasons, the minimum pixel value may be set to the actual window lower limit when the theoretical window lower limit is less than the minimum pixel value.
Fig. 4 is a flowchart illustrating a method for determining display parameters of medical images according to another exemplary embodiment of the present invention. FIG. 4 is an example of the embodiment of FIG. 1, and the same parts are not repeated herein, and the differences are mainly described here. As shown in fig. 4, the method includes the following.
410: medical image data is acquired, the medical image data including a medical image and a set of global pixel values.
The types of the medical image data and the format of the medical image can be referred to the description in the embodiment of fig. 1, and are not described herein again to avoid repetition.
420: a probabilistic image is determined based on the medical image using a semantic segmentation model.
The size of the probability image is consistent with the size of the medical image, and the probability information corresponding to each pixel point in the probability image may include two parameters. For example, the probability information is (0.88, 0.12), where 0.88 represents the probability that the region on the medical image corresponding to the pixel point belongs to the target region, 0.12 represents the probability that the region on the medical image corresponding to the pixel point belongs to the background region, and since 0.88 is greater than 0.12, the region on the medical image corresponding to the pixel point belongs to the target region.
The semantic segmentation model is an improved U-net network model. The specific structure and operation principle of the improved U-net network model may be referred to the description in the embodiment of fig. 2, and are not described herein again to avoid repetition.
430: and determining a binary image according to the probability image.
The numerical value corresponding to each pixel point in the binary image indicates that the region on the medical image corresponding to each pixel point in the binary image belongs to a background region or a target region. For example, any pixel point of the binary image is represented by 0 or 1, 0 represents that the region on the medical image corresponding to the pixel point belongs to the background region, and 1 represents that the region on the medical image corresponding to the pixel point belongs to the target region.
440: a target set of pixel values is determined from the binary image and the global set of pixel values.
According to the pixel points whose value is 1 on the binary image, the corresponding pixel values in the overall pixel value set are selected to form the target pixel value set.
450: determining a pixel value standard deviation A according to each pixel value in the target pixel value set, determining the window width w = 6A, determining a pixel value mean value B according to each pixel value in the target pixel value set, and determining the window level c = B - 0.05w.
460: the theoretical window upper limit wmax is determined to be c + w/2 and the theoretical window lower limit wmin is determined to be c-w/2.
470: and determining an upper limit of the actual window based on the relation between the maximum pixel value and wmax in the whole pixel value set, and determining a lower limit of the actual window based on the relation between the minimum pixel value and wmin in the whole pixel value set.
When the maximum pixel value in the whole pixel value set is greater than or equal to wmax, determining wmax as the upper limit of the actual window, and when the maximum pixel value is less than wmax, determining the maximum pixel value as the upper limit of the actual window; and when the minimum pixel value in the whole pixel value set is less than or equal to wmin, determining wmin as the lower limit of the actual window, and when the minimum pixel value is greater than wmin, determining the minimum pixel value as the lower limit of the actual window.
And displaying the image based on the upper limit of the actual window, the lower limit of the actual window and the whole pixel value set, so that the display effect can be improved.
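Putting steps 410 to 470 together, a compact sketch of the whole flow might look as follows, reusing the illustrative helpers sketched earlier; segmentation_model stands for any callable that returns the probability image and is an assumption, not the patent's API.

```python
# Illustrative end-to-end sketch of steps 410-470; all names are assumptions.
def determine_display_parameters(medical_image, segmentation_model):
    all_pixels = medical_image.ravel()                            # 410: overall pixel value set
    prob_image = segmentation_model(medical_image)                # 420: probability image
    target_pixels = extract_target_pixels(medical_image,          # 430-440: binary image and
                                          prob_image)             #          target pixel value set
    return compute_display_parameters(target_pixels, all_pixels,  # 450-470: w, c and the actual
                                      N=6.0, M=0.05)              #          window limits
```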
Fig. 5 is a flowchart illustrating a training method of a semantic segmentation model according to an exemplary embodiment of the present invention. As shown in fig. 5, the training method includes the following.
510: a plurality of sample medical image data is acquired.
520: and training the deep learning model by using a plurality of sample medical image data and a preset loss function to obtain a semantic segmentation model.
Each sample medical image data in the plurality of sample medical image data comprises a sample medical image and a sample label set, and each label in the sample label set indicates whether the region where the corresponding pixel point in the sample medical image is located belongs to the background region or the target region.
The sample label set may be represented by a probability image or a binary image in the embodiment of fig. 1, i.e., a sample probability image or a sample binary image.
Specifically, the sample medical image is input into a deep learning model, the deep learning model can output a predicted probability image or a predicted binary image, and the predicted probability image or the predicted binary image can indicate that the region where each pixel point in the sample medical image belongs to a background region or a target region. And comparing the predicted probability image with a sample probability image, or comparing the predicted binary image with the sample binary image, and adjusting the parameters of the deep learning model according to the comparison result. And finally, a semantic segmentation model can be obtained by continuously adjusting the parameters of the deep learning model. The semantic segmentation model may be an improved U-net network model, and the specific structure thereof may be referred to the description in the embodiment of fig. 2.
The semantic segmentation model can be used to implement the process of determining the probability image in the embodiments of fig. 1 and 4.
The embodiment of the invention provides a training method of a semantic segmentation model, which is used for acquiring a target pixel value set of a target region in a medical image and further dynamically determining display parameters based on the target pixel value set, so that clear display of detail parts in the target region can be realized, and the accuracy of a subsequent diagnosis process is improved conveniently.
According to an embodiment of the present invention, the preset loss function is: FL(p_t) = -(1 - p_t)^2 log(p_t), wherein the loss function is represented by FL; in the target region, p_t represents the probability, computed by the deep learning model, that a pixel point in the target region belongs to the target region, and in the background region, p_t represents the probability, computed by the deep learning model, that a pixel point in the background region belongs to the background region.
During the training process, the parameters of the deep learning model are continuously adjusted so that FL approaches 0. Specifically, the sample label corresponding to a pixel point of the target region may be 1, and the sample label corresponding to a pixel point of the background region may be 0. During training, for any pixel point of the target region, the closer the probability p_t predicted by the deep learning model that the pixel point belongs to the target region is to 1, the better the deep learning model is trained; similarly, for any pixel point of the background region, the closer the probability p_t predicted by the deep learning model that the pixel point belongs to the background region is to 1, the better the deep learning model is trained.
The loss function provided in this embodiment can be adapted to both the background region and the target region, i.e., the expression of the loss function is the same regardless of whether it is the background region or the target region.
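A sketch of the preset loss function FL(p_t) = -(1 - p_t)^2 log(p_t), again assuming PyTorch; the tensor names and the epsilon used for numerical stability are assumptions, not part of the patent.

```python
# Illustrative implementation of the preset loss; not taken from the patent.
import torch

def preset_loss(pred_prob, target_mask, eps=1e-7):
    # pred_prob:   predicted probability that each pixel belongs to the target region
    # target_mask: 1 for target-region pixels, 0 for background pixels
    p_t = torch.where(target_mask == 1, pred_prob, 1.0 - pred_prob)
    p_t = p_t.clamp(min=eps, max=1.0 - eps)            # avoid log(0)
    loss = -((1.0 - p_t) ** 2) * torch.log(p_t)        # FL(p_t) = -(1 - p_t)^2 * log(p_t)
    return loss.mean()
```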
Fig. 6 is a schematic structural diagram of an apparatus 600 for determining medical image display parameters according to an exemplary embodiment of the present invention. As shown in fig. 6, the apparatus 600 includes: an acquisition module 610 and a determination module 620.
The obtaining module 610 is configured to obtain medical image data, where the medical image data includes a medical image and an overall set of pixel values; the determining module 620 is configured to determine a target pixel value set of a target region in the medical image based on the medical image data, and determine display parameters based on the target pixel value set by using a preset rule, where the display parameters include a window width and a window level.
The embodiment of the invention provides a device for determining medical image display parameters, which can realize clear display of detail parts in a target region by acquiring a target pixel value set of the target region in a medical image and dynamically determining the display parameters based on the target pixel value set, and is convenient for improving the accuracy of the subsequent diagnosis process. In addition, the target pixel value set of the target area is determined firstly, and then the display parameters are determined, so that the device can meet the dynamic determination of the display parameters under the conditions of different patients, different human body parts, different pathological changes, different shooting projection light intensities and the like, and the adaptability of the device is improved.
According to an embodiment of the invention, the determination module 620 is configured to determine a set of target pixel values based on the medical image and the set of global pixel values using a deep learning model.
According to an embodiment of the present invention, the determining module 620 is configured to: determining a probability image based on the medical image by utilizing a semantic segmentation model, wherein the size of the probability image is consistent with that of the medical image, and probability information corresponding to each pixel point in the probability image represents the probability that a region on the medical image corresponding to each pixel point in the probability image belongs to a target region; a target set of pixel values is determined from the probability image and the overall set of pixel values.
According to an embodiment of the present invention, the semantic segmentation model includes an improved U-net network model, wherein the determining module 620 is configured to: when a semantic segmentation model is used for processing a medical image, before convolution operation is carried out on each convolution layer, image expansion processing is carried out on an input image of the convolution layer to obtain a first image, wherein the size of the first image is larger than that of the input image; performing convolution processing on the first image by using the convolution layer to obtain a second image, wherein the size of the second image is equal to that of the input image; and determining a probability image by using a second image output by the last convolutional layer of the semantic segmentation model.
According to an embodiment of the present invention, the determining module 620 is configured to: determining a binary image according to the probability image, wherein a numerical value corresponding to each pixel point in the binary image indicates that a region on the medical image corresponding to each pixel point in the binary image belongs to a background region or a target region; a target set of pixel values is determined from the binary image and the global set of pixel values.
According to an embodiment of the present invention, the determining module 620 is configured to: determine a pixel value standard deviation according to each pixel value in the target pixel value set, and determine the window width according to the formula w = N*A, wherein the pixel value standard deviation is represented by A, the window width is represented by w, and N is a positive number; and determine a pixel value mean according to each pixel value in the target pixel value set, and determine the window level according to the formula c = B - M*w, wherein the pixel value mean is represented by B, the window level is represented by c, and M is a coefficient.
According to an embodiment of the present invention, N is 5.5 to 6.5, and M is 0.03 to 0.07.
According to an embodiment of the present invention, the display parameters further include an actual window upper limit and an actual window lower limit, wherein the determining module 620 is further configured to: determining a theoretical window upper limit according to the sum of the half of the window width and the window level, and determining a theoretical window lower limit according to the difference between the window level and the half of the window width; when the maximum pixel value in the whole pixel value set is greater than or equal to the upper limit of the theoretical window, determining the upper limit of the theoretical window as the upper limit of the actual window, and when the maximum pixel value is less than the upper limit of the theoretical window, determining the maximum pixel value as the upper limit of the actual window; and when the minimum pixel value in the whole pixel value set is less than or equal to the lower limit of the theoretical window, determining the lower limit of the theoretical window as the lower limit of the actual window, and when the minimum pixel value is greater than the lower limit of the theoretical window, determining the minimum pixel value as the lower limit of the actual window.
According to an embodiment of the present invention, the medical image data includes computed tomography image data, molybdenum target image data, magnetic resonance image data, or ultrasound image data, the format of the medical image includes a medical digital imaging and communication standard format, the medical image includes a breast molybdenum target image, and the target region is a breast distribution region.
It should be understood that the specific working processes and functions of the obtaining module 610 and the determining module 620 in the foregoing embodiments may refer to the description of the method for determining medical image display parameters provided in fig. 1 and fig. 4, and are not described herein again to avoid repetition.
Fig. 7 is a block diagram of an electronic device 700 for determining medical image display parameters or training a semantic segmentation model according to an exemplary embodiment of the present invention.
Referring to fig. 7, electronic device 700 includes a processing component 710 that further includes one or more processors, and memory resources, represented by memory 720, for storing instructions, such as applications, that are executable by processing component 710. The application programs stored in memory 720 may include one or more modules that each correspond to a set of instructions. Furthermore, the processing component 710 is configured to execute instructions to perform the above-described method of determining display parameters of medical images or to perform the above-described training method of semantic segmentation models.
The electronic device 700 may also include a power supply component configured to perform power management of the electronic device 700, a wired or wireless network interface configured to connect the electronic device 700 to a network, and an input-output (I/O) interface. The electronic device 700 may be operated based on an operating system stored in the memory 720, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
In an exemplary embodiment, a non-transitory computer-readable storage medium is also provided. When instructions of the storage medium are executed by a processor of the electronic device 700, the electronic device 700 is enabled to perform a method for determining medical image display parameters, comprising: acquiring medical image data, wherein the medical image data comprises a medical image and an integral pixel value set; determining a target set of pixel values for a target region in the medical image based on the medical image data; and determining display parameters based on the target pixel value set by using a preset rule, wherein the display parameters comprise a window width and a window level. Alternatively, when the instructions in the storage medium are executed by the processor of the electronic device 700, the electronic device 700 may execute a training method of a semantic segmentation model, comprising: acquiring a plurality of sample medical image data; and training a deep learning model by using the plurality of sample medical image data and a preset loss function to obtain the semantic segmentation model, wherein each sample medical image data in the plurality of sample medical image data comprises a sample medical image and a sample label set, and each label in the sample label set is used for indicating whether the region where the corresponding pixel point in the sample medical image is located belongs to a background region or a target region.
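Claim 11 below gives one concrete preset loss function, FL(p_t) = -(1 - p_t)^2 log(p_t). A minimal PyTorch sketch of that two-class, focal-style loss is shown here under the assumption that the model outputs a per-pixel probability of the target class; the function and argument names are illustrative.

    import torch

    def focal_style_loss(prob_target: torch.Tensor,
                         labels: torch.Tensor,
                         eps: float = 1e-7) -> torch.Tensor:
        # prob_target: per-pixel probability of the target class, in (0, 1).
        # labels: 1 where the pixel belongs to the target region, 0 for background.
        p_t = torch.where(labels == 1, prob_target, 1.0 - prob_target)
        p_t = p_t.clamp(min=eps, max=1.0 - eps)
        return (-(1.0 - p_t) ** 2 * torch.log(p_t)).mean()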
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
It should be noted that the terms "first," "second," "third," and the like in the description of the present invention are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. In addition, in the description of the present invention, "a plurality" means two or more unless otherwise specified.
The above description is only of preferred embodiments of the present invention and is not intended to limit the present invention; any modifications, equivalent substitutions, and the like made within the spirit and principles of the present invention shall fall within the scope of the present invention.

Claims (14)

1. A method for determining display parameters of medical images, comprising:
acquiring medical image data, the medical image data comprising a medical image and a set of global pixel values;
determining a set of target pixel values for a target region in the medical image based on the medical image data;
determining display parameters based on the target set of pixel values using a preset rule, the display parameters including a window width and a window level.
2. The method of claim 1, wherein said determining a set of target pixel values for a target region in the medical image based on the medical image data comprises:
determining the target set of pixel values based on the medical image and the set of global pixel values using a deep learning model.
3. The method of claim 1, wherein said determining a set of target pixel values for a target region in the medical image based on the medical image data comprises:
determining a probability image based on the medical image by utilizing a semantic segmentation model, wherein the size of the probability image is consistent with that of the medical image, and the probability information corresponding to each pixel point in the probability image represents the probability that the region on the medical image corresponding to that pixel point belongs to the target region;
determining the target set of pixel values from the probability image and the overall set of pixel values.
4. The method of claim 3, wherein the semantic segmentation model comprises a modified U-net network model, wherein,
the determining a probabilistic image based on the medical image using a semantic segmentation model includes:
when the semantic segmentation model is used for processing the medical image, before convolution operation is carried out on each convolution layer, image expansion processing is carried out on the input image of the convolution layer to obtain a first image, and the size of the first image is larger than that of the input image;
performing convolution processing on the first image by using the convolution layer to obtain a second image, wherein the size of the second image is equal to that of the input image;
and determining the probability image by utilizing a second image output by the last convolution layer of the semantic segmentation model.
5. The method of claim 3, wherein said determining the target set of pixel values from the probability image and the overall set of pixel values comprises:
determining a binary image according to the probability image, wherein the numerical value corresponding to each pixel point in the binary image indicates whether the region on the medical image corresponding to that pixel point belongs to a background region or the target region;
determining the target set of pixel values from the binary image and the overall set of pixel values.
6. The method of claim 1, wherein said determining display parameters based on the target set of pixel values using a preset rule comprises:
determining a pixel value standard deviation according to each pixel value in the target pixel value set, and determining the window width according to the formula w = N × A, wherein the pixel value standard deviation is represented by A, the window width is represented by w, and N is a positive number;
determining a pixel value mean according to each pixel value in the target pixel value set, and determining the window level according to the formula c = B - M × w, wherein the pixel value mean is represented by B, the window level is represented by c, and M is a coefficient.
7. The method of claim 6, wherein N is 5.5-6.5 and M is 0.03-0.07.
8. The method of any of claims 1 to 7, wherein the display parameters further comprise an upper actual window limit and a lower actual window limit, wherein,
the determining, by using a preset rule, display parameters based on the target set of pixel values further includes:
determining a theoretical window upper limit according to the sum of the half of the window width and the window level, and determining a theoretical window lower limit according to the difference between the window level and the half of the window width;
when the maximum pixel value in the overall pixel value set is greater than or equal to the theoretical window upper limit, determining the theoretical window upper limit as the actual window upper limit, and when the maximum pixel value is less than the theoretical window upper limit, determining the maximum pixel value as the actual window upper limit;
and when the minimum pixel value in the whole pixel value set is less than or equal to the lower limit of the theoretical window, determining the lower limit of the theoretical window as the lower limit of the actual window, and when the minimum pixel value is greater than the lower limit of the theoretical window, determining the minimum pixel value as the lower limit of the actual window.
9. The method of any one of claims 1 to 7, wherein the medical image data comprises computed tomography image data, molybdenum target image data, magnetic resonance image data, or ultrasound image data, wherein the format of the medical image comprises the Digital Imaging and Communications in Medicine (DICOM) standard format, wherein the medical image comprises a breast molybdenum target image, and wherein the target region is a breast distribution region.
10. A training method of a semantic segmentation model is characterized by comprising the following steps:
acquiring a plurality of sample medical image data;
training a deep learning model by using the plurality of sample medical image data and a preset loss function to obtain a semantic segmentation model, wherein each sample medical image data in the plurality of sample medical image data comprises a sample medical image and a sample label set, and each label in the sample label set is used for indicating whether the region where the corresponding pixel point in the sample medical image is located belongs to a background region or a target region.
11. Training method according to claim 10, wherein said preset loss function is:
FL(p_t) = -(1 - p_t)^2 log(p_t), wherein
the loss function is represented by FL; in the target region, p_t represents the probability, calculated by the deep learning model, that a pixel point in the target region belongs to the target region; and in the background region, p_t represents the probability, calculated by the deep learning model, that a pixel point in the background region belongs to the background region.
12. An apparatus for determining display parameters of medical images, comprising:
an acquisition module for acquiring medical image data, the medical image data comprising a medical image and a set of global pixel values;
a determining module, configured to determine a target pixel value set of a target region in the medical image based on the medical image data, and determine a display parameter based on the target pixel value set by using a preset rule, where the display parameter includes a window width and a window level.
13. A computer-readable storage medium, in which a computer program is stored, the computer program being adapted to perform the method for determining medical image display parameters according to any one of the preceding claims 1 to 9 or the method for training a semantic segmentation model according to claim 10 or 11.
14. An electronic device, the electronic device comprising:
a processor;
a memory for storing the processor-executable instructions,
wherein the processor is configured to execute the method for determining medical image display parameters according to any one of the claims 1 to 9 or the method for training the semantic segmentation model according to claim 10 or 11.
CN201911348430.1A 2019-12-24 2019-12-24 Method and device for determining medical image display parameters Pending CN111127430A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911348430.1A CN111127430A (en) 2019-12-24 2019-12-24 Method and device for determining medical image display parameters

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911348430.1A CN111127430A (en) 2019-12-24 2019-12-24 Method and device for determining medical image display parameters

Publications (1)

Publication Number Publication Date
CN111127430A true CN111127430A (en) 2020-05-08

Family

ID=70501918

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911348430.1A Pending CN111127430A (en) 2019-12-24 2019-12-24 Method and device for determining medical image display parameters

Country Status (1)

Country Link
CN (1) CN111127430A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105023273A (en) * 2015-07-01 2015-11-04 张逸凡 ROI (Region of Interest) window width and position adjusting method of medical image
CN109801254A (en) * 2017-11-14 2019-05-24 西门子保健有限责任公司 Transmission function in medical imaging determines
CN108010041A (en) * 2017-12-22 2018-05-08 数坤(北京)网络科技有限公司 Human heart coronary artery extracting method based on deep learning neutral net cascade model
CN108596884A (en) * 2018-04-15 2018-09-28 桂林电子科技大学 A kind of cancer of the esophagus dividing method in chest CT image
CN109685060A (en) * 2018-11-09 2019-04-26 科大讯飞股份有限公司 Image processing method and device
CN109509195A (en) * 2018-12-12 2019-03-22 北京达佳互联信息技术有限公司 Perspective process method, apparatus, electronic equipment and storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ZHANG Chenshi et al.: "Clinical application of bone artifact removal techniques after lumbar metal internal fixation", Chinese Journal of Medical Imaging Technology *
YANG Wei et al.: "An improved Focal Loss applied to semantic segmentation", Semiconductor Optoelectronics *
HU Jiexun: "System design of a portable digital X-ray diagnostic apparatus", China Master's Theses Full-text Database, Engineering Science and Technology II *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113516328A (en) * 2020-07-13 2021-10-19 阿里巴巴集团控股有限公司 Data processing method, service providing method, device, equipment and storage medium
CN111803104A (en) * 2020-07-20 2020-10-23 上海市第六人民医院 Medical image display method, medium and electronic equipment
CN112215804A (en) * 2020-09-15 2021-01-12 数坤(北京)网络科技有限公司 Data processing method, equipment and computer storage medium
CN112233126A (en) * 2020-10-15 2021-01-15 推想医疗科技股份有限公司 Windowing method and device for medical image
CN112669235A (en) * 2020-12-30 2021-04-16 上海联影智能医疗科技有限公司 Method and device for adjusting image gray scale, electronic equipment and storage medium
CN112669235B (en) * 2020-12-30 2024-03-05 上海联影智能医疗科技有限公司 Method, device, electronic equipment and storage medium for adjusting image gray scale
CN113902642A (en) * 2021-10-13 2022-01-07 数坤(北京)网络科技股份有限公司 Medical image processing method and device, electronic equipment and storage medium
CN114219813A (en) * 2021-12-16 2022-03-22 数坤(北京)网络科技股份有限公司 Image processing method, intelligent terminal and storage medium
CN114500498A (en) * 2021-12-28 2022-05-13 武汉联影医疗科技有限公司 DICOM file transmission and storage method, system, equipment and storage medium
CN114500498B (en) * 2021-12-28 2023-12-08 武汉联影医疗科技有限公司 DICOM file transmission and storage method, system, equipment and storage medium
CN114037803A (en) * 2022-01-11 2022-02-11 真健康(北京)医疗科技有限公司 Medical image three-dimensional reconstruction method and system
CN114037803B (en) * 2022-01-11 2022-04-15 真健康(北京)医疗科技有限公司 Medical image three-dimensional reconstruction method and system

Similar Documents

Publication Publication Date Title
CN111127430A (en) Method and device for determining medical image display parameters
US11257261B2 (en) Computed tomography visualization adjustment
US20150287188A1 (en) Organ-specific image display
CN110503652B (en) Method and device for determining relationship between mandible wisdom tooth and adjacent teeth and mandible tube, storage medium and terminal
US10867375B2 (en) Forecasting images for image processing
JP3836097B2 (en) MEDICAL IMAGE GENERATION DEVICE AND METHOD, AND PROGRAM
JP2016116774A (en) Image processor, image processing method, image processing system, and program
EP3391820B1 (en) Tomographic image processing device and method, and recording medium relating to method
CN110458837B (en) Image post-processing method and device, electronic equipment and storage medium
JP2000287955A (en) Image diagnostic supporting apparatus
JP2015058355A (en) Ct image evaluation device and ct image evaluation method
JP2016214857A (en) Medical image processor and medical image processing method
CN105678750B (en) The grey scale mapping curve generation method and device of medical image
JP2004174241A (en) Image forming method
EP3933759A1 (en) Image processing method, apparatus and system, and electronic device and storage medium
CN111918610A (en) Gradation conversion method for chest X-ray image, gradation conversion program, gradation conversion device, server device, and conversion method
JP2007325641A (en) Medical image display device
CN111325758A (en) Lung image segmentation method and device and training method of image segmentation model
CN113299369B (en) Medical image window adjusting optimization method
JP2006223894A (en) Device, method and program for medical image generation
US11138736B2 (en) Information processing apparatus and information processing method
KR20210073033A (en) Method for artificial intelligence nodule segmentation based on dynamic window and apparatus thereof
CN115192057B (en) CT-based composite imaging method and device
CN112001979B (en) Motion artifact processing method, system, readable storage medium and apparatus
CN100431494C (en) Method and apparatus for analyzing biological tissue images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200508