WO2022121170A1 - Image processing method and apparatus, electronic device, storage medium and program - Google Patents

Image processing method and apparatus, electronic device, storage medium and program

Info

Publication number
WO2022121170A1
Authority
WO
WIPO (PCT)
Prior art keywords: image, disease, processed, target, category
Prior art date
Application number: PCT/CN2021/083682
Other languages: English (en), French (fr)
Inventor
宋涛
Original Assignee
上海商汤智能科技有限公司
Priority date
Filing date
Publication date
Application filed by 上海商汤智能科技有限公司
Publication of WO2022121170A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G06T 7/0012: Biomedical image inspection
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/11: Region-based segmentation
    • G06T 7/12: Edge-based segmentation
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10116: X-ray image
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20081: Training; Learning
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30004: Biomedical image processing
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/24: Classification techniques
    • G06F 18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G06N 3/08: Learning methods

Definitions

  • the present application relates to the field of computer technology, and in particular, to an image processing method and apparatus, an electronic device, a computer storage medium, and a computer program.
  • the present application proposes an image processing method and apparatus, electronic device, computer storage medium and computer program.
  • An embodiment of the present application provides an image processing method, including: acquiring an image to be processed, where the image to be processed includes a chest X-ray image; performing disease classification processing on the image to be processed to determine a disease category corresponding to the image to be processed; and performing image processing on the to-be-processed image according to the image processing method corresponding to the disease category to obtain a processing result of the to-be-processed image.
  • the disease category includes a first disease category
  • the first disease category indicates that an abnormal target of the first object exists in the to-be-processed image; performing image processing on the to-be-processed image according to the image processing method corresponding to the disease category to obtain a processing result of the to-be-processed image includes: when the disease category is the first disease category, performing a first segmentation process on the to-be-processed image to determine a first image area where the abnormal target of the first object is located in the to-be-processed image and a second image area where the corresponding normal target is located; and determining the processing result of the image to be processed according to the first image area and the second image area.
  • determining the processing result of the image to be processed according to the first image area and the second image area includes: determining a first analysis result of the abnormal target of the first object according to the ratio of the area of the first image area to the area of the second image area, where the processing result includes the first analysis result and the first image area.
  • the disease category includes a third disease category
  • the third disease category includes multiple subcategories
  • performing image processing on the image to be processed according to the image processing method corresponding to the disease category to obtain the processing result of the image to be processed includes: when the disease category is the third disease category, performing sub-category classification processing on the image to be processed to determine the disease sub-category corresponding to the image to be processed, where the processing result includes the disease sub-category.
  • the method further includes: performing key point detection processing on the to-be-processed image to determine location information of target key points of multiple objects in the to-be-processed image; and determining a second analysis result of the target of each category according to the location information of the target key points of each category.
  • the target key points of the multiple objects include cardiac key points, thoracic key points and diaphragm key points
  • determining the second analysis result of the target of each category according to the position information of the target key points of each category includes: determining cardiothoracic ratio information according to the position information of the cardiac key points and the position information of the first thoracic key point among the thoracic key points; and determining costophrenic angle information according to the position information of the second thoracic key point among the thoracic key points and the position information of the diaphragm key points; wherein the first thoracic key point is different from the second thoracic key point, and the second analysis result includes the cardiothoracic ratio information and/or the costophrenic angle information.
  • the method further includes: performing a third segmentation process on the to-be-processed image to determine a fourth image area where the target of the third object in the to-be-processed image is located; and determining a third analysis result of the target of the third object according to the fourth image area.
  • determining the third analysis result of the target of the third object according to the fourth image area includes: determining, according to the fourth image area, the center line corresponding to the target of the third object; and determining, according to the center line, the scoliosis angle corresponding to the target of the third object; wherein the third object includes the spine.
  • the first disease category includes pneumothorax disease
  • the first object includes the lung
  • the abnormal target of the first object includes a pneumothorax area
  • the corresponding normal target includes a lung field area
  • the second disease category includes rib abnormalities
  • the second object includes ribs
  • the abnormal target of the second object includes abnormal ribs
  • the third disease category includes any of pulmonary edema, enlarged cardiac shadow, and mediastinal lesions.
  • An embodiment of the present application provides an image processing apparatus, including: an acquisition module configured to acquire an image to be processed, where the to-be-processed image includes a chest X-ray image; a first determination module configured to perform disease classification processing on the to-be-processed image to determine a disease category corresponding to the image to be processed; and a processing module configured to perform image processing on the image to be processed according to the image processing method corresponding to the disease category to obtain a processing result of the image to be processed.
  • the disease category includes a first disease category
  • the first disease category indicates that an abnormal target of the first object exists in the to-be-processed image
  • the processing module includes: a first segmentation sub-module configured to perform a first segmentation process on the to-be-processed image when the disease category is the first disease category, and to determine a first image area in the to-be-processed image where the abnormal target of the first object is located and a second image area where the corresponding normal target is located; and a result determination sub-module configured to determine the processing result of the to-be-processed image according to the first image area and the second image area.
  • the result determination sub-module is specifically configured to determine a first analysis result of the abnormal target of the first object according to the ratio of the area of the first image area to the area of the second image area, where the processing result includes the first analysis result and the first image area.
  • the disease category includes a second disease category
  • the second disease category is used to indicate that an abnormal target of the second object exists in the to-be-processed image
  • the processing module includes: a second segmentation sub-module configured to perform a second segmentation process on the to-be-processed image when the disease category is the second disease category, and to determine a third image area where the multiple targets of the second object in the to-be-processed image are located, where the processing result includes the third image area.
  • the disease category includes a third disease category
  • the third disease category includes a plurality of subcategories
  • the processing module includes: a classification sub-module configured to, when the disease category is the third disease category, perform sub-category classification processing on the to-be-processed image to determine a disease sub-category corresponding to the to-be-processed image, where the processing result includes the disease sub-category.
  • the apparatus further includes: a key point detection module configured to perform key point detection processing on the to-be-processed image and to determine the position information of target key points of multiple objects in the to-be-processed image; and a second determination module configured to determine the second analysis result of the target of each category according to the position information of the target key points of each category.
  • the target key points of the multiple objects include cardiac key points, thoracic key points, and diaphragm key points
  • the second determination module includes: a cardiothoracic ratio determination sub-module configured to determine cardiothoracic ratio information according to the position information of the cardiac key points and the position information of the first thoracic key point among the thoracic key points; and a costophrenic angle determination sub-module configured to determine costophrenic angle information according to the position information of the second thoracic key point among the thoracic key points and the position information of the diaphragm key points; wherein the first thoracic key point is different from the second thoracic key point, and the second analysis result includes the cardiothoracic ratio information and/or the costophrenic angle information.
  • the third determination module includes: a centerline determination sub-module configured to determine, according to the fourth image area, a centerline corresponding to the target of the third object; and a scoliosis angle determination sub-module configured to determine, according to the center line, the scoliosis angle corresponding to the target of the third object, wherein the third object includes the spine.
  • An embodiment of the present application provides an electronic device, including: a processor; and a memory configured to store instructions executable by the processor; wherein the processor is configured to call the instructions stored in the memory to execute any one of the foregoing methods.
  • An embodiment of the present application provides a computer-readable storage medium, on which computer program instructions are stored, and when the computer program instructions are executed by a processor, any one of the foregoing methods is implemented.
  • Embodiments of the present application further provide a computer program, including computer-readable code; when the computer-readable code runs in an electronic device, a processor in the electronic device executes it to implement any one of the foregoing methods.
  • corresponding image processing is performed for the disease category corresponding to the image to be processed to obtain the corresponding processing result, and multiple processing results corresponding to different diseases can be output, thereby improving the accuracy of disease detection and assisting doctors in improving the efficiency of disease diagnosis.
  • FIG. 1a is a schematic diagram of an application scenario of an embodiment of the present application.
  • FIG. 1b is a flowchart of an image processing method provided by an embodiment of the application.
  • FIG. 2 is a schematic diagram of a pneumothorax segmentation result based on a chest X-ray image provided by an embodiment of the present application
  • FIG. 3 is a schematic diagram of a processing result of rib segmentation provided by an embodiment of the present application.
  • FIG. 7 is a schematic diagram of measuring the scoliosis angle of a spine according to an embodiment of the present application.
  • FIG. 8 is a schematic diagram of an extracted spine centerline provided by an embodiment of the present application.
  • FIG. 11 is a schematic structural diagram of an image processing apparatus provided by an embodiment of the present application.
  • FIG. 12 is a schematic structural diagram of an electronic device provided by an embodiment of the application.
  • FIG. 13 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
  • auxiliary diagnostic technology can not only detect suspicious positive cases, but also exclude a large number of negative cases, saving doctors' time for reading images.
  • multiple parameters such as the location of the lesion, cardiothoracic ratio and pneumothorax compression ratio can be accurately detected, which can greatly improve the efficiency of the doctor's positive diagnosis.
  • the embodiments of the present application provide an image processing method, apparatus, electronic device, computer storage medium, and computer program.
  • FIG. 1b is a flowchart of an image processing method provided by an embodiment of the present application. As shown in FIG. 1b, the image processing method includes:
  • step S11: an image to be processed is acquired, where the image to be processed includes a chest X-ray image;
  • step S12: disease classification processing is performed on the image to be processed, and the disease category corresponding to the image to be processed is determined;
  • step S13: image processing is performed on the image to be processed according to the image processing method corresponding to the disease category, and a processing result of the image to be processed is obtained.
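  • As an illustration only (not part of the original disclosure), a minimal Python sketch of this three-step flow is given below; the classifier and the per-category handlers are hypothetical callables standing in for the trained models discussed later in this document.

      # Minimal sketch of steps S11-S13: acquire an image, classify the disease,
      # then dispatch to the image processing method matching each detected category.
      # The classifier and handler callables are hypothetical stand-ins, not the
      # disclosed implementation.
      from typing import Callable, Dict, List
      import numpy as np

      def process_chest_xray(
          image: np.ndarray,                                    # step S11: acquired chest X-ray
          classify_disease: Callable[[np.ndarray], List[str]],  # step S12: disease classification
          handlers: Dict[str, Callable[[np.ndarray], object]],  # step S13: per-category processing
      ) -> Dict[str, object]:
          categories = classify_disease(image)
          results: Dict[str, object] = {"categories": categories}
          for category in categories:
              handler = handlers.get(category)   # e.g. segmentation for pneumothorax,
              if handler is not None:            # sub-category classification for pulmonary edema
                  results[category] = handler(image)
          return results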
  • the image processing method may be executed by an electronic device such as a terminal device or a server
  • the terminal device may be a user equipment (User Equipment, UE), a mobile device, a user terminal, a terminal, a cellular phone, a cordless phone, a personal digital assistant (Personal Digital Assistant, PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, etc.
  • the method can be implemented by the processor calling the computer-readable instructions stored in the memory.
  • the method may be performed by a server.
  • the image to be processed is subjected to disease classification processing, which may be multi-target detection of the to-be-processed image through a preset target detection algorithm, and then the disease category is determined according to the multi-target detection result.
  • the multi-target detection may be to detect multiple disease features of various diseases, so as to determine the corresponding disease category according to the detected disease features.
  • the target detection algorithm may be, for example, but not limited to: multi-scale fusion target detection algorithm, RetinaNet algorithm, Faster R-CNN algorithm, YOLO (You Only Look Once) algorithm (a real-time target detection algorithm), Single Shot multiBox Detector (SSD) algorithm, etc.
  • the embodiments of the present application do not limit which target detection algorithm is used.
  • a large number of chest X-ray images with marked lesion areas and corresponding disease categories can be used as sample data to train the preset target detection algorithm, and the trained target detection algorithm can then be applied to the disease classification processing of the image to be processed.
  • those skilled in the art can set a training mode corresponding to the target detection algorithm used according to actual requirements, and the embodiment of the present application does not limit the training mode of the target detection algorithm.
  • the disease category may refer to the category of the disease that can be detected by the chest X-ray image, for example, the disease category may include: pneumothorax, fracture, pulmonary edema, enlarged cardiac shadow, mediastinum disease, etc.
  • the number and types of diseases are not limited in the embodiments of the present application.
  • for example, if pneumothorax is detected in step S12, the image processing method corresponding to pneumothorax can be set to image segmentation, so as to perform image segmentation processing on the image to be processed and obtain the segmented pneumothorax area; if pulmonary edema is detected in step S12, the image processing method corresponding to pulmonary edema can be set to image classification, so as to perform image classification processing on the image to be processed and obtain the subtype classification of pulmonary edema.
  • the subtype classification of pulmonary edema may be, for example, cardiogenic pulmonary edema, non-cardiogenic pulmonary edema, and the like.
  • corresponding image processing is performed for the disease category corresponding to the image to be processed to obtain the corresponding processing result, and multiple processing results corresponding to different diseases can be output, thereby improving the accuracy of disease detection and assisting doctors in improving the efficiency of disease diagnosis.
  • the disease category may include a first disease category indicating that an abnormal target of the first object exists in the image to be processed.
  • image processing is performed on the image to be processed, and a processing result of the image to be processed is obtained; this may include the case where the disease category is the first disease category.
  • FIG. 2 is a schematic diagram of a pneumothorax segmentation result based on a chest X-ray image according to an embodiment of the present application.
  • the pneumothorax area may be the first image area
  • the lung field area may be the second image area.
  • a pneumothorax area may exist only in the left thoracic cavity, or a pneumothorax area may exist only in the right thoracic cavity, or a pneumothorax area exists in both thoracic cavities
  • when a pneumothorax area exists in the thoracic cavity on one side, the corresponding lung field area in the thoracic cavity on that side may be segmented; when pneumothorax areas exist in the thoracic cavities on both sides, the corresponding lung field areas in the thoracic cavities on both sides are segmented, so as to obtain the second image area where the corresponding normal target is located.
  • when performing the first segmentation process on the image to be processed, a segmentation algorithm may be used to implement the image segmentation processing of the image to be processed.
  • the segmentation algorithm adopted may include, but is not limited to: segmentation algorithms based on various types of deep learning (for example, the VGGNet deep network, the ResNet deep network), segmentation algorithms based on edge detection (for example, the Roberts algorithm, the Sobel algorithm), segmentation algorithms based on active contour models (for example, the Snake algorithm, the level set algorithm), and so on.
  • the embodiments of the present application do not limit which segmentation algorithm is used.
  • a large number of chest X-ray images marked with the pneumothorax area and the lung field area can be used as sample data to train the adopted segmentation algorithm, and the trained segmentation algorithm can then be applied to the first segmentation process of the image to be processed. It can be understood that those skilled in the art can set a training mode corresponding to the adopted segmentation algorithm according to actual requirements, and the embodiment of the present application does not limit the training mode of the segmentation algorithm.
  • in this way, when the disease category is the first disease category, the first image area and the second image area are segmented, and the processing result is then determined according to the first image area and the second image area; thus, for pneumothorax disease, the pneumothorax area and the lung field area are segmented, which can provide information related to pneumothorax disease and improve the diagnosis efficiency of pneumothorax disease.
  • the area of the first image area may be an area calculated according to the coordinates of pixels in the first image area, or may be an area calculated according to the number of pixels in the first image area.
  • the area of the second image area can be calculated in the same manner as the area of the first image area. The embodiments of the present application do not limit the calculation methods of the area of the first image area and the area of the second image area.
  • the first image area may be a pneumothorax area
  • the second image area may be a lung field area.
  • the first analysis result of the pneumothorax disease is determined based on the ratio of the area of the segmented pneumothorax area to the area of the lung field area.
  • the first analysis result of the pneumothorax disease may be, for example, a lung compression ratio parameter, and the severity of the pneumothorax disease may be determined by the lung compression ratio parameter.
  • the processing result includes the first analysis result and the first image area, which can assist the doctor in determining the severity of the pneumothorax disease, reduce the pressure of manual measurement by the doctor, and improve the efficiency of disease diagnosis.
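  • By way of illustration only, the area-ratio computation described above can be sketched as follows, with the areas taken as pixel counts of the two segmentation masks; the exact mapping from this ratio to the lung compression ratio parameter is not spelled out in the text, so only the ratio itself is computed, and the function name is hypothetical.

      # Sketch of the area ratio used for the first analysis result: the areas of the
      # first image area (pneumothorax) and the second image area (lung field) are
      # taken as pixel counts of their binary masks, and their ratio is returned.
      import numpy as np

      def pneumothorax_to_lung_field_ratio(
          pneumothorax_mask: np.ndarray, lung_field_mask: np.ndarray
      ) -> float:
          pneumothorax_area = int(np.count_nonzero(pneumothorax_mask))  # area as pixel count
          lung_field_area = int(np.count_nonzero(lung_field_mask))
          if lung_field_area == 0:
              raise ValueError("empty lung field mask")
          return pneumothorax_area / lung_field_area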
  • the disease category may include a second disease category, and the second disease category is used to indicate that there is an abnormal target of the second object in the image to be processed.
  • in step S13, performing image processing on the image to be processed according to the image processing method corresponding to the disease category to obtain the processing result of the image to be processed may include: performing a second segmentation process on the image to be processed to determine a third image area where the multiple targets of the second object in the image to be processed are located, where the processing result includes the third image area.
  • the second disease category may include rib abnormalities
  • the second subject may include ribs
  • the abnormal target of the second subject may include abnormal ribs.
  • an abnormal rib may refer to a rib that is abnormal in appearance, for example, a fractured rib, a deformed rib, or a dislocated rib; there may be one or more abnormal ribs, for example, one or more fractured ribs.
  • the abnormal ribs may be some of all the ribs of the human body, and in order to locate the specific positions of the abnormal ribs, the third image area may be a plurality of rib areas. Determining the third image area where the multiple targets of the second object in the to-be-processed image are located is then equivalent to determining the rib areas of multiple ribs.
  • FIG. 3 is a schematic diagram of a processing result of rib segmentation according to an embodiment of the present application. From the processing result shown in FIG. 3, the specific position of the abnormal rib can be clearly determined.
  • when performing the second segmentation process on the image to be processed, a segmentation algorithm may be used to implement the image segmentation processing of the image to be processed.
  • the segmentation algorithm adopted may include, but is not limited to: segmentation algorithms based on various types of deep learning (for example, the VGGNet deep network, the ResNet deep network), segmentation algorithms based on edge detection (for example, the Roberts algorithm, the Sobel algorithm), segmentation algorithms based on active contour models (for example, the Snake algorithm, the level set algorithm), and so on.
  • the embodiments of the present application do not limit which segmentation algorithm is used.
  • the second segmentation process may use the same segmentation algorithm as the first segmentation process, or may be a different segmentation algorithm.
  • Those skilled in the art can select which segmentation algorithm to use to perform the second segmentation process according to actual requirements, as long as the ribs can be segmented.
  • a large number of chest X-ray images with abnormal ribs can be used as sample data to train the segmentation algorithm used for the second segmentation process, and the trained segmentation algorithm can then be applied to the second segmentation process of the image to be processed. It can be understood that those skilled in the art can set a training mode corresponding to the adopted segmentation algorithm according to actual requirements, and the embodiment of the present application does not limit the training mode of the segmentation algorithm.
  • in this way, the third image area is segmented and the processing result includes the third image area, so that the rib areas can be segmented for rib abnormality diseases, thereby improving the diagnostic efficiency for rib abnormalities.
  • the disease subcategory (ie, subtype classification) of pulmonary edema may include cardiogenic pulmonary edema, non-cardiogenic pulmonary edema, etc.
  • the disease subcategory of enlarged cardiac shadow may include pericardial effusion, myocarditis, myocardial hypertrophy, etc.
  • disease subcategories of mediastinal lesions can include mediastinal emphysema, mediastinal lymphadenopathy, and the like.
  • although the disease types included in the third disease category are described above as examples, those skilled in the art will understand that the present application should not be limited thereto. In fact, the user can flexibly set the disease types included in the third disease category according to the actual application scenario, as long as the disease category includes the corresponding subtype classification.
  • when the sub-category classification processing is performed on the image to be processed to determine the disease sub-category corresponding to the to-be-processed image, an image classification algorithm can be used to implement the sub-category classification processing.
  • the image classification algorithm may include, for example, but is not limited to: image classification algorithms based on deep learning technology (for example, convolutional neural networks, multi-layer feedforward neural networks), the K-Nearest Neighbor (KNN) classification algorithm, the Support Vector Machine (SVM) algorithm, etc.
  • a large number of chest X-ray images marked with disease subtype classifications can be used as sample data to train the image classification algorithm used for the sub-category classification processing, and the trained image classification algorithm can then be applied to the sub-category classification processing of the image to be processed. It can be understood that those skilled in the art can set a training mode corresponding to the image classification algorithm used according to actual requirements, and the embodiments of the present application do not limit the training mode of the image classification algorithm.
  • in this way, when the disease category is the third disease category, the disease sub-category is determined and the processing result includes the disease sub-category, so that diseases with subtypes can be further subdivided, thereby giving doctors more detailed disease information and improving the efficiency of disease diagnosis.
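  • Purely as an assumed illustration of such a deep learning sub-category classifier (the disclosed model architecture is not specified here), the sketch below fine-tunes a torchvision ResNet-18 head for a set of hypothetical pulmonary edema sub-categories.

      # Assumed illustration, not the disclosed model: a deep learning image classifier
      # for sub-category classification, e.g. cardiogenic vs. non-cardiogenic pulmonary
      # edema. ResNet-18 is used only as an example backbone.
      import torch
      import torch.nn as nn
      from torchvision import models

      SUBCATEGORIES = ["cardiogenic_pulmonary_edema", "non_cardiogenic_pulmonary_edema"]

      def build_subtype_classifier(num_subcategories: int = len(SUBCATEGORIES)) -> nn.Module:
          model = models.resnet18()   # weights would come from training on labelled chest X-rays
          model.fc = nn.Linear(model.fc.in_features, num_subcategories)
          return model

      def predict_subcategory(model: nn.Module, image: torch.Tensor) -> str:
          # image: a (1, 3, H, W) tensor preprocessed from the chest X-ray
          model.eval()
          with torch.no_grad():
              logits = model(image)
          return SUBCATEGORIES[int(logits.argmax(dim=1))]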
  • the quantitative indicators for the chest X-ray images may include: the scoliosis angle of the spine, the cardiothoracic ratio, the costophrenic angle, and the like.
  • the image processing method may further include: performing key point detection processing on the image to be processed to determine the location information of target key points of multiple objects in the image to be processed; and determining the second analysis result of the target of each category according to the location information of the target key points of each category.
  • FIG. 4 is a schematic diagram of a target key point based on a chest X-ray image provided by an embodiment of the present application.
  • Points 1-6 shown in the figure may be key points of the thorax
  • points 7 and 8 may be key points of the diaphragm
  • points 9-12 may be cardiac key points.
  • the location information of the target key points of the multiple objects may be coordinate information of the target key points in the image to be processed. According to the coordinate information of the target key point, corresponding second analysis results can be obtained for different quantitative analysis indicators.
  • performing key point detection processing on the image to be processed may be achieved by determining the position information of target key points of multiple objects in the image to be processed through a key point detection algorithm.
  • the key point detection algorithm may include, but is not limited to, differentiable key point detection algorithms (for example, the Roberts algorithm, the Sobel algorithm), key point detection algorithms based on deep learning technology (for example, the Convolutional Pose Machine (CPM) algorithm, the stacked hourglass algorithm), and so on.
  • the target keypoint positions may be constrained based on a heatmap, that is, a heatmap is generated for each target keypoint.
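  • For illustration, one common way to realize this heatmap constraint (assumed here, since the exact formulation is not given) is to render each target key point as a 2D Gaussian centered on its coordinates and train the network to regress these maps:

      # Common heatmap formulation (assumed): each target key point is rendered as a
      # 2D Gaussian heatmap centered on its (x, y) coordinates; one heatmap is
      # generated per key point, e.g. 12 key points give 12 heatmap channels.
      import numpy as np

      def keypoint_heatmap(x: float, y: float, height: int, width: int, sigma: float = 4.0) -> np.ndarray:
          ys, xs = np.mgrid[0:height, 0:width]
          heatmap = np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2.0 * sigma ** 2))
          return heatmap.astype(np.float32)   # peak value 1.0 at the key point location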
  • the target keypoints for the plurality of objects may include cardiac keypoints, thoracic keypoints, and diaphragm keypoints.
  • the determining of the second analysis result of the target of each category according to the position information of the key points of the target of each category may include:
  • the cardiothoracic ratio information is determined according to the cardiac key points and the first thoracic key point in the thoracic key points; the costophrenic angle information is determined according to the second thoracic key point and the diaphragm key point in the thoracic key points; among them, the first thoracic key point Unlike the second thoracic key point, the second analysis result includes cardiothoracic ratio information and/or costophrenic angle information.
  • the first thoracic key point among the cardiac key points and the thoracic key points may be a key point for calculating the cardiothoracic ratio.
  • the second thoracic key point and the diaphragm key point among the thoracic key points may be the key points for calculating the costophrenic angle.
  • FIG. 5 is a schematic diagram of a heart and chest provided by an embodiment of the present application.
  • T1 represents the maximum distance from the left cardiac border to the thoracic midline
  • T2 represents the maximum distance from the right cardiac border to the thoracic midline
  • T represents the maximum internal diameter of the thoracic cavity
  • O - O' represents the thoracic cavity midline.
  • an implementation manner of determining the cardiothoracic ratio information according to the cardiac key points and the first thoracic key point among the thoracic key points is described with the cardiothoracic schematic diagram shown in FIG. 5 .
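  • Using the quantities of FIG. 5, a common formulation of the cardiothoracic ratio is (T1 + T2) / T; the text does not state the formula explicitly, so the sketch below assumes this formulation, and the key point argument names are hypothetical.

      # Sketch of the cardiothoracic ratio from key point x-coordinates, using the
      # quantities of FIG. 5. CTR = (T1 + T2) / T is the common clinical definition
      # and is assumed here rather than taken from the text.
      def cardiothoracic_ratio(
          left_heart_x: float,    # leftmost point of the cardiac border
          right_heart_x: float,   # rightmost point of the cardiac border
          midline_x: float,       # thoracic midline O-O'
          thorax_left_x: float,   # left inner thoracic border at the widest level
          thorax_right_x: float,  # right inner thoracic border at the widest level
      ) -> float:
          t1 = abs(left_heart_x - midline_x)       # T1: max distance, left cardiac border to midline
          t2 = abs(right_heart_x - midline_x)      # T2: max distance, right cardiac border to midline
          t = abs(thorax_right_x - thorax_left_x)  # T: maximum internal diameter of the thorax
          return (t1 + t2) / t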
  • FIG. 6 is a schematic diagram of a costophrenic angle according to an embodiment of the present application.
  • the triangular area formed by the two thoracic points, the two diaphragm points, and the two low points shown in FIG. 6 can be approximated as the costophrenic angle area, and the angle value at the low point is used as the costophrenic angle information.
  • the second thoracic key point may include the two thoracic points and the two low points
  • the diaphragm key point may include the two diaphragm points.
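  • As an illustration of the angle measurement described above, the angle at a low point can be computed from the low point, a thoracic point, and a diaphragm point with elementary vector geometry; the point names below are illustrative placeholders.

      # Sketch of the costophrenic angle: the angle at the low point of the triangle
      # formed by a thoracic point, a diaphragm point and the low point.
      import math

      def costophrenic_angle(thoracic_pt, diaphragm_pt, low_pt) -> float:
          # Each point is an (x, y) pair in image coordinates; the angle is measured at low_pt.
          v1 = (thoracic_pt[0] - low_pt[0], thoracic_pt[1] - low_pt[1])
          v2 = (diaphragm_pt[0] - low_pt[0], diaphragm_pt[1] - low_pt[1])
          dot = v1[0] * v2[0] + v1[1] * v2[1]
          cos_angle = dot / (math.hypot(*v1) * math.hypot(*v2))
          cos_angle = max(-1.0, min(1.0, cos_angle))      # guard against rounding error
          return math.degrees(math.acos(cos_angle))       # angle in degrees at the low point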
  • although the cardiothoracic ratio information and the costophrenic angle information are described above as examples, those skilled in the art can understand that the present application should not be limited thereto. In fact, the user can flexibly determine the calculation methods of the cardiothoracic ratio information and the costophrenic angle information according to the actual application scenario, and the embodiments of the present application do not limit the calculation methods of the cardiothoracic ratio information and the costophrenic angle information.
  • the second analysis result includes cardiothoracic ratio information and costophrenic angle information, which can provide more detailed analysis information for disease diagnosis and improve the efficiency of disease diagnosis.
  • the quantitative analysis index for the chest X-ray image may include: the scoliosis angle of the spine.
  • the image processing method may also include:
  • FIG. 7 is a schematic diagram of measuring the scoliosis angle of a spine according to an embodiment of the present application.
  • as shown in FIG. 7, a perpendicular is drawn to the line along the upper edge of the upper end vertebra of the scoliotic spine, and a perpendicular is drawn to the line along the lower edge of the lower end vertebra; the intersection angle of the two perpendiculars is the Cobb angle, and determining the Cobb angle determines the scoliosis angle of the spine.
  • the upper vertebra and the lower vertebra are the vertebral bodies with the greatest inclination toward the concave side of scoliosis.
  • for the third segmentation process, the segmentation algorithms disclosed in the above embodiments of the present application may be used, for example, the Snake algorithm, the level set algorithm, and so on, which are not limited by the embodiments of the present application. It can be understood that those skilled in the art can select the segmentation algorithm used in the third segmentation process according to actual needs, and the third segmentation process can use the same segmentation algorithm as the first segmentation process and the second segmentation process, or a different segmentation algorithm.
  • a large number of chest X-ray images with marked spine regions can be used as sample data to train the segmentation algorithm used in the third segmentation process, and the trained segmentation algorithm can then be applied to the third segmentation process of the image to be processed. It can be understood that those skilled in the art can set a training mode corresponding to the adopted segmentation algorithm according to actual requirements, and the embodiment of the present application does not limit the training mode of the segmentation algorithm.
  • determining the third analysis result of the target of the third object according to the fourth image area may include: determining the center line corresponding to the target of the third object according to the fourth image area; and determining, according to the center line, the scoliosis angle corresponding to the target of the third object; wherein the third object includes the spine.
  • the fourth image area where the target of the third object is located may be a strip-shaped spine area, which may be understood as an entire spine area.
  • the third analysis result may include the scoliosis angle of the spine.
  • the center line corresponding to the target of the third object is determined according to the fourth image region, and the center line of the strip-shaped spine region can be extracted by using a center line extraction technology.
  • the centerline extraction technology may be, for example, a centerline extraction algorithm based on the Hessian matrix, a centerline extraction algorithm based on Gabor filters, a centerline extraction algorithm based on ridge tracking, and the like.
  • the embodiments of the present application do not limit which centerline extraction technology to use.
  • FIG. 8 is a schematic diagram of an extracted spine centerline according to an embodiment of the present application. As shown in Figure 8, the curve in the figure can be the extracted spine centerline.
  • points 1, 2, 3, and 4 can be the inflection points, L1, L2, L3, and L4 can be the tangent lines, and ∠a and ∠b can be the Cobb angles, that is, the scoliosis angles of the spine.
  • although the manner of determining the inflection point according to the curvature is described above as an example, those skilled in the art can understand that the present application should not be limited thereto.
  • the user can flexibly set the inflection point determination method according to the actual application scenario.
  • the inflection point can also be determined according to the slope of the center line; for example, a function can be fitted to the center line and differentiated to determine the inflection point.
  • the embodiments of the present application do not limit the manner of determining the inflection point.
  • in this way, the scoliosis angle of the spine can be easily and quickly determined, which provides more detailed analysis information for disease diagnosis and improves the efficiency of disease diagnosis.
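  • As an assumed sketch of one way to realize the centerline-based Cobb angle calculation described above (fit the centerline, locate inflection points from the sign change of the second derivative, and take the angle between the tangent lines at consecutive inflection points); this is an illustration, not the disclosed algorithm.

      # Sketch of a centerline-based Cobb angle estimate: fit a polynomial to the spine
      # centerline x = f(y), find inflection points where the second derivative changes
      # sign, and measure the angle between the tangent lines at consecutive inflection
      # points. One plausible realization of the description above, not the disclosed one.
      import numpy as np

      def cobb_angles_from_centerline(ys: np.ndarray, xs: np.ndarray, degree: int = 6):
          coeffs = np.polyfit(ys, xs, degree)          # centerline as x = f(y)
          d1 = np.polyder(coeffs, 1)                   # tangent slope dx/dy
          d2 = np.polyder(coeffs, 2)                   # curvature sign via second derivative
          y_dense = np.linspace(ys.min(), ys.max(), 500)
          sign = np.sign(np.polyval(d2, y_dense))
          inflection_ys = y_dense[np.where(np.diff(sign) != 0)[0]]
          angles = []
          for y_a, y_b in zip(inflection_ys[:-1], inflection_ys[1:]):
              slope_a, slope_b = np.polyval(d1, y_a), np.polyval(d1, y_b)
              theta = abs(np.degrees(np.arctan(slope_a)) - np.degrees(np.arctan(slope_b)))
              angles.append(float(theta))              # angle between the two tangent lines
          return angles                                # candidate Cobb angles along the spine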
  • DR chest X-ray can diagnose multiple lung diseases and fractures, and accurately identify the type and location of the disease through detection algorithms.
  • the incidence of scoliosis is high, which affects the shape and function of the spine and affects physiological and cardiorespiratory health.
  • the classification and localization of multiple lung diseases and fractures of DR chest X-ray can be realized, and multiple diseases can be quickly and accurately detected at one time through multi-target detection.
  • the key point detection algorithm is used for the first time to calculate cardiothoracic ratio and costophrenic angle
  • the pneumothorax segmentation model is used to calculate pneumothorax compression ratio.
  • for the DR chest X-ray, a spinal strip region segmentation algorithm is used, the center line is extracted, and the Cobb angle of the spine is calculated to judge the degree of scoliosis.
  • FIG. 10 is a schematic flowchart of an image processing method provided by an embodiment of the present application. As shown in Figure 10, the image processing method includes:
  • Multiple disease detection: using deep learning multi-target detection technology, multiple lung diseases, multiple mediastinal diseases, and multiple fractures can be detected on the full DR chest X-ray. This solution mainly adopts a multi-scale detection algorithm with dynamic matching and multi-feature aggregation, but is not limited thereto; other detection algorithms, such as RetinaNet, Faster R-CNN, YOLO, and SSD, may also be used;
  • Key point location: using deep learning key point detection technology, 6 key points for calculating the cardiothoracic ratio and 6 key points for calculating the costophrenic angle are detected on all DR chest X-rays. This solution uses a differentiable key point detection algorithm, and the heatmap distribution is used to constrain the positions of the key points, which provides better robustness and generalization performance; it is not limited thereto, and other deep key point detection algorithms may also be used;
  • Spine segmentation: a deep learning segmentation algorithm is used to segment the spinal strip area, centerline extraction technology is then used to extract the centerline of the spine, and the Cobb angle is calculated according to the centerline. The segmentation algorithm here is not limited, and may be any of various deep learning segmentation algorithms or traditional algorithms such as the Snake algorithm and the level set algorithm;
  • Pneumothorax segmentation: when the multi-disease detection module detects a pneumothorax disease, a deep learning segmentation algorithm is used to segment the pneumothorax and the lung field, and the lung compression ratio (pneumothorax compression ratio) parameter is calculated from the ratio of the segmented areas. The segmentation algorithm here is not limited, and may be any of various deep learning segmentation algorithms or traditional algorithms such as the Snake algorithm and the level set algorithm;
  • Subtype classification: a deep learning classification algorithm is used to classify the subtypes of certain detected diseases.
  • in this way, multi-target detection of multiple lung diseases, mediastinal diseases, and fractures can be performed on the full-picture DR chest X-ray; and, by using a deep learning target detection algorithm with multi-target dynamic matching and multi-feature aggregation, multiple disease targets can be detected at the same time while maintaining a good balance among multiple diseases.
  • a vertebral body segmentation algorithm is usually used to calculate the Cobb angle, which is difficult to label and segment.
  • the Cobb angle is calculated by segmenting the strip-shaped spine region and calculating the center line.
  • a classification algorithm is used to perform subtype classification of certain diseases, and subtype classification of diseases such as pulmonary edema, enlarged cardiac shadow, and mediastinal disease can be performed.
  • the method can be applied to products such as computer-aided diagnosis systems for medical images, remote diagnosis systems, and DR chest X-ray large-scale screening auxiliary diagnosis systems, and can realize a high-precision auxiliary diagnosis function for DR chest X-rays, meeting the actual diagnostic needs of doctors.
  • the image processing method according to the embodiment of the present application can be applied to the clinical screening auxiliary diagnosis.
  • in scenarios where doctors need to analyze and screen a large number of DR chest radiographs, determine which disease or fracture areas are on the DR chest radiographs, classify the subtypes of certain diseases, and at the same time measure the Cobb angle, the cardiothoracic ratio, and the pneumothorax compression ratio, the image processing method of the embodiments of the present application can obtain the processing results on the DR chest X-ray to meet the actual needs of doctors.
  • the screening volume is huge.
  • it can be determined in a short time whether the disease corresponding to the DR chest X-ray is malignant, which can greatly reduce the manpower and material cost in the diagnosis process;
  • the specific category and location, and the cardiothoracic ratio, costophrenic angle and other parameters required for disease diagnosis are calculated and presented to the doctor.
  • it can provide support for the doctor's clinical decision-making.
  • this application also provides image processing apparatuses, electronic devices, computer-readable storage media, and programs, all of which can be used to implement any image processing method provided in this application.
  • FIG. 11 is a schematic structural diagram of an image processing apparatus provided by an embodiment of the present application. As shown in FIG. 11 , the apparatus includes:
  • the disease category includes a first disease category
  • the first disease category indicates that an abnormal target of the first object exists in the to-be-processed image
  • the processing module 103 includes: a first segmentation sub-module configured to perform a first segmentation process on the to-be-processed image when the disease category is the first disease category, and to determine a first image area in the to-be-processed image where the abnormal target of the first object is located and a second image area where the corresponding normal target is located; and a result determination sub-module configured to determine the processing result of the to-be-processed image according to the first image area and the second image area.
  • the result determination sub-module is specifically configured to determine a first analysis result of the abnormal target of the first object according to the ratio of the area of the first image area to the area of the second image area, where the processing result includes the first analysis result and the first image area.
  • the disease category includes a second disease category
  • the second disease category is used to indicate that an abnormal target of the second object exists in the to-be-processed image
  • the processing module 103 includes: a second segmentation sub-module configured to perform a second segmentation process on the to-be-processed image when the disease category is the second disease category, and to determine a third image area where the multiple targets of the second object in the to-be-processed image are located, where the processing result includes the third image area.
  • the disease category includes a third disease category
  • the third disease category includes a plurality of subcategories
  • the processing module 103 includes: a classification sub-module configured to, when the disease category is the third disease category, perform sub-category classification processing on the to-be-processed image to determine a disease sub-category corresponding to the to-be-processed image, where the processing result includes the disease sub-category.
  • the apparatus further includes: a key point detection module configured to perform key point detection processing on the to-be-processed image and to determine the position information of target key points of multiple objects in the to-be-processed image; and a second determination module configured to determine the second analysis result of the target of each category according to the position information of the target key points of each category.
  • the target key points of the multiple objects include cardiac key points, thoracic key points, and diaphragm key points
  • the second determination module includes: a cardiothoracic ratio determination sub-module configured to determine cardiothoracic ratio information according to the position information of the cardiac key points and the position information of the first thoracic key point among the thoracic key points; and a costophrenic angle determination sub-module configured to determine costophrenic angle information according to the position information of the second thoracic key point among the thoracic key points and the position information of the diaphragm key points; wherein the first thoracic key point is different from the second thoracic key point, and the second analysis result includes the cardiothoracic ratio information and/or the costophrenic angle information.
  • the apparatus further includes: a segmentation module configured to perform a third segmentation process on the image to be processed and to determine a fourth image area where the target of the third object in the image to be processed is located; and a third determination module configured to determine a third analysis result of the target of the third object according to the fourth image area.
  • the third determination module includes: a centerline determination sub-module configured to determine, according to the fourth image area, a centerline corresponding to the target of the third object; and a scoliosis angle determination sub-module configured to determine, according to the center line, the scoliosis angle corresponding to the target of the third object, wherein the third object includes the spine.
  • corresponding image processing is performed for the disease category corresponding to the image to be processed to obtain the corresponding processing result, and multiple processing results corresponding to different diseases can be output, thereby improving the accuracy of disease detection and assisting doctors in improving the efficiency of disease diagnosis.
  • the functions or modules included in the apparatuses provided in the embodiments of the present application may be used to execute the methods described in the above method embodiments.
  • the embodiments of the present application further provide a computer-readable storage medium, on which computer program instructions are stored, and when the computer program instructions are executed by a processor, any one of the foregoing methods is implemented.
  • the computer-readable storage medium may be a non-volatile computer-readable storage medium.
  • An embodiment of the present application further provides an electronic device, including: a processor; a memory configured to store instructions executable by the processor; wherein the processor is configured to call instructions stored in the memory to execute any one of the foregoing method.
  • Embodiments of the present application also provide a computer program product, including computer-readable code; when the computer-readable code runs on a device, a processor in the device executes instructions for implementing the image processing method provided in any of the above embodiments.
  • the embodiments of the present application further provide another computer program product, configured to store computer-readable instructions, and when the instructions are executed, cause the computer to perform the operations of the image processing method provided by any of the foregoing embodiments.
  • the electronic device may be provided as a terminal, server or other form of device.
  • FIG. 12 is a schematic structural diagram of an electronic device 800 according to an embodiment of the present application.
  • the electronic device 800 may be a terminal such as a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, or a personal digital assistant.
  • the first processing component 802 generally controls the overall operation of the electronic device 800, such as operations associated with display, phone calls, data communications, camera operations, and recording operations.
  • the first processing component 802 may include one or more processors 820 to execute instructions to perform all or part of the steps of the methods described above. Additionally, the first processing component 802 may include one or more modules to facilitate interaction between the first processing component 802 and other components.
  • the first processing component 802 may include a multimedia module to facilitate interaction between the multimedia component 808 and the first processing component 802.
  • the first memory 804 is configured to store various types of data to support operations at the electronic device 800 . Examples of such data include instructions for any application or method operating on electronic device 800, contact data, phonebook data, messages, pictures, videos, and the like.
  • the first memory 804 can be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as Static Random-Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disc.
  • the first power supply component 806 provides power to various components of the electronic device 800 .
  • the first power supply component 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power to the electronic device 800 .
  • Multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and the user.
  • the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user.
  • the touch panel includes one or more touch sensors to sense touch, swipe, and gestures on the touch panel. The touch sensor may not only sense the boundaries of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe action.
  • the multimedia component 808 includes a front-facing camera and/or a rear-facing camera. When the electronic device 800 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each of the front and rear cameras can be a fixed optical lens system or have focal length and optical zoom capability.
  • Audio component 810 is configured to output and/or input audio signals.
  • audio component 810 includes a microphone (MIC) that is configured to receive external audio signals when electronic device 800 is in operating modes, such as calling mode, recording mode, and voice recognition mode.
  • the received audio signal may be further stored in the first memory 804 or transmitted via the communication component 816.
  • audio component 810 also includes a speaker for outputting audio signals.
  • the first I/O interface 812 provides an interface between the first processing component 802 and a peripheral interface module, and the above-mentioned peripheral interface module may be a keyboard, a click wheel, a button, or the like. These buttons may include, but are not limited to: home button, volume buttons, start button, and lock button.
  • Sensor assembly 814 includes one or more sensors for providing status assessments of various aspects of the electronic device 800.
  • the sensor assembly 814 can detect the on/off state of the electronic device 800 and the relative positioning of components, such as the display and keypad of the electronic device 800; the sensor assembly 814 can also detect a change in the position of the electronic device 800 or of one of its components, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and changes in the temperature of the electronic device 800.
  • Sensor assembly 814 may include a proximity sensor configured to detect the presence of nearby objects in the absence of any physical contact.
  • Sensor assembly 814 may also include a light sensor, such as a Complementary Metal-Oxide-Semiconductor (CMOS) or Charge Coupled Device (CCD) image sensor, for use in imaging applications.
  • the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
  • Communication component 816 is configured to facilitate wired or wireless communication between electronic device 800 and other devices.
  • the electronic device 800 can access a wireless network based on a communication standard, such as WiFi, second-generation mobile communication technology (2G), or third-generation mobile communication technology (3G), or a combination thereof.
  • the communication component 816 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel.
  • the communication component 816 also includes a Near Field Communication (NFC) module to facilitate short-range communication.
  • the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wide Band (UWB) technology, Bluetooth (BT) technology, or other technologies.
  • the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, and is configured to perform any of the above methods.
  • a non-volatile computer-readable storage medium is also provided, such as the first memory 804 including computer program instructions that can be executed by the processor 820 of the electronic device 800 to complete any of the above methods.
  • the electronic device 1900 may be provided as a server. The electronic device 1900 may also include a second power supply assembly 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958.
  • the electronic device 1900 can operate based on an operating system stored in the second memory 1932, such as a Microsoft server operating system (Windows ServerTM), a graphical user interface-based operating system (Mac OS XTM) introduced by Apple, a multi-user multi-process computer operating system (UnixTM), Free and Open Source Unix-like Operating System (LinuxTM), Open Source Unix-like Operating System (FreeBSDTM) or the like.
  • a non-volatile computer-readable storage medium is also provided, such as a second memory 1932 comprising computer program instructions executable by the second processing component 1922 of the electronic device 1900 to complete any of the above methods.
  • a computer-readable storage medium may be a tangible device that can hold and store instructions for use by the instruction execution device.
  • the computer-readable storage medium may be, for example, but not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
  • Computer-readable storage media include: portable computer disks, hard disks, random access memory (RAM), ROM, EPROM or flash memory, SRAM, portable Compact Disc Read-Only Memory (CD-ROM), Digital Video Discs (DVDs), memory sticks, floppy disks, and mechanical encoding devices such as punched cards or raised structures in grooves on which instructions are stored, as well as any suitable combination of the above.
  • Computer-readable storage media, as used herein, are not to be construed as transient signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (e.g., light pulses through fiber-optic cables), or electrical signals transmitted through electrical wires.
  • the computer readable program instructions described herein may be downloaded to various computing/processing devices from a computer readable storage medium, or to an external computer or external storage device over a network such as the Internet, a local area network, a wide area network, and/or a wireless network.
  • the network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers.
  • a network adapter card or network interface in each computing/processing device receives computer-readable program instructions from a network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in each computing/processing device.
  • the computer program instructions used to perform the operations of the embodiments of the present application may be assembly instructions, Instruction Set Architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" language or similar programming languages.
  • the computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server.
  • the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or Wide Area Network (WAN), or may be connected to an external computer (e.g., via the Internet using an Internet service provider).
  • electronic circuits, such as programmable logic circuits, FPGAs, or Programmable Logic Arrays (PLAs), can be personalized with state information of the computer-readable program instructions, and these electronic circuits can execute the computer-readable program instructions to implement various aspects of the present application.
  • These computer-readable program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, or another programmable data processing apparatus to produce a machine, such that the instructions, when executed by the processor of the computer or other programmable data processing apparatus, produce means for implementing the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
  • These computer-readable program instructions may also be stored in a computer-readable storage medium; these instructions cause a computer, a programmable data processing apparatus, and/or other equipment to operate in a specific manner, so that the computer-readable medium on which the instructions are stored comprises an article of manufacture that includes instructions implementing various aspects of the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
  • Computer-readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other equipment so that a series of operational steps is performed on the computer, other programmable data processing apparatus, or other equipment to produce a computer-implemented process, such that the instructions executed on the computer, other programmable data processing apparatus, or other equipment implement the functions/acts specified in one or more blocks of the flowcharts and/or structural diagrams.
  • each block in a flowchart or block diagram may represent a module, program segment, or portion of instructions that contains one or more executable instructions for implementing the specified logical functions.
  • the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
  • the computer program product can be specifically implemented by hardware, software or a combination thereof.
  • the computer program product is embodied as a computer storage medium, and in another optional embodiment, the computer program product is embodied as a software product, such as a Software Development Kit (SDK).
  • the present application relates to an image processing method and apparatus, an electronic device, a computer storage medium, and a computer program, which can be applied to the processing of plain chest radiographs.
  • the method includes: acquiring an image to be processed, where the image to be processed includes a chest radiograph image;
  • performing disease classification processing on the image to be processed to determine a disease category corresponding to the image to be processed;
  • and performing image processing on the image to be processed according to the image processing manner corresponding to the disease category, to obtain a processing result of the image to be processed.
  • the embodiments of the present application can improve the accuracy of disease detection.
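  • The classify-then-dispatch flow summarized above can be sketched in a few lines of Python. Everything in this sketch is illustrative: the category names and the classifier, segmenters and subtype_classifier callables are hypothetical placeholders, not components defined in this application.

    import numpy as np

    def process_chest_radiograph(image: np.ndarray, classifier, segmenters, subtype_classifier):
        """Classify the image once, then apply the processing mode tied to each detected category."""
        categories = classifier(image)  # e.g. a set such as {"pneumothorax", "rib_abnormality"}
        results = {}
        if "pneumothorax" in categories:
            # first segmentation: pneumothorax region and corresponding lung-field region
            results["pneumothorax"] = segmenters["pneumothorax"](image)
        if "rib_abnormality" in categories:
            # second segmentation: per-rib regions used to localise the abnormal rib
            results["ribs"] = segmenters["ribs"](image)
        if "pulmonary_edema" in categories:
            # third category: sub-type classification instead of segmentation
            results["edema_subtype"] = subtype_classifier(image)
        return results

  • The only point of the sketch is the routing: one classification pass decides which disease-specific processing (first or second segmentation, or sub-type classification) is applied to the image to be processed.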

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Molecular Biology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

An image processing method and apparatus, an electronic device, a computer storage medium, and a computer program, applicable to the processing of plain chest radiographs. The method includes: acquiring an image to be processed (S11), where the image to be processed includes a chest radiograph image; performing disease classification processing on the image to be processed to determine a disease category corresponding to the image to be processed (S12); and performing image processing on the image to be processed according to an image processing manner corresponding to the disease category, to obtain a processing result of the image to be processed (S13). The method can improve the accuracy of disease detection.

Description

图像处理方法及装置、电子设备、存储介质和程序
相关申请的交叉引用
本申请基于申请号为202011461030.4、申请日为2020年12月11日的中国专利申请提出,并要求该中国专利申请的优先权,该中国专利申请的全部内容在此引入本申请作为参考。
技术领域
本申请涉及计算机技术领域,尤其涉及一种图像处理方法及装置、电子设备、计算机存储介质和计算机程序。
背景技术
胸片是筛查肺部多种疾病和骨折的重要依据。对于放射科医生来说,诊断胸片比较费时,属于大量的重复性工作。一些特征不太明显的疾病往往要求放射科医生有丰富的经验,且受医疗设备水平的影响,胸片的成像质量可能不高,因而影响疾病诊断。因此,传统医生诊断疾病的方式容易受到医疗设备水平以及医生个人经验水平的影响,可能导致误诊和漏诊,诊断效率低。
发明内容
本申请提出了一种图像处理方法及装置、电子设备、计算机存储介质和计算机程序。
本申请实施例提供了一种图像处理方法,包括:获取待处理图像,所述待处理图像包括胸片图像;对所述待处理图像进行疾病分类处理,确定与所述待处理图像对应的疾病类别;根据与所述疾病类别对应的图像处理方式,对所述待处理图像进行图像处理,得到所述待处理图像的处理结果。
在本申请的一些实施例中,所述疾病类别包括第一疾病类别,所述第一疾病类别指示所述待处理图像中存在第一对象的异常目标,根据与所述疾病类别对应的图像处理方式,对所述待处理图像进行图像处理,得到所述待处理图像的处理结果,包括:在所述疾病类别为第一疾病类别的情况下,对所述待处理图像进行第一分割处理,确定所述待处理图像中第一对象的异常目标所在的第一图像区域,以及相应的正常目标所在的第二图像区域;根据所述第一图像区域及所述第二图像区域,确定所述待处理图像的处理结果。
在本申请的一些实施例中,根据所述第一图像区域及所述第二图像区域,确定所述待处理图像的处理结果,包括:根据所述第一图像区域的面积与所述第二图像区域的面积的比值,确定所述第一对象的异常目标的第一分析结果,所述处理结果包括所述第一分析结果和所述第一图像区域。
在本申请的一些实施例中,所述疾病类别包括第二疾病类别,所述第二疾病类别用于指示所述待处理图像中存在第二对象的异常目标,根据与所述疾病类别对应的图像处理方式,对所述待处理图像进行图像处理,得到所述待处理图像的处理结果,包括:在所述疾病类别为第二疾病类别的情况下,对所述待处理图像进行第二分割处理,确定所述待处理图像中第二对象的多个目标所在的第三图像区域,所述处理结果包括所述第三图像区域。
在本申请的一些实施例中,所述疾病类别包括第三疾病类别,所述第三疾病类别包括多个子类别,根据与所述疾病类别对应的图像处理方式,对所述待处理图像进行图像处理,得到所述待处理图像的处理结果,包括:在所述疾病类别为第三疾病类别的情况下,对所述待处理图像进行子类别分类处理,确定与所述待处理图像对应的疾病子类别,所述处理结果包括所述疾病子类别。
在本申请的一些实施例中,所述方法还包括:对所述待处理图像进行关键点检测处理,确定所述待处理图像中多个对象的目标关键点的位置信息;根据各类别的目标关键 点的位置信息,确定各类别的目标的第二分析结果。
在本申请的一些实施例中,所述多个对象的目标关键点包括心脏关键点、胸廓关键点及横膈关键点,所述根据各类别的目标关键点的位置信息,确定各类别的目标的第二分析结果,包括:根据所述心脏关键点的位置信息及所述胸廓关键点中的第一胸廓关键点的位置信息,确定心胸比信息;根据所述胸廓关键点中的第二胸廓关键点的位置信息及所述横膈关键点的位置信息,确定肋膈角信息;其中,所述第一胸廓关键点与所述第二胸廓关键点不同,所述第二分析结果包括所述心胸比信息和/或所述肋膈角信息。
在本申请的一些实施例中,所述方法还包括:对所述待处理图像进行第三分割处理,确定所述待处理图像中第三对象的目标所在的第四图像区域;根据所述第四图像区域,确定所述第二对象的目标的第三分析结果。
在本申请的一些实施例中,所述根据所述第四图像区域,确定所述第二对象的目标的第三分析结果,包括:根据所述第四图像区域,确定所述第三对象的目标对应的中心线;根据所述中心线,确定所述第三对象的目标对应的侧弯角度;其中,所述第三对象包括脊柱。
在本申请的一些实施例中,所述第一疾病类别包括气胸疾病,所述第一对象包括肺部,所述第一对象的异常目标包括气胸区域,所述相应的正常目标包括肺野区域;所述第二疾病类别包括肋骨异常,所述第二对象包括肋骨,所述第二对象的异常目标包括异常的肋骨;所述第三疾病类别包括肺水肿、心影增大、纵隔病变中的任意一种。
本申请实施例提供了一种图像处理装置,包括:获取模块,配置为获取待处理图像,所述待处理图像包括胸片图像;第一确定模块,配置为对所述待处理图像进行疾病分类处理,确定与所述待处理图像对应的疾病类别;处理模块,配置为根据与所述疾病类别对应的图像处理方式,对所述待处理图像进行图像处理,得到所述待处理图像的处理结果。
在本申请的一些实施例中,所述疾病类别包括第一疾病类别,所述第一疾病类别指示所述待处理图像中存在第一对象的异常目标,所述处理模块,包括:第一分割子模块,配置为在所述疾病类别为第一疾病类别的情况下,对所述待处理图像进行第一分割处理,确定所述待处理图像中第一对象的异常目标所在的第一图像区域,以及相应的正常目标所在的第二图像区域;结果确定子模块,配置为根据所述第一图像区域及所述第二图像区域,确定所述待处理图像的处理结果。
在本申请的一些实施例中,所述结果确定子模块,具体配置为根据所述第一图像区域的面积与所述第二图像区域的面积的比值,确定所述第一对象的异常目标的第一分析结果,所述处理结果包括所述第一分析结果和所述第一图像区域。
在本申请的一些实施例中,所述疾病类别包括第二疾病类别,所述第二疾病类别用于指示所述待处理图像中存在第二对象的异常目标,所述处理模块,包括:第二分割子模块,配置为在所述疾病类别为第二疾病类别的情况下,对所述待处理图像进行第二分割处理,确定所述待处理图像中第二对象的多个目标所在的第三图像区域,所述处理结果包括所述第三图像区域。
在本申请的一些实施例中,所述疾病类别包括第三疾病类别,所述第三疾病类别包括多个子类别,所述处理模块,包括:分类子模块,配置为在所述疾病类别为第三疾病类别的情况下,对所述待处理图像进行子类别分类处理,确定与所述待处理图像对应的疾病子类别,所述处理结果包括所述疾病子类别。
在本申请的一些实施例中,所述装置还包括:关键点检测模块,配置为对所述待处理图像进行关键点检测处理,确定所述待处理图像中多个对象的目标关键点的位置信息;第二确定模块,配置为根据各类别的目标关键点的位置信息,确定各类别的目标的第二分析结果。
在本申请的一些实施例中,所述多个对象的目标关键点包括心脏关键点、胸廓关键 点及横膈关键点,所述第二确定模块,包括:心胸比确定子模块,配置为根据所述心脏关键点的位置信息及所述胸廓关键点中的第一胸廓关键点的位置信息,确定心胸比信息;肋膈角确定子模块,配置为根据所述胸廓关键点中的第二胸廓关键点的位置信息及所述横膈关键点的位置信息,确定肋膈角信息;其中,所述第一胸廓关键点与所述第二胸廓关键点不同,所述第二分析结果包括所述心胸比信息和/或所述肋膈角信息。
在本申请的一些实施例中,所述装置还包括:分割模块,配置为对所述待处理图像进行第三分割处理,确定所述待处理图像中第三对象的目标所在的第四图像区域;第三确定模块,配置为第二根据所述第四图像区域,确定所述第二对象的目标的第三分析结果。
在本申请的一些实施例中,所述第三确定模块,包括:中心线确定子模块,配置为根据所述第四图像区域,确定所述第三对象的目标对应的中心线;侧弯角确定子模块,配置为根据所述中心线,确定所述第三对象的目标对应的侧弯角度;其中,所述第三对象包括脊柱。
在本申请的一些实施例中,所述第一疾病类别包括气胸疾病,所述第一对象包括肺部,所述第一对象的异常目标包括气胸区域,所述相应的正常目标包括肺野区域;所述第二疾病类别包括肋骨异常,所述第二对象包括肋骨,所述第二对象的异常目标包括异常的肋骨;所述第三疾病类别包括肺水肿、心影增大、纵隔病变中的任意一种。
本申请实施例提供了一种电子设备,包括:处理器;配置为存储处理器可执行指令的存储器;其中,所述处理器被配置为调用所述存储器存储的指令,以执行上述任意一种方法。
本申请实施例提供了一种计算机可读存储介质,其上存储有计算机程序指令,所述计算机程序指令被处理器执行时实现上述任意一种方法。
本申请实施例还提供了一种计算机程序,包括计算机可读代码,当所述计算机可读代码在电子设备中运行时,所述电子设备中的处理器执行时实现上述任意一种方法。
在本申请实施例中,针对待处理图像对应的疾病种类,进行对应的图像处理,得到对应的处理结果,能够针对不同疾病输出对应多个处理结果,从而提高疾病检测的准确度,以及辅助医生提高疾病诊断的效率。
应当理解的是,以上的一般描述和后文的细节描述仅是示例性和解释性的,而非限制本申请。根据下面参考附图对示例性实施例的详细说明,本申请的其它特征及方面将变得清楚。
附图说明
此处的附图被并入说明书中并构成本说明书的一部分,这些附图示出了符合本申请的实施例,并与说明书一起用于说明本申请实施例的技术方案。
图1a为本申请实施例的一个应用场景的示意图;
图1b为本申请实施例提供的一种图像处理方法的流程图;
图2为本申请实施例提供的一种基于胸片图像的气胸分割结果的示意图;
图3为本申请实施例提供的一种肋骨分割的处理结果的示意图;
图4为本申请实施例提供的一种基于胸片图像的目标关键点示意图;
图5为本申请实施例提供的一种心胸示意图;
图6为本申请实施例提供的一种肋膈角示意图;
图7为本申请实施例提供的一种脊柱的侧弯角度的测量示意图;
图8为本申请实施例提供的一种提取的脊柱中心线的示意图;
图9为本申请实施例提供的一种基于中心线确定脊柱的侧弯角度的示意图;
图10为本申请实施例提供的一种图像处理方法的流程示意图;
图11为本申请实施例提供的图像处理装置的结构示意图;
图12为本申请实施例提供的一种电子设备的结构示意图;
图13为本申请实施例提供的一种电子设备的结构示意图。
具体实施方式
以下将参考附图详细说明本申请的各种示例性实施例、特征和方面。附图中相同的附图标记表示功能相同或相似的元件。尽管在附图中示出了实施例的各种方面,但是除非特别指出,不必按比例绘制附图。
在这里专用的词“示例性”意为“用作例子、实施例或说明性”。这里作为“示例性”所说明的任何实施例不必解释为优于或好于其它实施例。
本文中术语“和/或”,仅仅是一种描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。另外,本文中术语“至少一种”表示多种中的任意一种或多种中的至少两种的任意组合,例如,包括A、B、C中的至少一种,可以表示包括从A、B和C构成的集合中选择的任意一个或多个元素。
另外,为了更好地说明本申请,在下文的具体实施方式中给出了众多的具体细节。本领域技术人员应当理解,没有某些具体细节,本申请同样可以实施。在一些实例中,对于本领域技术人员熟知的方法、手段、元件和电路未作详细描述,以便于凸显本申请的主旨。
胸片图像是医生筛查肺部多种疾病和骨折的重要依据。但是通常医生诊断直接数字平板X线成像(Digital Radiography,DR)图像,需要大量的临床经验,由于DR胸片可以看到20多种疾病异常和多个部位骨折,同时DR胸片是投影成像,成像质量不高以及多个疾病在征象比较难判断,所以医生诊断是会出现一些主观偏差,同时诊断DR胸片时需要测量计算多个参数来评估疾病的严重程度。随着深度学习技术的发展,推动了DR胸片图像辅助诊断的进步,并且提高了诊断的精度和效率。同时影像科医生每天将会诊断至少几百个DR胸片,但是真正阳性的案例出现的概率远低于10%。所以一个高效率的辅助诊断技术,不仅可以检测出可疑的阳性病例,并且可以排除大量阴性案例,节约医生的阅片时间。同时能精确检出病灶的位置、心胸比和气胸压缩比等多个参数,可以大大提高医生诊断阳性的效率。
在相关技术中,肺炎检测是通过输入DR胸片图像进行检测,只检测出肺炎位置,种类比较单一,很难满足实际的临床上复杂场景和多病检出的需求。针对气胸这一单类别比赛进行定量分析,同样是不满足临床的实际需求的。另外也有在Xray-14和X-pert这两个数据集做疾病分类,大概有十几种疾病。比如,在xray14数据集上进行14种疾病的分类,并通过热图(heatmap)进行粗略的定位;或通过GraphNet构建14种疾病的关系,从而提高分类的准确性。但这些多疾病分类只能作为简单的筛查,并不能满足医生的实际需求。
针对上述技术问题,本申请实施例提出了一种图像处理方法、装置、电子设备、计算机存储介质和计算机程序。
下面结合附图对本申请的应用场景进行说明。图1a为本申请实施例的一个应用场景的示意图,如图1a所示,胸片图像1为待处理图像,可以将胸片图像1输入至上述图像处理装置2中,对胸片图像1进行疾病分类处理,确定与胸片图像1对应的疾病类别;再根据与疾病类别对应的图像处理方式,对胸片图像1进行图像处理,得到胸片图像1的处理结果。进而,根据该处理结果可以确定与气胸疾病相关的信息,提高对气胸疾病的诊断效率。需要说明的是,图1a所示的场景仅仅是本申请实施例的一个示例性场景,本申请对具体的应用场景不作限制。
图1b为本申请实施例提供的一种图像处理方法的流程图,如图1b所示,所述图像处理方法包括:
在步骤S11中,获取待处理图像,所述待处理图像包括胸片图像;
在步骤S12中,对待处理图像进行疾病分类处理,确定与待处理图像对应的疾病类别;
在步骤S13中,根据与疾病类别对应的图像处理方式,对待处理图像进行图像处理,得到待处理图像的处理结果。
在本申请的一些实施例中,所述图像处理方法可以由终端设备或服务器等电子设备执行,终端设备可以为用户设备(User Equipment,UE)、移动设备、用户终端、终端、蜂窝电话、无绳电话、个人数字助理(Personal Digital Assistant,PDA)、手持设备、计算设备、车载设备、可穿戴设备等,所述方法可以通过处理器调用存储器中存储的计算机可读指令的方式来实现。或者,可通过服务器执行所述方法。
在本申请的一些实施例中,可在步骤S11中获取待处理图像。该待处理图像包括胸片图像,可以是通过X光成像技术所获得的人体胸部区域的图像数据,例如,胸片图像可以是X光片、计算机断层扫描(Computed Tomography,CT)图像、DR图像等。本申请实施例对胸片图像的具体图像类型不做限制。
在本申请的一些实施例中,在步骤S12中,对待处理图像进行疾病分类处理,可以是通过预设的目标检测算法,对待处理图像进行多目标检测,进而根据多目标检测结果,确定疾病类别。其中,多目标检测可以是检测各种疾病的多个疾病特征,以根据检测出的疾病特征,确定对应的疾病类别。
在本申请的一些实施例中,目标检测算法例如可以采用但不限于:多尺度融合的目标检测算法、RetinaNet算法、Faster R-CNN算法、YOLO(You Only Look Once)算法(一种实时的目标检测算法)、单镜头多盒探测器(Single Shot multiBox Detector,SSD)算法等。对于采用何种目标检测算法,本申请的实施例不做限制。
在本申请的一些实施例中,可以将大量已标注病灶区域和对应疾病类别的胸片图像作为样本数据,对预设的目标检测算法进行训练,进而将训练得到的目标检测算法,应用于对待处理图像的疾病分类处理。这里,本领域技术人员可根据实际需求设定与采用的目标检测算法对应的训练方式,本申请实施例对于目标检测算法的训练方式不做限制。
在本申请的一些实施例中,在步骤S12中,疾病类别可以是指通过胸片图像可检测出疾病的种类,例如,疾病类别可以包括:气胸、骨折、肺水肿、心影增大、纵隔病变等。对于疾病的数量和种类,本申请实施例不做限制。
在本申请的一些实施例中,在步骤S13中,图像处理方式可以包括但不限于:图像分割、图像分类等方式。与所述疾病类别对应的图像处理方式,可以是根据实际需求预先设置的与不同疾病类别对应的图像处理方式。基于不同的图像处理方式,对待处理图像进行对应的图像处理。
举例来说,若步骤S12检测出存在气胸,则可以设置气胸对应的图像处理方式为图像分割,从而对待处理图像进行图像分割处理,得到分割出气胸区域;若步骤S12检测出存在肺水肿,则可设置与肺水肿对应的图像处理方式为图像分类,从而对待处理图像进行图像分类处理,得到肺水肿的亚型分类。其中,肺水肿的亚型分类例如可以是心源性肺水肿、非心源性肺水肿等。
在本申请的一些实施例中,尽管以作为示例介绍了如上图像处理方式的设置,但本领域技术人员能够理解,本申请实施例应不限于此。事实上,用户完全可根据实际应用场景灵活设置与疾病类别对应的图像处理方式。
在本申请实施例中,针对与待处理图像对应的疾病种类,进行对应的图像处理,得到对应的处理结果,能够针对不同疾病输出对应多个处理结果,从而提高疾病检测的准确度,以及辅助医生提高疾病诊断的效率。
在本申请的一些实施例中,疾病类别可以包括第一疾病类别,第一疾病类别指示待处理图像中存在第一对象的异常目标。在步骤S13中,根据与疾病类别对应的图像处理方式,对待处理图像进行图像处理,得到待处理图像的处理结果,可以包括:
在疾病类别为第一疾病类别的情况下,对待处理图像进行第一分割处理,确定待处理图像中第一对象的异常目标所在的第一图像区域,以及相应的正常目标所在的第二图像区域;根据第一图像区域及第二图像区域,确定待处理图像的处理结果。
在本申请的一些实施例中,第一疾病类别可以包括气胸疾病,第一对象可以包括肺部,第一对象的异常目标可以包括气胸区域,相应的正常目标可以包括肺野区域。
在本申请的一些实施例中,第一对象的异常目标所在的第一图像区域可以是气胸区域在待处理图像中的所在区域。相应的正常目标所在的第二图像区域可以是肺野区域在待处理图像中的所在区域。
图2为本申请实施例提供的一种基于胸片图像的气胸分割结果的示意图。如图2所示,气胸区域可以是第一图像区域,肺野区域可以是第二图像区域。
在本申请的一些实施例中,由于可能仅左侧胸腔存在气胸区域,或仅右侧胸腔存在气胸区域,或两侧胸腔均存在气胸区域,可以在一侧胸腔存在气胸区域的情况下,分割该侧胸腔内对应的肺野区域;在两侧胸腔均存在气胸区域的情况下,分割两侧胸腔内对应的肺野区域,从而得到相应的正常目标所在的第二图像区域。
在本申请的一些实施例中,在对待处理图像进行第一分割处理时,可以是采用分割算法实现对待处理图像的图像分割处理。其中,分割算法可以采用但不限于:基于各类深度学习的分割算法(例如,VGGNet深度网络、ResNet深度网络)、基于边缘检测的分割算法(例如,罗伯茨Roberts算法、索贝尔Sobel算法)、基于主动轮廓模型的分割算法(例如,蛇形Snake算法、水平集level set算法)等。本申请的实施例对于采用何种分割算法不做限制。
在本申请的一些实施例中,可以将大量已标注气胸区域和肺野区域的胸片图像作为样本数据,对采用的分割算法进行训练,进而将训练得到的分割算法,应用于对待处理图像的第一分割处理。可以理解的是,本领域技术人员可根据实际需求设定与采用的分割算法对应的训练方式,本申请实施例对于分割算法的训练方式不做限制。
在本申请实施例中,通过在疾病类别为第一疾病类别的情况下,分割出第一图像区域和第二图像区域,再根据第一图像区域和第二图像区域确定处理结果,能够实现针对气胸疾病,分割出气胸区域和肺野区域,从而能够提供气胸疾病相关的信息,提高对气胸疾病的诊断效率。
在本申请的一些实施例中,根据第一图像区域及第二图像区域,确定待处理图像的处理结果,可以包括:根据第一图像区域的面积与第二图像区域的面积的比值,确定第一对象的异常目标的第一分析结果,处理结果包括第一分析结果和第一图像区域。
在本申请的一些实施例中,第一图像区域的面积可以是根据第一图像区域内像素点的坐标计算出的面积,还可以是根据第一图像区域内像素点的数量计算出的面积。第二图像区域的面积可以采用与第一图像区域的面积相同的计算方式。本申请的实施例对于第一图像区域的面积和第二图像区域的面积的计算方式不做限定。
在本申请的一些实施例中,如上文所述第一图像区域可以是气胸区域,第二图像区域可以是肺野区域。在分割出气胸区域和肺野区域后,基于分割出的气胸区域的面积和肺野区域的面积的比值,确定气胸疾病的第一分析结果。其中,气胸疾病的第一分析结果例如可以是肺压缩比参数,通过肺压缩比参数可以确定气胸疾病的严重程度。
在本申请实施例中,通过确定第一分析结果,处理结果包括第一分析结果和第一图像区域,可以辅助医生确定气胸疾病的严重程度,减少医生手动测量的压力,提高疾病诊断效率。
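As a minimal sketch of the area-ratio computation described above (assuming the first segmentation yields binary masks for the pneumothorax region and the corresponding lung-field region, and that area is measured by counting mask pixels, which is only one of the two area measures mentioned):

    import numpy as np

    def lung_compression_ratio(pneumothorax_mask: np.ndarray, lung_field_mask: np.ndarray) -> float:
        """Ratio of pneumothorax area to lung-field area from binary segmentation masks."""
        pneumo_area = float(np.count_nonzero(pneumothorax_mask))
        lung_area = float(np.count_nonzero(lung_field_mask))
        if lung_area == 0.0:
            raise ValueError("empty lung-field mask")
        return pneumo_area / lung_area

How the resulting ratio is mapped to a clinical severity grade is not specified here and would be an assumption.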
在本申请的一些实施例中,疾病类别可以包括第二疾病类别,第二疾病类别用于指示待处理图像中存在第二对象的异常目标,在步骤S13中,根据与疾病类别对应的图像处理方式,对待处理图像进行图像处理,得到待处理图像的处理结果,可以包括:
在疾病类别为第二疾病类别的情况下,对待处理图像进行第二分割处理,确定待处 理图像中第二对象的多个目标所在的第三图像区域,处理结果包括第三图像区域。
在本申请的一些实施例中,第二疾病类别可以包括肋骨异常,第二对象可以包括肋骨,第二对象的异常目标可以包括异常的肋骨。其中,肋骨异常可以是指肋骨在外形上发生的异常,例如,肋骨骨折、肋骨变形、肋骨错位等;异常的肋骨可以是指在外形上发生的异常的肋骨,例如,一根或多根骨折的肋骨。
在本申请的一些实施例中,异常的肋骨可能是人体全部肋骨中的部分肋骨,为便于定位到异常的肋骨的具体位置,第三图像区域可以是多根肋骨区域。则确定待处理图像中第二对象的多个目标所在的第三图像区域,相当于确定出多根肋骨的肋骨区域。
图3为本申请实施例提供的一种肋骨分割的处理结果的示意图。如图3所示的处理结果,可以清楚的知晓异常肋骨的具体位置。
在本申请的一些实施例中,在对待处理图像进行第二分割处理时,可以是采用分割算法实现对待处理图像的图像分割处理。其中,分割算法可以采用但不限于:基于各类深度学习的分割算法(例如,VGGNet深度网络、ResNet深度网络)、基于边缘检测的分割算法(例如,Roberts算法、Sobel算法)、基于主动轮廓模型的分割算法(例如,Snake算法、level set算法)等。本申请的实施例对于采用何种分割算法不做限制。
在本申请的一些实施例中,第二分割处理可以采用与第一分割处理相同的分割算法,也可以是不同的分割算法。本领域技术人员可根据实际需求选定采用何种分割算法进行第二分割处理,只要可以实现对肋骨的分割即可。
在本申请的一些实施例中,可以将大量存在肋骨异常的胸片图像作为样本数据,对第二分割处理采用分割算法进行训练,进而将训练得到的该分割算法,应用于对待处理图像的第二分割处理。可以理解的是,本领域技术人员可根据实际需求设定与采用的分割算法对应的训练方式,本申请实施例对于分割算法的训练方式不做限制。
在本申请实施例中,通过在疾病类别为第二疾病类别的情况下,分割出第三图像区域,处理结果包括第三区域,能够实现针对肋骨异常疾病,分割出肋骨区域,从而提高对肋骨异常的诊断效率。
在本申请的一些实施例中,疾病类别可以包括第三疾病类别,第三疾病类别包括多个子类别,在步骤S13中,根据与疾病类别对应的图像处理方式,对待处理图像进行图像处理,得到待处理图像的处理结果,可以包括:
在疾病类别为第三疾病类别的情况下,对待处理图像进行子类别分类处理,确定与待处理图像对应的疾病子类别,处理结果包括疾病子类别。
在本申请的一些实施例中,第三疾病类别可以是指包含亚型分类的疾病类别,疾病子类别可以是指疾病的亚型分类。其中,第三疾病类别可以具体包括肺水肿、心影增大、纵隔病变中的任意一种。
举例来说,肺水肿的疾病子类别(也即亚型分类)可以包括心源性肺水肿、非心源性肺水肿等,心影增大的疾病子类别可以包括心包积液、心肌炎、心肌肥厚等,纵隔病变的疾病子类别可以包括纵隔气肿、纵隔淋巴结肿等。
需要说明的是,尽管以作为示例介绍了如上第三疾病类别包含的疾病种类,但本领域技术人员能够理解,本申请应不限于此。事实上,用户完全可根据实际应用场景灵活设置第三疾病类别包含的疾病种类,只要该疾病种类包含对应的亚型分类即可。
在本申请的一些实施例中,对待处理图像进行子类别分类处理,确定与待处理图像对应的疾病子类别,可以采用图像分类算法实现子类别分类处理。其中,图像分类算法例如可以包括但不限于:基于深度学习技术的图像分类算法(例如,卷积神经网络、多层前馈神经网络)、K最近邻(K-NearestNeighbor,KNN)分类算法、支持向量机(Support Vector Machine,SVM)算法等。对于采用何种图像分类算法,本申请实施例不做限制。
在本申请的一些实施例中,可以将大量标注亚型分类疾病的胸片图像作为样本数据,对子类别分类处理采用图像分类算法进行训练,进而将训练得到的图像分类算法,应用 于对待处理图像的子类别分类处理。可以理解的是,本领域技术人员可根据实际需求设定与采用的图像分类算法对应的训练方式,本申请实施例对于图像分类算法的训练方式不做限制。
在本申请实施例中,通过在疾病类别为第三疾病类别的情况下,确定疾病子类别,处理结果包括疾病子类别,可是实现对存在亚型分类疾病进行细分类,从而给予医生更详细的疾病信息,提高疾病的诊断效率。
在本申请的一些实施例中,考虑到医生在诊断是否存在某些疾病,或者诊断某些疾病的严重程度时,需要基于一些量化指标进行综合诊断,例如血液化验单中的各种指标(如血小板浓度、血氧饱和度等)。针对胸片图像的量化指标可以包括:脊柱的侧弯角度、心胸比、肋膈角等。
在本申请的一些实施例中,图像处理方法还可以包括:对待处理图像进行关键点检测处理,确定待处理图像中多个对象的目标关键点的位置信息;根据各类别的目标关键点的位置信息,确定各类别的目标的第二分析结果。
在本申请的一些实施例中,多个对象可以包括心脏、胸廓和横膈等,多个对象的目标关键点可以包括心脏关键点、胸廓关键点及横膈关键点。
图4为本申请实施例提供的一种基于胸片图像的目标关键点示意图,图中示出的1-6点可以是胸廓关键点,7和8点可以是横膈关键点,9-12可以是心脏关键点。
在本申请的一些实施例中,第二分析结果可以包括针对胸片图像的定量分析指标的分析值,例如,第二分析结果可以包括:心胸比=0.3。
在本申请的一些实施例中,多个对象的目标关键点的位置信息可以是目标关键点在待处理图像中的坐标信息。根据目标关键点的坐标信息,可以针对不同的定量分析指标得到对应的第二分析结果。
在本申请的一些实施例中,对待处理图像进行关键点检测处理,可以是通过关键点检测算法,实现确定待处理图像中多个对象的目标关键点的位置信息。其中,关键点检测算法例如可以包括但不限于:可微分的关键点检测算法(例如,Roberts算法、Sobel算法等)、基于深度学习技术的关键点检测算法(例如,卷积姿态机(Convolutional Pose Machine,CPM)算法、堆叠沙漏(stacked hourglass)算法)。
在本申请的一些实施例中,可以基于热力图(heatmap)约束目标关键点位置,即,对每个目标关键点生成一个heatmap。在进行关键点检测处理时,可以将每个关键点对应的heatmap作为检测对象,这样在执行关键点检测时,可以提高关键点检测的鲁棒性和泛化性能。
在本申请实施例中,通过确定多个对象的目标关键点的位置信息,再根据目标关键点的位置信息确定第二分析结果,可以为疾病诊断提供更详细的分析信息,提高疾病诊断效率。
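A minimal sketch of the heatmap constraint mentioned above: during training each target keypoint is represented by a 2-D Gaussian heatmap, and at inference the predicted position is decoded as the location of the heatmap maximum. The Gaussian width sigma is an assumed hyper-parameter, not a value given in this application.

    import numpy as np

    def gaussian_heatmap(shape, center, sigma=4.0):
        """Training target for one keypoint: a 2-D Gaussian centred on its (row, col) location."""
        h, w = shape
        ys, xs = np.mgrid[0:h, 0:w]
        cy, cx = center
        return np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2.0 * sigma ** 2))

    def decode_heatmap(heatmap):
        """Predicted keypoint position: (row, col) of the heatmap maximum."""
        return np.unravel_index(np.argmax(heatmap), heatmap.shape)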
在本申请的一些实施例中,如上文所述,多个对象的目标关键点可以包括心脏关键点、胸廓关键点及横膈关键点。所述根据各类别的目标关键点的位置信息,确定各类别的目标的第二分析结果,可以包括:
根据心脏关键点及胸廓关键点中的第一胸廓关键点,确定心胸比信息;根据胸廓关键点中的第二胸廓关键点及横膈关键点,确定肋膈角信息;其中,第一胸廓关键点与第二胸廓关键点不同,第二分析结果包括心胸比信息和/或肋膈角信息。
在本申请的一些实施例中,心脏关键点及胸廓关键点中的第一胸廓关键点可以是用于计算心胸比的关键点。胸廓关键点中的第二胸廓关键点及横膈关键点可以是用于计算肋膈角的关键点。
图5为本申请实施例提供的一种心胸示意图。其中,T 1代表左心缘到胸腔中线的最大距离,T 2代表右心缘到胸腔中线的最大距离,T代表胸腔的最大内径,O-O’代表胸腔中线。
在本申请的一些实施例中,以图5所示的心胸示意图,说明所述根据心脏关键点及胸廓关键点中的第一胸廓关键点确定心胸比信息的一种实现方式。可以知晓的是,心胸比为心脏横径与胸廓横径的比值,也就是说,心胸比=(T 1+T 2)/T。
如图5所示的6个目标关键点(4个心脏关键点和2个胸廓关键点),在确定出该6个目标关键点的坐标信息时,可以根据4个心脏关键点的坐标计算T 1和T 2,以及根据2个胸廓关键点计算T,从而在计算出T 1、T 2和T后,根据心胸比=(T 1+T 2)/T,得到心胸比信息。
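The formula above, cardiothoracic ratio = (T1 + T2) / T, can be evaluated directly from keypoint coordinates. The sketch below simplifies the six keypoints to x-coordinates of the left and right heart borders, the chest midline and the inner thoracic wall on an upright film; exactly how T1, T2 and T are derived from the four cardiac and two thoracic keypoints is an assumption of this sketch.

    def cardiothoracic_ratio(heart_left_x, heart_right_x, midline_x, thorax_left_x, thorax_right_x):
        """CTR = (T1 + T2) / T computed from x-coordinates."""
        t1 = abs(heart_left_x - midline_x)        # left heart border to chest midline
        t2 = abs(heart_right_x - midline_x)       # right heart border to chest midline
        t = abs(thorax_right_x - thorax_left_x)   # maximal internal thoracic diameter
        return (t1 + t2) / t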
可以知晓的是,肋膈角是指胸片中,横膈上方两侧,靠近胸廓边缘处,与肋骨内缘围成的锐角形的地方。图6为本申请实施例提供的一种肋膈角示意图。在本申请的一些实施例中,为便于计算肋膈角,可以以图6中所示2个胸廓点、2个横膈点和2个低点构成的三角区域近似为肋膈角区域,低点处对应的角度值作为肋膈角信息。相应的,第二胸廓关键点可以包括该2个胸廓点和2个低点,横膈关键点可以包括该2个横膈点。在确定出图6中示出的6个目标关键点的坐标信息时,可以根据该6个目标关键点的坐标信息确定出三角区域的边长,进而根据边长计算出肋膈角的角度值,得到肋膈角信息。
需要说明的是,尽管以作为示例介绍了如上确定心胸比信息和肋膈角信息的计算方式,但本领域技术人员能够理解,本申请应不限于此。事实上,用户完全可根据实际应用场景灵活确定心胸比信息和肋膈角信息的计算方式,对于心胸比信息和肋膈角信息计算方式,本申请实施例不做限制。
在本申请实施例中,通过根据各关键点的位置信息,确定心胸比信息和肋膈角信息,第二分析结果包括心胸比信息和肋膈角信息,可以为疾病诊断提供更详细的分析信息,提高疾病诊断效率。
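For one side of the chest, the triangle approximation described above reduces to an angle at the low point between the directions toward a thoracic keypoint and a diaphragm keypoint. The sketch below computes that angle with a dot product; which keypoints are paired on each side follows the description above, but the exact pairing used here is illustrative.

    import numpy as np

    def costophrenic_angle(thoracic_pt, diaphragm_pt, low_pt):
        """Angle in degrees at the low point of the (thoracic, diaphragm, low) triangle."""
        v1 = np.asarray(thoracic_pt, dtype=float) - np.asarray(low_pt, dtype=float)
        v2 = np.asarray(diaphragm_pt, dtype=float) - np.asarray(low_pt, dtype=float)
        cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
        return float(np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))))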
在本申请的一些实施例中,如上文所述,针对胸片图像的定量分析指标可以包括:脊柱的侧弯角度。所述图像处理方法,还可以包括:
对待处理图像进行第三分割处理,确定待处理图像中第三对象的目标所在的第四图像区域;根据第四图像区域,确定第二对象的目标的第三分析结果。
在本申请的一些实施例中,第三对象可以包括脊柱。第三对象的目标所在的第四图像区域可以是带状的脊柱区域,可以理解为,整根脊柱区域。第三分析结果可以包括脊柱的侧弯角度。其中脊柱的侧弯角度可以采用cobb角来衡量。
图7为本申请实施例提供的一种脊柱的侧弯角度的测量示意图。如图7所示,以侧弯脊柱的上端椎的上缘对应的横线做垂线,并以下端椎的下缘对应的横线做垂线,该两条垂线的交角为cobb角,确定出cobb角即确定出脊柱的侧弯角度。其中,上端椎和下端椎是指向脊柱侧弯凹侧倾斜度最大的椎体。
在本申请的一些实施例中,对待处理图像进行第三分割处理,可以采用上述本申请实施例中公开的分割算法,例如,Snake算法、level set算法等,对此本申请实施例不做限制。可以理解的是,本领域技术人员可以根据实际需求选取第三分割处理采用的分割算法,第三分割算法可以与第一分割算法、第二分割算法采用相同的分割算法,也可以是不同的分割算法。
在本申请的一些实施例中,可以将大量已标注脊柱区域的胸片图像作为样本数据,对第三分割处理采用的分割算法进行训练,进而将训练得到的分割算法,应用于对待处理图像的第三分割处理。可以理解的是,本领域技术人员可根据实际需求设定与采用的分割算法对应的训练方式,本申请实施例对于分割算法的训练方式不做限制。
在本申请实施例中,通过分割出第四区域,根据第四区域确定第三分析结果,可以为疾病诊断提供更详细的分析信息,提高疾病诊断效率。
在本申请的一些实施例中,根据第四图像区域,确定第二对象的目标的第三分析结果,可以包括:根据第四图像区域,确定第三对象的目标对应的中心线;根据中心线,确定第三对象的目标对应的侧弯角;其中,第三对象包括脊柱。
如上文所述,第三对象的目标所在的第四图像区域可以是带状的脊柱区域,可以理解为,整根脊柱区域。第三分析结果可以包括脊柱的侧弯角度。
在本申请的一些实施例中,根据第四图像区域,确定第三对象的目标对应的中心线,可以通过中心线提取技术提取带状的脊柱区域的中心线。其中,中心线提取技术,例如可以采用基于黑塞矩阵(Hessian Matrix)的中心线提取算法、基于加伯Gabor滤波器的中心线提取算法、基于脊线跟踪的中心线提取算法等。对于采用何种中心线提取技术,本申请的实施例不做限制。图8为本申请实施例提供的一种提取的脊柱中心线的示意图。如图8所示,图中的曲线可以是提取的脊柱中心线。
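As one possible way to obtain the centerline of the band-shaped spine region, the sketch below uses morphological skeletonization from scikit-image; this is only one option, since the text above also lists Hessian-based, Gabor-based and ridge-tracking centerline extraction.

    import numpy as np
    from skimage.morphology import skeletonize

    def spine_centerline(spine_mask: np.ndarray) -> np.ndarray:
        """One-pixel-wide centerline of a binary spine mask as (row, col) points, top to bottom."""
        skeleton = skeletonize(spine_mask.astype(bool))
        coords = np.column_stack(np.nonzero(skeleton))
        return coords[np.argsort(coords[:, 0])]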
在本申请的一些实施例中,根据中心线确定第三对象的目标对应的侧弯角,可以是根据中心线的曲率,确定中心点的拐点。进而对每相邻两个拐点,求拐点处对脊柱中心线的切线,两切线相交的夹角可以为cobb角,也就是脊柱的侧弯角度。可以理解的是,对于侧弯的脊柱,拐点至少有2个。图9为本申请实施例提供的一种基于中心线确定脊柱的侧弯角度的示意图。如图9所示,1、2、3、4可以是拐点,L1、L2、L3、L4可以是切线,∠a、∠b可以是cobb角,即脊柱的侧弯角度。
其中,根据中心线的曲率确定中心线的拐点,可以是通过设置曲率阈值,在中心线的曲率达到该阈值的情况下,将达到曲率阈值的位置处作为拐点。
需要说明的是,尽管以作为示例介绍了如上根据曲率确定拐点的方式,但本领域技术人员能够理解,本申请应不限于此。事实上,用户完全可根据实际应用场景灵活设定拐点的确定方式,例如还可以根据中心线的斜率确定拐点、拟合中心线的函数并对函数求导确定拐点等。本申请实施例对于拐点的确定方式不做限制。
在本申请实施例中,通过对带状的脊柱区域进行分割,根据分割的脊柱区域提取脊柱的中心线,进而根据中心线确定cobb角,可以便捷快速的确定出脊柱的侧弯角度,为疾病诊断提供更详细的分析信息,提高疾病诊断效率。
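A minimal sketch of the Cobb-angle step described above, assuming the inflection-point indices along the centerline have already been found (for example by thresholding the curvature as described): the tangent direction at each inflection point is estimated from a small window of neighbouring centerline points, and the acute angle between the two tangent lines is reported as the Cobb angle. The window size is an assumed parameter.

    import numpy as np

    def tangent_direction(centerline, i, window=10):
        """Unit tangent at index i, estimated over a small window of centerline points."""
        lo, hi = max(0, i - window), min(len(centerline) - 1, i + window)
        d = (centerline[hi] - centerline[lo]).astype(float)
        return d / np.linalg.norm(d)

    def cobb_angle(centerline, inflection_i, inflection_j, window=10):
        """Acute angle in degrees between centerline tangents at two inflection points."""
        t1 = tangent_direction(centerline, inflection_i, window)
        t2 = tangent_direction(centerline, inflection_j, window)
        cos_a = np.clip(abs(float(np.dot(t1, t2))), 0.0, 1.0)
        return float(np.degrees(np.arccos(cos_a)))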
相关技术中,DR胸片可以诊断多个肺部疾病和骨折,通过检测算法精确的识别疾病的种类和位置,目前算法都主要几种在分类任务上,没办法定位。在实际的临床中,针对DR报告需要量化心胸比、肋膈角和气胸压缩比来判断疾病的恶性程度。脊柱侧弯发病率较高,会影响脊柱外形和功能并且影响生理和心肺健康,目前缺少相关的算法研究。
根据本申请实施例的图像处理方法,能够实现针对DR胸片肺部多疾病以及骨折的分类和定位,通过多目标检测,可以一次性地快速而准确的检出多个疾病。针对DR胸片的量化指标,如心胸比、肋膈角和气胸压缩比,首次使用关键点检测算法计算心胸比和肋膈角、利用气胸分割模型计算气胸压缩比。首次通过DR胸片利用脊柱带状区域分割算法,提取中心线,计算脊柱的cobb角来判断脊柱侧弯程度。
图10为本申请实施例提供的一种图像处理方法的流程示意图。如图10所示,该图像处理方法包括:
多病检测:利用深度学习多目标检测技术,在DR胸片全图上检测出双肺多病、纵隔多病和多部位骨折等,本方案主要采用动态匹配、多特征聚合的多尺度检测算法,此处也不限于其他检测算法,比如retinaNet、Faster RCNN、YOLO、SSD等;
关键点定位:利用深度学习关键点检测技术,在DR胸片全部上检测出计算心胸比的6个关键点和计算肋膈角的6个关键点,本方案采用可微分的关键点检测算法,同时利用heatmap分布约束关键点位置,其鲁棒性和泛化性能较佳,但其处不限于其他深度关键点检测算法;
脊柱分割:采用深度学习分割算法,将脊柱带状区域分割出来,再利用中心线提取技术,提取脊柱的中心线,根据中心线计算cobb角,此处分割算法限于各种深度学习分割算法或者传统的Snake、level set等算法;
肋骨分割:在检测模块检出有疾病时,采用深度学习分割算法,将每根肋骨分割出 来,通过实例化肋骨来辅助病灶定位,此处分割算法限于各种深度学习分割算法或者传统的Snake、level set等算法;
气胸分割:在多病检测模块检出有气胸疾病时,采用深度学习分割算法,将气胸和肺野分割出来,通过分割区域的比值计算肺压缩比(气胸压缩比)参数,此处分割算法限于各种深度学习分割算法或者传统的Snake、level set等算法;
分型:在检测模块检出有肺水肿、心影增大、纵隔病变等疾病时,采用深度学习分类算法,进行亚型分类。
相对于相关技术中通常是针对某一种疾病进行检测或者多种疾病进行分类,不能满足医生的实际需求,根据本申请实施例的图像处理方法,能够对全图DR胸片进行双肺、纵隔和骨折多疾病进行多目标检测;以及,使用多目标动态匹配和多特征聚合的深度学习目标检测算法,可以对多个疾病目标同时检出,并且在多个疾病之间保持较好的均衡性。
相对于相关技术中通常利用锥体分割算法计算cobb角,标注和分割难度都较大,根据本申请实施例的图像处理方法,通过对带状的脊柱区域进行分割,计算中心线来计算cobb角。
根据本申请实施例的图像处理方法,采用关键点检测算法计算心胸比和肋膈角参数,能够精确定位关键点,计算心胸比和肋膈角参数。
根据本申请实施例的图像处理方法,使用分类算法对某些疾病进行亚型分类,能够对肺水肿、心影增大、纵隔病变等疾病进行亚型分类。
在示例中,该方法可以应用于影像图像的计算机辅助诊断系统,远程诊断系统,DR胸片大规模筛查辅助诊断系统等产品,可以实现DR胸片的高精度辅助诊断功能,满足医生的实际诊断需求。
根据本申请实施例的图像处理方法,可应用于临床的筛查辅助诊断中。当医生需要分析筛查大量DR胸片时,判断DR胸片上有哪些疾病或者骨折区域,某些疾病的亚型分类,并同时需要测量cobb角、心胸比及气胸压缩比时,通过本申请实施例的图像处理方法,可以获得DR胸片上的处理结果,满足医生的实际需求。
另一方面,由于DR胸片作为CT检查等高端检查的入口,筛查量巨大。根据本申请实施例的图像处理方法,可以在短时间内确定出该张DR胸片对应的疾病是否为恶性,可以大幅度缩减了诊断过程中的人力和物力成本;以及,可以给出疾病的具体类别和位置,同时计算出疾病诊断所需的心胸比、肋膈角等参数呈现给医生,对于经验不足的医生,可以为医生的临床决策提供支持。
可以理解,本申请提及的上述各个方法实施例,在不违背原理逻辑的情况下,均可以彼此相互结合形成结合后的实施例,限于篇幅,本申请不再赘述。本领域技术人员可以理解,在具体实施方式的上述方法中,各步骤的具体执行顺序应当以其功能和可能的内在逻辑确定。
此外,本申请还提供了图像处理装置、电子设备、计算机可读存储介质、程序,上述均可用来实现本申请提供的任一种图像处理方法,相应技术方案和描述和参见方法部分的相应记载,不再赘述。
图11为本申请实施例提供的图像处理装置的结构示意图,如图11所示,所述装置包括:
获取模块101,配置为获取待处理图像,所述待处理图像包括胸片图像;确定模块102,配置为对所述待处理图像进行疾病分类处理,确定与所述待处理图像对应的疾病类别;处理模块103,配置为根据与所述疾病类别对应的图像处理方式,对所述待处理图像进行图像处理,得到所述待处理图像的处理结果。
在本申请的一些实施例中,所述疾病类别包括第一疾病类别,所述第一疾病类别指示所述待处理图像中存在第一对象的异常目标,所述处理模块103,包括:第一分割子 模块,配置为在所述疾病类别为第一疾病类别的情况下,对所述待处理图像进行第一分割处理,确定所述待处理图像中第一对象的异常目标所在的第一图像区域,以及相应的正常目标所在的第二图像区域;结果确定子模块,配置为根据所述第一图像区域及所述第二图像区域,确定所述待处理图像的处理结果。
在本申请的一些实施例中,所述结果确定子模块,具体配置为根据所述第一图像区域的面积与所述第二图像区域的面积的比值,确定所述第一对象的异常目标的第一分析结果,所述处理结果包括所述第一分析结果和所述第一图像区域。
在本申请的一些实施例中,所述疾病类别包括第二疾病类别,所述第二疾病类别用于指示所述待处理图像中存在第二对象的异常目标,所述处理模块103,包括:第二分割子模块,配置为在所述疾病类别为第二疾病类别的情况下,对所述待处理图像进行第二分割处理,确定所述待处理图像中第二对象的多个目标所在的第三图像区域,所述处理结果包括所述第三图像区域。
在本申请的一些实施例中,所述疾病类别包括第三疾病类别,所述第三疾病类别包括多个子类别,所述处理模块103,包括:分类子模块,配置为在所述疾病类别为第三疾病类别的情况下,对所述待处理图像进行子类别分类处理,确定与所述待处理图像对应的疾病子类别,所述处理结果包括所述疾病子类别。
在本申请的一些实施例中,所述装置还包括:关键点检测模块,配置为对所述待处理图像进行关键点检测处理,确定所述待处理图像中多个对象的目标关键点的位置信息;第二确定模块,配置为根据各类别的目标关键点的位置信息,确定各类别的目标的第二分析结果。
在本申请的一些实施例中,所述多个对象的目标关键点包括心脏关键点、胸廓关键点及横膈关键点,所述第二确定模块,包括:心胸比确定子模块,配置为根据所述心脏关键点的位置信息及所述胸廓关键点中的第一胸廓关键点的位置信息,确定心胸比信息;肋膈角确定子模块,配置为根据所述胸廓关键点中的第二胸廓关键点的位置信息及所述横膈关键点的位置信息,确定肋膈角信息;其中,所述第一胸廓关键点与所述第二胸廓关键点不同,所述第二分析结果包括所述心胸比信息和/或所述肋膈角信息。
在本申请的一些实施例中,所述装置还包括:分割模块,配置为对所述待处理图像进行第三分割处理,确定所述待处理图像中第三对象的目标所在的第四图像区域;第三确定模块,配置为第二根据所述第四图像区域,确定所述第二对象的目标的第三分析结果。
在本申请的一些实施例中,所述第三确定模块,包括:中心线确定子模块,配置为根据所述第四图像区域,确定所述第三对象的目标对应的中心线;侧弯角确定子模块,配置为根据所述中心线,确定所述第三对象的目标对应的侧弯角度;其中,所述第三对象包括脊柱。
在本申请的一些实施例中,所述第一疾病类别包括气胸疾病,所述第一对象包括肺部,所述第一对象的异常目标包括气胸区域,所述相应的正常目标包括肺野区域;所述第二疾病类别包括肋骨异常,所述第二对象包括肋骨,所述第二对象的异常目标包括异常的肋骨;所述第三疾病类别包括肺水肿、心影增大、纵隔病变中的任意一种。
在本申请实施例中,针对待处理图像对应的疾病种类,进行对应的图像处理,得到对应的处理结果,能够针对不同疾病输出对应多个处理结果,从而提高疾病检测的准确度,以及辅助医生提高疾病诊断的效率。
在一些实施例中,本申请实施例提供的装置具有的功能或包含的模块可以用于执行上文方法实施例描述的方法,其具体实现可以参照上文方法实施例的描述,为了简洁,这里不再赘述。
本申请实施例还提出一种计算机可读存储介质,其上存储有计算机程序指令,所述计算机程序指令被处理器执行时实现上述任意一种方法。计算机可读存储介质可以是非 易失性计算机可读存储介质。
本申请实施例还提出一种电子设备,包括:处理器;配置为存储处理器可执行指令的存储器;其中,所述处理器被配置为调用所述存储器存储的指令,以执行上述任意一种方法。
本申请实施例还提供了一种计算机程序产品,包括计算机可读代码,当计算机可读代码在设备上运行时,设备中的处理器执行时实现如上任一实施例提供的图像处理方法的指令。
本申请实施例还提供了另一种计算机程序产品,配置为存储计算机可读指令,指令被执行时使得计算机执行上述任一实施例提供的图像处理方法的操作。
电子设备可以被提供为终端、服务器或其它形态的设备。
图12为本申请实施例提供的一种电子设备800的结构示意图。例如,电子设备800可以是移动电话、计算机、数字广播终端、消息收发设备、游戏控制台、平板设备、医疗设备、健身设备、个人数字助理等终端。
参照图12,电子设备800可以包括以下一个或多个组件:第一处理组件802,第一存储器804,电源组件806,多媒体组件808,音频组件810,第一输入/输出(Input/Output,I/O)的接口812,传感器组件814,以及通信组件816。
第一处理组件802通常控制电子设备800的整体操作,诸如与显示,电话呼叫,数据通信,相机操作和记录操作相关联的操作。第一处理组件802可以包括一个或多个处理器820来执行指令,以完成上述的方法的全部或部分步骤。此外,第一处理组件802可以包括一个或多个模块,便于第一处理组件802和其他组件之间的交互。例如,第一处理组件802可以包括多媒体模块,以方便多媒体组件808和第一处理组件802之间的交互。
第一存储器804被配置为存储各种类型的数据以支持在电子设备800的操作。这些数据的示例包括用于在电子设备800上操作的任何应用程序或方法的指令,联系人数据,电话簿数据,消息,图片,视频等。第一存储器804可以由任何类型的易失性或非易失性存储设备或者它们的组合实现,如静态随机存取存储器(Static Random-Access Memory,SRAM),电可擦除可编程只读存储器(Electrically Erasable Programmable Read-Only Memory,EEPROM),可擦除可编程只读存储器(Erasable Programmable Read-Only Memory,EPROM),可编程只读存储器(Programmable Read-Only Memory,PROM),只读存储器(Read-Only Memory,ROM),磁存储器,快闪存储器,磁盘或光盘。
第一电源组件806为电子设备800的各种组件提供电力。第一电源组件806可以包括电源管理系统,一个或多个电源,及其他与为电子设备800生成、管理和分配电力相关联的组件。
多媒体组件808包括在所述电子设备800和用户之间的提供一个输出接口的屏幕。在一些实施例中,屏幕可以包括液晶显示器(Liquid Crystal Display,LCD)和触摸面板(Touch Panel,TP)。如果屏幕包括触摸面板,屏幕可以被实现为触摸屏,以接收来自用户的输入信号。触摸面板包括一个或多个触摸传感器以感测触摸、滑动和触摸面板上的手势。所述触摸传感器可以不仅感测触摸或滑动动作的边界,而且还检测与所述触摸或滑动操作相关的持续时间和压力。在一些实施例中,多媒体组件808包括一个前置摄像头和/或后置摄像头。当电子设备800处于操作模式,如拍摄模式或视频模式时,前置摄像头和/或后置摄像头可以接收外部的多媒体数据。每个前置摄像头和后置摄像头可以是一个固定的光学透镜系统或具有焦距和光学变焦能力。
音频组件810被配置为输出和/或输入音频信号。例如,音频组件810包括一个麦克风(MIC),当电子设备800处于操作模式,如呼叫模式、记录模式和语音识别模式时,麦克风被配置为接收外部音频信号。所接收的音频信号可以被进一步存储在第一存储器804或经由通信组件816发送。在一些实施例中,音频组件810还包括一个扬声器,用 于输出音频信号。
第一I/O接口812为第一处理组件802和外围接口模块之间提供接口,上述外围接口模块可以是键盘、点击轮、按钮等。这些按钮可包括但不限于:主页按钮、音量按钮、启动按钮和锁定按钮。
传感器组件814包括一个或多个传感器,用于为电子设备800提供各个方面的状态评估。例如,传感器组件814可以检测到电子设备800的打开/关闭状态,组件的相对定位,例如所述组件为电子设备800的显示器和小键盘,传感器组件814还可以检测电子设备800或电子设备800一个组件的位置改变,用户与电子设备800接触的存在或不存在,电子设备800方位或加速/减速和电子设备800的温度变化。传感器组件814可以包括接近传感器,被配置用来在没有任何的物理接触时检测附近物体的存在。传感器组件814还可以包括光传感器,如互补金属氧化物半导体(Complementary Metal-Oxide-Semiconductor,CMOS)或电荷耦合装置(Charge Coupled Device,CCD)图像传感器,用于在成像应用中使用。在一些实施例中,该传感器组件814还可以包括加速度传感器,陀螺仪传感器,磁传感器,压力传感器或温度传感器。
通信组件816被配置为便于电子设备800和其他设备之间有线或无线方式的通信。电子设备800可以接入基于通信标准的无线网络,如无线网络(WiFi),第二代移动通信技术(2nd Generation,2G)或第三代移动通信技术(3rd-generation,3G),或它们的组合。在一个示例性实施例中,通信组件816经由广播信道接收来自外部广播管理系统的广播信号或广播相关信息。在一个示例性实施例中,所述通信组件816还包括近场通信(Near Field Communication,NFC)模块,以促进短程通信。例如,在NFC模块可基于射频识别(Radio Frequency Identification,RFID)技术,红外数据协会(Infrared Data Association,IrDA)技术,超宽带(Ultra Wide Band,UWB)技术,蓝牙(Blue Tooth,BT)技术和其他技术来实现。
在示例性实施例中,电子设备800可以被一个或多个应用专用集成电路(Application Specific Integrated Circuit,ASIC)、数字信号处理器(Digital Signal Processing,DSP)、数字信号处理设备(Digital Signal Processing Device,DSPD)、可编程逻辑器件(Programmable Logic Device,PLD)、现场可编程门阵列(Field Programmable Gate Array,FPGA)、控制器、微控制器、微处理器或其他电子元件实现,用于执行上述任意一种方法。
在示例性实施例中,还提供了一种非易失性计算机可读存储介质,例如包括计算机程序指令的第一存储器804,上述计算机程序指令可由电子设备800的处理器820执行以完成上述任意一种方法。
图13为本申请实施例提供的一种电子设备1900的结构示意图。例如,电子设备1900可以被提供为一服务器。参照图13,电子设备1900包括第二处理组件1922,其进一步包括一个或多个处理器,以及由第二存储器1932所代表的存储器资源,用于存储可由第二处理组件1922的执行的指令,例如应用程序。第二存储器1932中存储的应用程序可以包括一个或一个以上的每一个对应于一组指令的模块。此外,第二处理组件1922被配置为执行指令,以执行上述任意一种方法。
电子设备1900还可以包括一个第二电源组件1926被配置为执行电子设备1900的电源管理,一个有线或无线网络接口1950被配置为将电子设备1900连接到网络,和一个输入输出(I/O)接口1958。电子设备1900可以操作基于存储在第二存储器1932的操作系统,例如微软服务器操作系统(Windows ServerTM),苹果公司推出的基于图形用户界面操作系统(Mac OS XTM),多用户多进程的计算机操作系统(UnixTM),自由和开放原代码的类Unix操作系统(LinuxTM),开放原代码的类Unix操作系统(FreeBSDTM)或类似。
在示例性实施例中,还提供了一种非易失性计算机可读存储介质,例如包括计算机 程序指令的第二存储器1932,上述计算机程序指令可由电子设备1900的第二处理组件1922执行以完成上述任意一种方法。
本申请实施例可以是系统、方法和/或计算机程序产品。计算机程序产品可以包括计算机可读存储介质,其上载有用于使处理器实现本申请的各个方面的计算机可读程序指令。
计算机可读存储介质可以是可以保持和存储由指令执行设备使用的指令的有形设备。计算机可读存储介质例如可以是(但不限于)电存储设备、磁存储设备、光存储设备、电磁存储设备、半导体存储设备或者上述的任意合适的组合。计算机可读存储介质的更具体的例子(非穷举的列表)包括:便携式计算机盘、硬盘、随机存取存储器(Random Access Memory,RAM)、ROM、EPROM或闪存、SRAM、便携式压缩盘只读存储器(Compact Disc Read-Only Memory,CD-ROM)、数字多功能盘(Digital Video Disc,DVD)、记忆棒、软盘、机械编码设备、例如其上存储有指令的打孔卡或凹槽内凸起结构、以及上述的任意合适的组合。这里所使用的计算机可读存储介质不被解释为瞬时信号本身,诸如无线电波或者其他自由传播的电磁波、通过波导或其他传输媒介传播的电磁波(例如,通过光纤电缆的光脉冲)、或者通过电线传输的电信号。
这里所描述的计算机可读程序指令可以从计算机可读存储介质下载到各个计算/处理设备,或者通过网络、例如因特网、局域网、广域网和/或无线网下载到外部计算机或外部存储设备。网络可以包括铜传输电缆、光纤传输、无线传输、路由器、防火墙、交换机、网关计算机和/或边缘服务器。每个计算/处理设备中的网络适配卡或者网络接口从网络接收计算机可读程序指令,并转发该计算机可读程序指令,以供存储在各个计算/处理设备中的计算机可读存储介质中。
用于执行本申请实施例操作的计算机程序指令可以是汇编指令、指令集架构(Instruction Set Architecture,ISA)指令、机器指令、机器相关指令、微代码、固件指令、状态设置数据、或者以一种或多种编程语言的任意组合编写的源代码或目标代码,所述编程语言包括面向对象的编程语言,诸如Smalltalk、C++等,以及常规的过程式编程语言,诸如“C”语言或类似的编程语言。计算机可读程序指令可以完全地在用户计算机上执行、部分地在用户计算机上执行、作为一个独立的软件包执行、部分在用户计算机上部分在远程计算机上执行、或者完全在远程计算机或服务器上执行。在涉及远程计算机的情形中,远程计算机可以通过任意种类的网络,包括局域网(Local Area Network,LAN)或广域网(Wide Area Network,WAN)连接到用户计算机,或者,可以连接到外部计算机(例如利用因特网服务提供商来通过因特网连接)。在一些实施例中,通过利用计算机可读程序指令的状态信息来个性化定制电子电路,例如可编程逻辑电路、FPGA或可编程逻辑阵列(Programmable Logic Array,PLA),该电子电路可以执行计算机可读程序指令,从而实现本申请的各个方面。
这里参照根据本申请实施例的方法、装置(系统)和计算机程序产品的流程图和/或结构示意图描述了本申请的各个方面。应当理解,流程图和/或结构示意图的每个方框以及流程图和/或结构示意图中各方框的组合,都可以由计算机可读程序指令实现。
这些计算机可读程序指令可以提供给通用计算机、专用计算机或其它可编程数据处理装置的处理器,从而生产出一种机器,使得这些指令在通过计算机或其它可编程数据处理装置的处理器执行时,产生了实现流程图和/或结构示意图中的一个或多个方框中规定的功能/动作的装置。也可以把这些计算机可读程序指令存储在计算机可读存储介质中,这些指令使得计算机、可编程数据处理装置和/或其他设备以特定方式工作,从而,存储有指令的计算机可读介质则包括一个制造品,其包括实现流程图和/或结构示意图中的一个或多个方框中规定的功能/动作的各个方面的指令。
也可以把计算机可读程序指令加载到计算机、其它可编程数据处理装置、或其它设备上,使得在计算机、其它可编程数据处理装置或其它设备上执行一系列操作步骤,以 产生计算机实现的过程,从而使得在计算机、其它可编程数据处理装置、或其它设备上执行的指令实现流程图和/或结构示意图中的一个或多个方框中规定的功能/动作。
附图中的流程图和结构示意图显示了根据本申请的多个实施例的系统、方法和计算机程序产品的可能实现的体系架构、功能和操作。在这点上,流程图或结构示意图中的每个方框可以代表一个模块、程序段或指令的一部分,所述模块、程序段或指令的一部分包含一个或多个用于实现规定的逻辑功能的可执行指令。在有些作为替换的实现中,方框中所标注的功能也可以以不同于附图中所标注的顺序发生。例如,两个连续的方框实际上可以基本并行地执行,它们有时也可以按相反的顺序执行,这依所涉及的功能而定。也要注意的是,结构示意图和/或流程图中的每个方框、以及结构示意图和/或流程图中的方框的组合,可以用执行规定的功能或动作的专用的基于硬件的系统来实现,或者可以用专用硬件与计算机指令的组合来实现。
该计算机程序产品可以具体通过硬件、软件或其结合的方式实现。在一个可选实施例中,所述计算机程序产品具体体现为计算机存储介质,在另一个可选实施例中,计算机程序产品具体体现为软件产品,例如软件开发包(Software Development Kit,SDK)等等。
以上已经描述了本申请的各实施例,上述说明是示例性的,并非穷尽性的,并且也不限于所披露的各实施例。在不偏离所说明的各实施例的范围和精神的情况下,对于本技术领域的普通技术人员来说许多修改和变更都是显而易见的。本文中所用术语的选择,旨在最好地解释各实施例的原理、实际应用或对市场中的技术的改进,或者使本技术领域的其它普通技术人员能理解本文披露的各实施例。
Industrial Applicability
The present application relates to an image processing method and apparatus, an electronic device, a computer storage medium, and a computer program, applicable to the processing of plain chest radiographs. The method includes: acquiring an image to be processed, where the image to be processed includes a chest radiograph image; performing disease classification processing on the image to be processed to determine a disease category corresponding to the image to be processed; and performing image processing on the image to be processed according to an image processing manner corresponding to the disease category, to obtain a processing result of the image to be processed. The embodiments of the present application can improve the accuracy of disease detection.

Claims (23)

  1. 一种图像处理方法,应用于电子设备中,包括:
    获取待处理图像,所述待处理图像包括胸片图像;
    对所述待处理图像进行疾病分类处理,确定与所述待处理图像对应的疾病类别;
    根据与所述疾病类别对应的图像处理方式,对所述待处理图像进行图像处理,得到所述待处理图像的处理结果。
  2. 根据权利要求1所述的方法,其中,所述疾病类别包括第一疾病类别,所述第一疾病类别指示所述待处理图像中存在第一对象的异常目标,
    根据与所述疾病类别对应的图像处理方式,对所述待处理图像进行图像处理,得到所述待处理图像的处理结果,包括:
    在所述疾病类别为第一疾病类别的情况下,对所述待处理图像进行第一分割处理,确定所述待处理图像中第一对象的异常目标所在的第一图像区域,以及相应的正常目标所在的第二图像区域;
    根据所述第一图像区域及所述第二图像区域,确定所述待处理图像的处理结果。
  3. 根据权利要求2所述的方法,其中,根据所述第一图像区域及所述第二图像区域,确定所述待处理图像的处理结果,包括:
    根据所述第一图像区域的面积与所述第二图像区域的面积的比值,确定所述第一对象的异常目标的第一分析结果,
    所述处理结果包括所述第一分析结果和所述第一图像区域。
  4. 根据权利要求1所述的方法,其中,所述疾病类别包括第二疾病类别,所述第二疾病类别用于指示所述待处理图像中存在第二对象的异常目标,
    根据与所述疾病类别对应的图像处理方式,对所述待处理图像进行图像处理,得到所述待处理图像的处理结果,包括:
    在所述疾病类别为第二疾病类别的情况下,对所述待处理图像进行第二分割处理,确定所述待处理图像中第二对象的多个目标所在的第三图像区域,所述处理结果包括所述第三图像区域。
  5. 根据权利要求1所述的方法,其中,所述疾病类别包括第三疾病类别,所述第三疾病类别包括多个子类别,
    根据与所述疾病类别对应的图像处理方式,对所述待处理图像进行图像处理,得到所述待处理图像的处理结果,包括:
    在所述疾病类别为第三疾病类别的情况下,对所述待处理图像进行子类别分类处理,确定与所述待处理图像对应的疾病子类别,所述处理结果包括所述疾病子类别。
  6. 根据权利要求1至5任一项所述的方法,其中,所述方法还包括:
    对所述待处理图像进行关键点检测处理,确定所述待处理图像中多个对象的目标关键点的位置信息;
    根据各类别的目标关键点的位置信息,确定各类别的目标的第二分析结果。
  7. 根据权利要求6所述的方法,其中,所述多个对象的目标关键点包括心脏关键点、胸廓关键点及横膈关键点,
    所述根据各类别的目标关键点的位置信息,确定各类别的目标的第二分析结果,包括:
    根据所述心脏关键点的位置信息及所述胸廓关键点中的第一胸廓关键点的位置信息,确定心胸比信息;
    根据所述胸廓关键点中的第二胸廓关键点的位置信息及所述横膈关键点的位置信息,确定肋膈角信息;
    其中,所述第一胸廓关键点与所述第二胸廓关键点不同,所述第二分析结果包括所 述心胸比信息和/或所述肋膈角信息。
  8. 根据权利要求1至6任一项所述的方法,其中,所述方法还包括:
    对所述待处理图像进行第三分割处理,确定所述待处理图像中第三对象的目标所在的第四图像区域;
    根据所述第四图像区域,确定所述第二对象的目标的第三分析结果。
  9. 根据权利要求8所述的方法,其中,所述根据所述第四图像区域,确定所述第二对象的目标的第三分析结果,包括:
    根据所述第四图像区域,确定所述第三对象的目标对应的中心线;
    根据所述中心线,确定所述第三对象的目标对应的侧弯角度;
    其中,所述第三对象包括脊柱。
  10. 根据权利要求2、4或5所述的方法,其中,
    所述第一疾病类别包括气胸疾病,所述第一对象包括肺部,所述第一对象的异常目标包括气胸区域,所述相应的正常目标包括肺野区域;
    所述第二疾病类别包括肋骨异常,所述第二对象包括肋骨,所述第二对象的异常目标包括异常的肋骨;
    所述第三疾病类别包括肺水肿、心影增大、纵隔病变中的任意一种。
  11. 一种图像处理装置,包括:
    获取模块,配置为获取待处理图像,所述待处理图像包括胸片图像;
    确定模块,配置为对所述待处理图像进行疾病分类处理,确定与所述待处理图像对应的疾病类别;
    处理模块,配置为根据与所述疾病类别对应的图像处理方式,对所述待处理图像进行图像处理,得到所述待处理图像的处理结果。
  12. 根据权利要求11所述的装置,其中,所述疾病类别包括第一疾病类别,所述第一疾病类别指示所述待处理图像中存在第一对象的异常目标,所述处理模块,包括:
    第一分割子模块,配置为在所述疾病类别为第一疾病类别的情况下,对所述待处理图像进行第一分割处理,确定所述待处理图像中第一对象的异常目标所在的第一图像区域,以及相应的正常目标所在的第二图像区域;
    结果确定子模块,配置为根据所述第一图像区域及所述第二图像区域,确定所述待处理图像的处理结果。
  13. 根据权利要求12所述的装置,其中,所述结果确定子模块,具体配置为:
    根据所述第一图像区域的面积与所述第二图像区域的面积的比值,确定所述第一对象的异常目标的第一分析结果,
    所述处理结果包括所述第一分析结果和所述第一图像区域。
  14. 根据权利要求11所述的装置,其中,所述疾病类别包括第二疾病类别,所述第二疾病类别用于指示所述待处理图像中存在第二对象的异常目标,所述处理模块,包括:
    第二分割子模块,配置为在所述疾病类别为第二疾病类别的情况下,对所述待处理图像进行第二分割处理,确定所述待处理图像中第二对象的多个目标所在的第三图像区域,所述处理结果包括所述第三图像区域。
  15. 根据权利要求11所述的装置,其中,所述疾病类别包括第三疾病类别,所述第三疾病类别包括多个子类别,所述处理模块,包括:
    分类子模块,配置为在所述疾病类别为第三疾病类别的情况下,对所述待处理图像进行子类别分类处理,确定与所述待处理图像对应的疾病子类别,所述处理结果包括所述疾病子类别。
  16. 根据权利要求11至15任一项所述的装置,其中,所述装置还包括:
    关键点检测模块,配置为对所述待处理图像进行关键点检测处理,确定所述待处理 图像中多个对象的目标关键点的位置信息;
    第二确定模块,配置为根据各类别的目标关键点的位置信息,确定各类别的目标的第二分析结果。
  17. 根据权利要求16所述的装置,其中,所述多个对象的目标关键点包括心脏关键点、胸廓关键点及横膈关键点,所述第二确定模块,包括:
    心胸比确定子模块,配置为根据所述心脏关键点的位置信息及所述胸廓关键点中的第一胸廓关键点的位置信息,确定心胸比信息;
    肋膈角确定子模块,配置为根据所述胸廓关键点中的第二胸廓关键点的位置信息及所述横膈关键点的位置信息,确定肋膈角信息;
    其中,所述第一胸廓关键点与所述第二胸廓关键点不同,所述第二分析结果包括所述心胸比信息和/或所述肋膈角信息。
  18. 根据权利要求11至16任一项所述的装置,其中,所述装置还包括:
    分割模块,配置为对所述待处理图像进行第三分割处理,确定所述待处理图像中第三对象的目标所在的第四图像区域;
    第三确定模块,配置为根据所述第四图像区域,确定所述第二对象的目标的第三分析结果。
  19. 根据权利要求18所述的装置,其中,所述第三确定模块,包括:
    中心线确定子模块,配置为根据所述第四图像区域,确定所述第三对象的目标对应的中心线;
    侧弯角确定子模块,配置为根据所述中心线,确定所述第三对象的目标对应的侧弯角度;
    其中,所述第三对象包括脊柱。
  20. 根据权利要求12、14或15所述的装置,其中,
    所述第一疾病类别包括气胸疾病,所述第一对象包括肺部,所述第一对象的异常目标包括气胸区域,所述相应的正常目标包括肺野区域;
    所述第二疾病类别包括肋骨异常,所述第二对象包括肋骨,所述第二对象的异常目标包括异常的肋骨;
    所述第三疾病类别包括肺水肿、心影增大、纵隔病变中的任意一种。
  21. 一种电子设备,包括:
    处理器;
    配置为存储处理器可执行指令的存储器;
    其中,所述处理器被配置为调用所述存储器存储的指令,以执行权利要求1至10中任意一项所述的方法。
  22. 一种计算机可读存储介质,其上存储有计算机程序指令,其中,所述计算机程序指令被处理器执行时实现权利要求1至10中任意一项所述的方法。
  23. 一种计算机程序,包括计算机可读代码,当所述计算机可读代码在电子设备中运行时,所述电子设备中的处理器执行用于实现权利要求1至10任一项所述的方法。
PCT/CN2021/083682 2020-12-11 2021-03-29 图像处理方法及装置、电子设备、存储介质和程序 WO2022121170A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011461030.4A CN112508918A (zh) 2020-12-11 2020-12-11 图像处理方法及装置、电子设备和存储介质
CN202011461030.4 2020-12-11

Publications (1)

Publication Number Publication Date
WO2022121170A1 true WO2022121170A1 (zh) 2022-06-16

Family

ID=74972409

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/083682 WO2022121170A1 (zh) 2020-12-11 2021-03-29 图像处理方法及装置、电子设备、存储介质和程序

Country Status (2)

Country Link
CN (1) CN112508918A (zh)
WO (1) WO2022121170A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115170571A (zh) * 2022-09-07 2022-10-11 赛维森(广州)医疗科技服务有限公司 胸腹水细胞病理图像识别方法、图像识别装置、介质
CN116052847A (zh) * 2023-02-08 2023-05-02 中国人民解放军陆军军医大学第二附属医院 基于深度学习的胸片多异常识别系统、装置及方法

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112508918A (zh) * 2020-12-11 2021-03-16 上海商汤智能科技有限公司 图像处理方法及装置、电子设备和存储介质
CN112686899B (zh) 2021-03-22 2021-06-18 深圳科亚医疗科技有限公司 医学图像分析方法和装置、计算机设备及存储介质
CN113450399B (zh) * 2021-05-28 2022-02-25 北京医准智能科技有限公司 一种正位胸片心胸比测量方法及装置
CN114078120B (zh) * 2021-11-22 2022-05-20 北京欧应信息技术有限公司 用于检测脊柱侧弯的方法、设备和介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030053674A1 (en) * 1998-02-23 2003-03-20 Arch Development Corporation Method and system for the automated delineation of lung regions and costophrenic angles in chest radiographs
CN110827345A (zh) * 2019-10-31 2020-02-21 北京推想科技有限公司 心胸比确定方法、装置、设备、存储介质及计算机设备
CN111476776A (zh) * 2020-04-07 2020-07-31 上海联影智能医疗科技有限公司 胸部病灶位置确定方法、系统、可读存储介质和设备
CN112508918A (zh) * 2020-12-11 2021-03-16 上海商汤智能科技有限公司 图像处理方法及装置、电子设备和存储介质

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110298820A (zh) * 2019-05-28 2019-10-01 上海联影智能医疗科技有限公司 影像分析方法、计算机设备和存储介质

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030053674A1 (en) * 1998-02-23 2003-03-20 Arch Development Corporation Method and system for the automated delineation of lung regions and costophrenic angles in chest radiographs
CN110827345A (zh) * 2019-10-31 2020-02-21 北京推想科技有限公司 心胸比确定方法、装置、设备、存储介质及计算机设备
CN111476776A (zh) * 2020-04-07 2020-07-31 上海联影智能医疗科技有限公司 胸部病灶位置确定方法、系统、可读存储介质和设备
CN112508918A (zh) * 2020-12-11 2021-03-16 上海商汤智能科技有限公司 图像处理方法及装置、电子设备和存储介质

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115170571A (zh) * 2022-09-07 2022-10-11 赛维森(广州)医疗科技服务有限公司 胸腹水细胞病理图像识别方法、图像识别装置、介质
CN115170571B (zh) * 2022-09-07 2023-02-07 赛维森(广州)医疗科技服务有限公司 胸腹水细胞病理图像识别方法、图像识别装置、介质
CN116052847A (zh) * 2023-02-08 2023-05-02 中国人民解放军陆军军医大学第二附属医院 基于深度学习的胸片多异常识别系统、装置及方法
CN116052847B (zh) * 2023-02-08 2024-01-23 中国人民解放军陆军军医大学第二附属医院 基于深度学习的胸片多异常识别系统、装置及方法

Also Published As

Publication number Publication date
CN112508918A (zh) 2021-03-16

Similar Documents

Publication Publication Date Title
WO2022121170A1 (zh) 图像处理方法及装置、电子设备、存储介质和程序
WO2022151755A1 (zh) 目标检测方法及装置、电子设备、存储介质、计算机程序产品和计算机程序
TWI770754B (zh) 神經網路訓練方法及電子設備和儲存介質
TWI755175B (zh) 圖像分割方法、電子設備和儲存介質
WO2020238623A1 (zh) 图像标注的方法、标注展示方法、装置、设备及存储介质
CN112767329B (zh) 图像处理方法及装置、电子设备
WO2021259391A2 (zh) 图像处理方法及装置、电子设备和存储介质
CN113573654A (zh) 用于检测并测定病灶尺寸的ai系统
US8811699B2 (en) Detection of landmarks and key-frames in cardiac perfusion MRI using a joint spatial-temporal context model
US9743824B2 (en) Accurate and efficient polyp detection in wireless capsule endoscopy images
WO2022036972A1 (zh) 图像分割方法及装置、电子设备和存储介质
US20220058821A1 (en) Medical image processing method, apparatus, and device, medium, and endoscope
CN113222038B (zh) 基于核磁图像的乳腺病灶分类和定位方法及装置
WO2021189848A1 (zh) 模型训练方法、杯盘比确定方法、装置、设备及存储介质
WO2022156235A1 (zh) 神经网络训练和图像处理方法及装置、电子设备及存储介质
US11854195B2 (en) Systems and methods for automated analysis of medical images
WO2023050691A1 (zh) 图像处理方法及装置、电子设备、存储介质和程序
WO2021259390A2 (zh) 一种冠脉钙化斑块检测方法及装置
CN112561908A (zh) 乳腺图像病灶匹配方法、装置及存储介质
WO2022242046A1 (zh) 医学图像的展示方法及装置、电子设备、存储介质和计算机程序
WO2020168647A1 (zh) 图像识别方法及相关设备
CN115170464A (zh) 肺图像的处理方法、装置、电子设备和存储介质
WO2023050690A1 (zh) 图像处理方法、装置、电子设备、存储介质和程序
JP2022548453A (ja) 画像分割方法及び装置、電子デバイス並びに記憶媒体
WO2021259394A2 (zh) 一种图像处理方法及装置、电子设备和存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21901886

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21901886

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 16.11.2023)