WO2021017297A1 - Artificial intelligence-based spine image processing method and related device - Google Patents

Artificial intelligence-based spine image processing method and related device

Info

Publication number
WO2021017297A1
WO2021017297A1 PCT/CN2019/117948 CN2019117948W WO2021017297A1 WO 2021017297 A1 WO2021017297 A1 WO 2021017297A1 CN 2019117948 W CN2019117948 W CN 2019117948W WO 2021017297 A1 WO2021017297 A1 WO 2021017297A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
spine image
vertebral
vertebrae
spine
Prior art date
Application number
PCT/CN2019/117948
Other languages
English (en)
French (fr)
Inventor
陶蓉
吴海萍
吕传峰
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司 filed Critical 平安科技(深圳)有限公司
Publication of WO2021017297A1 publication Critical patent/WO2021017297A1/zh

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/60 Editing figures and text; Combining figures or text
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30008 Bone
    • G06T2207/30012 Spine; Backbone

Definitions

  • This application relates to the field of artificial intelligence, and in particular to an artificial intelligence-based spinal image processing method and related equipment.
  • Sagittal spine imaging has important clinical significance for assessing spinal function and diagnosing spine-related diseases. Specifically: 1. Sagittal spine data come from a wide range of sources, with X-ray, computed tomography (CT), and magnetic resonance imaging (MRI) scans all providing sagittal images. 2. Sagittal images are widely used in disease diagnosis, covering clinical signs such as fractures, spondylolisthesis, and bone hyperplasia, and lesions in the sacral, lumbar, thoracic, and cervical regions of the spine. 3. They are closely related to patient prognosis and quality of life; parameters extracted from sagittal images are used clinically to quantitatively evaluate patient recovery after spine surgery.
  • Current sagittal spine image analysis methods on the market use traditional image-processing techniques, such as edge detection, to locate the edges and corners of the vertebrae. The inventors realized that such methods have low accuracy and are strongly affected by image quality; in particular, the success rate of recognizing the edges and corner points of lesion areas drops sharply.
  • This application provides an artificial intelligence-based spine image processing method and related equipment that use two different deep networks to perform vertebral segmentation and disease-sign classification respectively, improving the accuracy of vertebral segmentation and the specificity and sensitivity of sign recognition.
  • A first aspect of the embodiments of the present application provides an artificial intelligence-based spine image processing method, including: obtaining an original spine image, the original spine image being a sagittal radiographic image of the spine; preprocessing the original spine image to generate a target spine image; segmenting each vertebra in the target spine image through a preset segmentation model to generate multiple vertebral masks, each vertebral mask corresponding to a different vertebra; performing vertebral contour recognition and corner point detection on the multiple vertebral masks through a preset clustering algorithm to obtain N bone block contours and N*4 vertebral corner points, where N is greater than or equal to 1; synthesizing the N bone block contours, the N*4 vertebral corner points, and the target spine image to generate a synthetic vertebral image; extracting multiple small images from the synthetic vertebral image, each small image including information of a target area; and recognizing the multiple small images through a preset sign classification model to generate a recognition result.
  • A second aspect of the embodiments of the present application provides an artificial intelligence-based spine image processing apparatus, including: a first acquiring unit, configured to acquire an original spine image, the original spine image being a sagittal radiographic image of the spine; a preprocessing unit, configured to preprocess the original spine image to generate a target spine image; a segmentation unit, configured to segment each vertebra in the target spine image through a preset segmentation model to generate multiple vertebral masks, each vertebral mask corresponding to a different vertebra; a recognition and detection unit, configured to perform vertebral contour recognition and corner point detection on the multiple vertebral masks through a preset clustering algorithm to obtain N bone block contours and N*4 vertebral corner points, where N is greater than or equal to 1; a synthesis unit, configured to synthesize the N bone block contours, the N*4 vertebral corner points, and the target spine image to generate a synthetic vertebral image; an extraction unit, configured to extract multiple small images from the synthetic vertebral image, each small image including information of a target area; and a recognition generating unit, configured to recognize the multiple small images through a preset sign classification model and generate a recognition result.
  • A third aspect of the embodiments of the present application provides an artificial intelligence-based spine image processing device, including a memory, a processor, and a computer program stored in the memory and runnable on the processor, where the processor implements the above artificial intelligence-based spine image processing method when executing the computer program.
  • A fourth aspect of the embodiments of the present application provides a computer-readable storage medium storing instructions that, when run on a computer, cause the computer to execute the steps of the above artificial intelligence-based spine image processing method.
  • In the technical solution provided by the embodiments of the present application, an original spine image is acquired, the original spine image being a sagittal radiographic image of the spine; the original spine image is preprocessed to generate a target spine image; each vertebra in the target spine image is segmented through a preset segmentation model to generate multiple vertebral masks, each vertebral mask corresponding to a different vertebra; vertebral contour recognition and corner point detection are performed on the multiple vertebral masks through a preset clustering algorithm to obtain N bone block contours and N*4 vertebral corner points, where N is greater than or equal to 1; the N bone block contours, the N*4 vertebral corner points, and the target spine image are synthesized to generate a synthetic vertebral image; multiple small images are extracted from the synthetic vertebral image, each small image including information of a target area; and the multiple small images are recognized through a preset sign classification model to generate a recognition result. In the embodiments of the present application, vertebral segmentation and disease-sign classification are realized by two different deep networks, which improves the accuracy of vertebral segmentation and the specificity and sensitivity of sign recognition.
  • FIG. 1 is a schematic diagram of an embodiment of the artificial intelligence-based spine image processing method in an embodiment of the present application;
  • FIG. 2 is a schematic diagram of another embodiment of the artificial intelligence-based spine image processing method in an embodiment of the present application;
  • FIG. 3 is a schematic diagram of an embodiment of the artificial intelligence-based spine image processing apparatus in an embodiment of the present application;
  • FIG. 4 is a schematic diagram of another embodiment of the artificial intelligence-based spine image processing apparatus in an embodiment of the present application;
  • FIG. 5 is a schematic diagram of an embodiment of the artificial intelligence-based spine image processing device in an embodiment of the present application.
  • This application provides an artificial intelligence-based spine image processing method and related equipment that use two different deep networks to perform vertebral segmentation and disease-sign classification respectively, improving the accuracy of vertebral segmentation and the specificity and sensitivity of sign recognition.
  • Referring to FIG. 1, a flowchart of the artificial intelligence-based spine image processing method provided by an embodiment of the present application specifically includes:
  • the server obtains the original spine image, and the original spine image is a sagittal radiographic image of the spine.
  • The sagittal plane is the anatomical plane that divides the human body into left and right halves; any plane parallel to this plane is also a sagittal plane, and the one in the midline position is called the median sagittal section. The sagittal plane is defined relative to the coronal plane and the horizontal plane: the coronal plane is the longitudinal section running left to right that divides the body into front and rear parts and is perpendicular to both the sagittal and horizontal planes; the horizontal plane, also called the transverse plane, is parallel to the ground, divides the body into upper and lower parts, and is perpendicular to both the coronal and sagittal planes.
  • It should be understood that the execution entity of this application may be an artificial intelligence-based spine image processing apparatus, or a terminal or a server, which is not specifically limited here. The embodiments of the present application take a server as the execution entity as an example for description.
  • the server preprocesses the original spine image to generate the target spine image. Specifically, the server processes the pixel size of the original spine image to obtain the processed first spine image; the server adjusts the parameters of the first spine image to generate the target spine image.
  • Processing the pixel size of the original spine image to obtain the processed first spine image specifically includes: removing black borders from the original spine image; cropping the de-bordered spine image; and resizing the cropped spine image to obtain the processed first spine image. For example, suppose the input size required by the model is N*N pixels.
  • the server adjusts the parameters of the first spine image to generate the target spine image specifically including: determining the number of image channels of the first spine image; adjusting the window width and window level of the first spine image according to the number of image channels to generate the target spine image.
  • The server segments each vertebra in the target spine image through a preset segmentation model to generate multiple vertebral masks, each vertebral mask corresponding to a different vertebra.
  • Each vertebral mask corresponds to a mask category label, and different colors, numbers, or letters can be used to distinguish the category labels. The segmentation model may be a Mask R-CNN model. For vertebrae in different positions, such as the sacral, lumbar, and thoracic vertebrae, different model training strategies are used in advance so that the segmentation model can accurately segment vertebrae at different positions. The segmentation model is a neural network model, and the training process of the model is existing technology, which will not be repeated here.
  • The model training strategy includes: 1. The sacrum and the other vertebrae are treated as different categories when training the segmentation model; 2. Because the fifth lumbar vertebra is adjacent to the sacrum while the remaining vertebrae are adjacent only to other vertebrae, the fifth lumbar vertebra is also treated as a separate category; 3. Lumbar vertebrae 4 to 1 and the thoracic vertebrae are treated as one category; 4. The cervical vertebrae are treated as one category.
  • For example, each vertebra appearing in the target spine image is segmented separately. Suppose sacrum 1, lumbar vertebrae 5 to 1, and thoracic vertebrae 12 to 11 appear in one image. After segmentation: the mask label of sacrum 1 is category one (red), the mask label of lumbar vertebra 5 is category two (green), and the mask labels of lumbar vertebrae 4 to 1 and thoracic vertebrae 12 to 11 are category three (blue). The vertebral masks with different labels are merged into a single output, and different vertebrae are distinguished by their labels.
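As a minimal illustration of merging the per-vertebra masks into a single labeled output, the sketch below assumes the segmentation model returns one binary mask per detected vertebra together with its integer category label; the function name and array layout are illustrative and not taken from the application.

```python
import numpy as np

def merge_vertebra_masks(masks, labels, image_shape):
    """Merge per-vertebra binary masks into one label map.

    masks  : list of HxW boolean (or 0/1) arrays, one per detected vertebra
    labels : list of integer category labels (e.g. 1 = sacrum,
             2 = fifth lumbar vertebra, 3 = remaining lumbar/thoracic)
    Returns an HxW uint8 array where each pixel holds the label of the
    vertebra covering it (0 = background).
    """
    merged = np.zeros(image_shape, dtype=np.uint8)
    for mask, label in zip(masks, labels):
        merged[mask.astype(bool)] = label
    return merged
```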
  • the server performs vertebral contour recognition and corner detection on multiple vertebral masks through a preset clustering algorithm to obtain N bone block contours and N*4 vertebral corner points, where N is greater than or equal to 1.
  • For example, a preset fuzzy energy algorithm is first used to recognize the bone block contours of the multiple vertebrae. An approximately rectangular vertebra produces four point clusters, but in practice, because vertebral shapes vary or the edges of the segmented mask are not smooth, the number of point clusters may be greater than or equal to four. The Harris corner detection algorithm only provides the positions of points, not point-cluster information; that is, it does not indicate which points belong to the same cluster. Therefore, the Harris corner detection step is given a relatively loose threshold so that each vertebra produces N point clusters (N is not necessarily equal to 4), each cluster containing 20 to 40 points, for a total of 100 to 200 points. A density-based spatial clustering of applications with noise (DBSCAN) algorithm is then used to divide these 100 to 200 points into N point clusters, and the center point of each cluster is taken as a corner point. For most vertebrae (other than the sacrum) N=4; in the few cases where N is not equal to 4, a minimum bounding rectangle algorithm is used to remove redundant vertebral corner points or fill in missing ones, and four vertebral corner points are output. To improve the efficiency of the algorithm, the four output corner points are sorted counterclockwise.
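The corner pipeline just described (Harris candidates, DBSCAN grouping, minimum-bounding-rectangle fallback, counterclockwise ordering) could be sketched with OpenCV and scikit-learn roughly as follows; the thresholds, `eps`, and `min_samples` values are illustrative assumptions, and the fuzzy-energy contour step is not reproduced here.

```python
import cv2
import numpy as np
from sklearn.cluster import DBSCAN

def vertebra_corners(mask):
    """Estimate four corner points of one binary vertebra mask (0/255).

    Returns a (4, 2) array of (x, y) corners sorted by angle around the
    centroid (counterclockwise in array coordinates).
    """
    # 1. Harris response with a permissive threshold, yielding clusters
    #    of candidate points near each true corner.
    response = cv2.cornerHarris(np.float32(mask), blockSize=5, ksize=3, k=0.04)
    ys, xs = np.where(response > 0.01 * response.max())
    candidates = np.stack([xs, ys], axis=1).astype(float)

    # 2. Group candidates into point clusters with DBSCAN and keep each
    #    cluster's centre as one candidate corner.
    cluster_ids = DBSCAN(eps=5, min_samples=3).fit_predict(candidates)
    centers = np.array([candidates[cluster_ids == cid].mean(axis=0)
                        for cid in set(cluster_ids) if cid != -1])

    # 3. If the cluster count is not four, fall back to the minimum
    #    bounding rectangle of the mask to fill in / discard corners.
    if len(centers) != 4:
        contour = np.stack(np.where(mask > 0), axis=1)[:, ::-1].astype(np.float32)
        centers = cv2.boxPoints(cv2.minAreaRect(contour))

    # 4. Sort the four corners by angle around the centroid.
    centroid = centers.mean(axis=0)
    angles = np.arctan2(centers[:, 1] - centroid[1], centers[:, 0] - centroid[0])
    return centers[np.argsort(angles)]
```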
  • It should be noted that the server performs corner detection on the masks: key points detected on a mask are not affected by image ghosting or low resolution and are therefore more accurate. The detected points are then mapped from the mask to the corresponding positions in the target spine image.
  • The server synthesizes the N bone block contours, the N*4 vertebral corner points, and the target spine image to generate a synthetic vertebral image, in which each vertebral mask is superimposed on the corresponding vertebra in the target spine image. In the generated synthetic vertebral image, the colored areas are the masks output by the segmentation model, with different colors representing different category labels, and the gray area is the target spine image serving as the background. The model only outputs the masks and their corresponding categories; for convenience of display, the generated masks are drawn onto the target spine image.
  • The server extracts multiple small images from the synthetic vertebral image, each small image including information of a target area. The target area may include an intervertebral disc, a vertebra, the sacrum, vertebral corner points, a pair of vertebrae, and so on. Depending on the detection target, a small image may contain one or two vertebrae, an intervertebral disc, or the sacrum (different small images are extracted for different detection targets). For example, on one synthetic vertebral image, assuming segmentation yields N bone block contours and N*4 corner points, the following can be extracted: N small images each containing one vertebra, (N-1) small images each containing two adjacent vertebrae, N*4 small images containing vertebral corner points, (N-1) small images containing intervertebral discs, and 1 sacrum small image (only when the sacrum is present).
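A rough sketch of this patch-extraction step, assuming the per-vertebra corner points computed earlier are available and ordered from the uppermost to the lowermost vertebra; the margins and the centroid-based approximation of the disc region are illustrative choices, not the application's exact cropping rule.

```python
import numpy as np

def crop(image, points, margin=16):
    """Crop an axis-aligned patch around a set of (x, y) points."""
    h, w = image.shape[:2]
    x0, y0 = np.min(points, axis=0).astype(int) - margin
    x1, y1 = np.max(points, axis=0).astype(int) + margin
    return image[max(y0, 0):min(y1, h), max(x0, 0):min(x1, w)]

def extract_patches(image, corners_per_vertebra):
    """Build single-vertebra, adjacent-pair, disc and corner patches.

    corners_per_vertebra : list of (4, 2) arrays, top to bottom.
    """
    singles = [crop(image, c) for c in corners_per_vertebra]
    pairs, discs = [], []
    for upper, lower in zip(corners_per_vertebra, corners_per_vertebra[1:]):
        pairs.append(crop(image, np.vstack([upper, lower])))  # two adjacent vertebrae
        # Disc patch: the gap between the two vertebral bodies, approximated
        # here by a crop around their centroids.
        centroids = np.vstack([upper.mean(axis=0), lower.mean(axis=0)])
        discs.append(crop(image, centroids, margin=24))
    corner_patches = [crop(image, c[i:i + 1], margin=24)
                      for c in corners_per_vertebra for i in range(4)]
    return singles, pairs, discs, corner_patches
```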
  • The server recognizes the multiple small images through a preset sign classification model and generates a recognition result. The sign classification model may be a deep residual network (ResNet) model. The server separates out the small images that contain disease signs, outputs the positions of these disease small images in the synthetic vertebral image, and outputs the recognition result. The recognition result is a qualitative analysis result. In addition, using the position information of the bone block edges and corner points, the server calculates parameters such as the bone block center point, corner point offsets, and intervertebral disc thickness, and can further calculate parameters such as vertebral thickness, thoracic kyphosis curvature, lumbar lordosis curvature, sacral slope, and sagittal axial distance, assisting doctors in quantitative analysis.
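The quantitative parameters mentioned here are derived from corner-point geometry. The helpers below sketch a generic Cobb-style angle between two endplate lines and a simple disc-thickness estimate in image coordinates; which endplates feed the thoracic kyphosis, lumbar lordosis, or sacral slope measurements is a clinical convention not specified in the text, so these functions are only assumed building blocks.

```python
import numpy as np

def endplate_angle_deg(p_left, p_right):
    """Angle (degrees) of the line through two endplate corner points."""
    dx, dy = p_right[0] - p_left[0], p_right[1] - p_left[1]
    return np.degrees(np.arctan2(dy, dx))

def cobb_like_angle(upper_endplate, lower_endplate):
    """Unsigned angle between two endplate lines, each given as a pair of
    (x, y) corner points; usable for kyphosis/lordosis-style measures."""
    a = endplate_angle_deg(*upper_endplate)
    b = endplate_angle_deg(*lower_endplate)
    diff = abs(a - b) % 180.0
    return min(diff, 180.0 - diff)

def disc_thickness(upper_corners, lower_corners):
    """Mean vertical gap between the bottom edge of the upper vertebra and
    the top edge of the lower vertebra (image y-axis points downward)."""
    bottom_of_upper = upper_corners[np.argsort(upper_corners[:, 1])[-2:]]
    top_of_lower = lower_corners[np.argsort(lower_corners[:, 1])[:2]]
    return float(top_of_lower[:, 1].mean() - bottom_of_upper[:, 1].mean())
```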
  • vertebrae segmentation and disease sign classification are respectively realized through two different deep networks, which improves the accuracy of vertebrae segmentation and improves the specificity and sensitivity of sign recognition.
  • Referring to FIG. 2, another flowchart of the artificial intelligence-based spine image processing method provided by an embodiment of the present application specifically includes:
  • the server obtains the original spine image, and the original spine image is a sagittal radiographic image of the spine.
  • The sagittal plane is the anatomical plane that divides the human body into left and right halves; any plane parallel to this plane is also a sagittal plane, and the one in the midline position is called the median sagittal section. The sagittal plane is defined relative to the coronal plane and the horizontal plane: the coronal plane is the longitudinal section running left to right that divides the body into front and rear parts and is perpendicular to both the sagittal and horizontal planes; the horizontal plane, also called the transverse plane, is parallel to the ground, divides the body into upper and lower parts, and is perpendicular to both the coronal and sagittal planes.
  • It should be understood that the execution entity of this application may be an artificial intelligence-based spine image processing apparatus, or a terminal or a server, which is not specifically limited here. The embodiments of the present application take a server as the execution entity as an example for description.
  • The server processes the pixel size of the original spine image to obtain the processed first spine image. Specifically, the server removes black borders from the original spine image, crops the de-bordered spine image, and resizes the cropped image to obtain the processed first spine image. For example, suppose the input size required by the model is N*N pixels. When the input original spine image (a DICOM file) is 512*512 pixels, the server first checks whether the image has a border; if a border 77 pixels thick is found around the image, the server first cuts off the border, changing the image size to 435*435 pixels, and then scales the de-bordered image proportionally to N*N pixels. As another example, when the input original spine image is 888*678 pixels, the server first checks whether the image has a border; if there is no border, the server crops the image to 678*678 pixels and then scales the cropped image proportionally to N*N pixels. It should be noted that after this consistency adjustment the spine occupies the main body of the target spine image, and the image is stretched proportionally to a uniform N*N size without borders.
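A minimal sketch of this size normalization, assuming a 2-D grayscale input; the border-detection threshold and the center-crop policy are illustrative assumptions (the text itself only specifies removing the border, cropping to a square, and proportional scaling to N*N).

```python
import cv2
import numpy as np

def preprocess_size(image, target=512, border_thresh=5):
    """Remove a uniform dark border, crop to a square, and resize.

    image  : 2-D grayscale array (the original sagittal slice).
    target : model input size N (assumed here to be 512).
    """
    # 1. Remove black borders: keep rows/columns with any signal above threshold.
    signal = image > border_thresh
    rows, cols = np.any(signal, axis=1), np.any(signal, axis=0)
    image = image[rows][:, cols]

    # 2. Centre-crop the longer axis so the image becomes square.
    h, w = image.shape
    side = min(h, w)
    top, left = (h - side) // 2, (w - side) // 2
    image = image[top:top + side, left:left + side]

    # 3. Scale to the model's N x N input size.
    return cv2.resize(image, (target, target), interpolation=cv2.INTER_LINEAR)
```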
  • The server adjusts the parameters of the first spine image to generate the target spine image. Specifically, the server determines the number of image channels of the first spine image and adjusts the window width and window level of the first spine image according to the number of channels to generate the target spine image. For example, the server first determines the number of image channels of the first spine image: if the number of channels is 1, the first spine image is a grayscale image; if the number of channels is 3, it is an RGB image. It is understood that medical images are generally single-channel images, that is, the number of image channels is 1, which will not be repeated here; the following takes a channel number of 1 as an example. For example, suppose the gray values of the first spine image range from -638 to 904. The server first computes the grayscale histogram of the first spine image, then computes the area between the histogram and the coordinate axis, takes the ratio of the enveloped area to the total area as thresholds (including an upper area threshold and a lower area threshold), and intercepts the image within the target gray-value interval. Assuming the upper area threshold is 0.01 and the lower area threshold is 0.6, the histogram area remaining after removing the upper and lower thresholds is intercepted; for example, the gray values in the interval 209 to 685 of the grayscale histogram are intercepted. The specific calculation is: window width of the CT image = (685-209)/2; window level of the CT image = (209+685)/2. The intercepted gray values are then stretched uniformly to the interval 0 to 255, completing the window width and window level adjustment.
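The histogram-driven window adjustment could look roughly like the following, assuming the "enveloped area" thresholds are interpreted as cumulative-histogram fractions; the 0.01/0.6 values follow the example in the text, but the exact area definition used in the application may differ.

```python
import numpy as np

def window_by_histogram(image, lower_frac=0.01, upper_frac=0.60, bins=1024):
    """Pick a grey-value interval from the cumulative histogram and stretch
    it linearly to 0-255 (window width / window level adjustment)."""
    hist, edges = np.histogram(image, bins=bins)
    cdf = np.cumsum(hist) / hist.sum()
    lo = edges[np.searchsorted(cdf, lower_frac)]
    hi = edges[np.searchsorted(cdf, upper_frac)]

    width, level = (hi - lo) / 2.0, (hi + lo) / 2.0   # as in the text's formula
    windowed = np.clip(image, lo, hi)
    stretched = ((windowed - lo) / max(hi - lo, 1e-6) * 255.0).astype(np.uint8)
    return stretched, width, level
```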
  • The server segments each vertebra in the target spine image through a preset segmentation model to generate multiple vertebral masks, each vertebral mask corresponding to a different vertebra.
  • Each vertebral mask corresponds to a mask category label, and different colors, numbers, or letters can be used to distinguish the category labels. The segmentation model may be a Mask R-CNN model. For vertebrae in different positions, such as the sacral, lumbar, and thoracic vertebrae, different model training strategies are used in advance so that the segmentation model can accurately segment vertebrae at different positions. The segmentation model is a neural network model, and the training process of the model is existing technology, which will not be repeated here.
  • The model training strategy includes: 1. The sacrum and the other vertebrae are treated as different categories when training the segmentation model; 2. Because the fifth lumbar vertebra is adjacent to the sacrum while the remaining vertebrae are adjacent only to other vertebrae, the fifth lumbar vertebra is also treated as a separate category; 3. Lumbar vertebrae 4 to 1 and the thoracic vertebrae are treated as one category; 4. The cervical vertebrae are treated as one category. For example, each vertebra appearing in the target spine image is segmented separately. Suppose sacrum 1, lumbar vertebrae 5 to 1, and thoracic vertebrae 12 to 11 appear in one image. After segmentation: the mask label of sacrum 1 is category one (red), the mask label of lumbar vertebra 5 is category two (green), and the mask labels of lumbar vertebrae 4 to 1 and thoracic vertebrae 12 to 11 are category three (blue). The vertebral masks with different labels are merged into a single output, and different vertebrae are distinguished by their labels.
  • Specifically, the server judges through a preset first segmentation model whether a sacrum exists in the target spine image; if a sacrum exists, the server separates the sacrum, generates a sacral vertebral mask, and marks it as category one. The server judges through a preset second segmentation model whether a fifth lumbar vertebra adjacent to the sacrum exists in the target spine image; if so, the server separates the fifth lumbar vertebra, generates a fifth-lumbar vertebral mask, and marks it as category two. The server judges through a preset third segmentation model whether the target spine image contains thoracic vertebrae and a first, second, third, or fourth lumbar vertebra connected in sequence with the thoracic vertebrae; if so, the server separates the existing thoracic vertebrae and first, second, third, or fourth lumbar vertebrae, generates the corresponding thoracic or lumbar vertebral masks, and marks them as category three. The server judges through a preset fourth segmentation model whether a cervical vertebra exists in the target spine image; if so, the server separates the cervical vertebrae, generates a cervical vertebral mask, and marks it as category four.
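For illustration only, the sketch below folds the preset first-to-fourth segmentation models described above into a single multi-class Mask R-CNN using torchvision's implementation; the weights file name and score threshold are hypothetical, and the application does not prescribe this particular library or a single-model design.

```python
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

# Category labels used in the text: 1 = sacrum, 2 = fifth lumbar vertebra,
# 3 = remaining lumbar + thoracic vertebrae, 4 = cervical vertebrae
# (0 = background).
model = maskrcnn_resnet50_fpn(num_classes=5)
model.load_state_dict(torch.load("vertebra_maskrcnn.pth"))  # hypothetical weights file
model.eval()

@torch.no_grad()
def segment_vertebrae(image_tensor, score_thresh=0.5):
    """Run the model on one preprocessed 3xHxW float tensor in [0, 1]
    (a grayscale slice repeated to three channels) and return binary
    masks with their category labels."""
    output = model([image_tensor])[0]
    keep = output["scores"] > score_thresh
    masks = (output["masks"][keep, 0] > 0.5).cpu().numpy()   # N x H x W booleans
    labels = output["labels"][keep].cpu().numpy()            # category per mask
    return masks, labels
```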
  • the server performs vertebral contour recognition and corner detection on multiple vertebral masks through a preset clustering algorithm to obtain N bone block contours and N*4 vertebral corner points, where N is greater than or equal to 1.
  • Specifically, the server recognizes the bone block contours of the N vertebrae through a preset fuzzy energy algorithm; obtains M candidate points for each vertebra through a preset Harris corner detection algorithm, where M is greater than or equal to 4; divides the M candidate points into P point clusters through the density-based spatial clustering of applications with noise (DBSCAN) algorithm; calculates the center point of each of the P point clusters and determines the P center points as P vertebral corner points; and removes redundant vertebral corner points or fills in missing ones through a minimum bounding rectangle algorithm to obtain the N*4 vertebral corner points.
  • For example, a preset fuzzy energy algorithm is first used to recognize the bone block contours of the multiple vertebrae. An approximately rectangular vertebra produces four point clusters, but in practice, because vertebral shapes vary or the edges of the segmented mask are not smooth, the number of point clusters may be greater than or equal to four. The Harris corner detection algorithm only provides the positions of points, not point-cluster information; that is, it does not indicate which points belong to the same cluster. Therefore, the Harris corner detection step is given a relatively loose threshold so that each vertebra produces N point clusters (N is not necessarily equal to 4), each cluster containing 20 to 40 points, for a total of 100 to 200 points. The DBSCAN algorithm is then used to divide these 100 to 200 points into N point clusters, and the center point of each cluster is taken as a corner point. For most vertebrae (other than the sacrum) N=4; in the few cases where N is not equal to 4, a minimum bounding rectangle algorithm is used to remove redundant vertebral corner points or fill in missing ones, and four vertebral corner points are output. To improve the efficiency of the algorithm, the four output corner points are sorted counterclockwise.
  • It should be noted that the server performs corner detection on the masks: key points detected on a mask are not affected by image ghosting or low resolution and are therefore more accurate. The detected points are then mapped from the mask to the corresponding positions in the target spine image.
  • The server synthesizes the N bone block contours, the N*4 vertebral corner points, and the target spine image to generate a synthetic vertebral image, in which each vertebral mask is superimposed on the corresponding vertebra in the target spine image. In the generated synthetic vertebral image, the colored areas are the masks output by the segmentation model, with different colors representing different category labels, and the gray area is the target spine image serving as the background. The model only outputs the masks and their corresponding categories; for convenience of display, the generated masks are drawn onto the target spine image.
  • The server extracts multiple small images from the synthetic vertebral image, each small image including information of a target area. The target area may include an intervertebral disc, a vertebra, the sacrum, vertebral corner points, a pair of vertebrae, and so on. Depending on the detection target, a small image may contain one or two vertebrae, an intervertebral disc, or the sacrum (different small images are extracted for different detection targets). For example, on one synthetic vertebral image, assuming segmentation yields N bone block contours and N*4 corner points, the following can be extracted: N small images each containing one vertebra, (N-1) small images each containing two adjacent vertebrae, N*4 small images containing vertebral corner points, (N-1) small images containing intervertebral discs, and 1 sacrum small image (only when the sacrum is present).
  • The server recognizes the multiple small images through a preset sign classification model and generates a recognition result. The sign classification model may be a deep residual network (ResNet) model. The server separates out the small images that contain disease signs, outputs the positions of these disease small images in the synthetic vertebral image, and outputs the recognition result. The recognition result is a qualitative analysis result. In addition, using the position information of the bone block edges and corner points, the server calculates parameters such as the bone block center point, corner point offsets, and intervertebral disc thickness, and can further calculate parameters such as vertebral thickness, thoracic kyphosis curvature, lumbar lordosis curvature, sacral slope, and sagittal axial distance, assisting doctors in quantitative analysis.
  • It should be noted that, before invoking the preset sign classification model, the server trains the sign classification model using the extracted corresponding small images together with the gold standard of a clinical spine disease sign database annotated by doctors.
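A minimal sketch of how such a sign classification model might be built and trained on the extracted small images, using a torchvision ResNet-18 as a stand-in for the deep residual network mentioned in the text; the class count, optimizer choice, and training-loop details are assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

def build_sign_classifier(num_signs):
    """ResNet-based sign classifier (a small stand-in for the deep
    residual network mentioned in the text)."""
    model = resnet18(weights=None)  # torchvision >= 0.13; use pretrained=False on older versions
    model.fc = nn.Linear(model.fc.in_features, num_signs)
    return model

def train_step(model, patches, labels, optimizer, criterion=nn.CrossEntropyLoss()):
    """One training step on a batch of extracted patch images
    (patches: Bx3xHxW float tensor, labels: B long tensor of sign classes)."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(patches), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```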
  • The artificial intelligence-based spine image processing method in the embodiments of the present application is described above; the artificial intelligence-based spine image processing apparatus in the embodiments of the present application is described below. Referring to FIG. 3, an embodiment of the artificial intelligence-based spine image processing apparatus in the embodiments of the present application includes:
  • the first acquiring unit 301, configured to acquire an original spine image, where the original spine image is a sagittal radiographic image of the spine;
  • the preprocessing unit 302, configured to preprocess the original spine image to generate a target spine image;
  • the segmentation unit 303, configured to segment each vertebra in the target spine image through a preset segmentation model to generate multiple vertebral masks, each vertebral mask corresponding to a different vertebra;
  • the recognition and detection unit 304, configured to perform vertebral contour recognition and corner point detection on the multiple vertebral masks through a preset clustering algorithm to obtain N bone block contours and N*4 vertebral corner points, where N is greater than or equal to 1;
  • the synthesis unit 305, configured to synthesize the N bone block contours, the N*4 vertebral corner points, and the target spine image to generate a synthetic vertebral image;
  • the extraction unit 306, configured to extract multiple small images from the synthetic vertebral image, each small image including information of a target area;
  • the recognition generating unit 307, configured to recognize the multiple small images through a preset sign classification model and generate a recognition result.
  • vertebrae segmentation and disease sign classification are respectively realized through two different deep networks, which improves the accuracy of vertebrae segmentation and improves the specificity and sensitivity of sign recognition.
  • Referring to FIG. 4, another embodiment of the artificial intelligence-based spine image processing apparatus in the embodiments of the present application includes:
  • the first acquiring unit 301, configured to acquire an original spine image, where the original spine image is a sagittal radiographic image of the spine;
  • the preprocessing unit 302, configured to preprocess the original spine image to generate a target spine image;
  • the segmentation unit 303, configured to segment each vertebra in the target spine image through a preset segmentation model to generate multiple vertebral masks, each vertebral mask corresponding to a different vertebra;
  • the recognition and detection unit 304, configured to perform vertebral contour recognition and corner point detection on the multiple vertebral masks through a preset clustering algorithm to obtain N bone block contours and N*4 vertebral corner points, where N is greater than or equal to 1;
  • the synthesis unit 305, configured to synthesize the N bone block contours, the N*4 vertebral corner points, and the target spine image to generate a synthetic vertebral image;
  • the extraction unit 306, configured to extract multiple small images from the synthetic vertebral image, each small image including information of a target area;
  • the recognition generating unit 307, configured to recognize the multiple small images through a preset sign classification model and generate a recognition result.
  • the preprocessing unit 302 includes:
  • the processing module 3021 is configured to process the pixel size of the original spine image to obtain a processed first spine image
  • the adjustment module 3022 is configured to adjust the parameters of the first spine image to generate a target spine image.
  • Optionally, the processing module 3021 is specifically configured to: remove black borders from the original spine image; crop the de-bordered spine image; and resize the cropped spine image to obtain the processed first spine image.
  • Optionally, the adjustment module 3022 is specifically configured to: determine the number of image channels of the first spine image; and adjust the window width and window level of the first spine image according to the number of image channels to generate the target spine image.
  • Optionally, the segmentation unit 303 is specifically configured to: judge through a preset first segmentation model whether a sacrum exists in the target spine image, and if so, separate the sacrum, generate a sacral vertebral mask, and mark it as category one; judge through a preset second segmentation model whether a fifth lumbar vertebra adjacent to the sacrum exists in the target spine image, and if so, separate the fifth lumbar vertebra, generate a fifth-lumbar vertebral mask, and mark it as category two; judge through a preset third segmentation model whether the target spine image contains thoracic vertebrae and a first, second, third, or fourth lumbar vertebra connected in sequence with the thoracic vertebrae, and if so, separate the existing thoracic vertebrae and first, second, third, or fourth lumbar vertebrae, generate the corresponding thoracic or lumbar vertebral masks, and mark them as category three; and judge through a preset fourth segmentation model whether a cervical vertebra exists in the target spine image, and if so, separate the cervical vertebrae, generate a cervical vertebral mask, and mark it as category four.
  • Optionally, the recognition and detection unit 304 is specifically configured to: recognize the bone block contours of the N vertebrae through a preset fuzzy energy algorithm; obtain M candidate points for each vertebra through a preset Harris corner detection algorithm, where M is greater than or equal to 4; divide the M candidate points into P point clusters through the DBSCAN density-based clustering algorithm; calculate the center point of each of the P point clusters and determine the P center points as P vertebral corner points; and remove redundant vertebral corner points or fill in missing ones through a minimum bounding rectangle algorithm to obtain the N*4 vertebral corner points.
  • Optionally, the recognition generating unit 307 is specifically configured to: invoke a preset deep residual network model to recognize the multiple small images; separate out the disease small images containing disease signs; and determine the position of each disease small image on the synthetic vertebral image and output the recognition result, the recognition result including the vertebral center points, corner point offsets, and intervertebral disc thickness.
  • In the embodiments of the present application, an original spine image is obtained, the original spine image being a sagittal radiographic image of the spine; the pixel size of the original spine image is processed to obtain a processed first spine image; the parameters of the first spine image are adjusted to generate a target spine image; each vertebra in the target spine image is segmented through a preset segmentation model to generate multiple vertebral masks, each vertebral mask corresponding to a different vertebra; vertebral contour recognition and corner point detection are performed on the multiple vertebral masks through a preset clustering algorithm to obtain N bone block contours and N*4 vertebral corner points, where N is greater than or equal to 1; the N bone block contours, the N*4 vertebral corner points, and the target spine image are synthesized to generate a synthetic vertebral image; multiple small images are extracted from the synthetic vertebral image, each small image including information of a target area; and the multiple small images are recognized through a preset sign classification model to generate a recognition result. In the embodiments of the present application, multi-modal, multi-size spinal medical images are input on the basis of deep neural networks, the consistency of different types of images is enhanced through preprocessing, and two different deep networks are used to perform vertebral segmentation and disease-sign classification separately; on the basis of the vertebral segmentation, multiple kinds of spinal signs are further recognized, which improves the accuracy of vertebral segmentation and the specificity and sensitivity of sign recognition.
  • FIGS. 3 and 4 above describe the artificial intelligence-based spine image processing apparatus in the embodiments of the present application in detail from the perspective of modular functional entities; the artificial intelligence-based spine image processing device in the embodiments of the present application is described in detail below from the perspective of hardware processing.
  • FIG. 5 is a schematic structural diagram of an artificial intelligence-based spine image processing device provided by an embodiment of the present application. The artificial intelligence-based spine image processing device 500 may vary considerably in configuration or performance, and may include one or more processors (central processing units, CPU) 501 (for example, one or more processors), a memory 509, and one or more storage media 508 storing application programs 507 or data 506 (for example, one or more mass storage devices).
  • the memory 509 and the storage medium 508 may be short-term storage or persistent storage.
  • the program stored in the storage medium 508 may include one or more modules (not shown in the figure), and each module may include a series of instruction operations on the artificial intelligence-based spinal image processing device.
  • the processor 501 may be configured to communicate with the storage medium 508, and execute a series of instruction operations in the storage medium 508 on the artificial intelligence-based spinal image processing device 500.
  • The artificial intelligence-based spine image processing device 500 may also include one or more power supplies 502, one or more wired or wireless network interfaces 503, one or more input/output interfaces 504, and/or one or more operating systems 505, such as Windows Server, Mac OS X, Unix, Linux, FreeBSD, and so on. Those skilled in the art will understand that the device structure shown in FIG. 5 does not constitute a limitation on the artificial intelligence-based spine image processing device, which may include more or fewer components than shown, combine certain components, or use a different arrangement of components.
  • the processor 501 can perform the functions of the first acquisition unit 301, the preprocessing unit 302, the segmentation unit 303, the recognition detection unit 304, the synthesis unit 305, the extraction unit 306, and the recognition generation unit 307 in the foregoing embodiment.
  • the processor 501 is the control center of the artificial intelligence-based spinal image processing equipment, and can perform processing in accordance with the set artificial intelligence-based spinal image processing method.
  • The processor 501 uses various interfaces and lines to connect the various parts of the entire artificial intelligence-based spine image processing device, and performs the various functions and data processing of the device by running or executing the software programs and/or modules stored in the memory 509 and invoking the data stored in the memory 509, thereby realizing vertebral segmentation and disease-sign classification.
  • Both the storage medium 508 and the memory 509 are carriers for storing data. In the embodiments of the present application, the storage medium 508 may refer to internal memory with a small storage capacity but high speed, while the memory 509 may be external memory with a large storage capacity but lower speed.
  • the memory 509 may be used to store software programs and modules.
  • the processor 501 executes various functional applications and data processing of the spine image processing device 500 based on artificial intelligence by running the software programs and modules stored in the memory 509.
  • The memory 509 may mainly include a program storage area and a data storage area. The program storage area may store the operating system and the application program required by at least one function (for example, segmenting each vertebra in the target spine image through a preset segmentation model to generate multiple vertebral masks, each vertebral mask corresponding to a different vertebra); the data storage area may store data created according to the use of the artificial intelligence-based spine image processing device (such as synthetic vertebral images). The artificial intelligence-based spine image processing method program provided in the embodiments of the present application and the received data streams are stored in the memory, and the processor 501 calls them from the memory 509 when needed.
  • The memory 509 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
  • a non-volatile memory such as at least one magnetic disk storage device, a flash memory device, or other volatile solid-state storage devices.
  • the present application also provides a computer-readable storage medium, which may be a non-volatile computer-readable storage medium, and the computer-readable storage medium may also be a volatile computer-readable storage medium.
  • The computer-readable storage medium stores instructions that, when run on a computer, cause the computer to execute the following steps of the artificial intelligence-based spine image processing method: obtaining an original spine image, the original spine image being a sagittal radiographic image of the spine; preprocessing the original spine image to generate a target spine image; segmenting each vertebra in the target spine image through a preset segmentation model to generate multiple vertebral masks, each vertebral mask corresponding to a different vertebra; performing vertebral contour recognition and corner point detection on the multiple vertebral masks through a preset clustering algorithm to obtain N bone block contours and N*4 vertebral corner points, where N is greater than or equal to 1; synthesizing the N bone block contours, the N*4 vertebral corner points, and the target spine image to generate a synthetic vertebral image; extracting multiple small images from the synthetic vertebral image, each small image including information of a target area; and recognizing the multiple small images through a preset sign classification model to generate a recognition result.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable devices.
  • The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired means (such as coaxial cable, optical fiber, or twisted pair) or wireless means (such as infrared, radio, or microwave).
  • The computer-readable storage medium may be any available medium that a computer can access, or a data storage device, such as a server or data center, integrated with one or more available media.
  • the usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, and a magnetic tape), an optical medium (for example, an optical disc), or a semiconductor medium (for example, a solid state disk (SSD)).

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Analysis (AREA)

Abstract

An artificial intelligence-based spine image processing method and related device, which improve the accuracy of vertebral segmentation and the specificity and sensitivity of sign recognition. The method includes: obtaining an original spine image; preprocessing the original spine image to generate a target spine image (102); segmenting each vertebra in the target spine image through a preset segmentation model to generate multiple vertebral masks; performing vertebral contour recognition and corner point detection on the multiple vertebral masks through a preset clustering algorithm to obtain N bone block contours and N*4 vertebral corner points; synthesizing the N bone block contours, the N*4 vertebral corner points, and the target spine image to generate a synthetic vertebral image (105); extracting multiple small images from the synthetic vertebral image, each small image including information of a target area (106); and recognizing the multiple small images through a preset sign classification model to generate a recognition result (107).

Description

基于人工智能的脊柱影像处理方法及相关设备
本申请要求于2019年8月1日提交中国专利局、申请号为201910706559.9、发明名称为“基于人工智能的脊柱影像处理方法及相关设备”的中国专利申请的优先权,其全部内容通过引用结合在申请中。
技术领域
本申请涉及人工智能领域,尤其涉及一种基于人工智能的脊柱影像处理方法及相关设备。
背景技术
脊柱矢状位影像在评估脊柱功能,诊断脊柱相关疾病上有重要临床意义,具体表现在:1、脊柱矢状位数据来源范围广,包括X光,电子计算机断层扫描(computed tomography,CT)和磁共振成像(magnetic resonance imaging,MRI)等多种扫描方式提供矢状位影像。2、矢状位图像在疾病诊断上应用范围广,涉及骨折、滑脱、骨质增生等多种临床相关征象,涵盖骶椎、腰椎、胸椎、颈椎多个区域的病变。3、和患者预后和生活质量密切相关,临床上会使用矢状位图像提取的参数定量评估脊柱手术后患者恢复情况。
目前市场上,脊椎矢状位影像分析方法使用传统图像学方法,如边缘检测等,去定位椎骨的边缘和角等位置,发明人意识到这种方法精度低,受图像质量影像大,特别对于病变区域的边线和点识别成功率会大幅下降。
发明内容
本申请提供了一种基于人工智能的脊柱影像处理方法及相关设备,用于通过两个不同的深度网络分别实现椎骨分割和疾病征象分类,提高了对椎骨的分割准确率,提高了征象识别的特异性和敏感性。
本申请实施例的第一方面提供一种基于人工智能的脊柱影像处理方法,包括:获取原始脊柱影像,所述原始脊柱影像为脊柱的矢状位放射影像;对所述原始脊柱影像进行预处理,生成目标脊柱影像;通过预置的分割模型对所述目标脊柱影像中每个椎骨进行分割,生成多个椎骨掩膜,每个椎骨掩膜对应一个不同的椎骨;通过预置的聚类算法对所述多个椎骨掩膜进行椎骨轮廓识别和角点检测,得到N个骨块轮廓和N*4个椎骨角点,N大于或等于1;将所述N个骨块轮廓、所述N*4个椎骨角点和所述目标脊柱影像进行合成,生成合成椎骨图像;从所述合成椎骨图像中提取多个小块图像,每个小块图像包括目标区域的信息;通过预置的征象分类模型对所述多个小块图像进行识别,生成识别结果。
本申请实施例的第二方面提供了一种基于人工智能的脊柱影像处理装置,包括:第一获取单元,用于获取原始脊柱影像,所述原始脊柱影像为脊柱的矢状位放射影像;预处理单元,用于对所述原始脊柱影像进行预处理,生成目标脊柱影像;分割单元,用于通过预置的分割模型对所述目标脊柱影像中每个椎骨进行分割,生成多个椎骨掩膜,每个椎骨掩膜对应一个不同的椎骨;识别检测单元,用于通过预置的聚类算法对所述多个椎骨掩膜进行椎骨轮廓识别和角点检测,得到N个骨块轮廓和N*4个椎骨角点,N大于或等于1;合成单元,用于将所述N个骨块轮廓、所述N*4个椎骨角点和所述目标脊柱影像进行合成,生 成合成椎骨图像;提取单元,用于从所述合成椎骨图像中提取多个小块图像,每个小块图像包括目标区域的信息;识别生成单元,用于通过预置的征象分类模型对所述多个小块图像进行识别,生成识别结果。
本申请实施例的第三方面提供了一种基于人工智能的脊柱影像处理设备,包括存储器、处理器及存储在所述存储器上并可在所述处理器上运行的计算机程序,所述处理器执行所述计算机程序时实现上述基于人工智能的脊柱影像处理方法。
本申请实施例的第四方面提供了一种计算机可读存储介质,所述计算机可读存储介质中存储有指令,当所述指令在计算机上运行时,使得计算机执行上述基于人工智能的脊柱影像处理方法的步骤。
本申请实施例提供的技术方案中,获取原始脊柱影像,原始脊柱影像为脊柱的矢状位放射影像;对原始脊柱影像进行预处理,生成目标脊柱影像;通过预置的分割模型对目标脊柱影像中每个椎骨进行分割,生成多个椎骨掩膜,每个椎骨掩膜对应一个不同的椎骨;通过预置的聚类算法对多个椎骨掩膜进行椎骨轮廓识别和角点检测,得到N个骨块轮廓和N*4个椎骨角点,N大于或等于1;将N个骨块轮廓、N*4个椎骨角点和目标脊柱影像进行合成,生成合成椎骨图像;从合成椎骨图像中提取多个小块图像,每个小块图像包括目标区域的信息;通过预置的征象分类模型对多个小块图像进行识别,生成识别结果。本申请实施例,通过两个不同的深度网络分别实现椎骨分割和疾病征象分类,提高了对椎骨的分割准确率,提高了征象识别的特异性和敏感性。
附图说明
图1为本申请实施例中基于人工智能的脊柱影像处理方法的一个实施例示意图;
图2为本申请实施例中基于人工智能的脊柱影像处理方法的另一个实施例示意图;
图3为本申请实施例中基于人工智能的脊柱影像处理装置的一个实施例示意图;
图4为本申请实施例中基于人工智能的脊柱影像处理装置的另一个实施例示意图;
图5为本申请实施例中基于人工智能的脊柱影像处理设备的一个实施例示意图。
具体实施方式
本申请提供了一种基于人工智能的脊柱影像处理方法及相关设备,用于通过两个不同的深度网络分别实现椎骨分割和疾病征象分类,提高了对椎骨的分割准确率,提高了征象识别的特异性和敏感性。
为了使本技术领域的人员更好地理解本申请方案,下面将结合本申请实施例中的附图,对本申请实施例进行描述。
本申请的说明书和权利要求书及上述附图中的术语“第一”、“第二”、“第三”、“第四”等(如果存在)是用于区别类似的对象,而不必用于描述特定的顺序或先后次序。应该理解这样使用的数据在适当情况下可以互换,以便这里描述的实施例能够以除了在这里图示或描述的内容以外的顺序实施。此外,术语“包括”或“具有”及其任何变形,意图在于覆盖不排他的包含,例如,包含了一系列步骤或单元的过程、方法、系统、产品或设备不必限于清楚地列出的那些步骤或单元,而是可包括没有清楚地列出的或对于这些过程、方 法、产品或设备固有的其它步骤或单元。
请参阅图1,本申请实施例提供的基于人工智能的脊柱影像处理方法的流程图,具体包括:
101、获取原始脊柱影像,原始脊柱影像为脊柱的矢状位放射影像。
服务器获取原始脊柱影像,原始脊柱影像为脊柱的矢状位放射影像。矢状面就是把人体分成左右两面的解剖面,与这个解剖面平行的也是矢状面。处于这个位置的叫矢状位(Median sagittal section)。矢状面是相对于冠状面和水平面,其中,冠状面是指左右方向,将人体分为前后两部分的纵切面,该切面与矢状面及水平面相互垂直;水平面(horizontal plane),也称横切面,是与地平面平行将人体分为上、下两部的平面,该平面与冠状面和矢状面相互垂直。
可以理解的是,本申请的执行主体可以为基于人工智能的脊椎影像处理装置,还可以是终端或者服务器,具体此处不做限定。本申请实施例以服务器为执行主体为例进行说明。
102、对原始脊柱影像进行预处理,生成目标脊柱影像。
服务器对原始脊柱影像进行预处理,生成目标脊柱影像。具体的,服务器对原始脊柱影像的像素大小进行处理,得到处理后的第一脊柱影像;服务器对第一脊柱影像进行参数调整,生成目标脊柱影像。
其中,服务器对原始脊柱影像的像素大小进行处理,得到处理后的第一脊柱影像具体包括:对原始脊柱影像进行切除黑边处理;对切除黑边处理后的脊柱影像进行裁剪;对裁剪后的脊柱影像进行尺寸调整,得到处理后的第一脊柱影像。举例说明,假设模型的输入文件尺寸为N*N像素。
其中,服务器对第一脊柱影像进行参数调整,生成目标脊柱影像具体包括:确定第一脊柱影像的图像通道数;根据图像通道数调整第一脊柱影像的窗宽和窗位,生成目标脊柱影像。
103、通过预置的分割模型对目标脊柱影像中每个椎骨进行分割,生成多个椎骨掩膜,每个椎骨掩膜对应一个不同的椎骨。
服务器通过预置的分割模型对目标脊柱影像中每个椎骨进行分割,生成多个椎骨掩膜,每个椎骨掩膜对应一个不同的椎骨。每一个椎骨掩膜都对应有一个掩膜类别标签,可以采用不同的颜色或数字或字母对类别标签进行区分,分割模型可以为maskrcnn模型。
其中,对于不同位置的椎骨,如骶椎、腰椎、胸椎等,预先采用不同的模型训练策略进行训练,以使得分割模型能准确分割出不同位置的椎骨。分割模型为神经网络模型,模型的训练过程为现有技术,此处不再赘述。
需要说明的是,模型训练策略包括:1、将骶椎和其他椎骨作为不同的类别训练分割模型;2、由于腰椎第五节与骶椎相邻,而其余椎骨和椎骨相邻,将腰椎第五节也作为单独的一个类别;3、将腰椎4~1节和胸椎作为一个类别;4、将颈椎作为一类别。
例如,将目标脊柱影像上出现的每一个椎骨单独分割出来,假设一个图像上出现骶骨1,腰椎5~1,胸椎12~11。分割后得到:骶骨1的掩模标签为类别一(红色),腰椎5的掩模标签为类别二(绿色),腰4~1和胸椎12~11的掩模标签为类别三(蓝色),不同标签的椎 骨掩模合并为一张输出,通过不同的标签区分不同的椎骨。
104、通过预置的聚类算法对多个椎骨掩膜进行椎骨轮廓识别和角点检测,得到N个骨块轮廓和N*4个椎骨角点,N大于或等于1。
服务器通过预置的聚类算法对多个椎骨掩膜进行椎骨轮廓识别和角点检测,得到N个骨块轮廓和N*4个椎骨角点,N大于或等于1。
例如,先采用预置的模糊能量算法识别出多个椎骨的骨块轮廓,一块近似矩形的椎骨产生四团点簇,但是实际中由于椎骨形态变化,或者分割的掩膜边缘不平滑,点簇的数量可能大于或等于四。而哈里斯角点检测(Harris Coner Detection)算法只能提供点的位置信息,不能提供点簇的信息,即不会输出哪些点是属于同一个点簇。因此在哈里斯角点检测算法时,给了一个较宽的阈值,使得每块椎骨产生N个点簇(N不一定等于4),每团点簇包括20~40个点,共得到100~200个点。再使用具有噪声的基于密度的聚类算法(density-based spatial clustering of applications with noise,DBSCAN)算法将这100~200分成N个点簇。取N个点簇中心点作为角点。
可以理解的是,对于大部分椎骨(非骶椎)N=4,少数情况下,当N不等于4,采用最小外接矩形算法剔除多余的椎骨角点或者填补缺少的椎骨角点,输出四个椎骨角点。为了提高算法的效率,输出的四个椎骨角点按照逆时针进行排序。
需要说明的是,服务器在掩膜上做角点检测,在掩膜上检测关键点不受图像重影和低像素的干扰,会更准确,再把检测到的点从掩膜映射到目标脊柱影像的对应位置。
105、将N个骨块轮廓、N*4个椎骨角点和目标脊柱影像进行合成,生成合成椎骨图像。
服务器将N个骨块轮廓、N*4个椎骨角点和目标脊柱影像进行合成,生成合成椎骨图像。其中,每个椎骨掩膜叠加在与目标脊柱影像中对应椎骨的上方。
需要说明的是,生成合成椎骨图像中的彩色区域为分割模型输出的掩模,不同颜色代表不同类别标签,灰色区域为目标脊柱影像,为背景图像。模型仅输出掩模和掩模对应的类别,为了显示方便,把生成的掩模绘制在目标脊柱影像上。
106、从合成椎骨图像中提取多个小块图像,每个小块图像包括目标区域的信息。
服务器从合成椎骨图像中提取多个小块图像,每个小块图像包括目标区域的信息。目标区域可以包括椎间盘、椎骨、骶椎、椎骨角点、双椎骨等。根据检测目标不同,这个小块图像可以包括1到2个椎骨,椎间盘,或者是骶椎(根据不同的检测目标,提取不同的小块图像)。例如,在一张合成椎骨图像上,假设通过分割得到N个骨块轮廓和N*4个角点,可以提取N个包括1个椎骨的图像小块,(N-1)个包括相邻两个椎骨的小块图像,N*4个包括椎骨角点的小块图像,(N-1)个包括椎间盘的小块图像,1个骶椎小块图像(仅在骶椎存在的情况下)。
107、通过预置的征象分类模型对多个小块图像进行识别,生成识别结果。
服务器通过预置的征象分类模型对多个小块图像进行识别,生成识别结果。其中,征象分类模型可以为深度残差网络(deep residual network,ResNet)模型。服务器分离出包括疾病征象的小块图像,输出疾病小块图像在合成椎骨图像的位置,并输出识别结果。其中,识别结果为定性分析结果,服务器通过骨块边缘和角点的位置信息,计算得到骨块 的中心点、角点偏移量、间盘厚度等参数,并可进一步计算椎骨厚度、胸椎后凸曲率、腰椎前凸曲率、骶骨倾斜角、矢状面轴向距离等参数,辅助医生进行定量分析。
本申请实施例,通过两个不同的深度网络分别实现椎骨分割和疾病征象分类,提高了对椎骨的分割准确率,提高了征象识别的特异性和敏感性。
请参阅图2,本申请实施例提供的基于人工智能的脊柱影像处理方法的另一个流程图,具体包括:
201、获取原始脊柱影像,原始脊柱影像为脊柱的矢状位放射影像。
服务器获取原始脊柱影像,原始脊柱影像为脊柱的矢状位放射影像。矢状面就是把人体分成左右两面的解剖面,与这个解剖面平行的也是矢状面。处于这个位置的叫矢状位(Median sagittal section)。矢状面是相对于冠状面和水平面,其中,冠状面是指左右方向,将人体分为前后两部分的纵切面,该切面与矢状面及水平面相互垂直;水平面(horizontal plane),也称横切面,是与地平面平行将人体分为上、下两部的平面,该平面与冠状面和矢状面相互垂直。
可以理解的是,本申请的执行主体可以为基于人工智能的脊椎影像处理装置,还可以是终端或者服务器,具体此处不做限定。本申请实施例以服务器为执行主体为例进行说明。
202、对原始脊柱影像的像素大小进行处理,得到处理后的第一脊柱影像。
服务器对原始脊柱影像的像素大小进行处理,得到处理后的第一脊柱影像。具体的,服务器对原始脊柱影像进行切除黑边处理;服务器对切除黑边处理后的脊柱影像进行裁剪;服务器对裁剪后的脊柱影像进行尺寸调整,得到处理后的第一脊柱影像。举例说明,假设模型的输入文件尺寸为N*N像素。
例如,当输入的原始脊柱影像(即DICOM文件)大小为512*512像素时,服务器先检查影像有无边框,若发现影像周围有厚度为77像素的边框,则服务器首先切除边框,影像大小转为435*435像素,然后服务器再将切除边框后的影像等比例缩放为N*N像素大小。又例如,当输入的原始脊柱影像大小为888*678像素时,服务器先检查影像有无边框,若发现影像周围无边框,则服务器裁剪影像尺寸为678*678像素,然后服务器再将裁剪后的影像等比例缩放为N*N像素大小。
需要说明的是,原始脊柱影像经过一致性调整,得到的调整后的图像脊柱占据目标脊柱影像主体,影像等比例拉伸为统一N*N尺寸,无边框。
203、对第一脊柱影像进行参数调整,生成目标脊柱影像。
服务器对第一脊柱影像进行参数调整,生成目标脊柱影像。具体的,服务器确定第一脊柱影像的图像通道数;服务器根据图像通道数调整第一脊柱影像的窗宽和窗位,生成目标脊柱影像。
例如,服务器先确定第一脊柱影像的图像通道数;若第一脊柱影像的图像通道数为1,则表示第一脊柱影像为灰度图;若第一脊柱影像的图像通道数为3,则表示第一脊柱影像为RGB图。可以理解的是,医疗影像一般为单通道影像,即图像通道数量为1,此处不再赘述。后续以通道数为1为例进行说明。
例如,假设第一脊柱影像的灰度值范围为:-638~904。服务器首先计算该第一脊柱影像的灰度直方图,再计算直方图到坐标轴的面积,以包络面积占总面积的比例为阈值(包括面积上限阈值和面积下限阈值),截取目标灰度区间内的图像。假设面积上限阈值为0.01,面积下限阈值为0.6,截取去除上限阈值和下限阈值后的直方图的面积,例如,截取灰度直方图灰度值为209~685区间的灰度。具体计算公式为:CT图像的窗宽=(685-209)/2;CT图像的窗位=209+685)/2;将截取后的灰度值均匀拉伸到0~255区间,实现窗宽窗位调整。
204、通过预置的分割模型对目标脊柱影像中每个椎骨进行分割,生成多个椎骨掩膜,每个椎骨掩膜对应一个不同的椎骨。
服务器通过预置的分割模型对目标脊柱影像中每个椎骨进行分割,生成多个椎骨掩膜,每个椎骨掩膜对应一个不同的椎骨。每一个椎骨掩膜都对应有一个掩膜类别标签,可以采用不同的颜色或数字或字母对类别标签进行区分,分割模型可以为maskrcnn模型。其中,对于不同位置的椎骨,如骶椎、腰椎、胸椎等,预先采用不同的模型训练策略进行训练,以使得分割模型能准确分割出不同位置的椎骨。分割模型为神经网络模型,模型的训练过程为现有技术,此处不再赘述。
需要说明的是,模型训练策略包括:1、将骶椎和其他椎骨作为不同的类别训练分割模型;2、由于腰椎第五节与骶椎相邻,而其余椎骨和椎骨相邻,将腰椎第五节也作为单独的一个类别;3、将腰椎4~1节和胸椎作为一个类别;4、将颈椎作为一类别。例如,将目标脊柱影像上出现的每一个椎骨单独分割出来,假设一个图像上出现骶骨1,腰椎5~1,胸椎12~11。分割后得到:骶骨1的掩模标签为类别一(红色),腰椎5的掩模标签为类别二(绿色),腰4~1和胸椎12~11的掩模标签为类别三(蓝色),不同标签的椎骨掩模合并为一张输出,通过不同的标签区分不同的椎骨。
具体的,服务器通过预置的第一分割模型判断目标脊柱影像中是否存在骶椎;若目标脊柱影像中是否存在骶椎,则服务器将骶椎分离为出来,生成骶椎椎骨掩膜,并标记为类别一;服务器通过预置的第二分割模型判断目标脊柱影像中是否存在与骶椎相邻的第五腰椎;若目标脊柱影像中存在与骶椎相邻的第五腰椎,则服务器将第五腰椎分离为出来,生成第五腰椎椎骨掩膜,并标记为类别二;服务器通过预置的第三分割模型判断目标脊柱影像中是否存在胸椎、与胸椎顺序连接的第一腰椎、第二腰椎、第三腰椎或第四腰椎;若目标脊柱影像中存在胸椎、与胸椎顺序连接的第一腰椎、第二腰椎、第三腰椎或第四腰椎,则服务器将存在的胸椎、第一腰椎、第二腰椎、第三腰椎或第四腰椎分离为出来,生成对应的胸椎椎骨掩膜或腰椎椎骨掩膜,并标记为类别三;服务器通过预置的第四分割模型判断目标脊柱影像中是否存在颈椎;若目标脊柱影像中存在颈椎,则服务器将颈椎分离为出来,生成颈椎椎骨掩膜,并标记为类别四。
205、通过预置的聚类算法对多个椎骨掩膜进行椎骨轮廓识别和角点检测,得到N个骨块轮廓和N*4个椎骨角点,N大于或等于1。
服务器通过预置的聚类算法对多个椎骨掩膜进行椎骨轮廓识别和角点检测,得到N个骨块轮廓和N*4个椎骨角点,N大于或等于1。具体的,服务器通过预置的模糊能量算法识 别出N个椎骨的骨块轮廓;服务器通过预置的哈里斯角点检测算法获取每个椎骨的M个候选点,M大于或等于4;服务器通过具有噪声的基于密度的聚类算法DBSCAN算法将M个候选点分成P个点簇;服务器分别计算P个点簇的中心点,并P个中心点确定为P个椎骨角点;服务器通过最小外接矩形算法剔除多余的椎骨角点或者填补缺少的椎骨角点,得到N*4个椎骨角点。
例如,先采用预置的模糊能量算法识别出多个椎骨的骨块轮廓,一块近似矩形的椎骨产生四团点簇,但是实际中由于椎骨形态变化,或者分割的掩膜边缘不平滑,点簇的数量可能大于或等于四。而哈里斯角点检测(Harris Coner Detection)算法只能提供点的位置信息,不能提供点簇的信息,即不会输出哪些点是属于同一个点簇。因此在哈里斯角点检测算法时,给了一个较宽的阈值,使得每块椎骨产生N个点簇(N不一定等于4),每团点簇包括20~40个点,共得到100~200个点。再使用具有噪声的基于密度的聚类算法(density-based spatial clustering of applications with noise,DBSCAN)算法将这100~200分成N个点簇。取N个点簇中心点作为角点。
可以理解的是,对于大部分椎骨(非骶椎)N=4,少数情况下,当N不等于4,采用最小外接矩形算法剔除多余的椎骨角点或者填补缺少的椎骨角点,输出四个椎骨角点。为了提高算法的效率,输出的四个椎骨角点按照逆时针进行排序。
需要说明的是,服务器在掩膜上做角点检测,在掩膜上检测关键点不受图像重影和低像素的干扰,会更准确,再把检测到的点从掩膜映射到目标脊柱影像的对应位置。
206、将N个骨块轮廓、N*4个椎骨角点和目标脊柱影像进行合成,生成合成椎骨图像。
服务器将N个骨块轮廓、N*4个椎骨角点和目标脊柱影像进行合成,生成合成椎骨图像。其中,每个椎骨掩膜叠加在与目标脊柱影像中对应椎骨的上方。
需要说明的是,生成合成椎骨图像中的彩色区域为分割模型输出的掩模,不同颜色代表不同类别标签,灰色区域为目标脊柱影像,为背景图像。模型仅输出掩模和掩模对应的类别,为了显示方便,把生成的掩模绘制在目标脊柱影像上。
207、从合成椎骨图像中提取多个小块图像,每个小块图像包括目标区域的信息。
服务器从合成椎骨图像中提取多个小块图像,每个小块图像包括目标区域的信息。目标区域可以包括椎间盘、椎骨、骶椎、椎骨角点、双椎骨等。根据检测目标不同,这个小块图像可以包括1到2个椎骨,椎间盘,或者是骶椎(根据不同的检测目标,提取不同的小块图像)。例如,在一张合成椎骨图像上,假设通过分割得到N个骨块轮廓和N*4个角点,可以提取N个包括1个椎骨的图像小块,(N-1)个包括相邻两个椎骨的小块图像,N*4个包括椎骨角点的小块图像,(N-1)个包括椎间盘的小块图像,1个骶椎小块图像(仅在骶椎存在的情况下)。
208、通过预置的征象分类模型对多个小块图像进行识别,生成识别结果。
服务器通过预置的征象分类模型对多个小块图像进行识别,生成识别结果。其中,征象分类模型可以为深度残差网络(deep residual network,ResNet)模型。服务器分离出包括疾病征象的小块图像,输出疾病小块图像在合成椎骨图像的位置,并输出识别结果。其中,识别结果为定性分析结果,服务器通过骨块边缘和角点的位置信息,计算得到骨块 的中心点、角点偏移量、间盘厚度等参数,并可进一步计算椎骨厚度、胸椎后凸曲率、腰椎前凸曲率、骶骨倾斜角、矢状面轴向距离等参数,辅助医生进行定量分析。
需要说明的是,在调用预置的征象分类模型之前,服务器会结合医生标注的临床脊椎疾病征象数据库的金标准,使用提取的对应小块图像训练得到征象分类模型。
本申请实施例,基于深度神经网络,输入多模态多尺寸的脊柱医学影像,通过预处理增强不同类型图像的一致性,并使用两个不同的深度网络将椎骨分割和疾病征象分类分别实现,在椎骨分割基础上进一步做了脊椎的多种征象识别,提高了对椎骨的分割准确率,提高了征象识别的特异性和敏感性。上面对本申请实施例中基于人工智能的脊柱影像处理方法进行了描述,下面对本申请实施例中基于人工智能的脊柱影像处理装置进行描述,请参阅图3,本申请实施例中基于人工智能的脊柱影像处理装置的一个实施例包括:
第一获取单元301,用于获取原始脊柱影像,所述原始脊柱影像为脊柱的矢状位放射影像;
预处理单元302,用于对所述原始脊柱影像进行预处理,生成目标脊柱影像;
分割单元303,用于通过预置的分割模型对所述目标脊柱影像中每个椎骨进行分割,生成多个椎骨掩膜,每个椎骨掩膜对应一个不同的椎骨;
识别检测单元304,用于通过预置的聚类算法对所述多个椎骨掩膜进行椎骨轮廓识别和角点检测,得到N个骨块轮廓和N*4个椎骨角点,N大于或等于1;
合成单元305,用于将所述N个骨块轮廓、所述N*4个椎骨角点和所述目标脊柱影像进行合成,生成合成椎骨图像;
提取单元306,用于从所述合成椎骨图像中提取多个小块图像,每个小块图像包括目标区域的信息;
识别生成单元307,用于通过预置的征象分类模型对所述多个小块图像进行识别,生成识别结果。
本申请实施例,通过两个不同的深度网络分别实现椎骨分割和疾病征象分类,提高了对椎骨的分割准确率,提高了征象识别的特异性和敏感性。
请参阅图4,本申请实施例中基于人工智能的脊柱影像处理装置的另一个实施例包括:
第一获取单元301,用于获取原始脊柱影像,所述原始脊柱影像为脊柱的矢状位放射影像;
预处理单元302,用于对所述原始脊柱影像进行预处理,生成目标脊柱影像;
分割单元303,用于通过预置的分割模型对所述目标脊柱影像中每个椎骨进行分割,生成多个椎骨掩膜,每个椎骨掩膜对应一个不同的椎骨;
识别检测单元304,用于通过预置的聚类算法对所述多个椎骨掩膜进行椎骨轮廓识别和角点检测,得到N个骨块轮廓和N*4个椎骨角点,N大于或等于1;
合成单元305,用于将所述N个骨块轮廓、所述N*4个椎骨角点和所述目标脊柱影像进行合成,生成合成椎骨图像;
提取单元306,用于从所述合成椎骨图像中提取多个小块图像,每个小块图像包括目标区域的信息;
识别生成单元307,用于通过预置的征象分类模型对所述多个小块图像进行识别,生成识别结果。
可选的,预处理单元302包括:
处理模块3021,用于对所述原始脊柱影像的像素大小进行处理,得到处理后的第一脊柱影像;
调整模块3022,用于对所述第一脊柱影像进行参数调整,生成目标脊柱影像。
可选的,处理模块3021具体用于:
对所述原始脊柱影像进行切除黑边处理;对切除黑边处理后的脊柱影像进行裁剪;对裁剪后的脊柱影像进行尺寸调整,得到处理后的第一脊柱影像。可选的,调整模块3022具体用于:
确定所述第一脊柱影像的图像通道数;根据所述图像通道数调整所述第一脊柱影像的窗宽和窗位,生成目标脊柱影像。
可选的,分割单元303具体用于:
通过预置的第一分割模型判断所述目标脊柱影像中是否存在骶椎;若所述目标脊柱影像中是否存在骶椎,则将所述骶椎分离为出来,生成骶椎椎骨掩膜,并标记为类别一;通过预置的第二分割模型判断所述目标脊柱影像中是否存在与所述骶椎相邻的第五腰椎;若所述目标脊柱影像中存在与所述骶椎相邻的第五腰椎,则将所述第五腰椎分离为出来,生成第五腰椎椎骨掩膜,并标记为类别二;通过预置的第三分割模型判断所述目标脊柱影像中是否存在胸椎、与所述胸椎顺序连接的第一腰椎、第二腰椎、第三腰椎或第四腰椎;若所述目标脊柱影像中存在胸椎、与所述胸椎顺序连接的第一腰椎、第二腰椎、第三腰椎或第四腰椎,则将存在的所述胸椎、所述第一腰椎、所述第二腰椎、所述第三腰椎或所述第四腰椎分离为出来,生成对应的胸椎椎骨掩膜或腰椎椎骨掩膜,并标记为类别三;通过预置的第四分割模型判断所述目标脊柱影像中是否存在颈椎;若所述目标脊柱影像中存在颈椎,则将所述颈椎分离为出来,生成颈椎椎骨掩膜,并标记为类别四。
可选的,识别检测单元304具体用于:
通过预置的模糊能量算法识别出N个椎骨的骨块轮廓;通过预置的哈里斯角点检测算法获取每个椎骨的M个候选点,M大于或等于4;通过具有噪声的基于密度的聚类算法DBSCAN算法将所述M个候选点分成P个点簇;分别计算所述P个点簇的中心点,并P个中心点确定为P个椎骨角点;通过最小外接矩形算法剔除多余的椎骨角点或者填补缺少的椎骨角点,得到N*4个椎骨角点。
可选的,识别生成单元307具体用于:
调用预置的深度残差网络模型对所述多个小块图像进行识别;分离出包含疾病征象的疾病小块图像;确定每个疾病小块图像在所述合成椎骨图像上的位置,并输出识别结果,所述识别结果包括椎骨中心点、角点偏移量和间盘厚度。
本申请实施例,获取原始脊柱影像,原始脊柱影像为脊柱的矢状位放射影像;对原始脊柱影像的像素大小进行处理,得到处理后的第一脊柱影像;对第一脊柱影像进行参数调整,生成目标脊柱影像;通过预置的分割模型对目标脊柱影像中每个椎骨进行分割,生成 多个椎骨掩膜,每个椎骨掩膜对应一个不同的椎骨;通过预置的聚类算法对多个椎骨掩膜进行椎骨轮廓识别和角点检测,得到N个骨块轮廓和N*4个椎骨角点,N大于或等于1;将N个骨块轮廓、N*4个椎骨角点和目标脊柱影像进行合成,生成合成椎骨图像;从合成椎骨图像中提取多个小块图像,每个小块图像包括目标区域的信息;通过预置的征象分类模型对多个小块图像进行识别,生成识别结果。本申请实施例,基于深度神经网络,输入多模态多尺寸的脊柱医学影像,通过预处理增强不同类型图像的一致性,并使用两个不同的深度网络将椎骨分割和疾病征象分类分别实现,在椎骨分割基础上进一步做了脊椎的多种征象识别,提高了对椎骨的分割准确率,提高了征象识别的特异性和敏感性。
上面图3至图4从模块化功能实体的角度对本申请实施例中的基于人工智能的脊柱影像处理装置进行详细描述,下面从硬件处理的角度对本申请实施例中基于人工智能的脊柱影像处理设备进行详细描述。
FIG. 5 is a schematic structural diagram of an artificial intelligence-based spine image processing device provided by an embodiment of the present application. The spine image processing device 500 may vary considerably depending on its configuration or performance, and may include one or more processors (central processing units, CPU) 501 (for example, one or more processors), a memory 509, and one or more storage media 508 (for example, one or more mass storage devices) storing application programs 507 or data 506. The memory 509 and the storage medium 508 may be transient or persistent storage. The programs stored in the storage medium 508 may include one or more modules (not shown in the figure), and each module may include a series of instruction operations on the spine image processing device. Further, the processor 501 may be configured to communicate with the storage medium 508 and execute the series of instruction operations of the storage medium 508 on the spine image processing device 500.
The artificial intelligence-based spine image processing device 500 may also include one or more power supplies 502, one or more wired or wireless network interfaces 503, one or more input/output interfaces 504, and/or one or more operating systems 505, such as Windows Server, Mac OS X, Unix, Linux or FreeBSD. Those skilled in the art will understand that the device structure shown in FIG. 5 does not limit the spine image processing device, which may include more or fewer components than shown, combine certain components, or use a different arrangement of components. The processor 501 can perform the functions of the first acquisition unit 301, the preprocessing unit 302, the segmentation unit 303, the recognition and detection unit 304, the synthesis unit 305, the extraction unit 306 and the recognition and generation unit 307 of the foregoing embodiments.
The components of the artificial intelligence-based spine image processing device are described in detail below with reference to FIG. 5:
The processor 501 is the control center of the spine image processing device and performs processing according to the configured artificial intelligence-based spine image processing method. The processor 501 connects the various parts of the whole device through various interfaces and lines, and carries out the device's functions and processes its data by running or executing the software programs and/or modules stored in the memory 509 and calling the data stored in the memory 509, thereby implementing vertebra segmentation and disease sign classification. Both the storage medium 508 and the memory 509 are carriers for storing data; in the embodiments of the present application, the storage medium 508 may refer to an internal memory with a small storage capacity but high speed, while the memory 509 may be an external memory with a large storage capacity but lower speed.
The memory 509 can be used to store software programs and modules, and the processor 501 executes the various functional applications and data processing of the spine image processing device 500 by running the software programs and modules stored in the memory 509. The memory 509 may mainly include a program storage area and a data storage area, where the program storage area may store the operating system and the application programs required for at least one function (for example, segmenting each vertebra in the target spine image with a preset segmentation model to generate a plurality of vertebra masks, each corresponding to a different vertebra), and the data storage area may store data created from the use of the device (for example, the synthesized vertebra image). In addition, the memory 509 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device or other non-volatile solid-state storage device. The program of the artificial intelligence-based spine image processing method provided in the embodiments of the present application and the received data stream are stored in the memory, and the processor 501 calls them from the memory 509 when needed.
The present application also provides a computer-readable storage medium, which may be a non-volatile computer-readable storage medium or a volatile computer-readable storage medium. The computer-readable storage medium stores instructions which, when run on a computer, cause the computer to perform the following steps of the artificial intelligence-based spine image processing method:
acquiring an original spine image, the original spine image being a sagittal radiographic image of the spine;
preprocessing the original spine image to generate a target spine image;
segmenting each vertebra in the target spine image with a preset segmentation model to generate a plurality of vertebra masks, each vertebra mask corresponding to a different vertebra;
performing vertebra contour recognition and corner point detection on the plurality of vertebra masks with a preset clustering algorithm to obtain N bone block contours and N*4 vertebra corner points, where N is greater than or equal to 1;
combining the N bone block contours, the N*4 vertebra corner points and the target spine image to generate a synthesized vertebra image;
extracting a plurality of patch images from the synthesized vertebra image, each patch image containing information about a target region;
recognizing the plurality of patch images with a preset sign classification model to generate a recognition result.
When the computer program instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server or data center to another website, computer, server or data center by wire (for example, coaxial cable, optical fiber, twisted pair) or wirelessly (for example, infrared, radio, microwave). The computer-readable storage medium may be any usable medium that a computer can store, or a data storage device such as a server or data center integrating one or more usable media. The usable medium may be a magnetic medium (for example, a floppy disk, hard disk or magnetic tape), an optical medium (for example, an optical disc), or a semiconductor medium (for example, a solid state disk (SSD)), and so on.

Claims (20)

  1. An artificial intelligence-based spine image processing method, comprising:
    acquiring an original spine image, the original spine image being a sagittal radiographic image of the spine;
    preprocessing the original spine image to generate a target spine image;
    segmenting each vertebra in the target spine image with a preset segmentation model to generate a plurality of vertebra masks, each vertebra mask corresponding to a different vertebra;
    performing vertebra contour recognition and corner point detection on the plurality of vertebra masks with a preset clustering algorithm to obtain N bone block contours and N*4 vertebra corner points, where N is greater than or equal to 1;
    combining the N bone block contours, the N*4 vertebra corner points and the target spine image to generate a synthesized vertebra image;
    extracting a plurality of patch images from the synthesized vertebra image, each patch image containing information about a target region; and
    recognizing the plurality of patch images with a preset sign classification model to generate a recognition result.
  2. The artificial intelligence-based spine image processing method according to claim 1, wherein preprocessing the original spine image to generate a target spine image comprises:
    processing the pixel size of the original spine image to obtain a processed first spine image; and
    adjusting parameters of the first spine image to generate the target spine image.
  3. The artificial intelligence-based spine image processing method according to claim 2, wherein processing the pixel size of the original spine image to obtain a processed first spine image comprises:
    removing black borders from the original spine image;
    cropping the spine image after black-border removal; and
    resizing the cropped spine image to obtain the processed first spine image.
  4. The artificial intelligence-based spine image processing method according to claim 2, wherein adjusting parameters of the first spine image to generate the target spine image comprises:
    determining the number of image channels of the first spine image; and
    adjusting the window width and window level of the first spine image according to the number of image channels to generate the target spine image.
  5. The artificial intelligence-based spine image processing method according to claim 1, wherein segmenting each vertebra in the target spine image with a preset segmentation model to generate a plurality of vertebra masks, each vertebra mask corresponding to a different vertebra, comprises:
    determining with a preset first segmentation model whether the sacrum is present in the target spine image;
    if the sacrum is present in the target spine image, separating out the sacrum, generating a sacrum vertebra mask and marking it as category one;
    determining with a preset second segmentation model whether the fifth lumbar vertebra adjacent to the sacrum is present in the target spine image;
    if the fifth lumbar vertebra adjacent to the sacrum is present in the target spine image, separating out the fifth lumbar vertebra, generating a fifth-lumbar vertebra mask and marking it as category two;
    determining with a preset third segmentation model whether the target spine image contains thoracic vertebrae and the first, second, third or fourth lumbar vertebra sequentially connected to the thoracic vertebrae;
    if the target spine image contains thoracic vertebrae and the first, second, third or fourth lumbar vertebra sequentially connected to the thoracic vertebrae, separating out whichever of the thoracic vertebrae and the first, second, third or fourth lumbar vertebra are present, generating the corresponding thoracic or lumbar vertebra masks and marking them as category three;
    determining with a preset fourth segmentation model whether cervical vertebrae are present in the target spine image; and
    if cervical vertebrae are present in the target spine image, separating out the cervical vertebrae, generating cervical vertebra masks and marking them as category four.
  6. The artificial intelligence-based spine image processing method according to claim 1, wherein performing vertebra contour recognition and corner point detection on the plurality of vertebra masks with a preset clustering algorithm to obtain N bone block contours and N*4 vertebra corner points, where N is greater than or equal to 1, comprises:
    recognizing the bone block contours of the N vertebrae with a preset fuzzy energy algorithm;
    obtaining M candidate points for each vertebra with a preset Harris corner detection algorithm, where M is greater than or equal to 4;
    dividing the M candidate points into P point clusters with the density-based spatial clustering of applications with noise (DBSCAN) algorithm;
    computing the center point of each of the P point clusters and determining the P center points as P vertebra corner points; and
    removing redundant vertebra corner points or filling in missing vertebra corner points with a minimum bounding rectangle algorithm to obtain the N*4 vertebra corner points.
  7. The artificial intelligence-based spine image processing method according to any one of claims 1 to 6, wherein recognizing the plurality of patch images with a preset sign classification model to generate a recognition result comprises:
    invoking a preset deep residual network model to recognize the plurality of patch images;
    separating out the disease patch images that contain disease signs; and
    determining the position of each disease patch image in the synthesized vertebra image and outputting a recognition result, the recognition result including the vertebra center points, corner point offsets and disc thicknesses.
  8. An artificial intelligence-based spine image processing apparatus, comprising:
    a first acquisition unit, configured to acquire an original spine image, the original spine image being a sagittal radiographic image of the spine;
    a preprocessing unit, configured to preprocess the original spine image to generate a target spine image;
    a segmentation unit, configured to segment each vertebra in the target spine image with a preset segmentation model to generate a plurality of vertebra masks, each vertebra mask corresponding to a different vertebra;
    a recognition and detection unit, configured to perform vertebra contour recognition and corner point detection on the plurality of vertebra masks with a preset clustering algorithm to obtain N bone block contours and N*4 vertebra corner points, where N is greater than or equal to 1;
    a synthesis unit, configured to combine the N bone block contours, the N*4 vertebra corner points and the target spine image to generate a synthesized vertebra image;
    an extraction unit, configured to extract a plurality of patch images from the synthesized vertebra image, each patch image containing information about a target region; and
    a recognition and generation unit, configured to recognize the plurality of patch images with a preset sign classification model and generate a recognition result.
  9. The artificial intelligence-based spine image processing apparatus according to claim 8, wherein the preprocessing unit comprises:
    a processing module, configured to process the pixel size of the original spine image to obtain a processed first spine image; and
    an adjustment module, configured to adjust parameters of the first spine image to generate the target spine image.
  10. The artificial intelligence-based spine image processing apparatus according to claim 9, wherein the processing module is specifically configured to:
    remove black borders from the original spine image;
    crop the spine image after black-border removal; and
    resize the cropped spine image to obtain the processed first spine image.
  11. The artificial intelligence-based spine image processing apparatus according to claim 9, wherein the adjustment module is specifically configured to:
    determine the number of image channels of the first spine image; and
    adjust the window width and window level of the first spine image according to the number of image channels to generate the target spine image.
  12. The artificial intelligence-based spine image processing apparatus according to claim 8, wherein the segmentation unit is specifically configured to:
    determine with a preset first segmentation model whether the sacrum is present in the target spine image;
    if the sacrum is present in the target spine image, separate out the sacrum, generate a sacrum vertebra mask and mark it as category one;
    determine with a preset second segmentation model whether the fifth lumbar vertebra adjacent to the sacrum is present in the target spine image;
    if the fifth lumbar vertebra adjacent to the sacrum is present in the target spine image, separate out the fifth lumbar vertebra, generate a fifth-lumbar vertebra mask and mark it as category two;
    determine with a preset third segmentation model whether the target spine image contains thoracic vertebrae and the first, second, third or fourth lumbar vertebra sequentially connected to the thoracic vertebrae;
    if the target spine image contains thoracic vertebrae and the first, second, third or fourth lumbar vertebra sequentially connected to the thoracic vertebrae, separate out whichever of the thoracic vertebrae and the first, second, third or fourth lumbar vertebra are present, generate the corresponding thoracic or lumbar vertebra masks and mark them as category three;
    determine with a preset fourth segmentation model whether cervical vertebrae are present in the target spine image; and
    if cervical vertebrae are present in the target spine image, separate out the cervical vertebrae, generate cervical vertebra masks and mark them as category four.
  13. The artificial intelligence-based spine image processing apparatus according to claim 8, wherein the recognition and detection unit is specifically configured to:
    recognize the bone block contours of the N vertebrae with a preset fuzzy energy algorithm;
    obtain M candidate points for each vertebra with a preset Harris corner detection algorithm, where M is greater than or equal to 4;
    divide the M candidate points into P point clusters with the density-based spatial clustering of applications with noise (DBSCAN) algorithm;
    compute the center point of each of the P point clusters and determine the P center points as P vertebra corner points; and
    remove redundant vertebra corner points or fill in missing vertebra corner points with a minimum bounding rectangle algorithm to obtain the N*4 vertebra corner points.
  14. The artificial intelligence-based spine image processing apparatus according to any one of claims 8 to 13, wherein the recognition and generation unit is specifically configured to:
    invoke a preset deep residual network model to recognize the plurality of patch images;
    separate out the disease patch images that contain disease signs; and
    determine the position of each disease patch image in the synthesized vertebra image and output a recognition result, the recognition result including the vertebra center points, corner point offsets and disc thicknesses.
  15. An artificial intelligence-based spine image processing device, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the following steps:
    acquiring an original spine image, the original spine image being a sagittal radiographic image of the spine;
    preprocessing the original spine image to generate a target spine image;
    segmenting each vertebra in the target spine image with a preset segmentation model to generate a plurality of vertebra masks, each vertebra mask corresponding to a different vertebra;
    performing vertebra contour recognition and corner point detection on the plurality of vertebra masks with a preset clustering algorithm to obtain N bone block contours and N*4 vertebra corner points, where N is greater than or equal to 1;
    combining the N bone block contours, the N*4 vertebra corner points and the target spine image to generate a synthesized vertebra image;
    extracting a plurality of patch images from the synthesized vertebra image, each patch image containing information about a target region; and
    recognizing the plurality of patch images with a preset sign classification model to generate a recognition result.
  16. The artificial intelligence-based spine image processing device according to claim 15, wherein when the processor executes the computer program to implement preprocessing the original spine image to generate a target spine image, the following steps are included:
    processing the pixel size of the original spine image to obtain a processed first spine image; and
    adjusting parameters of the first spine image to generate the target spine image.
  17. The artificial intelligence-based spine image processing device according to claim 16, wherein when the processor executes the computer program to implement processing the pixel size of the original spine image to obtain a processed first spine image, the following steps are included:
    removing black borders from the original spine image;
    cropping the spine image after black-border removal; and
    resizing the cropped spine image to obtain the processed first spine image.
  18. The artificial intelligence-based spine image processing device according to claim 16, wherein when the processor executes the computer program to implement adjusting parameters of the first spine image to generate the target spine image, the following steps are included:
    determining the number of image channels of the first spine image; and
    adjusting the window width and window level of the first spine image according to the number of image channels to generate the target spine image.
  19. The artificial intelligence-based spine image processing device according to claim 15, wherein when the processor executes the computer program to implement segmenting each vertebra in the target spine image with a preset segmentation model to generate a plurality of vertebra masks, each vertebra mask corresponding to a different vertebra, the following steps are included:
    determining with a preset first segmentation model whether the sacrum is present in the target spine image;
    if the sacrum is present in the target spine image, separating out the sacrum, generating a sacrum vertebra mask and marking it as category one;
    determining with a preset second segmentation model whether the fifth lumbar vertebra adjacent to the sacrum is present in the target spine image;
    if the fifth lumbar vertebra adjacent to the sacrum is present in the target spine image, separating out the fifth lumbar vertebra, generating a fifth-lumbar vertebra mask and marking it as category two;
    determining with a preset third segmentation model whether the target spine image contains thoracic vertebrae and the first, second, third or fourth lumbar vertebra sequentially connected to the thoracic vertebrae;
    if the target spine image contains thoracic vertebrae and the first, second, third or fourth lumbar vertebra sequentially connected to the thoracic vertebrae, separating out whichever of the thoracic vertebrae and the first, second, third or fourth lumbar vertebra are present, generating the corresponding thoracic or lumbar vertebra masks and marking them as category three;
    determining with a preset fourth segmentation model whether cervical vertebrae are present in the target spine image; and
    if cervical vertebrae are present in the target spine image, separating out the cervical vertebrae, generating cervical vertebra masks and marking them as category four.
  20. A computer-readable storage medium storing instructions which, when run on a computer, cause the computer to perform the following steps:
    acquiring an original spine image, the original spine image being a sagittal radiographic image of the spine;
    preprocessing the original spine image to generate a target spine image;
    segmenting each vertebra in the target spine image with a preset segmentation model to generate a plurality of vertebra masks, each vertebra mask corresponding to a different vertebra;
    performing vertebra contour recognition and corner point detection on the plurality of vertebra masks with a preset clustering algorithm to obtain N bone block contours and N*4 vertebra corner points, where N is greater than or equal to 1;
    combining the N bone block contours, the N*4 vertebra corner points and the target spine image to generate a synthesized vertebra image;
    extracting a plurality of patch images from the synthesized vertebra image, each patch image containing information about a target region; and
    recognizing the plurality of patch images with a preset sign classification model to generate a recognition result.
PCT/CN2019/117948 2019-08-01 2019-11-13 基于人工智能的脊柱影像处理方法及相关设备 WO2021017297A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910706559.9 2019-08-01
CN201910706559.9A CN110599508B (zh) 2019-08-01 2019-08-01 基于人工智能的脊柱影像处理方法及相关设备

Publications (1)

Publication Number Publication Date
WO2021017297A1 true WO2021017297A1 (zh) 2021-02-04

Family

ID=68853270

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/117948 WO2021017297A1 (zh) 2019-08-01 2019-11-13 基于人工智能的脊柱影像处理方法及相关设备

Country Status (2)

Country Link
CN (1) CN110599508B (zh)
WO (1) WO2021017297A1 (zh)

Families Citing this family (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111063424B (zh) * 2019-12-25 2023-09-19 上海联影医疗科技股份有限公司 一种椎间盘数据处理方法、装置、电子设备及存储介质
CN111276221B (zh) * 2020-02-03 2024-01-30 杭州依图医疗技术有限公司 椎骨影像信息的处理方法、显示方法及存储介质
US11452492B2 (en) 2020-04-21 2022-09-27 Mazor Robotics Ltd. System and method for positioning an imaging device
CN111524188A (zh) * 2020-04-24 2020-08-11 杭州健培科技有限公司 腰椎定位点获取方法、设备及介质
CN111709436A (zh) * 2020-05-21 2020-09-25 浙江康源医疗器械有限公司 一种医学影像轮廓的标记方法及系统、分类方法及系统
CN111652300A (zh) * 2020-05-27 2020-09-11 联影智能医疗科技(北京)有限公司 脊柱曲度分类方法、计算机设备和存储介质
CN111951216B (zh) * 2020-07-02 2023-08-01 杭州电子科技大学 基于计算机视觉的脊柱冠状面平衡参数自动测量方法
CN112184617B (zh) * 2020-08-17 2022-09-16 浙江大学 一种基于深度学习的脊椎mri影像关键点检测方法
CN112614092A (zh) * 2020-12-11 2021-04-06 北京大学 脊柱检测方法和装置
CN112967235B (zh) * 2021-02-19 2024-09-24 联影智能医疗科技(北京)有限公司 图像检测方法、装置、计算机设备和存储介质
CN113128580A (zh) * 2021-04-12 2021-07-16 天津大学 一种基于多维残差网络的脊柱ct图像识别方法
CN112884786B (zh) * 2021-04-12 2021-11-02 杭州健培科技有限公司 应用于ct影像的腰椎间盘观测面定位方法、装置及应用
CN113034495B (zh) * 2021-04-21 2022-05-06 上海交通大学 一种脊柱影像分割方法、介质及电子设备
CN113205535B (zh) * 2021-05-27 2022-05-06 青岛大学 一种x光片脊椎自动分割及标识方法
CN113240661B (zh) * 2021-05-31 2023-09-26 平安科技(深圳)有限公司 基于深度学习的腰椎骨分析方法、装置、设备及存储介质
CN114078125A (zh) * 2021-11-29 2022-02-22 开封市人民医院 一种基于核磁影像的脊椎图像处理方法
CN114372970B (zh) * 2022-01-04 2024-02-06 杭州三坛医疗科技有限公司 一种手术参考信息生成方法及装置
CN115439453B (zh) * 2022-09-13 2023-05-26 北京医准智能科技有限公司 一种脊椎椎体定位方法、装置、电子设备及存储介质
TWI817789B (zh) * 2022-10-26 2023-10-01 宏碁智醫股份有限公司 評估僵直性脊椎炎的電子裝置及方法
CN115690498B (zh) * 2022-10-31 2023-06-13 北京医准智能科技有限公司 椎体骨密度确认方法、装置、电子设备及存储介质
CN115640417B (zh) * 2022-12-22 2023-03-21 北京理贝尔生物工程研究所有限公司 人工椎间盘库的构建方法、装置、存储介质及处理器
CN116883328B (zh) * 2023-06-21 2024-01-05 查维斯机械制造(北京)有限公司 基于计算机视觉的牛胴体脊柱区域快速提取方法

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1657681B1 (en) * 2004-11-10 2009-01-21 Agfa HealthCare NV Method of performing measurements on digital images
CN103606148B (zh) * 2013-11-14 2017-10-10 深圳先进技术研究院 一种磁共振脊柱影像混合分割方法和装置
US11010630B2 (en) * 2017-04-27 2021-05-18 Washington University Systems and methods for detecting landmark pairs in images
WO2019041262A1 (en) * 2017-08-31 2019-03-07 Shenzhen United Imaging Healthcare Co., Ltd. SYSTEM AND METHOD FOR IMAGE SEGMENTATION
CN108230301A (zh) * 2017-12-12 2018-06-29 哈尔滨理工大学 一种基于主动轮廓模型的脊柱ct图像自动定位分割方法
CN108537779A (zh) * 2018-03-27 2018-09-14 哈尔滨理工大学 基于聚类的椎骨分割与质心检测的方法

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110058720A1 (en) * 2009-09-10 2011-03-10 Siemens Medical Solutions Usa, Inc. Systems and Methods for Automatic Vertebra Edge Detection, Segmentation and Identification in 3D Imaging
CN107977971A (zh) * 2017-11-09 2018-05-01 哈尔滨理工大学 基于卷积神经网络的椎骨定位的方法
CN108038860A (zh) * 2017-11-30 2018-05-15 杭州电子科技大学 基于3d全卷积神经网络的脊柱分割方法
CN109493317A (zh) * 2018-09-25 2019-03-19 哈尔滨理工大学 基于级联卷积神经网络的3d多椎骨分割方法
CN109523523A (zh) * 2018-11-01 2019-03-26 郑宇铄 基于fcn神经网络和对抗学习的椎体定位识别分割方法

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11841923B2 (en) 2020-07-06 2023-12-12 Alibaba Group Holding Limited Processing method, model training method, means, and storage medium for spinal images
US12056847B2 (en) 2020-07-06 2024-08-06 Alibaba Group Holding Limited Image processing method, means, electronic device and storage medium
CN113421275A (zh) * 2021-05-13 2021-09-21 影石创新科技股份有限公司 图像处理方法、装置、计算机设备和存储介质
CN113837192A (zh) * 2021-09-22 2021-12-24 推想医疗科技股份有限公司 图像分割方法及装置,神经网络的训练方法及装置
CN113837192B (zh) * 2021-09-22 2024-04-19 推想医疗科技股份有限公司 图像分割方法及装置,神经网络的训练方法及装置
CN114331964A (zh) * 2021-11-30 2022-04-12 北京赛迈特锐医疗科技有限公司 胸腰椎创伤ct影像评估方法及系统
CN115618694A (zh) * 2022-12-15 2023-01-17 博志生物科技(深圳)有限公司 基于图像的颈椎分析方法、装置、设备及存储介质
CN117291927A (zh) * 2023-09-22 2023-12-26 中欧智薇(上海)机器人有限公司 脊柱分割方法、系统、电子设备和非瞬时机器可读介质
CN118297970A (zh) * 2024-04-08 2024-07-05 中国人民解放军空军特色医学中心 一种胸腰椎x射线片分割方法及装置

Also Published As

Publication number Publication date
CN110599508B (zh) 2023-10-27
CN110599508A (zh) 2019-12-20

Similar Documents

Publication Publication Date Title
WO2021017297A1 (zh) 基于人工智能的脊柱影像处理方法及相关设备
CN108520519B (zh) 一种图像处理方法、装置及计算机可读存储介质
CN107798682B (zh) 图像分割系统、方法、装置和计算机可读存储介质
CN110021025B (zh) 感兴趣区域的匹配和显示方法、装置、设备及存储介质
WO2021115312A1 (zh) 医学影像中正常器官的轮廓线自动勾画方法
EP2690596B1 (en) Method, apparatus and system for automated spine labeling
US20220375079A1 (en) Automatically segmenting vertebral bones in 3d medical images
CN110310281A (zh) 一种基于Mask-RCNN深度学习的虚拟医疗中肺结节检测与分割方法
WO2018205232A1 (zh) 一种针对拼接结果自动精确定位参考线的方法
EP2639763B1 (en) Method, Apparatus and System for Localizing a Spine
US8385614B2 (en) Slice image display apparatus, method and recording-medium having stored therein program
KR20210051141A (ko) 환자의 증강 현실 기반의 의료 정보를 제공하는 방법, 장치 및 컴퓨터 프로그램
CN114343604B (zh) 基于医学影像的肿瘤检测及诊断装置
JP2017067489A (ja) 診断支援装置、方法及びコンピュータプログラム
CN116580068B (zh) 一种基于点云配准的多模态医学配准方法
CN112001889A (zh) 医学影像处理方法、装置及医学影像显示方法
CN108537779A (zh) 基于聚类的椎骨分割与质心检测的方法
CA2778599C (en) Bone imagery segmentation method and apparatus
Hacihaliloglu et al. Statistical shape model to 3D ultrasound registration for spine interventions using enhanced local phase features
CN111445575B (zh) 威利斯环的图像重建方法、装置、电子设备、存储介质
CN112349391A (zh) 一种优化肋骨自动标号方法
CN113870098A (zh) 一种基于脊柱分层重建的Cobb角自动测量方法
KR20210052270A (ko) 환자의 증강 현실 기반의 의료 정보를 제공하는 방법, 장치 및 컴퓨터 프로그램
CN110752029B (zh) 一种病灶的定位方法及装置
CN115689987A (zh) 一种基于dr图像双视角脊椎骨折特征检测方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19939682

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19939682

Country of ref document: EP

Kind code of ref document: A1