WO2020093987A1 - Medical image processing method and system, computer device, and readable storage medium


Info

Publication number: WO2020093987A1
Authority: WIPO (PCT)
Prior art keywords: image, interest, region, target, area
Application number: PCT/CN2019/115549
Other languages: English (en), Chinese (zh)
Inventors: 唐章源, 王誉, 张剑锋, 宋艳丽, 吴迪嘉, 詹翊强, 周翔, 高耀宗
Original Assignee: 上海联影智能医疗科技有限公司
Priority claimed from CN201811306115.8A (CN109493328B)
Priority claimed from CN201811626399.9A (CN109859233B)
Priority claimed from CN201910133231.2A (CN109934220B)
Application filed by 上海联影智能医疗科技有限公司
Publication of WO2020093987A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis

Definitions

  • This application relates to the field of image processing, and in particular to a medical image processing method, system, computer device, and readable storage medium.
  • Medical imaging equipment refers to various instruments that use various media as information carriers to reproduce the internal structure of the human body as images.
  • Computed tomography (CT) is a technique that uses precisely collimated X-ray beams, gamma rays, ultrasonic waves, etc., together with highly sensitive detectors, to scan cross-sections of a part of the human body one after another and finally generate a medical image.
  • The patient can be scanned by a CT scanner to generate scan data, and an image sequence is generated based on the scan data.
  • The image sequence includes multiple cross-sectional images, each representing a cross-section of the patient; a three-dimensional image of the patient is then generated from the image sequence.
  • The cross-sectional images can also be processed and reconstructed by computer software to obtain the multi-planar images required for diagnosis, such as coronal, sagittal, oblique, curved, and other two-dimensional images (a minimal sketch of such reformatting follows below).
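  • As a rough illustration only (a minimal NumPy sketch with placeholder data, not the implementation of this application), cross-sectional slices can be stacked into a volume from which coronal and sagittal planes are reformatted:

        import numpy as np

        # Stack the cross-sectional (axial) slices into a 3D volume indexed (z, y, x)
        slices = [np.random.rand(512, 512) for _ in range(200)]  # placeholder CT slices
        volume = np.stack(slices, axis=0)

        axial = volume[100, :, :]     # one original cross-sectional image
        coronal = volume[:, 256, :]   # reformatted coronal plane
        sagittal = volume[:, :, 256]  # reformatted sagittal plane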
  • The physician observes the image sequence and the three-dimensional images to further determine the patient's lesion area.
  • A medical image processing method, characterized in that the method includes: inputting the image to be detected into a neural network model for processing to obtain a detection result of the region of interest, wherein the detection result of the region of interest includes information of the region of interest and attribute parameters of the region of interest; obtaining information of the target region of interest from the detection result of the region of interest according to the attribute parameters of the region of interest and an attribute parameter threshold; determining the target region of interest in the image to be detected according to the information of the target region of interest; acquiring multiple images based on the target region of interest and generating a dynamic image from the multiple images in a preset order; and displaying the dynamic image.
  • The inputting of the image to be detected into a neural network model for processing to obtain the detection result of the region of interest includes: inputting the image to be detected into the neural network model for network forward propagation calculation to obtain the detection result of the region of interest.
  • The method further includes: acquiring the attribute parameter threshold input by the user in real time, where acquiring the attribute parameter threshold input by the user in real time includes: determining the attribute parameter threshold input by the user according to a mapping relationship between the control information of a preset threshold control component and the attribute parameter threshold.
  • The region of interest attribute parameters include region of interest confidence, region of interest category, and region of interest size; the information of the target region of interest includes location information of the target region of interest and/or size information of the target region of interest.
  • the acquiring multiple images based on the target region of interest and generating a dynamic image according to a preset order of the multiple images includes:
  • a dynamic image is generated from a plurality of the planar images in a preset order.
  • acquiring the target region of interest image based on the target region of interest includes: using the target region of interest as a reference, selecting an image within a preset range as the target region of interest image.
  • the acquiring multiple plane images according to the target area of interest image includes: acquiring multiple plane images in the target area of interest image according to a preset acquisition method.
  • generating the dynamic image from the plurality of planar images in the preset order includes: generating the dynamic image in the acquisition order or in an order opposite to the acquisition order.
  • displaying the dynamic image includes: displaying the dynamic image according to a preset position.
  • the system includes:
  • the processing module is configured to input the image to be detected into a neural network model for processing to obtain a detection result of the region of interest, wherein the detection result of the region of interest includes information of the region of interest and attribute parameters of the region of interest;
  • An information obtaining module configured to obtain information of the target interest area from the detection result of the interest area according to the attribute parameter of the interest area and the attribute parameter threshold;
  • An interest area acquisition module configured to determine the target interest area in the image to be detected according to the information of the target interest area
  • a dynamic image generation module configured to acquire multiple images based on the target region of interest, and generate dynamic images according to the preset order of the multiple images;
  • the display module is used for displaying the dynamic image.
  • An embodiment of the present application provides a computer device, including a memory and a processor.
  • a computer program that can run on the processor is stored on the memory.
  • when the processor executes the computer program, the steps of the medical image processing method described above are implemented, including displaying the dynamic image.
  • An embodiment of the present application provides a readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the medical image processing method described above are realized, including displaying the dynamic image.
  • An image processing method includes:
  • Input the image to be detected into a neural network model for processing to obtain a bone segmentation result, a bone centerline segmentation result, and a bone fracture detection result;
  • the neural network model is determined by machine training and learning based on the training image.
  • the inputting the image to be detected into a neural network model for processing to obtain a bone segmentation result, a bone centerline segmentation result and a bone fracture detection result includes:
  • m is less than n, and m and n are positive integers.
  • the method further includes:
  • a training method for an image processing model includes:
  • the trained neural network model is configured to output the bone segmentation result, the bone centerline segmentation result and the bone fracture detection result simultaneously according to the input image.
  • the training a neural network model based on the training image includes:
  • An image processing system includes:
  • an image acquisition module, used to obtain the image to be detected;
  • a to-be-detected image processing module configured to input the to-be-detected image into a neural network model for processing to obtain a bone segmentation result, a bone centerline segmentation result, and a bone fracture detection result;
  • the neural network model is determined by machine training and learning based on the training image.
  • the to-be-detected image processing module includes:
  • a first acquiring unit configured to input the image to be detected into the neural network model for network forward propagation calculation, and to insert m upsampling encodings after the m-th downsampling encoding to obtain the bone fracture detection result;
  • a second acquiring unit configured to continue downsampling encoding and, after the n-th downsampling encoding, to perform n upsampling encodings to obtain the bone segmentation result and the bone centerline segmentation result;
  • where m is less than n, and m and n are positive integers.
  • the system further includes a post-processing module, and the post-processing module is configured to:
  • An image processing model training system includes:
  • Training image acquisition module for acquiring training images
  • a model training module for training a neural network model based on the training image
  • the trained neural network model is configured to output the bone segmentation result, the bone centerline segmentation result and the bone fracture detection result simultaneously according to the input image.
  • the model training module includes a first training unit and a second training unit
  • the first training unit is used to input the training image into a preset neural network for bone segmentation training and bone centerline segmentation training, and to fix the parameters obtained in this training process, obtaining a bone segmentation module and a bone centerline segmentation module;
  • the second training unit is configured to continue bone fracture detection training on the training image through the preset neural network with the already-trained parameters fixed, obtaining a bone fracture detection module (a training-schedule sketch follows below).
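  • A minimal sketch of this two-stage schedule (PyTorch; the module names and layer sizes are hypothetical stand-ins, not the patent's network): stage 1 trains the shared encoder and the segmentation branch, then those parameters are frozen and only the fracture detection branch is trained.

        import torch
        import torch.nn as nn

        class MultiTaskNet(nn.Module):
            def __init__(self):
                super().__init__()
                self.encoder = nn.Conv3d(1, 16, 3, padding=1)  # shared downsampling path (stand-in)
                self.seg_branch = nn.Conv3d(16, 3, 1)          # bone + centerline segmentation head
                self.frac_branch = nn.Conv3d(16, 2, 1)         # fracture detection head

        model = MultiTaskNet()

        # Stage 1: bone segmentation and bone centerline segmentation training
        stage1_params = list(model.encoder.parameters()) + list(model.seg_branch.parameters())
        opt1 = torch.optim.Adam(stage1_params, lr=1e-3)
        # ... run the segmentation training loop here until its loss converges ...

        # Stage 2 (transfer learning): fix the already-trained parameters,
        # then train only the fracture detection branch
        for p in stage1_params:
            p.requires_grad = False
        opt2 = torch.optim.Adam(model.frac_branch.parameters(), lr=1e-3)
        # ... run the fracture detection training loop here ...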
  • The imbalance between positive and negative samples for bone fractures is much greater than that for bones.
  • Training the two tasks simultaneously on a single network will therefore make the loss function extremely difficult to converge.
  • This application first trains bone segmentation and bone centerline segmentation, then, in the manner of transfer learning, fixes the already trained parameters and trains bone fracture detection. This allows the loss function to converge quickly and resolves the extreme data imbalance between different tasks in the multi-task model.
  • Using a single trained deep learning network to realize bone segmentation, bone centerline segmentation, and bone fracture detection, compared with performing the three processes separately, shortens the total time consumption by 50% and reduces the model's memory footprint by 40%. Using artificial intelligence to extract bone centerlines (such as rib centerlines) and detect fractures achieves a bone detection rate of more than 90%; segmentation of the bone centerline supports visual post-processing such as labeling and unfolding of the ribs, helping doctors see rib lesions more easily. Integrating rib segmentation, rib centerline extraction, and rib fracture detection in one deep learning network helps doctors reduce the reading burden and speeds up diagnosis.
  • a method for displaying a region of interest in an image comprising:
  • acquiring the detection result of the region of interest in the image, where the detection result of the region of interest includes information of the region of interest and attribute parameters of the region of interest;
  • obtaining the information of the target region of interest from the detection result of the region of interest according to the comparison result of the attribute parameters of the region of interest with the attribute parameter threshold;
  • the region of interest attribute parameter includes one of the following: region of interest confidence, region of interest category, and region of interest size.
  • the real-time acquisition of the attribute parameter threshold input by the user includes:
  • the attribute parameter threshold input by the user is determined according to a mapping relationship between the control information of a preset threshold control component and the attribute parameter threshold.
  • the information displaying the target region of interest includes:
  • the rendered partial image is displayed.
  • the information of the target region of interest includes position information of the target region of interest and/or size information of the target region of interest;
  • the information displaying the target area of interest further includes:
  • the original image containing the target region of interest is displayed.
  • the information displaying the target region of interest further includes:
  • the target index is displayed.
  • the method further includes:
  • determining, according to the selection signal, the target region of interest information corresponding to the target index;
  • identifying, in the original image, the target region of interest corresponding to the information of the target region of interest, and/or identifying the rendered partial image corresponding to the information of the target region of interest.
  • the region of interest includes an anatomical structure or a lesion.
  • A display device for image regions of interest includes:
  • a first acquisition module configured to acquire the detection result of the region of interest in the image, the detection result of the region of interest including information of the region of interest and attribute parameters of the region of interest;
  • the second obtaining module is used to obtain the attribute parameter threshold value input by the user in real time
  • a third obtaining module configured to obtain the information of the target interest area from the detection result of the interest area according to the comparison result of the attribute parameter of the interest area and the threshold value of the attribute parameter;
  • a display module is used to display the information of the target region of interest.
  • a terminal includes a processor and a memory.
  • the memory stores at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by the processor to implement any one of the methods for displaying image regions of interest described above.
  • The method, device, and terminal for displaying the image region of interest described above acquire the detection result of the region of interest in the image, where the detection result includes the information of the region of interest and its attribute parameters; acquire the attribute parameter threshold input by the user in real time; obtain the information of the target region of interest from the detection result according to the comparison of the attribute parameters with the threshold; and display the information of the target region of interest. Users can thus adjust the attribute parameter threshold in real time and see the detection results under different thresholds in real time, which helps users balance diagnostic accuracy against reading time according to different usage scenarios and case characteristics, and improves the flexibility of the computer-aided diagnosis system.
  • a medical image display method includes:
  • the dynamic image is displayed according to a preset position.
  • the acquiring the region of interest selected in the original image includes:
  • the original image is input into the neural network trained based on the image training set to obtain the target region of interest.
  • the acquiring multiple plane images in the target area of interest image according to a preset acquisition method includes:
  • a plurality of slice images perpendicular to a preset direction are intercepted sequentially along the preset direction in the target region of interest image as the planar images.
  • the acquiring multiple plane images in the target area of interest image according to a preset acquisition method includes:
  • the target region of interest image is rotated in a preset direction, and each time it is rotated by a preset angle, a maximum intensity projection of the target region of interest image is performed in the Z-axis direction and the projection image is taken as a planar image, until the target region of interest image is rotated back to the initial position; this includes:
  • alternately obtaining the planar image after rotation around the Y axis by the preset angle and the planar image after rotation around the X axis by the preset angle, until the target region of interest image is rotated back to the initial position.
  • the preset direction is: clockwise direction or counterclockwise direction.
  • the generating the dynamic images according to the preset order from the plurality of planar images includes:
  • a dynamic image is generated from the plurality of planar images in the acquisition order or in the order opposite to the acquisition order.
  • a medical image viewing device includes: an original image acquisition module for acquiring an original image of a detected object;
  • An interest area acquisition module configured to acquire an interest area selected in the original image
  • An image selection module which is used to select an image within a preset range as the target region of interest image based on the region of interest;
  • a planar image extraction module configured to acquire multiple planar images in the target area of interest image according to a preset acquisition method
  • a dynamic image generation module configured to generate a dynamic image from the plurality of planar images according to a preset order;
  • the display module is configured to display the dynamic image according to a preset position.
  • a computer device includes a memory and a processor.
  • the memory stores a computer program, and is characterized in that when the processor executes the computer program, any of the steps of the above method is implemented.
  • The medical image display method, viewing device, computer device, and storage medium described above obtain the original image of the detected object, select the target region of interest from the original image, select the image within a preset range centered on the target region of interest as the target region of interest image, acquire a plurality of planar images from the target region of interest image in a preset manner, and generate a dynamic image from the multiple planar images in a preset order.
  • The physician determines the location of the lesion by observing the dynamic image, which reduces the physician's workload and saves the time needed to determine the lesion.
  • FIG. 1 is a schematic flowchart of a medical image processing method provided by an embodiment
  • FIG. 2 is a schematic flowchart of a medical image processing method provided by another embodiment
  • FIG. 3 is a schematic flowchart of a medical image processing method provided by another embodiment
  • FIG. 4 is a schematic diagram of results of a medical image processing system provided by an embodiment
  • FIG. 5 is a schematic structural diagram of a terminal provided by an embodiment
  • FIG. 6 is a schematic flowchart of an image processing method provided by an embodiment of the present application.
  • FIG. 7 is a schematic diagram of a process of inputting an image to be detected into a neural network model for processing to obtain a bone segmentation result, a bone centerline segmentation result, and a bone fracture detection result provided by an embodiment of the present application;
  • FIG. 8 is a structural block diagram of a neural network model provided by an embodiment of the present application.
  • FIG. 9 is another schematic flowchart of an image processing method provided by an embodiment of the present application.
  • FIG. 10 is a block diagram of the working principle of image processing provided by an embodiment of the present application.
  • FIG. 11 is a screenshot of a cross section, a sagittal plane, and a coronal plane of an image to be detected provided by an embodiment of the present application;
  • FIG. 12 is a schematic diagram of an analysis result of performing image processing on the image to be detected in FIG. 11;
  • FIG. 13 is a schematic flowchart of a training method of an image processing model provided by an embodiment of the present application.
  • FIG. 14 is a schematic flow chart of training a neural network model based on training images provided by an embodiment of the present application
  • FIG. 16 is a structural block diagram of an image processing system provided by an embodiment of the present application.
  • FIG. 17 is another structural block diagram of an image processing system provided by an embodiment of the present application.
  • FIG. 18 is a structural block diagram of an image processing model training system provided by an embodiment of the present application.
  • FIG. 19 is another structural block diagram of an image processing model training system provided by an embodiment of the present application.
  • FIG. 20 is another structural block diagram of an image processing system provided by an embodiment of the present application.
  • FIG. 21 is a schematic flowchart of a method for displaying image regions of interest according to an embodiment of the present application.
  • FIG. 22 is a schematic flowchart of a method for obtaining an attribute parameter threshold input by a user in real time according to an embodiment of the present application
  • FIG. 23 is a schematic diagram of an interface for displaying information on target regions of interest provided by an embodiment of the present application.
  • FIG. 24 is a schematic diagram of another interface for displaying information on target regions of interest provided by an embodiment of the present application.
  • FIG. 25 is a schematic flowchart of another method for displaying image regions of interest according to an embodiment of the present application.
  • FIG. 26 is a schematic structural diagram of a display device for image regions of interest provided by an embodiment of the present application.
  • FIG. 27 is a schematic structural diagram of a second acquisition module provided by an embodiment of the present application.
  • FIG. 28 is a schematic structural diagram of a display module provided by an embodiment of the present application.
  • FIG. 29 is a schematic structural diagram of another display device for image regions of interest provided by an embodiment of the present application.
  • FIG. 30 is a schematic structural diagram of a terminal according to an embodiment of the present application.
  • FIG. 31 is a schematic flowchart of a medical image display method in an embodiment
  • FIG. 33 is a schematic flowchart of a method for acquiring a planar image in an embodiment
  • FIG. 34 is a first state diagram of dynamic display of rib fractures in an embodiment
  • FIG. 35 is a second state diagram of dynamic display of rib fractures in an embodiment
  • FIG. 36 is a third state diagram of dynamic display of rib fractures in an embodiment
  • FIG. 39 is a third state diagram of dynamic display of rib fractures in another embodiment.
  • FIG. 40 is a first state diagram of dynamic display of lung nodules in an embodiment
  • FIG. 41 is a second state diagram of dynamic display of lung nodules in an embodiment
  • FIG. 43 is a first state diagram of dynamic display of lung nodules in another embodiment
  • FIG. 44 is a second state diagram of dynamic display of lung nodules in another embodiment
  • FIG. 46 is a structural block diagram of a medical image viewing device in an embodiment
  • FIG. 48 is a structural block diagram of a planar image extraction module in another embodiment
  • FIG. 49 is a structural block diagram of a rotating unit in an embodiment
  • FIG. 50 is an internal structure diagram of a computer device in an embodiment.
  • Reference signs: 4100 is the original image acquisition module, 4200 is the lesion area acquisition module, 4300 is the region of interest image selection module, 4400 is the planar image extraction module, 4410 is the interception unit, 4420 is the coordinate system establishment unit, 4430 is the initial-position maximum intensity projection unit, 4440 is the rotation unit, 4441 is the X-axis rotation subunit, 4442 is the Y-axis rotation subunit, 4443 is the acquisition subunit, 4500 is the dynamic image generation module, and 4600 is the display module.
  • patients can be scanned by CT scanners to generate scan data.
  • An image sequence is generated based on the scan data, and the image sequence includes a plurality of slice images, each slice image represents a cross-sectional image of the patient, and then a three-dimensional image of the patient is generated according to the image sequence.
  • the physician further determines the target region of interest of the patient by observing the image sequence and the three-dimensional image.
  • one embodiment of the present application proposes a medical image processing method and medical image processing system.
  • a medical image processing method including the following steps:
  • step S1002 the image to be detected is input into a neural network model for processing to obtain a detection result of the region of interest, wherein the detection result of the region of interest includes information of the region of interest and attribute parameters of the region of interest.
  • the above medical image processing method may further include the steps of acquiring an image to be detected and preprocessing the image to be detected, wherein the preprocessing includes:
  • Step S1004: Acquire the information of the target region of interest from the detection result of the region of interest based on the region of interest attribute parameters and the attribute parameter threshold.
  • the region of interest attribute parameters include region of interest confidence, region of interest category, and region of interest size; the information of the target region of interest includes location information of the target region of interest and/or size information of the target region of interest.
  • the neural network model may be determined based on the training image for machine learning training, specifically for the machine learning training based on the training image and the corresponding region of interest label.
  • the region of interest may include different types of lesions, that is, tissues or organs in which pathological changes have been caused by pathogenic factors, the parts of the body where lesions occur, such as fractures, lung nodules, tumors, cerebral hemorrhage, heart disease, nerve damage, etc.; it may also include anatomical structures, such as blood vessels, ossification centers, nerves, muscles, soft tissue, trachea, cartilage, ligaments, fissures, etc.
  • the region of interest can also be other features of interest in the image.
  • the region of interest may include a target region of interest, which is equivalent to that the region of interest may include not only a target lesion area to be determined by the physician, but also a lesion area that the physician does not currently need to determine.
  • the target region of interest may include a specific type of lesion, and may also include a specific type of specific lesion distinction, which is not limited in this embodiment.
  • the information of the target interest area may include size information of the target interest area and position information of the target interest area.
  • the medical image processing system may perform judgment processing on the attribute parameters of the region of interest and the thresholds of the attribute parameters, and obtain information on the target region of interest from the detection result of the region of interest according to the judgment result.
  • the attribute parameter of the region of interest may be any parameter that affects the detection result of the region of interest and can be adjusted in real time during the use phase of the medical image processing system of the embodiments of the present specification.
  • the confidence level of the region of interest may be characterized as the degree of certainty that the region or part in the image detected by the detection model, such as the deep learning model, belongs to the region of interest.
  • the size of the region of interest may be characterized as a parameter of the size of the region or part corresponding to the region of interest.
  • the threshold value of the interest area attribute parameter corresponds to the interest area attribute parameter, and may include the confidence level of the interest area, the category of the interest area, the size of the interest area, and the like.
  • the information of the interest area may be the detection result information of all the interest areas, or may be the detection result information of a part of the interest areas.
  • image blocks are selected from the image to be detected to form the image blocks to be detected, and the image blocks to be detected are input into the neural network model for processing.
  • Step S1006 Determine the target area of interest in the image to be detected according to the information of the target area of interest.
  • the medical image processing system may determine the corresponding target interest area in the image to be detected according to the size information and position information of the target interest area.
  • the target interest area may be a target lesion area.
  • the target region of interest may be a localized diseased tissue bearing pathogenic microorganisms; for example, if a part of the lung is destroyed by tuberculosis bacteria, the destroyed part is called the target region of interest.
  • other parts of the image have nothing to do with determining the target lesion.
  • Step S1008 Acquire multiple images based on the target region of interest, and generate a dynamic image according to the preset order of the multiple images.
  • the above-mentioned preset order may be the order in which the plane images are captured, or may be a specific interception order preset when the plane images are captured.
  • the medical image processing system may acquire a plurality of planar images based on the target region of interest, and the planar image may be a cross-sectional image.
  • Step S1010 displaying the dynamic image.
  • displaying a dynamic image can be characterized as a way in which the image can be displayed at any viewing angle.
  • the step of displaying the dynamic image in the above step S1010 may specifically include: displaying the dynamic image according to a preset position.
  • the display interface layout has a plurality of display windows (usually called cells in the art), and the cells respectively display a curved surface reconstruction image in which the region of interest is a rib, the corresponding multi-planar reconstruction images (for example, a cross-sectional image), and the dynamic image of the rib obtained through the preceding steps.
  • the above rib dynamic image can also be displayed in a floating window.
  • steps S1006 to S1010 may be repeated when the physician adjusts the reading area in the display window of the rib curved surface reconstruction image while observing the rib dynamic image, and the adjusted dynamic image is displayed, to suit the doctor's reading habits and improve diagnostic efficiency and accuracy.
  • the above display manner may include selecting the playback speed of the dynamic image in response to the physician's input, such as accelerated or slow playback, forward or reverse playback, infinite loop playback, or pause.
  • The medical image processing system can first obtain the detection result of the region of interest, obtain the information of the target region of interest from the detection result according to the attribute parameters of the region of interest and the attribute parameter threshold, determine the target region of interest, acquire multiple images based on the target region of interest, generate a dynamic image from them in a preset order, and display it. The physician can observe the dynamic image to determine the location of the lesion, which reduces the physician's workload and saves the time needed to determine the lesion.
  • the step of inputting the image to be detected into the neural network model for processing in step S1002 to obtain the detection result of the region of interest may specifically include: Step S1002a, inputting the image to be detected into the neural network model for network forward propagation calculation to obtain the detection result of the region of interest.
  • the medical image processing system may input the image to be detected into a neural network model for forward propagation calculation, and after performing multiple downsampling encoding and multiple upsampling encoding methods, the detection result of the region of interest is obtained.
  • The medical image processing system can thus obtain the detection result of the region of interest, obtain the information of the target region of interest from it according to the region of interest attribute parameters and the attribute parameter threshold to determine the target region of interest, acquire multiple images based on the target region of interest, generate a dynamic image from them in a preset order, and display it; the physician can observe the dynamic image to determine the location of the lesion, which reduces the physician's workload and saves the time needed to determine the lesion.
  • the medical image processing method may further include:
  • step S1002b the attribute parameter threshold value input by the user is obtained in real time.
  • acquiring the attribute parameter threshold value input by the user in real time includes: determining the attribute parameter threshold value input by the user according to a mapping relationship between the control information of the preset threshold value control component and the attribute parameter threshold value.
  • the user can adjust the attribute parameter threshold in real time according to actual needs.
  • the medical image processing system can obtain the attribute parameter threshold input by the user in real time.
  • the mapping relationship between the control information of the threshold control component and the attribute parameter threshold may be preset.
  • the mapping relationship can then be looked up to obtain the attribute parameter threshold corresponding to the current control information of the threshold control component (a minimal sketch follows below).
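  • A minimal sketch of such a mapping and the resulting real-time filtering (Python; the linear mapping, field names, and values are hypothetical, not specified by this application):

        def slider_to_threshold(slider_value, slider_max=100, lo=0.0, hi=1.0):
            # Preset mapping from the threshold control component's value to a confidence threshold
            return lo + (hi - lo) * slider_value / slider_max

        def filter_detections(detections, threshold):
            # Keep only regions of interest whose confidence meets the user-set threshold
            return [d for d in detections if d["confidence"] >= threshold]

        detections = [{"confidence": 0.95, "bbox": (10, 20, 30, 12, 12, 8)},
                      {"confidence": 0.55, "bbox": (40, 44, 21, 10, 9, 7)}]
        # e.g. the user drags a slider to 80 -> threshold 0.8 -> redisplay in real time
        targets = filter_detections(detections, slider_to_threshold(80))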
  • The medical image processing system can then obtain the information of the target region of interest from the detection result of the region of interest according to the region of interest attribute parameters and the attribute parameter threshold, determine the target region of interest, acquire multiple images based on it, generate a dynamic image in a preset order, and display it; the physician can observe the dynamic image to determine the location of the lesion, which reduces the physician's workload and saves the time needed to determine the lesion.
  • a schematic flowchart of another medical image processing method is provided.
  • In step S1008, multiple images are acquired based on the target region of interest, and the step of generating a dynamic image from the multiple images in a preset order may include:
  • Step S1014 Acquire the target region of interest image based on the target region of interest.
  • taking the target region of interest as a reference, the range extends around the target region of interest to obtain a region larger than the target region of interest, and the image within that region is selected as the target region of interest image.
  • the step of acquiring an image of the target region of interest based on the target region of interest may specifically include: using the target region of interest as a reference, selecting an image within a preset range as the target region of interest image.
  • the medical image processing system may take the target region of interest as the center region, and select the images within the preset range as the target region of interest image by uniformly extending the center region.
  • the shape of the preset range may be circular, square, rectangular, and various other shapes.
  • the shape of the target interest area may also be circular, square, rectangular, and various other shapes.
  • the number of images in the selected preset range may be one; or it may be multiple.
  • the target region of interest image may include not only the image of the target lesion area, but also sufficient related background images and medical information of the target lesion area, such as the size and location of the target lesion, to help the doctor make the final lesion confirmation.
  • the target region of interest image may be a three-dimensional image.
  • Step S1024 Acquire a plurality of planar images according to the target region of interest image.
  • the manner of rotation may be clockwise rotation or counterclockwise rotation, which is not limited in this embodiment.
  • the step of acquiring a plurality of planar images according to the target area of interest image in step S1024 may specifically include: acquiring a plurality of the planar images in the target area of interest image according to a preset acquisition method .
  • the above-mentioned plane image may be that, in the target region of interest image, multiple slice images perpendicular to the preset direction are sequentially intercepted along the preset direction as the plane image.
  • the plane image may be an image captured on the cross-section of the target region of interest image; the plane image may also be an image captured on the sagittal plane of the target region of interest image; the plane image may also be an image on the target region of interest An image captured on the coronal plane; the planar image may also be an image captured from one end to the other end of the target region of interest image in any direction, and multiple captured images are used as planar images.
  • the planar images may also be obtained by establishing a rectangular coordinate system in the target region of interest image: first, at the initial position, a maximum intensity projection of the target region of interest image is performed in the Z-axis direction and the projection image is taken as a planar image; then the target region of interest image is rotated in a preset direction, and after each rotation by a preset angle a maximum intensity projection in the Z-axis direction is performed and the projection image is taken as a planar image, until the target region of interest image is rotated back to the initial position, yielding multiple planar images.
  • the rectangular coordinate system can be established based on the position of the scanning bed during the CT scan, with left to right as the x axis, top to bottom as the y axis, and feet to head as the z axis.
  • the rectangular coordinate system can also be established based on the medical information, such as the spatial morphology, of the target region of interest; for example, in a CT image of the ribs, the plane of the rib central axis is the x-y plane and the normal vector of that plane is the z-axis direction (a cropping and slicing sketch follows below).
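  • A minimal sketch of cropping the target region of interest image and intercepting sequential slices along a preset direction (NumPy; the geometry and sizes are hypothetical placeholders):

        import numpy as np

        def crop_roi(volume, center, half_size):
            # Select the image within a preset range centered on the target region of interest
            (z, y, x), (dz, dy, dx) = center, half_size
            return volume[z - dz:z + dz, y - dy:y + dy, x - dx:x + dx]

        def planar_images(roi, axis=0):
            # Intercept slice images perpendicular to the preset direction, in sequence
            return [np.take(roi, i, axis=axis) for i in range(roi.shape[axis])]

        volume = np.random.rand(200, 512, 512)   # placeholder volume of the detected object
        roi = crop_roi(volume, center=(100, 256, 256), half_size=(20, 40, 40))
        planes = planar_images(roi, axis=0)      # e.g. sequential cross-sectional slices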
  • In step S1034, a dynamic image is generated from the plurality of planar images in a preset order.
  • The preset order may be, after numbering the acquired planar images, the order from the largest number to the smallest, in which case the medical image processing system generates the dynamic image from the planar images in descending order; it may also be the order from the smallest number to the largest, in which case the dynamic image is generated in ascending order.
  • the step of generating the dynamic image from the plurality of planar images in a preset order in step S1034 may specifically include: generating the dynamic image from the plurality of planar images in the acquisition order or in an order opposite to the acquisition order.
  • the preset order may also be based on intercepting part of the planar images at a certain layer thickness, and generating the dynamic image in the acquisition order or in the order opposite to the acquisition order.
  • Alternatively, the target region of interest image is rotated and a maximum intensity projection is performed in the Z-axis direction at each step, the projection image being used as the planar image until the target region of interest image is rotated back to the initial position; the dynamic image is then generated in the order the planar images were acquired or in the reverse order.
  • the dynamic image may be a video obtained by encoding the multiple planar images with a video encoder in any of various video compression formats; it may also be obtained by compressing the acquired planar images into a GIF (Graphics Interchange Format) file (a minimal sketch follows below).
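  • A minimal sketch of the rotating maximum intensity projection and GIF generation (SciPy/imageio; the step angle, sizes, and file name are hypothetical placeholders):

        import numpy as np
        import imageio.v2 as imageio
        from scipy.ndimage import rotate

        def rotating_mip_frames(roi, step_deg=15):
            # Rotate the ROI image by a preset angle each time and project along Z
            frames = []
            for angle in range(0, 360, step_deg):   # stop once back at the initial position
                rotated = rotate(roi, angle, axes=(1, 2), reshape=False, order=1)
                mip = rotated.max(axis=0)           # maximum intensity projection along Z
                lo, hi = mip.min(), mip.max()
                frames.append((255 * (mip - lo) / (hi - lo + 1e-8)).astype(np.uint8))
            return frames

        roi = np.random.rand(40, 80, 80)            # placeholder target region of interest image
        frames = rotating_mip_frames(roi)
        # acquisition order; use frames[::-1] for the opposite order
        imageio.mimsave("roi.gif", frames, duration=0.1)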
  • The medical image processing system can thus acquire the target region of interest image based on the target region of interest, acquire multiple planar images from it, generate a dynamic image from the planar images in a preset order, and display the dynamic image; the physician can observe the dynamic image to determine the location of the lesion, which reduces the physician's workload and saves the time needed to determine the lesion.
  • Each module in the medical image processing system of the computer device described above may be implemented in whole or in part by software, hardware, or a combination thereof.
  • the above modules may be embedded in the hardware or independent of the processor in the computer device, or may be stored in the memory in the computer device in the form of software, so that the processor can call and execute the operations corresponding to the above modules.
  • the medical image processing system includes: a processing module 1100, an information acquisition module 1200, a region of interest acquisition module 1300, a dynamic image generation module 1400, and a display module 1500.
  • the processing module 1100 is configured to input the image to be detected into a neural network model for processing to obtain the detection result of the region of interest, wherein the detection result of the region of interest includes information of the region of interest and interest Regional attribute parameters;
  • the information obtaining module 1200 is configured to obtain information of the target interest area from the detection result of the interest area according to the attribute parameter of the interest area and the threshold value of the attribute parameter;
  • the region of interest acquisition module 1300 is configured to determine the target region of interest in the image to be detected according to the information of the target region of interest;
  • the dynamic image generating module 1400 is configured to acquire multiple images based on the target region of interest, and generate dynamic images according to the preset order of the multiple images;
  • the display module 1500 is used to display the dynamic image.
  • the medical image processing system provided in this embodiment can execute the above method embodiments, and its implementation principles and technical effects are similar, which will not be repeated here.
  • a computer device is provided, and its internal structure diagram may be as shown in FIG. 5.
  • the computer equipment includes a processor, a memory, a network interface, a display screen, and an input device connected through a system bus.
  • the processor of the computer device is used to provide computing and control capabilities.
  • the memory of the computer device includes a non-volatile storage medium and an internal memory.
  • the non-volatile storage medium stores an operating system and computer programs.
  • the internal memory provides an environment for the operating system and computer programs in the non-volatile storage medium.
  • the network interface of the computer device is used to communicate with an external computer device through a network connection.
  • the computer program is executed by the processor to implement an image processing method.
  • the display screen of the computer device may be a liquid crystal display or an electronic ink display; the input device of the computer device may be a touch layer covering the display screen, a button, trackball, or touchpad provided on the computer device housing, or an external keyboard, touchpad, or mouse.
  • A computer device is provided, which includes a memory and a processor; a computer program is stored in the memory, and when the processor executes the computer program, the steps of the medical image processing method described above are implemented, including displaying the dynamic image.
  • A readable storage medium is provided, on which a computer program is stored; when the computer program is executed by a processor, the steps of the medical image processing method described above are realized, including displaying the dynamic image.
  • Bone fractures (such as rib fractures) are common. X-ray plain films have low sensitivity to bone fractures and have difficulty clearly displaying other lesions of the chest and chest wall, so CT is the preferred imaging method for chest diseases and is often used as an important means of liability determination after chest trauma, especially bone trauma. Although most bone fractures are not serious, judicial evaluation depends on the number of rib fractures, and some fractures present subtly, so small fractures are easily missed and prone to disputes.
  • Existing bone segmentation and bone fracture detection are handled separately.
  • For bone segmentation, the user can manually set an appropriate threshold to determine the approximate range of the ribs and then use region growing or watershed algorithms to fill holes and smooth boundaries, or apply machine learning methods, or combine the texture and grayscale features of the ribs. For fracture detection, the user relies on medical knowledge to examine rib features layer by layer in software to determine whether there is a fracture, or uses machine learning. The existing practice of handling bone segmentation and bone fracture detection separately is complicated and time-consuming.
  • Another embodiment of the present application proposes an image processing method, an image processing model training method, and corresponding systems.
  • an embodiment of the present application provides an image processing method.
  • the method includes:
  • the method further includes the step of preprocessing the image to be detected, and the preprocessing includes:
  • S2020 Input the image to be detected into a neural network model for processing to obtain a bone segmentation result, a bone centerline segmentation result, and a bone fracture detection result;
  • the neural network model is determined by machine training and learning based on the training image, specifically for machine training and learning based on the training image and corresponding bone labels, bone midline labels, and bone fracture labels.
  • image blocks are selected from the image to be detected to form the image blocks to be detected, and the image blocks to be detected are input into the neural network model for processing.
  • an improved or optimized neural network model is used for image processing, with the segmentation and detection functions realized mainly by coupling a downsampling encoding module with a multi-branch upsampling decoding module, as shown in FIG. 7.
  • S2020 specifically includes the following steps:
  • S2021 Input the image to be detected (specifically, the image block to be detected) into the neural network model to perform network forward propagation calculation;
  • m is less than n, and m and n are positive integers.
  • S2030 specifically includes the following steps:
  • S2031: Binarize the bone segmentation result, the bone centerline segmentation result, and the bone fracture detection result according to a preset threshold;
  • the probability map is processed according to the preset threshold to obtain a binary mask; for example, with the threshold set to 0.5, probability values greater than or equal to the preset threshold are set to 1 and the rest to 0, that is, probability values where the binary mask is 1 are retained, and values at positions where the mask is 0 become 0;
  • the binarized image contains multiple connected domains; the connected domains are labeled to obtain a multi-label image;
  • S2033: Count the number of pixels of each label in the multi-label image against a preset threshold to obtain the bone segmentation mask, the bone centerline mask, and the position coordinates of each fracture in the image to be detected at high resolution;
  • the number of pixels in each labeled connected domain is counted; labels whose pixel count is less than the preset threshold are set to 0, and labels whose pixel count is greater than or equal to the preset threshold are set to 1.
  • the preset thresholds used in S2031 and S2033 are different (a post-processing sketch follows below).
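  • A minimal sketch of this post-processing chain (SciPy; the thresholds are illustrative values, with 0.5 taken from the example above):

        import numpy as np
        from scipy import ndimage

        def postprocess(prob_map, prob_thresh=0.5, min_voxels=100):
            # Binarize: probabilities >= threshold become 1, the rest 0 (S2031)
            mask = (prob_map >= prob_thresh).astype(np.uint8)
            # Label the connected domains to obtain a multi-label image
            labels, num = ndimage.label(mask)
            # Count pixels per label and discard small connected domains (S2033)
            counts = ndimage.sum(mask, labels, index=range(1, num + 1))
            for lab, cnt in enumerate(counts, start=1):
                if cnt < min_voxels:
                    mask[labels == lab] = 0
            return mask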
  • the image to be detected is processed in image blocks rather than as the entire original image, mainly because of graphics processing unit (GPU) memory limitations; processing parts of the image can also be regarded as a form of regularization, which can improve the efficiency and accuracy of image processing.
  • the neural network in this embodiment is preferably an improved and optimized V-Net network. Of course, it is not limited to the V-Net network, and may also be other convolutional neural networks.
  • the bones in this embodiment are preferably ribs, and may also be vertebrae, limb bones, or sacrums.
  • the segmentation result or detection result output by the neural network model in this embodiment is preferably a probability graph, and may also be a coordinate graph or the like.
  • the core of the V-Net network is to downsample-encode the image n times through the encoding channel, then upsample n times, and finally classify the pixels using softmax.
  • Fig. 8 shows the structure of the neural network model in this embodiment.
  • the data input in FIG. 8 is a 3D medical image; the solid arrows are the network path directions, and the dashed arrows indicate the data splicing (concatenation) process.
  • for each module, the first parameter is the number of input channels and the second is the number of output channels.
  • the structure used in the module is a residual network or a bottleneck network or a dense network structure.
  • V-Net is improved and optimized to obtain the neural network model used here: after the m-th downsampling of V-Net (m is less than n), m upsamplings are inserted, and a binary-classification softmax (normalized exponential function) module outputs the rib fracture detection; after the n-th downsampling, n upsamplings are performed, and a ternary-classification softmax module outputs the rib segmentation and rib centerline segmentation.
  • m is required to be less than n because rib segmentation needs a larger field of view to determine whether a structure is a rib, while fracture detection relies more on local features.
  • the downsampling encoding channel parameters are shared because, besides saving time and memory, the detection and segmentation targets have common extractable features; for example, rib edge information and bone cortex distortion information are useful for both rib segmentation and fracture detection (a topology sketch follows below).
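  • A minimal sketch of this shared-encoder, dual-branch topology (PyTorch; channel widths are hypothetical, and the skip connections, residual blocks, and data splicing of FIG. 8 are omitted for brevity):

        import torch
        import torch.nn as nn

        def conv_block(c_in, c_out):
            return nn.Sequential(nn.Conv3d(c_in, c_out, 3, padding=1), nn.ReLU())

        class DualBranchNet(nn.Module):
            def __init__(self, m=2, n=4):                    # requires m < n
                super().__init__()
                chans = [16 * 2 ** i for i in range(n + 1)]  # 16, 32, 64, ...
                self.down = nn.ModuleList(
                    [conv_block(1 if i == 0 else chans[i - 1], chans[i]) for i in range(n + 1)])
                self.pool = nn.MaxPool3d(2)
                # fracture branch: m upsamplings inserted after the m-th downsampling
                self.up_frac = nn.ModuleList(
                    [nn.ConvTranspose3d(chans[m - i], chans[m - i - 1], 2, stride=2) for i in range(m)])
                # segmentation branch: n upsamplings after the n-th downsampling
                self.up_seg = nn.ModuleList(
                    [nn.ConvTranspose3d(chans[n - i], chans[n - i - 1], 2, stride=2) for i in range(n)])
                self.frac_head = nn.Conv3d(chans[0], 2, 1)   # binary softmax: fracture detection
                self.seg_head = nn.Conv3d(chans[0], 3, 1)    # ternary softmax: rib / centerline / background
                self.m, self.n = m, n

            def forward(self, x):                            # spatial dims must be divisible by 2**n
                feats = []
                for i, block in enumerate(self.down):
                    x = block(x)
                    feats.append(x)                          # feats[i] is at 1/2**i resolution
                    if i < self.n:
                        x = self.pool(x)
                f = feats[self.m]                            # branch point after the m-th downsampling
                for up in self.up_frac:
                    f = up(f)
                s = feats[self.n]                            # deepest features for segmentation
                for up in self.up_seg:
                    s = up(s)
                return self.frac_head(f).softmax(1), self.seg_head(s).softmax(1)

        frac_prob, seg_prob = DualBranchNet()(torch.randn(1, 1, 64, 64, 64))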
  • an embodiment of the present application provides another image processing method.
  • the method includes:
  • This step is the same as S2110 and is not repeated here.
  • the neural network model in this embodiment is specifically determined by performing machine training and learning based on the training image and the corresponding bone label, bone centerline label, and bone fracture label.
  • the coarse network model in this embodiment is mainly used for positioning processing of the image to be detected, so as to improve the accuracy of subsequent image processing.
  • the neural network in this embodiment may be an improved and optimized V-Net network. Of course, it is not limited to the V-Net network, and may also be other convolutional neural networks.
  • the bones in this embodiment may be ribs, vertebrae, limb bones or sacrums.
  • the segmentation result or detection result output by the neural network model in this embodiment may be a probability graph, a coordinate graph, or the like.
  • the neural network in this embodiment is preferably an improved and optimized V-Net network, the fracture is preferably a rib, and the segmentation result or detection result is preferably a probability map.
  • S2120 further includes:
  • S2121 Input the image to be detected (specifically, the image block to be detected) into the coarse network model to perform network forward propagation calculation to obtain a rib distribution probability map;
  • the coarse network model has multiple hidden layers in the forward process, and each hidden layer includes a convolution layer and an excitation layer.
  • the network structure of the coarse network model in this embodiment adopts the following convolution formula: y_l = w_l * x_(l-1) + b_l, where l denotes the l-th hidden layer, y is the output of the convolution, x is the input of the convolution, and w and b are the trained parameters.
  • the excitation layer uses ReLU: z_i = max(x_i, 0), where x is the input of the excitation layer and i is the index into the data vector.
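  • Rendered as code, one such hidden layer is simply a convolution followed by ReLU (a minimal PyTorch sketch; the channel counts are hypothetical):

        import torch.nn as nn

        hidden_layer = nn.Sequential(
            nn.Conv3d(16, 32, kernel_size=3, padding=1),  # y_l = w_l * x_(l-1) + b_l; w, b are trained
            nn.ReLU(),                                    # z_i = max(x_i, 0)
        )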
  • S2122: Post-process the rib distribution probability map to obtain a rib segmentation mask at low resolution (that is, a coarser resolution), and mark a target frame on the low-resolution rib segmentation mask to obtain the positioning area;
  • this step specifically includes: binarizing the rib distribution probability map with a preset threshold, removing connected domains smaller than the preset threshold, and performing a frame selection operation on connected domains greater than or equal to the preset threshold, that is, enclosing them with a bounding frame (a frame-selection sketch follows below);
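  • A minimal sketch of this frame-selection step (SciPy; thresholds are illustrative):

        import numpy as np
        from scipy import ndimage

        def locate_target_frame(prob_map, prob_thresh=0.5, min_voxels=100):
            mask = prob_map >= prob_thresh                       # binarize with the preset threshold
            labels, num = ndimage.label(mask)                    # connected domains
            sizes = np.asarray(ndimage.sum(mask, labels, index=range(1, num + 1)))
            keep = np.isin(labels, 1 + np.flatnonzero(sizes >= min_voxels))
            boxes = ndimage.find_objects(keep.astype(np.int8))   # bounding box of the kept domains
            return boxes[0] if boxes else None                   # target frame as slice objects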
  • S2123 Input the image in the target frame into the thin network model to perform network forward propagation calculation to obtain a rib probability map, a rib centerline probability map, and a rib fracture probability map;
  • This step specifically includes:
  • the preprocessing method is similar to the preprocessing of the image to be detected and is not repeated here;
  • the forward propagation of the fine network model is the same as that of the coarse model; the differences are the data resolution (for example, the resolutions of the coarse and fine models are 4 mm and 1 mm, respectively) and the network parameters.
  • This step specifically includes:
• S2131 Binarize the rib probability map, the rib centerline probability map, and the rib fracture probability map according to a preset threshold (for example, 0.5): probability values greater than or equal to the preset threshold are set to 1, and the rest are set to 0;
• S2132 After labeling connected domains in the binarized maps to obtain multi-label images, count the number of pixels of each label according to a preset threshold (for example, set labels whose pixel count is less than the preset threshold to 0 and labels greater than or equal to the preset threshold to 1) to obtain the high-resolution rib segmentation mask, the rib centerline mask, and the position coordinates of each fracture in the image to be detected.
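• as a rough illustration of these post-processing steps, the sketch below binarizes a probability map, labels connected domains, and removes domains whose pixel count falls below a preset threshold; the function name and default thresholds are assumptions.

```python
import numpy as np
from scipy import ndimage

def postprocess(prob_map, prob_thresh=0.5, min_pixels=50):
    binary = prob_map >= prob_thresh          # binarization at the preset threshold
    labels, num = ndimage.label(binary)       # connected-domain labeling
    mask = np.zeros_like(binary)
    for i in range(1, num + 1):
        component = labels == i
        if component.sum() >= min_pixels:     # pixel-count filtering per label
            mask |= component
    return mask
```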
  • the neural network model is divided into a fine network model and a coarse network model.
• the coarse network model is used to locate the image to be detected, and the fine network model is used to process the image to be detected.
  • the fine network model is trained at a high resolution (for example, the resolution is 1mm).
  • a coarse model positioning step is added to improve the accuracy of subsequent image processing.
  • FIG. 10 is a block diagram showing the working principle of image processing in this embodiment (using ribs as an example), and FIG. 11 is a screenshot of the cross-section, sagittal plane, and coronal plane of the image to be detected in this embodiment.
• a fracture can be seen in the figure (inside the dotted circle in FIG. 11); the image to be detected is analyzed according to the process shown in FIG. 10, and the output result is shown in FIG. 12, where the white bright area is the rib centerline segmentation result, the gray area is the rib segmentation result, and the white dotted rectangular frame is the fracture detection result frame.
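• a schematic sketch of this coarse-to-fine flow is given below, assuming trained coarse_net and fine_net models, a resample helper, and the postprocess function sketched earlier; the resolutions and all names are illustrative assumptions.

```python
from scipy import ndimage

def detect_rib_fractures(image, coarse_net, fine_net, resample):
    # coarse stage: localize ribs at a coarser resolution (e.g., 4 mm)
    coarse_prob = coarse_net(resample(image, spacing_mm=4))
    labels, _ = ndimage.label(postprocess(coarse_prob))
    results = []
    for box in ndimage.find_objects(labels):    # one target frame per region
        # fine stage: analyze the framed image at full resolution (e.g., 1 mm)
        crop = resample(image[box], spacing_mm=1)
        rib_p, line_p, fracture_p = fine_net(crop)
        results.append((box, postprocess(rib_p),
                        postprocess(line_p), postprocess(fracture_p)))
    return results
```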
  • this embodiment discloses an image processing model training method.
  • the training method is used to train the neural network model in Embodiment 1.
  • the method includes:
• the image is divided into image blocks, and training uses image blocks instead of the entire training image, mainly in consideration of the limitation of graphics processor (GPU) memory; image-block training can also be regarded as a regularization means that makes model performance better.
  • this step specifically includes:
  • S2221 Input the training image into a preset neural network for bone segmentation training and bone centerline segmentation training, and fix the parameters in the training process to obtain a bone segmentation module and a bone centerline segmentation module;
  • the neural network model is trained based on the training image.
  • the trained neural network model has the function of simultaneously outputting the bone segmentation result, the bone centerline segmentation result, and the bone fracture detection result.
  • this embodiment first performs bone segmentation training and bone centerline segmentation training (the training path is input module_1_16, downsampling module_16_32, downsampling module_32_64, downsampling module in FIG. 8).
• the reason this embodiment trains the segmentation module first and the detection module afterwards is that the segmentation information of the segmentation module differs greatly from the information of the surrounding environment, so the loss (Loss) of the training process can reach a lower value more quickly; part of the detection information, such as bone cortex distortion and minor fractures, differs little from the surrounding environment information, so the Loss of the training process requires more time to reach a lower value.
• training the easier segmentation module first and the harder detection module afterwards allows the Loss of the detection module to reach a lower value faster, because some parameters have already been trained and fewer parameters remain to be trained. After many iterations, when the Loss being trained is low, the training model file is saved.
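• a minimal PyTorch sketch of this two-stage order is shown below: the segmentation branches are trained first, their parameters are then fixed (frozen), and only afterwards is the detection branch trained; the model attributes, losses, and data loaders are all assumptions.

```python
import torch

def train_two_stage(model, seg_loader, det_loader, seg_loss, det_loss):
    # stage 1: bone segmentation + bone centerline segmentation training
    seg_params = (list(model.encoder.parameters())
                  + list(model.seg_decoder.parameters()))
    opt = torch.optim.SGD(seg_params, lr=1e-3)
    for image, seg_label, line_label in seg_loader:
        loss = seg_loss(model(image), seg_label, line_label)
        opt.zero_grad(); loss.backward(); opt.step()

    # fix the trained parameters, then stage 2: bone fracture detection training
    for p in seg_params:
        p.requires_grad = False
    opt = torch.optim.SGD(model.det_decoder.parameters(), lr=1e-3)
    for image, fracture_label in det_loader:
        loss = det_loss(model(image), fracture_label)
        opt.zero_grad(); loss.backward(); opt.step()
```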
• the neural network model preset in S2210 is preferably an improved and optimized V-Net network; of course, it is not limited to the V-Net network and may also be another convolutional neural network.
  • the trained neural network model in this embodiment is configured to simultaneously output a bone segmentation result, a bone centerline segmentation result, and a bone fracture detection result according to the input image.
• the bone is preferably a rib, but may also be a vertebra, a limb bone, or the sacrum.
• the segmentation result/detection result in this embodiment is preferably a probability map or a coordinate map.
  • this embodiment discloses another training method for an image processing model.
  • the training method is used to train the neural network model in Embodiment 2.
  • the method includes:
• the step of obtaining the training image in this step is the same as that in Embodiment 3 and will not be repeated here.
  • the trained neural network model is configured to be able to simultaneously output a bone probability map, a bone centerline probability map and a bone fracture probability map according to the input image
  • the trained neural network models include a coarse network model and a fine network model.
  • the coarse network model is used for positioning the fine network model.
  • This step specifically includes:
• the coarse network model trains only bone segmentation and therefore includes only the bone segmentation module; it is used for the subsequent positioning of the fine network model's segmentation and detection, which can improve the efficiency and accuracy of fine network model training.
• the fine network model includes three modules: the bone segmentation module, the bone centerline segmentation module, and the bone fracture detection module.
  • the preset neural network model in S2321 is preferably an improved and optimized V-Net network, of course, it is not limited to the V-Net network, and may also be other convolutional neural networks.
  • the trained neural network model in this embodiment is configured to output the bone segmentation result, bone centerline segmentation result, and bone fracture detection result at the same time according to the input image.
• the bone is preferably a rib, but may also be a vertebra, a limb bone, or the sacrum.
• the segmentation result/detection result in this embodiment is preferably a probability map, and may also be a coordinate map or the like.
  • this embodiment provides an image processing system.
  • the system includes:
  • An image-to-be-detected module 2510 is used to obtain an image to be detected
  • the image processing module 2520 to be detected is configured to input the image to be detected into a neural network model for processing to obtain a bone segmentation result, a bone centerline segmentation result, and a bone fracture detection result;
  • the neural network model is determined by machine training and learning based on the training image.
  • the image processing module 2520 to be detected further includes:
  • the first obtaining unit 2521 is configured to input the image to be detected into the neural network model for network forward propagation calculation, and insert m times of upsampling codes after the mth downsampling code to obtain the bone fracture detection result ;
  • the second obtaining unit 2522 is configured to continue to perform downsampling coding, and perform n times upsampling coding after the nth downsampling coding to obtain the bone segmentation result and the bone centerline segmentation result;
• where m is less than n, and m and n are positive integers.
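• a schematic PyTorch sketch of this shared-encoder design follows: the fracture-detection branch decodes after the m-th down-sampling stage while the segmentation branch decodes after the n-th stage (m < n); all module names are illustrative assumptions.

```python
import torch.nn as nn

class SharedEncoderNet(nn.Module):
    def __init__(self, down_blocks, det_up_blocks, seg_up_blocks, m):
        super().__init__()
        self.down = nn.ModuleList(down_blocks)      # n down-sampling encoders
        self.det_up = nn.ModuleList(det_up_blocks)  # m up-sampling decoders
        self.seg_up = nn.ModuleList(seg_up_blocks)  # n up-sampling decoders
        self.m = m

    def forward(self, x):
        det_feat = None
        for i, block in enumerate(self.down):
            x = block(x)
            if i + 1 == self.m:
                det_feat = x                        # branch point for detection
        det_out = det_feat
        for block in self.det_up:                   # m up-sampling codes
            det_out = block(det_out)
        seg_out = x
        for block in self.seg_up:                   # n up-sampling codes
            seg_out = block(seg_out)
        return seg_out, det_out
```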
• the post-processing module 2530 is configured to binarize the bone segmentation result, the bone centerline segmentation result, and the bone fracture detection result according to a preset threshold; to label connected domains in the binarized bone segmentation result, bone centerline segmentation result, and bone fracture detection result to obtain multi-label images; and to count the number of pixels of each label in the multi-label images according to a preset threshold to obtain the high-resolution bone segmentation mask, the bone centerline mask, and the position coordinates of each fracture in the image to be detected.
• the image processing system in this embodiment corresponds to the image processing method in Embodiment 1; for specific analysis principles and procedures, please refer to the description in Embodiment 1.
  • this embodiment provides an image processing system.
  • the system includes:
  • the image-to-be-detected module 2610 is used to obtain an image to be detected
  • the image-to-be-detected processing module 2620 is configured to input the image-to-be-detected into a neural network model for processing to obtain a bone segmentation result, a bone centerline segmentation result, and a bone fracture detection result; wherein, the neural network model is based on a training image Determined by machine training and learning, which includes a coarse network model and a fine network model;
  • the image processing module 2620 to be detected further includes:
  • the coarse network model processing unit 2621 is configured to input the image to be detected (specifically, the image block to be detected) into the coarse network model to perform network forward propagation calculation to obtain a rib distribution probability map;
• the positioning area acquisition unit 2622 is used to post-process the rib distribution probability map to obtain a rib segmentation mask at low resolution (that is, at a coarser resolution), and to mark a target frame on the low-resolution rib segmentation mask to obtain the positioning area;
  • the thin network model processing unit 2623 is configured to input the image in the target frame into the thin network model for network forward propagation calculation to obtain a rib probability map, a rib centerline probability map, and a rib fracture probability map.
• the post-processing module 2630 is configured to binarize the bone segmentation result, the bone centerline segmentation result, and the bone fracture detection result according to a preset threshold; to label connected domains in the binarized results to obtain multi-label images; and to count the number of pixels of each label in the multi-label images according to a preset threshold to obtain the high-resolution bone segmentation mask, the bone centerline mask, and the position coordinates of each fracture in the image to be detected.
  • the image processing system in this embodiment corresponds to the image processing method in Embodiment 2. For specific analysis principles and procedures, please refer to the description in Embodiment 2.
  • this embodiment discloses an image processing model training system.
  • the system includes:
  • Training image acquisition module 2710 used to obtain training images
  • a model training module 2720 configured to train a neural network model based on the training image
  • the trained neural network model is configured to output the bone segmentation result, bone centerline segmentation result and bone fracture detection result simultaneously according to the input image;
  • the model training module 2720 further includes:
  • the first training unit 2721 is configured to input the training image into a preset neural network for bone segmentation training and bone centerline segmentation training, and fix parameters in the training process to obtain a bone segmentation module and a bone centerline segmentation module ;
  • the second training unit 2722 is configured to continue to perform bone fracture detection training on the training image through the preset neural network, and fix parameters in the training process to obtain a bone fracture detection module.
• the training system of the image processing model in this embodiment corresponds to the training method of the image processing model in Embodiment 3.
  • this embodiment provides another training system for image processing models.
  • the system includes:
  • Training image acquisition module 2810 used to acquire training images
  • a model training module 2820 configured to train a neural network model based on the training image
  • the trained neural network model is configured to be able to simultaneously output a bone probability map, a bone centerline probability map and a bone fracture probability map according to the input image
  • the trained neural network models include a coarse network model and a fine network model.
  • the coarse network model is used for positioning the fine network model
  • the model training module 2820 further includes:
  • the coarse network model training unit 2821 is used to input the training image into a preset neural network for skeleton segmentation training and skeleton center line segmentation training at low resolution, and fix the parameters in the training process to obtain a coarse network model;
• the fine network model training unit 2822 is used to continue, at high resolution, the bone segmentation training and bone centerline segmentation training on the training image through the preset neural network and fix the parameters of that training process, and then to perform bone fracture detection training on the training image and fix the parameters of that training process, obtaining the fine network model.
• the training system of the image processing model in this embodiment corresponds to the training method of the image processing model in Embodiment 4.
  • this embodiment discloses another image processing system, which includes:
  • the image acquisition module 2910 is used to acquire training images and images to be detected
  • the model training module 2920 trains a neural network model based on the training image, wherein the trained neural network model is configured to output multiple types of analysis results based on the input image;
  • the image-to-be-detected processing module 2930 is configured to input the image-to-be-detected into a trained neural network model for processing to obtain a bone segmentation result, a bone centerline segmentation result, and a bone fracture detection result.
• in this embodiment, the image processing systems and the image processing model training systems of Embodiments 5-8 are combined into a whole; for details, please refer to the descriptions in Embodiments 5-8.
• in existing practice, an appropriate parameter threshold is selected based on the ROC curve (Receiver Operating Characteristic curve) as the screening threshold for the results of the computer-aided diagnostic system; but once the parameter threshold is selected, doctors using the system cannot apply different parameter thresholds to weigh the detection rate against the false positive rate for different use scenarios and different case characteristics, which makes it impossible to balance different degrees of diagnostic accuracy against reading time and reduces the flexibility of the computer-aided diagnostic system.
  • another embodiment of the present application proposes a method, device, and terminal for displaying image interest regions.
  • a method for displaying an image region of interest including the following steps:
  • Step S3002 Acquire the original image of the detected object.
  • S3001 Obtain a detection result of an interest area in an image, where the detection result of the interest area includes information of the interest area and an attribute parameter of the interest area.
  • the image may include projection images obtained by various imaging systems.
  • the imaging system may be a single-mode imaging system, such as a computed tomography (CT) system, emission computed tomography (ECT), ultrasound imaging system, X-ray optical imaging system, positron emission tomography (PET) system, and the like.
  • the imaging system may also be a multi-mode imaging system, such as a computed tomography-magnetic resonance imaging (CT-MRI) system, positron emission tomography-magnetic resonance imaging (PET-MRI) system, single-photon emission tomography-computed tomography (SPECT-CT) system, digital subtraction angiography-computed tomography (DSA-CT) system, etc.
  • the detection result of the region of interest in the image may be, but not limited to, an output result obtained by processing the corresponding image through a deep learning model, and may include information of the region of interest and attribute parameters of the region of interest.
• the region of interest may include anatomical structures, such as blood vessels, ossification centers, nerves, muscles, soft tissue, trachea, cartilage, ligaments, fissures, etc.; the region of interest may also include lesions, that is, sites on tissues or organs where pathological changes have occurred under the action of pathogenic factors, such as fractures, lung nodules, tumors, cerebral hemorrhage, heart disease, nerve disease, and so on.
  • the region of interest may also be other regions of interest in the image.
  • the attribute parameter of the region of interest may be any parameter that affects the detection result of the region of interest and can be adjusted in real time during the use stage of the display device of the image region of interest of the embodiment of the present specification.
• the region-of-interest attribute parameters may include one of the following: region-of-interest confidence, region-of-interest category, and region-of-interest size, where the region-of-interest confidence is the degree of certainty, given by a detection model such as a deep learning model, that a detected region or part in the image belongs to the region of interest.
  • the size of the region of interest is a parameter for characterizing the size of the region or part corresponding to the region of interest.
  • the information of the region of interest may be the detection result information of all the regions of interest, or may be the detection result information of a part of the regions of interest.
  • the attribute parameter threshold corresponds to the attribute parameter of the region of interest, and may include the region of interest confidence, the region of interest category, the region of interest size, and the like.
  • the user can adjust the attribute parameter threshold according to need.
  • the computer-aided diagnosis system obtains the attribute parameter threshold input by the user in real time.
  • the method for obtaining the attribute parameter threshold input by the user in real time may use the method shown in FIG. 22, and as shown in FIG. 22, the method may include:
  • S3101 In response to the user's operation on the threshold control component, obtain control information of the threshold control component.
  • a threshold control component may be set on the human-computer interaction interface, and the threshold control component may be, but not limited to, a slide bar, a pull-down menu, or the like.
• when the user operates the threshold control component, the system can respond to the operation to obtain the control information of the threshold control component; for example, when the user operates a slider, the position information of the slider can be acquired.
  • S3103 Determine the attribute parameter threshold input by the user according to the mapping relationship between the control information of the preset threshold control component and the attribute parameter threshold.
  • the mapping relationship between the control information of the threshold control component and the attribute parameter threshold may be preset.
• for example, the mapping relationship between the position information of the slider and the attribute parameter threshold may be preset; the relationship between the slider's position information and the attribute parameter threshold may be, but is not limited to, a linear mapping relationship.
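• a minimal sketch of such a linear mapping is shown below, assuming the slider position is normalized to [0, 1] and using the preset lowest and highest thresholds mentioned later (0 and 1.0) as example defaults.

```python
def slider_to_threshold(position, lowest=0.0, highest=1.0):
    """Linearly map a normalized slider position to an attribute parameter threshold."""
    return lowest + position * (highest - lowest)
```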
  • S3005 Acquire the information of the target interest area from the detection result of the interest area according to the comparison result of the attribute parameter of the interest area and the threshold value of the attribute parameter.
• for example, when the region-of-interest attribute parameter is the region-of-interest confidence, the information of regions of interest whose confidence is greater than or equal to the confidence threshold is obtained from the detection result to serve as the information of the target regions of interest.
• when the region-of-interest attribute parameter is the region-of-interest category or size, the information of regions of interest whose category or size matches the category threshold or size threshold input by the user can be obtained from the detection result to serve as the information of the target regions of interest.
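• as a sketch of this comparison step, the snippet below filters a detection result by confidence; representing the detection result as a list of dictionaries is an assumption made for illustration.

```python
def select_target_rois(detections, confidence_threshold):
    # keep regions whose confidence is greater than or equal to the threshold
    return [roi for roi in detections
            if roi["confidence"] >= confidence_threshold]
```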
  • S3007 Display the information of the target interest area.
• there may be one or more target regions of interest, and their number may change as the attribute parameter threshold changes.
  • partial images of one or more target interest areas may be displayed.
  • a partial image corresponding to the target area of interest may be obtained, the partial image may be rendered, and the rendered partial image may be displayed.
• the rendering of partial images may include at least one of the following methods: multi-planar reformation (MPR), volume rendering technique (VRT), maximum intensity projection (MIP), and curved planar reformation (CPR).
• MPR superimposes all the axial images in the scanning range and then reformats, along reference lines marked on certain slices, the specified tissue in coronal, sagittal, or arbitrarily angled oblique planes.
• with MPR, new tomographic images can be generated arbitrarily without repeated scanning, and curved-surface reformation can unfold a curved object within a single image.
• VRT passes assumed projection lines through the scanned volume from a given angle and comprehensively displays the pixel information within the volume.
• VRT can render images with different pseudo-colors and transparency, giving the impression of a realistic three-dimensional structure; this method loses very little data information during reconstruction and can better display anatomical structures and the spatial relationships of lesions.
  • MIP is a computer visualization method for projecting three-dimensional spatial data on a visualization plane.
• during projection, the brightness of each voxel's density value is attenuated in some way, and the voxel with the highest brightness is finally presented on the projection plane.
• if the projection plane is rotated one full turn at a certain angle step and the MIP at each angle is saved, the MIPs at all angles can then be stacked to obtain the effect of rotating the view around the area corresponding to the region of interest.
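• a numpy sketch of MIP and of the rotating-MIP sequence described above follows; the volume axis convention and the use of scipy.ndimage.rotate are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import rotate

def mip(volume, axis=0):
    # the brightest voxel along each projection ray reaches the plane
    return volume.max(axis=axis)

def rotating_mip(volume, step_deg=10):
    frames = []
    for angle in range(0, 360, step_deg):
        rotated = rotate(volume, angle, axes=(1, 2), reshape=False)
        frames.append(mip(rotated))   # one MIP per angle step
    return frames                     # stacked to view the region from all sides
```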
  • CPR is a special method of MPR, suitable for the display of some curved structure organs of the human body, such as: jaw bone, tortuous blood vessels, bronchus, etc.
  • the information of the target interest area may include position information of the target interest area and / or size information of the target interest area.
• when displaying the information of the target region of interest, the original image may also be obtained; the target region of interest corresponding to that information is determined in the original image according to the position information and/or size information of the target region of interest, and the original image containing the target region of interest is displayed.
  • the original images are images of various modalities directly obtained by various imaging systems.
  • a target index corresponding to the information of the target region of interest may also be generated and displayed.
• the target index may include a serial number, such as a number in the form of Arabic numerals, and some brief information about the target region of interest, such as an overview of its location; when displayed, the target indexes may be shown as a list ordered by serial number.
  • FIG. 23 and FIG. 24 are schematic diagrams of interfaces for displaying information of target interest regions obtained under different confidence thresholds.
  • the threshold control component is set on the human-machine interaction interface in the form of a slider.
• in the mapping relationship between the sliding position of the slider and the confidence threshold, the closer the sliding position is to the right, the greater the corresponding confidence threshold.
  • the leftmost end of the slider corresponds to a preset lowest threshold (such as 0)
  • the rightmost end of the slider corresponds to a preset highest threshold (such as 1.0).
• the confidence threshold corresponding to the slider position in FIG. 23 is larger, and the confidence threshold corresponding to the slider position in FIG. 24 is smaller.
• from the target index, the user can also intuitively obtain some brief information about the target region of interest, such as an overview of its location.
  • S3201 Receive a selection signal for one of the target indexes.
  • the user can select the target index displayed on the human-computer interaction interface, and accordingly, the terminal can receive a selection signal applied by the user to select the target index.
  • S3203 Determine, according to the selection signal, information of a target region of interest corresponding to the target index.
  • the terminal may determine the information of the target interest area corresponding to the currently selected target index.
• since the target region of interest corresponding to the information of the target region of interest has already been determined in the original image, after the information of the target region of interest corresponding to the selection signal is determined in step S3203, the target region of interest corresponding to that information can further be identified in the displayed original image.
• the position framed in the left image of FIG. 23 and FIG. 24 is the target region of interest corresponding to the target index selected on the right.
  • the rendered partial image corresponding to the information of the target region of interest may also be identified, as shown in FIGS. 23 and 24.
  • FIGS. 23 and 24 only give two possible examples, and do not constitute a limitation on the present application.
• the present application enables users to adjust the attribute parameter threshold in real time and thereby display the detection results under different attribute parameter thresholds in real time, which helps the user balance different degrees of diagnostic accuracy against reading time according to different usage scenarios and different case characteristics, improving the flexibility of the computer-aided diagnostic system.
• based on the same inventive concept, an embodiment of the present application further provides a device for displaying image regions of interest; since this device corresponds to the display methods provided by the foregoing embodiments, the implementations of those methods also apply to the device provided in this embodiment and will not be described in detail here.
  • FIG. 26 is a schematic structural diagram of a device for displaying an image region of interest provided by an embodiment of the present application.
• the device may include a first acquisition module 3610, a second acquisition module 3620, a third acquisition module 3630, and a display module 3640, where:
  • the first acquisition module 3610 may be used to acquire the detection result of the region of interest in the image, where the detection result of the region of interest includes information of the region of interest and attribute parameters of the region of interest.
  • the second obtaining module 3620 can be used to obtain the attribute parameter threshold value input by the user in real time;
  • the third obtaining module 3630 may be used to obtain the information of the target interest area from the detection result of the interest area according to the comparison result of the attribute parameter of the interest area and the threshold value of the attribute parameter;
  • the display module 3640 may be used to display the information of the target region of interest.
  • the region of interest attribute parameter includes one of the following: region of interest confidence, region of interest category, and region of interest size.
  • the region of interest includes an anatomical structure or a lesion.
  • the second obtaining module 3620 may include:
  • the response module 3621 may be used to obtain control information of the threshold control component in response to the user's operation on the threshold control component;
  • the first determining module 3622 may be used to determine the attribute parameter threshold input by the user according to the mapping relationship between the control information of the preset threshold control component and the attribute parameter threshold.
  • the display module 3640 may include:
  • the fourth obtaining module 3641 can be used to obtain local images corresponding to the target region of interest
  • the rendering module 3642 can be used to render the partial image
  • the first display module 3643 may be used to display the rendered partial image.
  • the information of the target interest area includes position information of the target interest area and / or size information of the target interest area.
  • the display module 3640 may further include:
  • the fifth acquisition module 3644 can be used to acquire original images
  • the second determination module 3645 may be used to determine the target interest area corresponding to the target interest area in the original image according to the position information of the target interest area and / or the size information of the target interest area;
  • the second display module 3646 can be used to display the original image containing the target region of interest.
  • the display module 3640 may further include:
  • the generating module 3647 may be used to generate a target index corresponding to the information of the target interest area
  • the third display module 3648 may be used to display the target index.
  • FIG. 29 is a schematic structural diagram of another display device of an image interest region provided by an embodiment of the present application.
• the device may include a first acquisition module 3910, a second acquisition module 3920, a third acquisition module 3930, a display module 3940, a receiving module 3950, a third determination module 3960, and an identification module 3970.
  • the first obtaining module 3910, the second obtaining module 3920, the third obtaining module 3930, and the display module 3940 can refer to the function description of the corresponding modules in FIG. 26 to FIG. 28, which will not be repeated here.
  • the receiving module 3950 may be used to receive a selection signal for one of the target indexes
  • the third determining module 3960 may be used to determine the target interest region information corresponding to the target index according to the selection signal;
• the identification module 3970 may be used to identify, in the original image, the target region of interest corresponding to the target region-of-interest information, and/or to mark the rendered partial image corresponding to that information.
• the display device of the image region of interest in the embodiment of the present application enables the user to adjust the attribute parameter threshold in real time and thereby display the detection results under different attribute parameter thresholds in real time, which helps the user balance different degrees of diagnostic accuracy against reading time according to different usage scenarios and different case characteristics, improving the flexibility of the computer-aided diagnosis system.
  • FIG. 30 is a schematic structural diagram of a terminal provided by an embodiment of the present application.
  • the terminal is used to implement the method for displaying an image interest region provided in the foregoing embodiment. Specifically:
• the terminal 3000 may include an RF (Radio Frequency) circuit 3010, a memory 3020 including one or more computer-readable storage media, an input unit 3030, a display unit 3040, a video sensor 3050, an audio circuit 3060, a WiFi (wireless fidelity) module 3070, a processor 3080 including one or more processing cores, a power supply 300, and other components.
• the RF circuit 3010 can be used to receive and send signals while sending and receiving information or during a call; in particular, after receiving downlink information from a base station, it hands the information to the one or more processors 3080 for processing, and it sends uplink data to the base station.
  • the RF circuit 3010 includes, but is not limited to, an antenna, at least one amplifier, a tuner, one or more oscillators, a subscriber identity module (SIM) card, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like.
  • the RF circuit 3010 can also communicate with the network and other devices through wireless communication.
  • the wireless communication may use any communication standard or protocol, including but not limited to global mobile communication system, general packet radio service, code division multiple access, broadband code division multiple access, long-term evolution, e-mail, and short message service.
  • the memory 3020 may be used to store software programs and modules.
  • the processor 3080 executes various functional applications and data processing by running the software programs and modules stored in the memory 3020.
  • the memory 3020 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, application programs required for at least one function, and the like; the storage data area may store data created according to the use of the terminal 3000, and the like.
  • the memory 3020 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other volatile solid-state storage devices.
  • the memory 3020 may further include a memory controller to provide access to the memory 3020 by the processor 3080 and the input unit 3030.
  • the input unit 3030 may be used to receive input numeric or character information, and generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control.
  • the input unit 3030 may include an image input device 3031 and other input devices 3032.
  • the image input device 3031 may be a camera or a photoelectric scanning device.
  • the input unit 3030 may include other input devices 3032.
  • other input devices 3032 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys, switch keys, etc.), trackball, mouse, joystick, and so on.
  • the display unit 3040 may be used to display information input by the user or provided to the user, and various graphical user interfaces of the terminal 3000. These graphical user interfaces may be composed of graphics, text, icons, videos, and any combination thereof.
  • the display unit 3040 may include a display panel 3041.
  • the display panel 3041 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like.
  • the terminal 3000 may include at least one video sensor 3050, and the video sensor is used to obtain user's video information.
  • the terminal 3000 may also include other sensors (not shown), such as light sensors, motion sensors, and other sensors.
• the light sensor may include an ambient light sensor and a proximity sensor; the ambient light sensor may adjust the brightness of the display panel 3041 according to the ambient light, and the proximity sensor may turn off the display panel 3041 and/or the backlight when the terminal 3000 is moved to the ear.
• the gravity acceleration sensor can detect the magnitude of acceleration in all directions and, at rest, the magnitude and direction of gravity; it can be used for functions that identify the posture of the mobile phone, vibration-recognition functions, and the like.
• the terminal 3000 may also be configured with a gyroscope, barometer, hygrometer, thermometer, infrared sensor, and other sensors, which will not be described here.
• the audio circuit 3060, the speaker 3061, and the microphone 3062 can provide an audio interface between the user and the terminal 3000.
• the audio circuit 3060 can transmit the electrical signal converted from received audio data to the speaker 3061, which converts it into a sound signal for output; on the other hand, the microphone 3062 converts collected sound signals into electrical signals, which the audio circuit 3060 receives and converts into audio data; after being processed by the processor 3080, the audio data is sent to another terminal via the RF circuit 3010 or output to the memory 3020 for further processing.
  • the audio circuit 3060 may further include an earplug jack to provide communication between the peripheral earphone and the terminal 3000.
  • WiFi is a short-range wireless transmission technology.
• Terminal 3000 can help users send and receive emails, browse web pages, and access streaming media through the WiFi module 3070, providing users with wireless broadband Internet access.
• although FIG. 30 shows the WiFi module 3070, it can be understood that it is not a necessary component of the terminal 3000 and can be omitted as needed without changing the essence of the application.
• the processor 3080 is the control center of the terminal 3000; it connects the various parts of the entire device using various interfaces and lines, and executes the various functions of the terminal 3000 and processes its data by running or executing the software programs and/or modules stored in the memory 3020 and calling the data stored in the memory 3020, thereby monitoring the device as a whole.
• the processor 3080 may include one or more processing cores; preferably, the processor 3080 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, and application programs, and the modem processor mainly handles wireless communication; it can be understood that the foregoing modem processor may also not be integrated into the processor 3080.
  • the terminal 3000 further includes a power supply 300 that supplies power to various components.
  • the power supply can be logically connected to the processor 3080 through a power management system, so as to realize functions such as charging, discharging, and power consumption management through the power management system.
  • the power supply 300 may further include any component such as one or more DC or AC power supplies, a recharging system, a power failure detection circuit, a power converter or inverter, and a power status indicator.
  • the terminal 3000 may also include a Bluetooth module, etc., and will not be repeated here.
  • the terminal 3000 further includes a memory, and one or more programs, wherein the one or more programs are stored in the memory, and are configured to be executed by one or more processors.
  • the above one or more programs include instructions for executing the method for displaying the image interest region provided by the above method embodiment.
• An embodiment of the present application further provides a storage medium, which may be set in a terminal to store at least one instruction, at least one program, a code set, or an instruction set related to the method for displaying image regions of interest of the method embodiments; the at least one instruction, the at least one program, the code set, or the instruction set may be loaded and executed by the processor of the terminal to implement the method for displaying image regions of interest provided by the foregoing method embodiments.
  • patients can be scanned by CT scanners to generate scan data.
  • An image sequence is generated based on the scan data, and the image sequence includes a plurality of slice images, each slice image represents a cross-sectional image of the patient, and then a three-dimensional image of the patient is generated according to the image sequence.
  • the physician further determines the lesion area of the patient by observing the image sequence and the three-dimensional image.
• in order to obtain a medical image of the scanned object, the medical imaging device first needs to scan the scanned object, where the scanned object may be the patient's whole body or an organ, tissue, or cell collection of the patient that requires particular attention.
  • the medical imaging device scans the scanned object to obtain scan data, and generates a medical image sequence according to the scan data.
  • the medical image sequence is an image of each cross-section of the scanned object in the scanning direction. Based on the image sequence, a three-dimensional image of the internal structure of the scanned object can be generated.
• the medical imaging equipment may be: an X-ray imaging instrument, CT (conventional CT, spiral CT), positron emission tomography (PET), magnetic resonance imaging (MR), infrared scanning equipment, or a combination of multiple scanning devices.
  • a medical image display method including the following steps:
  • Step S4002 Acquire the original image of the detected object.
• the medical imaging device scans the detected object according to preset scanning parameters to obtain a three-dimensional image of the scanned object.
  • the original image is a three-dimensional image scanned by a medical imaging device.
  • the scanned object may be a whole-body organ of a human or animal, or an organ, tissue, or cell collection that needs to be detected by the human or animal.
  • Step S4004 Acquire the region of interest selected in the original image.
• the region of interest is, for example, a localized area of diseased tissue caused by pathogenic microorganisms.
• for example, if a part of the lung is destroyed by tuberculosis bacteria, the destroyed part is called the region of interest.
  • the way to obtain the region of interest may be to input the original image into the neural network trained based on the image training set, and then obtain the region of interest through big data analysis.
• the neural network is trained through machine learning by learning features or variables; input data is fed into the trained neural network, and output data is obtained by extracting and matching those features or variables. More specifically, the neural network is trained to detect the region of interest in the original image, where the region of interest is expressed by its coordinates.
  • the image used for training may be a two-dimensional image or a three-dimensional image.
  • the training image can be a two-dimensional image or a three-dimensional image obtained by any medical imaging device.
  • the neural network is obtained by training the image training set, and the coordinates of the region of interest are determined by inputting the original image to the neural network to further determine the region of interest.
  • the manner of acquiring the region of interest may also be that the physician determines the region of interest in the original image by observing the original image, receives the physician input, and determines the region of interest in the original image based on the physician input.
• the determined region of interest can be highlighted by outlining its contour, or displayed as the area enclosed by a border (generally called a bounding box in the art); it can also be marked in the form of a label or displayed with marks such as file identifiers.
• in step S4006, with the region of interest as a reference, the image within a preset range is selected as the target region-of-interest image.
  • the region of interest is used as a reference, that is, the region of interest is used as the central region, and an image within a preset range is selected as the target region of interest image.
• the selected target region-of-interest image includes not only the target region of interest itself but also enough relevant background image and medical information about the target region of interest, such as the size and location of the lesion, to help the doctor make the final lesion confirmation.
  • Step S4008 Acquire a plurality of planar images in the target region of interest image according to a preset acquisition method.
• the plane images may be obtained by sequentially intercepting, along a preset direction in the target region-of-interest image, a plurality of slice images perpendicular to that direction.
• a plane image may be an image captured on the cross-section of the target region-of-interest image, on its sagittal plane, or on its coronal plane; a plane image may also be captured from one end of the target region-of-interest image to the other in any direction, with the multiple captured images used as the plane images.
• the planar images may also be obtained as follows: establish a rectangular coordinate system in the target region-of-interest image; at the initial position, perform maximum intensity projection of the target region-of-interest image along the Z-axis direction and use the projection image as a plane image; then rotate the target region-of-interest image in a preset direction, performing a maximum intensity projection along the Z-axis direction after each rotation by the preset angle and using the projection image as a plane image, until the target region-of-interest image has rotated back to the initial position, thereby obtaining multiple planar images.
• the rectangular coordinate system can be established according to the position of the bed during the CT scan, with left-to-right as the x axis, top-to-bottom as the y axis, and foot-to-head as the z axis.
  • the rectangular coordinate system can also be established based on the medical information of the target region of interest, such as spatial morphology.
• for example, for a rib, the plane of its central axis is taken as the x-y plane, and the normal vector of that plane gives the z-axis direction.
• in step S4010, the multiple planar images are assembled in a predetermined order to generate a dynamic image.
• for example, the multiple planar images may be assembled in the order of acquisition or in the reverse of the order of acquisition.
• for example, when the multiple planar images are intercepted from one end of the target region-of-interest image to the other, the dynamic image is generated in the interception order or in the reverse of the interception order.
• the preset order may also be to intercept a subset of the planar images based on a certain layer thickness and to generate the dynamic image in the acquisition order or in the reverse of the acquisition order.
• for the rotating maximum intensity projection, projection images are taken as plane images until the target region-of-interest image has rotated back to the initial position, and the dynamic image is then generated in the order in which the plane images were acquired or in the reverse of that order.
• the dynamic image may be generated by inputting the acquired plane images into a video encoder of the MPEG4 video compression format, or into a video encoder of the H.264 video compression format; generating the dynamic image may also mean compressing the acquired plane images into a GIF (image interchange format) file.
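• as one concrete option for the last variant, the sketch below compresses the acquired plane images into a GIF file with imageio; the frame rate and file name are assumptions, and an MPEG4 or H.264 encoder would be used analogously.

```python
import imageio

def make_dynamic_image(plane_images, path="roi_dynamic.gif", fps=10):
    # plane_images: a list of 2-D arrays in the predetermined order
    imageio.mimsave(path, plane_images, fps=fps)
```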
• then step S4012 is executed: the dynamic image is displayed at a preset position.
• the display interface layout has multiple display windows (usually called cells in the art); the cells respectively display a curved planar reformation image in which the region of interest is a rib, the corresponding multi-planar reconstruction images (for example, a cross-sectional image), and the dynamic image of the rib obtained through the previous steps.
  • the rib dynamic image can also be displayed in a floating window.
• while the physician observes the rib dynamic image, steps S4002-S4012 can be repeated as the reading area is adjusted in the display window of the curved planar reformation image of the rib, and the adjusted dynamic image is displayed in coordination with switching of the rib dynamic-image display window, to match the doctor's reading habits and improve diagnostic efficiency and accuracy.
• in response to physician input, the display can adjust the playback of the dynamic image, such as accelerated or slowed playback, forward or reverse playback, infinite-loop playback, or pause.
  • the region of interest is selected from the original image, and then the image within the preset range is selected as the target region of interest image based on the region of interest.
  • multiple plane images are acquired in a preset manner, and the multiple plane images are generated in a predetermined order to generate a dynamic image.
• the physician determines the location of the lesion by observing the dynamic image, which reduces the physician's workload and saves the time needed to locate lesions.
  • another medical image display method including the following steps:
  • Step S4102 Acquire the original image of the detected object.
  • Step S4104 acquiring the region of interest selected in the original image.
  • the above fracture detection model can be learned and trained based on the convolutional neural network algorithm.
• for example, a deep convolutional neural network model containing a 5-layer convolutional neural network is used; the 5-layer model includes a convolutional layer, a pooling layer, a convolutional layer, a pooling layer, and a fully connected layer.
  • the process of deep convolutional neural network processing is:
• the two-dimensional slice image, of size 64 × 64, is input to the convolutional layer; 36 convolution kernels of size 5 × 5, obtained by pre-training in the perception stage, convolve the input image to obtain 36 feature maps of size 64 × 64;
• the pooling layer samples the 36 convolutional-layer images to obtain one or more sets of 5 × 5 image blocks; this set is then trained with a sparse auto-encoding network to obtain 64 weights of size 5 × 5, which are used as convolution kernels and convolved with the 36 pooling-layer images to obtain 64 feature maps of size 24 × 24.
• the training data set used in this application has a total of 1300 images; the feature maps of the entire network are 1300 × 64 × 8 × 8, meaning that 64 maps of size 8 × 8 are obtained for each 64 × 64 input image.
• the training samples come from 26 patients (subjects): positive sample images are extracted from each patient's three-dimensional fracture connected domains, and negative sample images are extracted from non-fracture connected domains, for a total of about 100,000 positive and negative sample images. The data can be amplified to 1 million by rotating and translating the two-dimensional slice images. The positive and negative sample images are two-dimensional images of 32 × 32 pixels (32-64 is acceptable), the resolution of all slice images is uniformly 0.25 mm (between 0.2 and 0.6), and the original CT values of the images are used as the training input.
  • the neural network uses a convolutional neural network (CNN), and the optimization algorithm uses a stochastic gradient descent method (SGD) to update the weights.
  • the convolutional neural network has a total of 12 layers, including three convolutional layers, three nonlinear mapping layers, three pooling layers, two fully connected layers, and a Loss layer.
• the first layer is a convolutional layer whose function is to extract features from the input image; 64 convolution kernels of size 5 × 5 are set, and the input image is convolved with the kernels to obtain 64 first-layer feature maps of size 32 × 32;
• the second layer is a nonlinear mapping layer whose function is to add nonlinearity to the neural network and accelerate convergence; the rectified linear unit function (ReLU) performs nonlinear mapping on the first-layer feature maps to obtain the second-layer feature maps;
• the third layer is a pooling layer, used to reduce the image size and suppress noise; the pooling kernel size is 3 × 3, and the second-layer feature maps are pooled by taking the maximum value in each 3 × 3 pixel box, giving 64 third-layer feature maps of size 16 × 16 pixels;
• the fourth layer is a convolutional layer with kernels of size 5 × 5, yielding 64 fourth-layer feature maps of size 16 × 16; the fifth layer is a nonlinear mapping layer that produces the fifth-layer feature maps;
• the sixth layer is a pooling layer with pooling kernels of size 3 × 3; the fifth-layer feature maps are pooled to obtain 64 sixth-layer feature maps of size 8 × 8 pixels;
• the seventh layer is a convolutional layer with kernels of size 5 × 5, producing the seventh-layer feature maps; the eighth layer is a nonlinear mapping layer that produces the eighth-layer feature maps;
• the ninth layer is a pooling layer with pooling kernels of size 3 × 3; the eighth-layer feature maps are pooled to obtain 128 ninth-layer feature maps of size 4 × 4;
• the tenth and eleventh layers are fully connected layers: with convolution kernels of size 1 × 1, full-connection processing is performed on the tenth-layer feature maps to obtain the eleventh-layer feature maps;
  • The twelfth layer is the softmax loss layer, which computes the difference between the predicted and actual values, propagates the gradient back through the back-propagation (BP) algorithm, and updates the weights and biases of each layer.
  • During training, the Loss values of the training set and the validation set keep decreasing; when the validation-set Loss no longer decreases, training is stopped to prevent overfitting, and the neural network model at that moment is taken as the slice classifier.
  • For inference, the twelfth layer is replaced with a softmax layer, and the eleventh-layer feature maps are input to this layer for classification prediction, yielding the probabilities that the input image is fractured or non-fractured, i.e., the classification result.
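  • A minimal PyTorch sketch of a 12-layer network consistent with the description above (layer counts and sizes follow the text; the padding choices, the tenth-layer width, and all names are assumptions rather than the patent's actual implementation):

```python
import torch
import torch.nn as nn

class SliceClassifier(nn.Module):
    """Sketch of the 12-layer fracture/non-fracture slice classifier."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=5, padding=2),       # layer 1: 64 maps, 32x32
            nn.ReLU(inplace=True),                            # layer 2: ReLU
            nn.MaxPool2d(kernel_size=3, stride=2, padding=1), # layer 3: max pool -> 16x16
            nn.Conv2d(64, 64, kernel_size=5, padding=2),      # layer 4: 64 maps, 16x16
            nn.ReLU(inplace=True),                            # layer 5: ReLU
            nn.MaxPool2d(kernel_size=3, stride=2, padding=1), # layer 6: max pool -> 8x8
            nn.Conv2d(64, 128, kernel_size=5, padding=2),     # layer 7: 128 maps, 8x8
            nn.ReLU(inplace=True),                            # layer 8: ReLU
            nn.MaxPool2d(kernel_size=3, stride=2, padding=1), # layer 9: 128 maps, 4x4
        )
        self.fc1 = nn.Linear(128 * 4 * 4, 128)  # layer 10 (width 128 is an assumption)
        self.fc2 = nn.Linear(128, 2)            # layer 11: fracture vs. non-fracture
        # layer 12 during training: softmax loss, i.e. nn.CrossEntropyLoss on the logits

    def forward(self, x):
        x = self.features(x)
        x = torch.flatten(x, 1)
        return self.fc2(self.fc1(x))

model = SliceClassifier()
criterion = nn.CrossEntropyLoss()                         # softmax loss layer
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)  # SGD updates, as in the text

logits = model(torch.randn(8, 1, 32, 32))                 # batch of 32x32 CT patches
loss = criterion(logits, torch.randint(0, 2, (8,)))
optimizer.zero_grad()
loss.backward()                                           # BP gradient
optimizer.step()                                          # SGD weight update
```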
  • The initialization of the neural network model may include building a neural network model based on a convolutional neural network (CNN), a generative adversarial network (GAN), or the like, or a combination thereof, as shown in FIG. 32 and its description.
  • Examples of convolutional neural networks (CNN) may include SRCNN (Super-Resolution Convolutional Neural Network), DnCNN (Denoising Convolutional Neural Network), U-Net, V-Net, and FCN (Fully Convolutional Network).
  • the neural network model may include multiple layers, such as an input layer, multiple hidden layers, and an output layer.
  • the multiple hidden layers may include one or more convolutional layers, one or more batch normalization layers, one or more activation layers, fully connected layers, cost function layers, and so on. Each of the multiple layers may include multiple nodes.
  • Step S4106: taking the region of interest as a reference, select an image within a preset range as the target region of interest image.
  • Step S4108: establish a three-dimensional rectangular coordinate system for the target region of interest image.
  • The method for establishing the rectangular coordinate system may be as follows: first, taking the center of the target region of interest in the target region of interest image, select a rotation axis as the Y axis; then select, in the target region of interest image, any direction perpendicular to the Y axis as the X axis; and take the direction perpendicular to both the X axis and the Y axis as the Z axis.
  • Alternatively, the rectangular coordinate system may be established as follows: first compute the covariance matrix of the positions of all coordinate points of the target region of interest in the target region of interest image, then compute the eigenvalues and eigenvectors of the covariance matrix; the eigenvector corresponding to the largest eigenvalue is taken as the central axis direction of the target region of interest and used as the Y axis; then any direction perpendicular to the Y axis in the target region of interest image is selected as the X axis, and the direction perpendicular to both the X axis and the Y axis is taken as the Z axis.
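  • As a minimal sketch of this covariance-based axis construction (NumPy; the function name `build_axes` and the point array `pts` are assumptions, as the patent provides no code):

```python
import numpy as np

def build_axes(pts):
    """Derive Y/X/Z axes from an (N, 3) array of ROI voxel coordinates.

    The eigenvector of the coordinate covariance matrix with the largest
    eigenvalue is taken as the central-axis (Y) direction, as described above.
    """
    centered = pts - pts.mean(axis=0)
    cov = np.cov(centered.T)                  # 3x3 covariance of the coordinates
    eigvals, eigvecs = np.linalg.eigh(cov)    # eigh: symmetric matrix
    y_axis = eigvecs[:, np.argmax(eigvals)]   # central axis = largest eigenvalue
    # Any direction perpendicular to Y can serve as X; build one explicitly.
    helper = np.array([1.0, 0.0, 0.0])
    if abs(helper @ y_axis) > 0.9:            # avoid a near-parallel helper vector
        helper = np.array([0.0, 1.0, 0.0])
    x_axis = np.cross(y_axis, helper)
    x_axis /= np.linalg.norm(x_axis)
    z_axis = np.cross(x_axis, y_axis)         # perpendicular to both X and Y
    return x_axis, y_axis, z_axis

axes = build_axes(np.random.rand(500, 3))     # placeholder ROI point cloud
```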
  • Step S4110: at the initial position, maximum density projection (i.e., maximum intensity projection, MIP) is performed on the target region of interest image in the Z-axis direction, and the maximum density projection image is used as the plane image.
  • The maximum density projection is generated by computing, for each ray cast through the target site, the maximum-density pixel encountered along that ray. That is, as light passes through the target region of interest image, the pixel with the highest density along each ray is retained and projected onto a two-dimensional plane, thereby forming the maximum density projection image of the target region of interest image.
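  • In array terms, this projection reduces the volume along the viewing (Z) axis by a per-ray maximum; a one-line NumPy sketch with a placeholder `volume`:

```python
import numpy as np

volume = np.random.rand(64, 64, 64)  # stand-in for the target region of interest image (Z, Y, X)
mip = volume.max(axis=0)             # maximum density projection along Z -> 2-D plane image
```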
  • Step S4112: rotate the target region of interest image in the preset direction; each time it is rotated by the preset angle, perform maximum density projection on the target region of interest image in the Z-axis direction and use the maximum density projection image as a plane image, until the target region of interest image is rotated to the initial position.
  • Acquiring the plane images may proceed as follows: after rotating the target region of interest image around the Y axis by a preset angle in a preset direction, perform maximum density projection on it in the Z-axis direction and use the maximum density projection image as a plane image; after rotating the image around the X axis by a preset angle in the preset direction, again perform maximum density projection in the Z-axis direction and use the projection image as a plane image; alternately obtain the plane image after rotation around the Y axis and the plane image after rotation around the X axis, until the target region of interest image rotates to the initial position.
  • Alternatively, the plane images may be acquired in the opposite order: after rotating the target region of interest image around the X axis by a preset angle in a preset direction, perform maximum density projection on it in the Z-axis direction and use the maximum density projection image as a plane image; after rotating the image around the Y axis by a preset angle in the preset direction, again perform maximum density projection in the Z-axis direction and use the projection image as a plane image; alternately obtain the plane image after rotation around the X axis and the plane image after rotation around the Y axis, until the target region of interest image rotates to the initial position.
  • the preset angle rotating around the Y axis and the preset angle rotating around the X axis may be the same or different.
  • the preset angle of rotation around the Y axis is the same as the preset angle of rotation around the X axis.
  • the direction of rotation about the Y axis is the same as the direction of rotation about the X axis, and may be clockwise or counterclockwise.
  • Step S4114: a dynamic image is generated from the plurality of plane images in a preset order.
  • Step S4116 displaying the dynamic image according to the preset position.
  • The region of interest is selected from the original image, and, taking the region of interest as a reference, the image within the preset range is selected as the target region of interest image. A rectangular coordinate system is then established for the target region of interest image. First, at the initial position, maximum density projection is performed on the target region of interest image in the Z-axis direction, and the maximum density projection image is obtained as a plane image. The target region of interest image is then rotated in the preset direction; after each rotation by the preset angle, maximum density projection is performed in the Z-axis direction and the projection image is taken as a plane image, until the target region of interest image is rotated to the initial position.
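  • As one hedged possibility for step S4114's assembly of the plane images into a dynamic image, the frames could be written out as an animated GIF with imageio (the library choice, file name, and frame data are assumptions, not the patent's method):

```python
import numpy as np
import imageio.v2 as imageio

frames = [np.random.rand(64, 64) for _ in range(36)]    # placeholder plane images
frames8 = [np.uint8(255 * f / f.max()) for f in frames]  # scale to 8-bit for GIF output
imageio.mimsave("dynamic.gif", frames8, duration=0.1)    # frames in the preset (acquisition) order
```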
  • a method for acquiring a planar image including the following steps:
  • Step S4202 After rotating the target region of interest image by a preset angle around the Y axis in a preset direction, perform maximum density projection on the target region of interest image in the Z axis direction, and use the maximum density projection image as a planar image.
  • The target region of interest image is first rotated around the Y axis by a preset angle in a preset direction, where the preset direction may be clockwise or counterclockwise.
  • the preset angle is a small angle.
  • the maximum density projection is performed on the target region of interest image in the Z-axis direction, and the maximum density projection image is used as the planar image.
  • Step S4204 After rotating the target area of interest image by a preset angle around the X axis in a preset direction, perform maximum density projection on the target area of interest image in the Z axis direction, and use the maximum density projection image as a planar image.
  • the target region of interest image is then rotated around the X axis in the preset direction by a preset angle, where the preset direction may be clockwise or counterclockwise direction.
  • the rotation direction around the Y axis is the same as the rotation direction around the X axis.
  • When the rotation around the Y axis is clockwise, the rotation around the X axis is also clockwise; when the rotation around the Y axis is counterclockwise, the rotation around the X axis is also counterclockwise.
  • the preset angle is a small angle.
  • the preset angle of rotation about the Y axis and the preset angle of rotation about the X axis may be the same or different.
  • the preset angle of rotation around the Y axis is the same as the preset angle of rotation around the X axis.
  • step S4206 a plane image rotated by a preset angle about the Y axis and a plane image rotated by a preset angle about the X axis are alternately obtained until the target region of interest image is rotated to the initial position.
  • The above method of acquiring plane images yields accurate plane images that show the lesion area; the acquired plane images are then used to generate a dynamic video, which displays the focal area more completely, enables the physician to observe it accurately, and saves the physician time in identifying the lesion.
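  • A minimal sketch of the alternating-rotation acquisition loop of steps S4202-S4206, using scipy.ndimage.rotate for the volume rotations (the angle, the placeholder volume, the axis convention (Z, Y, X), and all names are assumptions; the patent prescribes no implementation):

```python
import numpy as np
from scipy.ndimage import rotate

def acquire_plane_images(vol, angle=30.0):
    """Alternately rotate the volume about Y then X by `angle` degrees,
    taking a Z-axis maximum density projection after each rotation,
    through one full 360-degree sweep (per the text, back to the start)."""
    frames = [vol.max(axis=0)]                 # projection at the initial position
    for _ in range(int(round(360.0 / angle))):
        vol = rotate(vol, angle, axes=(0, 2), reshape=False, order=1)  # about Y (Z-X plane)
        frames.append(vol.max(axis=0))
        vol = rotate(vol, angle, axes=(0, 1), reshape=False, order=1)  # about X (Z-Y plane)
        frames.append(vol.max(axis=0))
    return frames

frames = acquire_plane_images(np.random.rand(32, 32, 32))
# Played back in acquisition order (or reversed), `frames` forms the dynamic image.
```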
  • The first, second, and third state diagrams show the display status of the dynamic display image of the current rib fracture at three different moments, captured in chronological order.
  • the box in the second state diagram is the target region of interest, that is, the region of the rib fracture.
  • The first, second, and third state diagrams show three different types of rib fractures, captured in chronological order.
  • the box in the second state diagram is the target region of interest, that is, the region of the rib fracture.
  • The first, second, and third state diagrams show the display status of the dynamic display image of the current lung nodule at three different moments, captured in chronological order.
  • the box in the second state diagram is the target region of interest, that is, the lung nodule region.
  • Although the steps in the flowcharts of FIGS. 31-33 are displayed in sequence as indicated by the arrows, these steps are not necessarily executed in that order. Unless explicitly stated herein, the execution of these steps is not strictly limited in order, and they may be executed in other orders. Moreover, at least some of the steps in FIGS. 30-32 may include multiple sub-steps or stages; these sub-steps or stages are not necessarily executed at the same time, but may be executed at different times, and their execution order is not necessarily sequential; they may be executed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
  • A structural block diagram of a medical image viewing device is provided, including: an original image acquisition module 4100, a region of interest acquisition module 4200, an image selection module 4300, a plane image extraction module 4400, a dynamic image generation module 4500, and a display module 4600, wherein:
  • the original image acquisition module 4100 is used to acquire the original image of the detected object.
  • the region of interest acquisition module 4200 is used to acquire the region of interest selected in the original image.
  • the image selection module 4300 is configured to select an image within a preset range as a target area of interest image based on the area of interest.
  • the planar image extraction module 4400 is configured to acquire multiple planar images in the target region of interest image according to a preset acquisition method.
  • the dynamic image generation module 4500 is used to generate a dynamic image from a plurality of plane images in a preset order.
  • the display module 4600 is configured to display the dynamic image according to a preset position.
  • the original image acquisition module 4100 is also used to input the original image into the neural network trained based on the image training set to obtain the region of interest.
  • the dynamic image generation module 4500 is also used to generate the dynamic image from the plurality of plane images in the acquisition order or in the order opposite to the acquisition order.
  • planar image extraction module 4400 includes: an interception unit 4410.
  • the intercepting unit 4410 is configured to sequentially intercept, along the preset direction, a plurality of slice images perpendicular to the preset direction in the target region of interest image as plane images.
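  • For intuition, intercepting slice images perpendicular to a preset (here Z) direction is simply indexing the volume along that axis (a sketch with a placeholder `vol`; names are assumptions):

```python
import numpy as np

vol = np.random.rand(32, 64, 64)                       # placeholder volume, axes (Z, Y, X)
plane_images = [vol[i] for i in range(vol.shape[0])]   # slices perpendicular to Z, in Z order
```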
  • planar image extraction module 4400 includes: a coordinate system establishment unit 4420, an initial position maximum density projection unit 4430, and a rotation unit 4440 .
  • the coordinate system establishing unit 4420 is configured to establish a three-dimensional rectangular coordinate system for the target region of interest image.
  • the initial position maximum density projection unit 4430 is configured to perform maximum density projection on the target region of interest image in the Z-axis direction at the initial position, and use the maximum density projection image as a planar image.
  • the rotation unit 4440 is used to rotate the target region of interest image according to the preset direction, and every time the preset angle is rotated, the maximum density projection is performed on the target region of interest image in the Z-axis direction, and the maximum density projection image is used as the plane image until the target The region of interest image is rotated to the initial position.
  • In FIG. 49, a structural block diagram of a rotation unit is provided, wherein the rotation unit 4440 includes: an X-axis rotation sub-unit 4441, a Y-axis rotation sub-unit 4442, and an acquisition sub-unit 4443.
  • the X-axis rotation subunit 4441 is used to rotate the target region-of-interest image around the Y-axis in a preset direction by a preset angle, and then perform the maximum density projection on the target region-of-interest image in the Z-axis direction, using the maximum density projection image as a plane image.
  • the Y-axis rotation sub-unit 4442 is used to rotate the target region-of-interest image around the X axis by a preset angle in a preset direction and then perform maximum density projection on the image in the Z-axis direction, using the maximum density projection image as a plane image.
  • the acquisition sub-unit 4443 is used to alternately obtain the plane image after rotation by a preset angle around the Y axis and the plane image after rotation by a preset angle around the X axis, until the target region of interest image rotates to the initial position.
  • Each module in the above medical image viewing device may be implemented in whole or in part by software, hardware, and a combination thereof.
  • the above modules may be embedded in the hardware or independent of the processor in the computer device, or may be stored in the memory in the computer device in the form of software, so that the processor can call and execute the operations corresponding to the above modules.
  • a computer device is provided.
  • the computer device may be a terminal, and an internal structure diagram thereof may be as shown in FIG. 50.
  • the computer equipment includes a processor, a memory, a network interface, a display screen, and an input device connected through a system bus.
  • the processor of the computer device is used to provide computing and control capabilities.
  • the memory of the computer device includes a non-volatile storage medium and an internal memory.
  • the non-volatile storage medium stores an operating system and computer programs.
  • the internal memory provides an environment for the operating system and computer programs in the non-volatile storage medium.
  • the network interface of the computer device is used to communicate with external terminals through a network connection.
  • the computer program is executed by the processor to implement a medical image display method.
  • the display screen of the computer device may be a liquid crystal display screen or an electronic ink display screen.
  • the input device of the computer device may be a touch layer covering the display screen; a button, trackball, or touchpad provided on the computer device housing; or an external keyboard, touchpad, or mouse.
  • A computer device is provided, which includes a memory and a processor; a computer program is stored in the memory, and the processor implements the following steps when executing the computer program:
  • Acquire the original image of the detected object; acquire the region of interest selected in the original image; based on the region of interest, select the image within the preset range as the target region of interest image.
  • the processor also implements the following steps when executing the computer program:
  • Acquire the original image of the detected object; acquire the region of interest selected in the original image; based on the region of interest, select the image within the preset range as the target region of interest image; establish a three-dimensional rectangular coordinate system for the target region of interest image; at the initial position, perform maximum density projection on the target region of interest image in the Z-axis direction and use the maximum density projection image as a plane image; rotate the target region of interest image in the preset direction, and each time it is rotated by the preset angle, perform maximum density projection in the Z-axis direction, using the projection image as a plane image, until the target region of interest image is rotated to the initial position; generate a dynamic image from the plurality of plane images in a preset order; and display the dynamic image according to a preset position.
  • the processor also implements the following steps when executing the computer program:
  • after rotating the image of the region of interest around the Y axis by a predetermined angle in a predetermined direction, project the image at maximum density in the Z-axis direction and use the maximum density projection image as a plane image;
  • after rotating the image of the region of interest around the X axis by a predetermined angle in the predetermined direction, project the image at maximum density in the Z-axis direction and use the maximum density projection image as a plane image;
  • alternately obtain the plane image after rotation by the preset angle around the Y axis and the plane image after rotation by the preset angle around the X axis, until the image of the region of interest rotates to the initial position.
  • a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, the following steps are realized:
  • Acquire the original image of the detected object; acquire the region of interest selected in the original image; based on the region of interest, select the image within the preset range as the target region of interest image.
  • the computer program also implements the following steps when executed by the processor:
  • Acquire the original image of the detected object; acquire the region of interest selected in the original image; based on the region of interest, select the image within the preset range as the target region of interest image; establish a three-dimensional rectangular coordinate system for the target region of interest image; at the initial position, perform maximum density projection on the target region of interest image in the Z-axis direction and use the maximum density projection image as a plane image; rotate the target region of interest image in the preset direction, and each time it is rotated by the preset angle, perform maximum density projection in the Z-axis direction, using the projection image as a plane image, until the target region of interest image is rotated to the initial position; generate a dynamic image from the plurality of plane images in a preset order; and display the dynamic image according to a preset position.
  • the computer program also implements the following steps when executed by the processor:
  • after the target region of interest image is rotated around the Y axis by a preset angle in a preset direction, the image is projected at maximum density in the Z-axis direction, and the maximum density projection image is used as a plane image.
  • after the target region of interest image is rotated around the X axis by a preset angle in the preset direction, the image is projected at maximum density in the Z-axis direction, and the maximum density projection image is used as a plane image.
  • the plane image after rotation by the preset angle around the Y axis and the plane image after rotation by the preset angle around the X axis are obtained alternately, until the target region of interest image is rotated to the initial position.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to a medical image processing method and system, a computer device, and a readable storage medium. According to the method, a physician can determine the location of a lesion by observing a dynamic image, the physician's workload can be reduced, and the time the physician needs to identify a lesion can be saved. According to the method, a user can further adjust an attribute parameter threshold in real time, so as to display detection results in real time under different attribute parameter thresholds, help the user weigh the trade-off between different levels of diagnostic accuracy and the time required to read an image according to different usage scenarios and case characteristics, and improve the flexibility of a computer-aided diagnosis system. Finally, by adopting a trained deep learning network, the method can simultaneously perform bone segmentation, bone symmetry-axis segmentation, and bone fracture detection; it can shorten the total time spent by 50%, and the model saves 40% of memory space. Moreover, the method can help a physician reduce the image-reading burden, speed up image reading, lower the probability of missed diagnosis, and reduce conflicts between physicians and patients.
PCT/CN2019/115549 2018-11-05 2019-11-05 Medical image processing method and system, computer device, and readable storage medium WO2020093987A1 (fr)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
CN201811306115.8A CN109493328B (zh) 2018-08-31 2018-11-05 Medical image display method, viewing device, and computer device
CN201811306115.8 2018-11-05
CN201811626399.9 2018-12-28
CN201811626399.9A CN109859233B (zh) 2018-12-28 2018-12-28 Image processing and image processing model training method and system
CN201910133231.2A CN109934220B (zh) 2019-02-22 2019-02-22 Method, apparatus, and terminal for displaying image points of interest
CN201910133231.2 2019-02-22

Publications (1)

Publication Number Publication Date
WO2020093987A1 true WO2020093987A1 (fr) 2020-05-14

Family

ID=70612286

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/115549 WO2020093987A1 (fr) 2018-11-05 2019-11-05 Medical image processing method and system, computer device, and readable storage medium

Country Status (1)

Country Link
WO (1) WO2020093987A1 (fr)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002230518A (ja) * 2000-11-29 2002-08-16 Fujitsu Ltd 診断支援プログラム、診断支援プログラムを記録したコンピュータ読取可能な記録媒体、診断支援装置及び診断支援方法
CN101216938A (zh) * 2007-12-28 2008-07-09 深圳市蓝韵实业有限公司 一种多序列图像自动定位的方法
CN107767376A (zh) * 2017-11-02 2018-03-06 西安邮电大学 基于深度学习的x线片骨龄预测方法及系统
CN109493328A (zh) * 2018-08-31 2019-03-19 上海联影智能医疗科技有限公司 医学图像显示方法、查看设备以及计算机设备
CN109859233A (zh) * 2018-12-28 2019-06-07 上海联影智能医疗科技有限公司 图像处理、图像处理模型的训练方法及系统
CN109934220A (zh) * 2019-02-22 2019-06-25 上海联影智能医疗科技有限公司 一种影像兴趣点的展示方法、装置及终端



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19883214

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 17.08.2021)

122 Ep: pct application non-entry in european phase

Ref document number: 19883214

Country of ref document: EP

Kind code of ref document: A1