WO2022127318A1 - Scanning positioning method and apparatus, storage medium and electronic device - Google Patents

Scanning positioning method and apparatus, storage medium and electronic device

Info

Publication number
WO2022127318A1
Authority
WO
WIPO (PCT)
Prior art keywords
positioning
scanning
image
model
positioning frame
Prior art date
Application number
PCT/CN2021/123459
Other languages
English (en)
French (fr)
Inventor
窦世丹
赵小芬
尚雷敏
杜岩峰
Original Assignee
上海联影医疗科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 上海联影医疗科技股份有限公司 (Shanghai United Imaging Healthcare Co., Ltd.)
Publication of WO2022127318A1
Priority to US18/336,998 (published as US20230334698A1)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/75 Determining position or orientation of objects or cameras using feature-based methods involving models
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 30/00 ICT specially adapted for the handling or processing of medical images
    • G16H 30/40 ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/22 Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V 10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 30/00 ICT specially adapted for the handling or processing of medical images
    • G16H 30/20 ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 40/00 ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
    • G16H 40/60 ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H 50/70 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for mining of medical data, e.g. analysing previous cases of other patients
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10072 Tomographic images
    • G06T 2207/10081 Computed x-ray tomography [CT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10072 Tomographic images
    • G06T 2207/10088 Magnetic resonance imaging [MRI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10072 Tomographic images
    • G06T 2207/10104 Positron emission tomography [PET]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10116 X-ray image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/03 Recognition of patterns in medical or anatomical images
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 70/00 ICT specially adapted for the handling or processing of medical references
    • G16H 70/20 ICT specially adapted for the handling or processing of medical references relating to practices or guidelines

Definitions

  • the present application relates to the technical field of deep learning, and more particularly, to a scanning and positioning method, apparatus, storage medium and electronic device.
  • Medical scanning technology can be applied to various parts of the human body and plays a vital role in diagnosing diseases, tracking treatment progress, and so on. Before the human body is scanned, the various medical scanning technologies require a positioning frame for the scan to be determined.
  • at present, the scanning positioning frame is generally determined manually: the starting line, ending line and range in the positioning image are selected with tools such as a keyboard or mouse.
  • this manual approach involves a heavy workload and places high demands on the operator's skill.
  • in view of this, the present application discloses a scanning positioning method, device, storage medium and electronic device, so as to enable rapid, high-precision determination of a scanning positioning frame.
  • a scanning positioning method including:
  • the positioning image of the target object is input into a pre-trained scanning positioning model, and the scanning positioning frame in the positioning image is determined based on the output result of the scanning positioning model, wherein the scanning positioning model is obtained by training based on a training positioning image and the gold standard positioning frame information used for scanning in the training positioning image.
  • the scanning positioning model is a positioning frame segmentation model
  • determining the scanning positioning frame in the positioning image based on the output result of the scanning positioning model includes: acquiring the segmentation result output by the positioning frame segmentation model, and determining the outline of the segmented area in the segmentation result as the scanning positioning frame.
  • the positioning frame segmentation model includes an encoding module and a decoding module, wherein the encoding module includes sequentially connected down-sampling network layers and the decoding module includes sequentially connected up-sampling network layers; the last down-sampling network layer in the encoding module is connected to the first up-sampling network layer in the decoding module, and down-sampling and up-sampling network layers of the same scale in the encoding module and the decoding module are laterally connected.
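  • For illustration, assuming a PyTorch implementation, the encoder-decoder structure described above could be sketched as follows; the class name BoxSegNet, the depth and the channel counts are assumptions of this sketch, not the patent's implementation, and the torch.cat calls play the role of the same-scale lateral connections:

```python
# A minimal sketch of the described encoder-decoder (U-Net-like) segmentation
# model, assuming PyTorch; depth and channel counts are illustrative.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # One sampling stage is a small convolutional block (could be a residual block).
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class BoxSegNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(1, 32)    # encoding module: down-sampling layers
        self.enc2 = conv_block(32, 64)
        self.enc3 = conv_block(64, 128)  # last down-sampling layer (deep features)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)  # deconvolution restores scale
        self.dec2 = conv_block(128, 64)  # 128 = 64 (lateral skip) + 64 (up-sampled)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.head = nn.Conv2d(32, 1, 1)  # per-pixel mask logits

    def forward(self, x):
        e1 = self.enc1(x)                # shallow features: precise position information
        e2 = self.enc2(self.pool(e1))
        e3 = self.enc3(self.pool(e2))    # deep features: semantic information
        d2 = self.dec2(torch.cat([self.up2(e3), e2], dim=1))  # same-scale lateral connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)
```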
  • the training method of the positioning frame segmentation model includes:
  • the positioning frame segmentation model to be trained is iteratively trained based on the training positioning images and the corresponding gold standard mask areas, and the trained positioning frame segmentation model is obtained.
  • the method further includes:
  • at least one part included in the positioning image is identified, and the scanning positioning model corresponding to each part is respectively called, wherein the scanning positioning model corresponding to a part is used to determine the scanning positioning frame corresponding to that part.
  • the scanning positioning model includes a region generation sub-network and a target classification sub-network, wherein the region generation sub-network is used to generate the scanning positioning frame corresponding to each part in the positioning image, and the target classification sub-network is used to determine the scanned-part information corresponding to each of the scanning positioning frames.
  • the method further includes:
  • a scanning protocol of the target object is acquired, and a target scanning positioning frame is determined from the scanning positioning frames according to the part to be scanned in the scanning protocol.
  • a scanning positioning device including:
  • a positioning image acquisition module, used to acquire the positioning image of the target object;
  • a positioning frame determination module, configured to input the positioning image of the target object into a pre-trained scanning positioning model and determine a scanning positioning frame in the positioning image based on the output result of the scanning positioning model, wherein the scanning positioning model is obtained by training based on a training positioning image and the gold standard positioning frame information used for scanning in the training positioning image.
  • an electronic device comprising:
  • one or more processors;
  • memory for storing one or more programs
  • when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the scanning positioning method according to the first aspect.
  • a storage medium containing computer-executable instructions is provided, the computer-executable instructions, when executed by a computer processor, are used to perform the scanning and positioning method according to the first aspect.
  • as can be seen from the above technical solutions, the present application discloses a scanning positioning method, device, storage medium and electronic device. The positioning image of the target object is processed by a preset scanning positioning model, and the scanning positioning frame information used for scanning the target object is output; the obtained scanning positioning frame information does not need any subsequent processing, which improves the efficiency and accuracy of determining the scanning positioning frame information.
  • FIG. 1 is a schematic flowchart of a scanning and positioning method according to Embodiment 1 of the present invention.
  • FIG. 2 is a schematic diagram of a conversion process of a positioning frame provided by an embodiment of the present invention.
  • FIG. 3 is a schematic structural diagram of a positioning frame segmentation model provided by an embodiment of the present invention.
  • FIG. 4 is a schematic flowchart of a scanning positioning method according to an embodiment of the present invention.
  • FIG. 5 is a schematic diagram of scanning positioning frame information in a positioning image provided by an embodiment of the present invention.
  • FIG. 6 is a schematic structural diagram of a scanning positioning device provided in Embodiment 3 of the present invention.
  • FIG. 7 is a schematic structural diagram of an electronic device according to Embodiment 4 of the present invention.
  • the embodiment of the present application discloses a scanning positioning method, device, storage medium and electronic device, so as to enable rapid, high-precision determination of a scanning positioning frame.
  • FIG. 1 is a schematic flowchart of a scanning positioning method according to Embodiment 1 of the present invention. This embodiment can be applied to a situation where a scanning positioning frame is automatically determined before scanning a target object.
  • The method may be executed by the scanning positioning apparatus provided by the embodiment of the present invention, which can be integrated in an electronic device such as a computer or a scanning device, and specifically includes the following steps:
  • S110: Acquire a positioning image of the target object.
  • S120: Input the positioning image of the target object into a pre-trained scanning positioning model, and determine a scanning positioning frame in the positioning image based on the output result of the scanning positioning model, wherein the scanning positioning model is obtained by training based on a training positioning image and the gold standard positioning frame information used for scanning in the training positioning image.
  • the target object is an object to be scanned, and the object may be a human body, an animal body, or a local area of a human body or an animal body.
  • the scan to be performed may be a single-modality scan such as a CT (Computed Tomography) scan, an MRI (Magnetic Resonance Imaging) scan, a PET (Positron Emission Tomography) scan, an X-ray film or an ultrasound image, or a combined scan of multiple modalities, which is not limited here.
  • the positioning image is an image of the target object collected for positioning before the scan, and includes at least the part to be scanned; in some embodiments, the positioning image may include two or more parts.
  • generally, the positioning image is obtained before the target object is scanned; a previous scan image of the target object in the same or a different modality, or an optically photographed image, may also be used.
  • in this embodiment, the positioning image of the target object is processed by the pre-trained scanning positioning model to obtain the positioning result for scanning the target object; the positioning frame is generally rectangular, but may also be polygonal or irregular in shape.
  • the positioning result may be a positioning frame drawn in the positioning image, or may be description information of the positioning frame.
  • the description information may include the size and range of the positioning frame, for example the vertex coordinates of the positioning frame, or the center coordinates together with the width and height of the positioning frame.
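  • For illustration, the two frame descriptions mentioned above (vertex coordinates versus center coordinates with width and height) are interconvertible; a minimal sketch with hypothetical helper names:

```python
# Hypothetical helpers converting between the two positioning-frame
# descriptions: opposite corner (vertex) coordinates vs. center + size.
def corners_to_center(x0, y0, x1, y1):
    return ((x0 + x1) / 2.0, (y0 + y1) / 2.0, x1 - x0, y1 - y0)

def center_to_corners(cx, cy, w, h):
    return (cx - w / 2.0, cy - h / 2.0, cx + w / 2.0, cy + h / 2.0)
```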
  • in this embodiment, the scanning positioning model may be a machine learning model such as a neural network. The scanning positioning model is obtained through end-to-end training; specifically, it is trained based on training positioning images and the gold standard positioning frame information used for scanning in those images. The gold standard positioning frame information may be positioning frames delineated by an experienced scanning technician that can be used directly for scanning without any post-processing; it may also be collected records of positioning frames drawn by technicians in actual use.
  • in one embodiment, the gold standard positioning frame information may be selected from historical data according to the current scan information, for example historical gold standard positioning frame records whose scan modality, patient, technician, at least one parameter or model of the scanning imaging device, or at least one parameter of the scanning protocol is consistent with the current scan.
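  • A minimal sketch of such a selection over historical records, assuming Python; the record structure and key names are hypothetical:

```python
# Hypothetical selection of gold standard positioning frame records whose
# metadata matches the current scan information on the chosen criteria.
def select_gold_records(history, current,
                        keys=("modality", "patient", "technician", "device_model")):
    # Each record is a dict holding metadata plus the drawn positioning frame.
    return [r for r in history if all(r.get(k) == current.get(k) for k in keys)]
```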
  • correspondingly, the scanning positioning model trained on the above gold standard positioning frame information and training positioning images can, in application, directly output positioning frame information used for scanning the target object without any post-processing, which simplifies the determination of the scanning positioning frame, improves the accuracy of the positioning frame, and reduces the experience required of the scanning technician.
  • the scanning positioning model is a positioning frame segmentation model.
  • the positioning frame segmentation model has the function of segmenting the positioning frame from the positioning image: the image of the target object is input into the positioning frame segmentation model to obtain the segmented positioning frame, i.e., the segmentation result, output by the model.
  • correspondingly, determining the scanning positioning frame in the positioning image based on the output result of the scanning positioning model includes: acquiring the segmentation result output by the positioning frame segmentation model, and determining the outline of the segmented area in the segmentation result as the scanning positioning frame.
  • the training method of the above-mentioned positioning frame segmentation model includes: acquiring the training positioning image and the gold standard positioning frame information used for scanning in the training positioning image, and generating the gold standard mask area of the training positioning image based on the gold standard positioning frame information.
  • for example, see FIG. 2, a schematic diagram of a conversion process of a positioning frame provided by an embodiment of the present invention: the left image in FIG. 2 is a positioning image including gold standard positioning frame information, where the gold standard positioning frame information may be a head positioning frame drawn by a technician; the right image in FIG. 2 is the positioning image including the gold standard mask area, where the gold standard mask area is the area enclosed by the gold standard positioning frame.
  • the positioning frame segmentation model to be trained is iteratively trained on a large number of training positioning images and their corresponding gold standard mask areas, to obtain a positioning frame segmentation model with the positioning frame segmentation function.
  • specifically, the following iterative training is performed on the positioning frame segmentation model to be trained until the predicted segmentation area output by the model satisfies the preset segmentation accuracy: the training positioning image is input into the positioning frame segmentation model to be trained to obtain a predicted segmentation area, a loss function is determined based on the predicted segmentation area and the gold standard mask area, and the network parameters of the positioning frame segmentation model to be trained are adjusted according to the loss function.
  • the network parameters to be adjusted include at least weight parameters in the positioning frame segmentation model.
  • the loss function used in the training process may be the Dice loss function, or another loss function set according to requirements, which is not limited here.
  • exemplarily, the loss function D may be $D = 1 - \frac{2\sum_{i=1}^{N} p_i g_i}{\sum_{i=1}^{N} p_i^2 + \sum_{i=1}^{N} g_i^2}$, where the predicted segmentation area P and the gold standard mask area G each include N pixels, i is the index of a pixel, $p_i$ is the pixel value of the i-th pixel in the predicted segmentation area P, $g_i$ is the pixel value of the i-th pixel in the gold standard mask area G, and $p_i \in P$, $g_i \in G$.
  • the Dice loss function zeroes out "activated" positions in non-target areas of the prediction and penalizes low-confidence positions in the target area, effectively addressing the area imbalance between background and foreground (the gold standard mask area) in medical images.
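  • A minimal sketch of this Dice loss and of one training iteration, assuming PyTorch; `dice_loss` and `train_step` are illustrative names, not part of the patent, and `BoxSegNet` refers to the sketch above:

```python
# A sketch of the Dice loss defined above and of one training iteration;
# sigmoid turns the model's logits into a soft predicted segmentation area.
import torch

def dice_loss(pred_logits, gold_mask, eps=1e-6):
    # D = 1 - 2*sum(p_i*g_i) / (sum(p_i^2) + sum(g_i^2))
    p = torch.sigmoid(pred_logits)
    inter = (p * gold_mask).sum()
    denom = (p * p).sum() + (gold_mask * gold_mask).sum()
    return 1 - (2 * inter + eps) / (denom + eps)

def train_step(model, optimizer, image, gold_mask):
    # Predict a segmentation area, compare it with the gold standard mask
    # area, and adjust the network (weight) parameters accordingly.
    loss = dice_loss(model(image), gold_mask)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```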
  • correspondingly, with the positioning frame segmentation model trained in the above manner, the segmentation result obtained for the target object in application may be a mask area such as that in the right image of FIG. 2, and the mask area is reverse-processed to obtain positioning frame information such as that in the left image of FIG. 2.
  • for example, the boundary information of the mask area, such as boundary coordinates or boundary vertex coordinates, may be extracted; alternatively, edge detection may be performed to obtain the boundary of the positioning frame and thereby the positioning frame information.
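  • For illustration, this reverse processing from mask area to rectangular positioning frame information can be sketched with contour extraction; `mask_to_frames` is a hypothetical helper assuming OpenCV 4.x and axis-aligned rectangular frames:

```python
# Extract region boundaries from a binary mask output by the segmentation
# model and derive one (x, y, width, height) positioning frame per region.
import cv2
import numpy as np

def mask_to_frames(mask: np.ndarray):
    # `mask` is a binary uint8 image; RETR_EXTERNAL keeps outer contours only.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours]
```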
  • on the basis of the above embodiments, the positioning frame segmentation model includes an encoding module and a decoding module.
  • exemplarily, see FIG. 3, a schematic structural diagram of a positioning frame segmentation model provided by an embodiment of the present invention: the encoding module includes sequentially connected down-sampling network layers, the decoding module includes sequentially connected up-sampling network layers, the last down-sampling network layer in the encoding module is connected to the first up-sampling network layer in the decoding module, and down-sampling and up-sampling network layers of the same scale in the encoding module and the decoding module are laterally connected.
  • the downsampling network layer may include at least one convolutional layer, any downsampling network layer may be a convolutional block, and the convolutional block may include multiple convolutional layers, for example, the convolutional block may be a residual block.
  • features of the input are extracted sequentially through the multiple down-sampling network layers, and the spatial scale is gradually reduced to obtain image features at different scales.
  • the upsampling network layer in the decoding module may include at least one deconvolution layer, for example, the upsampling network layer may be a deconvolution block.
  • Each up-sampling network layer in the decoding module restores the target details and spatial scale of the acquired feature images through deconvolution operations.
  • through the lateral connections between down-sampling and up-sampling network layers of the same scale, the feature map output by a down-sampling network layer is passed laterally to the corresponding up-sampling network layer, and the output of the shallow network layers is combined with the output of the deep network, so that the final output of the network can take into account both shallow information (precise position information) and deep information (semantic information of the image), improving the segmentation accuracy of the positioning frame segmentation model.
  • in the technical solution provided by this embodiment, the positioning image of the target object is processed by the preset scanning positioning model, and the scanning positioning frame information used for scanning the target object is output; no subsequent processing of the obtained scanning positioning frame information is needed, which improves the efficiency and accuracy of determining the scanning positioning frame information.
  • on the basis of the above embodiments, one scanning positioning model may be trained per part; one scanning positioning model may also be trained per part for each scan type (for example, including but not limited to CT scan, MR scan, PET/CT scan, etc.); and multiple scanning positioning models may also be trained for different imaging protocols or historical records.
  • exemplarily, for CT scanning, scanning positioning models corresponding to the human head, upper abdomen, lower abdomen, legs, etc. are trained separately to determine the scanning positioning frames in positioning images of those parts;
  • for MR scanning, multiple scanning positioning models can be trained for the same part under different scanning protocols, such as the T1 and T2 scanning protocols; different scanning positioning models can also be trained for each technician, each patient or each class of patients, where a class of patients may share one or more items of medical information, such as height, weight, age, gender or disease.
  • optionally, after the positioning image of the target object is acquired, the method further includes: identifying at least one part included in the positioning image.
  • exemplarily, the same positioning image may include only one part, such as the head, or two or more parts, such as the upper abdomen and the lower abdomen. Identifying at least one part included in the positioning image may be done by comparing the image contours in the positioning image with preset feature information of each part, so as to determine the parts included. By training multiple scanning positioning models to process the positioning images of each part in a targeted manner, the accuracy of determining the scanning positioning frame information is improved.
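  • A minimal sketch of keeping one scanning positioning model per scan type and part, as described above, assuming Python; the registry name and keys are illustrative:

```python
# Hypothetical registry dispatching (scan type, part) to a pre-trained
# scanning positioning model and collecting one result per identified part.
from typing import Callable, Dict, Tuple

MODEL_REGISTRY: Dict[Tuple[str, str], Callable] = {}

def register_model(scan_type: str, part: str, model: Callable) -> None:
    MODEL_REGISTRY[(scan_type, part)] = model

def locate_parts(scan_type: str, parts, positioning_image) -> dict:
    # Call the model corresponding to each identified part; each call
    # returns the scanning positioning frame(s) for that part.
    return {p: MODEL_REGISTRY[(scan_type, p)](positioning_image) for p in parts}
```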
  • on the basis of the above embodiments, after at least one part in the positioning image is determined, the method further includes acquiring the scan type in the scanning protocol and, based on the scan type and the at least one part, calling the corresponding scanning positioning model.
  • correspondingly, inputting the positioning image of the target object into the pre-trained scanning positioning model includes: calling, based on the at least one part, the scanning positioning model corresponding to each part, and determining the scanning positioning frame of each part based on the output result of each scanning positioning model.
  • when the positioning image includes at least two parts, the scanning positioning models corresponding to the parts may be called simultaneously, and the scanning positioning frame corresponding to each part is obtained from each scanning positioning model. Determining scanning positioning frame information for multiple parts in the positioning image makes selection convenient for the operator and avoids positioning frame errors caused by omitting a part.
  • on the basis of the above embodiments, the method further includes: acquiring a scanning protocol of the target object, and determining a target scanning positioning frame from the scanning positioning frames according to the part to be scanned in the scanning protocol.
  • the scanning protocol includes the part to be scanned, and the target scanning positioning frame is screened from the scanning positioning frames corresponding to the multiple parts according to the part to be scanned in the scanning protocol.
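  • For illustration, screening the target scanning positioning frame by the part to be scanned in the scanning protocol might look like the following sketch; the function and argument names are hypothetical:

```python
# Keep only the scanning positioning frames whose part appears among the
# parts to be scanned in the scanning protocol.
def select_target_frames(frames_by_part: dict, protocol_parts) -> dict:
    wanted = set(protocol_parts)
    return {part: frame for part, frame in frames_by_part.items() if part in wanted}
```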
  • on the basis of the above embodiments, the method further includes: acquiring the scanning protocol of the target object, calling the scanning positioning model corresponding to the part to be scanned according to the scanning protocol, and identifying the scanning positioning frame information in the positioning image based on that scanning positioning model.
  • on the basis of the above embodiments, the method further includes: acquiring the scanning protocol of the target object, and calling the corresponding scanning positioning model according to the scan type and the part to be scanned in the scanning protocol.
  • FIG. 4 is a schematic flowchart of a scanning positioning method provided by an embodiment of the present invention. On the basis of the above embodiments, a structure of the scanning positioning model is provided. The method specifically includes:
  • S210: Acquire a positioning image of the target object.
  • S220: Input the positioning image of the target object into a pre-trained scanning positioning model, and determine a scanning positioning frame in the positioning image based on the output result of the scanning positioning model, wherein the scanning positioning model includes a region generation sub-network and a target classification sub-network, the region generation sub-network is used to generate the scanning positioning frame corresponding to each part in the positioning image, and the target classification sub-network is used to determine the scanned-part information corresponding to each scanning positioning frame.
  • S230: Acquire the scanning protocol of the target object, and determine a target scanning positioning frame from the scanning positioning frames according to the part to be scanned in the scanning protocol.
  • in this embodiment, the scanning positioning model may be any one of an R-CNN (Region-based Convolutional Neural Network) model, a Fast R-CNN model or a Faster R-CNN model.
  • the scanning positioning model includes a region generation sub-network and a target classification sub-network, wherein the region generation sub-network may be an RPN (Region Proposal Network) model.
  • the region generation sub-network is used to determine the scanning positioning frame information in the positioning image of the target object; for an example, see FIG. 5, a schematic diagram of scanning positioning frame information in a positioning image provided by an embodiment of the present invention, which includes positioning frame 1 and positioning frame 2 identified by the region generation sub-network.
  • the target classification sub-network is connected to the region generation sub-network and is used to classify the part corresponding to each piece of scanning positioning frame information output by the region generation sub-network, thereby obtaining each piece of scanning positioning frame information in the positioning image together with its corresponding part information.
  • the scanning positioning model is obtained by training based on the training positioning image, the gold standard positioning frame information used for scanning in the training positioning image, and the part information corresponding to the gold standard positioning frame information.
  • specifically, the training positioning image is input into the scanning positioning model to be trained to obtain predicted positioning frame information and the corresponding predicted part classification information; a first loss function is determined based on the gold standard positioning frame information and the predicted positioning frame information, a second loss function is determined based on the part information corresponding to the gold standard positioning frame information and the predicted part classification information, and a target loss function is obtained based on the first loss function and the second loss function.
  • based on the target loss function, the parameters of the scanning positioning model are adjusted, and the above training process is iterated until convergence or a preset accuracy is reached, at which point training is deemed complete.
  • the first loss function, comparing the predicted positioning frames with the gold standard positioning frames, may be a smooth L1 regression loss, and the second loss function, comparing the predicted part classifications with the gold standard part labels, may be a softmax loss; correspondingly, the target loss function may take the Faster R-CNN form
  • $L(\{p_i\},\{t_i\}) = \frac{1}{N_{cls}}\sum_i L_{cls}(p_i, p_i^*) + \lambda \frac{1}{N_{reg}}\sum_i p_i^* L_{reg}(t_i, t_i^*)$
  • where $p_i$ is the predicted part classification probability of the i-th candidate frame and $p_i^*$ the corresponding gold standard part label; $t_i$ is the predicted positioning frame information and $t_i^*$ the gold standard positioning frame information; $L_{cls}$ is the classification (softmax) loss and $L_{reg}$ the positioning frame regression (smooth L1) loss; $\lambda$ is a preset weight; $N_{cls}$ is the mini-batch size of the training data set; and $N_{reg}$ is the number of candidate detection frames.
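  • For illustration, a detector of this family (a region generation sub-network plus a classification head) can be instantiated with torchvision's Faster R-CNN; the part labels below are illustrative assumptions, and torchvision >= 0.13 is assumed:

```python
# Sketch: Faster R-CNN with an RPN (region generation) and a classification
# head, repurposed here to propose positioning frames and label body parts.
import torch
import torchvision

PARTS = ["background", "head", "upper_abdomen", "lower_abdomen", "legs"]
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(
    weights=None, num_classes=len(PARTS))
model.eval()

# A positioning image (single channel tiled to the 3 channels the model expects).
image = torch.rand(1, 512, 512).repeat(3, 1, 1)
with torch.no_grad():
    out = model([image])[0]  # RPN proposes frames; the head classifies parts
boxes, labels, scores = out["boxes"], out["labels"], out["scores"]
```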
  • on the basis of the above embodiments, the scanning protocol of the target object is acquired, and the target scanning positioning frame is determined from the scanning positioning frames according to the part to be scanned in the scanning protocol.
  • in the technical solution provided by this embodiment, the positioning image of the target object is processed by a scanning positioning model that includes a region generation sub-network and a target classification sub-network, yielding at least one piece of scanning positioning frame information in the positioning image together with the corresponding part classification information; the positioning image is thus fully analyzed, omission of any part is avoided, and the comprehensiveness and accuracy of the scanning positioning frames are improved.
  • FIG. 6 is a schematic structural diagram of a scanning and positioning device according to Embodiment 3 of the present invention.
  • the device includes: a positioning image acquisition module 310 and a positioning frame determination module 320, wherein:
  • a positioning image acquisition module 310, used for acquiring the positioning image of the target object;
  • a positioning frame determination module 320, configured to input the positioning image of the target object into a pre-trained scanning positioning model and determine a scanning positioning frame in the positioning image based on the output result of the scanning positioning model, wherein the scanning positioning model is obtained by training based on a training positioning image and the gold standard positioning frame information used for scanning in the training positioning image.
  • the scanning positioning model is a positioning frame segmentation model.
  • the positioning frame determination module 320 is used for:
  • the positioning frame segmentation model includes an encoding module and a decoding module, wherein the encoding module includes sequentially connected down-sampling network layers, the decoding module includes sequentially connected up-sampling network layers, the last down-sampling network layer in the encoding module is connected to the first up-sampling network layer in the decoding module, and down-sampling and up-sampling network layers of the same scale in the encoding module and the decoding module are laterally connected.
  • the device also includes:
  • a model training module, for acquiring the training positioning image and the gold standard positioning frame information used for scanning in the training positioning image, and generating the gold standard mask area of the training positioning image based on the gold standard positioning frame information;
  • the following iterative training is performed on the positioning frame segmentation model to be trained, until the predicted segmentation area output by the positioning frame segmentation model satisfies the preset segmentation accuracy:
  • the training positioning image is input into the positioning frame segmentation model to be trained to obtain a predicted segmentation area, a loss function is determined based on the predicted segmentation area and the gold standard mask area, and the network parameters of the positioning frame segmentation model to be trained are adjusted according to the loss function.
  • the device also includes:
  • a part identification module, configured to identify at least one part included in the positioning image after the positioning image of the target object is acquired;
  • the model calling module is used for respectively calling the scanning positioning model corresponding to each part based on the at least one part, wherein the scanning positioning model corresponding to the part is used to determine the scanning positioning frame corresponding to each part.
  • the scanning positioning model includes a region generation sub-network and a target classification sub-network, wherein the region generation sub-network is used to generate the scanning positioning frame corresponding to each part in the positioning image, and the target classification sub-network is used to determine the scanned-part information corresponding to each scanning positioning frame.
  • the device further includes:
  • a target scanning positioning frame determination module, used for acquiring the scanning protocol of the target object and determining the target scanning positioning frame from the scanning positioning frames according to the part to be scanned in the scanning protocol.
  • the above product can execute the method provided by any embodiment of the present invention, and has corresponding functional modules and beneficial effects for executing the method.
  • FIG. 7 is a schematic structural diagram of an electronic device according to Embodiment 4 of the present invention.
  • This embodiment of the present invention provides services for implementing the scanning positioning method of the above-mentioned embodiments of the present invention, and the scanning positioning apparatus of the above-mentioned embodiments can be deployed on it.
  • Figure 7 shows a block diagram of an exemplary electronic device 12 suitable for use in implementing embodiments of the present invention.
  • the electronic device 12 shown in FIG. 7 is only an example, and should not impose any limitations on the function and scope of use of the embodiments of the present invention.
  • the electronic device 12 takes the form of a general-purpose computing device.
  • Components of electronic device 12 may include, but are not limited to, one or more processors or processing units 16, system memory 28, and a bus 18 that connects various system components, including system memory 28 and processing unit 16.
  • Bus 18 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, a graphics acceleration port, a processor, or a local bus using any of a variety of bus structures.
  • by way of example, these architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
  • Electronic device 12 typically includes a variety of computer system readable media. These media can be any available media that can be accessed by electronic device 12, including both volatile and non-volatile media, removable and non-removable media.
  • System memory 28 may include computer system readable media in the form of volatile memory, such as random access memory (RAM) 30 and/or cache memory 32 .
  • Electronic device 12 may further include other removable/non-removable, volatile/non-volatile computer system storage media.
  • storage system 34 may be used to read and write to non-removable, non-volatile magnetic media (not shown in FIG. 7, commonly referred to as a "hard drive”).
  • although not shown in FIG. 7, a magnetic disk drive for reading from and writing to a removable non-volatile magnetic disk (e.g., a "floppy disk"), and an optical disk drive for reading from and writing to a removable non-volatile optical disk (e.g., a CD-ROM, DVD-ROM or other optical media), may be provided.
  • each drive may be connected to bus 18 through one or more data media interfaces.
  • Memory 28 may include at least one program product having a set (eg, at least one) of program modules configured to perform the functions of various embodiments of the present invention.
  • a program/utility 40 having a set (at least one) of program modules 42 may be stored, for example, in memory 28; such program modules 42 include, but are not limited to, an operating system, one or more application programs, other program modules, and program data, and each or some combination of these examples may include an implementation of a network environment.
  • Program modules 42 generally perform the functions and/or methods of the described embodiments of the present invention.
  • the electronic device 12 may also communicate with one or more external devices 14 (e.g., a keyboard, a pointing device, a display 24, etc.), with one or more devices that enable a user to interact with the electronic device 12, and/or with any device (e.g., a network card, a modem, etc.) that enables the electronic device 12 to communicate with one or more other computing devices. Such communication may take place through an input/output (I/O) interface 22. Also, the electronic device 12 may communicate with one or more networks (e.g., a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) through a network adapter 20. As shown in FIG. 7, the network adapter 20 communicates with the other modules of the electronic device 12 via the bus 18. It should be understood that, although not shown, other hardware and/or software modules may be used in conjunction with the electronic device 12, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives and data backup storage systems.
  • the processing unit 16 executes various functional applications and data processing by running the programs stored in the system memory 28, for example, to implement the scanning and positioning method provided by the embodiment of the present invention.
  • Embodiment 5 of the present invention also provides a storage medium containing computer-executable instructions, where the computer-executable instructions are used to execute a scanning and positioning method when executed by a computer processor, and the method includes:
  • acquiring a positioning image of a target object; inputting the positioning image of the target object into a pre-trained scanning positioning model, and determining a scanning positioning frame in the positioning image based on the output result of the scanning positioning model, wherein the scanning positioning model is obtained by training based on a training positioning image and the gold standard positioning frame information used for scanning in the training positioning image.
  • the computer storage medium in the embodiments of the present invention may adopt any combination of one or more computer-readable mediums.
  • the computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium.
  • the computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device, or a combination of any of the above.
  • a computer-readable storage medium can be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a propagated data signal in baseband or as part of a carrier wave, with computer-readable program code embodied thereon. Such propagated data signals may take a variety of forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • a computer-readable signal medium can also be any computer-readable medium other than a computer-readable storage medium that can transmit, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
  • Program code embodied on a computer readable medium may be transmitted using any suitable medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
  • Computer program code for carrying out operations of the present invention may be written in one or more programming languages, including object-oriented programming languages, such as Java, Smalltalk or C++, and conventional procedural programming languages, such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
  • it should be noted that a storage medium containing computer-executable instructions provided by an embodiment of the present invention is not limited to the above method operations, and can also perform related operations in the scanning positioning method provided by any embodiment of the present invention.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Multimedia (AREA)
  • Public Health (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Pathology (AREA)
  • Bioethics (AREA)
  • Business, Economics & Management (AREA)
  • General Business, Economics & Management (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Analysis (AREA)

Abstract

The present application discloses a scanning positioning method and apparatus, a storage medium and an electronic device. The method includes: acquiring a positioning image of a target object; inputting the positioning image of the target object into a pre-trained scanning positioning model, and determining a scanning positioning frame in the positioning image based on the output result of the scanning positioning model, wherein the scanning positioning model is obtained by training based on a training positioning image and gold standard positioning frame information used for scanning in the training positioning image. The positioning image of the target object is processed by the preset scanning positioning model, and the scanning positioning frame information used for scanning the target object is output; no subsequent processing of the obtained scanning positioning frame information is needed, which improves the efficiency and accuracy of determining the scanning positioning frame information.

Description

Scanning positioning method and apparatus, storage medium and electronic device
RELATED APPLICATION
This application claims priority to Chinese patent application No. 202011500980.3, filed on December 17, 2020 and entitled "Scanning positioning method and apparatus, storage medium and electronic device", the entire contents of which are incorporated herein by reference.
TECHNICAL FIELD
The present application relates to the technical field of deep learning, and more particularly to a scanning positioning method and apparatus, a storage medium and an electronic device.
BACKGROUND
Medical scanning technology can be applied to various parts of the human body and plays a vital role in diagnosing diseases, tracking treatment progress, and so on. Before the human body is scanned, the various medical scanning technologies require a positioning frame for the scan to be determined.
At present, the scanning positioning frame is generally determined manually: the starting line, ending line and range in the positioning image are selected with tools such as a keyboard or mouse. This manual approach involves a heavy workload and places high demands on the operator's skill.
SUMMARY
In view of this, the present application discloses a scanning positioning method and apparatus, a storage medium and an electronic device, so as to enable rapid, high-precision determination of a scanning positioning frame.
In a first aspect, a scanning positioning method is provided, including:
acquiring a positioning image of a target object;
inputting the positioning image of the target object into a pre-trained scanning positioning model, and determining a scanning positioning frame in the positioning image based on the output result of the scanning positioning model, wherein the scanning positioning model is obtained by training based on a training positioning image and gold standard positioning frame information used for scanning in the training positioning image.
Preferably, the scanning positioning model is a positioning frame segmentation model;
wherein determining the scanning positioning frame in the positioning image based on the output result of the scanning positioning model includes:
acquiring the segmentation result output by the positioning frame segmentation model, and determining the outline of the segmented area in the segmentation result as the scanning positioning frame.
Preferably, the positioning frame segmentation model includes an encoding module and a decoding module, wherein the encoding module includes sequentially connected down-sampling network layers, the decoding module includes sequentially connected up-sampling network layers, the last down-sampling network layer in the encoding module is connected to the first up-sampling network layer in the decoding module, and down-sampling and up-sampling network layers of the same scale in the encoding module and the decoding module are laterally connected.
Preferably, the training method of the positioning frame segmentation model includes:
acquiring the training positioning image and the gold standard positioning frame information used for scanning in the training positioning image, and generating a gold standard mask area of the training positioning image based on the gold standard positioning frame information;
iteratively training the positioning frame segmentation model to be trained based on the training positioning image and the corresponding gold standard mask area, to obtain the trained positioning frame segmentation model.
Preferably, after acquiring the positioning image of the target object, the method further includes:
identifying at least one part included in the positioning image;
calling, based on the at least one part, the scanning positioning model corresponding to each part, wherein the scanning positioning model corresponding to a part is used to determine the scanning positioning frame corresponding to that part.
Preferably, the scanning positioning model includes a region generation sub-network and a target classification sub-network, wherein the region generation sub-network is used to generate the scanning positioning frame corresponding to each part in the positioning image, and the target classification sub-network is used to determine the scanned-part information corresponding to each scanning positioning frame.
Preferably, the method further includes:
acquiring a scanning protocol of the target object, and determining a target scanning positioning frame from the scanning positioning frames according to the part to be scanned in the scanning protocol.
In a second aspect, a scanning positioning apparatus is provided, including:
a positioning image acquisition module, used for acquiring a positioning image of a target object;
a positioning frame determination module, used for inputting the positioning image of the target object into a pre-trained scanning positioning model and determining a scanning positioning frame in the positioning image based on the output result of the scanning positioning model, wherein the scanning positioning model is obtained by training based on a training positioning image and gold standard positioning frame information used for scanning in the training positioning image.
In a third aspect, an electronic device is provided, including:
one or more processors;
a memory for storing one or more programs;
when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the scanning positioning method according to the first aspect.
In a fourth aspect, a storage medium containing computer-executable instructions is provided, the computer-executable instructions, when executed by a computer processor, being used to perform the scanning positioning method according to the first aspect.
As can be seen from the above technical solutions, the present application discloses a scanning positioning method and apparatus, a storage medium and an electronic device. The positioning image of the target object is processed by a preset scanning positioning model, and the scanning positioning frame information used for scanning the target object is output; no subsequent processing of the obtained scanning positioning frame information is needed, which improves the efficiency and accuracy of determining the scanning positioning frame information.
BRIEF DESCRIPTION OF THE DRAWINGS
In order to more clearly illustrate the technical solutions in the embodiments of the present application or in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are merely embodiments of the present application, and those of ordinary skill in the art can obtain other drawings from the disclosed drawings without creative effort.
FIG. 1 is a schematic flowchart of a scanning positioning method provided by Embodiment 1 of the present invention;
FIG. 2 is a schematic diagram of a conversion process of a positioning frame provided by an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a positioning frame segmentation model provided by an embodiment of the present invention;
FIG. 4 is a schematic flowchart of a scanning positioning method provided by an embodiment of the present invention;
FIG. 5 is a schematic diagram of scanning positioning frame information in a positioning image provided by an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of a scanning positioning apparatus provided by Embodiment 3 of the present invention;
FIG. 7 is a schematic structural diagram of an electronic device provided by Embodiment 4 of the present invention.
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
本申请实施例公开了一种扫描定位方法、装置、存储介质及电子设备,以实现快速确定高精度的扫描定位框。
图1为本发明实施例一提供的一种扫描定位方法的流程示意图,本实施例可适用于在对目标对象进行扫描之前,自动确定扫描定位框的情况,该方法可以由本发明实施例提供的扫描定位装置来执行,该装置可集成于诸如计算机或者扫描设备等的电子设备上,具体包括如下步骤:
S110、获取目标对象的定位像。
S120、将所述目标对象的定位像输入至预先训练的扫描定位模型,基于所述扫描定位模型的输出结果确定所述定位像中的扫描定位框,其中,所述扫描定位模型基于训练定位 像和所述训练定位像中用于进行扫描的金标准定位框信息进行训练得到。
其中,所述目标对象为待进行扫描的对象,该对象可以是人体、动物体或者人体、动物体的局部区域。待进行的扫描可以是CT(Computed Tomography,电子计算机断层扫描)扫描、MRI(Magnetic Resonance Imaging,磁共振扫描)、PET(positron emission computed tomography,正电子发射计算机断层扫描)、X光片、超声影像等单模态扫描,或多种模态的组合扫描,对此不作限定。
定位像为在进行扫描之前,对目标对象采集的用于进行定位的图像,其中,定位像中至少包括进行扫描的部位,在一些实施例中,定位像中可以包括两个或两个以上的部位。一般情况下,定位像在对目标对象的扫描前获得,也可以使用前一次对目标对象的同模态或不同模态的扫描影像,或光学拍照图像。本实施例中,通过预先训练的扫描定位模型对目标对象的定位像进行处理,得到该目标对象进行扫描的定位结果,其中,定位框一般为矩形形状,也可以是多边形形状或不规则形状等。该定位结果可以是在定位像中绘制出的定位框,还可以是定位框的描述信息,该描述信息可以是包括定位框的尺寸和范围,例如可以包括定位框的顶点坐标,还可以是包括定位框的中心坐标以及宽高信息。
本实施例中,扫描定位模块可以是诸如神经网络模块等的机器学习模块,该扫描定位模块通过端到端的训练方法得到,具体的,通过训练定位像和所述训练定位像中用于进行扫描的金标准定位框信息进行训练得到。其中,用于进行扫描的金标准定位框信息可以是通过经验丰富的扫描技师勾画的,可直接进行扫描的定位框信息,无需再进行任何后处理过程;金标准定位框信息也可以是收集到的技师在实际使用中的定位框勾画记录。在一个实施例中,所述金标准定位框信息可以根据当前的扫描信息选择历史数据,比如选择和当前扫描模态一致、病人一致、技师一致、扫描影像设备至少一项参数或型号一致、扫描协议至少一项参数一致的金标准定位框信息历史数据。相应的,基于上述用于进行扫描的金标准定位框信息以及训练定位像训练得到的扫描定位模块,在应用过程中可输出直接用于对目标对象进行扫描的定位框信息,无需再进行任何后处理过程,简化了扫描定位框的确定过程,提高了定位框的准确性,同时降低了对扫描技师的经验要求。
在上述实施例的基础上,扫描定位模型为定位框分割模型。该定位框分割模型具有从定位像中分割定位框的功能。将目标对象的扫描像输入至定位框分割模型中,得到定位框分割模型输出的分割定位框,即分割结果,相应的,基于所述扫描定位模型的输出结果确定所述定位像中的扫描定位框,包括:获取所述定位框分割模型输出的分割结果,将所述分割结果中分割区域的轮廓确定为扫描定位框。
上述定位框分割模型的训练方法包括:获取所述训练定位像和所述训练定位像中用于 进行扫描的金标准定位框信息,基于所述金标准定位框信息生成所述训练定位像的金标准掩膜区域,示例性的,参见图2,图2是本发明实施例提供的定位框的转换过程的示意图,其中,图2中左图为包括金标准定位框信息的定位像,其中,金标准定位框信息可以是技师勾画的头部定位框,图2中右图为包括金标准掩膜区域的定位像,其中,金标准掩膜区域为金标准定位框信息所包括的区域。基于大量的训练定位像与训练定位像对应的金标准掩膜区域对待训练的定位框分割模型进行迭代训练,以得到基于定位框分割功能的定位框分割模型。
具体的,对待训练的定位框分割模型进行以下迭代训练,直到所述定位框分割模型输出的预测分割区域满足预设分割精度:将所述训练定位像输入至待训练的定位框分割模型中,得到预测分割区域,基于所述预测分割区域和所述金标准掩膜区域确定损失函数,根据所述损失函数对所述待训练的定位框分割模型进行网络参数调节。其中,进行调节的网络参数至少包括定位框分割模型中的权重参数。本实施例中,训练过程中损失函数可以是diss损失函数,还可以是根据需求设置的其他损失函数,对此不作限定。
示例性的,损失函数D可以是
Figure PCTCN2021123459-appb-000001
预测分割区域P和金标准掩膜区域G分别包括N个像素点,i为像素点的标识,p i为预测分割区域P中的第i个像素点的像素值,g i为金标准掩膜区域G中第i个像素点的像素值,且p i∈P,g i∈G。diss损失函数可以将预测结果中非目标区域中“激活”的位置清零,惩罚目标区域中低置信度位置,有效解决了医学图像中背景和前景(金标准掩膜区域)之间的面积对比不均衡的问题。
相应的,基于上述方式训练得到的定位框分割模型,在应用过程对于目标对象进行处理得到的分割结果可以是诸如图2右图中的掩膜区域,对该掩膜区域反向处理得到诸如图2左图中的定位框信息,例如可以是提取掩膜区域的边界信息,例如边界坐标信息或者边界顶点坐标等;还可以是进行边缘检测,得到定位框的边界,进而得到定位框信息。
在上述实施例的基础上,定位框分割模型包括编码模块和解码模块,示例性的,参见图3,图3是本发明实施例提供的一种定位框分割模型的结构示意图,其中,所述编码模块包括依次连接的下采样网络层,所述解码模块包括依次连接的上采样网络层,所述编码模块中的末端下采样网络层与所述解码模块中的起始上采样网络层连接,且所述编码模块与所述解码模块中同尺度的下采样网络层与上采样网络层之间横向连接。其中,下采样网络层可以是包括至少一个卷积层,任一下采样网络层可以是一个卷积块,该卷积块可以是 包括多个卷积层,例如卷积块可以是残差块。通过多个下采样网络层依次对输入信息进行特征提取,并逐渐降低空间尺度,以得到不同尺度的图像特征。解码模块中的上采样网络层可以是包括至少一个反卷积层,例如上采样网络层可以是一个反卷积块。解码模块中的各上采样网络层通过反卷积操作对获取的特征图像进行目标细节和空间尺度的恢复。同时通过同尺度的下采样网络层与上采样网络层之间横向连接(图中虚线部分内容),将下采样网络层输出的特征图横向输出至对应的上采样网络层中,将浅层网络层中的输出和深层网络的输出合并,使得网络在最终输出的时候能够同时考虑浅层信息(精确的位置信息)和深层信息(图像的语义信息),以提高定位框分割模型的分割精度。
本实施例提供的技术方案,通过预先设置的扫描定位模型,对目标对象的定位像进行处理,输出用于对目标对象进行扫描的扫描定位框信息,无需对得到的扫描定位框信息进行任何后续处理,提高了扫描定位框信息的确定效率和精确度。
在上述实施例的基础上,可以是针对不同部位对应训练一个扫描定位模型,还可以是针对每一种扫描类型(例如包括但不限于CT扫描、MR扫描、PER/CT扫描等)的每一个部位对应训练一个扫描定位模型,还可以是针对不同的成像协议、历史记录训练多个扫描定位模型。示例性的,对于CT扫描,分别训练对于人体头部、人体上腹部、人体下腹部、腿部等部分分别对应的扫描定位模型,用于分别对上述部位的定位像中确定扫描定位框;对于MR扫描,可以针对不同扫描协议,如T1扫描协议和T2扫描协议,为同一个部位训练多个扫描定位模型;还可以针对每一个技师、每一个患者或每一类患者训练不同的扫描定位模型,其中一类患者可以是具有某一项或多项相同的医疗信息,如身高、体重、年龄、性别、病症等。
可选的,在所述获取目标对象的定位像之后,还包括:识别所述定位像中包括的至少一个部位,示例性的,同一定位像中可以是只包括一个部位,例如头部,也可以是包括两个或以上的部位,例如上腹部和下腹部。识别定位像中包括的至少一个部位可以是将定位像中图像轮廓与预先设置的各部位的特征信息进行比对,以确定包括的各个部位。通过训练多个扫描定位模型,以针对性的对各部位的定位像进行处理,提高了扫描定位框信息的确定精度。
On the basis of the above embodiments, after the at least one body part in the positioning image is determined, the method further includes: acquiring the scan type from the scan protocol and, based on the scan type and the at least one part, invoking the corresponding scanning positioning models respectively.
Accordingly, inputting the positioning image of the target object into the pre-trained scanning positioning model includes: invoking, based on the at least one part, the scanning positioning model corresponding to each part, and determining the scanning positioning frame of each part based on the output of each model. When the positioning image contains at least two parts, the models corresponding to the parts may be invoked simultaneously, and the frame of each part obtained from each model. Determining frame information for the multiple parts in the positioning image facilitates the operator's selection and avoids frame errors caused by overlooking a part.
On the basis of the above embodiments, the method further includes: acquiring the scan protocol of the target object, and determining a target scanning positioning frame from the frames according to the scan part in the scan protocol. The scan protocol includes the part to be scanned, which is used to filter the target frame from the frames of the multiple parts.
On the basis of the above embodiments, the method further includes: acquiring the scan protocol of the target object, invoking the scanning positioning model corresponding to the scan part in the protocol, and identifying the scanning positioning frame information in the positioning image based on that model.
On the basis of the above embodiments, the method further includes: acquiring the scan protocol of the target object and invoking the corresponding scanning positioning model according to the scan type and scan part in the protocol.
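Such per-(scan type, body part) model selection could be organized, for illustration only, as a simple registry; none of the names, keys, or file paths below appear in the disclosure:

```python
# Hypothetical registry keyed by (scan_type, body_part).
MODEL_REGISTRY = {
    ("CT", "head"): "ct_head_model.pt",
    ("CT", "upper_abdomen"): "ct_upper_abdomen_model.pt",
    ("MR_T1", "head"): "mr_t1_head_model.pt",
}

def select_models(scan_type, detected_parts):
    """Return the model for each detected part under the given scan type,
    mirroring the per-part invocation described above."""
    return {part: MODEL_REGISTRY[(scan_type, part)]
            for part in detected_parts
            if (scan_type, part) in MODEL_REGISTRY}
```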
FIG. 4 is a schematic flowchart of a scanning positioning method provided in an embodiment of the present application. On the basis of the above embodiments, a structure of the scanning positioning model is provided. The method specifically includes:
S210: Acquire a positioning image of a target object.
S220: Input the positioning image of the target object into a pre-trained scanning positioning model, and determine the scanning positioning frame in the positioning image based on the output result of the model, wherein the scanning positioning model includes a region generation subnetwork and a target classification subnetwork; the region generation subnetwork is used to generate the scanning positioning frames corresponding to the parts in the positioning image, and the target classification subnetwork is used to determine the scan part information corresponding to each frame.
S230: Acquire the scan protocol of the target object, and determine a target scanning positioning frame from the frames according to the scan part in the protocol.
In this embodiment, the scanning positioning model may be any of an R-CNN (Region-based Convolutional Neural Network), Fast R-CNN, or Faster R-CNN model. The scanning positioning model includes a region generation subnetwork and a target classification subnetwork, where the region generation subnetwork may be an RPN (Region Proposal Network) model. The region generation subnetwork determines the scanning positioning frame information in the positioning image of the target object; for example, see FIG. 5, a schematic diagram of frame information in a positioning image provided in an embodiment of the present application, which shows frame 1 and frame 2 identified by the region generation subnetwork. The target classification subnetwork, connected to the region generation subnetwork, classifies the body part corresponding to each frame output by the region generation subnetwork, thereby yielding each frame in the positioning image together with its corresponding part information.
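As one off-the-shelf realization of such a two-subnetwork detector (assuming PyTorch/torchvision; the class count of five, covering background plus four body parts, is an illustrative assumption), a Faster R-CNN could be instantiated as follows:

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# Generic Faster R-CNN: an RPN (region generation) plus a classification
# head (target classification), as one possible realization of the model.
model = fasterrcnn_resnet50_fpn(num_classes=5)
model.eval()

scout = torch.rand(3, 512, 512)       # stand-in for a positioning image
with torch.no_grad():
    out = model([scout])[0]           # one result dict per input image
frames = out["boxes"]                 # candidate scanning positioning frames
parts = out["labels"]                 # part classification per frame
scores = out["scores"]                # confidence per frame
```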
The scanning positioning model is trained based on training positioning images, the gold-standard positioning frame information, in those images, used for scanning, and the part information corresponding to that gold-standard frame information. Specifically, a training positioning image is input into the scanning positioning model to be trained, yielding predicted frame information and the corresponding predicted part classification; a regression loss is determined from the gold-standard and predicted frame information, a classification loss is determined from the part information corresponding to the gold-standard frames and the predicted part classification, and a target loss function is then obtained from the two. The model parameters are adjusted based on the target loss, and the above training process is iterated until convergence or a preset accuracy is reached, at which point training is complete.
The classification loss may be a softmax loss and the regression loss a smooth L1 loss; accordingly, the target loss function may be

$$L(\{p_i\},\{t_i\}) = \frac{1}{N_{cls}}\sum_i L_{cls}(p_i, p_i^*) + \lambda\,\frac{1}{N_{reg}}\sum_i p_i^*\, L_{reg}(t_i, t_i^*)$$

where L({p_i},{t_i}) is the target loss function, p_i is the predicted part classification for the i-th candidate frame and p_i^* is the part label derived from the gold-standard frame information, t_i is the predicted frame information and t_i^* is the gold-standard frame information, L_cls is the classification (softmax) loss, L_reg is the regression (smooth L1) loss, λ is a preset weight, N_cls is the mini-batch size of the training data set, and N_reg is the number of candidate detection frames.
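A hedged PyTorch sketch of this combined loss (tensor shapes and the foreground-label convention p_i^* > 0 for positive candidates are assumptions, not the disclosed implementation):

```python
import torch
import torch.nn.functional as F

def target_loss(cls_logits, cls_gold, box_pred, box_gold, lam=1.0):
    """Combined loss in the form above: softmax classification term plus
    smooth-L1 regression term applied only to foreground candidates."""
    n_cls = cls_logits.shape[0]                 # mini-batch size N_cls
    l_cls = F.cross_entropy(cls_logits, cls_gold,
                            reduction="sum") / n_cls
    n_reg = box_pred.shape[0]                   # N_reg: candidate frame count
    pos = cls_gold > 0                          # p_i* nonzero for foreground
    l_reg = F.smooth_l1_loss(box_pred[pos], box_gold[pos],
                             reduction="sum") / max(n_reg, 1)
    return l_cls + lam * l_reg
```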
On the basis of the above embodiments, the scan protocol of the target object is acquired, and the target scanning positioning frame is determined from the frames according to the scan part in the protocol.
The technical solution provided by this embodiment processes the positioning image of the target object through a scanning positioning model comprising a region generation subnetwork and a target classification subnetwork, obtaining at least one scanning positioning frame in the positioning image together with the part classification corresponding to each frame. The positioning image is thus fully analyzed, no part is overlooked, and the comprehensiveness and accuracy of the scanning positioning frames are improved.
FIG. 6 is a schematic structural diagram of a scanning positioning apparatus provided in Embodiment 3 of the present application. The apparatus includes a positioning image acquisition module 310 and a positioning frame determination module 320, wherein:
the positioning image acquisition module 310 is configured to acquire a positioning image of a target object;
the positioning frame determination module 320 is configured to input the positioning image of the target object into a pre-trained scanning positioning model and determine the scanning positioning frame in the positioning image based on the output result of the model, wherein the scanning positioning model is trained based on training positioning images and gold-standard positioning frame information, in the training positioning images, used for scanning.
Optionally, the scanning positioning model is a positioning frame segmentation model.
Optionally, the positioning frame determination module 320 is configured to:
acquire the segmentation result output by the positioning frame segmentation model, and determine the contour of the segmented region in the segmentation result as the scanning positioning frame.
Optionally, the positioning frame segmentation model includes an encoding module and a decoding module, wherein the encoding module includes sequentially connected downsampling network layers, the decoding module includes sequentially connected upsampling network layers, the last downsampling network layer of the encoding module is connected to the first upsampling network layer of the decoding module, and downsampling and upsampling network layers of the same scale in the encoding and decoding modules are laterally connected.
Optionally, the apparatus further includes:
a model training module, configured to acquire the training positioning images and the gold-standard positioning frame information, in the training positioning images, used for scanning, and to generate gold-standard mask regions of the training positioning images based on the gold-standard frame information;
and to iteratively train the positioning frame segmentation model to be trained as follows, until the predicted segmentation region output by the model reaches a preset segmentation accuracy:
input a training positioning image into the model to be trained to obtain a predicted segmentation region, determine a loss function based on the predicted segmentation region and the gold-standard mask region, and adjust the network parameters of the model according to the loss function.
Optionally, the apparatus further includes:
a part identification module, configured to identify at least one body part contained in the positioning image after the positioning image of the target object is acquired;
a model invocation module, configured to invoke, based on the at least one part, the scanning positioning model corresponding to each part, wherein the model corresponding to a part is used to determine the scanning positioning frame of that part.
Optionally, the scanning positioning model includes a region generation subnetwork and a target classification subnetwork, wherein the region generation subnetwork is used to generate the scanning positioning frames corresponding to the parts in the positioning image, and the target classification subnetwork is used to determine the scan part information corresponding to each frame.
Optionally, the apparatus further includes:
a target scanning positioning frame determination module, configured to acquire the scan protocol of the target object and determine a target scanning positioning frame from the frames according to the scan part in the protocol.
The above product can perform the method provided by any embodiment of the present application, and has functional modules and beneficial effects corresponding to the performed method.
FIG. 7 is a schematic structural diagram of an electronic device provided in Embodiment 4 of the present application. This embodiment serves the implementation of the scanning positioning method of the above embodiments of the present application, and the scanning positioning apparatus of the above embodiments may be configured on it. FIG. 7 shows a block diagram of an exemplary electronic device 12 suitable for implementing embodiments of the present application. The electronic device 12 shown in FIG. 7 is merely an example and should not impose any limitation on the functionality or scope of use of the embodiments of the present application.
As shown in FIG. 7, the electronic device 12 takes the form of a general-purpose computing device. The components of the electronic device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, and a bus 18 connecting the various system components (including the system memory 28 and the processing unit 16).
The bus 18 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
The electronic device 12 typically includes a variety of computer-system-readable media. These media may be any available media accessible by the electronic device 12, including volatile and non-volatile, removable and non-removable media.
The system memory 28 may include computer-system-readable media in the form of volatile memory, such as random access memory (RAM) 30 and/or cache memory 32. The electronic device 12 may further include other removable/non-removable, volatile/non-volatile computer-system storage media. By way of example only, the storage system 34 may be used to read from and write to non-removable, non-volatile magnetic media (not shown in FIG. 7, commonly called a "hard drive"). Although not shown in FIG. 7, a magnetic disk drive for reading from and writing to a removable non-volatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from and writing to a removable non-volatile optical disk (e.g., CD-ROM, DVD-ROM, or other optical media) may be provided. In such cases, each drive may be connected to the bus 18 through one or more data-media interfaces. The memory 28 may include at least one program product having a set (e.g., at least one) of program modules configured to carry out the functions of the embodiments of the present application.
A program/utility 40 having a set (at least one) of program modules 42 may be stored, for example, in the memory 28; such program modules 42 include, but are not limited to, an operating system, one or more application programs, other program modules, and program data, each of which, or some combination of which, may comprise an implementation of a networking environment. The program modules 42 generally perform the functions and/or methods of the embodiments described in the present application.
The electronic device 12 may also communicate with one or more external devices 14 (e.g., a keyboard, a pointing device, a display 24, etc.), with one or more devices that enable a user to interact with the electronic device 12, and/or with any device (e.g., a network card, a modem, etc.) that enables the electronic device 12 to communicate with one or more other computing devices. Such communication may occur via input/output (I/O) interfaces 22. Moreover, the electronic device 12 may communicate with one or more networks (e.g., a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) via a network adapter 20. As shown in FIG. 7, the network adapter 20 communicates with the other modules of the electronic device 12 over the bus 18. It should be understood that, although not shown in the figure, other hardware and/or software modules may be used in conjunction with the electronic device 12, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems.
The processing unit 16 executes various functional applications and data processing by running programs stored in the system memory 28, for example implementing the scanning positioning method provided by the embodiments of the present application.
Embodiment 5 of the present application further provides a storage medium containing computer-executable instructions which, when executed by a computer processor, perform a scanning positioning method, the method including:
acquiring a positioning image of a target object;
inputting the positioning image of the target object into a pre-trained scanning positioning model, and determining the scanning positioning frame in the positioning image based on the output result of the scanning positioning model, wherein the scanning positioning model is trained based on training positioning images and gold-standard positioning frame information, in the training positioning images, used for scanning.
The computer storage medium of the embodiments of the present application may take the form of any combination of one or more computer-readable media. A computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium. A computer-readable storage medium may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium include: an electrical connection having one or more wires, a portable computer diskette, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In this document, a computer-readable storage medium may be any tangible medium containing or storing a program that can be used by or in connection with an instruction execution system, apparatus, or device.
A computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take various forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the foregoing. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer-readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out the operations of the present application may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
Of course, the computer-executable instructions of the storage medium provided by the embodiments of the present application are not limited to the method operations described above, and may also perform related operations of the scanning positioning method provided by any embodiment of the present application.
Note that the above are merely preferred embodiments of the present application and the technical principles employed. Those skilled in the art will understand that the present application is not limited to the specific embodiments described herein, and that various obvious changes, readjustments, and substitutions can be made without departing from the protection scope of the present application. Therefore, although the present application has been described in some detail through the above embodiments, it is not limited to the above embodiments and may, without departing from the concept of the present application, include more other equivalent embodiments, its scope being determined by the scope of the appended claims.

Claims (10)

  1. A scanning positioning method, comprising:
    acquiring a positioning image of a target object;
    inputting the positioning image of the target object into a pre-trained scanning positioning model, and determining a scanning positioning frame in the positioning image based on an output result of the scanning positioning model, wherein the scanning positioning model is trained based on training positioning images and gold-standard positioning frame information, in the training positioning images, used for scanning.
  2. The method according to claim 1, wherein the scanning positioning model is a positioning frame segmentation model;
    wherein determining the scanning positioning frame in the positioning image based on the output result of the scanning positioning model comprises:
    acquiring a segmentation result output by the positioning frame segmentation model, and determining a contour of the segmented region in the segmentation result as the scanning positioning frame.
  3. The method according to claim 2, wherein the positioning frame segmentation model comprises an encoding module and a decoding module, wherein the encoding module comprises sequentially connected downsampling network layers, the decoding module comprises sequentially connected upsampling network layers, the last downsampling network layer of the encoding module is connected to the first upsampling network layer of the decoding module, and downsampling and upsampling network layers of the same scale in the encoding module and the decoding module are laterally connected.
  4. The method according to claim 2, wherein the training method of the positioning frame segmentation model comprises:
    acquiring the training positioning images and the gold-standard positioning frame information, in the training positioning images, used for scanning, and generating gold-standard mask regions of the training positioning images based on the gold-standard positioning frame information;
    iteratively training the positioning frame segmentation model to be trained based on the training positioning images and the corresponding gold-standard mask regions, to obtain a trained positioning frame segmentation model.
  5. The method according to claim 1, further comprising, after acquiring the positioning image of the target object:
    identifying at least one body part contained in the positioning image;
    invoking, based on the at least one body part, the scanning positioning model corresponding to each part, wherein the scanning positioning model corresponding to a part is used to determine the scanning positioning frame of that part.
  6. The method according to claim 1, wherein the scanning positioning model comprises a region generation subnetwork and a target classification subnetwork, wherein the region generation subnetwork is used to generate the scanning positioning frames corresponding to the parts in the positioning image, and the target classification subnetwork is used to determine the scan part information corresponding to each scanning positioning frame.
  7. The method according to claim 5 or 6, further comprising:
    acquiring a scan protocol of the target object, and determining a target scanning positioning frame from the scanning positioning frames according to the scan part in the scan protocol.
  8. A scanning positioning apparatus, comprising:
    a positioning image acquisition module, configured to acquire a positioning image of a target object;
    a positioning frame determination module, configured to input the positioning image of the target object into a pre-trained scanning positioning model and determine a scanning positioning frame in the positioning image based on an output result of the scanning positioning model, wherein the scanning positioning model is trained based on training positioning images and gold-standard positioning frame information, in the training positioning images, used for scanning.
  9. An electronic device, comprising:
    one or more processors;
    a memory for storing one or more programs;
    wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the scanning positioning method according to any one of claims 1-7.
  10. A storage medium containing computer-executable instructions which, when executed by a computer processor, perform the scanning positioning method according to any one of claims 1-7.