WO2020233272A1 - Medical image processing method and apparatus, electronic medical device, and storage medium

Medical image processing method and apparatus, electronic medical device, and storage medium

Info

Publication number
WO2020233272A1
WO2020233272A1 · PCT/CN2020/084147 · CN2020084147W
Authority
WO
WIPO (PCT)
Prior art keywords
image
area
biological tissue
lesion
matching
Application number
PCT/CN2020/084147
Other languages
English (en)
French (fr)
Inventor
田宽
江铖
Original Assignee
腾讯科技(深圳)有限公司
Application filed by 腾讯科技(深圳)有限公司
Priority to EP20809816.0A (EP3975196A4)
Publication of WO2020233272A1
Priority to US17/367,280 (US11984225B2)

Classifications

    • G16H 50/20 - ICT for computer-aided diagnosis, e.g. based on medical expert systems
    • G16H 30/20 - ICT for handling medical images, e.g. DICOM, HL7 or PACS
    • G16H 30/40 - ICT for processing medical images, e.g. editing
    • G16H 50/70 - ICT for mining of medical data, e.g. analysing previous cases of other patients
    • G06F 18/213 - Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F 18/22 - Matching criteria, e.g. proximity measures
    • G06N 20/00 - Machine learning
    • G06N 3/045 - Combinations of networks
    • G06N 3/047 - Probabilistic or stochastic networks
    • G06N 3/084 - Backpropagation, e.g. using gradient descent
    • G06T 7/0012 - Biomedical image inspection
    • G06T 7/0014 - Biomedical image inspection using an image reference approach
    • G06T 7/11 - Region-based segmentation
    • G06T 7/62 - Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06V 10/17 - Image acquisition using hand-held instruments
    • G06V 10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V 10/26 - Segmentation of patterns in the image field
    • G06V 10/267 - Segmentation by performing operations on regions, e.g. growing, shrinking or watersheds
    • G06V 10/82 - Image or video recognition or understanding using neural networks
    • G06V 30/19147 - Obtaining sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06V 30/19153 - Recognition using rules for classification or partitioning the feature space
    • G06V 30/19173 - Classification techniques
    • G06V 30/274 - Syntactic or semantic context, e.g. balancing
    • G06T 2207/10116 - X-ray image
    • G06T 2207/20021 - Dividing image into blocks, subimages or windows
    • G06T 2207/20081 - Training; Learning
    • G06T 2207/20084 - Artificial neural networks [ANN]
    • G06T 2207/30068 - Mammography; Breast
    • G06T 2207/30096 - Tumor; Lesion
    • G06V 2201/03 - Recognition of patterns in medical or anatomical images
    • G06V 2201/032 - Recognition of protuberances, polyps, nodules, etc.

Definitions

  • This application relates to the field of computer technology, and in particular to a medical image processing method, device, electronic medical equipment, and storage medium.
  • Breast cancer is one of the most common malignant diseases. In recent years, the incidence of breast cancer in women has increased. Statistics show that there are 1.2 million new breast cancer patients worldwide each year, ranking first in incidence among female malignant tumors.
  • Breast imaging is an important method for breast cancer diagnosis. At present, the commonly used breast examination methods are mammography, B-mode ultrasound, and MRI. Among them, mammography is the most commonly used, so medical diagnosis based on mammography images has important medical significance.
  • The embodiments of the present application provide a medical image processing method, device, electronic medical equipment, and storage medium, which can improve the efficiency of identifying the lesion type and the quadrant position information of a lesion.
  • One aspect of the embodiments of the present application provides a medical image processing method, which is executed by an electronic device, and the method includes:
  • a medical image processing device including:
  • An image acquisition module configured to acquire a biological tissue image containing biological tissue, identify the first area of a lesion object of the biological tissue in the biological tissue image, and identify a lesion attribute matching the lesion object;
  • a dividing module configured to divide the image area of the biological tissue in the biological tissue image into multiple quadrant position areas;
  • the image acquisition module is also used to acquire target quadrant position information of the quadrant position area where the first area is located;
  • the generating module is used to generate medical service data according to the target quadrant location information and the lesion attribute.
  • Another aspect of the embodiments of the present application provides an electronic medical device, including a biological tissue image collector and a biological tissue image analyzer;
  • the biological tissue image collector acquires biological tissue images including biological tissues
  • the biological tissue image analyzer recognizes the first area of the lesion object in the biological tissue in the biological tissue image, and recognizes the lesion attribute matching the lesion object;
  • the biological tissue image analyzer divides the image area of the biological tissue in the biological tissue image into multiple quadrant position areas
  • the biological tissue image analyzer obtains target quadrant position information of the quadrant position area where the first area is located, and generates medical service data according to the target quadrant position information and the lesion attribute.
  • an electronic device including: a processor and a memory;
  • the processor is connected to a memory, where the memory is used to store a program, and the processor is used to call the program to execute the method according to any embodiment of the present application.
  • Another aspect of the embodiments of the present application provides a computer storage medium storing a computer program; the computer program includes program instructions that, when executed by a processor, perform the method described in any embodiment of the present application.
  • FIG. 1 is a system architecture diagram of a medical image processing system provided by an embodiment of the present application
  • FIGS. 2a-2b are schematic diagrams of a medical image processing scene provided by an embodiment of the present application.
  • FIG. 3 is a schematic flowchart of a medical image processing method provided by an embodiment of the present application.
  • FIG. 4 is a schematic diagram of a process for determining location areas of multiple quadrants according to an embodiment of the present application
  • FIG. 5 is a schematic diagram of a process for generating medical service data according to an embodiment of the present application.
  • FIG. 6 is a schematic diagram of a functional module of medical image processing provided by an embodiment of the present application.
  • Figure 7a is a schematic diagram of determining medical service data provided by an embodiment of the present application.
  • Figure 7b is a schematic structural diagram of an electronic medical device provided by an embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of a medical image processing device provided by an embodiment of the present application.
  • FIG. 9 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • At present, mammography images are diagnosed mainly by manual review by professional medical personnel.
  • Professional medical personnel rely on their own experience to determine the lesion type and the lesion position in the mammography image and generate a diagnosis result; a treatment plan is then determined based on the diagnosis result.
  • FIG. 1 is a system architecture diagram of a medical image processing system provided by an embodiment of the present application.
  • the server 10f establishes a connection with the user terminal cluster through the switch 10e and the communication bus 10d.
  • the user terminal cluster may include: user terminals 10a, user terminals 10b, ..., user terminals 10c.
  • When the user terminal 10a receives a medical image containing a lesion, the medical image is sent to the server 10f through the switch 10e and the communication bus 10d.
  • the server 10f can identify the category to which the lesion belongs and the quadrant position information of the lesion in the medical image, and generate medical service data according to the recognized result.
  • the server 10f may send the medical service data to the user terminal 10a, and the user terminal 10a may subsequently display the medical service data on the screen.
  • the user terminal 10a can also identify the category of the lesion and the quadrant position information of the lesion in the medical image to generate medical service data. Similarly, the user terminal 10a can display the medical service data on the screen.
  • The user terminal 10a, user terminal 10b, user terminal 10c, etc. shown in FIG. 1 may include a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a mobile internet device (MID), a wearable device (such as a smart watch or smart bracelet), etc.
  • FIGS. 2a-2b are schematic diagrams of a medical image processing scene provided by an embodiment of the present application.
  • the user terminal 10a acquires the mammography image of the same side of the same patient, and displays the acquired mammography image on the screen.
  • The mammography images include a mammography CC position image 20b and a mammography MLO position image 20c.
  • The CC (cranio-caudal) position image images the breast from the cranio-caudal direction,
  • and the MLO (mediolateral oblique) position image images the breast from the oblique lateral direction.
  • The user terminal 10a obtains a mass detection model and a calcification detection model.
  • The mass detection model can identify the location area of a mass in an image; the calcification detection model can identify the location area of a calcification focus in an image. The mass category and the calcification category both belong to lesion categories.
  • The terminal device 10a can input the mammography CC position image 20b into the mass detection model, and the mass detection model can output the lesion area 20d where the lesion object in the mammography CC position image 20b is located.
  • It can also determine that the lesion type to which the lesion object in the mammography CC position image 20b belongs is the mass category.
  • Similarly, the terminal device 10a can input the mammography MLO position image 20c into the mass detection model, which can output the lesion area 20h where the lesion object in the mammography MLO position image 20c is located, and can also determine that the lesion type to which that lesion object belongs is the mass category.
  • The terminal device 10a also inputs the mammography CC position image 20b into the calcification detection model, which does not detect calcification in the mammography CC position image 20b; the terminal device 10a likewise inputs the mammography MLO position image 20c into the calcification detection model, which also does not detect calcification lesions in the mammography MLO position image 20c.
  • In other words, the mass in the mammography CC position image 20b is located in lesion area 20d, and the mass in the mammography MLO position image 20c is located in lesion area 20h.
  • the user terminal 10a obtains the image semantic segmentation model.
  • the image semantic segmentation model can identify the tissue category to which each pixel in the image belongs.
  • the tissue categories include: nipple category, muscle category, and background category.
  • the user terminal 10a inputs the mammography CC image 20b into the image semantic segmentation model, and the model can determine the tissue category to which each pixel of the mammography CC image 20b belongs.
  • the user terminal 10a combines the pixels belonging to the nipple category into an area 20e, which is the area where the nipple is located.
  • the user terminal 10a determines the breast edge line 20g in the breast mammography CC position image 20b, and uses the line 20f perpendicular to the breast edge line 20g and passing through the area 20e as the quadrant division line 20f.
  • the inner quadrant is located below the quadrant division line 20f, and the outer quadrant is located above the quadrant division line 20f.
  • The lesion area 20d is located in the inner quadrant (most of the lesion area 20d is located in the inner quadrant, so the lesion area 20d is considered to be in the inner quadrant); the user terminal 10a can therefore determine that the mass in the mammography CC position image 20b is located in the inner quadrant.
  • the user terminal 10a inputs the mammography MLO image 20c into the image semantic segmentation model, and the model can determine the tissue category to which each pixel of the mammography MLO image 20c belongs.
  • the user terminal 10a combines the pixels belonging to the nipple category into an area 20j, which is the area where the nipple is located.
  • The user terminal 10a combines the pixels belonging to the muscle category into a muscle area, determines the area boundary line 20m between the muscle area and the non-muscle area, and uses the line 20k perpendicular to the area boundary line 20m and passing through the area 20j as the quadrant division line 20k.
  • the lower quadrant is located below the quadrant division line 20k, and the upper quadrant is located above the quadrant division line 20k.
  • the user terminal 10a can determine that the mass lesion in the mammography MLO position image 20c is located in the lower quadrant.
  • the user terminal 10a combines the inner quadrant determined by the mammography CC position image 20b and the lower quadrant determined by the mammography MLO position image 20c into quadrant position information 20n "inner lower quadrant".
  • The user terminal 10a can combine the quadrant position information 20n "inner lower quadrant" and the lesion category "mass" corresponding to the mammography CC position image 20b and the mammography MLO position image 20c into a diagnosis opinion: "Mass found in the inner lower quadrant".
  • The user terminal 10a can play a preset animation on the screen while identifying the mammography CC position image 20b and the mammography MLO position image 20c; once the diagnosis opinion is generated, the animation is stopped and the diagnosis opinion is displayed on the screen.
  • The identified lesion area of the mammography CC position image 20b and the lesion area of the mammography MLO position image 20c may be marked in the corresponding mammography image with a rectangular frame.
  • The specific processes of identifying the lesion category to which the lesion object belongs, determining the lesion area of the lesion object in the image, determining the quadrant division line, and determining the quadrant position information are described in the following embodiments corresponding to FIGS. 3-7b.
  • FIG. 3 is a schematic flowchart of a medical image processing method provided by an embodiment of the present application. As shown in FIG. 3, the medical image processing method may include the following steps.
  • Step S101 Obtain a biological tissue image including biological tissue, identify a first area of a lesion object in the biological tissue in the biological tissue image, and identify a lesion attribute matching the lesion object.
  • The terminal device (such as the user terminal 10a in the embodiment corresponding to FIG. 2a above) acquires a biological tissue image containing biological tissue (such as the mammography CC position image 20b and the mammography MLO position image 20c in the embodiment corresponding to FIG. 2a). The biological tissue includes a lesion object, where the biological tissue may be a breast, liver, kidney, etc.
  • The terminal device identifies the area of the lesion object in the biological tissue image as the first area (such as the lesion area 20d and the lesion area 20h in the embodiment corresponding to FIG. 2a), and identifies the lesion attribute corresponding to the lesion object.
  • When the biological tissue is a breast, the lesion attributes can include: mass, calcification, structural distortion, etc.; when the biological tissue is a liver, the lesion attributes can include: hemangioma, liver abscess, liver cyst, etc.; when the biological tissue is a kidney, the lesion attributes can include: kidney cyst, kidney tumor, etc.
  • Identifying the location area of the lesion object and the lesion attribute in the biological tissue image may be determined based on multiple lesion detection models (such as the mass detection model and the calcification detection model in the corresponding embodiment of FIG. 2a).
  • a lesion detection model corresponds to a lesion attribute.
  • Each lesion detection model can determine whether the attribute of the lesion object in the biological tissue image is the attribute corresponding to the model. If the attribute of the lesion object is the attribute corresponding to the model, it can also be determined The location area of the focus object in the biological tissue image.
  • the lesion attribute corresponding to the lesion detection model A is a mass
  • The lesion attribute corresponding to lesion detection model B is calcification. The biological tissue image is input into lesion detection model A: if lesion detection model A outputs a location area, the lesion object is located in that location area and the lesion attribute of the lesion object in that area is a mass; if lesion detection model A does not output a location area, the mass is not the lesion attribute of the lesion object.
  • the same method can be used for the lesion detection model B to determine whether the lesion attribute of the lesion object is calcification. If the lesion attribute is calcification, the location area of the lesion object in the biological tissue image can also be determined.
  • In an embodiment, the terminal device obtains multiple sliding windows of fixed sizes, for example 16×16, 16×32, 32×64, 64×16, etc.
  • Each sliding window slides over the biological tissue image to extract multiple image blocks, the size of each image block being equal to the size of the sliding window. There is a certain degree of overlap between image blocks, which ensures that all areas of the biological tissue image participate in subsequent recognition (see the sketch below).
  • Based on the classifier in the lesion detection model, the probability of each image block belonging to the lesion attribute corresponding to the lesion detection model is identified. If the identified lesion probability of an image block is greater than the lesion probability threshold, the image block contains the lesion object: the lesion attribute of the lesion object is determined to be the lesion attribute corresponding to the lesion detection model, and the location area of the image block in the biological tissue image is used as a candidate area of the lesion object. Conversely, if no image block has a lesion probability greater than the lesion probability threshold, the lesion attribute of the lesion object in the biological tissue image is not the lesion attribute corresponding to the lesion detection model, that is, the lesion detection model does not detect a location area.
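  • As an illustration of the sliding-window step above, the following is a minimal sketch. The `classify_patch` callable stands in for the binary classifier inside a lesion detection model; its name, signature, and the threshold value are illustrative assumptions, not the patent's API.

```python
def sliding_window_candidates(image, window_sizes, stride, classify_patch,
                              lesion_prob_threshold=0.5):
    """Slide windows of several fixed sizes over the image and keep every
    patch whose lesion probability exceeds the threshold as a candidate area."""
    candidates = []  # entries: (x, y, w, h, probability)
    H, W = image.shape[:2]
    for (wh, ww) in window_sizes:                 # e.g. (16, 16), (16, 32), ...
        for y in range(0, H - wh + 1, stride):    # stride < wh gives overlapping blocks
            for x in range(0, W - ww + 1, stride):
                patch = image[y:y + wh, x:x + ww]
                p = classify_patch(patch)         # probability the patch shows the lesion
                if p > lesion_prob_threshold:
                    candidates.append((x, y, ww, wh, p))
    return candidates
```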
  • The classifier in the lesion detection model is obtained by training on image blocks containing a sample lesion object and image blocks not containing a sample lesion object, where the lesion attribute of the sample lesion object equals the lesion attribute corresponding to the lesion detection model; that is, the classifier is a binary classifier.
  • When there is one candidate area, the candidate area can be directly used as the first area. When there are multiple candidate areas, although the lesion probability corresponding to each candidate area is greater than the probability threshold, there may be overlap between candidate areas (that is, multiple candidate areas correspond to the same lesion object), so NMS (Non-Maximum Suppression) can then be used to determine the first area of the lesion object in the biological tissue image from the multiple candidate areas.
  • The specific process of NMS is as follows: combine all candidate areas into a candidate area set; extract the candidate area with the largest lesion probability from the set as the polling candidate area; compute the overlapping area between the polling candidate area and each remaining candidate area; combine the candidate areas whose overlapping area is less than the area threshold into a new candidate area set; and again select the candidate area with the largest lesion probability from the new set as the next polling candidate area. When the candidate area set is empty, the polling candidate areas are taken as the first areas of the lesion object in the biological tissue image. A sketch of this loop follows below.
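  • A minimal sketch of the NMS polling loop, assuming candidate areas are (x, y, w, h, probability) tuples as in the earlier sketch and that overlap is measured as absolute overlapping area (the text does not fix the exact overlap measure).

```python
def overlap_area(a, b):
    """Overlapping area of two boxes given as (x, y, w, h, ...)."""
    ax, ay, aw, ah = a[:4]
    bx, by, bw, bh = b[:4]
    iw = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    ih = max(0, min(ay + ah, by + bh) - max(ay, by))
    return iw * ih

def nms(candidates, area_threshold):
    """Repeatedly poll the highest-probability candidate, keep it as a first
    area, and retain only the remaining candidates whose overlap with it is
    below the area threshold."""
    remaining = sorted(candidates, key=lambda c: c[4], reverse=True)
    first_areas = []
    while remaining:
        polling = remaining.pop(0)                # largest lesion probability
        first_areas.append(polling)
        remaining = [c for c in remaining
                     if overlap_area(polling, c) < area_threshold]
    return first_areas
```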
  • Each convolutional layer corresponds to one or more convolution kernels (kernel, also called filters, or receptive fields).
  • A convolution operation is a matrix operation between the convolution kernel and the sub-data located at different positions of the input data. The number of channels of the output data of each convolutional layer (which can be understood as the number of feature maps) is determined by the number of convolution kernels in that layer, and the spatial size of the output data (that is, the feature map) for a convolution without padding is: H_out = (H_in - H_kernel) / stride + 1 and W_out = (W_in - W_kernel) / stride + 1, where H_in and H_kernel respectively represent the height of the input data and the height of the convolution kernel, and W_in and W_kernel respectively represent the width of the input data and the width of the convolution kernel.
  • As convolutional layers are stacked, the data size of the convolution feature information (that is, the size of the feature map) keeps getting smaller: if the size of a biological tissue image is H × W × 1, the convolution feature information output after the first convolution layer is smaller than H × W, and the convolution feature information output after the second convolution layer is smaller still, following the formula above; a small numeric sketch follows below.
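  • For concreteness, the output-size formula can be evaluated as follows; the kernel sizes and strides used here are illustrative assumptions, not the patent's configuration.

```python
def conv_output_size(h_in, w_in, h_kernel, w_kernel, stride=1):
    """Spatial size of a feature map after one convolution layer (no padding),
    following H_out = (H_in - H_kernel) / stride + 1."""
    h_out = (h_in - h_kernel) // stride + 1
    w_out = (w_in - w_kernel) // stride + 1
    return h_out, w_out

# Stacking layers shrinks the feature map, e.g. for an 800 x 800 x 1 image:
h, w = 800, 800
for layer in range(3):
    h, w = conv_output_size(h, w, h_kernel=3, w_kernel=3, stride=2)
    print(f"after convolution layer {layer + 1}: {h} x {w}")
```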
  • The terminal device uses the convolution feature information (feature map) obtained by the last convolution as the convolution heat map (the size of the convolution heat map is m × n × 2) and upsamples the convolution heat map, that is, enlarges it to the same size as the biological tissue image, obtaining a mask of size H × W × 2. According to the mask, the probability that each pixel in the biological tissue image belongs to the lesion attribute corresponding to the lesion detection model can be determined.
  • The terminal device can take the image area composed of pixels whose probability is larger than the lesion probability threshold as the first area of the lesion object in the biological tissue image, and can determine that the lesion attribute of the lesion object located in the first area equals the lesion attribute corresponding to the lesion detection model. A sketch of this step follows below.
  • If no pixel has a probability greater than the lesion probability threshold, the lesion detection model does not detect a location area.
  • the above method can be used to determine the lesion attributes and the first region of the lesion object in the biological tissue image respectively.
  • the user terminal can use a rectangular frame to mark the first area in the biological tissue image.
  • Step S102 dividing the image area of the biological tissue in the biological tissue image into multiple quadrant position areas.
  • the biological tissue includes a first tissue object.
  • When the biological tissue is a breast, the first tissue object may be a nipple; when the biological tissue is a liver, the first tissue object may be a liver sac.
  • The area of the first tissue object in the biological tissue image is identified as the second area (such as the area 20e and the area 20j in the embodiment corresponding to FIG. 2a above). The second area can be determined based on an image semantic segmentation model, which is used to identify the object attribute of each pixel in the biological tissue image.
  • the working process of the image semantic segmentation model is similar to the second method of determining the first area and the attributes of the lesion in the lesion detection model mentioned above.
  • The terminal device determines the quadrant division line in the biological tissue image according to the second area (such as the quadrant division line 20f and the quadrant division line 20k in the embodiment corresponding to FIG. 2a above), takes the image area of the biological tissue in the biological tissue image as the tissue image area, and divides the tissue image area into multiple quadrant position areas according to the quadrant division line (such as the inner quadrant, outer quadrant, upper quadrant, and lower quadrant in the embodiment corresponding to FIG. 2a above).
  • Step S103 Obtain target quadrant location information of the quadrant location area where the first area is located, and generate medical service data based on the target quadrant location information and the lesion attribute.
  • the terminal device obtains the quadrant position information of the quadrant position area where the first area is located (referred to as target quadrant position information, such as the quadrant position information 20n in the corresponding embodiment in FIG. 2a).
  • the terminal device generates medical service data according to the location information of the target quadrant and the identified lesion attributes (such as the diagnosis opinion in the embodiment corresponding to FIG. 2a).
  • the biological tissue image after the first area is marked with a rectangular frame can be displayed on the screen of the terminal device, and the medical service data can be displayed.
  • the embodiments of the present application can provide services as a software interface.
  • The input is a single-side, multi-view mammography image, for example a CC position image and an MLO position image, and the output is medical service data.
  • FIG. 4 is a schematic diagram of a process for determining multiple quadrant location areas according to an embodiment of the present application. Determining multiple quadrant location areas includes steps S201-S203, which are a specific description of step S102 in the embodiment corresponding to FIG. 3 above.
  • Step S201 Identify a second area of the first tissue object in the biological tissue image.
  • the biological tissue includes a first tissue object
  • The biological tissue image includes a first biological tissue image (such as the mammography CC position image 20b in the embodiment corresponding to FIG. 2a above) and a second biological tissue image (such as the mammography MLO position image 20c in the embodiment corresponding to FIG. 2a above), where the first biological tissue image and the second biological tissue image are images of the biological tissue taken in different directions. For example, when the biological tissue is a breast, the first biological tissue image may be the imaging of the breast in the cranio-caudal direction, and the second biological tissue image may be the imaging of the breast in the oblique lateral direction.
  • both the first biological tissue image and the second biological tissue image include biological tissue, and both include the first tissue object.
  • The terminal device obtains the image semantic segmentation model and inputs the first biological tissue image into it; based on the image semantic segmentation model, the image area of the first tissue object in the first biological tissue image can be determined, called the first tissue identification area (such as the area 20e in the embodiment corresponding to FIG. 2a). The second biological tissue image is likewise input into the image semantic segmentation model, and the image area of the first tissue object in the second biological tissue image can be determined, called the second tissue identification area (such as the area 20j in the embodiment corresponding to FIG. 2a above).
  • The terminal device may determine the first tissue identification area and the second tissue identification area as the second area; that is, the second area includes the first tissue identification area corresponding to the first biological tissue image and the second tissue identification area corresponding to the second biological tissue image.
  • The image semantic segmentation model may be an FCN (Fully Convolutional Networks) model, with the number of classes set to 3, representing the background, the first tissue object, and the second tissue object respectively.
  • The weights of the image semantic segmentation model can be initialized with the PASCAL VOC data set, then trained on the public DDSM data set (Digital Database for Screening Mammography), and finally fine-tuned by transfer learning on domestic hospital data (the input image size can be 800×800 pixels, the batch size can be 8, the learning rate can be 0.00001, and the maximum number of iterations can be 10000), finally yielding a fully convolutional segmentation network that can extract the areas where the first tissue object and the second tissue object are located in the biological tissue image.
  • the image semantic segmentation model includes a forward convolution layer and a transposed convolution layer.
  • The forward convolution layer is used for the forward convolution operation, which reduces the size of the feature map; the transposed convolution layer is used for the transposed (reverse) convolution operation, which increases the size of the feature map. A structural sketch follows below.
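  • A minimal sketch, assuming PyTorch, of a fully convolutional network of the shape described above: forward convolutions that shrink the feature map, a transposed convolution that enlarges it back, and 3 output channels for the background, first tissue object, and second tissue object. Channel counts and kernel sizes are illustrative assumptions, not the patent's configuration.

```python
import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    def __init__(self, num_classes=3):
        super().__init__()
        self.down = nn.Sequential(                      # forward convolution layers
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        self.up = nn.ConvTranspose2d(32, num_classes,   # transposed convolution layer
                                     kernel_size=4, stride=4)

    def forward(self, x):                               # x: N x 1 x H x W
        return self.up(self.down(x))                    # N x 3 x H x W mask

model = TinyFCN()
mask = model(torch.randn(1, 1, 800, 800))               # 800 x 800 input, as in the text
print(mask.shape)                                       # torch.Size([1, 3, 800, 800])
```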
  • The above H × W × 3 mask includes: a background mask belonging to the background category, a first tissue mask belonging to the first tissue attribute, and a second tissue mask belonging to the second tissue attribute; the sizes of the background mask, the first tissue mask, and the second tissue mask are all H × W.
  • the first tissue attribute here is used to identify a first tissue object
  • the second tissue attribute is used to identify a second tissue object.
  • the biological tissue is a breast
  • the first tissue object may be a nipple and the second tissue object may be a muscle.
  • The value of each unit in a mask represents the probability that the corresponding pixel in the first biological tissue image belongs to the attribute corresponding to that mask.
  • For each pixel, the attribute corresponding to the mask with the largest probability value is taken as the object attribute of that pixel; the object attributes include the background attribute, the first tissue attribute, and the second tissue attribute.
  • the terminal device may use an image area composed of pixels belonging to the first tissue attribute as the first tissue identification area of the first tissue object in the first biological tissue image.
  • The same method can be used for the second biological tissue image: the three masks are first determined, the object attribute of each pixel in the second biological tissue image is then determined according to the three masks, and the image area composed of pixels belonging to the first tissue attribute is used as the second tissue identification area of the first tissue object in the second biological tissue image. A sketch of the per-pixel decision follows below.
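  • A minimal sketch of the per-pixel decision, assuming the mask is an H × W × 3 probability array; the channel order (background, first tissue, second tissue) is an assumption for illustration.

```python
import numpy as np

BACKGROUND, FIRST_TISSUE, SECOND_TISSUE = 0, 1, 2   # assumed channel order

def tissue_identification_area(mask):
    """mask: H x W x 3 array of per-pixel probabilities for the background,
    first tissue, and second tissue attributes. Each pixel takes the attribute
    with the largest probability; the pixels labelled first-tissue form the
    tissue identification area."""
    object_attributes = np.argmax(mask, axis=2)      # H x W per-pixel attribute map
    return object_attributes == FIRST_TISSUE         # boolean area of first-tissue pixels
```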
  • Step S202 Determine a quadrant division line in the biological tissue image according to the second region.
  • the biological tissue image includes the first biological tissue image and the second biological tissue image
  • In the first biological tissue image, the first division line (such as the quadrant division line 20f in the embodiment corresponding to FIG. 2a above) is determined according to the first tissue identification area;
  • in the second biological tissue image, the second division line (such as the quadrant division line 20k in the embodiment corresponding to FIG. 2a above) is determined according to the second tissue identification area.
  • Both the first division line and the second division line belong to the quadrant division lines.
  • The terminal device acquires the edge dividing line of the biological tissue in the first biological tissue image (for example, the breast edge line 20g in the embodiment corresponding to FIG. 2a); when the biological tissue is a breast, the edge dividing line may be the dividing line of the rear edge of the breast.
  • The terminal device uses the line perpendicular to the edge dividing line and passing through the first tissue identification area as the first division line.
  • For the second biological tissue image, the terminal device determines the object attribute of each pixel according to the aforementioned image semantic segmentation model, and uses the image area composed of pixels belonging to the second tissue attribute as the object area of the second tissue object in the second biological tissue image.
  • When the biological tissue is a breast, the object area is the area of the muscle in the second biological tissue image.
  • the area where the first tissue object (such as the nipple) is located is the second tissue identification area
  • the area where the second tissue object (such as the muscle) is located is the object area
  • the remaining areas are the background area.
  • The terminal device determines the boundary line between the object area and the non-object area as the object boundary line (such as the area boundary line 20m in the embodiment corresponding to FIG. 2a).
  • The terminal device uses the line perpendicular to the object boundary line and passing through the second tissue identification area as the second division line; a geometric sketch follows below.
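  • A minimal geometric sketch of the division-line construction, assuming the edge or boundary line is given by two points and the tissue identification area by a boolean mask; the patent does not specify how the lines are parameterized, so this representation is an assumption.

```python
import numpy as np

def quadrant_division_line(boundary_p0, boundary_p1, tissue_area_mask):
    """Return a point on the quadrant division line and its direction: the
    line perpendicular to the edge/boundary line (given by two points) that
    passes through the centroid of the tissue identification area."""
    ys, xs = np.nonzero(tissue_area_mask)             # pixels of the second area
    centroid = np.array([xs.mean(), ys.mean()])       # point the line passes through
    d = np.asarray(boundary_p1, dtype=float) - np.asarray(boundary_p0, dtype=float)
    d /= np.linalg.norm(d)                            # boundary line direction
    perpendicular = np.array([-d[1], d[0]])           # division line direction
    return centroid, perpendicular

def side_of_line(point, line_point, line_dir):
    """Sign of the cross product tells on which side of the division line a
    lesion centre lies, i.e. which quadrant position area contains it."""
    v = np.asarray(point, dtype=float) - line_point
    return np.sign(line_dir[0] * v[1] - line_dir[1] * v[0])
```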
  • Step S203 Use the image area of the biological tissue in the biological tissue image as a tissue image area, and divide the tissue image area into the multiple quadrant position areas according to the quadrant division line.
  • the first biological tissue image is the imaging of the biological tissue in the cranio-caudal direction
  • the second biological tissue image is the imaging of the biological tissue in the oblique lateral direction.
  • the first biological tissue image may be a mammogram CC position image
  • the second biological tissue image may be a mammogram MLO position image.
  • the terminal device uses the image area of the biological tissue in the first biological tissue image as the first tissue image area.
  • In the first tissue image area, the area above the first division line is used as the outer quadrant position area, and the area below the first division line is used as the inner quadrant position area.
  • the inner quadrant position area and the outer quadrant position area can be determined.
  • the terminal device regards the image area of the biological tissue in the second biological tissue image as the second tissue image area.
  • In the second tissue image area, the area above the second division line is used as the upper quadrant position area, and the area below the second division line is used as the lower quadrant position area.
  • the upper quadrant position area and the lower quadrant position area can be determined.
  • the terminal device can determine the inner quadrant location area, the outer quadrant location area, the upper quadrant location area, and the lower quadrant location area as the quadrant location area.
  • FIG. 5 is a schematic diagram of a process for generating medical service data according to an embodiment of the present application.
  • Generating medical service data includes steps S301-S305, which are a specific description of step S103 in the embodiment corresponding to FIG. 3 above.
  • Step S301 Obtain target quadrant position information of the quadrant position area where the first area is located.
  • the biological tissue image includes the first biological tissue image and the second biological tissue image
  • The first area includes the first lesion area of the lesion object in the first biological tissue image (such as the lesion area 20d in the embodiment corresponding to FIG. 2a above) and the second lesion area of the lesion object in the second biological tissue image (such as the lesion area 20h in the embodiment corresponding to FIG. 2a above).
  • If the first lesion area is located in the inner quadrant position area and the second lesion area is located in the upper quadrant position area, the target quadrant position information is the inner upper quadrant;
  • if the first lesion area is located in the inner quadrant position area and the second lesion area is located in the lower quadrant position area, the target quadrant position information is the inner lower quadrant;
  • if the first lesion area is located in the outer quadrant position area and the second lesion area is located in the upper quadrant position area, the target quadrant position information is the outer upper quadrant;
  • if the first lesion area is located in the outer quadrant position area and the second lesion area is located in the lower quadrant position area, the target quadrant position information is the outer lower quadrant.
  • Step S302 Extract a first sub-image corresponding to the first lesion area from the first biological tissue image.
  • the terminal device extracts the sub-image where the first lesion area is located from the first biological tissue image as the first sub-image.
  • Step S303 Extract a second sub-image corresponding to the second lesion area from the second biological tissue image.
  • the terminal device extracts the sub-image where the second lesion area is located from the second biological tissue image as the second sub-image.
  • Step S304 Obtain a target matching model, and identify a model matching probability between the first sub-image and the second sub-image based on the target matching model.
  • the terminal device acquires the target matching model.
  • The target matching model can identify whether the lesion objects contained in two images are the same lesion object. This is needed because there may be multiple lesion objects in the biological tissue image, so there may be multiple lesion objects in the first biological tissue image and multiple lesion objects in the second biological tissue image.
  • The target matching model can be used to identify the matching probability between the lesion objects in the two images and thereby determine which lesion objects correspond to each other.
  • the terminal device inputs the first sub-image into the target matching model, and the convolutional layer and the pooling layer in the target matching model can extract the first pooling feature information of the first sub-image.
  • the terminal device inputs the second sub-image into the target matching model, and the convolutional layer and the pooling layer in the target matching model can also extract the second pooling feature information of the second sub-image.
  • The terminal device splices the first pooling feature information and the second pooling feature information in the column direction into the target pooling feature information.
  • The classifier in the target matching model identifies the matching probability between the target pooling feature information and the matching category, and this matching probability is used as the model matching probability.
  • the model matching probability can identify the probability that the lesion object in the first sub-image and the lesion object in the second sub-image are the same lesion object, and the matching probability is a real number between 0 and 1.
  • The classifier in the target matching model is also a binary classifier: it can output not only the probability that the lesion object in the first sub-image and the lesion object in the second sub-image are the same lesion object, but also the probability that they are not the same lesion object. A sketch of this matching step follows below.
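  • A minimal sketch, assuming PyTorch, of the matching step: shared convolution and pooling layers reduce each sub-image to pooled feature information, the two feature vectors are spliced in the column direction, and a binary classifier outputs the model matching probability. Layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TargetMatchingModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(                  # shared convolution + pooling
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
        )
        self.classifier = nn.Sequential(                # binary classifier
            nn.Linear(2 * 16 * 4 * 4, 64), nn.ReLU(),
            nn.Linear(64, 2), nn.Softmax(dim=1),        # column 0: not same, column 1: same
        )

    def forward(self, first_sub_image, second_sub_image):
        f1 = self.features(first_sub_image).flatten(1)  # first pooling feature information
        f2 = self.features(second_sub_image).flatten(1) # second pooling feature information
        target = torch.cat([f1, f2], dim=1)             # spliced target pooling features
        probs = self.classifier(target)
        return probs[:, 1]                              # model matching probability
```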
  • Step S305 Acquire the conditional matching probability between the first lesion area and the second lesion area, and, when the model matching probability and the conditional matching probability meet the lesion matching condition, combine the target quadrant position information and the lesion attribute into the medical service data.
  • The terminal device determines the size of the first lesion area in the first biological tissue image, called the first area size, and the size of the second lesion area in the second biological tissue image, called the second area size.
  • When the first area size is larger than the second area size, the terminal device divides the second area size by the first area size and uses the resulting value as the size matching probability; when the first area size is smaller than the second area size, the terminal device divides the first area size by the second area size and uses the resulting value as the size matching probability.
  • the size matching probability is a real number between 0 and 1.
  • In the first biological tissue image, the terminal device determines the area distance between the first lesion area and the image area corresponding to the first tissue object (that is, the aforementioned first tissue identification area), called the first area distance; in the second biological tissue image, the terminal device determines the area distance between the second lesion area and the image area corresponding to the first tissue object (that is, the aforementioned second tissue identification area), called the second area distance.
  • When the first area distance is greater than the second area distance, the terminal device divides the second area distance by the first area distance and uses the resulting value as the distance matching probability; when the first area distance is less than the second area distance, the terminal device divides the first area distance by the second area distance and uses the resulting value as the distance matching probability.
  • the distance matching probability is also a real number between 0 and 1.
  • the terminal device uses the aforementioned size matching probability and distance matching probability as the conditional matching probability.
  • When the model matching probability and both the size matching probability and the distance matching probability in the conditional matching probability are greater than the preset probability threshold, it is determined that the model matching probability and the conditional matching probability meet the lesion matching condition.
  • In this case, the terminal device can directly combine the target quadrant location information and the lesion attribute determined above into the medical service data; a sketch of the conditional probabilities and the matching condition follows below.
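  • A minimal sketch of the conditional matching probability and the lesion matching condition, directly following the ratio definitions above; the probability threshold value is an assumption.

```python
def size_matching_probability(first_area_size, second_area_size):
    # Smaller area size divided by larger area size: a real number in (0, 1].
    return (min(first_area_size, second_area_size)
            / max(first_area_size, second_area_size))

def distance_matching_probability(first_area_distance, second_area_distance):
    # Smaller area distance divided by larger area distance.
    return (min(first_area_distance, second_area_distance)
            / max(first_area_distance, second_area_distance))

def meets_lesion_matching_condition(model_p, size_p, distance_p, threshold=0.5):
    # The lesion matching condition: all three probabilities exceed the threshold.
    return model_p > threshold and size_p > threshold and distance_p > threshold
```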
  • When there are multiple lesion objects, each target matching probability pair includes a unit model matching probability and a unit condition matching probability, and the first sub-image and the second sub-image corresponding to each target matching pair correspond to the same lesion object. In this way, the pairs of lesion objects in the first biological tissue image and the second biological tissue image that are the same lesion object can be determined.
  • The lesion object of a target matching probability pair is taken as a target lesion object, and the terminal device combines the target quadrant position information of the target lesion object and the lesion attribute of the target lesion object into medical service data.
  • The lesion attribute determined for the target lesion object in the first biological tissue image is the same as the lesion attribute determined for it in the second biological tissue image.
  • The following example assumes each biological tissue image includes two lesion objects: the first biological tissue image includes lesion object 1 and lesion object 2, and the second biological tissue image includes lesion object 3 and lesion object 4.
  • Based on lesion detection model A, it is determined that the lesion attributes of lesion object 1, lesion object 2, lesion object 3, and lesion object 4 are all masses.
  • the terminal device may acquire the first sub-image 1 corresponding to the lesion object 1, the first sub-image 2 corresponding to the lesion object 2, the second sub-image 1 corresponding to the lesion object 3, and the second sub-image 2 corresponding to the lesion object 4.
  • Based on the target matching model, the terminal device determines that the unit model matching probability between the first sub-image 1 and the second sub-image 1 is 0.9; between the first sub-image 1 and the second sub-image 2 is 0.2; between the first sub-image 2 and the second sub-image 1 is 0.1; and between the first sub-image 2 and the second sub-image 2 is 0.8.
  • According to the area sizes of the lesion objects and the area distances between the lesion objects and the first tissue object, the terminal device determines that the unit condition matching probability between the first sub-image 1 and the second sub-image 1 is 0.8; between the first sub-image 1 and the second sub-image 2 is 0.1; between the first sub-image 2 and the second sub-image 1 is 0.2; and between the first sub-image 2 and the second sub-image 2 is 0.9.
  • From the above 4 unit model matching probabilities and 4 unit condition matching probabilities, the terminal device can determine that the unit model matching probability and the unit condition matching probability between the first sub-image 1 and the second sub-image 1 can be combined into a target matching pair, and that the unit model matching probability and the unit condition matching probability between the first sub-image 2 and the second sub-image 2 can be combined into a target matching pair.
  • Therefore, lesion object 1 corresponding to first sub-image 1 and lesion object 3 corresponding to second sub-image 1 are the same lesion object; the terminal device can regard lesion object 1 and lesion object 3 as target lesion objects, and combine the target quadrant position information determined from lesion object 1 and lesion object 3 together with the lesion attribute "mass" into medical service data. Likewise, lesion object 2 corresponding to first sub-image 2 and lesion object 4 corresponding to second sub-image 2 are the same lesion object; the terminal device can regard lesion object 2 and lesion object 4 as target lesion objects, and combine the target quadrant position information they determine together with the lesion attribute "mass" into medical service data. A pairing sketch for this example is shown below.
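  • The following sketch reproduces the pairing in this example. It is illustrative only: the 0.5 threshold is an assumed preset probability threshold, and the rows and columns index the first and second sub-images respectively.

```python
import numpy as np

# Rows: first sub-images (first biological tissue image).
# Columns: second sub-images (second biological tissue image).
model_probs = np.array([[0.9, 0.2],
                        [0.1, 0.8]])  # unit model matching probabilities
cond_probs = np.array([[0.8, 0.1],
                       [0.2, 0.9]])   # unit condition matching probabilities

threshold = 0.5  # assumed preset probability threshold

# A pair qualifies as a target matching probability pair when both of its
# probabilities satisfy the lesion matching condition.
target_pairs = [(i, j)
                for i in range(model_probs.shape[0])
                for j in range(model_probs.shape[1])
                if model_probs[i, j] > threshold and cond_probs[i, j] > threshold]

print(target_pairs)  # [(0, 0), (1, 1)]: lesion 1 ~ lesion 3, lesion 2 ~ lesion 4
```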
  • In an embodiment of the present application, the target matching model is trained as follows. The terminal device obtains positive sample images and negative sample images, where a positive sample includes a first positive sample image and a second positive sample image that correspond to the same lesion object, and a negative sample includes a first negative sample image and a second negative sample image that correspond to different lesion objects. Here, different lesion objects may have different lesion attributes, or may share the same lesion attribute while not being the same lesion object.
  • The terminal device obtains the original matching model, which is a classification model trained on non-biological-tissue images, for example on landscape images, face images, or the ImageNet dataset.
  • From the model parameters contained in the original matching model, the terminal device selects the model parameters of the first n convolutional layers as the target model parameters; for example, the model parameters corresponding to the first 5 convolutional layers of the original matching model can be selected. The terminal device then generates a sample matching model in which the model parameters corresponding to the first n convolutional layers are the target model parameters. This is the algorithmic principle of transfer learning: part of the initial weights of the sample matching model are not randomly initialized but are determined by another, already trained classification model, so that when the sample matching model is subsequently trained, only a small number of biological tissue images are needed as sample images for it to converge. A sketch of this initialization is given below.
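  • The parameter transfer can be sketched as follows. This is a minimal PyTorch sketch under stated assumptions: the two model definitions are placeholders, and the first n convolutional layers of both networks are assumed to have matching shapes.

```python
import torch.nn as nn

def init_from_pretrained(sample_model: nn.Module, original_model: nn.Module, n: int = 5):
    """Copy the weights of the first n convolutional layers of an
    already-trained classification model into the sample matching model;
    all remaining layers keep their random initialization."""
    sample_convs = [m for m in sample_model.modules() if isinstance(m, nn.Conv2d)]
    original_convs = [m for m in original_model.modules() if isinstance(m, nn.Conv2d)]
    for target, source in zip(sample_convs[:n], original_convs[:n]):
        target.weight.data.copy_(source.weight.data)  # target model parameters
        if source.bias is not None and target.bias is not None:
            target.bias.data.copy_(source.bias.data)
    return sample_model
```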
  • After obtaining the sample matching model, the terminal device identifies, based on the sample matching model, the positive sample prediction probability between the first positive sample image and the second positive sample image, and obtains the true matching probability between them as the positive sample probability, which is 100%. The terminal device determines the positive sample error between the positive sample prediction probability and the positive sample probability, and adjusts the model parameters of the sample matching model by back propagation of the positive sample error. Similarly, the terminal device identifies the negative sample prediction probability between the first negative sample image and the second negative sample image, obtains the true matching probability between them as the negative sample probability, which is 0%, determines the negative sample error between the negative sample prediction probability and the negative sample probability, and adjusts the model parameters of the sample matching model by back propagation of the negative sample error.
  • The gradient descent algorithm used in back propagation may be Adam, the batch size may be 128, the initial learning rate may be 0.001, and the maximum number of iterations may be 10,000. When the parameter change of the adjusted sample matching model is smaller than a change threshold, or the number of adjustments exceeds a count threshold, or the sample error (including the positive sample error and the negative sample error) determined by the adjusted model is smaller than an error threshold, the terminal device can use the adjusted sample matching model as the target matching model. A training sketch under these hyperparameters follows.
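  • This is a minimal training sketch, assuming a `sample_model` that maps an image pair to a matching probability in [0, 1] and a `loader` that yields (first_image, second_image, label) batches indefinitely, with label 1 for positive pairs and 0 for negative pairs; both names are placeholders, not part of this application.

```python
import torch
import torch.nn as nn

def train_matching_model(sample_model, loader, max_iters=10_000):
    # Adam with batch size 128 (set in the loader), initial learning
    # rate 0.001, and at most 10,000 iterations, as described above.
    optimizer = torch.optim.Adam(sample_model.parameters(), lr=0.001)
    criterion = nn.BCELoss()  # error between predicted and true matching probability
    for it, (first, second, label) in enumerate(loader):
        pred = sample_model(first, second)     # predicted matching probability
        loss = criterion(pred, label.float())  # positive/negative sample error
        optimizer.zero_grad()
        loss.backward()                        # back propagation
        optimizer.step()
        if it + 1 >= max_iters:
            break
    return sample_model  # the target matching model once convergence is met
```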
  • FIG. 6 is a schematic diagram of the functional modules of medical image processing provided by an embodiment of the present application. The following takes the mammography CC-view image and the mammography MLO-view image as examples; the CC image and the MLO image are imagings of the same breast in different directions. The medical image processing comprises four functional modules: the lesion quadrant calculation module, the lesion detection module, the lesion matching module, and the information fusion module.
  • After the terminal device obtains the CC image and the MLO image, the lesion detection module performs lesion detection on the two images respectively: it determines the first lesion area of the lesion object on the CC image and recognizes the lesion attribute of the lesion object from the CC image (the first lesion area and the lesion attribute may be referred to as the CC lesion), and it determines the second lesion area of the lesion object on the MLO image and recognizes the lesion attribute of the lesion object from the MLO image (the second lesion area and the lesion attribute may be referred to as the MLO lesion).
  • The lesion matching module obtains the target matching model, which is trained on multiple positive sample images and multiple negative sample images. The module determines the first sub-image from the first lesion area in the CC lesion and the second sub-image from the second lesion area in the MLO lesion, then inputs the first sub-image and the second sub-image into the target matching model, which identifies the model matching probability between them.
  • The lesion matching module then determines the conditional matching probability from the area sizes of the two lesion areas (that is, the image size of the first sub-image and the image size of the second sub-image) and from the area distances between the two lesion areas and the nipple in their respective biological tissue images; the conditional matching probability can be regarded as a hand-crafted feature. The lesion matching module judges whether the model matching probability and the conditional matching probability meet the lesion matching condition.
  • The lesion quadrant calculation module obtains the image semantic segmentation model, which is trained on multiple sample images in which every pixel is annotated; the annotation types are: background attribute, nipple attribute, and muscle attribute. In other words, the image semantic segmentation model can identify, for each pixel in an image, whether it belongs to the background attribute, the nipple attribute, or the muscle attribute, as the sketch below illustrates.
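  • The following is a minimal sketch of turning the model's three-class output into per-pixel attributes and a nipple region. The H x W x 3 probability array and its channel order (background, nipple, muscle) are assumptions for illustration.

```python
import numpy as np

BACKGROUND, NIPPLE, MUSCLE = 0, 1, 2  # assumed channel order

def pixel_attributes(mask: np.ndarray) -> np.ndarray:
    # Each pixel takes the attribute with the highest probability.
    return np.argmax(mask, axis=-1)

def nipple_region(mask: np.ndarray) -> np.ndarray:
    # Boolean map of pixels belonging to the nipple attribute,
    # i.e. the tissue identification area.
    return pixel_attributes(mask) == NIPPLE
```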
  • The lesion quadrant calculation module inputs the CC image into the image semantic segmentation model, which determines the nipple position in the CC image (the CC image contains no muscle information, so there is no muscle area); this nipple position is the first tissue identification area mentioned above. The module likewise inputs the MLO image into the image semantic segmentation model, which determines the nipple position in the MLO image (that is, the second tissue identification area mentioned above) and the muscle position (that is, the object area mentioned above).
  • For the CC image, the lesion quadrant calculation module determines the first dividing line from the nipple position and the edge boundary line of the breast, and determines from the first dividing line whether the first lesion area is located in the inner quadrant or the outer quadrant. For the MLO image, the module fits a straight-line equation to the muscle boundary from the muscle position to obtain the muscle boundary line (that is, the object boundary line mentioned above), then determines the second dividing line from the nipple position and the muscle boundary line, and determines from the second dividing line whether the second lesion area is located in the upper quadrant or the lower quadrant, as sketched below.
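  • The MLO-side quadrant decision can be sketched as follows. The coordinates, the least-squares line fit, and the sign convention (image rows grow downwards) are assumptions; a non-horizontal muscle boundary is also assumed so that the perpendicular slope is defined.

```python
import numpy as np

def second_dividing_line(muscle_points: np.ndarray, nipple_xy):
    # Fit the muscle boundary line y = a*x + b by least squares
    # (the object boundary line), assuming a != 0.
    a, b = np.polyfit(muscle_points[:, 0], muscle_points[:, 1], deg=1)
    # The second dividing line is perpendicular to it and passes
    # through the nipple (the second tissue identification area).
    slope = -1.0 / a
    return slope, nipple_xy[1] - slope * nipple_xy[0]

def upper_or_lower(lesion_xy, slope, intercept):
    # Image rows grow downwards, so a smaller y lies above the line.
    return "upper" if lesion_xy[1] < slope * lesion_xy[0] + intercept else "lower"
```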
  • When the model matching probability and the conditional matching probability meet the lesion matching condition, the information fusion module fuses the quadrant position information determined from the CC image and the quadrant position information determined from the MLO image into the target quadrant position information, and combines the target quadrant position information and the lesion attribute into medical service data. When there are multiple lesion objects, the information fusion module also needs to pair the multiple first sub-images with the multiple second sub-images, and separately determine the target quadrant position information and the lesion attribute of the lesion object corresponding to each image pair.
  • FIG. 7a is a schematic diagram of determining medical service data according to an embodiment of the present application. Image 30a is the mammography CC-view image, image 30b is the mammography MLO-view image, and images 30a and 30b correspond to the same breast.
  • For image 30a, the terminal device first determines the first lesion area 30c where the lesion object is located, and determines that the lesion attribute of the lesion object is calcification. It identifies the first tissue identification area 30d where the nipple (that is, the first tissue object) is located, and determines the edge boundary line 30f of the breast. The terminal device takes the vertical line 30e, perpendicular to the edge boundary line 30f and passing through the first tissue identification area 30d, as the first dividing line 30e; since the first lesion area 30c is located above the first dividing line 30e, it is determined that the first lesion area 30c is located in the outer quadrant position area.
  • For image 30b, the terminal device first determines the second lesion area 30g where the lesion object is located, and determines that the lesion attribute of the lesion object is calcification. It identifies the second tissue identification area 30h where the nipple (that is, the first tissue object) is located, determines the location area of the muscle (that is, the second tissue object mentioned above) based on the image semantic segmentation model, and then determines the object boundary line 30m. The terminal device takes the vertical line 30k, perpendicular to the object boundary line 30m and passing through the second tissue identification area 30h, as the second dividing line 30k; since the second lesion area 30g is located above the second dividing line 30k, it is determined that the second lesion area 30g is located in the upper quadrant position area.
  • Since the first lesion area 30c is located in the outer quadrant position area and the second lesion area 30g is located in the upper quadrant position area, the target quadrant position information is: the outer upper quadrant. The target quadrant position information "outer upper quadrant" and the lesion attribute "calcification" are combined into the medical service data: calcification in the outer upper quadrant. A fusion sketch is given below.
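  • The fusion step amounts to concatenating the two half-quadrant labels with the lesion attribute. This is a trivial sketch; the label strings are illustrative.

```python
def fuse_quadrants(cc_quadrant: str, mlo_quadrant: str, lesion_attr: str) -> str:
    # The CC view contributes inner/outer, the MLO view contributes upper/lower.
    return f"{lesion_attr} in the {cc_quadrant} {mlo_quadrant} quadrant"

print(fuse_quadrants("outer", "upper", "calcification"))
# -> "calcification in the outer upper quadrant"
```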
  • Subsequently, the terminal device may display the medical service data, and simultaneously display on the screen image 30a with the first lesion area 30c marked by a rectangular frame and image 30b with the second lesion area 30g marked by a rectangular frame. As can be seen from the above, the category of the lesion and the quadrant position information of the lesion in the medical image are recognized in an automated manner, and the service data is then generated automatically; compared with manual diagnosis, this saves the time needed to determine the lesion category and the lesion quadrant position information, and improves the efficiency and accuracy of determining them.
  • FIG. 7b is a schematic structural diagram of an electronic medical device provided by an embodiment of the present application. The electronic medical device may be the terminal device in the embodiments corresponding to FIGS. 3-7a; it includes a biological tissue image collector and a biological tissue image analyzer, and can collect medical images and analyze them. The specific process includes the following steps:
  • Step S401: the biological tissue image collector acquires a biological tissue image containing biological tissue. If the biological tissue is a breast, the biological tissue image collector may be a mammography machine, and the corresponding biological tissue image is a mammogram; if the biological tissue is a liver, the collector may be a B-mode ultrasound machine, and the corresponding image is a liver ultrasound image; if the biological tissue is the brain, the collector may be an MRI (Magnetic Resonance Imaging) scanner, and the corresponding image is an MRI image.
  • Step S402: the biological tissue image analyzer recognizes the first area of the lesion object of the biological tissue in the biological tissue image, and recognizes the lesion attribute matching the lesion object. The first area and the lesion attribute may be determined based on multiple lesion detection models, where one lesion detection model corresponds to one lesion attribute. Each lesion detection model can determine whether the attribute of the lesion object in the biological tissue image is the attribute corresponding to that model; if it is, the model can also determine the location area of the lesion object in the biological tissue image.
  • The biological tissue image analyzer determines the first area from the location areas output by the models based on the NMS (non-maximum suppression) algorithm, sketched below. For the specific process of determining the first area of the lesion object and the lesion attribute, refer to step S101 in the embodiment corresponding to FIG. 3 above.
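  • The suppression step can be sketched as follows. The (x1, y1, x2, y2) box format, the overlap ratio, and the 0.3 threshold are illustrative assumptions; the text above only requires that candidates overlapping the polled region too much be discarded.

```python
import numpy as np

def nms(boxes: np.ndarray, probs: np.ndarray, overlap_thresh: float = 0.3):
    """Reduce candidate regions to the first area: repeatedly poll the
    candidate with the highest lesion probability and drop candidates
    that overlap it too much."""
    keep = []
    order = probs.argsort()[::-1]  # highest lesion probability first
    while order.size > 0:
        i = order[0]               # polling candidate region
        keep.append(i)
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        area = ((boxes[order[1:], 2] - boxes[order[1:], 0]) *
                (boxes[order[1:], 3] - boxes[order[1:], 1]))
        # Keep only candidates whose overlap with the polled region is small.
        order = order[1:][inter / np.maximum(area, 1e-9) < overlap_thresh]
    return keep
```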
  • Step S403: the biological tissue image analyzer divides the image area of the biological tissue in the biological tissue image into multiple quadrant position areas. The biological tissue includes a first tissue object. The analyzer recognizes the second area of the first tissue object in the biological tissue image, determines the quadrant dividing line in the biological tissue image according to the second area, and takes the image area of the biological tissue in the biological tissue image as the tissue image area. The analyzer then divides the tissue image area into multiple quadrant position areas according to the quadrant dividing line. For the specific process of determining the multiple quadrant position areas, refer to step S201 to step S203 in the embodiment corresponding to FIG. 4.
  • Step S404: the biological tissue image analyzer obtains the quadrant position information of the quadrant position area where the first area is located (referred to as the target quadrant position information), and generates medical service data according to the target quadrant position information and the identified lesion attribute. The analyzer then displays, on the screen of the electronic medical device, the biological tissue image with the first area marked by a rectangular frame, together with the medical service data.
  • FIG. 8 is a schematic structural diagram of a medical image processing apparatus provided by an embodiment of the present application. The medical image processing apparatus 1 can be applied to the terminal device in the embodiments corresponding to FIGS. 3-7a, and can include: an image acquisition module 11, a division module 12, and a generation module 13.
  • the image acquisition module 11 is configured to acquire a biological tissue image including biological tissue, identify the first area of the lesion object in the biological tissue in the biological tissue image, and identify the lesion attribute that matches the lesion object;
  • the dividing module 12 is configured to divide the image area of the biological tissue in the biological tissue image into multiple quadrant position areas;
  • the image acquisition module 11 is configured to acquire target quadrant position information of the quadrant position area where the first area is located;
  • the generating module 13 is configured to generate medical service data according to the target quadrant location information and the lesion attribute.
  • For the specific functional implementations of the image acquisition module 11, the division module 12, and the generation module 13, refer to step S101 to step S103 in the embodiment corresponding to FIG. 3; details are not described here again.
  • the biological tissue includes the first tissue object
  • the division module 12 may include: an identification unit 121, a quadrant determination unit 122, and a division unit 123.
  • An identifying unit 121 configured to identify the second area of the first tissue object in the biological tissue image
  • a quadrant determination unit 122 configured to determine a quadrant division line in the biological tissue image according to the second region
  • the dividing unit 123 is configured to use the image area of the biological tissue in the biological tissue image as a tissue image area, and divide the tissue image area into the multiple quadrant position areas according to the quadrant division line.
  • For the specific functional implementations of the identification unit 121, the quadrant determination unit 122, and the division unit 123, refer to step S201 to step S203 in the embodiment corresponding to FIG. 4; details are not described here again.
  • the biological tissue image includes a first biological tissue image and a second biological tissue image; the first biological tissue image and the second biological tissue image are imaging of the biological tissue in different directions;
  • the identifying unit 121 may include: an acquiring subunit 1211, a first identifying subunit 1212, and a second identifying subunit 1213.
  • the obtaining subunit 1211 is used to obtain an image semantic segmentation model
  • the first recognition subunit 1212 is configured to determine the first tissue identification area of the first tissue object in the first biological tissue image based on the image semantic segmentation model;
  • a second recognition subunit 1213 configured to determine a second tissue identification area of the first tissue object in the second biological tissue image based on the image semantic segmentation model
  • the second identification subunit 1213 is further configured to determine the first organization identification area and the second organization identification area as the second area.
  • For the specific functional implementations of the acquiring subunit 1211, the first identifying subunit 1212, and the second identifying subunit 1213, refer to step S201 in the embodiment corresponding to FIG. 4; details are not described here again.
  • the first identification subunit 1212 may include: a convolution subunit 12121 and an attribute determination subunit 12122.
  • the convolution subunit 12121 is configured to perform forward convolution and reverse (transposed) convolution on the first biological tissue image based on the forward convolution layer and the transposed convolution layer in the image semantic segmentation model, to obtain the convolutional feature map;
  • the attribute determining subunit 12122 is configured to determine the object attribute of each pixel in the first biological tissue image according to the convolution feature map; the object attribute includes the first tissue attribute;
  • the attribute determining subunit 12122 is further configured to use an image area composed of pixels belonging to the first tissue attribute as the first tissue identification area of the first tissue object in the first biological tissue image.
  • For the specific functional implementations of the convolution subunit 12121 and the attribute determination subunit 12122, refer to step S201 in the embodiment corresponding to FIG. 4; details are not described here again.
  • the quadrant segmentation line includes a first segmentation line corresponding to the first biological tissue image and a second segmentation line corresponding to the second biological tissue image; the object attribute also includes a second tissue attribute;
  • the quadrant determination unit 122 may include: a quadrant determination subunit 1221, an area determination subunit 1222.
  • the quadrant determination subunit 1221 is configured to obtain the edge boundary line of the biological tissue in the first biological tissue image, and determine the first dividing line in the first biological tissue image according to the first tissue identification area and the edge boundary line;
  • the area determining subunit 1222 is configured to take the image area composed of pixels belonging to the second tissue attribute as the object area of the second tissue object in the second biological tissue image;
  • the area determining subunit 1222 is further configured to determine the object boundary line of the object area, and determine the second dividing line in the second biological tissue image according to the second tissue identification area and the object boundary line.
  • For the specific functional implementations of the quadrant determining subunit 1221 and the area determining subunit 1222, refer to step S202 in the embodiment corresponding to FIG. 4; details are not described here again.
  • The first biological tissue image is the imaging of the biological tissue in the cranio-caudal direction, and the second biological tissue image is the imaging of the biological tissue in the oblique lateral direction.
  • the dividing unit 123 may include: a first dividing sub-unit 1231, a second dividing sub-unit 1232;
  • the first division subunit 1231 is configured to take the image area of the biological tissue in the first biological tissue image as the first tissue image area, and, in the first biological tissue image, divide the first tissue image area into an inner quadrant position area and an outer quadrant position area according to the first dividing line;
  • the second division subunit 1232 is configured to take the image area of the biological tissue in the second biological tissue image as the second tissue image area, and, in the second biological tissue image, divide the second tissue image area into an upper quadrant position area and a lower quadrant position area according to the second dividing line;
  • the second division subunit 1232 is further configured to determine the inner quadrant position area, the outer quadrant position area, the upper quadrant position area, and the lower quadrant position area as the quadrant position areas.
  • For the specific functional implementations of the first division subunit 1231 and the second division subunit 1232, refer to step S203 in the embodiment corresponding to FIG. 4; details are not described here again.
  • The first area includes the first lesion area of the lesion object in the first biological tissue image and the second lesion area of the lesion object in the second biological tissue image; the biological tissue image includes the first biological tissue image and the second biological tissue image.
  • the generating module 13 may include: an extraction unit 131, a probability acquisition unit 132, and a combination unit 133.
  • An extracting unit 131 configured to extract a first sub-image corresponding to the first lesion area from the first biological tissue image
  • the extraction unit 131 is further configured to extract a second sub-image corresponding to the second lesion area from the second biological tissue image;
  • the extracting unit 131 is further configured to obtain a target matching model, and identify the model matching probability between the first sub-image and the second sub-image based on the target matching model;
  • the probability obtaining unit 132 is configured to obtain the conditional matching probability between the first lesion area and the second lesion area;
  • the combining unit 133 is configured to combine the target quadrant location information and the lesion attribute into the medical service data when the model matching probability and the conditional matching probability meet the lesion matching condition.
  • For the specific functional implementations of the extraction unit 131, the probability acquisition unit 132, and the combination unit 133, refer to step S301 to step S305 in the embodiment corresponding to FIG. 5 above; details are not described here again.
  • the probability obtaining unit 132 may include: a size determining subunit 1321, a distance determining subunit 1322.
  • the size determination subunit 1321 is configured to determine the first area size of the first lesion area in the first biological tissue image, determine the second area size of the second lesion area in the second biological tissue image, and generate the size matching probability according to the first area size and the second area size;
  • the distance determining subunit 1322 is used to determine the first area distance between the first lesion area and the image area corresponding to the first tissue object in the first biological tissue image, and to determine the second lesion area and The second area distance between image areas corresponding to the first tissue object in the second biological tissue image;
  • the distance determining subunit 1322 is further configured to generate a distance matching probability according to the first area distance and the second area distance;
  • the distance determining subunit 1322 is further configured to determine the size matching probability and the distance matching probability as the conditional matching probability.
  • For the specific functional implementations of the size determining subunit 1321 and the distance determining subunit 1322, refer to step S305 in the embodiment corresponding to FIG. 5; details are not described here again. A sketch of the two probabilities follows.
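  • Following the ratio convention described earlier (divide the smaller value by the larger one, so that each probability lies in [0, 1]), the two conditional matching probabilities can be sketched as below; positive sizes and distances are assumed.

```python
def ratio_probability(a: float, b: float) -> float:
    # Smaller value divided by larger value; assumes a, b > 0.
    return min(a, b) / max(a, b)

def conditional_matching_probability(size_cc, size_mlo, dist_cc, dist_mlo):
    size_match = ratio_probability(size_cc, size_mlo)  # size matching probability
    dist_match = ratio_probability(dist_cc, dist_mlo)  # distance matching probability
    return size_match, dist_match
```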
  • The number of lesion objects in the biological tissue image may be more than one. In that case, the model matching probability includes the unit model matching probabilities between the first sub-images of the multiple lesion objects and the second sub-images of the multiple lesion objects, and the conditional matching probability includes the unit condition matching probabilities between the first lesion areas of the multiple lesion objects and the second lesion areas of the multiple lesion objects.
  • the combination unit 133 may include: a selection sub-unit 1331 and a combination sub-unit 1332.
  • the selection subunit 1331 is configured to select, from the multiple unit model matching probabilities and the multiple unit condition matching probabilities, the matching probability pairs that satisfy the lesion matching condition as target matching probability pairs, where each target matching probability pair includes one unit model matching probability and one unit condition matching probability;
  • the combination subunit 1332 is configured to, when the unit model matching probability and the unit condition matching probability in a target matching probability pair meet the lesion matching condition, take the lesion object of the target matching probability pair as the target lesion object, and combine the target quadrant position information of the target lesion object and the lesion attribute of the target lesion object into the medical service data.
  • For the specific functional implementations of the selection subunit 1331 and the combination subunit 1332, refer to step S305 in the embodiment corresponding to FIG. 5 above; details are not described here again.
  • the medical image processing device 1 may include: an image acquisition module 11, a division module 12, and a generation module 13, and may also include an identification module 14 and a model acquisition module 15.
  • the recognition module 14 is used to obtain a positive sample image and a negative sample image; the positive sample image includes a first positive sample image and a second positive sample image; the first positive sample image and the second positive sample image correspond to the same Lesion object; the negative sample image includes a first negative sample image and a second negative sample image; the first negative sample image and the second negative sample image correspond to different lesion objects;
  • the model acquisition module 15 is used to acquire a sample matching model
  • the recognition module 14 is further configured to recognize, based on the sample matching model, the positive sample prediction probability between the first positive sample image and the second positive sample image, obtain the positive sample probability between the first positive sample image and the second positive sample image, and determine the positive sample error between the positive sample prediction probability and the positive sample probability;
  • the identification module 14 is further configured to identify, based on the sample matching model, the negative sample prediction probability between the first negative sample image and the second negative sample image, obtain the negative sample probability between the first negative sample image and the second negative sample image, and determine the negative sample error between the negative sample prediction probability and the negative sample probability;
  • the identification module 14 is further configured to adjust the sample matching model according to the positive sample error and the negative sample error;
  • the identification module 14 is further configured to determine the adjusted sample matching model as the target matching model when the adjusted sample matching model meets the convergence condition.
  • For the specific functional implementations of the recognition module 14 and the model acquisition module 15, refer to step S305 in the embodiment corresponding to FIG. 5; details are not described here again.
  • the model obtaining module 15 may include: a model obtaining unit 151 and a generating unit 152.
  • the model acquisition unit 151 is configured to acquire an original matching model; the original matching model is obtained through training based on non-biological tissue images;
  • the generating unit 152 is configured to extract target model parameters from the model parameters included in the original matching model, and generate the sample matching model based on transfer learning and the target model parameters.
  • For the specific functional implementations of the model acquisition unit 151 and the generation unit 152, refer to step S305 in the embodiment corresponding to FIG. 5; details are not described here again.
  • FIG. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
  • the terminal device in the embodiment corresponding to FIG. 3 to FIG. 7a may be an electronic device 1000.
  • the electronic device 1000 may include: a user interface 1002, a processor 1004, an encoder 1006, and a memory 1008.
  • the signal receiver 1016 is used to receive or send data via the cellular interface 1010, the WIFI interface 1012,..., or the NFC interface 1014.
  • the encoder 1006 encodes the received data into a data format that the computer can process.
  • a computer program is stored in the memory 1008, and the processor 1004 is configured to execute the steps in any one of the foregoing method embodiments through the computer program.
  • the memory 1008 may include a volatile memory (for example, a dynamic random access memory DRAM), and may also include a non-volatile memory (for example, a one-time programmable read-only memory OTPROM). In some embodiments, the memory 1008 may further include a memory remotely provided with respect to the processor 1004, and these remote memories may be connected to the electronic device 1000 through a network.
  • the user interface 1002 may include a keyboard 1018 and a display 1020.
  • the processor 1004 may be configured to call the computer program stored in the memory 1008 to implement: acquiring a biological tissue image containing biological tissue, identifying the first area of the lesion object of the biological tissue in the biological tissue image, and identifying the lesion attribute matching the lesion object; dividing the image area of the biological tissue in the biological tissue image into multiple quadrant position areas; and obtaining the target quadrant position information of the quadrant position area where the first area is located, and generating medical service data according to the target quadrant position information and the lesion attribute.
  • The electronic device 1000 described in this embodiment of the present application can carry out the description of the medical image processing method in the foregoing embodiments corresponding to FIG. 3 to FIG. 7b, and can also carry out the description of the medical image processing apparatus 1 in the foregoing embodiment corresponding to FIG. 8; details are not described here again, and the description of the beneficial effects of using the same method is likewise not repeated.
  • An embodiment of the present application also provides a computer storage medium that stores the aforementioned computer program executed by the medical image processing apparatus 1, and the computer program includes program instructions. When the processor executes the program instructions, it can carry out the description of the medical image processing method in the foregoing embodiments corresponding to FIG. 3 to FIG. 7b; details are not described here again, and the description of the beneficial effects of using the same method is likewise not repeated.
  • The program can be stored in a computer-readable storage medium, and when executed it may include the procedures of the above method embodiments. The storage medium may be a magnetic disk, an optical disc, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), or the like.

Abstract

A medical image processing method and apparatus, an electronic medical device, and a storage medium. The method includes: acquiring a biological tissue image containing biological tissue, identifying the first area of a lesion object of the biological tissue in the biological tissue image, and identifying the lesion attribute matching the lesion object (S101); dividing the image area of the biological tissue in the biological tissue image into multiple quadrant position areas (S102); and acquiring the target quadrant position information of the quadrant position area where the first area is located, and generating medical service data according to the target quadrant position information and the lesion attribute (S103).

Description

一种医疗图像处理方法、装置、电子医疗设备和存储介质
本申请要求于2019年5月22日提交国家知识产权局、申请号为201910429414.9,申请名称为“一种医疗图像处理方法、装置、电子医疗设备和存储介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及计算机技术领域,尤其涉及一种医疗图像处理方法、装置、电子医疗设备和存储介质。
背景技术
乳腺癌是常见的恶性疾病之一。近年来,女性乳腺癌的发病率越来越高,数据统计,全球每年新增乳腺癌患者高达120万,排名女性恶性肿瘤发病率第一位。乳腺影像学检测是乳腺癌诊断的重要手段,目前常用的乳腺检查的方法有:钼靶成像、B超检测、核磁共振成像。其中钼靶成像是最常用的方法,因此基于钼靶乳腺图像进行医疗诊断具有重要的医疗意义。
发明内容
本申请实施例提供一种医疗图像处理方法、装置、电子医疗设备和存储介质,可以提高识别病灶类别以及识别病灶象限位置信息的效率。
本申请实施例一方面提供了一种医疗图像处理方法,由电子设备执行,该方法包括:
获取包含生物组织的生物组织图像,识别所述生物组织中的病灶对象在所述生物组织图像中的第一区域,并识别与所述病灶对象匹配的病灶属性;
将所述生物组织在所述生物组织图像中的图像区域,划分为多个象限位置区域;
获取所述第一区域所在的象限位置区域的目标象限位置信息,根据所述目标象限位置信息和所述病灶属性生成医疗业务数据。
本申请实施例另一方面提供了一种医疗图像处理装置,包括:
图像获取模块,用于获取包含生物组织的生物组织图像,识别所述生物组织中的病灶对象在所述生物组织图像中的第一区域,并识别与所述病灶对象匹配的病灶属性;
划分模块,用于将所述生物组织在所述生物组织图像中的图像区域,划分为多个象限位置区域;
所述图像获取模块,还用于获取所述第一区域所在的象限位置区域的目标象限位置信息;
生成模块,用于根据所述目标象限位置信息和所述病灶属性生成医疗业务数据。
本申请实施例另一方面提供了一种电子医疗设备,包括生物组织图像采集器和生物组织图像分析器;
所述生物组织图像采集器获取包含生物组织的生物组织图像;
所述生物组织图像分析器识别所述生物组织中的病灶对象在所述生物组织图像中的第一区域,并识别与所述病灶对象匹配的病灶属性;
所述生物组织图像分析器将所述生物组织在所述生物组织图像中的图像区域,划分为多个象限位置区域;
所述生物组织图像分析器获取所述第一区域所在的象限位置区域的目标象限位置信息,根据所述目标象限位置信息和所述病灶属性生成医疗业务数据。
本申请实施例另一方面提供了一种电子设备,包括:处理器和存储器;
所述处理器和存储器相连,其中,所述存储器用于存储程序,所述处理器用于调用所述程序,以执行如本申请任一实施例所述的方法。
本申请实施例另一方面提供了一种计算机存储介质,所述计算机存储介质存储有计算机程序,所述计算机程序包括程序指令,所述程序指令当被处理器执行时,执行如本申请任一实施例所述的方法。
附图说明
为了更清楚地说明本申请实施例的技术方案,下面将对实施例描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1是本申请实施例提供的一种医疗图像处理的系统架构图;
图2a-图2b是本申请实施例提供的一种医疗图像处理的场景示意图;
图3是本申请实施例提供的一种医疗图像处理方法的流程示意图;
图4是本申请实施例提供的一种确定多个象限位置区域的流程示意图;
图5是本申请实施例提供的一种生成医疗业务数据的流程示意图;
图6是本申请实施例提供的一种医疗图像处理的功能模块示意图;
图7a是本申请实施例提供的一种确定医疗业务数据的示意图;
图7b是本申请实施例提供的一种电子医疗设备的结构示意图;
图8是本申请实施例提供的一种医疗图像处理装置的结构示意图;
图9是本申请实施例提供的一种电子设备的结构示意图。
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
相关技术中,诊断钼靶乳腺图像主要是通过专业医务人员的人工诊断,专业医务人员通过自身经验,对钼靶乳腺图像中的病灶进行病灶类别以及病灶位置的判断,以生成诊断结果,后续在诊断结果的基础上,确定治疗方案。
由于专业医务人员对钼靶乳腺图像的人工诊断需要耗费大量的时间,造成对钼靶乳腺图像的诊断效率低下。
请参见图1,是本申请实施例提供的一种医疗图像处理的系统架构图。服务器10f通过交换机10e和通信总线10d与用户终端集群建立连接,用户终端集群可以包括:用户终端10a、用户终端10b、...、用户终端10c。
以用户终端10a为例,当用户终端10a接收到包含病灶的医疗图像时,将上述医疗图像通过交换机10e和通信总线10d至发送服务器10f。服务器10f可以识别病灶所属的类别以及病灶在医疗图像中的象限位置信息,并根据识别到的结果生成医疗业务数据。服务器10f可以将医疗业务数据发送至用户终端10a,后续用户终端10a可以在屏幕上显示该医疗业务数据。
当然,也可以由用户终端10a识别病灶所属的类别以及病灶在医疗图像中的象限位置信息,进而生成医疗业务数据,同样地用户终端10a可以在屏幕上显示该医疗业务数据。
下述以用户终端10a如何识别病灶所述的类别以及识别病灶在医疗图像中的象限位置信息为例进行具体的说明。
其中,图1所示的用户终端10a、用户终端10b、用户终端10c等可以包括手机、平板电脑、笔记本电脑、掌上电脑、移动互联网设备(MID,mobile internet device)、可穿戴设备(例如智能手表、智能手环等)等。
请参见图2a-图2b,是本申请实施例提供的一种医疗图像处理的场景示意图。如图2a中的界面20a所示,用户终端10a获取同一个患者的同一侧乳腺钼靶图像,并将获取到的乳腺钼靶图像显示在屏幕上,其中 乳腺钼靶图像包括:乳腺钼靶CC位图像20b,以及乳腺钼靶MLO位图像20c。乳腺钼靶CC位图像是按照头尾位对乳腺成像,乳腺钼靶MLO位图像是按照斜侧位对乳腺成像。
用户终端10a获取肿块检测模型以及钙化检测模型,肿块检测模型可以识别出图像中肿块病灶所在的位置区域;钙化检测模型可以是识别出图像中钙化病灶所在的位置区域,肿块类别和钙化类别属于病灶类别。
对肿块检测模型来说,终端设备10a可以将乳腺钼靶CC位图像20b输入肿块检测模型,肿块检测模型可以输出乳腺钼靶CC位图像20b中的病灶对象位于乳腺钼靶CC位图像20b中的病灶区域20d,还可以确定乳腺钼靶CC位图像20b中的病灶对象所属的病灶类别为肿块类别。
终端设备10a可以将乳腺钼靶MLO位图像20c输入肿块检测模型,肿块检测模型也可以输出乳腺钼靶MLO位图像20c中的病灶对象位于乳腺钼靶MLO位图像20c中的病灶区域20h,且还可以确定乳腺钼靶MLO位图像20c中的病灶对象所属的病灶类别为肿块类别。
对钙化检测模型来说,终端设备10a同样将乳腺钼靶CC位图像20b输入钙化检测模型,钙化检测模型在乳腺钼靶CC位图像20b中没有检测到钙化病灶;终端设备10a同样将乳腺钼靶MLO位图像20c输入钙化检测模型,钙化检测模型在乳腺钼靶MLO位图像20c中同样没有检测到钙化病灶。
因此,对乳腺钼靶CC位图像20b,以及乳腺钼靶MLO位图像20c来说,只存在肿块病灶,且在乳腺钼靶CC位图像20b中肿块病灶位于病灶区域20d;在乳腺钼靶MLO位图像20c中肿块病灶位于病灶区域20h。
用户终端10a获取图像语义分割模型,图像语义分割模型可以识别图像中每个像素点所属的组织类别,组织类别包括:乳头类别、肌肉类别以及背景类别。
用户终端10a将乳腺钼靶CC位图像20b输入图像语义分割模型,模型可以确定乳腺钼靶CC位图像20b的每个像素点所属的组织类别。 在乳腺钼靶CC位图像20b中,用户终端10a将属于乳头类别的像素组合为区域20e,该区域20e即是乳头所在的区域。用户终端10a在乳腺钼靶CC位图像20b中确定乳腺边缘线20g,将垂直于乳腺边缘线20g且经过区域20e的线条20f作为象限分割线20f。在乳腺钼靶CC位图像20b中,位于象限分割线20f以下的为内象限,位于象限分割线20f以上的为外象限。在乳腺钼靶CC位图像20b中,由于病灶区域20d位于内象限(病灶区域20d中的大部分位于内象限,那就认为病灶区域20d位于内象限),因此用户终端10a可以确定乳腺钼靶CC位图像20b中的肿块病灶位于内象限。
用户终端10a将乳腺钼靶MLO位图像20c输入图像语义分割模型,模型可以确定乳腺钼靶MLO位图像20c的每个像素点所属的组织类别。在乳腺钼靶MLO位图像20c中,用户终端10a将属于乳头类别的像素组合为区域20j,该区域20j即是乳头所在的区域。用户终端10a将属于肌肉类别的象限组合为肌肉区域,并确定肌肉区域与非肌肉区域的区域分界线20m,将垂直于区域分界线20m且经过区域20j的线条20k作为象限分割线20k。在乳腺钼靶MLO位图像20c中,位于象限分割线20k以下的为下象限,位于象限分割线20k以上的为上象限。在乳腺钼靶MLO位图像20c中,由于病灶区域20h位于下象限,因此用户终端10a可以确定乳腺钼靶MLO位图像20c中的肿块病灶位于下象限。
用户终端10a将由乳腺钼靶CC位图像20b确定的内象限,以及由乳腺钼靶MLO位图像20c确定的下象限,组合为象限位置信息20n“内下象限”。
用户终端10a可以将象限位置信息20n“内下象限”以及与乳腺钼靶CC位图像20b、乳腺钼靶MLO位图像20c均对应的病灶类别“肿块”,组合为诊断意见:“内下象限见肿块”。
如图2b中的界面20x所示,用户终端10a在识别乳腺钼靶CC位图像20b、乳腺钼靶MLO位图像20c的过程中,可以在屏幕上播放预设的动画。当检测到乳腺钼靶CC位图像20b、乳腺钼靶MLO位图像20c识 别完毕时,如界面20y所示,停止播放动画,将生成的诊断意见显示在屏幕上。进一步地,还可以将识别到的乳腺钼靶CC位图像20b的病灶区域以及乳腺钼靶MLO位图像20c的病灶区域,使用矩形框在对应乳腺钼靶图像中标记出来。
其中,识别病灶对象所属的病灶类别、确定病灶对象在图像中的病灶区域,确定象限分割线以及确定象限位置信息的具体过程可以参见下述图3-图7b对应的实施例。
请参见图3,是本申请实施例提供的一种医疗图像处理方法的流程示意图,如图3所示,医疗图像处理方法可以包括以下步骤。
步骤S101,获取包含生物组织的生物组织图像,识别所述生物组织中的病灶对象在所述生物组织图像中的第一区域,并识别与所述病灶对象匹配的病灶属性。
具体地,终端设备(如上述图2a对应实施例中的用户终端10a)获取包含生物组织的的生物组织图像(如上述图2a对应实施例中的乳腺钼靶CC位图像20b以及乳腺钼靶MLO位图像20c),且该生物组织包括病灶对象,其中生物组织可以是乳腺、肝脏、肾脏等。
终端设备识别该病灶对象在生物组织图像中的区域,作为第一区域(如上述图2a对应实施例中的病灶区域20d以及病灶区域20h),终端设备识别与该病灶对应的病灶属性。
例如,当生物组织是乳腺时,病灶属性可以包括:肿块、钙化、结构扭曲等;当生物组织是肝脏时,病灶属性可以包括:血管瘤、肝脓肿、肝囊肿等;当生物组织是肾脏时,病灶属性可以包括:肾囊肿、肾肿瘤等。
识别生物组织图像中病灶对象的位置区域以及病灶属性可以是基于多个病灶检测模型(如上述图2a对应实施例中的肿块检测模型以及钙化检测模型)来确定的。一个病灶检测模型对应一种病灶属性,每个病灶检测模型都可以确定生物组织图像中的病灶对象的属性是否是该模型对应的属性,若病灶对象的属性是该模型对应的属性,还可以确定病 灶对象在生物组织图像中的位置区域。
例如,若存在2个病灶检测模型,病灶检测模型A对应的病灶属性为肿块,病灶检测模型B对应的病灶属性为钙化。将生物组织图像输入病灶检测模型A,若病灶检测模型A输出位置区域,说明病灶对象位于该位置区域,且位于上述位置区域的病灶对象的病灶属性为肿块;若病灶检测模型A没有输出位置区域,说明肿块不是该病灶对象的病灶属性。对病灶检测模型B可以采用同样的方法,确定病灶对象的病灶属性是否为钙化,若病灶属性为钙化,还可以确定该病灶对象在生物组织图像中的位置区域。
病灶检测模型识别病灶对象的第一区域以及病灶对象的病灶属性的方式有两种,首先对两种方式中的其中一种方式进行详细的说明:终端设备获取多个固定尺寸的滑动窗口,例如,滑动窗口为16×16,16×32,32×64,64×16等,每个滑动窗口在生物组织图像上滑动,用于提取多个图像块,每个图像块的尺寸等于滑动窗口的尺寸,且图像块之间存在一定程度的重叠,这样可以保证生物组织图像中的所有区域都可以参与后续的识别。
基于病灶检测模型中的分类器,识别每个图像块属于该病灶检测模型对应病灶属性的病灶概率,若图像块识别出来的病灶概率大于病灶概率阈值,说明该图像块中包含病灶对象,并确定该病灶对象的病灶属性为病灶检测模型对应的病灶属性,进而将该图像块在生物组织图中的位置区域作为病灶对象的候选区域。当然,若不存在任何一个图像块的病灶概率大于病灶概率阈值,说明生物组织图像中的病灶对象的病灶属性不是该病灶检测模型对应的病灶属性,也即是病灶检测模型没有检测出位置区域。
其中病灶检测模型中的分类器是通过包含样本病灶对象的图像块和不包含样本病灶对象图像块训练得到的,且上述样本病灶对象的病灶属性等于病灶检测模型对应的病灶属性,即该分类器是二分类器。
当候选区域的数量只有一个时,可以直接将该候选区域作为第一区 域;当候选区域存在多个时,虽然候选区域对应的病灶概率都是大于概率阈值的,但候选区域之间可能存在重叠(即是多个候选区域对应同一个病灶对象),因此后续可以采用NMS(Non-Maximum Suppression,非极大值抑制)从多个候选区域中确定病灶对象在生物组织图像中的第一区域。
NMS的具体过程为:将所有的候选区域组合为候选区域集合,终端设备可以从候选区域集合中提取出具有最大病灶概率的候选区域,作为轮询候选区域,确定轮询候选区域与其余候选区域之间的重叠面积,再将重叠面积小于面积阈值的候选区域组合为新的候选区域集合,再从新的候选区域集合中选择具有最大病灶概率的候选区域作为轮询候选区域,不断循环,当候选区域集合为空集时,将轮询候选区域作为病灶对象在生物组织图像中的第一区域。
下面对病灶检测模型识别病灶对象的第一区域以及病灶对象的病灶属性的另一种方式进行详细的说明:基于病灶检测模型的多个卷积层,对生物组织图像进行卷积运算,可以得到卷积特征信息,卷积特征信息可以看作是多个特征图(feature map)。
每个卷积层对应1个或者多个卷积核(kernel,也可以称为滤波器,或者称为感受野),卷积运算是指卷积核与位于输入数据不同位置的子数据进行矩阵乘法运算,每一个卷积层的输出数据的通道数(可以理解为是特征图的数量)是由该卷积层中的卷积核的数量决定的,且输出数据(即是特征图)的高度H out和宽度W out是由输入数据的尺寸、卷积核的尺寸、步长(stride)以及边界填充(padding)共同决定的,即H out=(H in-H kernel+2*padding)/stride+1,W out=(W in-W kernel+2*padding)/stride+1。H in,H kernel分别表示输入数据的高度和卷积核的高度;W in,W kernel分别表示输入数据的宽度和卷积核的宽度。
随着不断的卷积,卷积特征信息的数据尺寸(即是特征图的尺寸)不断变小。例如,生物组织图像的尺寸为H×W×1,经过第一个卷积层后输出卷积特征信息的尺寸为
Figure PCTCN2020084147-appb-000001
经过第二个卷积层后输出卷积特 征信息的尺寸为
Figure PCTCN2020084147-appb-000002
终端设备将最后一次卷积得到的卷积特征信息(或者是特征图)作为卷积热图heatmap(卷积热图的尺寸为m×n×2),将卷积热图进行上采样(upsampling),即是将卷积热图放大至与生物组织图像相同尺寸,得到尺寸为H×W×2的掩模,根据该掩模可以确定生物组织图像中每个像素点的病灶属性为病灶检测模型对应病灶属性的概率,终端设备可以将大于病灶概率阈值的像素点所组成为图像区域,作为病灶对象在生物组织图像中的第一区域,并可以确定位于该第一区域的病灶对象的病灶属性等于病灶检测模型对应的病灶属性。
当然,若不存在任何一个像素点的概率大于病灶概率阈值,说明生物组织图像中的病灶对象的病灶属性不是该病灶检测模型对应的病灶属性,也即是病灶检测模型没有检测出位置区域。
当存在多个病灶检测模型时,可以采用上述方式,分别确定生物组织图像中的病灶对象的病灶属性以及第一区域。
用户终端可以在生物组织图像中使用矩形框将第一区域标记出来。
步骤S102,将所述生物组织在所述生物组织图像中的图像区域,划分为多个象限位置区域。
具体地,生物组织包括第一组织对象,例如当生物组织是乳腺时,第一组织对象可以是乳头;当生物组织是肝脏是,第一组织对象可以是肝囊。
识别第一组织对象在生物组织图像中的区域,作为第二区域(如上述图2a对应实施例中的区域20e和区域20j),其中确定第二区域可以是基于图像语义分割模型,图像语义分割模型是用于识别生物组织图像中每个像素点的对象属性,图像语义分割模型的工作过程与前述中病灶检测模型确定第一区域和病灶属性的第二种方式类似。
终端设备根据第二区域在生物组织图像中确定象限分割线(如上述图2a对应实施例中的象限分割线20f和象限分割线20k),将生物组织 在生物组织图像中的图像区域作为组织图像区域,并根据象限分割线,将组织图像区域划分为多个象限位置区域(如上述图2a对应实施例中的内象限、外象限、上象限和下象限)。
步骤S103,获取所述第一区域所在的象限位置区域的目标象限位置信息,根据所述目标象限位置信息和所述病灶属性生成医疗业务数据。
具体地,终端设备获取第一区域所在的象限位置区域的象限位置信息(称为目标象限位置信息,如上述图2a对应实施例中的象限位置信息20n)。终端设备根据目标象限位置信息以及识别出来的病灶属性生成医疗业务数据(如上述图2a对应实施例中的诊断意见)。
后续,可以在终端设备的屏幕上显示使用矩形框标记第一区域后的生物组织图像,以及显示医疗业务数据。
本申请实施例可以作为软件接口方式提供服务,输入为单侧多投照位钼靶图像,例如可以为CC位和MLO位,输出为医疗业务数据。
请参见图4,是本申请实施例提供的一种确定多个象限位置区域的流程示意图,确定多个象限位置区域包括步骤S201-步骤S203,且步骤S201-步骤S203是上述图3对应实施例中步骤S102的一个具体实施例:
步骤S201,识别所述第一组织对象在所述生物组织图像中的第二区域。
具体地,生物组织包括第一组织对象,生物组织图像包括第一生物组织图像(如上述图2a对应实施例中的乳腺钼靶CC位图像20b)和第二生物组织图像(如上述图2a对应实施例中的乳腺钼靶MLO位图像20c),其中第一生物组织图像和第二生物图像是对生物组织在不同方向上的成像,例如生物组织是乳腺时,第一生物组织图像可以是乳腺在头尾位方向上的成像;所述第二生物组织图像可以是乳腺在斜侧位方向上的成像。
需要注意的是,第一生物组织图像和第二生物组织图像中都包含生物组织,且都包含第一组织对象。
终端设备获取图像语义分割模型,将第一生物组织图像输入图像语 义分割模型中,基于图像语义分割模型可以确定第一组织对象在第一生物组织图像中的图像区域,称为第一组织标识区域(如上述图2a对应实施例中的区域20e)。将第二生物组织图像输入图像语义分割模型中,基于图像语义分割模型可以确定第一组织对象在第二生物组织图像中的图像区域,称为第二组织标识区域(如上述图2a对应实施例中的区域20j)。
终端设备可以将第一组织标识区域和第二组织标识区域确定为第二区域,也就是说,第二区域包括与第一生物组织图像对应的第一组织标识区域以及与第二生物组织图像对应的第二组织标识区域。
其中,图像语义分割模型可以是FCN(全卷积网络,Fully Convolutional Networks)模型,模型分类数设置为3,分别表示背景、第一组织对象和第二组织对象。训练图像语义分割模型时,图像语义分割模型的权重初始化可以使用PASCAL VOC数据集,然后使用公开数据集DDSM(Digital Database for Screening Mammography,乳腺X线摄影数字化数据库),最后使用国内医院数据进行迁移学习(输入图像尺寸可以为800×800像素,批处理大小可以为8,学习率可以0.00001,最大迭代次数可以是10000),最终得到可以提取生物组织图像中第一组织对象所在区域和第二组织对象所在区域的全卷积分割网络。
下面对基于图像语义分割模型确定第一生物组织图像中的第一组织标识区域进行具体的说明:图像语义分割模型包括正向卷积层和转置卷积层,正向卷积层是用于正向卷积运算,正向卷积运算可以降低特征图的尺寸,转置卷积层是用于反向卷积运算,反向卷积运算可以增大特征图的尺寸。
首先基于正向卷积层,对第一生物组织图像进行正向卷积运算,提取第一生物组织图像的正向卷积特征信息,基于转置卷积层,对正向卷积特征信息进行反向卷积运算,得到H×W×3的掩模,其中H,W分别表示第一生物组织图像的高和宽,上述H×W×3的掩模包括:属于背景类别的背景掩模,属于第一组织属性的第一组织掩模,以及属于第二组织 属性的第二组织掩模,上述背景掩模、第一组织掩模、第二组织掩模的尺寸均为H×W,此处的第一组织属性用于标识第一组织对象,第二组织属性用于标识第二组织对象,当生物组织为乳腺时,第一组织对象可以是乳头,第二组织对象可以是肌肉。
掩模中每个单位掩模的取值表示对应第一生物组织图像中的像素点属于该掩模对应属性的概率。
对第一生物组织图像中的每一个像素点来说,在上述3个掩模,将具有最大概率值的掩模对应的属性作为该像素点的对象属性,可以知道对象属性包括背景属性、第一组织属性和第二组织属性。简单来说,根据上述3个掩模,可以确定每个像素点要么是属于背景属性,要么是属于第一组织属性,要么是属于第二组织属性的。
终端设备可以将属于第一组织属性的像素点组成的图像区域,作为第一组织对象在第一生物组织图像中的第一组织标识区域。
对第二生物组织图像可以采用相同的方式,即是首先确定3个掩模,然后根据3个掩模确定第二生物组织图像中每个像素点的对象属性,同样地将属于第一组织属性的像素点组成的图像区域,作为第一组织对象在第二生物组织图像中的第二组织标识区域。
步骤S202,根据所述第二区域在所述生物组织图像中确定象限分割线。
具体地,由于生物组织图像包括第一生物组织图像和第二生物组织图像,因此对第一生物组织图像来说,根据第一组织标识区域在第一生物组织图像中确定第一分割线(如上述图2a对应实施例中的象限分割线20f);对第二生物组织图像来说,根据第二组织标识区域在第二生物组织图像中确定第二分割线(如上述图2a对应实施例中的象限分割线20k),其中第一分割线和第二分割线属于象限分割线。
下面对如何确定第一分割线进行具体的说明:终端设备获取生物组织在第一生物组织图像中的边缘分界线(如上述图2a对应实施例中的乳腺边缘线20g),其中当生物组织是乳腺时,边缘分界线可以是乳腺后边 缘的分界线,终端设备在第一生物组织图像中,将垂直于边缘分界线且经过第一组织标识区域的垂线,作为第一分割线。
下面对如何确定第二分割线进行具体的说明:终端设备根据前述中图像语义分割模型所确定的每个像素点的对象属性,在第二生物组织图像中,将属于第二组织属性的像素点组成的图像区域,作为第二生物组织图像中的第二组织对象在第二生物组织图像中的对象区域。例如,当第二组织对象是肌肉时,对象区域即是肌肉在第二生物组织图像中的区域。
对第二生物组织图像来说,第一组织对象(例如乳头)所在的区域为第二组织标识区域,第二组织对象(例如肌肉)所在的区域为对象区域,其余的区域为背景区域。终端设备确定对象区域与非对象区域之间的分界线,作为对象分界线(如上述图2a对应实施例中的区域分界线20m)。
需要说明的是,若对象区域与非对象区域之间的分界线是曲线,对该曲线进行拟合,得到直线,并将该直线作为对象分界线,拟合所得到的对象分界线与未拟合前的曲线中各个点的距离之和最小。终端设备在第二生物组织图像中,将垂直于对象分界线且经过第二组织标识区域的垂线,作为第二分割线。
步骤S203,将所述生物组织在所述生物组织图像中的图像区域作为组织图像区域,并根据所述象限分割线,将所述组织图像区域划分为所述多个象限位置区域。
具体地,第一生物组织图像是生物组织在头尾位方向上的成像,第二生物组织图像是生物组织在斜侧位方向上的成像。例如,当生物组织是乳腺时,第一生物组织图像可以是乳腺钼靶CC位图像,第二生物组织图像可以是乳腺钼靶MLO位图像。
终端设备将生物组织在第一生物组织图像中的图像区域,作为第一组织图像区域,在第一组织图像区域中,将位于第一分割线以上的区域,作为外象限位置区域;将位于第一分割线以下的区域,作为内象限位置 区域。
换句话说,根据第一生物组织图像,可以确定内象限位置区域以及外象限位置区域。
终端设备将生物组织在第二生物组织图像中的图像区域,作为第二组织图像区域,在第二组织图像区域中,将位于第二分割线以上的区域,作为上象限位置区域;将位于第二分割线以下的区域,作为下象限位置区域。
换句话说,根据第二生物组织图像,可以确定上象限位置区域以及下象限位置区域。
终端设备可以将内象限位置区域、外象限位置区域、上象限位置区域和下象限位置区域确定为象限位置区域。
请参见图5,是本申请实施例提供的一种生成医疗业务数据的流程示意图,生成医疗业务数据包括步骤S301-步骤S305,且步骤S301-步骤S305是上述图3对应实施例中步骤S103的一个具体实施例:
步骤S301,获取所述第一区域所在的象限位置区域的目标象限位置信息。
具体地,由于生物组织图像包括第一生物组织图像和第二生物组织图像,第一区域包括病灶对象在第一生物组织图像中的第一病灶区域(如上述图2a对应实施例中的病灶区域20d)和上述病灶对象在第二生物组织图像中的第二病灶区域(如上述图2a对应实施例中的病灶区域20h)。
当第一病灶区域位于第一生物组织图像中的内象限位置区域,且第二病灶区域位于第二生物组织图像中的上象限位置区域时,目标象限位置信息为内上象限;
当第一病灶区域位于第一生物组织图像中的内象限位置区域,且第二病灶区域位于第二生物组织图像中的下象限位置区域时,目标象限位置信息为内下象限;
当第一病灶区域位于第一生物组织图像中的外象限位置区域,且第 二病灶区域位于第二生物组织图像中的上象限位置区域时,目标象限位置信息为外上象限;
当第一病灶区域位于第一生物组织图像中的外象限位置区域,且第二病灶区域位于第二生物组织图像中的下象限位置区域时,目标象限位置信息为外下象限。
步骤S302,从所述第一生物组织图像中提取与所述第一病灶区域对应的第一子图像。
具体地,终端设备从第一生物组织图像中提取第一病灶区域所在的子图像,作为第一子图像。
步骤S303,从所述第二生物组织图像中提取与所述第二病灶区域对应的第二子图像。
具体地,终端设备从第二生物组织图像中提取第二病灶区域所在的子图像,作为第二子图像。
步骤S304,获取目标匹配模型,基于所述目标匹配模型识别所述第一子图像和所述第二子图像之间的模型匹配概率。
具体地,终端设备获取目标匹配模型,目标匹配模型可以识别两张图像中所包含的病灶对象是否是同一个病灶对象,这是因为生物组织图像中的病灶对象可能有多个,那么对应的第一生物组织图像中的病灶对象和第二生物组织图像中的病灶对象也有多个,目标匹配模型可以用于识别两张图像中的病灶对象之间的匹配概率,用于判断病灶对象的一致性。
终端设备将第一子图像输入目标匹配模型,目标匹配模型中的卷积层和池化层,可以提取第一子图像的第一池化特征信息。终端设备将第二子图像输入目标匹配模型,目标匹配模型中的卷积层和池化层,同样可以提取第二子图像的第二池化特征信息。终端设备将第一池化特征信息和第二池化特征信息在列方向上拼接为目标池化特征信息,目标匹配模型中分类器识别目标池化特征信息与匹配类别的匹配概率,将该匹配概率作为模型匹配概率,模型匹配概率可以标识第一子图像中的病灶对 象与第二子图像中的病灶对象是同一个病灶对象的概率,且匹配概率是一个0到1之间的实数。目标匹配模型中分类器同样也是二分类器,不仅可以输出第一子图像中的病灶对象与第二子图像中的病灶对象是同一个病灶对象的概率,还可以输出第一子图像中的病灶对象与第二子图像中的病灶对象不是同一个病灶对象的概率。
步骤S305,获取所述第一病灶区域和所述第二病灶区域之间的条件匹配概率,当所述模型匹配概率和所述条件匹配概率满足病灶匹配条件时,将所述目标象限位置信息和所述病灶属性组合为所述医疗业务数据。
具体地,终端设备确定第一病灶区域在第一生物组织图像中的尺寸,称为第一区域尺寸;终端设备确定第二病灶区域在第二生物组织图像中的尺寸,称为第二区域尺寸。当第一区域尺寸大于第二区域尺寸时,终端设备将第二区域尺寸除以第一区域尺寸,所得到的数值作为尺寸匹配概率;当第一区域尺寸小于第二区域尺寸时,终端设备将第一区域尺寸除以第二区域尺寸,所得到的数值作为尺寸匹配概率,可以知道尺寸匹配概率是一个0到1之间的实数。
终端设备在第一生物组织图像中,确定第一病灶区域与第一生物组织图像中的第一组织对象对应的图像区域(即是前述中的第一组织标识区域)之间的区域距离(称为第一区域距离);终端设备在第二生物组织图像中,确定第二病灶区域与第二生物组织图像中的第一组织对象对应的图像区域(即是前述中的第二组织标识区域)之间的区域距离(称为第二区域距离)。当第一区域距离大于第二区域距离时,终端设备将第二区域距离除以第一区域距离,得到的数值作为距离匹配概率;当第一区域距离小于第二区域距离时,终端设备将第一区域距离除以第二区域距离,得到的数值作为距离匹配概率,距离匹配概率也是一个0到1之间的实数。
终端设备将前述中的尺寸匹配概率以及距离匹配概率作为条件匹配概率。
若模型匹配概率以及条件匹配概率中的尺寸匹配概率以及距离匹 配概率均大于预设概率阈值,则确定模型匹配概率和条件匹配概率满足病灶匹配条件。
或者,分别为模型匹配概率以及条件匹配概率中的尺寸匹配概率以及距离匹配概率设置权重系数,若模型匹配概率、尺寸匹配概率以及距离匹配概率乘以各自的权重系数之和大于预设概率阈值,则确定模型匹配概率和条件匹配概率满足病灶匹配条件。
当模型匹配概率和条件匹配概率满足病灶匹配条件,且生物组织图像中的病灶对象的数量只有一个时,终端设备可以直接将前述中确定的目标象限位置信息和病灶属性组合为医疗业务数据。
当生物组织图像中的病灶对象的数量有多个时,那么对应的就存在多个第一子图以及多个第二子图像,第一子图像的数量=第二子图像的数量=病灶对象的数量。对应地,模型匹配概率包括多个单位模型匹配概率,每个单位模型匹配概率均是通过目标匹配模型所确定的一个第一子图像与一个第二子图之间的匹配概率;条件匹配概率包括多个单位条件匹配概率,每个条件匹配概率是指通过区域尺寸以及区域距离所确定的一个第一子图像与一个第二子图之间的匹配概率。可以知道,单位模型匹配概率的数量=单位条件匹配概率的数量=病灶对象的数量的平方。
从多个单位匹配模型概率以及多个单位条件匹配概率中,选择满足病灶匹配条件的匹配概率对,作为目标匹配概率对,每个目标匹配概率对包括一个单位模型匹配概率以及一个单位条件匹配概率。可以知道,每个目标匹配对所对应的第一子图像和第二子图像对应同一个病灶对象,这样就可以从第一生物组织图像以及第二生物组织图像中确定成对病灶对象(即是同一个病灶对象)。
当目标匹配概率对中的单位模型匹配概率和单位条件匹配概率满足病灶匹配条件时,将目标匹配概率对的病灶对象作为目标病灶对象,终端设备将目标病灶对象的目标象限位置信息以及目标病灶对象的对象属性,组合为医疗业务数据。目标病灶对象在第一生物组织图像中所确定的对象属性与目标病灶对象在第二生物组织图像中所确定的对象 属性是相同的。
举例来说,生物组织图像包括2个病灶对象,在第一生物组织图像中,包含病灶对象1与病灶对象2;在第二生物组织图像中,包含病灶对象3与病灶对象4。基于病灶检测模型A确定上述病灶对象1、病灶对象2、病灶对象3、病灶对象4的病灶属性均为肿块。终端设备可以获取病灶对象1对应的第一子图像1,病灶对象2对应的第一子图像2,病灶对象3对应的第二子图像1以及病灶对象4对应的第二子图像2。
终端设备基于目标匹配模型确定第一子图像1与第二子图像1之间的单位模型匹配概率为0.9,第一子图像1与第二子图像2之间的单位模型匹配概率为0.2;第一子图像2与第二子图像1之间的单位模型匹配概率为0.1;第一子图像2与第二子图像2之间的单位模型匹配概率为0.8。
终端设备根据病灶对象的区域尺寸以及病灶对象与第一组织对象之间的区域距离,确定第一子图像1与第二子图像1之间的单位条件匹配概率为0.8,第一子图像1与第二子图像2之间的单位条件匹配概率为0.1;第一子图像2与第二子图像1之间的单位条件匹配概率为0.2;第一子图像2与第二子图像2之间的单位条件匹配概率为0.9。
终端设备可以从上述4个单位模型匹配概率以及4个单位条件匹配概率中,确定第一子图像1与第二子图像1之间的单位模型匹配概率以及单位条件匹配概率可以组合为目标匹配对,第一子图像2与第二子图像2之间的单位模型匹配概率以及单位条件匹配概率可以组合为目标匹配对。
因此,第一子图像1对应的病灶对象1与第二子图像1对应的病灶对象3是同一个病灶对象,终端设备可以将病灶对象1与病灶对象3作为目标病灶对象,将病灶对象1与病灶对象3所确定的目标象限位置信息以及病灶属性“肿块”组合为医疗业务数据。
第一子图像2对应的病灶对象2与第二子图像2对应的病灶对象4是同一个病灶对象,终端设备可以将病灶对象2与病灶对象4作为目标 病灶对象,将病灶对象2与病灶对象4所确定的目标象限位置信息以及病灶属性“肿块”组合为医疗业务数据。
在本申请一实施例中,下面对如何训练目标匹配模型进行详细的说明:终端设备获取正样本图像以及负样本图像,其中正样本图像包括第一正样本图像和第二正样本图像,且第一正样本图像与第二正样本图像对应同一个病灶对象;负样本图像包括第一负样本图像和第二负样本图像,且第一负样本图像与第二负样本图像对应不同的病灶对象,此处的不同病灶对象可以是指病灶属性不同,也可以是病灶属性相同但不是同一个病灶对象。
终端设备获取原始匹配模型,其中原始匹配模型是由非生物组织图像训练得到的类模型,例如,可以是风景图像或者是人脸图像或者是ImageNet数据集所训练的分类模型。终端设备从原始匹配模型所包含的模型参数中,选择前n个卷积层的模型参数作为目标模型参数,例如可以选择原始匹配模型中前5个卷积层所对应的模型参数作为目标模型参数,终端设备根据目标模型参数生成样本匹配模型,其中样本匹配模型中的前n个卷积层对应的模型参数是目标模型参数。上述原理是迁移学习的算法原理,即是样本匹配模型是部分起始权重不是随机初始化,而是由另外一个已经训练好的分类模型确定的,这样后续训练样本匹配模型时,只需要少量的生物组织图像作为样本图像就可以使样本匹配模型收敛。
终端设备获取样本匹配模型后,基于样本匹配模型识别正样本中的第一正样本图像与第二正样本图像之间的正样本预测概率,并获取第一正样本图像与第二正样本图像之间的真实匹配概率,作为正样本概率,可以知道正样本概率即是100%。终端设备确定正样本预测概率与上述正样本概率之间的正样本误差,基于上述正样本误差反向传播调整样本匹配模型中的模型参数。
同样地,基于样本匹配模型识别负样本中的第一负样本图像与第二负样本图像之间的负样本预测概率,并获取第一负样本图像与第二负样 本图像之间的真实匹配概率,作为负样本概率,可以知道负样本概率即是0%。终端设备确定负样本预测概率与上述负样本概率之间的负样本误差,基于上述负样本误差反向传播调整样本匹配模型中的模型参数。
反向传播中的梯度下降算法可以使用Adam,批处理大小可以为128,初始学习率可以为0.001,最大迭代次数可以为10000。
当调整后的样本匹配模型的模型参数变化量小于变化量阈值,或者调整次数大于次数阈值,或者调整后的样本匹配模型所确定的样本误差(包括正样本误差与负样本误差)小于误差阈值,终端设备可以将调整后的样本匹配模型作为目标匹配模型。
请参见图6,是本申请实施例提供的一种医疗图像处理的功能模块示意图,下述以乳腺钼靶CC位图像以及乳腺钼靶MLO位图像为例,且CC位图像以及MLO位图像是对同一个乳腺在不同方向上的成像。医疗图像的处理包括4个功能模块,分别为病灶象限计算模块、病灶检测模块、病灶匹配模块以及信息融合模块。终端设备获取到CC位图像以及MLO位图像后,病灶检测模块对上述两张图像分别进行病灶检测,确定病灶对象在CC位图像上的第一病灶区域以及根据CC位图像识别该病灶对象的病灶属性,可以将第一病灶区域以及病灶属性称为CC位病灶。病灶检测模块确定该病灶对象在MLO位图像上的第二病灶区域,以及根据MLO位图像识别该病灶对象的病灶属性,可以将第二病灶区域以及病灶属性称为MLO位病灶。
病灶匹配模块获取目标匹配模型,其中目标匹配模型是通过多个正样本图像和多个负样本图像训练得到的,病灶匹配模块根据CC位病灶中的第一病灶区域,确定第一子图像,同样地根据MLO位病灶中的第二病灶区域,确定第二子图像。病灶匹配模块将第一子图像以及第二子图像输入目标匹配模型中,目标匹配模型就可以识别第一子图像与第二子图像之间的模型匹配概率。病灶匹配模块再根据上述两个病灶区域的区域尺寸(也就是第一子图像的图像尺寸和第二子图像的图像尺寸)以及两个病灶区域距离各自生物组织图像中的乳头之间的区域距离,确定 条件匹配概率,可以将条件匹配概率作为手工特征。病灶匹配模块判断模型匹配概率以及条件匹配概率是否满足病灶匹配条件。
病灶象限计算模块获取图像语义分割模型,其中图像语义分割模型是通过多个样本图像训练得到的,样本图像中的每一个像素点都进行了标注,标注类型包括:背景属性、乳头属性和肌肉属性。换句话说,图像语义分割模型可以识别出一张图像中每个像素点要么是属于背景属性的,要么是属于乳头属性的,要么是属于肌肉属性的。病灶象限计算模块将CC位图像输入图像语义分割模型,基于图像语义分割模型可以确定CC位图像中的乳头位置(CC位图像是没有肌肉信息的,因此没有肌肉区域),该乳头位置即是前述中的第一组织标识区域;病灶象限计算模块将MLO位图像输入图像语义分割模型,基于图像语义分割模型可以确定MLO位图像中的乳头位置(即是前述中的第二组织标识区域)以及肌肉位置(即是前述中的对象区域)。对CC位图像来说,病灶象限计算模块根据乳头位置以及乳腺边缘分界线,确定第一分割线,根据第一分割线,确定第一病灶区域位于内象限或者外象限。对MLO位图像来说,病灶象限计算模块根据肌肉位置拟合肌肉边界线直线方程,进而确定肌肉边界线(即是前述中的对象分界线),再根据乳头位置以及肌肉边界线,确定第二分割线,根据第二分割线,确定第二病灶区域位于上象限或者下象限。
当模型匹配概率以及条件匹配概率满足病灶匹配条件时,信息融合模块将CC位图像所确定的象限位置信息以及MLO位图像所确定的象限位置信息融合为目标象限位置信息,并将目标象限位置信息以及病灶属性组合为医疗业务数据。当病灶对象的数量为多个时,信息融合模块还需要将多个第一子图像以及多个第二子图像进行配对,分别确定每个图像对所对应的病灶对象的目标象限位置信息以及病灶属性。
请参见图7a,是本申请实施例提供的一种确定医疗业务数据的示意图。图像30a是乳腺钼靶CC位图像,图像30b是乳腺钼靶MLO位图像,且图像30a与30b对应同一个乳腺。
对图像30a来说,首先确定病灶对象所在的第一病灶区域30c,并确定该病灶对象的病灶属性为钙化。识别乳头(即是第一组织对象)所在的第一组织标识区域30d,并确定乳腺的边缘分界线30f,终端设备将垂直于边缘分界线30f并经过第一组织标识区域30d的垂线30e作为第一分割线30e,由于第一病灶区域30c位于第一分割线30e的上方,因此确定第一病灶区域30c位于外象限位置区域。
对图像30b来说,首先确定病灶对象所在的第二病灶区域30g,并确定该病灶对象的病灶属性为钙化。识别乳头(即是第一组织对象)所在的第二组织标识区域30h。基于图像语义分割模型确定肌肉(即是前述中的第二组织对象)所在的位置区域,进而确定对象分界线30m,终端设备将垂直于对象分界线30m并经过第二组织标识区域30h的垂线30k作为第二分割线30k,由于第二病灶区域30g位于第二分割线30k的上方,因此确定第二病灶区域30g位于上象限位置区域。
由于第一病灶区域30c位于外象限位置区域,且第二病灶区域30g位于上象限位置区域,因此目标象限位置信息为:外上象限。将目标象限位置信息“外上象限”以及病灶属性“钙化”组合为医疗业务数据:外上象限钙化。后续,终端设备可以显示该医疗业务数据,同时将基于矩形框标识了第一病灶区域30c的图像30a以及基于矩形框标识了第二病灶区域30g的图像30b一并显示在屏幕上。
上述可知,通过自动化的方式识别医学图像中病灶的类别以及病灶的象限位置信息,进而自动生成业务数据,相比人工诊断,可以节约确定病灶类别以及病灶象限位置信息的耗时,提高确定病灶类别以及病灶象限位置信息的效率和准确率。
请参见图7b,是本申请实施例提供的一种电子医疗设备的结构示意图,电子医疗设备可以为上述图3-图7a对应实施例中的终端设备;电子医疗设备包括生物组织图像采集器和生物组织图像分析器,上述电子医疗设备可以采集医疗图像并分析医疗图像,具体过程包括如下步骤:
步骤S401,生物组织图像采集器获取包含生物组织的生物组织图像。
具体地,生物组织图像采集器采集包含生物组织的生物组织图像。若生物组织是乳腺,那么生物组织图像采集器可以是钼靶机,对应地生物组织图像就是乳腺钼靶图像;若生物组织是肝脏,那么生物组织图像采集器可以是B超机,对应地生物组织图像就是肝脏B超图像;若生物组织是大脑,那么生物组织图像采集器可以是MRI(Magnetic Resonance Imaging,磁共振成像)仪,对应地生物组织图像就是MRI图像。
步骤S402,所述生物组织图像分析器识别所述生物组织中的病灶对象在所述生物组织图像中的第一区域,并识别与所述病灶对象匹配的病灶属性。
具体地,生物组织图像分析器识别生物组织图像中病灶对象在生物组织图像中的区域,称为第一区域,并识别病灶对象的病灶属性。
其中,生物组织图像分析器识别生物组织图像中病灶对象的第一区域以及病灶属性可以是基于多个病灶检测模型来确定的。一个病灶检测模型对应一种病灶属性,每个病灶检测模型都可以确定生物组织图像中的病灶对象的属性是否是该模型对应的属性,若病灶对象的属性是该模型对应的属性,还可以确定病灶对象在生物组织图像中的位置区域。
生物组织图像分析器基于NMS算法从模型确定的位置区域中确定第一区域,生物组织图像分析器确定病灶对象的第一区域以及病灶属性的具体过程可以参见上述图3对应实施例中的步骤S101。
步骤S403,所述生物组织图像分析器将所述生物组织在所述生物组织图像中的图像区域,划分为多个象限位置区。
具体地,生物组织包括第一组织对象。生物组织图像分析器识别第一组织对象在生物组织图像中的第二区域,生物组织图像分析器根据第二区域在生物组织图像中确定象限分割线,将生物组织在生物组织图像中的图像区域作为组织图像区域。生物组织图像分析器根据象限分割线,将组织图像区域划分为多个象限位置区域。
其中,生物组织图像分析器确定多个象限位置区域的具体过程可以参见上述图4对应实施例中的步骤S201-步骤S203。
步骤S404,所述生物组织图像分析器获取所述第一区域所在的象限位置区域的目标象限位置信息,根据所述目标象限位置信息和所述病灶属性生成医疗业务数据。
具体地,生物组织图像分析器获取第一区域所在的象限位置区域的象限位置信息(称为目标象限位置信息)。生物组织图像分析器根据目标象限位置信息以及识别出来的病灶属性生成医疗业务数据,并在电子医疗设备的屏幕上显示使用矩形框标记第一区域后的生物组织图像,以及显示医疗业务数据。
进一步地,请参见图8,是本申请实施例提供的一种医疗图像处理装置的结构示意图。如图8所示,医疗图像处理装置1可以应用于上述图3-图7a对应实施例中的终端设备,医疗图像处理装置1可以包括:图像获取模块11、划分模块12、生成模块13。
图像获取模块11,用于获取包含生物组织的生物组织图像,识别所述生物组织中的病灶对象在所述生物组织图像中的第一区域,并识别与所述病灶对象匹配的病灶属性;
划分模块12,用于将所述生物组织在所述生物组织图像中的图像区域,划分为多个象限位置区域;
所述图像获取模块11,用于获取所述第一区域所在的象限位置区域的目标象限位置信息;
生成模块13,用于根据所述目标象限位置信息和所述病灶属性生成医疗业务数据。
其中,图像获取模块11、划分模块12、生成模块13的具体功能实现方式可以参见上述图3对应实施例中的步骤S101-步骤S103,这里不再进行赘述。
请参见图8,生物组织包括第一组织对象;
划分模块12可以包括:识别单元121、象限确定单元122、划分单元123。
识别单元121,用于识别所述第一组织对象在所述生物组织图像中 的第二区域;
象限确定单元122,用于根据所述第二区域在所述生物组织图像中确定象限分割线;
划分单元123,用于将所述生物组织在所述生物组织图像中的图像区域作为组织图像区域,并根据所述象限分割线,将所述组织图像区域划分为所述多个象限位置区域。
其中,识别单元121、象限确定单元122、划分单元123的具体功能实现方式可以参见上述图4对应实施例中的步骤S201-步骤S203,这里不再进行赘述。
请参见图8,生物组织图像包括第一生物组织图像和第二生物组织图像;所述第一生物组织图像和所述第二生物组织图像是对所述生物组织在不同方向上的成像;
识别单元121可以包括:获取子单元1211、第一识别子单元1212、第二识别子单元1213。
获取子单元1211,用于获取图像语义分割模型;
第一识别子单元1212,用于基于所述图像语义分割模型确定所述第一组织对象在所述第一生物组织图像中的第一组织标识区域;
第二识别子单元1213,用于基于所述图像语义分割模型确定所述第一组织对象在所述第二生物组织图像中的第二组织标识区域;
所述第二识别子单元1213,还用于将所述第一组织标识区域和所述第二组织标识区域确定为所述第二区域。
其中,获取子单元1211、第一识别子单元1212、第二识别子单元1213的具体功能实现方式可以参见上述图4对应实施例中的步骤S201,这里不再进行赘述。
Referring to FIG. 8, the first identification subunit 1212 may include: a convolution subunit 12121 and an attribute determination subunit 12122.
The convolution subunit 12121 is configured to perform forward convolution and deconvolution on the first biological tissue image based on the forward convolution layers and the transposed convolution layers in the image semantic segmentation model, to obtain a convolution feature map;
the attribute determination subunit 12122 is configured to determine, according to the convolution feature map, an object attribute of each pixel in the first biological tissue image, the object attributes including a first tissue attribute;
the attribute determination subunit 12122 is further configured to take the image region composed of the pixels belonging to the first tissue attribute as the first tissue identification region of the first tissue object in the first biological tissue image.
For the specific functional implementations of the convolution subunit 12121 and the attribute determination subunit 12122, refer to step S201 in the embodiment corresponding to FIG. 4; details are not repeated here.
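As a sketch of such a forward-/transposed-convolution structure, the following PyTorch module is illustrative only; the patent does not fix layer counts, kernel sizes, or channel widths, all of which are assumptions here:

import torch.nn as nn

class TinySegNet(nn.Module):
    # Forward convolutions downsample the image; transposed convolutions
    # restore the resolution; a 1x1 convolution scores each pixel for the
    # three object attributes (background, nipple, muscle).
    def __init__(self, num_classes=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, num_classes, 1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))  # (N, num_classes, H, W) logits

Taking the argmax of the output logits over the class dimension yields the per-pixel object attribute, and the first tissue identification region is then the set of pixels labeled with the first tissue attribute.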
Referring to FIG. 8, the quadrant dividing line includes a first dividing line corresponding to the first biological tissue image and a second dividing line corresponding to the second biological tissue image; the object attributes further include a second tissue attribute;
the quadrant determination unit 122 may include: a quadrant determination subunit 1221 and a region determination subunit 1222.
The quadrant determination subunit 1221 is configured to obtain the edge boundary line of the biological tissue in the first biological tissue image, and determine the first dividing line in the first biological tissue image according to the first tissue identification region and the edge boundary line;
the region determination subunit 1222 is configured to take the image region composed of the pixels belonging to the second tissue attribute as the object region of the second tissue object in the second biological tissue image;
the region determination subunit 1222 is further configured to determine the object boundary line of the object region, and determine the second dividing line in the second biological tissue image according to the second tissue identification region and the object boundary line.
For the specific functional implementations of the quadrant determination subunit 1221 and the region determination subunit 1222, refer to step S202 in the embodiment corresponding to FIG. 4; details are not repeated here.
Referring to FIG. 8, the first biological tissue image is an image of the biological tissue captured in the craniocaudal direction; the second biological tissue image is an image of the biological tissue captured in the mediolateral oblique direction.
The division unit 123 may include: a first division subunit 1231 and a second division subunit 1232;
the first division subunit 1231 is configured to take the image region of the biological tissue in the first biological tissue image as a first tissue image region and, in the first biological tissue image, divide the first tissue image region into an inner quadrant position region and an outer quadrant position region according to the first dividing line;
the second division subunit 1232 is configured to take the image region of the biological tissue in the second biological tissue image as a second tissue image region and, in the second biological tissue image, divide the second tissue image region into an upper quadrant position region and a lower quadrant position region according to the second dividing line;
the second division subunit 1232 is further configured to determine the inner quadrant position region, the outer quadrant position region, the upper quadrant position region, and the lower quadrant position region as the quadrant position regions.
For the specific functional implementations of the first division subunit 1231 and the second division subunit 1232, refer to step S203 in the embodiment corresponding to FIG. 4; details are not repeated here.
Referring to FIG. 8, the first region includes a first lesion region of the lesion object in the first biological tissue image and a second lesion region of the lesion object in the second biological tissue image; the biological tissue image includes the first biological tissue image and the second biological tissue image;
the generation module 13 may include: an extraction unit 131, a probability acquisition unit 132, and a combination unit 133.
The extraction unit 131 is configured to extract a first sub-image corresponding to the first lesion region from the first biological tissue image;
the extraction unit 131 is further configured to extract a second sub-image corresponding to the second lesion region from the second biological tissue image;
the extraction unit 131 is further configured to obtain a target matching model and identify, based on the target matching model, a model matching probability between the first sub-image and the second sub-image;
the probability acquisition unit 132 is configured to obtain a condition matching probability between the first lesion region and the second lesion region;
the combination unit 133 is configured to combine the target quadrant position information and the lesion attribute into the medical service data when the model matching probability and the condition matching probability satisfy the lesion matching condition.
For the specific functional implementations of the extraction unit 131, the probability acquisition unit 132, and the combination unit 133, refer to steps S301 to S305 in the embodiment corresponding to FIG. 5; details are not repeated here.
Referring to FIG. 8, the probability acquisition unit 132 may include: a size determination subunit 1321 and a distance determination subunit 1322.
The size determination subunit 1321 is configured to determine a first region size of the first lesion region in the first biological tissue image, determine a second region size of the second lesion region in the second biological tissue image, and generate a size matching probability according to the first region size and the second region size;
the distance determination subunit 1322 is configured to determine a first region distance between the first lesion region and the image region corresponding to the first tissue object in the first biological tissue image, and determine a second region distance between the second lesion region and the image region corresponding to the first tissue object in the second biological tissue image;
the distance determination subunit 1322 is further configured to generate a distance matching probability according to the first region distance and the second region distance;
the distance determination subunit 1322 is further configured to determine the size matching probability and the distance matching probability as the condition matching probability.
For the specific functional implementations of the size determination subunit 1321 and the distance determination subunit 1322, refer to step S305 in the embodiment corresponding to FIG. 5; details are not repeated here.
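The patent does not prescribe exact formulas for these probabilities; the following sketch shows one plausible choice (an area ratio for the size match, exponential decay of the distance difference for the distance match), purely as an illustration:

import math

def size_match_probability(area_cc, area_mlo):
    # 1.0 when both views report the same lesion size, smaller as they diverge.
    return min(area_cc, area_mlo) / max(area_cc, area_mlo)

def distance_match_probability(dist_cc, dist_mlo, scale=50.0):
    # The lesion-to-nipple distances should agree across views; the score
    # decays exponentially with their absolute difference (scale in pixels).
    return math.exp(-abs(dist_cc - dist_mlo) / scale)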
Referring to FIG. 8, there are multiple lesion objects in the biological tissue image; the model matching probability includes unit model matching probabilities between the first sub-images of the multiple lesion objects and the second sub-images of the multiple lesion objects; the condition matching probability includes unit condition matching probabilities between the first lesion regions of the multiple lesion objects and the second lesion regions of the multiple lesion objects;
the combination unit 133 may include: a selection subunit 1331 and a combination subunit 1332.
The selection subunit 1331 is configured to select, from the multiple unit model matching probabilities and the multiple unit condition matching probabilities, a matching probability pair satisfying the lesion matching condition as a target matching probability pair, the target matching probability pair including one unit model matching probability and one unit condition matching probability;
the combination subunit 1332 is configured to: when the unit model matching probability and the unit condition matching probability in the target matching probability pair satisfy the lesion matching condition, take the lesion object of the target matching probability pair as a target lesion object, and combine the target quadrant position information of the target lesion object and the lesion attribute of the target lesion object into the medical service data.
For the specific functional implementations of the selection subunit 1331 and the combination subunit 1332, refer to step S305 in the embodiment corresponding to FIG. 5; details are not repeated here.
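One way to realize this selection for multiple lesions is a greedy one-to-one pairing over the probability matrices, sketched below under the assumption that both probabilities must clear a threshold; the embodiment itself only requires that the selected pair satisfy the lesion matching condition:

def pair_lesions(model_probs, cond_probs, threshold=0.5):
    # model_probs[i][j], cond_probs[i][j]: unit probabilities between the
    # i-th CC-view lesion and the j-th MLO-view lesion.
    candidates = sorted(
        ((i, j) for i in range(len(model_probs))
                for j in range(len(model_probs[0]))),
        key=lambda ij: model_probs[ij[0]][ij[1]] * cond_probs[ij[0]][ij[1]],
        reverse=True,
    )
    used_i, used_j, pairs = set(), set(), []
    for i, j in candidates:
        if i in used_i or j in used_j:
            continue
        if model_probs[i][j] >= threshold and cond_probs[i][j] >= threshold:
            pairs.append((i, j))
            used_i.add(i)
            used_j.add(j)
    return pairs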
Referring to FIG. 8, the medical image processing apparatus 1 may include the image acquisition module 11, the division module 12, and the generation module 13, and may further include: an identification module 14 and a model acquisition module 15.
The identification module 14 is configured to obtain positive sample images and negative sample images, the positive sample images including a first positive sample image and a second positive sample image that correspond to the same lesion object, and the negative sample images including a first negative sample image and a second negative sample image that correspond to different lesion objects;
the model acquisition module 15 is configured to obtain a sample matching model;
the identification module 14 is further configured to identify, based on the sample matching model, a positive sample prediction probability between the first positive sample image and the second positive sample image, obtain a positive sample probability between the first positive sample image and the second positive sample image, and determine a positive sample error between the positive sample prediction probability and the positive sample probability;
the identification module 14 is further configured to identify, based on the sample matching model, a negative sample prediction probability between the first negative sample image and the second negative sample image, obtain a negative sample probability between the first negative sample image and the second negative sample image, and determine a negative sample error between the negative sample prediction probability and the negative sample probability;
the identification module 14 is further configured to adjust the sample matching model according to the positive sample error and the negative sample error;
the identification module 14 is further configured to determine the adjusted sample matching model as the target matching model when the adjusted sample matching model satisfies the convergence condition.
For the specific functional implementations of the identification module 14 and the model acquisition module 15, refer to step S305 in the embodiment corresponding to FIG. 5; details are not repeated here.
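A minimal PyTorch training loop matching this description is sketched below; it assumes, for illustration only, a model that maps an image pair to a matching probability in (0, 1) and a loader yielding (image_a, image_b, label) with label 1 for the same lesion and label 0 for different lesions:

import torch
import torch.nn as nn

def train_matching_model(model, loader, epochs=10, lr=1e-4):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCELoss()  # error between predicted and true pair probability
    for _ in range(epochs):
        for img_a, img_b, label in loader:
            prob = model(img_a, img_b)           # predicted matching probability
            loss = loss_fn(prob, label.float())  # positive and negative sample error
            opt.zero_grad()
            loss.backward()                      # adjust the sample matching model
            opt.step()
    return model  # once converged, this becomes the target matching model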
Referring to FIG. 8, the model acquisition module 15 may include: a model acquisition unit 151 and a generation unit 152.
The model acquisition unit 151 is configured to obtain an original matching model, the original matching model having been trained on non-biological-tissue images;
the generation unit 152 is configured to extract target model parameters from the model parameters contained in the original matching model, and generate the sample matching model based on transfer learning and the target model parameters.
For the specific functional implementations of the model acquisition unit 151 and the generation unit 152, refer to step S305 in the embodiment corresponding to FIG. 5; details are not repeated here.
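The transfer-learning step can be sketched as copying compatible parameters from the original matching model into the sample matching model; the checkpoint path and the assumption that it stores a state dict are illustrative:

import torch

def init_from_original(model, checkpoint_path="original_matching_model.pt"):
    source = torch.load(checkpoint_path, map_location="cpu")
    target = model.state_dict()
    # Copy only the target model parameters whose names and shapes line up.
    target.update({k: v for k, v in source.items()
                   if k in target and v.shape == target[k].shape})
    model.load_state_dict(target)
    return model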
Further, refer to FIG. 9, which is a schematic structural diagram of an electronic device according to an embodiment of this application. The terminal device in the embodiments corresponding to FIG. 3 to FIG. 7a may be the electronic device 1000. As shown in FIG. 9, the electronic device 1000 may include: a user interface 1002, a processor 1004, an encoder 1006, and a memory 1008. A signal receiver 1016 is configured to receive or send data via a cellular interface 1010, a WIFI interface 1012, ..., or an NFC interface 1014. The encoder 1006 encodes the received data into a computer-processable data format. The memory 1008 stores a computer program, and the processor 1004 is configured to perform the steps in any of the foregoing method embodiments through the computer program. The memory 1008 may include a volatile memory (for example, dynamic random access memory, DRAM), and may further include a non-volatile memory (for example, one-time programmable read-only memory, OTPROM). In some embodiments, the memory 1008 may further include memory disposed remotely from the processor 1004, and such remote memory may be connected to the electronic device 1000 through a network. The user interface 1002 may include a keyboard 1018 and a display 1020.
In the electronic device 1000 shown in FIG. 9, the processor 1004 may be configured to invoke the computer program stored in the memory 1008 to:
acquire a biological tissue image containing biological tissue, identify a first region of a lesion object of the biological tissue in the biological tissue image, and identify a lesion attribute matching the lesion object;
divide the image region of the biological tissue in the biological tissue image into multiple quadrant position regions; and
obtain target quadrant position information of the quadrant position region in which the first region is located, and generate medical service data according to the target quadrant position information and the lesion attribute.
It should be understood that the electronic device 1000 described in this embodiment of this application can perform the descriptions of the medical image processing method in the embodiments corresponding to FIG. 3 to FIG. 7b, and can also perform the description of the medical image processing apparatus 1 in the embodiment corresponding to FIG. 8; details are not repeated here. In addition, the description of the beneficial effects of the same method is not repeated either.
Furthermore, it should be noted that an embodiment of this application also provides a computer storage medium storing the computer program executed by the aforementioned medical image processing apparatus 1, the computer program including program instructions. When the processor executes the program instructions, it can perform the descriptions of the medical image processing method in the embodiments corresponding to FIG. 3 to FIG. 7b; details are therefore not repeated here, nor is the description of the beneficial effects of the same method. For technical details not disclosed in the computer storage medium embodiment of this application, refer to the descriptions of the method embodiments of this application.
A person of ordinary skill in the art can understand that all or part of the processes of the methods in the foregoing embodiments may be implemented by a computer program instructing relevant hardware. The program may be stored in a computer-readable storage medium and, when executed, may include the processes of the foregoing method embodiments. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.
What is disclosed above is merely exemplary embodiments of this application and is certainly not intended to limit the scope of the claims of this application; therefore, equivalent variations made in accordance with the claims of this application shall fall within the scope of this application.

Claims (15)

  1. A medical image processing method, performed by an electronic device, the method comprising:
    acquiring a biological tissue image containing biological tissue, identifying a first region of a lesion object of the biological tissue in the biological tissue image, and identifying a lesion attribute matching the lesion object;
    dividing an image region of the biological tissue in the biological tissue image into a plurality of quadrant position regions; and
    obtaining target quadrant position information of the quadrant position region in which the first region is located, and generating medical service data according to the target quadrant position information and the lesion attribute.
  2. The method according to claim 1, wherein the biological tissue comprises a first tissue object, and
    the dividing an image region of the biological tissue in the biological tissue image into a plurality of quadrant position regions comprises:
    identifying a second region of the first tissue object in the biological tissue image;
    determining a quadrant dividing line in the biological tissue image according to the second region; and
    taking the image region of the biological tissue in the biological tissue image as a tissue image region, and dividing the tissue image region into the plurality of quadrant position regions according to the quadrant dividing line.
  3. The method according to claim 2, wherein the biological tissue image comprises a first biological tissue image and a second biological tissue image, the first biological tissue image and the second biological tissue image being images of the biological tissue captured from different directions, and
    the identifying a second region of the first tissue object in the biological tissue image comprises:
    obtaining an image semantic segmentation model, and determining, based on the image semantic segmentation model, a first tissue identification region of the first tissue object in the first biological tissue image;
    determining, based on the image semantic segmentation model, a second tissue identification region of the first tissue object in the second biological tissue image; and
    determining the first tissue identification region and the second tissue identification region as the second region.
  4. The method according to claim 3, wherein the determining, based on the image semantic segmentation model, a first tissue identification region of the first tissue object in the first biological tissue image comprises:
    performing forward convolution and deconvolution on the first biological tissue image based on forward convolution layers and transposed convolution layers in the image semantic segmentation model, to obtain a convolution feature map;
    determining, according to the convolution feature map, an object attribute of each pixel in the first biological tissue image, the object attributes comprising a first tissue attribute; and
    taking an image region composed of the pixels belonging to the first tissue attribute as the first tissue identification region of the first tissue object in the first biological tissue image.
  5. The method according to claim 4, wherein the quadrant dividing line comprises a first dividing line corresponding to the first biological tissue image and a second dividing line corresponding to the second biological tissue image, the object attributes further comprise a second tissue attribute, and
    the determining a quadrant dividing line in the biological tissue image according to the second region comprises:
    obtaining an edge boundary line of the biological tissue in the first biological tissue image, and determining the first dividing line in the first biological tissue image according to the first tissue identification region and the edge boundary line;
    taking an image region composed of the pixels belonging to the second tissue attribute as an object region of a second tissue object in the second biological tissue image; and
    determining an object boundary line of the object region, and determining the second dividing line in the second biological tissue image according to the second tissue identification region and the object boundary line.
  6. The method according to claim 5, wherein the first biological tissue image is an image of the biological tissue captured in a craniocaudal direction, the second biological tissue image is an image of the biological tissue captured in a mediolateral oblique direction, and
    the taking the image region of the biological tissue in the biological tissue image as a tissue image region, and dividing the tissue image region into the plurality of quadrant position regions according to the quadrant dividing line comprises:
    taking the image region of the biological tissue in the first biological tissue image as a first tissue image region, and dividing, in the first biological tissue image, the first tissue image region into an inner quadrant position region and an outer quadrant position region according to the first dividing line;
    taking the image region of the biological tissue in the second biological tissue image as a second tissue image region, and dividing, in the second biological tissue image, the second tissue image region into an upper quadrant position region and a lower quadrant position region according to the second dividing line; and
    determining the inner quadrant position region, the outer quadrant position region, the upper quadrant position region, and the lower quadrant position region as the quadrant position regions.
  7. The method according to claim 1, wherein the first region comprises a first lesion region of the lesion object in a first biological tissue image and a second lesion region of the lesion object in a second biological tissue image, the biological tissue image comprises the first biological tissue image and the second biological tissue image, and
    the generating medical service data according to the target quadrant position information and the lesion attribute comprises:
    extracting a first sub-image corresponding to the first lesion region from the first biological tissue image;
    extracting a second sub-image corresponding to the second lesion region from the second biological tissue image;
    obtaining a target matching model, and identifying, based on the target matching model, a model matching probability between the first sub-image and the second sub-image; and
    obtaining a condition matching probability between the first lesion region and the second lesion region, and combining the target quadrant position information and the lesion attribute into the medical service data when the model matching probability and the condition matching probability satisfy a lesion matching condition.
  8. The method according to claim 7, wherein the obtaining a condition matching probability between the first lesion region and the second lesion region comprises:
    determining a first region size of the first lesion region in the first biological tissue image, determining a second region size of the second lesion region in the second biological tissue image, and generating a size matching probability according to the first region size and the second region size;
    determining a first region distance between the first lesion region and an image region corresponding to the first tissue object in the first biological tissue image, and determining a second region distance between the second lesion region and an image region corresponding to the first tissue object in the second biological tissue image;
    generating a distance matching probability according to the first region distance and the second region distance; and
    determining the size matching probability and the distance matching probability as the condition matching probability.
  9. The method according to claim 7, wherein there are a plurality of lesion objects in the biological tissue image, the model matching probability comprises unit model matching probabilities between first sub-images of the plurality of lesion objects and second sub-images of the plurality of lesion objects, the condition matching probability comprises unit condition matching probabilities between first lesion regions of the plurality of lesion objects and second lesion regions of the plurality of lesion objects, and
    the combining the target quadrant position information and the lesion attribute into the medical service data when the model matching probability and the condition matching probability satisfy a lesion matching condition comprises:
    selecting, from the plurality of unit model matching probabilities and the plurality of unit condition matching probabilities, a matching probability pair satisfying the lesion matching condition as a target matching probability pair, the target matching probability pair comprising one unit model matching probability and one unit condition matching probability; and
    when the unit model matching probability and the unit condition matching probability in the target matching probability pair satisfy the lesion matching condition, taking the lesion object of the target matching probability pair as a target lesion object, and combining the target quadrant position information of the target lesion object and the lesion attribute of the target lesion object into the medical service data.
  10. The method according to claim 7, further comprising:
    obtaining positive sample images and negative sample images, the positive sample images comprising a first positive sample image and a second positive sample image corresponding to a same lesion object, and the negative sample images comprising a first negative sample image and a second negative sample image corresponding to different lesion objects;
    obtaining a sample matching model, identifying, based on the sample matching model, a positive sample prediction probability between the first positive sample image and the second positive sample image, obtaining a positive sample probability between the first positive sample image and the second positive sample image, and determining a positive sample error between the positive sample prediction probability and the positive sample probability;
    identifying, based on the sample matching model, a negative sample prediction probability between the first negative sample image and the second negative sample image, obtaining a negative sample probability between the first negative sample image and the second negative sample image, and determining a negative sample error between the negative sample prediction probability and the negative sample probability;
    adjusting the sample matching model according to the positive sample error and the negative sample error; and
    determining the adjusted sample matching model as the target matching model when the adjusted sample matching model satisfies a convergence condition.
  11. The method according to claim 10, wherein the obtaining a sample matching model comprises:
    obtaining an original matching model, the original matching model having been trained on non-biological-tissue images; and
    extracting target model parameters from model parameters contained in the original matching model, and generating the sample matching model based on transfer learning and the target model parameters.
  12. A medical image processing apparatus, comprising:
    an image acquisition module, configured to acquire a biological tissue image containing biological tissue, identify a first region of a lesion object of the biological tissue in the biological tissue image, and identify a lesion attribute matching the lesion object;
    a division module, configured to divide an image region of the biological tissue in the biological tissue image into a plurality of quadrant position regions;
    the image acquisition module being further configured to obtain target quadrant position information of the quadrant position region in which the first region is located; and
    a generation module, configured to generate medical service data according to the target quadrant position information and the lesion attribute.
  13. An electronic medical device, comprising a biological tissue image collector and a biological tissue image analyzer, wherein:
    the biological tissue image collector acquires a biological tissue image containing biological tissue;
    the biological tissue image analyzer identifies a first region of a lesion object of the biological tissue in the biological tissue image, and identifies a lesion attribute matching the lesion object;
    the biological tissue image analyzer divides an image region of the biological tissue in the biological tissue image into a plurality of quadrant position regions; and
    the biological tissue image analyzer obtains target quadrant position information of the quadrant position region in which the first region is located, and generates medical service data according to the target quadrant position information and the lesion attribute.
  14. An electronic device, comprising a processor and a memory,
    the processor being connected to the memory, the memory being configured to store a computer program, and the processor being configured to invoke the computer program to perform the method according to any one of claims 1 to 11.
  15. A computer storage medium, storing a computer program, the computer program comprising program instructions that, when executed by a processor, perform the method according to any one of claims 1 to 11.
PCT/CN2020/084147 2019-05-22 2020-04-10 Medical image processing method and apparatus, electronic medical device, and storage medium WO2020233272A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP20809816.0A EP3975196A4 (en) 2019-05-22 2020-04-10 MEDICAL IMAGE PROCESSING METHOD AND APPARATUS, MEDICAL ELECTRONIC DEVICE AND STORAGE MEDIA
US17/367,280 US11984225B2 (en) 2019-05-22 2021-07-02 Medical image processing method and apparatus, electronic medical device, and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910429414.9 2019-05-22
CN201910429414.9A CN110136809B (zh) 2019-05-22 2019-05-22 Medical image processing method and apparatus, electronic medical device, and storage medium

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/367,280 Continuation US11984225B2 (en) 2019-05-22 2021-07-02 Medical image processing method and apparatus, electronic medical device, and storage medium

Publications (1)

Publication Number Publication Date
WO2020233272A1 true WO2020233272A1 (zh) 2020-11-26

Family

ID=67572156

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/084147 WO2020233272A1 (zh) 2019-05-22 2020-04-10 Medical image processing method and apparatus, electronic medical device, and storage medium

Country Status (4)

Country Link
US (1) US11984225B2 (zh)
EP (1) EP3975196A4 (zh)
CN (2) CN110491480B (zh)
WO (1) WO2020233272A1 (zh)

Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3646240A4 (en) 2017-06-26 2021-03-17 The Research Foundation for The State University of New York SYSTEM, PROCEDURE AND COMPUTER-ACCESSIBLE MEDIUM FOR VIRTUAL PANCREATOGRAPHY
CN110491480B (zh) 2019-05-22 2021-04-30 腾讯科技(深圳)有限公司 Medical image processing method and apparatus, electronic medical device, and storage medium
CN110490262B (zh) 2019-08-22 2022-06-03 京东方科技集团股份有限公司 Image processing model generation method, image processing method, apparatus, and electronic device
CN110647947B (zh) * 2019-09-30 2022-04-15 杭州依图医疗技术有限公司 Lesion fusion method and apparatus
CN110852258A (zh) * 2019-11-08 2020-02-28 北京字节跳动网络技术有限公司 Object detection method, apparatus, device, and storage medium
CN111128347A (zh) * 2019-12-10 2020-05-08 青岛海信医疗设备股份有限公司 Medical image display method and communication terminal
CN110956218A (zh) * 2019-12-10 2020-04-03 同济人工智能研究院(苏州)有限公司 Heatmap-based method for generating soccer-ball candidate points for Nao robot object detection
CN111755118B (zh) * 2020-03-16 2024-03-08 腾讯科技(深圳)有限公司 Medical information processing method, apparatus, electronic device, and storage medium
CN113435233B (zh) * 2020-03-23 2024-03-05 北京金山云网络技术有限公司 Pornographic image recognition method, system, and electronic device
CN111462082A (zh) * 2020-03-31 2020-07-28 重庆金山医疗技术研究院有限公司 Lesion image recognition apparatus, method, device, and readable storage medium
CN111611947B (zh) * 2020-05-25 2024-04-09 济南博观智能科技有限公司 License plate detection method, apparatus, device, and medium
CN111768878A (zh) * 2020-06-30 2020-10-13 杭州依图医疗技术有限公司 Method for visually guiding to a lesion and computer-readable storage medium
CN111863204A (zh) * 2020-07-22 2020-10-30 北京青燕祥云科技有限公司 AI-assisted diagnosis method and system for breast diseases based on mammography examination
CN112712878A (zh) * 2020-12-30 2021-04-27 四川桑瑞思环境技术工程有限公司 Digital operating room system and control method
CN113487480A (zh) * 2021-06-30 2021-10-08 北京达佳互联信息技术有限公司 Image processing method, apparatus, electronic device, and computer-readable storage medium
CN113570567A (zh) * 2021-07-23 2021-10-29 无锡祥生医疗科技股份有限公司 Method, apparatus, and storage medium for monitoring target tissue in ultrasound images
CN113764077B (zh) * 2021-07-27 2024-04-19 上海思路迪生物医学科技有限公司 Pathological image processing method, apparatus, electronic device, and storage medium
WO2023047360A1 (en) * 2021-09-23 2023-03-30 The Joan and Irwin Jacobs Technion-Cornell Institute Multi-stage machine learning techniques for profiling hair and uses thereof
CN114424948B (zh) * 2021-12-15 2024-05-24 上海交通大学医学院附属瑞金医院 Distributed ultrasound scanning system and communication method
CN114429459A (zh) * 2022-01-24 2022-05-03 上海商汤智能科技有限公司 Training method for an object detection model and corresponding detection method


Family Cites Families (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1839762A (zh) * 2005-03-30 2006-10-04 林礼务 Ultrasonic anatomical positioning workstation
CN101727537A (zh) * 2009-11-16 2010-06-09 杭州电子科技大学 Computer determination method for breast CR images based on dual-view information fusion
US8064677B2 (en) * 2009-11-25 2011-11-22 Fujifilm Corporation Systems and methods for measurement of objects of interest in medical images
US20120014578A1 (en) * 2010-07-19 2012-01-19 Qview Medical, Inc. Computer Aided Detection Of Abnormalities In Volumetric Breast Ultrasound Scans And User Interface
CN105979875B (zh) * 2014-02-04 2019-12-31 皇家飞利浦有限公司 Medical imaging device, method, apparatus, and computer-readable medium for generating a breast parameter map
CN103815926B (zh) * 2014-03-07 2016-04-27 杭州千思科技有限公司 Breast cancer detection method and apparatus
CN204033359U (zh) * 2014-08-01 2014-12-24 湖北省肿瘤医院 Calcification localization device based on dual-view mammography
US9959617B2 (en) 2016-01-28 2018-05-01 Taihao Medical Inc. Medical image processing apparatus and breast image processing method thereof
CN106339591B (zh) * 2016-08-25 2019-04-02 汤一平 Self-service health cloud service system for breast cancer prevention based on a deep convolutional neural network
CN106682435B (zh) * 2016-12-31 2021-01-29 西安百利信息科技有限公司 System and method for automatically detecting lesions in medical images through multi-model fusion
US10452927B2 (en) * 2017-08-09 2019-10-22 Ydrive, Inc. Object localization within a semantic domain
CA3115898C (en) * 2017-10-11 2023-09-26 Aquifi, Inc. Systems and methods for object identification
CN107680678B (zh) * 2017-10-18 2020-12-01 北京航空航天大学 Thyroid ultrasound image nodule diagnosis system based on multi-scale convolutional neural networks
CN108304841B (zh) * 2018-01-26 2022-03-08 腾讯科技(深圳)有限公司 Nipple localization method, apparatus, and storage medium
CN108427951B (zh) * 2018-02-08 2023-08-04 腾讯科技(深圳)有限公司 Image processing method, apparatus, storage medium, and computer device
CN110363210B (zh) * 2018-04-10 2023-05-05 腾讯科技(深圳)有限公司 Training method and server for an image semantic segmentation model
US10789288B1 (en) * 2018-05-17 2020-09-29 Shutterstock, Inc. Relational model based natural language querying to identify object relationships in scene
CN108682015B (zh) * 2018-05-28 2021-10-19 安徽科大讯飞医疗信息技术有限公司 Lesion segmentation method, apparatus, device, and storage medium for biological images
CN109087327B (zh) * 2018-07-13 2021-07-06 天津大学 Thyroid nodule ultrasound image segmentation method using a cascaded fully convolutional neural network
CN109461495B (zh) * 2018-11-01 2023-04-14 腾讯科技(深圳)有限公司 Medical image recognition method, model training method, and server
CN109493343A (zh) * 2018-12-29 2019-03-19 上海鹰瞳医疗科技有限公司 Medical image abnormal region segmentation method and device
CN109949271B (zh) 2019-02-14 2021-03-16 腾讯科技(深圳)有限公司 Medical-image-based detection method, model training method, and apparatus
CN110033456B (zh) 2019-03-07 2021-07-09 腾讯科技(深圳)有限公司 Medical image processing method, apparatus, device, and system
CN114041149A (zh) * 2019-04-11 2022-02-11 安捷伦科技有限公司 User interface configured to facilitate user annotation for instance segmentation within a biological sample
US11763558B1 (en) * 2019-04-19 2023-09-19 Apple Inc. Visualization of existing photo or video content
US10885386B1 (en) * 2019-09-16 2021-01-05 The Boeing Company Systems and methods for automatically generating training image sets for an object
TWI790560B (zh) * 2021-03-03 2023-01-21 宏碁股份有限公司 Side-by-side image detection method and electronic device using the same

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170053403A1 (en) * 2014-05-06 2017-02-23 Siemens Healthcare Gmbh Evaluation of an x-ray image of a breast produced during a mammography
CN109583440A (zh) * 2017-09-28 2019-04-05 北京西格码列顿信息技术有限公司 Medical image aided diagnosis method and system combining image recognition and report editing
CN109002846A (zh) * 2018-07-04 2018-12-14 腾讯科技(深圳)有限公司 Image recognition method, apparatus, and storage medium
CN109447966A (zh) * 2018-10-26 2019-03-08 科大讯飞股份有限公司 Lesion localization and recognition method, apparatus, device, and storage medium for medical images
CN110136809A (zh) * 2019-05-22 2019-08-16 腾讯科技(深圳)有限公司 Medical image processing method and apparatus, electronic medical device, and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
XU, Weidong: "乳腺X线图象计算机辅助诊断算法研究" (non-official translation: Research on Computer Aided Diagnosis for Mammography), Medicine & Public Health, Chinese Selected Doctoral Dissertations and Master's Theses Full-Text Databases, no. 09, 15 September 2006, XP055756590 *
JIANG, Yuefeng: "乳腺X线图象计算机辅助诊断算法研究" (non-official translation: Research on Computer Aided Diagnosis for Mammography), Chinese Master's Theses Full-Text Database, Medicine & Health Sciences, no. 01, 15 June 2002, XP55756590 *

Also Published As

Publication number Publication date
EP3975196A4 (en) 2022-10-19
EP3975196A1 (en) 2022-03-30
CN110136809A (zh) 2019-08-16
CN110136809B (zh) 2022-12-27
CN110491480A (zh) 2019-11-22
US11984225B2 (en) 2024-05-14
CN110491480B (zh) 2021-04-30
US20210343016A1 (en) 2021-11-04


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20809816

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2020809816

Country of ref document: EP

Effective date: 20211222