WO2023010797A1 - Biliary-pancreatic ultrasound image recognition method, device, and server

Biliary-pancreatic ultrasound image recognition method, device, and server

Info

Publication number
WO2023010797A1
Authority
WO
WIPO (PCT)
Prior art keywords
pancreatic
biliary
pancreas
anatomical structure
biliopancreatic
Prior art date
Application number
PCT/CN2021/143710
Other languages
English (en)
French (fr)
Inventor
郑碧清
姚理文
胡珊
刘奇为
Original Assignee
武汉楚精灵医疗科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 武汉楚精灵医疗科技有限公司
Publication of WO2023010797A1


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/0002 - Inspection of images, e.g. flaw detection
    • G06T7/0012 - Biomedical image inspection
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B8/00 - Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/08 - Detecting organic movements or changes, e.g. tumours, cysts, swellings
    • A61B8/0833 - Detecting organic movements or changes involving detecting or locating foreign bodies or organic structures
    • A61B8/085 - Detecting organic movements or changes for locating body or organic structures, e.g. tumours, calculi, blood vessels, nodules
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B8/00 - Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/52 - Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/5215 - Devices using data or image processing involving processing of medical diagnostic data
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10132 - Ultrasound image

Definitions

  • the present application relates to the field of medical technology assistance, in particular to a method, device, and server for biliary-pancreatic ultrasound image recognition.
  • Biliary-pancreatic endoscopic ultrasonography is an important means of diagnosis and treatment of biliary-pancreatic system diseases.
  • the basis of endoscopic ultrasound diagnosis and treatment is that doctors use ultrasound images to accurately identify and locate biliopancreatic structures.
  • biliopancreatic ultrasound images are cross-sectional images of human tissue, which mainly contain texture information that is difficult for the human eye to recognize;
  • for endoscopists who lack professional training and long-term practice, it is difficult to accurately identify the anatomical landmarks in the images, which greatly affects the accuracy of identifying and locating biliopancreatic structures at the biliopancreatic sites.
  • the present application provides a biliary-pancreatic ultrasound image recognition method, device, and server, aiming to solve the problem in the prior art that the biliary-pancreatic anatomical structures of the human biliary-pancreatic structure cannot be well distinguished.
  • an embodiment of the present application provides a method for recognizing ultrasound images of gallbladder and pancreas, including:
  • Using a preset biliary-pancreatic site identification model, perform biliary-pancreatic site identification on the multiple biliary-pancreatic ultrasound images of the biliary-pancreatic structure and determine the multiple biliary-pancreatic sites corresponding to the multiple biliary-pancreatic ultrasound images, so as to obtain multiple first biliary-pancreatic ultrasound images, wherein the biliary-pancreatic site corresponding to each of the multiple first biliary-pancreatic ultrasound images is determined;
  • Using a preset biliary-pancreatic anatomical structure recognition model, perform biliary-pancreatic anatomical structure recognition on the multiple first biliary-pancreatic ultrasound images, and determine second biliary-pancreatic ultrasound images in which the biliary-pancreatic anatomical structure is identifiable, and third biliary-pancreatic ultrasound images in which it is not, among the multiple first biliary-pancreatic ultrasound images;
  • Position recognition is performed on the third biliary-pancreatic ultrasound image, and the biliary-pancreatic anatomical structure in the third biliary-pancreatic ultrasound image is determined.
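  • For illustration only, the following Python sketch outlines the four-step flow recited above (crop each frame to its effective area, identify the site, identify the anatomy, and fall back to position-based recognition); the function arguments are hypothetical placeholders supplied by the caller and are not code from the present application.

    from typing import Callable, Dict, List, Sequence

    def recognize_biliopancreatic_images(
        raw_images: Sequence,
        crop_effective_region: Callable,   # step 1: effective-area cropping
        identify_site: Callable,           # step 2: biliary-pancreatic site identification
        identify_anatomy: Callable,        # step 3: anatomical structure recognition
        locate_by_position: Callable,      # step 4: position-based recognition fallback
    ) -> List[Dict]:
        """Hedged sketch of the recognition flow; each step is supplied as a callable."""
        results = []
        for raw in raw_images:
            image = crop_effective_region(raw)          # first biliary-pancreatic ultrasound image
            site = identify_site(image)                 # one site per image
            structures = identify_anatomy(image, site)  # may be empty (the "third image" case)
            if not structures:
                structures = locate_by_position(image, site)
            results.append({"site": site, "structures": structures})
        return results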
  • the acquiring multiple biliary-pancreatic ultrasound images of the human biliary-pancreatic structure to be identified includes:
  • the multiple initial biliary-pancreatic ultrasound images of the biliopancreatic structure to be identified are respectively cut with the multiple horizontal circumscribed rectangles to obtain multiple biliary-pancreatic ultrasound images for subsequent identification.
  • the using the preset biliary-pancreatic site identification model to perform biliary-pancreatic site identification on the multiple biliary-pancreatic ultrasound images of the biliary-pancreatic structure and determine the multiple biliary-pancreatic sites corresponding to the multiple biliary-pancreatic ultrasound images, so as to obtain multiple first biliary-pancreatic ultrasound images, includes:
  • using the multiple biliary-pancreatic site identification models to respectively perform biliary-pancreatic site identification on the multiple biliary-pancreatic ultrasound images of the biliary-pancreatic structure, so as to determine the biliary-pancreatic site corresponding to each first biliary-pancreatic ultrasound image among the multiple biliary-pancreatic ultrasound images;
  • the multiple preset initial models of biliary-pancreatic sites are trained respectively to obtain multiple biliary-pancreatic site identification models to respectively identify different biliary-pancreatic sites, including:
  • ResNet neural network models are trained to obtain multiple bile-pancreatic site recognition models, and different bile-pancreatic site recognition models identify different bile-pancreatic sites.
  • there are eight preset initial models for biliary-pancreatic site identification;
  • the training the multiple preset initial models for biliary-pancreatic site identification respectively to obtain multiple biliary-pancreatic site identification models that respectively identify different biliary-pancreatic sites includes:
  • training the eight preset initial models for biliary-pancreatic site identification respectively to obtain eight biliary-pancreatic site identification models; wherein the eight biliary-pancreatic site identification models are used to respectively identify, in the multiple biliary-pancreatic ultrasound images of the biliary-pancreatic structure:
  • the abdominal aorta station, the gastric cavity-pancreatic body station, the gastric cavity-pancreatic tail station, the confluence station, the first porta hepatis station, the gastric cavity-pancreatic head station, the duodenal bulb station, and the duodenal descending part station.
  • the determining the corresponding effective area of each initial biliary-pancreatic ultrasonic image in the plurality of initial biliary-pancreatic ultrasonic images to obtain a plurality of effective areas includes:
  • Training the UNet++ image neural network model so as to use the trained UNet++ image neural network model to identify the effective areas corresponding to each initial biliary-pancreatic ultrasound image in the plurality of initial biliary-pancreatic ultrasound images, and obtain multiple effective areas.
  • the using the preset biliary-pancreatic anatomical structure recognition model to perform biliary-pancreatic anatomical structure recognition on the multiple first biliary-pancreatic ultrasound images and determine the second biliary-pancreatic ultrasound images in which the biliary-pancreatic anatomy is identifiable and the third biliary-pancreatic ultrasound images in which the biliary-pancreatic anatomy is not identifiable among the multiple first biliary-pancreatic ultrasound images includes:
  • the images among the multiple first biliary-pancreatic ultrasound images in which the biliary-pancreatic anatomical structure is identifiable are the second biliary-pancreatic ultrasound images;
  • the images among the multiple first biliary-pancreatic ultrasound images in which the biliary-pancreatic anatomical structure is not identifiable are the third biliary-pancreatic ultrasound images.
  • the performing position recognition on the third biliary-pancreatic ultrasound image and determining the biliary-pancreatic anatomical structure in the third biliary-pancreatic ultrasound image include:
  • each target area in the plurality of target areas is surrounded by a plurality of initial edge points
  • the biliopancreatic anatomical structure corresponding to the target area is the biliopancreatic anatomical structure corresponding to the third biliopancreatic ultrasonic image.
  • the determining, according to the coordinates of the multiple initial edge points corresponding to each of the multiple target areas, the center point coordinates corresponding to each of the multiple target areas includes:
  • determining, according to the multiple edge point coordinates, the center point coordinates corresponding to each of the multiple target areas.
  • performing preset thinning and homogenization processing on the multiple initial edge points to obtain multiple edge points includes:
  • a new edge point is inserted between any two adjacent initial edge points whose distance is greater than 10 pixels, so as to obtain the plurality of edge points.
  • the determining the corresponding center point coordinates of each target area in the multiple target areas according to the plurality of edge point coordinates includes:
  • the anatomical structures are of seven types, and the acquisition of a plurality of preset recognition models of bile-pancreatic anatomical structures includes:
  • the embodiment of the present application also provides a biliopancreatic ultrasound image recognition device, and the biliopancreatic ultrasound image recognition device includes:
  • An acquisition module configured to acquire multiple biliary-pancreatic ultrasound images of the human biliary-pancreatic structure to be identified
  • the first recognition module is configured to use a preset biliary-pancreatic site identification model to perform biliary-pancreatic site identification on the multiple biliary-pancreatic ultrasound images of the biliary-pancreatic structure and determine the multiple biliary-pancreatic sites corresponding to the multiple biliary-pancreatic ultrasound images, so as to obtain multiple first biliary-pancreatic ultrasound images, wherein the biliary-pancreatic site corresponding to each of the multiple first biliary-pancreatic ultrasound images is determined;
  • the second recognition module is configured to use a preset biliary-pancreatic anatomical structure recognition model to perform biliary-pancreatic anatomical structure recognition on the multiple first biliary-pancreatic ultrasound images, and determine the second biliary-pancreatic ultrasound images in which the biliary-pancreatic anatomical structure is identifiable and the third biliary-pancreatic ultrasound images in which it is not identifiable among the multiple first biliary-pancreatic ultrasound images;
  • the positioning module is configured to perform position recognition on the third biliary-pancreatic ultrasound image, and determine the biliary-pancreatic anatomical structure in the third biliary-pancreatic ultrasound image.
  • the acquisition module is specifically configured to: acquire multiple initial biliary-pancreatic ultrasound images of the human biliary-pancreatic structure to be identified;
  • the multiple initial biliary-pancreatic ultrasound images of the biliopancreatic structure to be identified are respectively cut with the multiple horizontal circumscribed rectangles to obtain multiple biliary-pancreatic ultrasound images for subsequent identification.
  • the first identification module is specifically configured to: acquire a plurality of preset initial models for identifying biliopancreatic sites;
  • the multiple biliary-pancreatic site identification models respectively perform biliary-pancreatic site identification on the multiple biliary-pancreatic ultrasonic images of the biliary-pancreatic structure, so as to determine each first biliary-pancreatic ultrasonic image in the multiple biliary-pancreatic ultrasonic images Corresponding biliary and pancreatic sites;
  • the number of preset initial models for biliary-pancreatic site identification is eight; the first recognition module is specifically configured to:
  • train the eight preset initial models for biliary-pancreatic site identification respectively to obtain eight biliary-pancreatic site identification models; wherein the eight biliary-pancreatic site identification models are used to respectively identify, in the multiple biliary-pancreatic ultrasound images of the biliary-pancreatic structure:
  • the abdominal aorta station, the gastric cavity-pancreatic body station, the gastric cavity-pancreatic tail station, the confluence station, the first porta hepatis station, the gastric cavity-pancreatic head station, the duodenal bulb station, and the duodenal descending part station.
  • the second identification module is specifically configured to:
  • the images among the multiple first biliary-pancreatic ultrasound images in which the biliary-pancreatic anatomical structure is identifiable are the second biliary-pancreatic ultrasound images;
  • the images among the multiple first biliary-pancreatic ultrasound images in which the biliary-pancreatic anatomical structure is not identifiable are the third biliary-pancreatic ultrasound images.
  • the positioning module is specifically used for:
  • each target area in the plurality of target areas is surrounded by a plurality of initial edge points
  • the biliopancreatic anatomical structure corresponding to the target area is the biliopancreatic anatomical structure corresponding to the third biliopancreatic ultrasound image.
  • the positioning module is specifically used for:
  • the plurality of edge point coordinates determine the corresponding center point coordinates of each target area in the plurality of target areas.
  • the present application also provides a server, and the server includes:
  • one or more processors; a memory; and
  • one or more application programs, wherein the one or more application programs are stored in the memory and are configured to be executed by the processor to implement the biliary-pancreatic ultrasound image recognition method described in any one of the above items.
  • the present application also provides a computer-readable storage medium on which a computer program is stored, and the computer program is loaded by a processor so as to execute the steps in any one of the biliary-pancreatic ultrasound image recognition methods described above.
  • This application provides a method, device, and server for biliary-pancreatic ultrasonic image recognition.
  • the unrecognizable biliary-pancreatic anatomical structure is determined, and based on the known positional relationships among multiple biliary-pancreatic anatomical structures,
  • the position of the unrecognized biliary-pancreatic anatomical structure in the biliary-pancreatic ultrasound image is determined, so as to identify the biliary-pancreatic anatomical structure.
  • This method combines the existing biliary-pancreatic site information, the image features of the biliary-pancreatic anatomical structures, and the position coordinates of the biliary-pancreatic anatomical structures to comprehensively identify and label the anatomical structures of the biliary-pancreatic structure, which significantly reduces the difficulty of recognizing biliary-pancreatic anatomical structures in ultrasound images.
  • FIG. 1 is a schematic diagram of a lesion recognition scene provided by an embodiment of the present application
  • FIG. 2 is a schematic flow chart of an embodiment of a method for recognizing ultrasound images of gallbladder and pancreas provided by an embodiment of the present application;
  • Fig. 3 is a schematic diagram of an embodiment of the standard eight stations and the corresponding biliary-pancreatic anatomical structures provided by the embodiment of the present application;
  • Fig. 4 is a schematic diagram of an embodiment of the biliary-pancreatic anatomical structures provided by the embodiment of the present application;
  • Fig. 5 is a schematic diagram of an embodiment of an identifiable biliary-pancreatic anatomical structure provided by the embodiment of the present application;
  • Fig. 6 is a schematic diagram of an embodiment of an unidentifiable biliary-pancreatic anatomical structure provided by the embodiment of the present application;
  • FIG. 7 is a schematic flow diagram of an embodiment of acquiring biliary-pancreatic ultrasound images provided by the embodiment of the present application.
  • Fig. 8 is a schematic flow chart of an embodiment of bile-pancreas site identification provided by the embodiment of the present application.
  • FIG. 9 is a schematic flowchart of an embodiment of location identification provided by the embodiment of the present application.
  • Fig. 10 is a schematic diagram of an embodiment of the recognition situation of different bile-pancreatic anatomical structure recognition models provided by the embodiment of the present application;
  • FIG. 11 is a schematic diagram of an embodiment of the positional relationship provided by the embodiment of the present application.
  • Fig. 12 is a schematic diagram of an embodiment of a biliopancreatic ultrasound image recognition device provided in an embodiment of the present application.
  • FIG. 13 shows a schematic structural diagram of the server involved in the embodiment of the present application.
  • first and second are used for descriptive purposes only, and cannot be interpreted as indicating or implying relative importance or implicitly specifying the quantity of indicated technical features.
  • a feature defined as “first” or “second” may explicitly or implicitly include one or more of said features.
  • “plurality” means two or more, unless otherwise specifically defined.
  • Embodiments of the present application provide a method, device, and server for biliopancreatic ultrasound image recognition, which will be described in detail below.
  • As shown in FIG. 1, it is a schematic diagram of the scene of the biliary-pancreatic ultrasound image recognition system provided by the embodiment of the present application.
  • the biliary-pancreatic ultrasound image recognition system may include multiple terminals 100 and servers 200; the terminals 100, the servers 200, and the terminals 100 and the servers 200 are connected and communicate through the Internet composed of various gateways and the like, which will not be repeated here.
  • the terminal 100 may include a detection terminal 101, a user terminal 102, and the like.
  • the server 200 is mainly used to: obtain multiple biliary-pancreatic ultrasound images of the human biliary-pancreatic structure to be identified; use the preset biliary-pancreatic site identification model to perform biliary-pancreatic site identification on the multiple biliary-pancreatic ultrasound images of the biliary-pancreatic structure;
  • determine the multiple biliary-pancreatic sites corresponding to the multiple biliary-pancreatic ultrasound images to obtain multiple first biliary-pancreatic ultrasound images, wherein the biliary-pancreatic site corresponding to each of the multiple first biliary-pancreatic ultrasound images is determined; and use the preset biliary-pancreatic anatomical structure recognition model to perform biliary-pancreatic anatomical structure recognition on the multiple first biliary-pancreatic ultrasound images, and determine the second biliary-pancreatic ultrasound images in which the biliary-pancreatic anatomical structure is identifiable, and the third biliary-pancreatic ultrasound images in which it is not, among the multiple first biliary-pancreatic ultrasound images.
  • the server 200 can be an independent server, or a server network or server cluster composed of servers.
  • the server 200 described in the embodiment of the present invention includes, but is not limited to, a computer, a network host, a single web server, a set of multiple web servers, or a cloud server composed of multiple servers.
  • the cloud server is composed of a large number of computers or network servers based on cloud computing (Cloud Computing).
  • any communication method can be used between the server and the terminal, including but not limited to, based on the 3rd Generation Partnership Project (3rd Generation Partnership Project, 3GPP), Long Term Evolution (Long Term Evolution, LTE) , Worldwide Interoperability for Microwave Access (WiMAX) mobile communication, or a computer based on TCP/IP Protocol Suite (TCP/IP Protocol Suite, TCP/IP), User Datagram Protocol (User Datagram Protocol, UDP) network communication, etc.
  • the terminal 100 used in the embodiment of the present invention may be a device including both receiving and transmitting hardware, that is, a device having receiving and transmitting hardware capable of performing bidirectional communication on a bidirectional communication link.
  • a terminal may include a cellular or other communication device having a single line display or a multi-line display or a cellular or other communication device without a multi-line display.
  • the detection terminal 101 is mainly responsible for collecting endoscopic images of parts to be detected in the human body
  • the collection equipment on the detection terminal may include a magnetic resonance imager (MRI, Magnetic Resonance Imaging), computerized tomography equipment (CT, Computed Tomography), colposcope or endoscope and other electronic equipment.
  • the image acquisition device may be a biliary-pancreatic ultrasound endoscope, which is mainly used to acquire biliary-pancreatic ultrasound images of the human body's bile-pancreatic structure.
  • User terminals 102 include, but are not limited to, portable terminals such as mobile phones and tablets, fixed terminals such as computers and query machines, and various virtual terminals; they mainly provide functions such as uploading biliary-pancreatic ultrasound images, processing them, and displaying the corresponding processing results.
  • FIG. 1 is only an application scenario related to the solution of this application and does not constitute a limitation on the application scenario of the solution of this application; for example, only one server and two terminals are shown in FIG. 1, but it can be understood that the lesion recognition scene may also include one or more other servers and/or one or more terminals connected to the server network, which is not specifically limited here.
  • the biliary-pancreatic ultrasound image recognition system may further include a memory 300 for storing data, such as storing image data, for example, image data of the part to be detected acquired by the terminal.
  • the storage 300 may include a local database and/or a cloud database.
  • As shown in FIG. 2, it is a schematic flow chart of an embodiment of a biliary-pancreatic ultrasound image recognition method provided by the embodiment of the present application, which may include:
  • the biliary-pancreatic ultrasound image recognition method provided in the embodiment of the present application mainly recognizes the biliary-pancreatic structure of the human body, so that doctors can determine lesions of the biliary-pancreatic structure according to the biliary-pancreatic ultrasound images corresponding to the biliary-pancreatic structure.
  • multiple biliary-pancreatic ultrasound images corresponding to the biliary-pancreatic structure can be directly obtained by biliary-pancreatic endoscopic ultrasonography.
  • Using the preset biliary-pancreatic site identification model, biliary-pancreatic site identification is performed on the biliary-pancreatic ultrasound images of the biliary-pancreatic structure, the multiple biliary-pancreatic sites corresponding to the multiple biliary-pancreatic ultrasound images are determined, and multiple first biliary-pancreatic ultrasound images are obtained.
  • the standard scanning of biliary-pancreatic endoscopic ultrasonography is divided into multiple biliary-pancreatic sites and multiple biliary-pancreatic anatomical structures, and the doctor needs to complete the scanning of all biliary-pancreatic sites and the identification of all biliary-pancreatic anatomical structures to ensure a comprehensive observation of the biliary-pancreatic system.
  • the preset biliary-pancreatic site identification model can be used to first perform biliary-pancreatic site identification on the biliary-pancreatic ultrasound images of the biliary-pancreatic structure and determine the multiple biliary-pancreatic sites corresponding to the multiple biliary-pancreatic ultrasound images; one biliary-pancreatic ultrasound image corresponds to only one biliary-pancreatic site.
  • in this way, multiple first biliary-pancreatic ultrasound images can be obtained, and the biliary-pancreatic site corresponding to each first biliary-pancreatic ultrasound image is also identified.
  • As shown in FIG. 3, it is a schematic diagram of an embodiment of the standard eight stations and the corresponding biliary-pancreatic anatomical structures provided in the embodiment of the present application.
  • the abdominal aortic station also corresponds to three biliopancreatic anatomical structures, namely: abdominal aorta, celiac trunk and superior mesenteric artery.
  • the gastric cavity-pancreatic body station also includes the bile-pancreatic anatomical structures of the splenic artery and vein and the pancreatic body.
  • the bile-pancreas anatomy corresponding to each station is different.
  • the preset biliopancreatic anatomical structure recognition model can also be used to perform biliopancreatic anatomical structure recognition again on the plurality of first biliopancreatic ultrasound images obtained after biliopancreatic site recognition has been performed.
  • with the preset biliary-pancreatic anatomical structure recognition model, only part of the biliary-pancreatic anatomical structures can be identified; that is, using the preset biliary-pancreatic anatomical structure recognition model, the second biliary-pancreatic ultrasound images in which the biliary-pancreatic anatomy is identifiable, and the third biliary-pancreatic ultrasound images in which the biliary-pancreatic anatomy is not identifiable, can be determined among the multiple first biliary-pancreatic ultrasound images.
  • As shown in FIG. 4, it is a schematic diagram of an embodiment of the biliary-pancreatic anatomical structures provided in the embodiment of the present application.
  • in some cases, biliary-pancreatic site identification is not performed on the ultrasound images of the biliary-pancreatic structure, and the biliary-pancreatic ultrasound images are directly used to identify the biliary-pancreatic anatomical structure, so as to classify the biliary-pancreatic structure according to the biliary-pancreatic anatomy.
  • As shown in FIG. 5, it is a schematic diagram of an embodiment of an identifiable biliary-pancreatic anatomical structure provided in the embodiment of the present application.
  • as shown in FIG. 5, by using the aforementioned biliary-pancreatic site identification and biliary-pancreatic anatomical structure recognition, different biliary-pancreatic anatomical structures corresponding to different biliary-pancreatic sites can be effectively identified, but not all biliary-pancreatic anatomical structures can be identified.
  • for example, the first porta hepatis station corresponds to three biliary-pancreatic anatomical structures: the liver, the portal vein, and the bile duct.
  • FIG. 5 shows the identifiable biliary-pancreatic anatomical structures, while FIG. 6 shows the biliary-pancreatic anatomical structures that are not identified.
  • the identifiable image of the biliary-pancreatic anatomical structure among the multiple first biliary-pancreatic ultrasonic images is the second biliary-pancreatic ultrasonic image, and the image in which the biliary-pancreatic anatomical structure is not identifiable is the third biliary-pancreatic ultrasonic image; Subsequent identification is only required on the third biliary-pancreatic ultrasound image to confirm biliary-pancreatic anatomy.
  • position recognition may be performed on the third ultrasonic biliary-pancreatic image to determine the anatomical structure of the third biliary-pancreatic ultrasonic image that could not be identified originally.
  • the biliary-pancreatic ultrasound image recognition method determines the unidentifiable biliary-pancreatic anatomical structures through the existing biliary-pancreatic site identification and biliary-pancreatic anatomical structure recognition, and determines the positions of the unrecognized biliary-pancreatic anatomical structures in the biliary-pancreatic ultrasound image based on the known positional relationships among multiple biliary-pancreatic anatomical structures, so as to identify the biliary-pancreatic anatomical structures.
  • This method combines the existing biliary-pancreatic site information, the image features of the biliary-pancreatic anatomical structures, and the position coordinates of the biliary-pancreatic anatomical structures to comprehensively identify and label the anatomical structures of the biliary-pancreatic structure, which significantly reduces the difficulty of recognizing biliary-pancreatic anatomical structures in ultrasound images.
  • As shown in FIG. 7, it is a schematic flow chart of an embodiment of acquiring biliary-pancreatic ultrasound images provided by the embodiment of the present application, which may include:
  • the initial biliary-pancreatic ultrasound images of the biliary-pancreatic structure obtained by biliary-pancreatic endoscopic ultrasonography contain a lot of redundant information, which affects the subsequent biliary-pancreatic site identification and biliary-pancreatic anatomical structure recognition, so the redundant information needs to be removed.
  • the effective area corresponding to each of the multiple initial biliary-pancreatic ultrasound images may be determined in turn, and the areas other than the effective area may be cropped to remove the redundant areas.
  • specifically, the horizontal circumscribed rectangle corresponding to the effective area can be directly determined, the effective area is retained according to the shape of the horizontal circumscribed rectangle, and the other areas outside the horizontal circumscribed rectangle are cropped away, to obtain the biliary-pancreatic ultrasound images corresponding to the final biliary-pancreatic structure.
  • the neural network model can be used to determine the effective area in the initial biliary-pancreatic ultrasound image.
  • the UNet++ image neural network model can be trained to identify effective regions in the initial biliary-pancreatic ultrasound image and crop the initial biliary-pancreatic ultrasound image.
  • each of the multiple initial biliary-pancreatic ultrasound images is different, so the horizontal circumscribed rectangle corresponding to the effective area in each initial biliary-pancreatic ultrasound image is also different, and the multiple biliary-pancreatic ultrasound images obtained by the final cropping are also different.
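  • As a hedged illustration of the cropping step (not code from the present application), the sketch below assumes a binary mask of the effective area has already been produced by a trained segmentation model such as UNet++, and crops the frame to the axis-aligned (horizontal circumscribed) rectangle of that mask.

    import numpy as np

    def crop_to_effective_region(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
        """Crop an ultrasound frame to the horizontal circumscribed rectangle of its effective area.

        image: H x W (or H x W x C) array; mask: H x W binary array whose nonzero
        pixels mark the effective region predicted by the segmentation model.
        """
        ys, xs = np.nonzero(mask)
        if ys.size == 0:                       # no effective region found: keep the frame unchanged
            return image
        top, bottom = ys.min(), ys.max()
        left, right = xs.min(), xs.max()
        return image[top:bottom + 1, left:right + 1]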
  • As shown in FIG. 8, it is a schematic flowchart of an embodiment of biliary-pancreatic site identification provided by the embodiment of the present application, which may include:
  • Using the multiple biliary-pancreatic site identification models, biliary-pancreatic site identification is performed on the multiple biliary-pancreatic ultrasound images of the biliary-pancreatic structure, so as to determine the biliary-pancreatic site corresponding to each first biliary-pancreatic ultrasound image among the multiple biliary-pancreatic ultrasound images.
  • the neural network model can also be used to identify different biliopancreatic sites in the biliopancreatic ultrasound image.
  • multiple ResNet neural network models can be trained to obtain multiple bile-pancreatic site recognition models, and different bile-pancreatic site recognition models can identify different bile-pancreatic sites.
  • alternatively, a single ResNet neural network model can be trained so that it can identify the different biliary-pancreatic sites at the same time.
  • eight biliary-pancreatic site identification models are used to identify, in the multiple biliary-pancreatic ultrasound images of the biliary-pancreatic structure: the abdominal aorta station, the gastric cavity-pancreatic body station, the gastric cavity-pancreatic tail station, the confluence station, the first porta hepatis station, the gastric cavity-pancreatic head station, the duodenal bulb station, and the duodenal descending part station.
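  • For illustration only, the following sketch builds one binary ResNet classifier per station and picks the station with the highest score; the application specifies ResNet but not its depth or decision rule, so ResNet-18 and the softmax scoring below are assumptions (the weights argument requires torchvision 0.13 or later).

    import torch
    import torch.nn as nn
    from torchvision import models

    STATIONS = [
        "abdominal_aorta", "gastric_pancreatic_body", "gastric_pancreatic_tail",
        "confluence", "first_porta_hepatis", "gastric_pancreatic_head",
        "duodenal_bulb", "duodenal_descending",
    ]

    def build_site_models() -> dict:
        """One binary ResNet classifier per station (depth chosen arbitrarily)."""
        site_models = {}
        for station in STATIONS:
            net = models.resnet18(weights=None)
            net.fc = nn.Linear(net.fc.in_features, 2)   # station vs. not-this-station
            site_models[station] = net
        return site_models

    @torch.no_grad()
    def identify_site(image_tensor: torch.Tensor, site_models: dict) -> str:
        """Pick the station whose model gives the highest positive-class probability."""
        scores = {}
        for station, net in site_models.items():
            net.eval()
            logits = net(image_tensor.unsqueeze(0))     # image_tensor: (3, H, W)
            scores[station] = torch.softmax(logits, dim=1)[0, 1].item()
        return max(scores, key=scores.get)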
  • the identification of biliopancreatic anatomy is continued. Specifically, multiple preset initial models for recognizing the anatomical structure of the gallbladder and pancreas can also be obtained, and the multiple preset initial models for recognizing the anatomical structure of the gallbladder and pancreas can be trained to obtain multiple recognition models for the anatomical structure of the gallbladder and pancreas.
  • specifically, each of the multiple preset biliary-pancreatic anatomical structure recognition models is taken in turn as the target biliary-pancreatic anatomical structure recognition model, and the target biliary-pancreatic anatomical structure recognition model is used to perform biliary-pancreatic anatomical structure recognition on the multiple first biliary-pancreatic ultrasound images in sequence, so as to determine the biliary-pancreatic anatomical structure corresponding to each of the multiple first biliary-pancreatic ultrasound images.
  • since one biliary-pancreatic anatomical structure recognition model can only identify one type of biliary-pancreatic anatomical structure, for the same first biliary-pancreatic ultrasound image, all biliary-pancreatic anatomical structure recognition models need to be used to perform biliary-pancreatic anatomical structure recognition on that image, to ensure that all biliary-pancreatic anatomical structures in the same first biliary-pancreatic ultrasound image are identified.
  • the biliary-pancreatic anatomical structures are divided into seven categories, so there may be seven initial models for identifying the biliary-pancreatic anatomical structures; specifically, each may be a UNet++ neural network model, and seven UNet++ neural network models are trained separately to recognize the different categories of biliary-pancreatic anatomy.
  • the images among the multiple first biliary-pancreatic ultrasound images in which the biliary-pancreatic anatomical structure is identifiable are the second biliary-pancreatic ultrasound images, and the images in which the biliary-pancreatic anatomical structure is not identifiable are the third biliary-pancreatic ultrasound images; that is, the biliary-pancreatic anatomical structure in the third biliary-pancreatic ultrasound image still needs to be identified.
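  • The following hedged sketch shows one way to apply the per-category anatomy models and split the first images into second (anatomy found) and third (anatomy not found) images; the anatomy_models mapping, its callable interface, and the min_area threshold are assumptions, not details taken from the present application.

    import numpy as np

    def split_by_anatomy(first_images, anatomy_models, min_area: int = 50):
        """Apply every anatomy model to every image.

        anatomy_models maps a category name to a callable returning a binary
        H x W mask for that category. An image with at least one detected
        region becomes a 'second' image, otherwise a 'third' image.
        """
        second, third = [], []
        for image in first_images:
            regions = {}
            for category, predict_mask in anatomy_models.items():
                mask = predict_mask(image)
                if np.count_nonzero(mask) >= min_area:   # ignore tiny spurious blobs
                    regions[category] = mask
            (second if regions else third).append((image, regions))
        return second, third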
  • As shown in FIG. 9, it is a schematic flowchart of an embodiment of position recognition provided by the embodiment of the present application, which may include:
  • FIG. 10 shows the recognition situation of the different biliary-pancreatic anatomical structure recognition models provided in the embodiment of the present application.
  • since the biliary-pancreatic anatomical structures corresponding to each biliary-pancreatic site are known, if the biliary-pancreatic ultrasound image of the biliary-pancreatic structure does not include a certain biliary-pancreatic anatomical structure, the corresponding biliary-pancreatic anatomical structure recognition model will not recognize that biliary-pancreatic anatomical structure in the biliary-pancreatic ultrasound image.
  • therefore, the biliary-pancreatic anatomical structure recognition models that cannot recognize any biliary-pancreatic anatomical structure in a first biliary-pancreatic ultrasound image are not used on that image, so as to save the time spent on biliary-pancreatic anatomical structure recognition and improve recognition efficiency.
  • for example, the abdominal aorta, the celiac trunk, and the superior mesenteric artery belong to the same category of biliary-pancreatic anatomical structure, so these three biliary-pancreatic anatomical structures can be identified using the same biliary-pancreatic anatomical structure recognition model; that is, the three target areas in the third biliary-pancreatic ultrasound image corresponding to the abdominal aorta station can be determined using the category 1 recognition model.
  • the biliary-pancreatic anatomical structures corresponding to the first porta hepatis station include the liver, the portal vein, and the bile duct; the portal vein and the bile duct belong to the same category of biliary-pancreatic anatomical structure, while the liver belongs to another category. Therefore, when the biliary-pancreatic anatomical structure recognition models are used for recognition, the category 1 recognition model is used to identify two target areas and the category 5 recognition model is used to identify one target area, three target areas in total. The three identified target areas correspond to the liver, the portal vein, and the bile duct, respectively.
  • the upper left vertex of the third biliary-pancreatic ultrasound image may be used as the coordinate origin to construct a coordinate system.
  • the edges of multiple target areas are surrounded by multiple initial edge points (x, y).
  • performing preset thinning and homogenization processing on the multiple initial edge points may include: traversing all the initial edge points in a preset order, discarding initial edge points whose distance to an adjacent initial edge point is less than 10 pixels, and inserting a new edge point between any two adjacent initial edge points whose distance is greater than 10 pixels, thereby obtaining the multiple edge points.
  • the coordinate sequence ⁇ (x1, y1), (x2, y2)...(xn, yn) ⁇ corresponding to multiple edge points can be obtained.
  • the mean of all edge point coordinates can be used as the coordinates of the center point corresponding to the target area; that is, the center point coordinates Rc(Xc, Yc) corresponding to the target area can be Xc = (x1 + x2 + ... + xn)/n and Yc = (y1 + y2 + ... + yn)/n.
  • the center point coordinates are specific to a certain target area, and different target areas correspond to different center point coordinates.
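  • A hedged numerical sketch of the two steps above follows; the exact resampling rule is only described qualitatively in the application, so dropping near points and inserting a single midpoint between far points is one simple interpretation, not the definitive implementation.

    import numpy as np

    def thin_and_homogenize(points: np.ndarray, spacing: float = 10.0) -> np.ndarray:
        """Resample contour points so neighbours are roughly `spacing` pixels apart.

        points: (N, 2) array of (x, y) initial edge points in traversal order.
        Points closer than `spacing` to the last kept point are discarded; a
        midpoint is inserted between neighbours farther apart than `spacing`.
        """
        kept = [points[0]]
        for p in points[1:]:
            d = np.linalg.norm(p - kept[-1])
            if d < spacing:
                continue                          # discard points that are too close
            if d > spacing:
                kept.append((kept[-1] + p) / 2)   # insert one new point in between
            kept.append(p)
        return np.asarray(kept)

    def center_point(edge_points: np.ndarray) -> np.ndarray:
        """Center of a target area: the mean of its edge point coordinates (Xc, Yc)."""
        return edge_points.mean(axis=0)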
  • As shown in FIG. 11, it is a schematic diagram of an embodiment of the positional relationship provided by the embodiment of the present application.
  • FIG. 11 shows the positional relationship among the abdominal aorta 1, the celiac trunk 2, and the superior mesenteric artery 3.
  • for the abdominal aorta station, it can be determined that the three target areas in the abdominal aorta station are identified using the category 1 recognition model.
  • the three biliary-pancreatic anatomical structures in the abdominal aorta station, namely the abdominal aorta, the celiac trunk, and the superior mesenteric artery, are known and determined; at this time, only the center point coordinates corresponding to the three target areas need to be matched against the known positional relationship to determine the biliary-pancreatic anatomical structure corresponding to each of the three target areas.
  • for example, the three target areas corresponding to the abdominal aorta station are Rc1, Rc2, and Rc3, corresponding to three center point coordinates; according to the positional relationship, among the three center point coordinates, the target area with the largest Xc and Yc is the abdominal aorta, the target area with the smallest Xc is the superior mesenteric artery, and the remaining one is the celiac trunk.
  • the center point coordinates corresponding to a target area can be used as the position coordinates of that target area; therefore, the biliary-pancreatic anatomical structure corresponding to the target area can be determined by comparing the preset positional relationship with the center point coordinates.
  • the bile-pancreas anatomy corresponding to each target area is sequentially determined by using the above method.
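  • The sketch below illustrates this matching for the abdominal aorta station; the concrete rule (largest Xc and Yc for the abdominal aorta, smallest Xc for the superior mesenteric artery, the remaining area for the celiac trunk) follows the description above, and the region identifiers and example coordinates are hypothetical.

    def match_abdominal_aorta_station(centers: dict) -> dict:
        """Assign anatomy labels to the three areas of the abdominal aorta station.

        centers maps a region id (e.g. 'Rc1') to its (Xc, Yc) center coordinates;
        exactly three regions are expected for this station.
        """
        by_x = sorted(centers, key=lambda rid: centers[rid][0])   # ascending Xc
        smallest_x, middle, largest_x = by_x
        return {
            largest_x: "abdominal aorta",
            smallest_x: "superior mesenteric artery",
            middle: "celiac trunk",
        }

    # hypothetical usage with made-up coordinates:
    # match_abdominal_aorta_station({"Rc1": (420, 310), "Rc2": (180, 200), "Rc3": (300, 240)})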
  • “No” in FIG. 10 represents that it is not necessary to use the first biliary-pancreatic ultrasonic image corresponding to the corresponding category recognition model (ie, bile-pancreatic anatomical structure recognition model) for recognition.
  • the "1" in Figure 10 represents that only one target area can be identified by using the corresponding category recognition model, and since the anatomical structure of the gallbladder and pancreas that can be recognized by each category recognition model is determined, when the only target area is identified , the bile-pancreas anatomy corresponding to the target area has actually been determined. Therefore, there is no need to perform subsequent identification using the positional relationship.
  • for example, in the identification of the gastric cavity-pancreatic body station, two target areas are obtained by using the category 1 recognition model and the category 2 recognition model respectively.
  • since, among the biliary-pancreatic anatomical structures that the category 1 recognition model can recognize, only the splenic artery and vein belong to the gastric cavity-pancreatic body station, once the category 1 recognition model has identified a target area, it can be directly confirmed that the target area identified by the category 1 recognition model corresponds to the splenic artery and vein.
  • similarly, among the biliary-pancreatic anatomical structures that the category 2 recognition model can recognize, only the pancreatic body belongs to the gastric cavity-pancreatic body station; therefore, once the category 2 recognition model has identified a target area, it can be directly confirmed that the target area identified by the category 2 recognition model corresponds to the pancreatic body.
  • the embodiment of the present application also provides a biliary-pancreatic ultrasonic image recognition device, as shown in Figure 12, which is A schematic diagram of an embodiment of a biliopancreatic ultrasound image recognition device provided in an embodiment of the present application.
  • the biliopancreatic ultrasound image recognition device includes:
  • the acquisition module 1201 is configured to acquire multiple ultrasound images of the gallbladder-pancreas structure of the human body to be identified.
  • the first identification module 1202 is configured to use a preset biliary-pancreatic site identification model to perform biliary-pancreatic site identification on multiple biliary-pancreatic ultrasonic images of the biliary-pancreatic structure, and determine multiple biliary-pancreatic sites corresponding to the multiple biliary-pancreatic ultrasonic images, In order to obtain a plurality of first biliary-pancreatic ultrasound images, a biliary-pancreatic site corresponding to each first biliary-pancreatic ultrasound image in the multiple first biliary-pancreatic ultrasound images is determined.
  • the second identification module 1203 is configured to identify the biliopancreatic anatomical structure on multiple first biliopancreatic ultrasound images by using a preset biliopancreatic anatomical structure identification model, and determine the biliopancreatic anatomical structure in the multiple first biliopancreatic ultrasound images A second ultrasound image of the gallbladder and pancreas is recognizable, and a third ultrasound image of the gallbladder and pancreas is not identifiable.
  • the positioning module 1204 is configured to identify the position of the third ultrasound image of gallbladder and pancreas, and determine the anatomical structure of the gallbladder and pancreas in the third ultrasound image of gallbladder and pancreas.
  • the biliary-pancreatic ultrasound image recognition device provided in the embodiment of the present application can determine the unrecognizable biliary-pancreatic anatomical structures through the existing biliary-pancreatic site identification and biliary-pancreatic anatomical structure recognition, and determine the positions of the unrecognized biliary-pancreatic anatomical structures in the biliary-pancreatic ultrasound image according to the known positional relationships among multiple biliary-pancreatic anatomical structures, so as to identify the biliary-pancreatic anatomical structures.
  • This method combines the existing biliary-pancreatic site information, the image features of the biliary-pancreatic anatomical structures, and the position coordinates of the biliary-pancreatic anatomical structures to comprehensively identify and label the anatomical structures of the biliary-pancreatic structure, which significantly reduces the difficulty of recognizing biliary-pancreatic anatomical structures in ultrasound images.
  • the acquisition module 1201 can be specifically configured to: acquire multiple initial biliary-pancreatic ultrasound images of the human biliary-pancreatic structure to be identified; determine the effective area corresponding to each of the multiple initial biliary-pancreatic ultrasound images to obtain multiple effective areas; obtain the horizontal circumscribed rectangle corresponding to each of the multiple effective areas to obtain multiple horizontal circumscribed rectangles; and crop the multiple initial biliary-pancreatic ultrasound images of the biliary-pancreatic structure to be identified with the multiple horizontal circumscribed rectangles respectively, to obtain the multiple biliary-pancreatic ultrasound images of the biliary-pancreatic structure for subsequent identification.
  • the first identification module 1202 can be specifically configured to: obtain multiple preset initial models for biliopancreatic site identification; respectively train multiple preset initial models for biliopancreatic site identification to obtain multiple A biliary-pancreatic site identification model to identify different biliary-pancreatic sites; using multiple biliary-pancreatic site identification models to perform biliary-pancreatic site identification on multiple biliary-pancreatic ultrasound images of biliary-pancreatic structures to determine multiple biliary-pancreatic ultrasound images Each first biliary-pancreatic ultrasound image in the image corresponds to the biliary-pancreatic site.
  • the first recognition module 1202 may be specifically configured to: train the eight preset initial models for biliary-pancreatic site identification respectively to obtain eight biliary-pancreatic site identification models; wherein the eight biliary-pancreatic site identification models are used to respectively identify, in the multiple biliary-pancreatic ultrasound images of the biliary-pancreatic structure: the abdominal aorta station, the gastric cavity-pancreatic body station, the gastric cavity-pancreatic tail station, the confluence station, the first porta hepatis station, the gastric cavity-pancreatic head station, the duodenal bulb station, and the duodenal descending part station.
  • the second recognition module 1203 can be specifically configured to: acquire multiple preset biliary-pancreatic anatomical structure recognition models;
  • take, in turn, each of the multiple preset biliary-pancreatic anatomical structure recognition models as a target biliary-pancreatic anatomical structure recognition model, and use the target biliary-pancreatic anatomical structure recognition model to perform biliary-pancreatic anatomical structure recognition on the multiple first biliary-pancreatic ultrasound images in sequence, so as to determine the biliary-pancreatic anatomical structure corresponding to each of the multiple first biliary-pancreatic ultrasound images;
  • wherein the images among the multiple first biliary-pancreatic ultrasound images in which the biliary-pancreatic anatomical structure is identifiable are the second biliary-pancreatic ultrasound images, and the images among the multiple first biliary-pancreatic ultrasound images in which the biliary-pancreatic anatomical structure is not identifiable are the third biliary-pancreatic ultrasound images.
  • the positioning module 1204 can be used to: determine multiple target areas in the third biliary-pancreatic ultrasound image, each target area in the multiple target areas is surrounded by multiple initial edge points; determine the third biliary-pancreatic ultrasound image The origin of the coordinates in the ultrasound image to determine the coordinates of the multiple initial edge points corresponding to each of the multiple target areas; according to the coordinates of the multiple initial edge points corresponding to each of the multiple target areas, Determine the center point coordinates corresponding to each of the multiple target areas; obtain the corresponding positional relationship of the preset biliopancreatic anatomical structure; determine the biliopancreatic anatomical structure corresponding to the target area according to the positional relationship and the center point coordinates.
  • the biliopancreatic anatomical structure corresponding to the target area is the biliopancreatic anatomical structure corresponding to the third biliopancreatic ultrasonic image.
  • the positioning module 1204 can also be configured to: perform preset thinning and homogenization processing on the multiple initial edge points to obtain multiple edge points; determine, according to the coordinate origin, the edge point coordinates corresponding to each of the multiple edge points; and determine, according to the multiple edge point coordinates, the center point coordinates corresponding to each of the multiple target areas.
  • the present application also provides a server that integrates any of the biliary-pancreatic ultrasound image recognition devices provided in the embodiments of the present application, as shown in Figure 13, which shows a schematic structural diagram of the server involved in the embodiments of the present application , specifically:
  • the server may include a processor 1301 of one or more processing cores, a memory 1302 of one or more computer-readable storage media, a power supply 1303, an input unit 1304 and other components.
  • those skilled in the art can understand that the server structure shown in FIG. 13 does not constitute a limitation on the server, and the server may include more or fewer components than shown in the figure, or combine some components, or arrange the components differently, wherein:
  • the processor 1301 is the control center of the server, and uses various interfaces and lines to connect various parts of the entire server, by running or executing software programs and/or models stored in the memory 1302, and calling data stored in the memory 1302, Execute various functions of the server and process data to monitor the server as a whole.
  • the processor 1301 may include one or more processing cores; preferably, the processor 1301 may integrate an application processor and a modem processor, wherein the application processor mainly processes operating systems, user interfaces, and application programs, etc. , the modem processor mainly handles wireless communications. It can be understood that the foregoing modem processor may not be integrated into the processor 1301 .
  • the memory 1302 can be used to store software programs and models, and the processor 1301 executes various functional applications and data processing by running the software programs and models stored in the memory 1302 .
  • the memory 1302 can mainly include a program storage area and a data storage area, wherein the program storage area can store the operating system and at least one application program required by a function (such as a sound playback function, an image playback function, etc.), and the data storage area can store data created according to the use of the server, etc.
  • the memory 1302 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage devices.
  • the memory 1302 may further include a memory controller to provide the processor 1301 with access to the memory 1302 .
  • the server also includes a power supply 1303 for supplying power to each component.
  • the power supply 1303 can be logically connected to the processor 1301 through the power management system, so that functions such as charging, discharging, and power consumption management can be realized through the power management system.
  • the power supply 1303 may also include one or more DC or AC power supplies, recharging systems, power failure detection circuits, power converters or inverters, power status indicators and other arbitrary components.
  • the server can also include an input unit 1304, which can be used to receive input numbers or character information, and generate keyboard, mouse, joystick, optical or trackball signal input related to user settings and function control.
  • the server may also include a display unit, etc., which will not be repeated here.
  • specifically, the processor 1301 in the server will load the executable file corresponding to the process of one or more application programs into the memory 1302 according to the following instructions, and the processor 1301 will run the application programs stored in the memory 1302, so as to realize various functions, as follows:
  • the present application also provides a computer-readable storage medium, which may include: a read-only memory (ROM, Read Only Memory), random access memory (RAM, Random Access Memory), disk or CD, etc.
  • the storage medium stores a computer program, and the computer program is loaded by the processor to execute the steps in any method for recognizing biliary-pancreatic ultrasound images provided in the embodiments of the present application.
  • the computer program being loaded by the processor may perform the following steps:

Abstract

The present application provides a biliary-pancreatic ultrasound image recognition method, device, and server. Biliary-pancreatic anatomical structures that cannot be recognized are determined, and the positions of the unrecognizable biliary-pancreatic anatomical structures in the biliary-pancreatic ultrasound image are determined according to the known positional relationships of the biliary-pancreatic anatomical structures, so as to identify the biliary-pancreatic anatomical structures. The method comprehensively identifies and labels the biliary-pancreatic anatomical structures, reducing the difficulty of recognizing biliary-pancreatic anatomical structures in biliary-pancreatic ultrasound images.

Description

Biliary-pancreatic ultrasound image recognition method, device, and server

Technical Field
The present application relates to the field of medical assistive technology, and in particular to a biliary-pancreatic ultrasound image recognition method, device, and server.
Background Art
Biliary-pancreatic endoscopic ultrasonography is an important means of diagnosing and treating diseases of the biliary-pancreatic system. The basis of endoscopic ultrasound diagnosis and treatment is that doctors use ultrasound images to accurately identify and locate biliary-pancreatic structures. However, biliary-pancreatic ultrasound images are cross-sectional images of human tissue and mainly contain texture information that is difficult for the human eye to recognize; for endoscopists who lack professional training and long-term practice, it is difficult to accurately identify the anatomical landmarks in the images, which greatly affects the accuracy of identifying and locating the biliary-pancreatic structures at the biliary-pancreatic sites.
Technical Problem
In the prior art, the recognition of biliary-pancreatic structures cannot distinguish among the multiple biliary-pancreatic sites and the biliary-pancreatic anatomical structures of the biliary-pancreatic structure, and recognition is relatively difficult.
Technical Solution
The present application provides a biliary-pancreatic ultrasound image recognition method, device, and server, aiming to solve the problem in the prior art that the biliary-pancreatic anatomical structures of the human biliary-pancreatic structure cannot be well distinguished.
In one aspect, an embodiment of the present application provides a biliopancreatic ultrasound image recognition method, including:
acquiring a plurality of biliopancreatic ultrasound images of a human biliopancreatic structure to be recognized;
performing biliopancreatic station recognition on the plurality of biliopancreatic ultrasound images of the biliopancreatic structure by using preset biliopancreatic station recognition models, and determining the biliopancreatic stations corresponding to the plurality of biliopancreatic ultrasound images, so as to obtain a plurality of first biliopancreatic ultrasound images, the biliopancreatic station corresponding to each first biliopancreatic ultrasound image being determined;
performing biliopancreatic anatomical structure recognition on the plurality of first biliopancreatic ultrasound images by using preset biliopancreatic anatomical structure recognition models, and determining, among the plurality of first biliopancreatic ultrasound images, second biliopancreatic ultrasound images in which the biliopancreatic anatomical structures are recognizable and third biliopancreatic ultrasound images in which the biliopancreatic anatomical structures are unrecognizable;
performing position recognition on the third biliopancreatic ultrasound images to determine the biliopancreatic anatomical structures therein.
In a possible implementation of the present application, acquiring the plurality of biliopancreatic ultrasound images of the human biliopancreatic structure to be recognized includes:
acquiring a plurality of initial biliopancreatic ultrasound images of the human biliopancreatic structure to be recognized;
determining the valid region corresponding to each of the initial biliopancreatic ultrasound images to obtain a plurality of valid regions;
obtaining the horizontal circumscribed rectangle corresponding to each of the valid regions to obtain a plurality of horizontal circumscribed rectangles;
cropping the plurality of initial biliopancreatic ultrasound images of the biliopancreatic structure to be recognized with the respective horizontal circumscribed rectangles, to obtain the plurality of biliopancreatic ultrasound images of the biliopancreatic structure used for subsequent recognition.
In a possible implementation of the present application, performing biliopancreatic station recognition on the plurality of biliopancreatic ultrasound images of the biliopancreatic structure by using the preset biliopancreatic station recognition models, and determining the biliopancreatic stations corresponding to the plurality of biliopancreatic ultrasound images to obtain the plurality of first biliopancreatic ultrasound images, includes:
obtaining a plurality of preset initial biliopancreatic station recognition models;
training the plurality of preset initial biliopancreatic station recognition models respectively to obtain a plurality of biliopancreatic station recognition models, each recognizing a different biliopancreatic station;
performing biliopancreatic station recognition on the plurality of biliopancreatic ultrasound images of the biliopancreatic structure with the plurality of biliopancreatic station recognition models, to determine the biliopancreatic station corresponding to each first biliopancreatic ultrasound image;
wherein there are a plurality of first biliopancreatic ultrasound images, and the biliopancreatic station corresponding to each of them is determined.
In a possible implementation of the present application, training the plurality of preset initial biliopancreatic station models respectively to obtain the plurality of biliopancreatic station recognition models includes:
training a plurality of ResNet neural network models to obtain the plurality of biliopancreatic station recognition models, different biliopancreatic station recognition models recognizing different biliopancreatic stations.
In a possible implementation of the present application, there are eight preset initial biliopancreatic station recognition models;
and training the plurality of preset initial biliopancreatic station recognition models respectively to obtain the plurality of biliopancreatic station recognition models includes:
training the eight preset initial biliopancreatic station recognition models respectively to obtain eight biliopancreatic station recognition models, which are used to recognize, in the plurality of biliopancreatic ultrasound images of the biliopancreatic structure, the abdominal aorta station, the gastric pancreatic body station, the gastric pancreatic tail station, the Confluence station, the first porta hepatis station, the gastric pancreatic head station, the duodenal bulb station, and the descending duodenum station, respectively.
In a possible implementation of the present application, determining the valid region corresponding to each of the initial biliopancreatic ultrasound images to obtain the plurality of valid regions includes:
training a UNet++ image neural network model, and using the trained UNet++ image neural network model to recognize the valid region corresponding to each of the initial biliopancreatic ultrasound images, so as to obtain the plurality of valid regions.
In a possible implementation of the present application, performing biliopancreatic anatomical structure recognition on the plurality of first biliopancreatic ultrasound images by using the preset biliopancreatic anatomical structure recognition models, and determining the second and third biliopancreatic ultrasound images, includes:
obtaining a plurality of preset biliopancreatic anatomical structure recognition models;
taking each of the preset biliopancreatic anatomical structure recognition models in turn as a target biliopancreatic anatomical structure recognition model, and using the target model to perform biliopancreatic anatomical structure recognition on the plurality of first biliopancreatic ultrasound images in turn, so as to determine the biliopancreatic anatomical structures corresponding to each first biliopancreatic ultrasound image;
wherein, among the plurality of first biliopancreatic ultrasound images, the images whose biliopancreatic anatomical structures are recognizable are the second biliopancreatic ultrasound images, and the images whose biliopancreatic anatomical structures are unrecognizable are the third biliopancreatic ultrasound images.
In a possible implementation of the present application, performing position recognition on the third biliopancreatic ultrasound images to determine the biliopancreatic anatomical structures therein includes:
determining a plurality of target regions in the third biliopancreatic ultrasound image, each target region being enclosed by a plurality of initial edge points;
determining a coordinate origin in the third biliopancreatic ultrasound image so as to determine the coordinates of the initial edge points corresponding to each target region;
determining the center point coordinates of each target region according to the coordinates of its initial edge points;
obtaining a preset positional relationship corresponding to the biliopancreatic anatomical structures;
determining the biliopancreatic anatomical structure corresponding to each target region according to the positional relationship and the center point coordinates;
wherein the biliopancreatic anatomical structures corresponding to the target regions are the biliopancreatic anatomical structures corresponding to the third biliopancreatic ultrasound image.
In a possible implementation of the present application, determining the center point coordinates of each target region according to the coordinates of its initial edge points includes:
performing preset sparsification and uniformization on the initial edge points to obtain a plurality of edge points;
determining the coordinates of each edge point according to the coordinate origin;
determining the center point coordinates of each target region according to the edge point coordinates.
In a possible implementation of the present application, performing the preset sparsification and uniformization on the initial edge points includes:
traversing the initial edge points in a preset order;
discarding an initial edge point when the spacing between two adjacent initial edge points is less than 10 pixels;
inserting a new initial edge point between two adjacent initial edge points whose spacing is greater than 10 pixels, so as to obtain the plurality of edge points.
In a possible implementation of the present application, determining the center point coordinates of each target region according to the edge point coordinates includes:
taking the mean of the edge point coordinates of each target region as the center point coordinates of that target region.
In a possible implementation of the present application, the anatomical structures fall into seven classes, and obtaining the plurality of preset biliopancreatic anatomical structure recognition models includes:
obtaining seven preset biliopancreatic anatomical structure recognition models.
In another aspect, an embodiment of the present application further provides a biliopancreatic ultrasound image recognition device, including:
an acquisition module, configured to acquire a plurality of biliopancreatic ultrasound images of a human biliopancreatic structure to be recognized;
a first recognition module, configured to perform biliopancreatic station recognition on the plurality of biliopancreatic ultrasound images of the biliopancreatic structure by using preset biliopancreatic station recognition models, and determine the biliopancreatic stations corresponding to the plurality of biliopancreatic ultrasound images, so as to obtain a plurality of first biliopancreatic ultrasound images, the biliopancreatic station corresponding to each first biliopancreatic ultrasound image being determined;
a second recognition module, configured to perform biliopancreatic anatomical structure recognition on the plurality of first biliopancreatic ultrasound images by using preset biliopancreatic anatomical structure recognition models, and determine, among them, second biliopancreatic ultrasound images in which the anatomical structures are recognizable and third biliopancreatic ultrasound images in which the anatomical structures are unrecognizable;
a positioning module, configured to perform position recognition on the third biliopancreatic ultrasound images to determine the biliopancreatic anatomical structures therein.
In a possible implementation of the present application, the acquisition module is specifically configured to: acquire a plurality of initial biliopancreatic ultrasound images of the human biliopancreatic structure to be recognized;
determine the valid region corresponding to each of the initial biliopancreatic ultrasound images to obtain a plurality of valid regions;
obtain the horizontal circumscribed rectangle corresponding to each of the valid regions to obtain a plurality of horizontal circumscribed rectangles;
crop the plurality of initial biliopancreatic ultrasound images of the biliopancreatic structure to be recognized with the respective horizontal circumscribed rectangles, to obtain the plurality of biliopancreatic ultrasound images of the biliopancreatic structure used for subsequent recognition.
In a possible implementation of the present application, the first recognition module is specifically configured to: obtain a plurality of preset initial biliopancreatic station recognition models;
train the plurality of preset initial biliopancreatic station models respectively to obtain a plurality of biliopancreatic station recognition models, each recognizing a different biliopancreatic station;
perform biliopancreatic station recognition on the plurality of biliopancreatic ultrasound images of the biliopancreatic structure with the plurality of biliopancreatic station recognition models, to determine the biliopancreatic station corresponding to each first biliopancreatic ultrasound image;
wherein there are a plurality of first biliopancreatic ultrasound images, and the biliopancreatic station corresponding to each of them is determined.
In a possible implementation of the present application, there are eight preset initial biliopancreatic station recognition models, and the first recognition module is specifically configured to:
train the eight preset initial biliopancreatic station recognition models respectively to obtain eight biliopancreatic station recognition models, which are used to recognize, in the plurality of biliopancreatic ultrasound images of the biliopancreatic structure, the abdominal aorta station, the gastric pancreatic body station, the gastric pancreatic tail station, the Confluence station, the first porta hepatis station, the gastric pancreatic head station, the duodenal bulb station, and the descending duodenum station, respectively.
In a possible implementation of the present application, the second recognition module is specifically configured to:
obtain a plurality of preset biliopancreatic anatomical structure recognition models;
take each of the preset biliopancreatic anatomical structure recognition models in turn as a target biliopancreatic anatomical structure recognition model, and use the target model to perform biliopancreatic anatomical structure recognition on the plurality of first biliopancreatic ultrasound images in turn, so as to determine the biliopancreatic anatomical structures corresponding to each first biliopancreatic ultrasound image;
wherein, among the plurality of first biliopancreatic ultrasound images, the images whose anatomical structures are recognizable are the second biliopancreatic ultrasound images, and the images whose anatomical structures are unrecognizable are the third biliopancreatic ultrasound images.
In a possible implementation of the present application, the positioning module is specifically configured to:
determine a plurality of target regions in the third biliopancreatic ultrasound image, each target region being enclosed by a plurality of initial edge points;
determine a coordinate origin in the third biliopancreatic ultrasound image so as to determine the coordinates of the initial edge points corresponding to each target region;
determine the center point coordinates of each target region according to the coordinates of its initial edge points;
obtain a preset positional relationship corresponding to the biliopancreatic anatomical structures;
determine the biliopancreatic anatomical structure corresponding to each target region according to the positional relationship and the center point coordinates;
wherein the biliopancreatic anatomical structures corresponding to the target regions are the biliopancreatic anatomical structures corresponding to the third biliopancreatic ultrasound image.
In a possible implementation of the present application, the positioning module is specifically configured to:
perform preset sparsification and uniformization on the initial edge points to obtain a plurality of edge points;
determine the coordinates of each edge point according to the coordinate origin;
determine the center point coordinates of each target region according to the edge point coordinates.
In another aspect, the present application further provides a server, including:
one or more processors;
a memory; and
one or more application programs, wherein the one or more application programs are stored in the memory and configured to be executed by the processors to implement the biliopancreatic ultrasound image recognition method according to any of the above.
In another aspect, the present application further provides a computer-readable storage medium on which a computer program is stored, the computer program being loaded by a processor to execute the steps in the biliopancreatic ultrasound image recognition method according to any of the above.
Advantageous Effects
The present application provides a biliopancreatic ultrasound image recognition method, device, and server. Through existing biliopancreatic station recognition and biliopancreatic anatomical structure recognition, the anatomical structures that cannot be recognized are determined, and their positions in the biliopancreatic ultrasound image are determined according to the known positional relationships among the biliopancreatic anatomical structures, so that the anatomical structures can be recognized. The method combines the existing biliopancreatic station information, the image features of the biliopancreatic anatomical structures, and their position coordinates to comprehensively recognize and annotate the biliopancreatic anatomical structures, significantly reducing the difficulty of recognizing them in biliopancreatic ultrasound images.
Brief Description of the Drawings
In order to explain the technical solutions in the embodiments of the present application more clearly, the drawings required in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic diagram of a lesion recognition scenario provided by an embodiment of the present application;
FIG. 2 is a schematic flowchart of an embodiment of the biliopancreatic ultrasound image recognition method provided by an embodiment of the present application;
FIG. 3 is a schematic diagram of an embodiment of the standard eight stations and their corresponding biliopancreatic anatomical structures provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of an embodiment of the biliopancreatic anatomical structures provided by an embodiment of the present application;
FIG. 5 is a schematic diagram of an embodiment of the recognizable biliopancreatic anatomical structures provided by an embodiment of the present application;
FIG. 6 is a schematic diagram of an embodiment of the unrecognizable biliopancreatic anatomical structures provided by an embodiment of the present application;
FIG. 7 is a schematic flowchart of an embodiment of acquiring biliopancreatic ultrasound images provided by an embodiment of the present application;
FIG. 8 is a schematic flowchart of an embodiment of biliopancreatic station recognition provided by an embodiment of the present application;
FIG. 9 is a schematic flowchart of an embodiment of position recognition provided by an embodiment of the present application;
FIG. 10 is a schematic diagram of an embodiment of the recognition behaviour of different biliopancreatic anatomical structure recognition models provided by an embodiment of the present application;
FIG. 11 is a schematic diagram of an embodiment of the positional relationship provided by an embodiment of the present application;
FIG. 12 is a schematic diagram of an embodiment of the biliopancreatic ultrasound image recognition device provided by an embodiment of the present application;
FIG. 13 is a schematic structural diagram of the server involved in the embodiments of the present application.
Embodiments of the Invention
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
In the description of the present invention, it should be understood that orientation or positional terms such as "center", "longitudinal", "transverse", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", and "outer" are based on the orientations or positional relationships shown in the drawings, are used only for the convenience of describing the present invention and simplifying the description, and do not indicate or imply that the device or element referred to must have a specific orientation or be constructed and operated in a specific orientation; they therefore cannot be construed as limiting the present invention. In addition, the terms "first" and "second" are used for descriptive purposes only and cannot be understood as indicating or implying relative importance or implicitly specifying the number of the indicated technical features. Thus, features defined by "first" or "second" may explicitly or implicitly include one or more of such features. In the description of the present invention, "a plurality of" means two or more, unless otherwise specifically defined.
In the present application, the word "exemplary" is used to mean "serving as an example, instance, or illustration". Any embodiment described as "exemplary" in the present application is not necessarily to be construed as preferred or advantageous over other embodiments. The following description is given to enable any person skilled in the art to make and use the present invention. In the following description, details are set forth for the purpose of explanation. It should be understood that a person of ordinary skill in the art will recognize that the present invention can also be implemented without these specific details. In other instances, well-known structures and processes are not described in detail so as not to obscure the description of the present invention with unnecessary detail. Therefore, the present invention is not intended to be limited to the illustrated embodiments, but is to be accorded the widest scope consistent with the principles and features disclosed in the present application.
Embodiments of the present application provide a biliopancreatic ultrasound image recognition method, device, and server, which are described in detail below.
As shown in FIG. 1, which is a schematic diagram of a scenario of the biliopancreatic ultrasound image recognition system provided by an embodiment of the present application, the system may include a plurality of terminals 100 and a server 200; the terminals 100, the servers 200, and the terminals 100 and the server 200 are connected and communicate with each other via the Internet composed of various gateways, which will not be described in detail. The terminals 100 may include a detection terminal 101, a user terminal 102, and the like.
In the embodiments of the present invention, the server 200 is mainly used to: acquire a plurality of biliopancreatic ultrasound images of a human biliopancreatic structure to be recognized; perform biliopancreatic station recognition on the plurality of biliopancreatic ultrasound images of the biliopancreatic structure by using preset biliopancreatic station recognition models, and determine the biliopancreatic stations corresponding to the plurality of biliopancreatic ultrasound images, so as to obtain a plurality of first biliopancreatic ultrasound images, the biliopancreatic station corresponding to each of them being determined; perform biliopancreatic anatomical structure recognition on the plurality of first biliopancreatic ultrasound images by using preset biliopancreatic anatomical structure recognition models, and determine, among them, second biliopancreatic ultrasound images in which the anatomical structures are recognizable and third biliopancreatic ultrasound images in which the anatomical structures are unrecognizable; and perform position recognition on the third biliopancreatic ultrasound images to determine the biliopancreatic anatomical structures therein.
In the embodiments of the present invention, the server 200 may be an independent server, or a server network or server cluster composed of servers. For example, the server 200 described in the embodiments of the present invention includes, but is not limited to, a computer, a network host, a single network server, a set of multiple network servers, or a cloud server composed of multiple servers, where a cloud server is composed of a large number of computers or network servers based on cloud computing. In the embodiments of the present invention, communication between the server and a terminal can be implemented by any communication method, including but not limited to mobile communication based on the 3rd Generation Partnership Project (3GPP), Long Term Evolution (LTE), or Worldwide Interoperability for Microwave Access (WiMAX), or computer network communication based on the TCP/IP protocol suite (TCP/IP) or the User Datagram Protocol (UDP).
It can be understood that the terminal 100 used in the embodiments of the present invention may be a device that includes both receiving and transmitting hardware, that is, a device having receiving and transmitting hardware capable of performing two-way communication over a two-way communication link. Such a terminal may include a cellular or other communication device with a single-line display or a multi-line display, or a cellular or other communication device without a multi-line display.
Specifically, the detection terminal 101 is mainly responsible for collecting endoscopic images of the part of the human body to be examined. The acquisition device on the detection terminal may include electronic devices such as a magnetic resonance imaging (MRI) scanner, a computed tomography (CT) scanner, a colposcope, or an endoscope. In this embodiment, the image acquisition device may be a biliopancreatic endoscopic ultrasound probe, mainly used to acquire biliopancreatic ultrasound images of the human biliopancreatic structure.
The user terminal 102 includes, but is not limited to, portable terminals such as mobile phones and tablets, fixed terminals such as computers and inquiry machines, and various virtual terminals; it mainly provides functions for uploading the biliopancreatic ultrasound images to be processed, processing them, and displaying the processing results corresponding to the images.
Those skilled in the art can understand that the application environment shown in FIG. 1 is only one application scenario of the solution of the present application and does not limit its application scenarios. Other application environments may include more or fewer servers than shown in FIG. 1, or different server network connection relationships. For example, FIG. 1 shows only one server and two terminals; it can be understood that the lesion recognition scenario may also include one or more other servers, and/or one or more terminals connected to the server network, which is not limited here.
In addition, as shown in FIG. 1, the biliopancreatic ultrasound image recognition system may further include a memory 300 for storing data, such as image data, for example the image data of the part to be examined acquired by the terminal. The memory 300 may include a local database and/or a cloud database.
It should be noted that the scenario diagram of the biliopancreatic ultrasound image recognition system shown in FIG. 1 is only an example. The lesion recognition scenario described in the embodiments of the present invention is intended to explain the technical solutions of the embodiments more clearly and does not limit them. Those of ordinary skill in the art will appreciate that, as biliopancreatic ultrasound image recognition scenarios evolve and new business scenarios emerge, the technical solutions provided by the embodiments of the present invention are equally applicable to similar technical problems.
As shown in FIG. 2, which is a schematic flowchart of an embodiment of the biliopancreatic ultrasound image recognition method provided by an embodiment of the present application, the method may include:
21. Acquire a plurality of biliopancreatic ultrasound images of a human biliopancreatic structure to be recognized.
The biliopancreatic ultrasound image recognition method provided by the embodiments of the present application mainly recognizes the human biliopancreatic structure, making it easier for doctors to determine lesions of the biliopancreatic structure from the corresponding biliopancreatic ultrasound images.
Specifically, a biliopancreatic endoscopic ultrasound probe can be used to directly acquire the plurality of biliopancreatic ultrasound images corresponding to the biliopancreatic structure.
22. Perform biliopancreatic station recognition on the biliopancreatic ultrasound images of the biliopancreatic structure by using preset biliopancreatic station recognition models, and determine the biliopancreatic stations corresponding to the plurality of biliopancreatic ultrasound images, so as to obtain a plurality of first biliopancreatic ultrasound images.
For the recognition of the biliopancreatic structure in the embodiments of the present application, the standard scan of biliopancreatic endoscopic ultrasound is divided into a plurality of biliopancreatic stations and a plurality of biliopancreatic anatomical structures. The doctor needs to complete the scan of all biliopancreatic stations and the recognition of all biliopancreatic anatomical structures to ensure a comprehensive observation of the biliopancreatic system.
Therefore, in the embodiments of the present application, preset biliopancreatic station recognition models can be used to first perform biliopancreatic station recognition on the biliopancreatic ultrasound images of the biliopancreatic structure and determine the biliopancreatic stations corresponding to the plurality of biliopancreatic ultrasound images; each biliopancreatic ultrasound image corresponds to only one biliopancreatic station.
After the biliopancreatic stations corresponding to the plurality of biliopancreatic ultrasound images have been recognized, a plurality of first biliopancreatic ultrasound images are obtained, and the biliopancreatic station corresponding to each first biliopancreatic ultrasound image is determined.
In a specific embodiment of the present application, the standard scan of biliopancreatic endoscopic ultrasound consists of eight stations, and each station has corresponding biliopancreatic anatomical structures. FIG. 3 is a schematic diagram of an embodiment of the standard eight stations and their corresponding biliopancreatic anatomical structures provided by an embodiment of the present application.
In FIG. 3, taking the abdominal aorta station as an example, the abdominal aorta station corresponds to three biliopancreatic anatomical structures: the abdominal aorta, the celiac trunk, and the superior mesenteric artery. Taking the gastric pancreatic body station as an example, the gastric pancreatic body station includes two biliopancreatic anatomical structures: the splenic artery and vein, and the pancreatic body. In the embodiments of the present application, the biliopancreatic anatomical structures corresponding to each station are different.
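Purely for illustration, the station-to-structure correspondence can be thought of as a lookup table. The Python sketch below is not part of the disclosed embodiments; it records only the three stations whose structures are named explicitly in this description, with illustrative key names (the remaining five stations would follow FIG. 3, which is not reproduced here).

```python
# Partial station -> anatomical structure mapping, as stated in the description.
# Keys are illustrative; the other five stations follow FIG. 3.
STATION_STRUCTURES = {
    "abdominal_aorta_station": ["abdominal aorta", "celiac trunk", "superior mesenteric artery"],
    "gastric_pancreatic_body_station": ["splenic artery and vein", "pancreatic body"],
    "first_porta_hepatis_station": ["liver", "portal vein", "bile duct"],
}
```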
23. Perform biliopancreatic anatomical structure recognition on the plurality of first biliopancreatic ultrasound images by using preset biliopancreatic anatomical structure recognition models, and determine, among them, second biliopancreatic ultrasound images in which the biliopancreatic anatomical structures are recognizable and third biliopancreatic ultrasound images in which the biliopancreatic anatomical structures are unrecognizable.
After the biliopancreatic station corresponding to each first biliopancreatic ultrasound image has been recognized, the plurality of biliopancreatic anatomical structures in each first biliopancreatic ultrasound image still need to be recognized. Therefore, preset biliopancreatic anatomical structure recognition models can likewise be used to perform anatomical structure recognition on the plurality of first biliopancreatic ultrasound images obtained after station recognition.
It should be noted that in the existing technology for biliopancreatic anatomical structure recognition, anatomical structure recognition is performed on biliopancreatic ultrasound images whose stations have already been recognized. However, during anatomical structure recognition, some biliopancreatic anatomical structures have essentially the same texture, and some are anechoic structures, so that not all biliopancreatic anatomical structures can be recognized.
Therefore, the preset biliopancreatic anatomical structure recognition models can recognize only part of the biliopancreatic anatomical structures; that is, using the preset models, the second biliopancreatic ultrasound images in which the anatomical structures are recognizable and the third biliopancreatic ultrasound images in which the anatomical structures are unrecognizable can be determined among the plurality of first biliopancreatic ultrasound images.
FIG. 4 is a schematic diagram of an embodiment of the biliopancreatic anatomical structures provided by an embodiment of the present application. In FIG. 4, instead of performing station recognition on the biliopancreatic ultrasound images of the biliopancreatic structure, the biliopancreatic anatomical structures are first divided into seven classes, and anatomical structure recognition is performed on the biliopancreatic ultrasound images according to the seven classes, so as to classify the biliopancreatic structure by anatomical structure.
It should be noted that among the biliopancreatic stations shown in FIG. 3, different stations may have anatomical structures with the same name; therefore, when the plurality of anatomical structures are classified in FIG. 4, the number of anatomical structures included in FIG. 4 is smaller than the total number of anatomical structures corresponding to all the stations in FIG. 3.
FIG. 5 is a schematic diagram of an embodiment of the recognizable biliopancreatic anatomical structures provided by an embodiment of the present application. As shown in FIG. 5, the aforementioned station recognition and anatomical structure recognition can effectively recognize the different anatomical structures corresponding to different stations, but not all biliopancreatic anatomical structures can be recognized.
Taking the first porta hepatis station as an example, under normal circumstances the first porta hepatis station corresponds to three biliopancreatic anatomical structures: the liver, the portal vein, and the bile duct. With the above station recognition and anatomical structure recognition, only the anatomical structure corresponding to the liver can be recognized, while the portal vein and the bile duct cannot. FIG. 5 shows the recognizable structures, and FIG. 6 shows the unrecognizable biliopancreatic anatomical structures.
In the embodiments of the present application, among the plurality of first biliopancreatic ultrasound images, the images whose anatomical structures are recognizable are the second biliopancreatic ultrasound images, and those whose anatomical structures are unrecognizable are the third biliopancreatic ultrasound images; only the third biliopancreatic ultrasound images need subsequent recognition to confirm their anatomical structures.
It should be noted that in the above embodiment, when the preset biliopancreatic anatomical structure recognition models are used for recognition, the recognition is performed on the first biliopancreatic ultrasound images whose stations have already been confirmed.
24. Perform position recognition on the third biliopancreatic ultrasound images to determine the biliopancreatic anatomical structures therein.
After the third biliopancreatic ultrasound images corresponding to the unrecognizable anatomical structures have been determined, position recognition can be performed on them to determine the anatomical structures that could not previously be recognized.
The biliopancreatic ultrasound image recognition method provided by the embodiments of the present application determines the unrecognizable biliopancreatic anatomical structures through existing station recognition and anatomical structure recognition, and determines the positions of the unrecognizable anatomical structures in the biliopancreatic ultrasound image according to the known positional relationships among the anatomical structures, so as to recognize them. The method combines the existing station information, the image features of the anatomical structures, and their position coordinates to comprehensively recognize and annotate the biliopancreatic anatomical structures, significantly reducing the difficulty of recognizing them in biliopancreatic ultrasound images.
In the embodiments of the present application, after the plurality of initial biliopancreatic ultrasound images corresponding to the biliopancreatic structure have been acquired with the biliopancreatic endoscopic ultrasound probe, they still need to be processed to obtain the plurality of biliopancreatic ultrasound images used for subsequent recognition. FIG. 7 is a schematic flowchart of an embodiment of acquiring biliopancreatic ultrasound images provided by an embodiment of the present application, which may include:
71. Acquire a plurality of initial biliopancreatic ultrasound images of the human biliopancreatic structure to be recognized.
72. Determine the valid region corresponding to each initial biliopancreatic ultrasound image to obtain a plurality of valid regions.
73. Obtain the horizontal circumscribed rectangle corresponding to each valid region to obtain a plurality of horizontal circumscribed rectangles.
74. Crop the plurality of initial biliopancreatic ultrasound images of the biliopancreatic structure to be recognized with the respective horizontal circumscribed rectangles, to obtain the plurality of biliopancreatic ultrasound images of the biliopancreatic structure for subsequent recognition.
Specifically, the initial biliopancreatic ultrasound images of the biliopancreatic structure acquired with the endoscopic ultrasound probe contain a lot of redundant information, which would affect subsequent station recognition and anatomical structure recognition, so the redundant information needs to be removed. Specifically, the valid region corresponding to each initial biliopancreatic ultrasound image can be determined in turn, and the regions other than the valid region can be cropped away.
In a specific embodiment, after the valid region in an initial biliopancreatic ultrasound image has been determined, the horizontal circumscribed rectangle corresponding to the valid region can be determined directly; the valid region is kept according to the shape of the horizontal circumscribed rectangle, and the regions outside the rectangle are cropped away, yielding the final biliopancreatic ultrasound image corresponding to the biliopancreatic structure.
In the embodiments of the present application, a neural network model can be used to determine the valid region in the initial biliopancreatic ultrasound image. Specifically, a UNet++ image neural network model can be trained to recognize the valid region in the initial biliopancreatic ultrasound image, after which the initial image is cropped. The specific training process can follow the existing technology and is not limited here.
It should be noted that in the embodiments of the present application, since there are a plurality of initial biliopancreatic ultrasound images and the valid region of each one is different, the horizontal circumscribed rectangle corresponding to each valid region is also different, and the plurality of biliopancreatic ultrasound images finally obtained by cropping therefore also differ from one another.
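As an illustration of steps 72 to 74, the following minimal Python/NumPy sketch crops one frame to the horizontal circumscribed (axis-aligned bounding) rectangle of a binary valid-region mask; it assumes the mask has already been produced by the trained UNet++ model, and the function and variable names are illustrative rather than part of the disclosed implementation.

```python
import numpy as np

def crop_to_valid_region(image: np.ndarray, valid_mask: np.ndarray) -> np.ndarray:
    """Crop an ultrasound frame to the horizontal circumscribed rectangle of its
    valid-region mask. `valid_mask` is a binary (H, W) array whose nonzero
    pixels mark the valid region predicted by the segmentation model."""
    ys, xs = np.nonzero(valid_mask)
    if ys.size == 0:                       # no valid region found: keep the frame as is
        return image
    top, bottom = ys.min(), ys.max()       # vertical extent of the valid region
    left, right = xs.min(), xs.max()       # horizontal extent of the valid region
    return image[top:bottom + 1, left:right + 1]
```

Because the rectangle is recomputed per frame, each cropped image generally has a different size, which matches the note above that the cropped images differ from one another.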
As shown in FIG. 8, which is a schematic flowchart of an embodiment of biliopancreatic station recognition provided by an embodiment of the present application, the process may include:
81. Obtain a plurality of preset initial biliopancreatic station recognition models.
82. Train the plurality of preset initial biliopancreatic station recognition models respectively to obtain a plurality of biliopancreatic station recognition models, each recognizing a different biliopancreatic station.
83. Perform biliopancreatic station recognition on the plurality of biliopancreatic ultrasound images of the biliopancreatic structure with the plurality of biliopancreatic station recognition models, to determine the biliopancreatic station corresponding to each first biliopancreatic ultrasound image.
In the embodiments of the present application, neural network models can likewise be used to recognize the different biliopancreatic stations in the biliopancreatic ultrasound images. Specifically, a plurality of ResNet neural network models can be trained to obtain a plurality of biliopancreatic station recognition models, different models recognizing different stations.
In other embodiments of the present application, a single ResNet neural network model can also be used and trained so that it can recognize the different biliopancreatic stations at the same time.
In a specific embodiment, there may be eight preset initial biliopancreatic station recognition models, which are trained respectively to obtain eight biliopancreatic station recognition models. The eight models are used to recognize, in the plurality of biliopancreatic ultrasound images of the biliopancreatic structure, the abdominal aorta station, the gastric pancreatic body station, the gastric pancreatic tail station, the Confluence station, the first porta hepatis station, the gastric pancreatic head station, the duodenal bulb station, and the descending duodenum station, respectively.
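A hedged sketch of this eight-model arrangement is given below, assuming PyTorch and a recent torchvision are available; the ResNet-18 backbone, the two-class head, and the highest-probability selection rule are illustrative choices, not details confirmed by this description, and the station key names are hypothetical.

```python
import torch
import torch.nn as nn
from torchvision import models

STATIONS = [
    "abdominal_aorta", "gastric_pancreatic_body", "gastric_pancreatic_tail",
    "confluence", "first_porta_hepatis", "gastric_pancreatic_head",
    "duodenal_bulb", "descending_duodenum",
]

def build_station_models() -> dict:
    """One binary ResNet classifier per station (this station present / absent)."""
    station_models = {}
    for name in STATIONS:
        net = models.resnet18(weights=None)        # backbone choice is illustrative
        net.fc = nn.Linear(net.fc.in_features, 2)  # two classes: this station or not
        station_models[name] = net
    return station_models

@torch.no_grad()
def recognize_station(image: torch.Tensor, station_models: dict) -> str:
    """Run every station model on one preprocessed frame of shape (1, 3, H, W)
    and return the station whose model gives the highest 'present' probability."""
    scores = {}
    for name, net in station_models.items():
        net.eval()
        scores[name] = torch.softmax(net(image), dim=1)[0, 1].item()
    return max(scores, key=scores.get)
```

In practice each model would first be trained on frames labelled for its own station; only the inference-time selection is sketched here.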
After the biliopancreatic stations in the biliopancreatic ultrasound images have been recognized and the plurality of first biliopancreatic ultrasound images obtained, anatomical structure recognition continues. Specifically, a plurality of preset initial biliopancreatic anatomical structure recognition models can likewise be obtained and trained to obtain a plurality of biliopancreatic anatomical structure recognition models.
After the plurality of anatomical structure recognition models have been obtained, each of them is taken in turn as the target anatomical structure recognition model, and the target model is used to perform anatomical structure recognition on the plurality of first biliopancreatic ultrasound images in turn, so as to determine the anatomical structures corresponding to each first biliopancreatic ultrasound image.
Specifically, since one anatomical structure recognition model can recognize only one class of anatomical structures, all of the anatomical structure recognition models need to be applied to the same first biliopancreatic ultrasound image, to ensure that all anatomical structures in that image are recognized.
In a specific embodiment, as in FIG. 4, the biliopancreatic anatomical structures are divided into seven classes, so there may be seven initial anatomical structure recognition models; specifically, they may be UNet++ neural network models, which are trained separately to recognize the different classes of biliopancreatic anatomical structures.
However, since some biliopancreatic anatomical structures cannot be recognized, as shown in FIG. 5, the unrecognizable anatomical structures still need to be recognized to determine the corresponding structures. Among the plurality of first biliopancreatic ultrasound images, the images whose anatomical structures are recognizable are the second biliopancreatic ultrasound images, and those whose anatomical structures are unrecognizable are the third biliopancreatic ultrasound images; that is, the anatomical structures in the third biliopancreatic ultrasound images need to be recognized.
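For illustration only, the split into second and third images might be organised as below. The per-class segmentation models are assumed to be callables returning the regions they find, `classes_for_station` follows the FIG. 3 mapping, and every name here is hypothetical rather than taken from the disclosed implementation.

```python
def split_recognizable_images(first_images, station_of, structure_models, classes_for_station):
    """Apply every structure-class model relevant to an image's station.

    `first_images` is a list of (image_id, image) pairs, `station_of` maps an
    image id to its recognized station, `structure_models` maps a class id to a
    callable returning the regions of that class found in an image (possibly
    an empty list). Returns (second_images, third_images)."""
    second_images, third_images = [], []
    for img_id, image in first_images:
        station = station_of[img_id]
        all_found = True
        for cls in classes_for_station[station]:
            regions = structure_models[cls](image)   # regions of this class, if any
            if not regions:
                all_found = False                     # at least one expected class missed
        (second_images if all_found else third_images).append((img_id, image))
    return second_images, third_images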
As shown in FIG. 9, which is a schematic flowchart of an embodiment of position recognition provided by an embodiment of the present application, the process may include:
91. Determine a plurality of target regions in the third biliopancreatic ultrasound image.
FIG. 10 shows the recognition behaviour of the different biliopancreatic anatomical structure recognition models provided by an embodiment of the present application. In some embodiments, since the anatomical structures corresponding to each biliopancreatic station are known, if the biliopancreatic ultrasound image of the biliopancreatic structure does not include a certain anatomical structure, the corresponding anatomical structure recognition model is not applied to that image.
That is, if certain anatomical structures do not exist at a biliopancreatic station, the anatomical structure recognition models that recognize those structures are not applied to the first biliopancreatic ultrasound image, which saves recognition time and improves recognition efficiency.
Specifically, combining FIG. 4 and FIG. 10: in FIG. 4 the abdominal aorta, the celiac trunk, and the superior mesenteric artery belong to the same class of anatomical structures, so these three structures can be recognized with the same anatomical structure recognition model; that is, the class 1 recognition model can determine the three target regions in the third biliopancreatic ultrasound image corresponding to the abdominal aorta station.
For the first porta hepatis station, the corresponding anatomical structures are the liver, the portal vein, and the bile duct; the portal vein and the bile duct belong to the same class of anatomical structures, while the liver belongs to another class. Therefore, when the anatomical structure recognition models are used for recognition, the class 1 recognition model is needed to recognize two target regions, and the class 5 recognition model is needed to recognize one more target region, making three target regions in total; the three recognized target regions correspond to the liver, the portal vein, and the bile duct respectively.
92. Determine a coordinate origin in the third biliopancreatic ultrasound image, so as to determine the coordinates of the initial edge points corresponding to each target region.
In the embodiments of the present application, the top-left corner of the third biliopancreatic ultrasound image can be taken as the coordinate origin to construct a coordinate system, and the boundary of each target region is enclosed by a plurality of initial edge points (x, y).
93. Determine the center point coordinates of each target region according to the coordinates of its initial edge points.
After the initial edge points corresponding to each target region have been determined, since there are many of them and their distribution is not necessarily uniform, the initial edge points also need to undergo preset sparsification and uniformization to obtain a plurality of edge points, in order to ensure the accuracy of the center point calculation and to simplify it. The coordinates of each edge point are then determined according to the coordinate origin, and finally the center point coordinates of each target region are determined from the edge point coordinates.
Specifically, the preset sparsification and uniformization of the initial edge points may include: traversing all the initial edge points in a preset order, discarding an initial edge point when the spacing between two adjacent initial edge points is less than 10 pixels, and inserting a new initial edge point between two adjacent initial edge points whose spacing is greater than 10 pixels, thereby obtaining the plurality of edge points.
Further, according to the coordinate origin, the coordinate sequence {(x1, y1), (x2, y2), ..., (xn, yn)} of the edge points can be obtained. In the embodiments of the present application, the mean of all the edge point coordinates can be taken as the coordinates of the center point of the target region, i.e. the center point Rc(Xc, Yc) of the target region can be:
Xc = (x1 + x2 + ... + xn) / n
Yc = (y1 + y2 + ... + yn) / n
It should be noted that the above center point calculation applies to one particular target region; different target regions have different center point coordinates.
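The edge-point thinning, evening-out, and centre calculation described in steps 92 and 93 could look like the following pure-Python sketch, using the 10-pixel spacing stated above. Inserting a single midpoint between far-apart neighbours is one simple reading of the uniformization step, not the only possible one, and the function names are illustrative.

```python
import math

def uniformize_edge_points(points: list, spacing: float = 10.0) -> list:
    """Thin and even out a contour given as a list of (x, y) points: drop a
    point when it lies closer than `spacing` pixels to the previously kept
    point, and add a midpoint when two kept neighbours are farther apart."""
    kept = []
    for x1, y1 in points:
        if not kept:
            kept.append((x1, y1))
            continue
        x0, y0 = kept[-1]
        d = math.hypot(x1 - x0, y1 - y0)
        if d < spacing:
            continue                                      # too close: discard
        if d > spacing:
            kept.append(((x0 + x1) / 2, (y0 + y1) / 2))   # too far: insert a new point
        kept.append((x1, y1))
    return kept

def region_center(points: list) -> tuple:
    """Centre Rc(Xc, Yc) of a target region as the mean of its edge points,
    in the image coordinate system whose origin is the top-left corner."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return sum(xs) / len(xs), sum(ys) / len(ys)
```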
94. Obtain the preset positional relationship corresponding to the biliopancreatic anatomical structures.
In the embodiments of the present application, the known positional relationships among a plurality of biliopancreatic anatomical structures can be obtained. FIG. 11 is a schematic diagram of an embodiment of the positional relationship provided by an embodiment of the present application; in FIG. 11, the positional relationship among the abdominal aorta 1, the celiac trunk 2, and the superior mesenteric artery 3 is as shown.
95. Determine the biliopancreatic anatomical structure corresponding to each target region according to the positional relationship and the center point coordinates.
In the embodiments of the present application, combining FIG. 4 and FIG. 11: from FIG. 4 it can be determined that the class 1 recognition model recognizes three target regions at the abdominal aorta station. The positional relationship among the three anatomical structures at the abdominal aorta station, namely the abdominal aorta, the celiac trunk, and the superior mesenteric artery, is known and fixed; it is then only necessary to match the center point coordinates of the three target regions against the known positional relationship to determine the anatomical structure corresponding to each target region.
Specifically, the three target regions corresponding to the abdominal aorta station are Rc1, Rc2, and Rc3, each with its own center point coordinates. According to the positional relationship, the target region whose center has the largest Xc and Yc is the abdominal aorta, the one with the smallest Xc is the superior mesenteric artery, and the remaining one is the celiac trunk.
In the embodiments of the present application, the center point coordinates of a target region can be taken as the position coordinates of the target region; therefore, by checking the preset positional relationship against the center point coordinates, the position of the biliopancreatic anatomical structure can be determined, i.e. the anatomical structure corresponding to the target region is determined. The anatomical structure corresponding to each target region is determined in turn by this method.
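A minimal sketch of this matching for the abdominal aorta station is given below, using the rule just stated. Taking the largest Xc + Yc as a proxy for "largest Xc and Yc" is an assumption made only for illustration, and the region ids are hypothetical.

```python
def label_aorta_station(centers: dict) -> dict:
    """Assign the three unlabeled regions of the abdominal aorta station from
    their centres: largest Xc and Yc -> abdominal aorta, smallest Xc among the
    rest -> superior mesenteric artery, remaining region -> celiac trunk.
    `centers` maps a region id to its (Xc, Yc) centre."""
    remaining = dict(centers)
    labels = {}

    aorta = max(remaining, key=lambda r: remaining[r][0] + remaining[r][1])
    labels[aorta] = "abdominal aorta"
    del remaining[aorta]

    sma = min(remaining, key=lambda r: remaining[r][0])
    labels[sma] = "superior mesenteric artery"
    del remaining[sma]

    (celiac,) = remaining
    labels[celiac] = "celiac trunk"
    return labels
```

For example, label_aorta_station({"Rc1": (220, 180), "Rc2": (90, 60), "Rc3": (150, 120)}) would label Rc1 as the abdominal aorta, Rc2 as the superior mesenteric artery, and Rc3 as the celiac trunk.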
It should be noted that "no" in FIG. 10 means that the corresponding class recognition model (i.e. anatomical structure recognition model) does not need to be applied to the corresponding first biliopancreatic ultrasound image. "1" in FIG. 10 means that the corresponding class recognition model can recognize only one target region; and since the anatomical structures that each class recognition model can recognize are fixed, once a single target region has been recognized, the anatomical structure corresponding to that target region has in fact already been determined, so the subsequent recognition using the positional relationship is unnecessary.
Taking the gastric pancreatic body station as an example, in its recognition the class 1 recognition model and the class 2 recognition model each recognize one target region. Since among the anatomical structures that the class 1 model can recognize only the splenic artery and vein belong to the gastric pancreatic body station, once the class 1 model has recognized a target region it can be directly confirmed that this region corresponds to the splenic artery and vein. Similarly, among the anatomical structures that the class 2 model can recognize only the pancreatic body belongs to the gastric pancreatic body station, so the target region recognized by the class 2 model can be directly confirmed to correspond to the pancreatic body.
Meanwhile, in FIG. 10, if the same class recognition model recognizes multiple target regions, the anatomical structure corresponding to each of them needs to be confirmed according to the positional relationships among the anatomical structures.
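Putting the FIG. 10 rules together, a hedged sketch of the per-station decision logic might look as follows; `regions_by_class`, `structures_of`, and `match_by_position` are assumed interfaces introduced only for illustration, not names taken from the disclosed implementation.

```python
def label_regions_for_station(station, regions_by_class, classes_for_station,
                              structures_of, match_by_position):
    """Label the target regions found in one third image at a given station.

    `regions_by_class` maps a structure-class id to the region ids its model
    found; `structures_of[(station, cls)]` lists the structures of that class
    expected at the station; `match_by_position` resolves several regions of
    one class with the preset positional relationship (see the sketch above)."""
    labels = {}
    for cls in classes_for_station[station]:       # models irrelevant to the station are skipped
        regions = regions_by_class.get(cls, [])
        expected = structures_of[(station, cls)]
        if not regions:
            continue
        if len(regions) == 1 and len(expected) == 1:
            labels[regions[0]] = expected[0]        # a single region is labelled directly
        else:
            labels.update(match_by_position(station, cls, regions))
    return labels
```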
It should be noted that in the embodiments of the present application, the process of training the neural networks can follow the existing technology and is not limited in any way here.
In order to better implement the biliopancreatic ultrasound image recognition method of the embodiments of the present application, an embodiment of the present application further provides, on the basis of the method, a biliopancreatic ultrasound image recognition device. FIG. 12 is a schematic diagram of an embodiment of the biliopancreatic ultrasound image recognition device provided by an embodiment of the present application; the device includes:
an acquisition module 1201, configured to acquire a plurality of biliopancreatic ultrasound images of a human biliopancreatic structure to be recognized;
a first recognition module 1202, configured to perform biliopancreatic station recognition on the plurality of biliopancreatic ultrasound images of the biliopancreatic structure by using preset biliopancreatic station recognition models, and determine the biliopancreatic stations corresponding to the plurality of biliopancreatic ultrasound images, so as to obtain a plurality of first biliopancreatic ultrasound images, the biliopancreatic station corresponding to each first biliopancreatic ultrasound image being determined;
a second recognition module 1203, configured to perform biliopancreatic anatomical structure recognition on the plurality of first biliopancreatic ultrasound images by using preset biliopancreatic anatomical structure recognition models, and determine, among them, second biliopancreatic ultrasound images in which the anatomical structures are recognizable and third biliopancreatic ultrasound images in which the anatomical structures are unrecognizable;
a positioning module 1204, configured to perform position recognition on the third biliopancreatic ultrasound images to determine the biliopancreatic anatomical structures therein.
The biliopancreatic ultrasound image recognition device provided by the embodiments of the present application determines the unrecognizable biliopancreatic anatomical structures through existing station recognition and anatomical structure recognition, and determines the positions of the unrecognizable anatomical structures in the biliopancreatic ultrasound image according to the known positional relationships among the anatomical structures, so as to recognize them. This approach combines the existing station information, the image features of the anatomical structures, and their position coordinates to comprehensively recognize and annotate the biliopancreatic anatomical structures, significantly reducing the difficulty of recognizing them in biliopancreatic ultrasound images.
In some embodiments of the present application, the acquisition module 1201 may be specifically configured to: acquire a plurality of initial biliopancreatic ultrasound images of the human biliopancreatic structure to be recognized; determine the valid region corresponding to each initial biliopancreatic ultrasound image to obtain a plurality of valid regions; obtain the horizontal circumscribed rectangle corresponding to each valid region to obtain a plurality of horizontal circumscribed rectangles; and crop the plurality of initial biliopancreatic ultrasound images of the biliopancreatic structure to be recognized with the respective horizontal circumscribed rectangles, to obtain the plurality of biliopancreatic ultrasound images for subsequent recognition.
In some embodiments of the present application, the first recognition module 1202 may be specifically configured to: obtain a plurality of preset initial biliopancreatic station recognition models; train them respectively to obtain a plurality of biliopancreatic station recognition models, each recognizing a different station; and perform station recognition on the plurality of biliopancreatic ultrasound images of the biliopancreatic structure with the plurality of station recognition models, to determine the biliopancreatic station corresponding to each first biliopancreatic ultrasound image.
There are a plurality of first biliopancreatic ultrasound images, and the biliopancreatic station corresponding to each of them is determined.
In a specific embodiment, there may be eight preset initial biliopancreatic station recognition models, and the first recognition module 1202 may be specifically configured to: train the eight preset initial biliopancreatic station recognition models respectively to obtain eight biliopancreatic station recognition models, which are used to recognize, in the plurality of biliopancreatic ultrasound images of the biliopancreatic structure, the abdominal aorta station, the gastric pancreatic body station, the gastric pancreatic tail station, the Confluence station, the first porta hepatis station, the gastric pancreatic head station, the duodenal bulb station, and the descending duodenum station, respectively.
In some embodiments of the present application, the second recognition module 1203 may be specifically configured to: obtain a plurality of preset biliopancreatic anatomical structure recognition models; and take each of them in turn as the target anatomical structure recognition model, using the target model to perform anatomical structure recognition on the plurality of first biliopancreatic ultrasound images in turn, so as to determine the anatomical structures corresponding to each first biliopancreatic ultrasound image;
wherein, among the plurality of first biliopancreatic ultrasound images, the images whose anatomical structures are recognizable are the second biliopancreatic ultrasound images, and those whose anatomical structures are unrecognizable are the third biliopancreatic ultrasound images.
In some embodiments, the positioning module 1204 may be configured to: determine a plurality of target regions in the third biliopancreatic ultrasound image, each target region being enclosed by a plurality of initial edge points; determine a coordinate origin in the third biliopancreatic ultrasound image so as to determine the coordinates of the initial edge points corresponding to each target region; determine the center point coordinates of each target region according to the coordinates of its initial edge points; obtain the preset positional relationship of the biliopancreatic anatomical structures; and determine the anatomical structure corresponding to each target region according to the positional relationship and the center point coordinates.
The biliopancreatic anatomical structures corresponding to the target regions are the biliopancreatic anatomical structures corresponding to the third biliopancreatic ultrasound image.
In other embodiments, the positioning module 1204 may also be configured to: perform the preset sparsification and uniformization on the initial edge points to obtain a plurality of edge points; determine the coordinates of each edge point according to the coordinate origin; and determine the center point coordinates of each target region according to the edge point coordinates.
The present application further provides a server that integrates any of the biliopancreatic ultrasound image recognition devices provided by the embodiments of the present application. FIG. 13 shows a schematic structural diagram of the server involved in the embodiments of the present application. Specifically:
The server may include a processor 1301 with one or more processing cores, a memory 1302 with one or more computer-readable storage media, a power supply 1303, an input unit 1304, and other components. Those skilled in the art can understand that the server structure shown in FIG. 13 does not limit the server, which may include more or fewer components than shown, combine certain components, or use a different arrangement of components. Among them:
The processor 1301 is the control center of the server; it connects all parts of the entire server via various interfaces and lines, and executes the various functions of the server and processes data by running or executing the software programs and/or models stored in the memory 1302 and invoking the data stored in the memory 1302, thereby monitoring the server as a whole. Optionally, the processor 1301 may include one or more processing cores; preferably, the processor 1301 may integrate an application processor and a modem processor, the application processor mainly handling the operating system, user interface, application programs, and the like, and the modem processor mainly handling wireless communication. It can be understood that the modem processor may also not be integrated into the processor 1301.
The memory 1302 can be used to store software programs and models, and the processor 1301 executes various functional applications and data processing by running the software programs and models stored in the memory 1302. The memory 1302 may mainly include a program storage area and a data storage area, wherein the program storage area can store the operating system, application programs required by at least one function (such as a sound playback function, an image playback function, etc.), and the like, and the data storage area can store data created according to the use of the server, and the like. In addition, the memory 1302 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage device. Accordingly, the memory 1302 may further include a memory controller to provide the processor 1301 with access to the memory 1302.
The server further includes a power supply 1303 that supplies power to all components. Preferably, the power supply 1303 can be logically connected to the processor 1301 through a power management system, so that functions such as charging, discharging, and power consumption management are implemented through the power management system. The power supply 1303 may also include one or more DC or AC power supplies, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and any other components.
The server may further include an input unit 1304, which can be used to receive input numeric or character information and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control.
Although not shown, the server may also include a display unit and the like, which will not be described in detail here. Specifically, in this embodiment, the processor 1301 in the server loads the executable files corresponding to the processes of one or more application programs into the memory 1302 according to the following instructions, and the processor 1301 runs the application programs stored in the memory 1302 to implement various functions, as follows:
acquiring a plurality of biliopancreatic ultrasound images of a human biliopancreatic structure to be recognized; performing biliopancreatic station recognition on the plurality of biliopancreatic ultrasound images of the biliopancreatic structure by using preset biliopancreatic station recognition models, and determining the biliopancreatic stations corresponding to them, so as to obtain a plurality of first biliopancreatic ultrasound images, the biliopancreatic station corresponding to each of them being determined; performing biliopancreatic anatomical structure recognition on the plurality of first biliopancreatic ultrasound images by using preset biliopancreatic anatomical structure recognition models, and determining, among them, second biliopancreatic ultrasound images in which the anatomical structures are recognizable and third biliopancreatic ultrasound images in which the anatomical structures are unrecognizable; and performing position recognition on the third biliopancreatic ultrasound images to determine the biliopancreatic anatomical structures therein.
The present application further provides a computer-readable storage medium, which may include a read-only memory (ROM), a random access memory (RAM), a magnetic disk or optical disc, and the like. The storage medium stores a computer program, and the computer program is loaded by a processor to execute the steps in any of the biliopancreatic ultrasound image recognition methods provided by the embodiments of the present application. For example, the computer program loaded by the processor may execute the following steps:
acquiring a plurality of biliopancreatic ultrasound images of a human biliopancreatic structure to be recognized; performing biliopancreatic station recognition on the plurality of biliopancreatic ultrasound images of the biliopancreatic structure by using preset biliopancreatic station recognition models, and determining the biliopancreatic stations corresponding to them, so as to obtain a plurality of first biliopancreatic ultrasound images, the biliopancreatic station corresponding to each of them being determined; performing biliopancreatic anatomical structure recognition on the plurality of first biliopancreatic ultrasound images by using preset biliopancreatic anatomical structure recognition models, and determining, among them, second biliopancreatic ultrasound images in which the anatomical structures are recognizable and third biliopancreatic ultrasound images in which the anatomical structures are unrecognizable; and performing position recognition on the third biliopancreatic ultrasound images to determine the biliopancreatic anatomical structures therein.
It should be noted that since the methods of the embodiments of the present application are executed in electronic devices, the processing objects of each electronic device exist in the form of data or information; for example, time is essentially time information. It can be understood that if sizes, quantities, positions, and the like are mentioned in subsequent embodiments, they all exist as corresponding data so that the electronic device can process them, which will not be described in detail here.
The biliopancreatic ultrasound image recognition method, device, and server provided by the embodiments of the present application have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present invention, and the description of the above embodiments is only intended to help understand the method of the present invention and its core idea. Meanwhile, for those skilled in the art, there will be changes in the specific implementation and scope of application in accordance with the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (20)

  1. A biliopancreatic ultrasound image recognition method, comprising:
    acquiring a plurality of biliopancreatic ultrasound images of a human biliopancreatic structure to be recognized;
    performing biliopancreatic station recognition on the plurality of biliopancreatic ultrasound images of the biliopancreatic structure by using preset biliopancreatic station recognition models, and determining the biliopancreatic stations corresponding to the plurality of biliopancreatic ultrasound images, so as to obtain a plurality of first biliopancreatic ultrasound images, wherein the biliopancreatic station corresponding to each first biliopancreatic ultrasound image is determined;
    performing biliopancreatic anatomical structure recognition on the plurality of first biliopancreatic ultrasound images by using preset biliopancreatic anatomical structure recognition models, and determining, among the plurality of first biliopancreatic ultrasound images, second biliopancreatic ultrasound images in which the biliopancreatic anatomical structures are recognizable and third biliopancreatic ultrasound images in which the biliopancreatic anatomical structures are unrecognizable; and
    performing position recognition on the third biliopancreatic ultrasound images to determine the biliopancreatic anatomical structures therein.
  2. The biliopancreatic ultrasound image recognition method according to claim 1, wherein acquiring the plurality of biliopancreatic ultrasound images of the human biliopancreatic structure to be recognized comprises:
    acquiring a plurality of initial biliopancreatic ultrasound images of the human biliopancreatic structure to be recognized;
    determining the valid region corresponding to each of the initial biliopancreatic ultrasound images to obtain a plurality of valid regions;
    obtaining the horizontal circumscribed rectangle corresponding to each of the valid regions to obtain a plurality of horizontal circumscribed rectangles; and
    cropping the plurality of initial biliopancreatic ultrasound images of the biliopancreatic structure to be recognized with the respective horizontal circumscribed rectangles, to obtain the plurality of biliopancreatic ultrasound images of the biliopancreatic structure used for subsequent recognition.
  3. The biliopancreatic ultrasound image recognition method according to claim 2, wherein performing biliopancreatic station recognition on the plurality of biliopancreatic ultrasound images of the biliopancreatic structure by using the preset biliopancreatic station recognition models, and determining the biliopancreatic stations corresponding to the plurality of biliopancreatic ultrasound images to obtain the plurality of first biliopancreatic ultrasound images, comprises:
    obtaining a plurality of preset initial biliopancreatic station recognition models;
    training the plurality of preset initial biliopancreatic station models respectively to obtain a plurality of biliopancreatic station recognition models, each recognizing a different biliopancreatic station; and
    performing biliopancreatic station recognition on the plurality of biliopancreatic ultrasound images of the biliopancreatic structure with the plurality of biliopancreatic station recognition models, to determine the biliopancreatic station corresponding to each first biliopancreatic ultrasound image;
    wherein there are a plurality of first biliopancreatic ultrasound images, and the biliopancreatic station corresponding to each of them is determined.
  4. The ultrasound image recognition method according to claim 3, wherein training the plurality of preset initial biliopancreatic station models respectively to obtain the plurality of biliopancreatic station recognition models comprises:
    training a plurality of ResNet neural network models to obtain the plurality of biliopancreatic station recognition models, different biliopancreatic station recognition models recognizing different biliopancreatic stations.
  5. The biliopancreatic ultrasound image recognition method according to claim 3, wherein there are eight preset initial biliopancreatic station recognition models;
    and training the plurality of preset initial biliopancreatic station recognition models respectively to obtain the plurality of biliopancreatic station recognition models comprises:
    training the eight preset initial biliopancreatic station recognition models respectively to obtain eight biliopancreatic station recognition models, wherein the eight biliopancreatic station recognition models are used to recognize, in the plurality of biliopancreatic ultrasound images of the biliopancreatic structure, the abdominal aorta station, the gastric pancreatic body station, the gastric pancreatic tail station, the Confluence station, the first porta hepatis station, the gastric pancreatic head station, the duodenal bulb station, and the descending duodenum station, respectively.
  6. The ultrasound image recognition method according to claim 2, wherein determining the valid region corresponding to each of the initial biliopancreatic ultrasound images to obtain the plurality of valid regions comprises:
    training a UNet++ image neural network model, and using the trained UNet++ image neural network model to recognize the valid region corresponding to each of the initial biliopancreatic ultrasound images, to obtain the plurality of valid regions.
  7. The biliopancreatic ultrasound image recognition method according to claim 1, wherein performing biliopancreatic anatomical structure recognition on the plurality of first biliopancreatic ultrasound images by using the preset biliopancreatic anatomical structure recognition models, and determining, among the plurality of first biliopancreatic ultrasound images, the second biliopancreatic ultrasound images in which the biliopancreatic anatomical structures are recognizable and the third biliopancreatic ultrasound images in which the biliopancreatic anatomical structures are unrecognizable, comprises:
    obtaining a plurality of preset biliopancreatic anatomical structure recognition models; and
    taking each of the plurality of preset biliopancreatic anatomical structure recognition models in turn as a target biliopancreatic anatomical structure recognition model, and using the target biliopancreatic anatomical structure recognition model to perform biliopancreatic anatomical structure recognition on the plurality of first biliopancreatic ultrasound images in turn, so as to determine the biliopancreatic anatomical structures corresponding to each first biliopancreatic ultrasound image;
    wherein, among the plurality of first biliopancreatic ultrasound images, the images whose biliopancreatic anatomical structures are recognizable are the second biliopancreatic ultrasound images, and the images whose biliopancreatic anatomical structures are unrecognizable are the third biliopancreatic ultrasound images.
  8. The biliopancreatic ultrasound image recognition method according to claim 7, wherein performing position recognition on the third biliopancreatic ultrasound images to determine the biliopancreatic anatomical structures therein comprises:
    determining a plurality of target regions in the third biliopancreatic ultrasound image, each target region being enclosed by a plurality of initial edge points;
    determining a coordinate origin in the third biliopancreatic ultrasound image so as to determine the coordinates of the initial edge points corresponding to each target region;
    determining the center point coordinates of each target region according to the coordinates of its initial edge points;
    obtaining a preset positional relationship corresponding to the biliopancreatic anatomical structures; and
    determining the biliopancreatic anatomical structure corresponding to each target region according to the positional relationship and the center point coordinates;
    wherein the biliopancreatic anatomical structures corresponding to the target regions are the biliopancreatic anatomical structures corresponding to the third biliopancreatic ultrasound image.
  9. The biliopancreatic ultrasound image recognition method according to claim 8, wherein determining the center point coordinates of each target region according to the coordinates of its initial edge points comprises:
    performing preset sparsification and uniformization on the initial edge points to obtain a plurality of edge points;
    determining the coordinates of each edge point according to the coordinate origin; and
    determining the center point coordinates of each target region according to the edge point coordinates.
  10. The ultrasound image recognition method according to claim 9, wherein performing the preset sparsification and uniformization on the initial edge points to obtain the plurality of edge points comprises:
    traversing the initial edge points in a preset order;
    discarding an initial edge point when the spacing between two adjacent initial edge points is less than 10 pixels; and
    inserting a new initial edge point between two adjacent initial edge points whose spacing is greater than 10 pixels, to obtain the plurality of edge points.
  11. The ultrasound image recognition method according to claim 9, wherein determining the center point coordinates of each target region according to the edge point coordinates comprises:
    taking the mean of the edge point coordinates of each target region as the center point coordinates of that target region.
  12. The ultrasound image recognition method according to claim 7, wherein the anatomical structures fall into seven classes, and obtaining the plurality of preset biliopancreatic anatomical structure recognition models comprises:
    obtaining seven preset biliopancreatic anatomical structure recognition models.
  13. A biliopancreatic ultrasound image recognition device, comprising:
    an acquisition module, configured to acquire a plurality of biliopancreatic ultrasound images of a human biliopancreatic structure to be recognized;
    a first recognition module, configured to perform biliopancreatic station recognition on the plurality of biliopancreatic ultrasound images of the biliopancreatic structure by using preset biliopancreatic station recognition models, and determine the biliopancreatic stations corresponding to the plurality of biliopancreatic ultrasound images, so as to obtain a plurality of first biliopancreatic ultrasound images, wherein the biliopancreatic station corresponding to each first biliopancreatic ultrasound image is determined;
    a second recognition module, configured to perform biliopancreatic anatomical structure recognition on the plurality of first biliopancreatic ultrasound images by using preset biliopancreatic anatomical structure recognition models, and determine, among the plurality of first biliopancreatic ultrasound images, second biliopancreatic ultrasound images in which the biliopancreatic anatomical structures are recognizable and third biliopancreatic ultrasound images in which the biliopancreatic anatomical structures are unrecognizable; and
    a positioning module, configured to perform position recognition on the third biliopancreatic ultrasound images to determine the biliopancreatic anatomical structures therein.
  14. The biliopancreatic ultrasound image recognition device according to claim 13, wherein the acquisition module is specifically configured to: acquire a plurality of initial biliopancreatic ultrasound images of the human biliopancreatic structure to be recognized;
    determine the valid region corresponding to each of the initial biliopancreatic ultrasound images to obtain a plurality of valid regions;
    obtain the horizontal circumscribed rectangle corresponding to each of the valid regions to obtain a plurality of horizontal circumscribed rectangles; and
    crop the plurality of initial biliopancreatic ultrasound images of the biliopancreatic structure to be recognized with the respective horizontal circumscribed rectangles, to obtain the plurality of biliopancreatic ultrasound images of the biliopancreatic structure used for subsequent recognition.
  15. The biliopancreatic ultrasound image recognition device according to claim 13, wherein the first recognition module is specifically configured to: obtain a plurality of preset initial biliopancreatic station recognition models;
    train the plurality of preset initial biliopancreatic station models respectively to obtain a plurality of biliopancreatic station recognition models, each recognizing a different biliopancreatic station; and
    perform biliopancreatic station recognition on the plurality of biliopancreatic ultrasound images of the biliopancreatic structure with the plurality of biliopancreatic station recognition models, to determine the biliopancreatic station corresponding to each first biliopancreatic ultrasound image;
    wherein there are a plurality of first biliopancreatic ultrasound images, and the biliopancreatic station corresponding to each of them is determined.
  16. The biliopancreatic ultrasound image recognition device according to claim 15, wherein there are eight preset initial biliopancreatic station recognition models, and the first recognition module is specifically configured to:
    train the eight preset initial biliopancreatic station recognition models respectively to obtain eight biliopancreatic station recognition models, wherein the eight biliopancreatic station recognition models are used to recognize, in the plurality of biliopancreatic ultrasound images of the biliopancreatic structure, the abdominal aorta station, the gastric pancreatic body station, the gastric pancreatic tail station, the Confluence station, the first porta hepatis station, the gastric pancreatic head station, the duodenal bulb station, and the descending duodenum station, respectively.
  17. The biliopancreatic ultrasound image recognition device according to claim 13, wherein the second recognition module is specifically configured to:
    obtain a plurality of preset biliopancreatic anatomical structure recognition models; and
    take each of the plurality of preset biliopancreatic anatomical structure recognition models in turn as a target biliopancreatic anatomical structure recognition model, and use the target biliopancreatic anatomical structure recognition model to perform biliopancreatic anatomical structure recognition on the plurality of first biliopancreatic ultrasound images in turn, so as to determine the biliopancreatic anatomical structures corresponding to each first biliopancreatic ultrasound image;
    wherein, among the plurality of first biliopancreatic ultrasound images, the images whose biliopancreatic anatomical structures are recognizable are the second biliopancreatic ultrasound images, and the images whose biliopancreatic anatomical structures are unrecognizable are the third biliopancreatic ultrasound images.
  18. The biliopancreatic ultrasound image recognition device according to claim 17, wherein the positioning module is specifically configured to:
    determine a plurality of target regions in the third biliopancreatic ultrasound image, each target region being enclosed by a plurality of initial edge points;
    determine a coordinate origin in the third biliopancreatic ultrasound image so as to determine the coordinates of the initial edge points corresponding to each target region;
    determine the center point coordinates of each target region according to the coordinates of its initial edge points;
    obtain a preset positional relationship corresponding to the biliopancreatic anatomical structures; and
    determine the biliopancreatic anatomical structure corresponding to each target region according to the positional relationship and the center point coordinates;
    wherein the biliopancreatic anatomical structures corresponding to the target regions are the biliopancreatic anatomical structures corresponding to the third biliopancreatic ultrasound image.
  19. The biliopancreatic ultrasound image recognition device according to claim 18, wherein the positioning module is specifically configured to:
    perform preset sparsification and uniformization on the initial edge points to obtain a plurality of edge points;
    determine the coordinates of each edge point according to the coordinate origin; and
    determine the center point coordinates of each target region according to the edge point coordinates.
  20. A server, comprising:
    one or more processors;
    a memory; and
    one or more application programs, wherein the one or more application programs are stored in the memory and configured to be executed by the processors to implement the biliopancreatic ultrasound image recognition method according to claim 1.
PCT/CN2021/143710 2021-08-05 2021-12-31 Biliary-pancreatic ultrasound image recognition method, device, and server WO2023010797A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110897534.9 2021-08-05
CN202110897534.9A CN113344926B (zh) 2021-08-05 2021-08-05 Biliary-pancreatic ultrasound image recognition method, device, server, and storage medium

Publications (1)

Publication Number Publication Date
WO2023010797A1 true WO2023010797A1 (zh) 2023-02-09

Family

ID=77480875

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/143710 WO2023010797A1 (zh) 2021-08-05 2021-12-31 Biliary-pancreatic ultrasound image recognition method, device, and server

Country Status (2)

Country Link
CN (1) CN113344926B (zh)
WO (1) WO2023010797A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113344926B (zh) * 2021-08-05 2021-11-02 武汉楚精灵医疗科技有限公司 胆胰超声图像识别方法、装置、服务器及存储介质
CN114913173B (zh) * 2022-07-15 2022-10-04 天津御锦人工智能医疗科技有限公司 内镜辅助检查系统、方法、装置及存储介质

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109614995A (zh) * 2018-11-28 2019-04-12 武汉大学人民医院(湖北省人民医院) 一种超声内镜下识别胰胆管和胰腺结构的系统及方法
CN111415564A (zh) * 2020-03-02 2020-07-14 武汉大学 基于人工智能的胰腺超声内镜检查导航方法及系统
US20200245960A1 (en) * 2019-01-07 2020-08-06 Exini Diagnostics Ab Systems and methods for platform agnostic whole body image segmentation
CN111582215A (zh) * 2020-05-17 2020-08-25 华中科技大学同济医学院附属协和医院 一种胆胰系统正常解剖结构的扫查识别系统和方法
CN113012140A (zh) * 2021-03-31 2021-06-22 武汉楚精灵医疗科技有限公司 基于深度学习的消化内镜视频帧有效信息区域提取方法
CN113344926A (zh) * 2021-08-05 2021-09-03 武汉楚精灵医疗科技有限公司 胆胰超声图像识别方法、装置、服务器及存储介质

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA3067824A1 (en) * 2017-06-26 2019-01-03 The Research Foundation For The State University Of New York System, method, and computer-accessible medium for virtual pancreatography
WO2019146079A1 (ja) * 2018-01-26 2019-08-01 オリンパス株式会社 内視鏡画像処理装置、内視鏡画像処理方法及びプログラム
WO2021054477A2 (ja) * 2019-09-20 2021-03-25 株式会社Aiメディカルサービス 消化器官の内視鏡画像による疾患の診断支援方法、診断支援システム、診断支援プログラム及びこの診断支援プログラムを記憶したコンピュータ読み取り可能な記録媒体
CN111353978B (zh) * 2020-02-26 2023-05-12 合肥凯碧尔高新技术有限公司 一种识别心脏解剖学结构的方法及装置
CN112201335B (zh) * 2020-07-23 2023-05-26 中国人民解放军总医院 一种线阵超声内镜下识别腹腔内结构系统及其方法
CN112052882B (zh) * 2020-08-14 2023-08-22 北京师范大学 磁共振脑结构影像的分类模型构建、分类与可视化方法
CN112766314A (zh) * 2020-12-31 2021-05-07 上海联影智能医疗科技有限公司 解剖结构的识别方法、电子设备及存储介质


Also Published As

Publication number Publication date
CN113344926B (zh) 2021-11-02
CN113344926A (zh) 2021-09-03


Legal Events

NENP: Non-entry into the national phase (Ref country code: DE)
ENP: Entry into the national phase (Ref document number: 2021887872; Country of ref document: EP; Effective date: 20240305)