CN208622438U - Multi-modality image auxiliary diagnosis system - Google Patents

Multi-modality image auxiliary diagnosis system

Info

Publication number
CN208622438U
CN208622438U (application CN201822095219.0U)
Authority
CN
China
Prior art keywords
development platform
compute stick
embedded development
modality images
hospital server
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201822095219.0U
Other languages
Chinese (zh)
Inventor
武志芳
王东文
柴锐
李思进
解军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
First Hospital of Shanxi Medical University
Original Assignee
First Hospital of Shanxi Medical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by First Hospital of Shanxi Medical University
Priority to CN201822095219.0U
Application granted
Publication of CN208622438U
Legal status: Expired - Fee Related
Anticipated expiration

Landscapes

  • Measuring And Recording Apparatus For Diagnosis (AREA)
  • Image Analysis (AREA)

Abstract

The utility model discloses a multi-modality image auxiliary diagnosis system, comprising a hospital server, an embedded development platform and a neural compute stick. The neural compute stick is connected to the embedded development platform; the embedded development platform is connected to the hospital server; and the hospital server, the embedded development platform and the neural compute stick are on the same local area network. Patient information and the corresponding multi-modality image data are stored on the hospital server, and a pre-trained deep learning model is deployed on the neural compute stick. The system reduces the burden on both patients and physicians while improving working efficiency.

Description

Multi-modality image auxiliary diagnosis system
Technical field
The utility model relates to the field of medical diagnostic technology, and in particular to a multi-modality image auxiliary diagnosis system.
Background art
In clinical medical diagnosis, the target regions and ROIs in a patient's image data generally have to be selected and segmented manually by experts, which is a very time-consuming and labor-intensive task. Today's hospitals have diverse examination tools and means and can acquire several kinds of image data for one patient, such as CT, MR, PET and SPECT. However, these multi-modality images are often not acquired and collected by the same department or on the same diagnostic device, so the patient frequently has to print the results of several imaging examinations and hand them to the diagnosing physician for the final decision. This adds to the burden of both patients and physicians, and working efficiency is low.
Therefore, how to reduce the burden on patients and physicians while improving working efficiency is a problem that those skilled in the art urgently need to solve.
Utility model content
In view of this, the utility model provides a multi-modality image auxiliary diagnosis system that can reduce the burden on both patients and physicians while improving working efficiency.
To achieve the above object, the utility model adopts the following technical solution:
A multi-modality image auxiliary diagnosis system, comprising: a hospital server, an embedded development platform and a neural compute stick;
the neural compute stick is connected to the embedded development platform; the embedded development platform is connected to the hospital server; and the hospital server, the embedded development platform and the neural compute stick are on the same local area network;
wherein patient information and the corresponding multi-modality image data are stored on the hospital server, and a pre-trained deep learning network model is deployed on the neural compute stick.
Preferably, the embedded development platform comprises a Raspberry Pi microcomputer.
Preferably, the neural compute stick comprises an Intel Neural Compute Stick.
Preferably, the neural compute stick is connected to the embedded development platform through a USB interface.
It can be seen from the above technical solution that, compared with the prior art, the utility model provides a multi-modality image auxiliary diagnosis system that uses a neural compute stick to build a low-cost auxiliary diagnosis platform suitable for clinical diagnosis with multi-modality medical images. The platform reduces the hospital's dependence on high-performance GPU servers, collects a patient's image data that is distributed across different departments, times and modalities, and uses the neural compute stick to perform auxiliary diagnosis with automatic registration, segmentation and fusion of images. It is low in cost, performs better than existing computers, greatly reduces the workload of physicians, improves working efficiency, and reduces the burden on patients.
Brief description of the drawings
In order to explain the embodiments of the utility model or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the utility model, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a structural schematic diagram of the multi-modality image auxiliary diagnosis system provided by the utility model;
Fig. 2 is a schematic diagram of a SPECT two-dimensional dynamic image provided by the utility model;
Fig. 3 is a schematic diagram of a tomographic CT image provided by the utility model;
Fig. 4 is a schematic diagram of an enhanced CT image provided by the utility model;
Fig. 5 is a schematic diagram of enhanced CT image segmentation provided by the utility model;
Fig. 6 is a schematic diagram of multi-modality image fusion provided by the utility model;
Fig. 7 is a schematic diagram of multi-modality image target-area segmentation provided by the utility model;
Fig. 8 is a schematic diagram of the ROI delineation region of the kidney provided by the utility model;
Fig. 9 is a first schematic diagram of the segmentation effect provided by the utility model;
Fig. 10 is a second schematic diagram of the segmentation effect provided by the utility model.
Specific embodiment
The technical solutions in the embodiments of the utility model will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the utility model. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the utility model without creative effort fall within the protection scope of the utility model.
Referring to Fig. 1, an embodiment of the utility model discloses a multi-modality image auxiliary diagnosis system, comprising: a hospital server, an embedded development platform and a neural compute stick;
the neural compute stick is connected to the embedded development platform; the embedded development platform is connected to the hospital server; and the hospital server, the embedded development platform and the neural compute stick are on the same local area network;
wherein patient information and the corresponding multi-modality image data are stored on the hospital server, and a pre-trained deep learning model is deployed on the neural compute stick.
The neural compute stick, a vision processing unit (VPU) released in 2017, is distinguished by delivering more than 100 billion floating-point operations per second at about 1 watt of power. Taking advantage of this, the inventors connect it to an embedded development platform to build a low-cost auxiliary diagnosis platform suitable for clinical diagnosis with multi-modality medical images, so that ordinary low-power devices gain the ability to run deep neural networks (DNNs) in real time and various artificial intelligence applications can be deployed offline.
With the multi-modality image auxiliary diagnosis system provided by the utility model, the multi-modality images of a patient can be obtained quickly and then fused, segmented and recognized on the neural compute stick. Compared with a traditional computer, this improves the efficiency of clinical diagnosis and relieves the working intensity of physicians and the burden on patients.
To further optimize the above technical solution, the embedded development platform comprises a Raspberry Pi microcomputer, specifically a Raspberry Pi 3B+ microcomputer.
To further optimize the above technical solution, the neural compute stick comprises an Intel Neural Compute Stick.
To further optimize the above technical solution, the neural compute stick is connected to the embedded development platform through USB, specifically USB 3.0.
The Raspberry Pi microcomputer combined with the neural compute stick reduces the hospital's dependence on high-performance GPU servers and greatly lowers the cost of applying artificial intelligence to auxiliary diagnosis. Moreover, the scheme can flexibly apply deep learning networks to the different diagnostic needs of different departments and thereby realize efficient, batch auxiliary diagnosis. The floating-point performance of the neural compute stick reaches 100 GFLOPs, better than existing mainstream PC equipment, while maintaining low power consumption.
A web server is built on the embedded development platform to realize data interaction with the hospital server and to collect multi-modality data into a MySQL database on the embedded development platform in a specific consulting room. The advantages are as follows: the web server built on the embedded development platform exchanges data with the hospital server, so the image data of a patient distributed across different departments, times and modalities can be gathered in one place, greatly reducing the workload of physicians; and the multi-modality image data is stored with a combination of Blob storage and path storage, which improves data retrieval efficiency.
Using the mass data in the hospital server's database, the deep learning network model is trained in advance on a high-performance GPU server and then deployed on the low-power neural compute stick, which reduces the time and energy spent on diagnosing large numbers of patients every day, reduces the repetitive work of physicians, and provides credible diagnostic suggestions for them.
Auxiliary diagnosis with the neural compute stick achieves automatic registration, segmentation and fusion of images, with results better than those of existing computers.
With the above technical solution, the utility model can quickly obtain the multiple-modality images of a patient from the hospital LAN and fuse, segment and recognize them on the neural compute stick, improving the efficiency and accuracy of clinical diagnosis and relieving the working intensity of specialist physicians.
In addition, an embodiment of the utility model also discloses a construction method of the above multi-modality image auxiliary diagnosis system, which specifically comprises:
S1: install the Ubuntu operating system (specifically Ubuntu 16.04) on the embedded development platform and connect the neural compute stick to the embedded development platform, specifically through a USB 3.0 interface;
install the SDK and Caffe for the neural compute stick;
S2: build the server framework and database on the embedded development platform;
S3: build the network environment connecting to the hospital server;
S4: configure secure access with an IP address restriction so that only IP addresses inside the hospital are allowed to register and access the system;
S5: train the deep learning network model on a GPU server, compile and tune it, and compile it into a dedicated model file that the neural compute stick can run; then call the SDK API on the neural compute stick to run the deep learning network model.
The hardware resources provided by the Raspberry Pi 3B+ are used to build a lightweight, high-performance server framework and database, namely LAMP (Linux + Apache + MySQL + PHP).
To further optimize the above technical solution, when the embedded development platform is a Raspberry Pi microcomputer, in step S1 the Ubuntu operating system is installed on the Raspberry Pi microcomputer, the neural compute stick is connected to the Raspberry Pi microcomputer through USB 3.0, and the SDK and Caffe are installed for the neural compute stick.
To further optimize the above technical solution, step S2 specifically comprises:
installing and configuring Apache and setting the default web page directory; installing and configuring the MySQL database; installing and configuring PHP; installing and configuring phpMyAdmin;
setting the storage mode of the multi-modality image data in MySQL, wherein the hospital's patient number and the patient name are used together as the primary key of the multi-modality image data, so that records of patients with the same name are not confused.
To further optimize the above technical solution, the image data is stored in two modes, as illustrated in the sketch after this list:
Mode 1: diagnosis report images are stored as a binary type (Blob); these are generally small data items of no more than 64 KB.
Mode 2: for sequence medical images, which are generally larger files in DICOM format, the path where the image is saved is stored in the database.
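As an illustration only (not part of the utility model), the following minimal Python sketch creates such a combined Blob/path table with pymysql. The table and column names, the placeholder credentials and the extra modality column in the key are assumptions made for the example; the patent fixes only the hospital patient number and the patient name as the key.

```python
# Minimal sketch of the combined Blob/path storage scheme on the embedded
# platform's MySQL database. Schema and names are illustrative assumptions.
import pymysql

conn = pymysql.connect(host="localhost", user="diag", password="***",
                       database="multimodal", charset="utf8mb4")

DDL = """
CREATE TABLE IF NOT EXISTS patient_images (
    hospital_no   VARCHAR(32)  NOT NULL,   -- hospital patient number (patent's key)
    patient_name  VARCHAR(64)  NOT NULL,   -- patient name (patent's key)
    modality      VARCHAR(16)  NOT NULL,   -- e.g. CT, MR, PET, SPECT (added for the example)
    report_image  BLOB,                    -- mode 1: small diagnosis report image (BLOB max 64 KB)
    dicom_path    VARCHAR(255),            -- mode 2: path of the DICOM series on disk
    PRIMARY KEY (hospital_no, patient_name, modality)
)
"""

with conn.cursor() as cur:
    cur.execute(DDL)
    # Mode 1: store a small report image directly as a Blob.
    with open("report_0001.jpg", "rb") as f:
        cur.execute(
            "REPLACE INTO patient_images (hospital_no, patient_name, modality, report_image) "
            "VALUES (%s, %s, %s, %s)",
            ("H20181213", "ZHANG San", "SPECT", f.read()))
    # Mode 2: store only the path for a large DICOM series.
    cur.execute(
        "REPLACE INTO patient_images (hospital_no, patient_name, modality, dicom_path) "
        "VALUES (%s, %s, %s, %s)",
        ("H20181213", "ZHANG San", "CT", "/data/dicom/H20181213/CT_series_01/"))
conn.commit()
```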
The neural compute stick serves as the auxiliary diagnosis unit for multi-modality medical images. Its implementation comprises the following steps (taking the segmentation and auxiliary diagnosis of abdominal multi-modality images as an example, where the multi-modality images are a SPECT two-dimensional dynamic image as shown in Fig. 2, a tomographic CT image as shown in Fig. 3, and an enhanced CT image as shown in Fig. 4):
Step 1: train the deep network on a GPU server with strong computing power. The network trained for the requirements of clinical auxiliary diagnosis can achieve fast multi-modality image registration, fusion and target-area segmentation, as shown in Fig. 5 to Fig. 7. Specifically, this can be realized by steps 1.1 to 1.6:
Step 1.1: use annotated enhanced CT images to train a deep learning network for abdominal enhanced CT image segmentation;
Step 1.2: use the trained deep learning network to automatically segment the target tissue in the enhanced CT images, obtaining the segmentation of the kidney and tumor in the enhanced CT, as shown in Fig. 5;
Step 1.3: extract and register the feature points of the tomographic CT image and the enhanced CT image to obtain the feature mapping relationship; the registration result is shown in Fig. 6;
Step 1.4: according to the feature mapping relationship obtained by registration, map the segmentation result of the kidney and tumor in the enhanced CT from step 1.2 into the tomographic CT image, as shown in Fig. 7. Repeat this step until the segmentation results of all enhanced CT kidney and tumor slices have been mapped to the tomographic CT images, and finally obtain a 3D model of the kidney (normal tissue and tumor tissue) by three-dimensional reconstruction;
Step 1.5: project the 3D model of the kidney (normal tissue and tumor tissue) obtained in step 1.4 onto the two-dimensional SPECT image as the ROI delineation of the kidney. This reduces the error of manual ROI delineation and allows the physician's delineation of the background region to be omitted, as shown in Fig. 8;
Step 1.6: using the ROI delineation of the SPECT kidney from step 1.5, count the pixels inside the region and substitute the counts into the Gates' method formula to calculate the GFR value.
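For reference, a commonly cited form of the Gates' method equations used in step 1.6 is sketched below in LaTeX. The patent does not reproduce the formula, so the attenuation coefficient and regression constants shown here (from Gates' original Tc-99m work) are an illustrative assumption, not necessarily the exact expression used by the system.

```latex
% Background- and depth-corrected fractional renal uptake U (percent):
%   R, L      - counts in the right/left kidney ROI
%   B_R, B_L  - corresponding background counts
%   x_R, x_L  - kidney depths (cm); mu ~ 0.153 cm^-1 for Tc-99m
%   D         - net injected-dose counts
\[
  U\,(\%) = \frac{\dfrac{R - B_R}{e^{-\mu x_R}} + \dfrac{L - B_L}{e^{-\mu x_L}}}{D} \times 100
\]
% Linear regression converting uptake to GFR (mL/min):
\[
  \mathrm{GFR} = 9.8127\,U - 6.82525
\]
```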
The above technical solution is further optimized to improve the segmentation effect of step 1.2. Specifically, this can be realized by steps 2.1 to 2.8:
Step 2.1: filter annotated clinical images from the hospital server as the training set and save them in JPG format according to the patient number and/or CT sequence number;
Step 2.2: parse the JPG images from step 2.1 with the image library built into TensorFlow. After parsing, the images are uint8 grayscale images of size 512 × 512 × 1 that can be fed directly into the network for computation;
Step 2.3: select samples randomly during training, without image augmentation or translation. Use the medical imaging network U-Net as the baseline and perform residual learning on top of it;
Step 2.4: U-Net has a symmetric network structure; in layers 6-10 it concatenates feature maps of the same size with those of layers 1-5, which ensures feature reuse and prevents gradient dispersion. However, each block is a simple convolution + activation + dropout design, like the VGG structure, so the number of network parameters is very large and learning is slow; the resulting effect is shown in Fig. 5;
Step 2.5: replace the convolution modules in U-Net with dense modules to obtain an improved fully convolutional network. The dense modules effectively mitigate the vanishing-gradient problem, strengthen feature propagation and support feature reuse, so the number of parameters can be greatly reduced;
Step 2.6: U-Net up-samples the feature maps with plain up-sampling, whereas the improved fully convolutional network up-samples them with transposed convolution, which amounts to re-learning the features rather than simply scaling them;
Step 2.7: training uses 256 × 256 CT images and 256 × 256 manually labelled regions (masks). Since the original images are 512 × 512, part of the information is lost after scaling to 256 × 256. A new Zoom module (7 × 7 convolution + max pooling + 3 × 3 convolution + max pooling) is therefore introduced in front of the improved fully convolutional network: the original image is reduced to a 128 × 128 feature map, which is then connected to the original fully convolutional network. The features this module learns from the original image perform better than those obtained by direct scaling;
Step 2.8: with the Zoom module, the network takes a 512 × 512 CT image as input and outputs a 128 × 128 mask, which is then interpolated (4× magnification) back to 512 × 512. The final effect is shown in Fig. 9 and Fig. 10.
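To make steps 2.7-2.8 concrete, the sketch below shows the Zoom front-end (7 × 7 convolution + max pooling + 3 × 3 convolution + max pooling, 512 × 512 → 128 × 128) in TensorFlow/Keras, followed by a deliberately simplified stand-in for the improved fully convolutional network (one skip connection instead of the dense modules) and the 4× interpolation back to 512 × 512. All filter counts are illustrative assumptions, not the parameters of the patent's network.

```python
# Zoom front-end + simplified fully convolutional segmentation network.
import tensorflow as tf
from tensorflow.keras import layers, Model

inputs = layers.Input(shape=(512, 512, 1))          # one grayscale CT slice

# Zoom module: learn the down-scaling instead of resizing the image directly.
x = layers.Conv2D(32, 7, padding="same", activation="relu")(inputs)
x = layers.MaxPooling2D(2)(x)                        # 512 -> 256
x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
x = layers.MaxPooling2D(2)(x)                        # 256 -> 128

# Placeholder for the improved fully convolutional network (dense modules in
# the patent); here a single encoder/decoder pair with one skip connection.
skip = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
d = layers.MaxPooling2D(2)(skip)                     # 128 -> 64
d = layers.Conv2D(128, 3, padding="same", activation="relu")(d)
u = layers.Conv2DTranspose(64, 3, strides=2, padding="same",
                           activation="relu")(d)     # learned upsampling, 64 -> 128
u = layers.Concatenate()([u, skip])
mask128 = layers.Conv2D(1, 1, activation="sigmoid")(u)   # 128 x 128 mask

# 4x interpolation back to the original 512 x 512 resolution (step 2.8).
mask512 = layers.UpSampling2D(size=4, interpolation="bilinear")(mask128)

model = Model(inputs, mask512)
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()
```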
Step 2: compile and tune the deep learning network model;
Step 3: compile the model into a dedicated model file that the neural compute stick can run, i.e. a graph file;
Step 4: call the SDK API to run the deep learning network model on the neural compute stick:
Step 4.1: enumerate the devices and open the neural compute stick;
Step 4.2: read and load the graph file; after loading, input the image data to be diagnosed with graph.LoadTensor and obtain the diagnostic result with graph.GetResult once inference completes.
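The graph.LoadTensor and graph.GetResult calls named in steps 4.1-4.2 match the first-generation Movidius NCSDK Python API (mvnc). Under that assumption, a minimal inference pass might look like the sketch below; the graph file name, input shape and FP16 preprocessing are illustrative, and the mvNCCompile command in the comment is only one possible way to produce the graph file from a Caffe model.

```python
# Sketch of running the compiled graph file on the neural compute stick with
# the NCSDK v1 Python API. The model would first be compiled on the development
# machine, e.g.:  mvNCCompile deploy.prototxt -w weights.caffemodel -o graph
import numpy as np
from mvnc import mvncapi as mvnc

# Step 4.1: enumerate devices and open the neural compute stick.
devices = mvnc.EnumerateDevices()
if not devices:
    raise RuntimeError("No neural compute stick found")
device = mvnc.Device(devices[0])
device.OpenDevice()

# Step 4.2: read and load the compiled graph file.
with open("graph", "rb") as f:
    graph_buffer = f.read()
graph = device.AllocateGraph(graph_buffer)

# Feed one pre-processed slice (NCSDK v1 expects FP16 input; 512x512x1 assumed).
ct_slice = np.zeros((512, 512, 1), dtype=np.float16)   # placeholder input
graph.LoadTensor(ct_slice, "ct slice")
mask, user_obj = graph.GetResult()                      # diagnostic result, e.g. a segmentation mask

# Clean up.
graph.DeallocateGraph()
device.CloseDevice()
```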
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and the same or similar parts of the embodiments can be referred to one another. Since the device disclosed in an embodiment corresponds to the method disclosed in that embodiment, its description is relatively simple, and the relevant parts can refer to the description of the method.
The above description of the disclosed embodiments enables those skilled in the art to implement or use the utility model. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the utility model. Therefore, the utility model is not limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (4)

1. A multi-modality image auxiliary diagnosis system, characterized by comprising: a hospital server, an embedded development platform and a neural compute stick;
the neural compute stick is connected to the embedded development platform; the embedded development platform is connected to the hospital server; and the hospital server, the embedded development platform and the neural compute stick are on the same local area network;
wherein patient information and the corresponding multi-modality image data are stored on the hospital server, and a pre-trained deep learning network model is deployed on the neural compute stick.
2. The multi-modality image auxiliary diagnosis system according to claim 1, characterized in that the embedded development platform comprises a Raspberry Pi microcomputer.
3. The multi-modality image auxiliary diagnosis system according to claim 2, characterized in that the neural compute stick comprises an Intel Neural Compute Stick.
4. The multi-modality image auxiliary diagnosis system according to claim 1, characterized in that the neural compute stick is connected to the embedded development platform through a USB interface.
CN201822095219.0U 2018-12-13 2018-12-13 Multi-modality image auxiliary diagnosis system Expired - Fee Related CN208622438U (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201822095219.0U CN208622438U (en) 2018-12-13 2018-12-13 Multi-modality image auxiliary diagnosis system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201822095219.0U CN208622438U (en) 2018-12-13 2018-12-13 Multi-modality image auxiliary diagnosis system

Publications (1)

Publication Number Publication Date
CN208622438U true CN208622438U (en) 2019-03-19

Family

ID=65717647

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201822095219.0U Expired - Fee Related CN208622438U (en) 2018-12-13 2018-12-13 Multi-modality image auxiliary diagnosis system

Country Status (1)

Country Link
CN (1) CN208622438U (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110488712A (en) * 2019-08-30 2019-11-22 上海有个机器人有限公司 A kind of dispensing machine people human-computer interaction embedded main board



Legal Events

Date Code Title Description
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20190319

Termination date: 20191213