CN111967540B - Maxillofacial fracture identification method and device based on CT database and terminal equipment - Google Patents


Info

Publication number
CN111967540B
Authority
CN
China
Prior art keywords
fracture
data
maxillofacial
training
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011046297.7A
Other languages
Chinese (zh)
Other versions
CN111967540A (en)
Inventor
贺洋
揭璧朦
徐子能
张益
仝雁行
彭歆
丁鹏
白海龙
Current Assignee
Beijing Deepcare Information Technology Co ltd
Peking University School of Stomatology
Original Assignee
Beijing Deepcare Information Technology Co ltd
Peking University School of Stomatology
Priority date
Filing date
Publication date
Application filed by Beijing Deepcare Information Technology Co ltd, Peking University School of Stomatology filed Critical Beijing Deepcare Information Technology Co ltd
Priority to CN202011046297.7A priority Critical patent/CN111967540B/en
Publication of CN111967540A publication Critical patent/CN111967540A/en
Application granted granted Critical
Publication of CN111967540B publication Critical patent/CN111967540B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 ICT specially adapted for the handling or processing of medical images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10081 Computed x-ray tomography [CT]

Abstract

The invention relates to the field of image processing and provides a method for identifying maxillofacial fractures based on a CT (computed tomography) database, comprising the following steps: acquiring CT data samples of maxillofacial fractures of different fracture types; labeling and classifying the CT data samples; training a fracture discrimination model for each fracture type with the processed CT data samples of that type, the trained model determining whether the corresponding fracture type is present by identifying whether an input image block contains the image features of that fracture type; and inputting maxillofacial CT data into the corresponding trained fracture discrimination models to determine whether the input data exhibits the corresponding fracture types. A corresponding CT-database-based maxillofacial fracture identification device and a terminal device are also provided. The embodiments of the invention are suitable for identifying fractures in medical CT images and improve identification efficiency.

Description

Maxillofacial fracture identification method and device based on CT database and terminal equipment
Technical Field
The invention relates to the field of image processing, and in particular to a method for identifying maxillofacial fractures based on a CT database, a corresponding identification device, a terminal device, and a corresponding storage medium.
Background
Maxillofacial fracture is a common type of trauma in traffic accidents, accidental injuries, competitive sports, and similar events. The anatomical structures involved in maxillofacial fracture are complex and varied, so diagnosis based solely on clinical symptoms and physical signs is difficult, and imaging is often needed to assist diagnosis. Compared with conventional X-ray films, three-dimensional maxillofacial CT images represent the position and displacement direction of a fracture more clearly and intuitively, and in recent years CT has been regarded as the "gold standard" for diagnosing maxillofacial fractures. However, a CT image contains a great deal of information, and it is difficult for a clinician to fully and accurately evaluate details such as bone structures, boundaries, and hidden fracture lines by visual inspection alone. The traditional diagnosis and treatment mode therefore struggles with large-scale events and emergency scenarios that demand speed, accuracy, and efficiency.
In recent years, deep learning techniques have gradually been applied in the medical field, with good results in detecting diseases such as cancer, cataract, fracture, and cerebral hemorrhage. The convolutional neural network (CNN) is the leading technology in medical image diagnosis; its high accuracy and stability compensate for the missed diagnoses and misdiagnoses of visual inspection, and its classification accuracy for pulmonary tuberculosis, pulmonary nodule CT images, breast cancer, brain lesions, cataract grading, and other conditions has been shown to reach the level of human experts.
Existing technologies for identifying maxillofacial fractures from CT images all rely on professional radiologists making manual judgments with the help of related software (such as Mimics Research 19.0). The fracture types are complex, including maxillary fracture, zygomatic fracture, mandibular angle and ascending ramus fractures, alveolar fracture, chin fracture, condylar fracture, and coronoid process fracture, so manual diagnosis of fracture is inefficient.
Disclosure of Invention
In view of the above, the present invention is directed to a method, a device, and a terminal device for identifying maxillofacial fractures based on a CT database, so as to at least partially solve the above problems. The invention is based on a convolutional neural network algorithm: it performs deep learning training on a maxillofacial fracture CT data model, verifies the model on a test set, and lets artificial intelligence learn from human experience to assist in diagnosing maxillofacial trauma and fractures, thereby forming an intelligent diagnosis platform and improving the stability and responsiveness of diagnosis and treatment. The invention addresses the efficiency of maxillofacial fracture identification, assisting doctors by automatically detecting fracture positions.
In a first aspect of the present invention, there is provided a method for identifying maxillofacial fractures based on a CT database, the method comprising: acquiring CT data samples of maxillofacial fractures of different fracture types; labeling and classifying the CT data samples; training a fracture discrimination model with the processed CT data samples of each fracture type, the trained fracture discrimination model determining whether the corresponding fracture type is present by identifying whether an input image block contains the image features of that fracture type, and the fracture types being divided according to anatomical regions; decomposing the input maxillofacial CT data into image block sequences of different anatomical regions according to the anatomical structure; and inputting each image block in the image block sequences into the corresponding trained fracture discrimination model to determine whether the input maxillofacial CT data exhibits the corresponding fracture type.
Optionally, decomposing the maxillofacial CT data to be identified into image block sequences of different anatomical regions according to the anatomical structure includes: inputting the CT image sequence of the maxillofacial CT data into a trained bone structure localization model; generating a sequence of detected rectangular boxes; locating the slice range of each anatomical region according to the rectangular boxes of the different anatomical regions; and extracting the image block sequence of each anatomical region layer by layer from the CT image sequence within the located slice range.
Optionally, the bone structure positioning model is a convolutional neural network model, the input of the convolutional neural network model is a CT image sequence of the maxillofacial CT data, and the output is a corresponding rectangular frame detection result.
Optionally, the trained bone structure localization model is obtained through the following steps: performing the labeling processing on the CT data sample to be used as a first training sample; dividing the first training sample into a training set and a validation set; after the parameters of the bone structure positioning model are preset, iterative training is carried out on the bone structure positioning model by adopting the training samples in the training set and a gradient descent algorithm; and determining the optimal parameters of the bone structure positioning model by adopting the training samples in the verification set.
Optionally, before dividing the training samples into the training set and the validation set, the method further includes: mapping the Hounsfield unit values of the CT image sequence to a preset Hounsfield unit value range.
Optionally, the fracture discrimination model is one of EfficientNet-B3, ResNet, DenseNet, and 3D-ResNet.
Optionally, the trained fracture discrimination model is obtained through the following steps: determining a model structure of the fracture discrimination model; decomposing the first training sample into image block sequences of different anatomical regions according to the anatomical structure, wherein the image block sequence of each anatomical region forms a partition training sample; the fracture discrimination model of each different anatomical region is trained by adopting the following steps: dividing the partition training samples corresponding to the anatomical region into a training set and a verification set; after the parameters of the fracture discrimination model are preset, iterative training is carried out on the fracture discrimination model by adopting training samples in the training set and a gradient descent algorithm; and determining the optimal parameters of the fracture discrimination model by adopting the training samples in the verification set.
In a second aspect of the present invention, there is also provided a device for identifying maxillofacial fractures based on a CT database, the device comprising: the sample acquisition module is used for acquiring CT data samples of maxillofacial fractures of different fracture types; the sample processing module is used for performing labeling processing and classification processing on the CT data samples; the characteristic extraction module is used for extracting image characteristics of the fracture type from the processed CT data samples of the same fracture type; and the fracture identification module is used for comparing the image characteristics with input maxillofacial CT data to be identified so as to determine whether the input maxillofacial CT data has fracture types corresponding to the image characteristics.
In a third aspect of the present invention, there is also provided a terminal device comprising a memory, a processor, and a computer program stored in the memory and operable on the processor, wherein the processor, when executing the computer program, implements the steps of the above method for identifying maxillofacial fractures based on a CT database.
In a fourth aspect of the present invention, there is also provided a computer-readable storage medium having stored therein instructions, which when run on a computer, cause the computer to execute the aforementioned recognition method of maxillofacial fractures based on a CT database.
The technical solution provided by the invention achieves the following beneficial effects: the artificial intelligence diagnosis method for maxillofacial fracture based on a spiral CT database uses a convolutional neural network algorithm, performs deep learning training on a maxillofacial fracture CT data model, verifies the model on a test set, and lets artificial intelligence learn from human experience to assist in diagnosing maxillofacial trauma and fractures, forming an intelligent diagnosis platform and improving the stability and responsiveness of diagnosis and treatment. The system overcomes the traditional dependence on professional doctors and specific treatment venues, making immediate diagnosis and treatment possible at accident scenes, such as the Winter Olympic Games, that require rapid response, accurate judgment, and timely on-site treatment.
Additional features and advantages of the invention will be set forth in the detailed description which follows.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate an embodiment of the invention and, together with the description, serve to explain the invention and not to limit the invention. In the drawings:
fig. 1 is a schematic flow chart of a recognition method of maxillofacial fracture based on a CT database according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a recognition device for maxillofacial fracture based on a CT database according to an embodiment of the present invention;
fig. 3 is a schematic diagram of a terminal device according to an embodiment of the present invention.
Detailed Description
In addition, the embodiments of the present invention and the features of the embodiments may be combined with each other without conflict.
The following detailed description of embodiments of the invention refers to the accompanying drawings. It should be understood that the detailed description and specific examples, while indicating the present invention, are given by way of illustration and explanation only, not limitation.
Fig. 1 is a schematic flow chart of a method for identifying maxillofacial fractures based on a CT database according to an embodiment of the present invention. As shown in fig. 1, the method comprises the following steps: acquiring CT data samples of maxillofacial fractures of different fracture types; labeling and classifying the CT data samples; and training a fracture discrimination model with the processed CT data samples of each fracture type, the trained fracture discrimination model determining whether the corresponding fracture type is present by identifying whether an input image block contains the image features of that fracture type.
In this way, based on a big-data sample formed from maxillofacial fracture CT data, the image features of each fracture type are extracted so as to identify whether a fracture exists in maxillofacial CT data and, if so, its type. The embodiments provided by the invention decompose CT data and judge it by type using trained, machine-learned image processing features, gaining the advantages of big-data processing while avoiding the low accuracy caused by direct identification on whole maxillofacial CT data. Preferably, both the classification processing and the comparison processing are implemented by convolutional neural networks.
Specifically, a sample database is established. The collected CT data of patients with left/right coronoid process fracture, left/right ascending ramus fracture, left/right mandibular angle fracture, left/right mandibular body fracture, left/right condylar fracture, chin fracture, alveolar process fracture, maxillary fracture, and left/right zygomatic fracture are labeled. Mask annotations and fracture type annotations are obtained layer by layer, and the DICOM files of all labels and the label table are exported. This step can be further divided into the following three substeps:
1-1: inclusion criteria
Han Chinese adults aged 18 to 80 with a history of maxillofacial trauma fracture within the past half month. The inclusion criteria were: 1) Han Chinese adults aged 18 to 80; 2) a history of maxillofacial trauma fracture within the past half month; 3) no history of serious tumors in the maxillofacial region; 4) no systemic bone metabolism disease; 5) no developmental deformity of the maxillofacial region; 6) no history of radiotherapy or chemotherapy.
The fracture types mainly include: left/right coronoid process fracture, left/right ascending ramus fracture, left/right mandibular angle fracture, left/right mandibular body fracture, left/right condylar fracture, chin fracture, alveolar fracture, maxillary fracture, and left/right zygomatic fracture.
Exclusion criteria were: 1) congenital facial asymmetry such as severe jaw deviation, nasal septum deviation, or microtia; 2) a history of surgery on the hard tissues of the maxillofacial region; 3) old fracture or greenstick fracture; 4) poor general condition such that the subject cannot tolerate referral or sit up; 5) women in months 1 to 3 of pregnancy.
1-2: obtaining annotations
The annotation content mainly comprises layer-by-layer mask annotation and fracture type annotation. The subject's CT data in DICOM format is imported into Mimics Medical 21.0 software for layer-by-layer segmentation labeling, and the segmented zygomaticomaxillary and mandibular masks are exported in DICOM format. CT slices containing a fracture line are observed, marked, and exported. Fracture type annotation divides fractures into 14 types; for each case, a positive result for a fracture type is labeled "1" and a negative result "0", and all labeled DICOM files and the label table are exported.
1-3: and exporting the DICOM format file of the label and the original CT.
A sample library is formed through the above steps, and the samples in it are used to form the image features of the different fracture types. Whether a given fracture type exists is then judged by comparison against the image features of that fracture type.
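The per-case label table described in step 1-2 can be sketched as follows. This is a minimal illustration, not the patent's implementation: the machine-readable type identifiers are assumptions (the patent gives only the clinical names of the 14 types), and the row format is hypothetical.

```python
# Hedged sketch of the 14-type binary label table (step 1-2): each case gets
# a row of 0/1 flags, "1" for every fracture type annotated as positive.
def make_label_row(positive_types, all_types):
    """Return a {fracture_type: 0 or 1} row for one case.

    positive_types: iterable of type identifiers annotated positive for the case.
    all_types: the full ordered list of fracture type identifiers.
    """
    positive = set(positive_types)
    return {t: (1 if t in positive else 0) for t in all_types}

# Illustrative (assumed) identifiers; the patent does not fix these names.
EXAMPLE_TYPES = ["coronoid_left", "coronoid_right", "chin", "maxilla"]
```

A case with only a chin fracture would then be stored as `{"coronoid_left": 0, "coronoid_right": 0, "chin": 1, "maxilla": 0}`.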
In one embodiment provided by the present invention, the fracture types are divided by anatomical region; the input maxillofacial CT data is decomposed into image block sequences of different anatomical regions according to the anatomical structure; and each image block in the image block sequences is input into the corresponding trained fracture discrimination model to determine whether the input maxillofacial CT data exhibits the corresponding fracture type. The maxillofacial CT data is thus decomposed into components over a number of anatomical structures, and the whole maxillofacial region is judged by identifying whether each single component is fractured, which also allows fracture positions to be identified. By decomposing CT data and judging it region by region with trained, machine-learned image processing features, this embodiment achieves high processing speed and low information loss while avoiding the low accuracy caused by judging maxillofacial CT data as a whole. The algorithm of this embodiment mainly comprises two models: a bone structure localization model and a fracture discrimination model. The former finds the specific positions and contours of the maxilla and mandible and extracts the different key parts; the extracted parts are then input into the fracture discrimination model to identify the fracture type.
In an embodiment provided by the present invention, decomposing the maxillofacial CT data to be identified into image block sequences of different anatomical regions according to the anatomical structure includes: inputting the CT image sequence of the maxillofacial CT data into a trained bone structure localization model; generating a sequence of detected rectangular boxes; locating the slice range of each anatomical region according to the rectangular boxes of the different anatomical regions; and extracting the image block sequence of each anatomical region layer by layer from the CT image sequence within the located slice range. The CT image sequence is input into the trained detection model in order, which outputs a sequence of detected rectangular boxes; the slice ranges of 8 anatomical regions are located according to the detected boxes of the different types, and the 8 anatomical regions are then extracted layer by layer from the CT image sequence within the located slice ranges. Since the zygoma, mandibular body, mandibular angle, ascending ramus, condylar process, and coronoid process are bilaterally symmetric, 13 image block sequences are finally extracted.
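The step of locating each region's slice range from the per-slice box detections can be sketched as below. The detection format (a dict of region label to box per CT slice) and the region names are illustrative assumptions; the patent only specifies that rectangular boxes of different types delimit each region's slice range.

```python
# Hedged sketch: derive each anatomical region's slice (layer) range from the
# per-slice rectangular-box detections output by the bone structure
# localization model.
def locate_slice_ranges(detections_per_slice):
    """detections_per_slice: list indexed by slice number; each element is a
    dict mapping a detected region label to its rectangular box (ignored here).
    Returns {region: (first_slice, last_slice)} spanning every slice in which
    that region was detected."""
    ranges = {}
    for z, dets in enumerate(detections_per_slice):
        for region in dets:
            lo, hi = ranges.get(region, (z, z))
            ranges[region] = (min(lo, z), max(hi, z))
    return ranges
```

The image blocks for a region would then be cropped layer by layer from the slices inside its returned range.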
In an embodiment provided by the present invention, the bone structure localization model is a convolutional neural network model whose input is the CT image sequence of the maxillofacial CT data and whose output is the corresponding rectangular box detection result. Specifically, this embodiment may adopt the RetinaNet convolutional neural network structure, where the model's input is a single CT image and the output is the corresponding rectangular box detection result. The RetinaNet detection model may be replaced by other object detection models, such as SSD, YOLO, or Faster R-CNN.
In an embodiment of the present invention, the trained bone structure localization model is obtained by the following steps: performing the labeling processing on the CT data samples to serve as the first training samples; dividing the first training samples into a training set and a validation set; after presetting the parameters of the bone structure localization model, iteratively training it on the training set with a gradient descent algorithm; and determining the optimal parameters of the model on the validation set. Training samples are formed through these steps and then used to train the model: 10% of the training samples, namely 50 samples, are taken as the validation set and the rest as the training set; the model parameters are initialized with parameters pre-trained on the large natural image dataset ImageNet; the model is iteratively trained with the SGD gradient descent algorithm; and the optimal parameters are determined according to the mAP value on the validation set.
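The 10% hold-out split described above can be sketched as follows. Seeding the shuffle is an assumption added for reproducibility; the patent does not specify how the validation samples are chosen.

```python
# Minimal sketch of the training/validation split: shuffle the samples and
# hold out a fraction (10% here, matching the 50-of-500 split described) for
# validation, using the rest for training.
import random

def split_train_val(samples, val_fraction=0.1, seed=0):
    """Return (train, val) lists after a seeded shuffle."""
    rng = random.Random(seed)
    shuffled = list(samples)
    rng.shuffle(shuffled)
    n_val = max(1, int(len(shuffled) * val_fraction))
    return shuffled[n_val:], shuffled[:n_val]
```

With 500 labeled samples this yields 450 training and 50 validation samples, matching the proportions stated above.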
In one embodiment of the present invention, before dividing the training samples into a training set and a validation set, the method further includes mapping the Hounsfield unit values of the CT image sequence to a preset range. In an embodiment of the present invention, this mapping linearly maps the Hounsfield unit values of the CT image to [0, 255] by the equation:
y = (x - x_min) / (x_max - x_min) * 255
where y is the mapped value, x is the original Hounsfield unit value, x_min is the minimum Hounsfield unit value in the CT image, and x_max is the maximum Hounsfield unit value. This embodiment compresses the Hounsfield unit values to 8 bits so that training images are processed consistently.
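The linear mapping above translates directly into code; the rounding to integer 8-bit values is an assumption (the equation itself produces real values in [0, 255]).

```python
# Direct implementation of the linear Hounsfield-unit mapping above,
# compressing CT intensities to 8-bit values in [0, 255].
def map_hounsfield(x, x_min, x_max):
    """Linearly map a Hounsfield unit value x from [x_min, x_max] to [0, 255]."""
    return (x - x_min) / (x_max - x_min) * 255

def to_uint8(slice_values, x_min, x_max):
    """Apply the mapping to a whole slice and round to 8-bit integers."""
    return [int(round(map_hounsfield(v, x_min, x_max))) for v in slice_values]
```

For example, the minimum value in the image maps to 0 and the maximum to 255, regardless of the scanner's raw Hounsfield range.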
In one embodiment of the present invention, the fracture discrimination model is one of EfficientNet-B3, ResNet, DenseNet, and 3D-ResNet. These models all have convolutional neural network structures; the input of each is a single-region image block and the output is the corresponding binary discrimination result.
In an embodiment of the present invention, the trained fracture discrimination model is obtained by the following steps: determining the model structure of the fracture discrimination model; decomposing the first training samples into image block sequences of different anatomical regions according to the anatomical structure, the image block sequence of each anatomical region forming a partition training sample; and training the fracture discrimination model of each anatomical region by: dividing the partition training samples corresponding to the anatomical region into a training set and a validation set; after presetting the parameters of the fracture discrimination model, iteratively training it on the training set with a gradient descent algorithm; and determining the optimal parameters on the validation set. Specifically, the fracture detection step trains the deep learning discrimination model on the extracted region image block sequences and their corresponding labels, and automatically identifies the various fractures. The model training takes 80% of the extracted region image block sequences as the training set and 20% as the validation set; the model parameters are initialized with parameters pre-trained on the large natural image dataset ImageNet; the model is iteratively trained with the Adam gradient descent algorithm; and the optimal parameters are determined according to the F1 value on the validation set.
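The F1 value used for model selection on the validation set is the standard harmonic mean of precision and recall over the binary fracture / no-fracture predictions; a minimal sketch:

```python
# F1 score for binary fracture discrimination: harmonic mean of precision
# (fraction of predicted positives that are true) and recall (fraction of
# true positives that are found).
def f1_score(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

The parameter set with the highest F1 on the validation split would be kept as the trained discrimination model.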
Specific fracture detection results are then generated: the 13 region image block sequences are input in turn into the trained discrimination model, which outputs sequences of binary discrimination results. For a bilaterally symmetric bone structure, the discrimination results can be regarded as left and right binary sequences (0 for no fracture, 1 for fracture); for a structure without left-right symmetry, the results form a single binary sequence; 13 binary sequences are finally obtained. For any binary sequence, the following rule is made according to the continuity of fracture lines and on the premise of ensuring high sensitivity: if the number of consecutive 1 elements in the binary sequence is greater than 1, the identification result for that fracture type is positive and the corresponding positive layer images are output; otherwise it is negative. Further, the test group data were annotated by two maxillofacial surgery residents as the "gold standard" for fracture diagnosis; the machine output on the test group data was compared with this gold standard, and the sensitivity and specificity of fracture diagnosis for each part were calculated.
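The positivity rule above, read as "a run of more than one consecutive 1", can be sketched as:

```python
# Positivity rule for a region's per-slice binary sequence: the region is
# called fracture-positive only if the sequence contains a run of two or more
# consecutive 1s, exploiting the continuity of fracture lines across slices.
def region_is_positive(binary_seq):
    run = 0
    for v in binary_seq:
        run = run + 1 if v == 1 else 0
        if run >= 2:
            return True
    return False
```

An isolated positive slice (e.g. `[0, 1, 0]`) is thus treated as noise rather than a fracture, which is the stated trade-off for keeping sensitivity high without flagging single-slice false alarms.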
In one embodiment provided by the invention, a device for identifying maxillofacial fractures based on a CT database is also provided. Fig. 2 is a schematic structural diagram of the device according to an embodiment of the present invention. As shown in fig. 2, the device comprises: a sample acquisition module for acquiring CT data samples of maxillofacial fractures of different fracture types; a sample processing module for labeling and classifying the CT data samples; a feature extraction module for extracting the image features of a fracture type from the processed CT data samples of that type; and a fracture identification module for comparing the image features with input maxillofacial CT data to be identified, so as to determine whether the input data exhibits the fracture types corresponding to the image features.
For the specific definition of the recognition device for the maxillofacial fracture based on the CT database, reference may be made to the above definition of the recognition method for the maxillofacial fracture based on the CT database, and details are not repeated here. The various modules in the above-described apparatus may be implemented in whole or in part by software, hardware, and combinations thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In an embodiment of the present invention, there is also provided a terminal device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the recognition method for maxillofacial fracture based on the CT database.
Fig. 3 is a schematic diagram of a terminal device according to an embodiment of the present invention, as shown in fig. 3. The terminal device 10 may be a computing device such as a desktop computer, a notebook computer, a palmtop computer, or a cloud server. The terminal device 10 may include, but is not limited to, a processor 100 and a memory 101. Those skilled in the art will appreciate that fig. 3 is merely an example of the terminal device 10 and does not constitute a limitation on it; the terminal device may include more or fewer components than shown, combine certain components, or use different components. For example, the terminal device may also include input/output devices, network access devices, buses, etc.
The processor 100 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 101 may be an internal storage unit of the terminal device 10, such as a hard disk or memory of the terminal device 10. The memory 101 may also be an external storage device of the terminal device 10, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card provided on the terminal device 10. Further, the memory 101 may include both an internal storage unit and an external storage device of the terminal device 10. The memory 101 is used to store the computer program 102 and other programs and data required by the terminal device 10, and may also be used to temporarily store data that has been output or is to be output.
The embodiments of the invention provide a method and an apparatus for recognizing maxillofacial fractures based on a CT database, addressing the problems of complicated identification and low accuracy in fracture recognition from CT data. The embodiments provided by the invention are applied to a medical image processing system.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as Random Access Memory (RAM), and/or non-volatile memory, such as Read-Only Memory (ROM) or flash memory (flash RAM). The memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The above are merely examples of the present application and are not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (8)

1. A recognition method of maxillofacial fracture based on CT database, characterized in that the method comprises:
acquiring CT data samples of maxillofacial fractures of different fracture types;
labeling and classifying the CT data samples;
training a fracture discrimination model by adopting processed CT data samples of the same fracture type, and determining whether the corresponding fracture type exists by the trained fracture discrimination model through identifying whether an input image block comprises an image feature corresponding to a certain fracture type; the fracture types are divided according to anatomical regions;
decomposing the input maxillofacial CT data into image block sequences of different anatomical regions according to an anatomical structure;
inputting each image block in the image block sequence into a corresponding trained fracture discrimination model respectively to determine whether the input maxillofacial CT data has a corresponding fracture type;
the method for decomposing the input maxillofacial CT data into image block sequences of different anatomical regions according to the anatomical structure comprises the following steps:
inputting the CT image sequence of the maxillofacial CT data into a trained bone structure positioning model;
generating a detection rectangular frame sequence;
positioning the fault range of each anatomical region according to the rectangular frames of different anatomical regions;
and respectively extracting the image block sequence of each anatomical region layer by layer from the CT image sequence in the positioned fault range.
2. The method of claim 1, wherein the trained bone structure positioning model is a convolutional neural network model, the input of the convolutional neural network model being a CT image sequence of the maxillofacial CT data and the output being the corresponding rectangular frame detection result.
3. The method according to claim 2, wherein the trained bone structure positioning model is obtained by the following steps:
performing the labeling processing on the CT data sample to be used as a first training sample;
dividing the first training sample into a training set and a validation set;
after presetting the parameters of the bone structure positioning model, iteratively training the bone structure positioning model using the training samples in the training set and a gradient descent algorithm;
and determining the optimal parameters of the bone structure positioning model using the samples in the validation set.
4. The method of claim 3, wherein before the dividing of the first training sample into a training set and a validation set, the method further comprises:
and mapping the Hounsfield unit values of the CT image sequence to a preset Hounsfield unit value range.
5. The method of claim 1, wherein the fracture discrimination model is: one of EfficientNet 3, ResNet, DenseNet, and 3D-ResNet.
6. The method of claim 3, wherein the trained fracture discrimination model is obtained by:
determining a model structure of the fracture discrimination model;
decomposing the first training sample into image block sequences of different anatomical regions according to the anatomical structure, wherein the image block sequence of each anatomical region forms a partition training sample;
the fracture discrimination model of each different anatomical region is trained by adopting the following steps:
dividing the partition training samples corresponding to the anatomical region into a training set and a validation set;
after presetting the parameters of the fracture discrimination model, iteratively training the fracture discrimination model using the training samples in the training set and a gradient descent algorithm;
and determining the optimal parameters of the fracture discrimination model using the samples in the validation set.
7. An apparatus for identifying maxillofacial fractures based on a CT database, the apparatus comprising:
the sample acquisition module is used for acquiring CT data samples of maxillofacial fractures of different fracture types;
the sample processing module is used for performing labeling processing and classification processing on the CT data samples;
the characteristic extraction module is used for training and storing image features of each fracture type from the processed CT data samples of the same fracture type; the fracture identification module is used for comparing the image features with input maxillofacial CT data to be identified so as to determine whether the input maxillofacial CT data has the fracture type corresponding to the image features; the fracture types are divided according to anatomical regions; and
the sequence extraction module is used for decomposing the input maxillofacial CT data into image block sequences of different anatomical regions according to an anatomical structure; inputting each image block in the image block sequence into a corresponding trained fracture discrimination model respectively to determine whether the input maxillofacial CT data has a corresponding fracture type;
the method for decomposing the input maxillofacial CT data into image block sequences of different anatomical regions according to the anatomical structure comprises the following steps:
inputting the CT image sequence of the maxillofacial CT data into a trained bone structure positioning model;
generating a detection rectangular frame sequence;
positioning the fault range of each anatomical region according to the rectangular frames of different anatomical regions;
and respectively extracting the image block sequence of each anatomical region layer by layer from the CT image sequence in the positioned fault range.
8. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the recognition method of maxillofacial fractures based on CT database according to any one of claims 1 to 6.
CN202011046297.7A 2020-09-29 2020-09-29 Maxillofacial fracture identification method and device based on CT database and terminal equipment Active CN111967540B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011046297.7A CN111967540B (en) 2020-09-29 2020-09-29 Maxillofacial fracture identification method and device based on CT database and terminal equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011046297.7A CN111967540B (en) 2020-09-29 2020-09-29 Maxillofacial fracture identification method and device based on CT database and terminal equipment

Publications (2)

Publication Number Publication Date
CN111967540A CN111967540A (en) 2020-11-20
CN111967540B true CN111967540B (en) 2021-06-08

Family

ID=73386810

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011046297.7A Active CN111967540B (en) 2020-09-29 2020-09-29 Maxillofacial fracture identification method and device based on CT database and terminal equipment

Country Status (1)

Country Link
CN (1) CN111967540B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113298828B (en) * 2021-06-11 2023-09-22 上海交通大学医学院附属第九人民医院 Jawbone automatic segmentation method based on convolutional neural network

Citations (4)

Publication number Priority date Publication date Assignee Title
CN108305248A (en) * 2018-01-17 2018-07-20 慧影医疗科技(北京)有限公司 It is a kind of fracture identification model construction method and application
CN108520519A (en) * 2018-04-11 2018-09-11 上海联影医疗科技有限公司 A kind of image processing method, device and computer readable storage medium
CN109119140A (en) * 2018-08-27 2019-01-01 北京大学口腔医学院 It is a kind of for accurately treating the computer assisted navigation method of Old zygomatic fractures
CN110826557A (en) * 2019-10-25 2020-02-21 杭州依图医疗技术有限公司 Method and device for detecting fracture

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
WO2004086972A2 (en) * 2003-03-25 2004-10-14 Imaging Therapeutics, Inc. Methods for the compensation of imaging technique in the processing of radiographic images
CN110310723A (en) * 2018-03-20 2019-10-08 青岛海信医疗设备股份有限公司 Bone image processing method, electronic equipment and storage medium
CN110458799A (en) * 2019-06-24 2019-11-15 上海皓桦科技股份有限公司 Fracture of rib automatic testing method based on rib cage expanded view
CN111667474A (en) * 2020-06-08 2020-09-15 杨天潼 Fracture identification method, apparatus, device and computer readable storage medium

Non-Patent Citations (1)

Title
Wenjun Tan et al., "A Review of Intelligent Image Processing Method of Pulmonary CT Images", 2019 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), 2020-02-06, pp. 1641-1648 *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant