CN112967375A - Forensic medicine pelvis gender identification method based on deep learning and virtual image technology - Google Patents


Info

Publication number
CN112967375A
CN112967375A
Authority
CN
China
Prior art keywords
pelvis
picture
deep learning
model
sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110193008.4A
Other languages
Chinese (zh)
Inventor
黄平
张吉
邓恺飞
陈忆九
秦志强
张建华
曹永杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Academy Of Forensic Science
Original Assignee
Academy Of Forensic Science
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Academy Of Forensic Science filed Critical Academy Of Forensic Science
Priority to CN202110193008.4A priority Critical patent/CN112967375A/en
Publication of CN112967375A publication Critical patent/CN112967375A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a forensic medicine pelvis gender identification method based on deep learning and virtual image technology, which comprises the following steps: constructing a virtual three-dimensional pelvis reconstruction model from CT DICOM data; selecting target analysis regions on the model and capturing screenshots of them; augmenting the resulting target-region pictures and labeling each picture with the gender of the pelvis sample to obtain a pelvis sample data set; constructing a deep learning model and iteratively training it on the pelvis sample data set to build a gender inference model for each specific pelvic region; acquiring the corresponding pelvis imaging data from a corpse or skeletal-remains sample and capturing screenshots of the same target analysis regions to obtain a pelvis sample data set to be detected; and judging the gender probability of the sample from that data set. The method reduces the bias introduced by human observation, and improves the accuracy of gender inference by combining judgments across multiple pelvic regions.

Description

Forensic medicine pelvis gender identification method based on deep learning and virtual image technology
Technical Field
The invention belongs to the technical field of forensic anthropology inspection, and relates to a forensic pelvis gender identification method based on deep learning and virtual image technology.
Background
In forensic anthropological practice, gender inference is the first step in establishing the identity of human skeletal remains, chiefly because this index is an important basis for determining other biological attributes (such as ancestry, age, and stature). The sexual dimorphism expressed in certain parts of the human skeleton is the practical basis on which forensic anthropologists infer gender, and the pelvis is generally regarded as the most sexually dimorphic bone and therefore the optimal choice for gender inference. At present, morphological observation is the main method of pelvic gender estimation in forensic medicine; because it is simple, convenient, and fast, it is widely applied in practice, but its accuracy often depends on the subjective judgment and personal experience of the observer. In addition, most existing morphological indices of the pelvis were summarized from historical European and American pelvis samples and are affected by environmental and population differences, so they are difficult to apply to the contemporary Han Chinese population; moreover, in some cases pelvic defects, intermediate (neutral) pelvic traits, and similar factors introduce uncertainty into the examination of subsequent criminal cases. Therefore, establishing a data set that better reflects the pelvic morphology of the contemporary Chinese population, and developing a more objective and accurate method on that basis, are currently important research topics in forensic anthropology.
Disclosure of Invention
The invention aims to provide a forensic pelvis gender inference method based on deep learning and virtual image technology that outputs a gender probability for each of several anatomical features of the pelvis. Compared with traditional methods, this method offers high accuracy, strong objectivity, and high repeatability, and can serve as an effective auxiliary means in the examination of human skeletal remains.
The invention provides a forensic medicine pelvis gender identification method based on deep learning and virtual image technology, which comprises the following steps:
S1, performing three-dimensional reconstruction of human pelvis imaging data from CT DICOM data to construct a virtual three-dimensional pelvis reconstruction model; selecting target analysis regions on the model and capturing screenshots of them to obtain target-region pictures; augmenting the target-region pictures; and labeling each picture with the gender of the pelvis sample to obtain a pelvis sample data set;
S2, constructing a deep learning model and iteratively training it on the pelvis sample data set to build a gender inference model for each specific pelvic region;
S3, acquiring a corresponding imaging data set from a corpse or skeletal-remains sample with an imaging instrument, constructing a virtual three-dimensional reconstruction model of the pelvis to be detected, and selecting and capturing screenshots of the same target analysis regions as in S1 to obtain target-region pictures to be detected, thereby obtaining a pelvis sample data set to be detected;
and S4, judging the gender probability of the pelvis sample to be detected from the pelvis sample data set to be detected using the region-specific gender inference models.
Preferably, S1 includes importing the CT scan data into the three-dimensional reconstruction software Mimics; separating the bones of the pelvis sample from the adjacent soft tissue by setting the HU threshold range to 180-2976; then separating the pelvis from the adjacent lumbar vertebrae and femurs; and finally constructing the virtual three-dimensional pelvis reconstruction model.
Preferably, the target analysis region pictures include a pubic ventral picture, a pubic dorsal picture, a greater sciatic notch plane picture, a pelvic inlet plane picture, an ischial ventral picture, and an acetabular picture.
Preferably, the processing of the target analysis region pictures comprises center-cropping each picture and resizing the cropped picture to 255 × 255 pixels to obtain an initial data set; dividing the initial data set into a training set and a verification set at a ratio of 8:2; and augmenting the training set and verification set to obtain the pelvis sample data set.
Preferably, the augmentation of the training set and verification set comprises the following steps:
S5.1, randomly flipping and rotating each initial data picture of the training set and verification set to obtain first transformed data pictures;
S5.2, applying random contrast, brightness, color-balance, and intensity transformations to each data picture to obtain second transformed data pictures;
and S5.3, adding the first and second transformed data pictures to the initial data pictures to construct the pelvis sample data set.
Preferably, the pictures are randomly flipped, and the rotation angle is 90°, 180°, or 270°.
Preferably, the deep learning model comprises an input layer, convolution layers, and an output layer, wherein the convolution layers are composed of 3 Inception modules and 2 Reduction modules;
the input layer is connected to the output layer through the convolution layers;
model parameters of the deep learning model are adjusted with an Adadelta optimizer to obtain target model parameters;
the pelvis sample data set is trained through the deep learning model using the target model parameters;
and the gender inference performance and generalization ability of the deep learning model on the pelvis sample data set are evaluated using accuracy, sensitivity, specificity, and the area under the curve (AUC), yielding the pelvis region-specific gender inference model.
Preferably, the target model parameters include a batch size of 64, a learning rate of 0.01, a learning rate decay factor of 0.8, and a learning rate decay step of 10.
The beneficial effects of the invention are as follows:
The invention uses computed tomography data to construct a virtual three-dimensional pelvis data set, converting the storage form of pelvis samples from physical specimens into electronic data. This avoids drawbacks of physical samples, such as complex handling and the need for dedicated storage space, and the digital platform makes it easier for examiners to analyze the material in depth and to retrieve and re-examine it later. In addition, because the pelvis gender inference result of the method is produced entirely by computer, the bias introduced by human observation is greatly reduced, and combining judgments across multiple pelvic regions can effectively improve the accuracy of gender inference.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all the embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.
As shown in fig. 1, the present invention provides a method for identifying pelvis and sex in forensic medicine based on deep learning and virtual image technology, comprising the following steps:
S1, performing three-dimensional reconstruction of human pelvis imaging data from CT DICOM data to construct a virtual three-dimensional pelvis reconstruction model; selecting target analysis regions on the model and capturing screenshots of them to obtain target-region pictures; augmenting the target-region pictures; and labeling each picture with the gender of the pelvis sample to obtain a pelvis sample data set;
S2, constructing a deep learning model and iteratively training it on the pelvis sample data set to build a gender inference model for each specific pelvic region;
S3, acquiring a corresponding imaging data set from a corpse or skeletal-remains sample with an imaging instrument, constructing a virtual three-dimensional reconstruction model of the pelvis to be detected, and selecting and capturing screenshots of the same target analysis regions as in S1 to obtain target-region pictures to be detected, thereby obtaining a pelvis sample data set to be detected;
and S4, judging the gender probability of the pelvis sample to be detected from the pelvis sample data set to be detected using the region-specific gender inference models.
S1 includes importing the CT scan data into the three-dimensional reconstruction software Mimics and separating the bones of the pelvis sample from the adjacent soft tissue by setting the HU threshold range to 180-2976. Subsequently, the pelvis is separated from the adjacent lumbar vertebrae and femurs, and finally the virtual three-dimensional pelvis reconstruction model is constructed.
The target analysis region pictures comprise a pubic ventral picture, a pubic dorsal picture, a greater sciatic notch plane picture, a pelvic inlet plane picture, an ischial ventral picture, and an acetabular picture.
The processing of the target analysis region pictures comprises center-cropping each picture and resizing the cropped picture to 255 × 255 pixels to obtain an initial data set, dividing the initial data set into a training set and a verification set at a ratio of 8:2, and augmenting the training set and verification set to obtain the pelvis sample data set.
The augmentation of the training set and verification set comprises the following steps:
S5.1, randomly flipping and rotating each initial data picture of the training set and verification set to obtain first transformed data pictures;
S5.2, applying random contrast, brightness, color-balance, and intensity transformations to each data picture to obtain second transformed data pictures;
and S5.3, adding the first and second transformed data pictures to the initial data pictures to construct the pelvis sample data set.
The pictures are randomly flipped, and the rotation angle is 90°, 180°, or 270°.
The deep learning model comprises an input layer, convolution layers, and an output layer, wherein the convolution layers are composed of 3 Inception modules and 2 Reduction modules; the input layer is connected to the output layer through the convolution layers. Model parameters of the deep learning model are adjusted with an Adadelta optimizer to obtain target model parameters; the pelvis sample data set is trained through the deep learning model using the target model parameters; and the gender inference performance and generalization ability of the deep learning model on the pelvis sample data set are evaluated using accuracy, sensitivity, specificity, and the area under the curve (AUC), yielding the pelvis region-specific gender inference model.
The target model parameters include a batch size of 64, a learning rate of 0.01, a learning rate decay factor of 0.8, and a learning rate decay step of 10.
The technical process of the present application is explained in detail as follows:
the forensic medicine pelvis gender identification method based on the deep learning and virtual image technology comprises the following steps:
(1) Training data processing: perform three-dimensional reconstruction of a pelvis sample from DICOM data obtained by computed tomography (CT) to construct a virtual three-dimensional pelvis reconstruction model; locate the anatomical regions to be analyzed in the reconstruction model and capture screenshots of them; the resulting region-specific picture data can then be preprocessed and appropriately augmented to construct an initial sample data set for training the deep learning models;
(2) Deep learning model training: split the constructed initial sample data set proportionally and import it into the deep learning model for the corresponding pelvic region for iterative training, finally obtaining a gender inference model for each specific pelvic region;
(3) Prediction data processing: perform a plain CT scan of the physical pelvis sample obtained in an actual case to obtain DICOM data, then apply the method of step (1) to obtain pictures of the corresponding pelvic regions, forming the region-specific picture data to be predicted;
(4) Gender inference: input the region-specific picture data to be predicted into the corresponding trained deep learning model, which finally outputs the probability of each gender.
The training data processing of the step (1) comprises the following steps:
First, regarding the virtual three-dimensional reconstruction of the pelvis: the human CT scan data are imported into the three-dimensional reconstruction software Mimics; the bones are separated from the adjacent soft tissue using an HU threshold range of 180-2976; and the pelvis is then separated from the adjacent lumbar vertebrae and femurs using the software's "region growing" and "edit mask" tools, completing the construction of the virtual three-dimensional pelvis reconstruction model;
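Outside Mimics, the HU-threshold bone separation described above can be sketched as a simple intensity mask over the CT volume. This is an illustrative sketch assuming the DICOM slices have already been converted to a NumPy array of HU values; it is not the Mimics workflow itself.

```python
import numpy as np

def bone_mask(hu_volume: np.ndarray, lo: int = 180, hi: int = 2976) -> np.ndarray:
    """Return a binary mask of voxels whose HU value lies in the bone range [lo, hi]."""
    return (hu_volume >= lo) & (hu_volume <= hi)

# Toy single-slice "volume": soft tissue (~40 HU), cortical bone (~1500 HU),
# and a bright artifact (~3100 HU) that falls outside the threshold range.
volume = np.array([[[40, 1500, 3100]]])
mask = bone_mask(volume)  # only the 1500 HU voxel is kept as bone
```

Splitting the pelvis from the lumbar vertebrae and femurs on such a mask would then require a connected-component or region-growing tool (for example `scipy.ndimage.label`), which the Mimics tools encapsulate.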
Second, regarding the capture of pictures of the specific pelvic regions: the preselected specific regions are located in the virtual three-dimensional pelvis reconstruction model and screenshots are captured, yielding two-dimensional pictures of each region. The preselected specific regions of the pelvis comprise the pubic ventral aspect, the pubic dorsal aspect, the greater sciatic notch plane, the pelvic inlet plane, the ischial ventral aspect, and the acetabulum;
Third, regarding manual picture labeling: each two-dimensional picture of a specific region of the virtual pelvis is labeled with its gender, the labels being male and female;
Fourth, regarding preprocessing of the region pictures: each two-dimensional picture of a specific pelvic region is center-cropped and resized to 255 × 255 pixels to match the preset parameters of the model;
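The center-cropping step can be sketched as plain array slicing; a minimal sketch assuming the screenshot is held as a NumPy array (the subsequent resize to 255 × 255 pixels would use an imaging library such as Pillow and is omitted here):

```python
import numpy as np

def center_crop(img: np.ndarray, size: int) -> np.ndarray:
    """Crop a square of side `size` from the center of an H x W (x C) image array."""
    h, w = img.shape[:2]
    top = (h - size) // 2
    left = (w - size) // 2
    return img[top:top + size, left:left + size]

img = np.arange(6 * 8).reshape(6, 8)   # toy 6x8 "image"
patch = center_crop(img, 4)            # central 4x4 region
```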
Fifth, regarding the construction of the training and verification sets: the initial two-dimensional picture data set obtained from the screenshots is randomly divided into a training set and a verification set at a ratio of 8:2; the training set is used to fit the deep learning model, and the verification set is used to check the training effect of the model;
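The 8:2 random split can be sketched in plain Python; the file names below are hypothetical stand-ins for the screenshot pictures:

```python
import random

def split_dataset(items, train_frac=0.8, seed=42):
    """Shuffle and split items into a training set and a verification set."""
    items = list(items)
    rng = random.Random(seed)   # fixed seed so the split is reproducible
    rng.shuffle(items)
    cut = int(len(items) * train_frac)
    return items[:cut], items[cut:]

samples = [f"pelvis_{i:03d}.png" for i in range(100)]  # hypothetical file names
train, val = split_dataset(samples)  # 80 training pictures, 20 verification pictures
```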
Sixth, regarding sample augmentation: the samples in the constructed training and verification sets are expanded with preset augmentation methods to obtain the training sample set. The methods adopted are: randomly flipping the pictures and rotating them by 90°, 180°, or 270°; and applying random contrast, brightness, color-balance, and intensity transformations to the pictures.
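The geometric part of the augmentation (random flips plus 90°/180°/270° rotations) can be sketched with NumPy; the photometric transforms (contrast, brightness, color balance, intensity) would use an imaging library and are omitted from this sketch:

```python
import random
import numpy as np

def augment_geometric(img: np.ndarray, rng: random.Random) -> np.ndarray:
    """Randomly flip the image, then rotate it by 90, 180, or 270 degrees."""
    if rng.random() < 0.5:
        img = np.fliplr(img)          # random horizontal flip
    k = rng.choice([1, 2, 3])         # number of quarter turns: 90, 180, or 270 degrees
    return np.rot90(img, k)

rng = random.Random(0)
img = np.arange(9).reshape(3, 3)      # toy 3x3 "image"
aug = augment_geometric(img, rng)
```

Because flips and quarter-turn rotations only rearrange pixels, the augmented picture contains exactly the original pixel values.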
The deep learning model training of the step (2) comprises the following steps:
First, regarding model construction: the GoogLeNet Inception V4 architecture is used. It consists of three parts: the first is the picture input layer, which receives the constructed training sample set; the second comprises a convolution module called the stem, three Inception modules, and two Reduction modules; the third is the output layer, which outputs the predicted class of a picture and its probability. The architecture can be pre-trained on the ImageNet data set, which contains 1.28 million pictures in 1,000 classes, to obtain good initial parameters; a transfer learning technique is then used to import the pre-trained V4 architecture with these parameters, improving the training efficiency of the model;
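The defining idea of an Inception module is that several parallel branches process the same input and their outputs are concatenated along the channel axis. The sketch below illustrates only that concatenation pattern with stand-in branch functions; it is not the actual Inception V4 module definition:

```python
import numpy as np

def inception_concat(x, branches):
    """Apply each branch to the same input feature map and concatenate
    the results along the channel axis, as an Inception module does."""
    return np.concatenate([b(x) for b in branches], axis=0)

x = np.ones((8, 16, 16))             # C x H x W feature map
branches = [
    lambda t: t[:4],                 # stand-in for a 1x1-conv branch (4 channels)
    lambda t: t[:6] * 0.5,           # stand-in for a 3x3-conv branch (6 channels)
    lambda t: t[:2] + 1.0,           # stand-in for a pooling branch (2 channels)
]
y = inception_concat(x, branches)    # 12 x 16 x 16 output
```

The output depth is simply the sum of the branch depths; Reduction modules follow the same concatenation pattern but with strided branches that shrink the spatial size.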
Second, regarding model training: accuracy and the cross-entropy loss are used as the evaluation indices of the training process, and the constructed training sample set containing the region-specific pelvis pictures is fed into the input layer of the corresponding deep learning model to begin iterative training;
Third, regarding the model training parameters: an Adadelta optimizer is used, the relevant parameters are preset, and the parameters are adjusted continually to raise the model's accuracy and lower the cross-entropy loss. The training parameters adopted are: batch size, 64; learning rate, 0.01; learning rate decay factor, 0.8; learning rate decay step, 10;
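The stated learning rate (0.01), decay value (0.8), and decay step (10) together suggest a step-decay schedule; a minimal sketch, assuming the rate is multiplied by 0.8 once every 10 steps:

```python
def learning_rate(step, base_lr=0.01, decay=0.8, decay_steps=10):
    """Step-decay schedule: multiply the base rate by `decay` every `decay_steps` steps."""
    return base_lr * decay ** (step // decay_steps)

# The rate stays at 0.01 for steps 0-9, drops to 0.008 for steps 10-19, and so on.
```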
Fourth, regarding model evaluation: the gender inference performance and generalization ability of each deep learning model are evaluated using indices such as accuracy, sensitivity, specificity, and the area under the curve (AUC) on the region-specific pelvis pictures, and the deep learning model with the best predictive performance is finally selected.
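Accuracy, sensitivity, and specificity can be computed directly from confusion-matrix counts; a minimal sketch (AUC is omitted because it requires the ranked prediction scores rather than the counts, and the counts below are made up for illustration):

```python
def classification_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity (true-positive rate), and specificity (true-negative rate)
    from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return accuracy, sensitivity, specificity

# Hypothetical counts for one pelvic region's verification set.
acc, sens, spec = classification_metrics(tp=45, fp=5, tn=40, fn=10)
```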
The prediction sample processing of step (3) comprises the following: regarding acquisition of the prediction sample pictures, a plain CT scan of the physical pelvis sample obtained in an actual case is performed to obtain DICOM data, and the method of step (1) is applied to obtain the region-specific pelvis pictures.
The gender inference of step (4) comprises the following: regarding output of the inference result, the region-specific pictures acquired in step (3) are imported into the corresponding trained gender inference model, which outputs the gender probability for the pelvic region shown in each picture.
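The model's final probability output is typically produced by a softmax over the two class scores; a minimal sketch with hypothetical scores, not taken from the patent:

```python
import math

def softmax(scores):
    """Convert raw class scores into probabilities that sum to 1."""
    m = max(scores)                        # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical output scores for one region picture: [male, female].
p_male, p_female = softmax([2.0, 0.5])
```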
Compared with traditional pelvis gender inference methods, the method has the following beneficial technical effects: the invention uses computed tomography data to construct a virtual three-dimensional pelvis data set, converting the storage form of pelvis samples from physical specimens into electronic data. This avoids drawbacks of physical samples, such as complex handling and the need for dedicated storage space, and the digital platform makes it easier for examiners to analyze the material in depth and to retrieve and re-examine it later. In addition, because the pelvis gender inference result of the method is produced entirely by computer, the bias introduced by human observation is greatly reduced, and combining judgments across multiple pelvic regions can effectively improve the accuracy of gender inference.
Finally, it should be noted that the above embodiments are only specific embodiments of the present invention, used to illustrate its technical solutions rather than to limit them, and the protection scope of the present invention is not limited thereto. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that anyone familiar with the art may still modify the technical solutions described in the foregoing embodiments, or substitute equivalents for some of their technical features, within the technical scope of the present disclosure; such modifications, changes, or substitutions do not depart from the spirit and scope of the technical solutions of the embodiments, and are intended to be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (8)

1. The forensic medicine pelvis gender identification method based on the deep learning and virtual image technology is characterized by comprising the following steps of:
S1, performing three-dimensional reconstruction of human pelvis imaging data from CT DICOM data to construct a virtual three-dimensional pelvis reconstruction model; selecting target analysis regions on the model and capturing screenshots of them to obtain target-region pictures; augmenting the target-region pictures; and labeling each picture with the gender of the pelvis sample to obtain a pelvis sample data set;
S2, constructing a deep learning model and iteratively training it on the pelvis sample data set to build a gender inference model for each specific pelvic region;
S3, acquiring a corresponding imaging data set from a corpse or skeletal-remains sample with an imaging instrument, constructing a virtual three-dimensional reconstruction model of the pelvis to be detected, and selecting and capturing screenshots of the same target analysis regions as in S1 to obtain target-region pictures to be detected, thereby obtaining a pelvis sample data set to be detected;
and S4, judging the gender probability of the pelvis sample to be detected from the pelvis sample data set to be detected using the region-specific gender inference models.
2. The forensic medicine pelvic sex discrimination method based on the deep learning and virtual image technique as claimed in claim 1,
the step S1 includes importing the CT scan data into the three-dimensional reconstruction software Mimics, separating the bones of the pelvis sample from the adjacent soft tissue by setting the HU threshold range to 180-2976, and then separating the pelvis of the pelvis sample from the adjacent lumbar vertebrae and femurs, finally constructing the virtual three-dimensional reconstruction model of the pelvis.
3. The forensic medicine pelvic sex discrimination method based on the deep learning and virtual image technique as claimed in claim 1,
the target analysis region pictures comprise a pubic ventral picture, a pubic dorsal picture, a greater sciatic notch plane picture, a pelvic inlet plane picture, an ischial ventral picture, and an acetabular picture.
4. The forensic medicine pelvic sex discrimination method based on the deep learning and virtual image technique as claimed in claim 1,
the processing of the target analysis region pictures comprises center-cropping each picture and resizing it to 255 × 255 pixels to obtain an initial data set, dividing the initial data set into a training set and a verification set at a ratio of 8:2, and expanding the training set and the verification set to obtain the pelvis sample data set.
5. The forensic medicine pelvic sex discrimination method based on the deep learning and virtual image technology as claimed in claim 4,
the augmenting based on the training set and the validation set comprises the following steps:
s5.1, performing random turning and rotation on the data picture based on the initial data picture of the training set and the verification set to obtain a first transformation data picture;
s5.2, carrying out random contrast, brightness, color difference balance and intensity conversion on the data picture to obtain a second conversion data picture;
and S5.3, supplementing the first conversion data picture and the second conversion data picture into the initial data picture, and constructing the pelvis sample data set.
6. The forensic medical pelvic sex discrimination method based on the deep learning and virtual image technique as claimed in claim 5,
the angle of the random flipping and the rotation is 90 degrees or 180 degrees or 270 degrees.
7. The forensic medicine pelvis gender identification method based on deep learning and virtual image technology as claimed in claim 1, wherein
the deep learning model comprises an input layer, convolution layers and an output layer, the convolution layers consisting of 3 Inception modules and 2 Reduction modules;
the input layer is connected to the output layer through the convolution layers;
adjusting the model parameters of the deep learning model with an Adadelta optimizer to obtain target model parameters;
training the deep learning model on the pelvis sample data set with the target model parameters;
and evaluating the gender-inference performance and generalization ability of the deep learning model on the pelvis sample data set by the accuracy, sensitivity, specificity and area under the receiver operating characteristic (ROC) curve, to obtain the gender inference model for the specific region of the pelvis.
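The evaluation indices of claim 7 can all be derived from a 2 × 2 confusion matrix once one sex is treated as the positive class. A minimal sketch (not from the patent; `binary_metrics` and the counts are illustrative); the area under the ROC curve would additionally require the ranked model scores, e.g. via scikit-learn's `roc_auc_score`:

```python
def binary_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity (recall) and specificity from confusion counts:
    tp/fp/tn/fn = true/false positives and negatives."""
    total = tp + fp + tn + fn
    return {
        "accuracy": (tp + tn) / total,
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }
```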
8. The forensic medicine pelvis gender identification method based on deep learning and virtual image technology as claimed in claim 7, wherein
the target model parameters comprise a batch size of 64, a learning rate of 0.01, a learning-rate decay factor of 0.8 and a learning-rate decay step of 10.
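The decay parameters in claim 8 (factor 0.8 every 10 steps) match a standard step-decay learning-rate schedule. A hypothetical sketch, since the patent does not disclose code:

```python
def stepped_lr(step, base_lr=0.01, decay=0.8, decay_steps=10):
    """Step-decay schedule: multiply the base rate by `decay`
    once per completed block of `decay_steps` training steps."""
    return base_lr * decay ** (step // decay_steps)
```

With these values the rate stays at 0.01 for steps 0-9, drops to 0.008 for steps 10-19, and so on; the Adadelta optimizer of claim 7 would consume this rate at each step.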
CN202110193008.4A 2021-02-20 2021-02-20 Forensic medicine pelvis gender identification method based on deep learning and virtual image technology Pending CN112967375A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110193008.4A CN112967375A (en) 2021-02-20 2021-02-20 Forensic medicine pelvis gender identification method based on deep learning and virtual image technology

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110193008.4A CN112967375A (en) 2021-02-20 2021-02-20 Forensic medicine pelvis gender identification method based on deep learning and virtual image technology

Publications (1)

Publication Number Publication Date
CN112967375A true CN112967375A (en) 2021-06-15

Family

ID=76285266

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110193008.4A Pending CN112967375A (en) 2021-02-20 2021-02-20 Forensic medicine pelvis gender identification method based on deep learning and virtual image technology

Country Status (1)

Country Link
CN (1) CN112967375A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106420030A (en) * 2016-10-14 2017-02-22 重庆医科大学附属第医院 Design method and device of internal fixation system for fracture of quadrilateral body of acetabulum
CN107767376A (en) * 2017-11-02 2018-03-06 西安邮电大学 X-ray film stone age Forecasting Methodology and system based on deep learning
CN109580526A (en) * 2019-01-18 2019-04-05 重庆医科大学 A kind of infrared spectrum analysis identifying human body gender based on histotomy
CN110033445A (en) * 2019-04-10 2019-07-19 司法鉴定科学研究院 Medicolegal examination automatic identification system and recognition methods based on deep learning
CN110232685A (en) * 2019-06-17 2019-09-13 合肥工业大学 Space pelvis parameter auto-testing method based on deep learning
CN111797902A (en) * 2020-06-10 2020-10-20 西安邮电大学 Medical X-ray film magnification measuring system and method based on image data analysis
CN112365438A (en) * 2020-09-03 2021-02-12 杭州电子科技大学 Automatic pelvis parameter measuring method based on target detection neural network
CN112382384A (en) * 2020-11-10 2021-02-19 中国科学院自动化研究所 Training method and diagnosis system for Turner syndrome diagnosis model and related equipment


Non-Patent Citations (5)

Title
Chou Dage: "Bone Identification: Sex Chapter (how to tell male from female by bones)", Baidu Wenku: HTTPS://WENKU.BAIDU.COM/VIEW/1775178029EA81C758F5F61FB7360B4C2F3F2A16.HTML *
Yang Wen et al.: "Skull sex determination combining an improved convolutional neural network and the least-squares method", Acta Anthropologica Sinica *
Wang Shixiong: "Research and implementation of deep-learning-based skull sex and ancestry determination methods", Wanfang Data *
Wang Kunpeng et al.: "Face clustering algorithm based on additive-margin Softmax features", Computer Applications and Software *
Xianxuecheng Gongzhu: "Sex characteristics of the pelvis", Baidu Wenku: HTTPS://WENKU.BAIDU.COM/VIEW/D889E6ADED630B1C59EEB5E6.HTML *

Similar Documents

Publication Publication Date Title
Lakshminarayanan et al. Deep Learning-Based Hookworm Detection in Wireless Capsule Endoscopic Image Using AdaBoost Classifier.
CN110544245B (en) Image processing method, image processing device, computer-readable storage medium and electronic equipment
US20210142477A1 (en) Bone Age Assessment And Height Prediction Model, System Thereof And Prediction Method Thereof
CN111986177A (en) Chest rib fracture detection method based on attention convolution neural network
Tang et al. CNN-based qualitative detection of bone mineral density via diagnostic CT slices for osteoporosis screening
CN115147600A (en) GBM multi-mode MR image segmentation method based on classifier weight converter
Aslam et al. Liver-tumor detection using CNN ResUNet
CN113989407A (en) Training method and system for limb part recognition model in CT image
CN110459303B (en) Medical image abnormity detection device based on depth migration
CN111325754A (en) Automatic lumbar vertebra positioning method based on CT sequence image
CN114708493A (en) Traditional Chinese medicine crack tongue diagnosis portable device and using method
Liu et al. Bone image segmentation
CN113397485A (en) Scoliosis screening method based on deep learning
CN116433620A (en) CT image-based bone mineral density prediction and osteoporosis intelligent screening method and system
CN112967375A (en) Forensic medicine pelvis gender identification method based on deep learning and virtual image technology
CN113469942B (en) CT image lesion detection method
CN115312189A (en) Construction method of breast cancer neoadjuvant chemotherapy curative effect prediction model
Agafonova et al. Meningioma detection in MR images using convolutional neural network and computer vision methods
CN114372985A (en) Diabetic retinopathy focus segmentation method and system adapting to multi-center image
Alwash et al. Detection of COVID-19 based on chest medical imaging and artificial intelligence techniques
Nonthasaen et al. Sex estimation from Thai hand radiographs using convolutional neural networks
CN112907537A (en) Skeleton sex identification method based on deep learning and on-site virtual simulation technology
CN112766332A (en) Medical image detection model training method, medical image detection method and device
CN111951241A (en) Method for measuring and displaying muscle deformation of aquatic animals in movement process
EP3963541A1 (en) Medical image analysis system and method for identification of lesions

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210615