CN108197539A - A skull sex identification method - Google Patents
A skull sex identification method
- Publication number
- CN108197539A CN108197539A CN201711397107.4A CN201711397107A CN108197539A CN 108197539 A CN108197539 A CN 108197539A CN 201711397107 A CN201711397107 A CN 201711397107A CN 108197539 A CN108197539 A CN 108197539A
- Authority
- CN
- China
- Prior art keywords
- skull
- sample
- identified
- convolution
- skull sample
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 238000000034 method Methods 0.000 title claims abstract description 49
- 241001316086 Novocrania Species 0.000 title abstract 4
- 238000003745 diagnosis Methods 0.000 title abstract 4
- 210000003625 skull Anatomy 0.000 claims abstract description 117
- 238000013527 convolutional neural network Methods 0.000 claims abstract description 15
- 238000012549 training Methods 0.000 claims abstract description 12
- 238000010606 normalization Methods 0.000 claims abstract description 9
- 239000013598 vector Substances 0.000 claims description 15
- 230000020509 sex determination Effects 0.000 claims description 11
- 230000001419 dependent effect Effects 0.000 claims description 2
- 238000005259 measurement Methods 0.000 abstract description 13
- 230000008859 change Effects 0.000 abstract description 3
- 238000011160 research Methods 0.000 description 6
- 230000000877 morphologic effect Effects 0.000 description 5
- 210000003128 head Anatomy 0.000 description 4
- 210000004197 pelvis Anatomy 0.000 description 4
- 230000036544 posture Effects 0.000 description 4
- 238000005070 sampling Methods 0.000 description 4
- 210000000216 zygoma Anatomy 0.000 description 4
- 238000000691 measurement method Methods 0.000 description 3
- 230000008569 process Effects 0.000 description 3
- 230000009286 beneficial effect Effects 0.000 description 2
- 230000008901 benefit Effects 0.000 description 2
- 238000004364 calculation method Methods 0.000 description 2
- 238000011840 criminal investigation Methods 0.000 description 2
- 210000004709 eyebrow Anatomy 0.000 description 2
- 210000001061 forehead Anatomy 0.000 description 2
- 210000001595 mastoid Anatomy 0.000 description 2
- 238000012360 testing method Methods 0.000 description 2
- 238000005299 abrasion Methods 0.000 description 1
- 230000004913 activation Effects 0.000 description 1
- 238000013528 artificial neural network Methods 0.000 description 1
- 210000000988 bone and bone Anatomy 0.000 description 1
- 238000002591 computed tomography Methods 0.000 description 1
- 230000007547 defect Effects 0.000 description 1
- 238000010586 diagram Methods 0.000 description 1
- 210000005069 ears Anatomy 0.000 description 1
- 230000007613 environmental effect Effects 0.000 description 1
- 238000000605 extraction Methods 0.000 description 1
- 230000001815 facial effect Effects 0.000 description 1
- 230000006872 improvement Effects 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 238000003909 pattern recognition Methods 0.000 description 1
- 239000007787 solid Substances 0.000 description 1
- 230000005477 standard model Effects 0.000 description 1
- 238000007619 statistical method Methods 0.000 description 1
- 210000001519 tissue Anatomy 0.000 description 1
- 230000009466 transformation Effects 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/285—Selection of pattern recognition techniques, e.g. of classifiers in a multi-classifier system
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/048—Activation functions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10081—Computed x-ray tomography [CT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30016—Brain
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Engineering & Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Computation (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Bioinformatics & Computational Biology (AREA)
- Mathematical Physics (AREA)
- Human Computer Interaction (AREA)
- Biophysics (AREA)
- Molecular Biology (AREA)
- Computational Linguistics (AREA)
- Biomedical Technology (AREA)
- Computing Systems (AREA)
- Software Systems (AREA)
- Multimedia (AREA)
- Medical Informatics (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Radiology & Medical Imaging (AREA)
- Quality & Reliability (AREA)
- Measuring And Recording Apparatus For Diagnosis (AREA)
Abstract
The invention discloses a skull-based sex identification method. The method comprises: transforming the skull data of the training samples into a unified Frankfurt coordinate system and performing scale normalization; rotating each skull sample of the normalized training set around the Z axis to obtain multiple images of the sample at different angles; extracting global skull features with an improved convolutional neural network and computing, for every image of each sample, the probability that it belongs to a male or a female skull; obtaining the optimal parameters with the least-squares method and designing a skull sex determination function; and using the constructed function to determine the sex of an unknown skull. The method is easy to operate and overcomes the drawbacks of existing skull sex identification methods, which require the participation of experts with professional knowledge and cumbersome manual measurement; it also overcomes problems of existing methods such as strong sensitivity to skull size variation and low measurement accuracy.
Description
Technical Field
The invention belongs to the field of pattern recognition, and relates to a method for classifying skull gender by using a convolutional neural network and a least square method. The method is mainly used in the fields of criminal investigation, archaeology, forensic anthropology and the like.
Background
Within the field of biometric identification, sex identification is an important research topic. Sex determination has applications in forensic anthropology, face restoration, unknown-skull recognition, and related fields, and provides an important basis for subsequent research.
When sex is determined from skeletal remains, the pelvis and the skull are the principal objects of study. The pelvis shows the most marked morphological differences between the sexes, but it is easily affected by external environmental factors: it does not preserve well over long periods and is fragile, so its integrity cannot be guaranteed and it often cannot play the key role in sex identification. The skull consists of hard tissue, is not easily damaged, and can be preserved intact after death; it is therefore the second choice for sex determination. Sex identification from the skull has thus become a research hotspot in forensic anthropology, archaeology, criminal investigation, and related fields.
Currently there are two main methods of sex determination: the morphological method and the measurement method. In the morphological method, an anthropologist manually observes differences in characteristics such as the overall contour of the skull, its size, the angles of key skull regions, its thickness, and its degree of wear. For example, the male skull is large, thick, and robust, with a developed brow arch, a rounded superior orbital margin, a sloping forehead, high and strong cheekbones, a thick zygomatic arch, and developed mastoid processes and external occipital protuberance; the female skull is small, smooth, and delicate, with an underdeveloped brow arch, a sharp and thin superior orbital margin, a steep and straight forehead, low and delicate cheekbones, a thin and weak zygomatic arch, and underdeveloped mastoid processes and external occipital protuberance. In the measurement method, measuring points are calibrated on a solid skull model, an X-ray photograph of the skull, or a three-dimensional digital model; measurement indexes are defined according to anthropological research, and a discriminant function is constructed from these indexes by statistical methods. In summary, the morphological method is easily affected by subjective factors and by the expert's professional knowledge, which leads to inconsistent judgments; it depends strongly on subjective experience and lacks sufficient theoretical support. The measurement method is complex to operate, places high demands on the measurement precision of each item, is strongly affected by skull size, and some characteristics with obvious sex differences cannot be measured at all.
Studies have shown that, for most features, the measurement error between different observers exceeds 10%. In addition, skull morphology does not change markedly with age, but skull size does, which increases the difficulty of measurement. With the advent of computer and CT technology, computer-aided measurement has become possible, but the complexity of the skull makes accurate measurement difficult. Moreover, the accuracy of existing skull sex identification methods is not high, generally no more than 90%.
Disclosure of Invention
The invention aims to provide an automatic skull sex identification method that solves the problems of the prior art: heavy reliance on subjective expert experience, cumbersome manual measurement, low measurement precision, and strong sensitivity to skull size.
The skull sex determination function obtaining method provided by the invention comprises the following steps:
step one, converting all skull samples in the training skull sample set into Frankfurt coordinate system A and performing scale normalization to normalize the postures and sizes of the skull samples; the number N of skull samples in the training skull sample set is not less than 50;
step two, rotating each normalized skull sample n around the Z axis, n = 1, 2, 3, ..., N, and capturing an image of the skull sample every α degrees of rotation, wherein 1° ≤ α ≤ 90°, to obtain multiple images of the skull sample at different rotation angles;
step three, extracting the global features of skull sample n, n = 1, 2, 3, ..., N, with an improved convolutional neural network method that comprises, in sequence, input, convolution, down-sampling, convolution, down-sampling, full connection, and output; calculating with a sigmoid function the sex probability of every image of skull sample n to obtain the probability vector p_n of skull sample n, n = 1, 2, 3, ..., N;
step four, solving with the least-squares method for the optimal parameter w* that minimizes the sum of squared residuals over the probability vectors of all skull samples, Σ_{n=1}^{N} (p_n w - q_n)^2, wherein q_n = 0 or 1 and the value of q_n differs according to the sex of skull sample n;
step five, designing a skull sex determination function: D = p_x w* - l, where D is the dependent variable, p_x is the independent variable, and l is 0 or 1.
The invention also provides a skull sex determination method. The provided method comprises the following steps:
step 1, converting the skull to be identified into Frankfurt coordinate system A and performing scale normalization to normalize the posture and size of the skull;
step 2, rotating the skull sample to be identified processed in step 1 around the Z axis and capturing an image of the skull sample every α degrees of rotation, wherein 1° ≤ α ≤ 90°, to obtain multiple images of the skull sample to be identified at different rotation angles;
step 3, extracting the global features of the skull sample to be identified with the improved convolutional neural network method, which comprises, in sequence, input, convolution, down-sampling, convolution, down-sampling, full connection, and output; calculating with a sigmoid function the sex probability of every image of the skull sample to be identified to obtain the probability vector p_x of the skull sample to be identified;
step 4, performing sex determination with equation (7) constructed by the invention:
D = p_x w* - l (7)
assuming that l = 0 represents male and l = 1 represents female: if the value of |D| is less than 0.5, the sex of the skull sample to be identified is male; otherwise it is female;
assuming that l = 0 represents female and l = 1 represents male: if the value of D is greater than 0.5, the sex of the skull sample to be identified is male; otherwise it is female.
The invention has the beneficial effects that:
(1) the method adopted by the invention overcomes the defects that in the prior art, experts with professional knowledge are required to participate and the manual measurement is complicated;
(2) the method adopted by the invention overcomes the problems of large influence of skull size change, low measurement precision and the like in the prior art;
(3) the method adopted by the invention has a high degree of automation, is convenient to operate, and achieves a high accuracy of 94.4% or more.
Drawings
FIG. 1 is a Frankfurt coordinate system;
FIG. 2 is a normalized three-dimensional skull model according to an embodiment;
FIG. 3 is a diagram showing the results of the identification in example 3.
Detailed Description
The skull sample is three-dimensional mesh data. A Frankfurt coordinate system A is established from the facial physiological landmarks of the skull, see FIG. 1; it can be determined by four skull feature points, namely the superior points of the left and right ear canals, the lower margin point of the left orbit, and the glabella point, denoted Lp, Rp, Mp, and Vp respectively. The Frankfurt plane is determined by the three points Lp, Rp, and Mp. Wherein: coordinate origin: the intersection of the line LpRp with the plane that passes through the point Vp and has the direction of LpRp as its normal vector is recorded as the origin O' of the Frankfurt plane; X axis: the plane formed by the three points Lp, Rp, and Mp is the XO'Y plane of the coordinate system, and the direction from the superior point of the left ear canal Lp to the superior point of the right ear canal Rp is the X-axis direction; Z axis: the direction through the origin O' perpendicular to the XO'Y plane, pointing upward, is the Z axis; Y axis: the line perpendicular to both the X axis and the Z axis is the Y axis of the coordinate system, and the positive direction of the Y axis is determined by the right-hand rule. After the coordinate systems are unified, scale normalization is performed: the distance between Lp and Rp of every skull model is set to 1, and each vertex (x, y, z) of the skull is scale-transformed to (x/|Lp - Rp|, y/|Lp - Rp|, z/|Lp - Rp|).
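As an illustration, the frame construction above can be sketched in a few lines of numpy. This is a minimal sketch under one assumption: the normal vector of the origin-defining plane (elided in the text) is taken to be the direction of the line LpRp, so that O' is the projection of Vp onto that line. The function names are illustrative, not from the patent.

```python
import numpy as np

def frankfurt_frame(Lp, Rp, Mp, Vp):
    """Build the Frankfurt frame from the four landmarks described above.

    Lp, Rp: superior points of the left/right ear canals,
    Mp: lower margin point of the left orbit, Vp: glabella point.
    Returns (origin, R, scale) with the X/Y/Z axes as the rows of R.
    """
    Lp, Rp, Mp, Vp = map(np.asarray, (Lp, Rp, Mp, Vp))
    x = Rp - Lp
    scale = np.linalg.norm(x)          # |Lp - Rp| becomes the length unit
    x = x / scale
    # O': intersection of line LpRp with the plane through Vp whose normal
    # is the line direction, i.e. the projection of Vp onto the line LpRp
    # (assumption: the elided normal vector is the LpRp direction).
    origin = Lp + np.dot(Vp - Lp, x) * x
    n = np.cross(Rp - Lp, Mp - Lp)     # normal of the Frankfurt plane
    z = n / np.linalg.norm(n)          # Z axis (orientation sign not fixed here)
    y = np.cross(z, x)                 # Y axis by the right-hand rule
    R = np.vstack([x, y, z])
    return origin, R, scale

def normalize_skull(V, Lp, Rp, Mp, Vp):
    """Map mesh vertices V (N x 3) into the unit-scale Frankfurt frame."""
    origin, R, scale = frankfurt_frame(Lp, Rp, Mp, Vp)
    return (V - origin) @ R.T / scale
```

After this transform the ear points land on the X axis at distance 1 from each other, which is exactly the scale normalization described in the text.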
The improved convolutional neural network of the invention modifies the LeNet-5 convolutional neural network model as follows: 1) the input image of the standard LeNet-5 model is 32 × 32; to preserve the depth semantics and content information of the image, the invention enlarges the input image, e.g. to 256 × 256; 2) the convolution kernel size is set according to the characteristics of the skull dataset, e.g. to 17 × 17; 3) an additional convolutional layer is inserted after each convolutional layer of the standard model, so that less depth information of the image is lost. The specific pipeline of the improved convolutional neural network is input, convolution, down-sampling, convolution, down-sampling, full connection, and output.
The invention uses the sigmoid function to calculate the probability that each image belongs to a male or a female; the calculation formula is as follows:

f(x)_c = p(y = c | x) = sigmoid(W·X + b) (1)

wherein f(x)_c is the probability that a sample is classified as male or female, X is the global feature, W is the weight, and b is the bias; c = 0 or 1 according to the sex.
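A minimal numeric sketch of equation (1); the helper names are illustrative, and W, X, b stand for the learned weight vector, the CNN's global feature, and the bias.

```python
import numpy as np

def sigmoid(t):
    # Logistic function used by equation (1).
    return 1.0 / (1.0 + np.exp(-t))

def image_probability(X, W, b):
    """Probability that one skull image belongs to class c = 1 (male, in
    the example below); the female probability is simply 1 - p."""
    return sigmoid(np.dot(W, X) + b)
```

Applying this to each of the rotated images of one skull yields that skull's probability vector p_n.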
The invention uses a least-squares regression model to minimize the residual sum of squares of the square loss function and thereby obtain the optimal parameter. The square loss function is as follows:

Q = Σ_{n=1}^{N} (p_n w - q_n)^2 (2)

wherein p_n w - q_n denotes the residual, p_n denotes the probability vector of the n-th sample, w denotes the weight, and q_n represents the true class of the n-th sample in the training sample set; since sex determination is a binary classification problem, the two classes are denoted by 0 and 1.

To obtain the optimal parameter, the sum of squared residuals Q is minimized. Writing the probability vectors p_n as the rows of a matrix P and the true classes q_n as a vector q, equation (2) can be converted into:

Q = (Pw - q)^T (Pw - q) (3)

Further decomposing equation (3), the above equation can be converted to:

Q = w^T P^T P w - 2 q^T P w + q^T q (4)

It is easily seen from equation (4) that q^T q is a constant and has no influence on solving for the optimal parameter. The above formula is therefore translated into solving for the parameter that minimizes

S = w^T P^T P w - 2 q^T P w (5)

Differentiating with respect to the parameter w, a locally optimal solution of the function is obtained where the derivative with respect to w is zero; that is, the optimal parameter is calculated from:

2 P^T P w - 2 P^T q = 0 (6)

Solving equation (6) yields the optimal parameter w* = (P^T P)^{-1} P^T q, with which the decision function is designed:

D = p_x w* - l (7)

wherein p_x is the probability vector formed from the several images of the skull to be identified, w* is the optimal parameter, and l is a hypothetical label taking the value 0 or 1.
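The normal-equation solution above can be sketched in a few lines of numpy; `fit_weights` and `decide` are illustrative names, and `lstsq` is used instead of the explicit inverse (P^T P)^{-1} P^T q for numerical stability, which gives the same minimizer.

```python
import numpy as np

def fit_weights(P, q):
    """Solve w* = argmin_w ||P w - q||^2, equations (2)-(6).

    P: (N, m) matrix whose rows are the per-sample probability vectors
    (m images per skull); q: length-N vector of 0/1 sex labels.
    """
    w, *_ = np.linalg.lstsq(P, q, rcond=None)
    return w

def decide(p_x, w, l):
    """Equation (7): D = p_x . w* - l for an unknown skull."""
    return float(np.dot(p_x, w) - l)
```

The decision value D is then compared against 0.5 according to the rules given for the chosen label convention l.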
The skull data in the invention are three-dimensional mesh data, and the following description assumes this. The three-dimensional mesh data used are head cross-sectional image data conforming to the DICOM standard, obtained by CT scanning of the head.
Example 1:
The skull data of this embodiment are three-dimensional mesh data, obtained as follows: head cross-sectional image data conforming to the DICOM standard are acquired by head CT scanning; the CT data are denoised and stripped of redundancy, and the three-dimensional skull mesh is reconstructed with the Marching Cubes method; each three-dimensional mesh contains 100,000 vertices and 200,000 triangular patches. Assume 56 training samples, 28 male and 28 female. The specific implementation steps are as follows:
step one: unify the three-dimensional mesh data of the skulls in the training sample set into the Frankfurt coordinate system; take the distance between the center points of the left and right ear holes of each sample as that sample's unit of distance, i.e. scale-normalize the coordinates of all points on the sample, normalizing its posture and size;
step two: for each skull in the normalized training sample set, keep the X and Y axes fixed, rotate around the Z axis, and capture an image every 18 degrees, obtaining 20 images of the skull sample at different angles;
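A sketch of the pose generation in step two, assuming the mesh is given as an N × 3 vertex array; rendering each rotated mesh to a grayscale image is omitted, since the patent does not specify the renderer.

```python
import numpy as np

def rotation_poses(vertices, step_deg=18.0):
    """Rotate the mesh about the Z axis every `step_deg` degrees
    (360 / 18 = 20 poses), from which the 2-D images are rendered."""
    poses = []
    for k in range(int(round(360.0 / step_deg))):
        a = np.deg2rad(k * step_deg)
        Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
                       [np.sin(a),  np.cos(a), 0.0],
                       [0.0,        0.0,       1.0]])
        poses.append(vertices @ Rz.T)
    return poses
```

With the default step of 18 degrees this produces exactly the 20 views per skull used in the example.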
step three: the global characteristics of a skull sample n are extracted by using an improved convolutional neural network method, wherein n is 1,2, 3. The specific process is as follows: firstly, initializing the weight and bias parameters of each layer; next, the input layer grayscale image size is 256 × 256, and the pixel values are normalized to between 0 and 1 for simplicity of calculation. The first convolutional layer C1 is composed of six feature maps, a 17 × 17 kernel and offsets, and the parameters of the feature map layers are counted by the convolution kernel and the activation function. To simplify the eigen-map layer operations and save time cost, two convolutional layers are followed by a downsampling layer. The image size of the second convolutional layer C2 is 224 × 224, and the image size of the downsampled layer S3 is 112 × 112. Similarly, the third convolutional layer C4 contains 6 images of size 96 × 96, the fourth convolutional layer C5 has 6 sub-sample maps of size 80 × 80, and the downsampled layer S6 image size becomes 40 × 40. Layers C1 to S6 are CNN feature extraction processes, and the fully-connected layer C7 is a feature representation layer of an image, which is 40 × 40 × 6 in size.
The probability that each image of each skull sample belongs to male or female is then calculated as f(x)_c = p(y = c | X) = sigmoid(W·X + b), where c = 1 denotes male and c = 0 denotes female;
step four: the probability values of the several images of each sample obtained in step three are assembled into one probability vector per sample, with the several images of each sample regarded as one object of study. The least-squares regression model is applied to minimize the residual sum of squares of the square loss function and obtain the optimal parameter. In this example the male skull is denoted by 1 and the female skull by 0. With the optimal parameter w* the decision function is constructed as follows:
D = p_x w* - l (7);
in this example, the probability vectors p_1, p_2, p_3, ..., p_56 of the 56 training samples are computed with the formula of step three, f(x)_c = p(y = c | X) = sigmoid(W·X + b); the vectors are determined by the skull training samples. In this embodiment, the following optimal parameter is finally obtained: w* = [-0.0000; -0.0678; -0.3188; -0.2325; 0.8005; 0.7806; 1.3203; -0.1481; 0.4322; -0.2780; 0.2172; 0.5314; 0.6997; 0.3086; 0.3575; -0.1631; -0.1430; -0.1339; -0.2908; -0.0000].
Example 2:
this example identifies the skull gender shown in fig. 2 using the function constructed in example 1:
step 1: convert the three-dimensional mesh data of the skull to be identified (FIG. 2) into the unified Frankfurt coordinate system and perform scale normalization to normalize its posture and size;
step 2: acquire images of the unknown skull: keeping the X and Y axes fixed, rotate the skull under test around the Z axis and capture an image every 18 degrees, obtaining 20 images of the skull sample at different angles;
step 3: extract the global features of the skull sample to be identified with the improved convolutional neural network method, and calculate the sex probability of each image of the skull sample to be identified: p_x = (0.6465, 0.6108, 0.5958, 0.5865, 0.5759, 0.5513, 0.5481, 0.5207, 0.5119, 0.5029, 0.5214, 0.5106, 0.4748, 0.4242, 0.4269, 0.5508, 0.5377, 0.5116, 0.5089, 0.5737);
step 4: according to equation (7) of the detailed description, D = p_x w* - l; assuming the label l takes the value 1, the value of D is calculated to be 0.7833; since D = 0.7833 > 0.5, the skull to be identified is male, consistent with the identification result of the anthropologist.
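Replaying steps 3 and 4 with the published p_x and w* (both rounded to four decimals) reproduces the classification; because of the rounding, the computed D differs somewhat from the 0.7833 quoted above, but it still exceeds 0.5.

```python
import numpy as np

# Probability vector of the unknown skull (step 3) and the learned
# weights of example 1, both as published (four-decimal rounding).
p_x = np.array([0.6465, 0.6108, 0.5958, 0.5865, 0.5759, 0.5513, 0.5481,
                0.5207, 0.5119, 0.5029, 0.5214, 0.5106, 0.4748, 0.4242,
                0.4269, 0.5508, 0.5377, 0.5116, 0.5089, 0.5737])
w_star = np.array([-0.0000, -0.0678, -0.3188, -0.2325, 0.8005, 0.7806,
                   1.3203, -0.1481, 0.4322, -0.2780, 0.2172, 0.5314,
                   0.6997, 0.3086, 0.3575, -0.1631, -0.1430, -0.1339,
                   -0.2908, -0.0000])
l = 1                          # convention of this example: l = 1 -> male
D = float(p_x @ w_star) - l    # equation (7)
is_male = D > 0.5              # rule for the l = 0 female / l = 1 male case
```

With the rounded inputs D comes out between 0.5 and 1, so the classification (male) agrees with the text.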
Example 3:
this example selects 36 persons with known gender as the test set, 18 women and 18 men, and identifies gender by using the function of example 1, if as shown in fig. 3, the average correctness rate of women is 94.7%, the average correctness rate of men is 94%, and the average correctness rate is 94.4%.
In summary, the invention discloses preferred embodiments, but is not limited thereto. Those skilled in the art can readily grasp the core idea of the invention from the above embodiments, and modifications or replacements that do not depart from the basic technical solution of the invention fall within its protection scope.
Claims (2)
1. A method for obtaining a skull sexing function, the method comprising:
step one, converting all skull samples in the training skull sample set into Frankfurt coordinate system A and performing scale normalization; the number N of skull samples in the training skull sample set is not less than 50;
step two, rotating each normalized skull sample n around the Z axis, n = 1, 2, 3, ..., N, and capturing an image of the skull sample every α degrees of rotation, wherein 1° ≤ α ≤ 90°, to obtain multiple images of the skull sample at different rotation angles;
step three, extracting the global features of skull sample n, n = 1, 2, 3, ..., N, with an improved convolutional neural network method that comprises, in sequence, input, first convolution, second convolution, down-sampling, third convolution, fourth convolution, down-sampling, full connection, and output; calculating with a sigmoid function the sex probability of every image of skull sample n, the obtained probabilities forming the probability vector p_n of skull sample n, n = 1, 2, 3, ..., N;
step four, solving with the least-squares method for the optimal parameter w* that minimizes the sum of squared residuals over the probability vectors of all skull samples, Σ_{n=1}^{N} (p_n w - q_n)^2, wherein q_n = 0 or 1 and the value of q_n differs according to the sex of skull sample n;
step five, designing a skull sex determination function: D = p_x w* - l, where D is the dependent variable, p_x is the independent variable, and l is 0 or 1.
2. A method of sex determination of a skull, the method comprising:
step 1, converting the skull to be identified into Frankfurt coordinate system A and performing scale normalization;
step 2, rotating the skull sample to be identified processed in step 1 around the Z axis and capturing an image of the skull sample every α degrees of rotation, wherein 1° ≤ α ≤ 90°, to obtain multiple images of the skull sample to be identified at different rotation angles;
step 3, extracting the global features of the skull sample to be identified with the improved convolutional neural network method, which comprises, in sequence, input, first convolution, second convolution, down-sampling, third convolution, fourth convolution, down-sampling, full connection, and output; calculating with a sigmoid function the sex probability of every image of the skull sample to be identified to obtain the probability vector p_x of the skull sample to be identified;
Step 4, performing sex determination on the skull sample to be identified by adopting the formula (7) constructed by the method of claim 1:
D = p_x w* - l (7)
assuming that l = 0 represents male and l = 1 represents female: if the value of |D| is less than 0.5, the sex of the skull sample to be identified is male; otherwise it is female;
assuming that l = 0 represents female and l = 1 represents male: if the value of D is greater than 0.5, the sex of the skull sample to be identified is male; otherwise it is female.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711397107.4A CN108197539A (en) | 2017-12-21 | 2017-12-21 | A skull sex identification method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711397107.4A CN108197539A (en) | 2017-12-21 | 2017-12-21 | A skull sex identification method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108197539A (en) | 2018-06-22 |
Family
ID=62583316
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711397107.4A Pending CN108197539A (en) | 2017-12-21 | 2017-12-21 | A skull sex identification method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108197539A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109118501A (en) * | 2018-08-03 | 2019-01-01 | 上海电气集团股份有限公司 | Image processing method and system |
CN111462055A (en) * | 2020-03-19 | 2020-07-28 | 沈阳先进医疗设备技术孵化中心有限公司 | Skull detection method and device |
CN112907537A (en) * | 2021-02-20 | 2021-06-04 | 司法鉴定科学研究院 | Skeleton sex identification method based on deep learning and on-site virtual simulation technology |
Citations (5)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN1801213A | 2005-09-30 | 2006-07-12 | Tieling Public Security Bureau 213 Research Institute | Method and apparatus for three-dimensional cranium body source identification |
| CN102521875A | 2011-11-25 | 2012-06-27 | Beijing Normal University | Partial least squares recursive craniofacial reconstruction method based on tensor space |
| CN102831443A | 2012-07-27 | 2012-12-19 | Beijing Normal University | Skull sex determining method based on spatial analysis |
| US9165360B1 | 2012-09-27 | 2015-10-20 | Zepmed, Llc | Methods, systems, and devices for automated analysis of medical scans |
| CN105760344A | 2016-01-29 | 2016-07-13 | Hangzhou Dianzi University | Distributed principal component analysis neural network modeling method for chemical exothermic reaction |
Non-Patent Citations (2)

| Title |
|---|
| RONG RONG REN et al., "Automatic Sex Identification Based on Convolution Neural Network and Least Square Method", 2016 International Conference on Electronic Information Technology and Intellectualization (ICEITI 2016) |
| ZHAO Qianna, "Research on Computer-Aided Three-Dimensional Skull Sex Determination Methods" (计算机辅助三维颅骨性别鉴定方法研究), China Master's Theses Full-text Database, Medicine and Health Sciences |
Cited By (4)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN109118501A | 2018-08-03 | 2019-01-01 | Shanghai Electric Group Co., Ltd. | Image processing method and system |
| CN111462055A | 2020-03-19 | 2020-07-28 | Shenyang Advanced Medical Equipment Technology Incubation Center Co., Ltd. | Skull detection method and device |
| CN111462055B | 2020-03-19 | 2024-03-08 | Neusoft Medical Systems Co., Ltd. | Skull detection method and device |
| CN112907537A | 2021-02-20 | 2021-06-04 | Academy of Forensic Science | Skeleton sex identification method based on deep learning and on-site virtual simulation technology |
Similar Documents

| Publication | Title |
|---|---|
| CN107369160B | Choroid neogenesis blood vessel segmentation algorithm in OCT image |
| JP6681729B2 | Method for determining 3D pose of object and 3D location of landmark point of object, and system for determining 3D pose of object and 3D location of landmark of object |
| CN110503680B | Unsupervised convolutional neural network-based monocular scene depth estimation method |
| JP4950787B2 | Image processing apparatus and method |
| Yaqub et al. | A deep learning solution for automatic fetal neurosonographic diagnostic plane verification using clinical standard constraints |
| CN110348330A | Human face posture virtual view generation method based on VAE-ACGAN |
| CN107358648A | Real-time full-automatic high quality three-dimensional facial reconstruction method based on individual facial image |
| CN106951840A | A kind of facial feature points detection method |
| JP4780198B2 | Authentication system and authentication method |
| CN111462120A | Defect detection method, device, medium and equipment based on semantic segmentation model |
| CN101159015A | Two-dimension human face image recognizing method |
| CN113781640A | Three-dimensional face reconstruction model establishing method based on weak supervised learning and application thereof |
| De Jong et al. | An automatic 3D facial landmarking algorithm using 2D gabor wavelets |
| CN108197539A | A kind of Diagnosis of Crania By Means identification method |
| CN111797264A | Image augmentation and neural network training method, device, equipment and storage medium |
| CN111507184B | Human body posture detection method based on parallel cavity convolution and body structure constraint |
| CN112750531A | Automatic inspection system, method, equipment and medium for traditional Chinese medicine |
| CN111951383A | Face reconstruction method |
| CN116091686A | Method, system and storage medium for three-dimensional reconstruction |
| CN113298742A | Multi-modal retinal image fusion method and system based on image registration |
| CN114066953A | Three-dimensional multi-modal image deformable registration method for rigid target |
| CN112489048A | Deep network-based automatic optic nerve segmentation method |
| CN107194364B | Huffman-LBP multi-pose face recognition method based on divide and conquer strategy |
| Perakis et al. | Partial matching of interpose 3D facial data for face recognition |
| EP3853814B1 | Analyzing symmetry in image data |
Legal Events

| Code | Title | Description |
|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 2018-06-22 |