CN114511882B - Ear acupoint positioning method - Google Patents
- Publication number
- CN114511882B (Application CN202210106612.3A)
- Authority
- CN
- China
- Prior art keywords
- texture
- model
- shape
- picture
- points
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/213—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
- G06F18/2134—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on separation criteria, e.g. independent component analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2411—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Abstract
The invention relates to an auricular point positioning method comprising the following steps: preprocessing the training sample pictures, training an AAM model, determining the initial position of the ear in the measured picture, and optimizing the shape weights p and texture weights λ in the AAM model to match the feature points of the measured picture and locate the auricular points. The ear acupoints are reconstructed from calibrated feature points; HOG+SVM is chosen to process the pictures, reducing the error caused by ambient brightness, small rotations, and scale changes; meanwhile the AAM builds both a shape model and a texture model, so the final model is more stable, and optimizing the shape weights p and texture weights λ ensures the convergence speed. The method can provide a target area for automated condition diagnosis, a target location for automated condition identification and analysis, and an ear-seed (bean plaster) placement location on the patient's ear when a treatment regimen is given.
Description
Technical Field
The invention belongs to the technical field of ear acupoint positioning, and relates to an ear acupoint positioning method.
Background
Since the 1980s, China has strengthened the research and application of ear acupoints, and more than 100 ear-acupoint treatment schemes for indications are now available. Research on ear acupoints continues to be refined: according to national standard GB/T 13734-2008, 91 ear acupoints are distributed over the ear. Because the morphological structure of the ear differs considerably from person to person and the acupoints are numerous, doctors without long-term professional training can hardly judge the acupoint positions on different people's ears, and the measurement error inherent in visual estimation further aggravates the difficulty of manual identification.
Computer vision technology has accumulated and iterated for decades, and excellent models have emerged for handling real-world tasks; applying artificial intelligence to highly repetitive tasks can greatly reduce human labor. The AAM (Active Appearance Model) is an active model combining shape and texture that adapts well to different human ears. This invention therefore proposes an AAM-based auricular point positioning method that adapts well to various lighting environments and replaces the manual completion of complex auricular point positioning.
Disclosure of Invention
The invention aims to provide an auricular point positioning method.
The method specifically comprises the following steps:
step one, training sample picture preprocessing:
Taking M ear photos as training sample pictures; acquiring the 91 feature points on the ear of each photo to form the input sample set; storing the sample pictures and the calibrated feature-point coordinates;
step two, training the AAM model: AAM model training comprises a shape model and a texture model;
Step 1, building a shape model:
Scaling the pictures preprocessed in step one so that the diagonal length of every picture is the same;
In the training set, M pictures serve as training samples, and N (= 91) feature points are calibrated for each picture with coordinates (x_i, y_i); the x_i, y_i are arranged in order of each point's position in the image into a vector s_j;
s_j represents the j-th sample shape. Generalized Procrustes analysis is first performed on the feature-point vectors, then principal component analysis, and the shape model S is obtained by training: S = S_avg + Σ_{m=1..Ls} p_m · S_m, where S_avg is the average model, S_m represents a mode of variation, and p_m is the shape-change weight;
Step 2, building a texture model;
First, the rotation, translation, and scaling obtained from the generalized Procrustes analysis are applied to the sample pictures to extract a stable texture model; then principal component analysis is performed, and the texture model is obtained by training: A(x) = A_avg(x) + Σ_i λ_i · A_i(x), where A(x) represents the texture at a feature point x of the shape model S, A_avg is the average texture model, A_i is a mode of texture variation obtained by principal component analysis, and λ_i is the weight of that texture variation;
Step three, determine the initial position of the ear in the measured picture, providing the initial position and size for the optimization iteration of step four;
Step four, optimize the shape weights p and texture weights λ in the AAM model: first find the closest shape model, then the closest texture model, then again the closest shape model, alternating until the result fully converges or a set number of iterations is reached; this completes the matching of the feature points of the measured picture and achieves the positioning of the auricular points.
The specific calibration scheme of step one is as follows. Reserved points: most of the boundary points of the acupoint areas are selected as feature points, and the auricular point areas are obtained from these feature points. Added points: single auxiliary points are added by interpolation, and adding points to the set increases the constraints on the model. Deleted points: the boundary points inside the earlobe region are deleted.
In step three, the target is expressed as a one-dimensional vector via the traditional histogram of oriented gradients and a support vector machine; a support vector machine classifier learns the two classes of target and non-target, where non-target samples are taken from pictures easily confused with human ears, such as human skin and hair; the ear in the measured picture is then located using a sliding window and non-maximum suppression.
The objective for the optimization of the shape-weight change Δp in step four is:
Δp* = argmin_Δp || t(W(p)) − A_λ(W(Δp)) ||₂²
With the texture weights held fixed, the shape-weight change Δp is solved optimally; t denotes the test picture, W(p) rotates and scales the picture according to the rotation/scale information in p, and t(W(p)) extracts local texture features from the test image after that transformation; A_λ denotes the texture model under texture weights λ, and W(Δp) denotes the model after a small shape deformation. The first term is the texture feature of the test picture, and the subtracted term is the texture feature to be solved; the best Δp minimizes the error, and the optimal solution of the L2 error of the difference gives the optimal step Δp of the shape change;
The objective for the optimal solution of the texture-weight change Δλ is the analogous least-squares problem:
Δλ* = argmin_Δλ || t(W(p)) − A_{λ+Δλ} ||₂²
With the shape weights held fixed, the texture-weight change is solved optimally, where Δλ represents the texture change.
The invention designs a set of marker point groups suited to visual tasks, selecting auricle edge points, points of maximum curvature, and equal-division points to reconstruct the ear acupoints. Gray-level images differ greatly with the environment (an over-exposed image and one taken in the dark look very different), so HOG+SVM features are chosen to process the pictures, reducing the error caused by ambient brightness, small rotations, and scale changes. Meanwhile, the AAM builds both a shape model and a texture model, so the final model is more stable, and optimizing the shape weights p and texture weights λ ensures the convergence speed. The method can provide a target area for automated condition diagnosis, a target location for automated condition identification and analysis, and an ear-seed (bean plaster) placement location on the patient's ear when a treatment regimen is given.
Drawings
FIG. 1 is a reference diagram for preprocessing a training sample picture in an embodiment of the present invention;
FIG. 2 is a GB/T13734-2008 national standard auricular point name and location map;
FIG. 3 is a schematic view of a conventional ear acupoint partition;
FIG. 4 is a graph of the effect before alignment by generalized Procrustes analysis;
FIG. 5 is a graph of the effect after alignment by generalized Procrustes analysis;
FIG. 6 is a graph of the effect of the shape-change weights on the ear shape;
FIG. 7 is an effect diagram of stitching together the averaged texture features of all sample pictures;
FIG. 8 is a graph of the effect of the texture-change weights on the model texture;
FIG. 9 is a schematic diagram of the feature-point search during the iterative process.
Detailed Description
Step one, training sample picture preprocessing:
Take M pictures of ears as training sample pictures (the ear must face the camera in the picture without excessive tilt, and must not occupy too small a proportion of the picture). As shown in fig. 1, manually calibrate the acquired sample pictures: referring to the GB/T 13734-2008 national-standard auricular point names and locations shown in fig. 2 and the common auricular point partition chart shown in fig. 3, acquire 91 feature points on the ear; the training samples are labeled manually to obtain more accurate feature points. The data of the N (= 91) feature points of each sample are obtained to constitute the input sample set.
Taking a sample picture as an example, the specific calibration scheme is as follows:
Reserved points: most of the boundary points of the acupoint areas are selected as feature points, and the auricular point area of interest is obtained by connecting these points; for example, the auricular point in fig. 2 corresponds to point 18 in the calibration of fig. 1, and points 16, 39 and 41 enclose the area behind it.
Added points: in this embodiment, point 17 is added by interpolation between points 16 and 18. Adding points to the set increases the constraints on the model: if point 16 is initially positioned accurately but point 18 is far off, the corrective effect of point 16 on point 18 is weak when the two points are far apart, and inserting a point between them mitigates this problem. Although interpolation benefits the convergence accuracy of the model, more points are not always better: a larger point set is harder to calibrate and slower to compute, so this patent interpolates at most once, according to distance, between important points (national-standard key points and regional boundary points).
Deleted points: the boundary points inside the earlobe region are deleted, because the national-standard division scheme partitions this area proportionally by position; these boundary points have no consistent texture features and hinder the acquisition of a stable texture model, so their deletion is necessary.
The above addition and deletion operations finally yield 91 points. Calibrating these 91 feature points divides the ear as a whole; for example, the triangular fossa region in fig. 3 is outlined by points 34-35-36-37-48-49-50-51-52-53, and individual auricular point regions can likewise be delimited, e.g., the region behind the auricular point is formed by points 16-17-18-39-40-41, with other regions being similar. In addition, some feature points directly represent acupoints, e.g., point 18 is itself an auricular point and point 72 is the eye point. This calibration scheme can therefore reconstruct and represent the 91 auricular points of traditional Chinese medicine.
All samples calibrated according to this scheme are divided into two parts: the first is the picture data set, stored in the traditional png picture format; the other is the calibration data set, which stores the coordinates of the 91 points in the picture coordinate system in order, with each sample having a corresponding txt file holding its feature-point coordinates.
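As an illustration, loading one calibrated sample under this storage layout might look as follows in Python (a minimal sketch; the exact txt layout, one "x y" pair per line, is an assumption, since the text only states that the 91 coordinates are stored in order):

```python
# Minimal loader sketch; assumes one "x y" coordinate pair per line in the txt.
import cv2
import numpy as np

def load_sample(png_path: str, txt_path: str):
    image = cv2.imread(png_path, cv2.IMREAD_GRAYSCALE)  # picture data set (png)
    points = np.loadtxt(txt_path)                       # (91, 2) feature points
    return image, points
```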
Step two, training the AAM model: AAM model training comprises a shape model and a texture model;
Step 1, building a shape model:
Scaling the pictures preprocessed in step one so that the diagonal length of every picture is the same;
In this embodiment, each picture is scaled as a whole so that its diagonal length is 300, keeping the original aspect ratio, bringing all pictures to the same scale (uniform diagonal length). The feature-point coordinates are scaled correspondingly.
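A minimal sketch of this resize step, assuming OpenCV and a NumPy (91, 2) array of point coordinates (both library choices are assumptions; the text only specifies the target diagonal of 300):

```python
import math
import cv2

def scale_to_diagonal(image, points, target_diag=300.0):
    """Resize so the picture diagonal equals target_diag, keeping aspect ratio."""
    h, w = image.shape[:2]
    factor = target_diag / math.hypot(w, h)
    resized = cv2.resize(image, (round(w * factor), round(h * factor)))
    return resized, points * factor   # scale the feature-point coordinates too
```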
In the training set, M pictures serve as training samples, and N (= 91) feature points are calibrated for each picture with coordinates (x_i, y_i). The j-th sample shape s_j is the vector formed by arranging x_i, y_i in order of each point's position in the image; every sample picture thus has a shape vector s_j of length 2N, with j ranging over the M samples.
First, generalized Procrustes analysis is performed on the feature-point vectors, then principal component analysis, and the shape model is obtained by training;
The data are aligned by generalized Procrustes analysis (Generalized Procrustes Analysis): the different ear sizes, rotation angles, and ear positions across pictures introduce the three large linear transformations of rotation, translation, and scaling into each group of sample feature points. These transformations should not be included in the shape model; that is, the comprehensive model is trained on data from which the similarity transform has been removed. Two samples are needed at a time: the first is the template sample and the second is the sample to align (traversing all training samples). With these, every sample can be aligned onto the template as follows:
Deleting the translation component: for each group of samples s_j, compute the mean of all feature-point positions, (x̄, ȳ), and subtract this mean from the whole feature-point set to remove the translation component of the data;
Deleting the scaling component: compute the standard deviation of the feature points in each group of samples, and divide the data by it to obtain de-scaled data.
Deleting the rotation component: find the rotation angle at which the deviation between the two samples is smallest. Compute the covariance matrix of the two groups of samples, perform singular value decomposition (SVD) to obtain the left and right matrices, generate the rotation matrix, and multiply by it to rotate the data to the template's angle.
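A minimal sketch of aligning one sample onto the template with these three steps, assuming (91, 2) NumPy arrays; the reflection (determinant) correction of a full Procrustes implementation is omitted for brevity:

```python
import numpy as np

def align_to_template(template: np.ndarray, sample: np.ndarray) -> np.ndarray:
    # Delete the translation component: subtract the mean point position.
    t0 = template - template.mean(axis=0)
    s0 = sample - sample.mean(axis=0)
    # Delete the scaling component: divide by the standard deviation.
    t0 = t0 / t0.std()
    s0 = s0 / s0.std()
    # Delete the rotation component: SVD of the covariance of the two point
    # sets gives the rotation minimizing their deviation (orthogonal Procrustes).
    u, _, vt = np.linalg.svd(s0.T @ t0)
    return s0 @ (u @ vt)
```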
As shown in fig. 5, the alignment obtained by generalized Procrustes analysis (displayed by simply connecting the points in order) makes the data much neater than the original data in fig. 4: no rotation or scaling of the ear shape remains, so the differences between curves reflect only the differences between individual ears, not external factors such as shooting conditions.
Principal component analysis (PCA) is a common means of dimensionality reduction. In this example there are 91 feature points, each with two dimensions, so the shape model has 182 dimensions (i.e., during the optimization the whole shape model can move in 182 directions). The dimensionality of the data is too high and the feature points are not mutually independent; in this case PCA dimensionality reduction can extract the main directions of variation of the samples:
All aligned feature data are de-centered (analyzing unaligned data is meaningless; only aligned data represent real shape variation). Let M be the total number of samples and N the number of feature points (91); each feature point has two components x_i, y_i, where x_i^(j) denotes the i-th x-coordinate of the j-th sample and y_i^(j) the i-th y-coordinate of the j-th sample.
A covariance matrix is then generated, C = (1/M) Σ_j (s_j − s_avg)(s_j − s_avg)ᵀ, whose entries describe how different features vary together (a large value at a position indicates that the variation of the two features coincides strongly and they are highly similar).
Eigenvalue decomposition is then performed, and the eigenvectors (S_1, S_2, ..., S_Ls) corresponding to the largest Ls eigenvalues are extracted. These eigenvectors express the directions of shape variation (earlier eigenvectors capture larger variation in the data); each is 182-dimensional and represents an overall direction of model change. The shape model is built as
S = S_avg + Σ_{m=1..Ls} p_m · S_m
where S_avg, the model obtained by averaging the aligned features, is the average (initialization) model, S_m represents each mode of variation, and p_m is the shape-change weight;
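A minimal sketch of this principal component step, assuming the M aligned shapes are stacked into an (M, 182) array; the number of retained modes Ls is an assumed parameter:

```python
import numpy as np

def build_shape_model(aligned: np.ndarray, n_modes: int = 20):
    """Return S_avg and the Ls dominant shape-variation modes S_m."""
    s_avg = aligned.mean(axis=0)                   # average (initialization) model
    centered = aligned - s_avg                     # de-center the aligned data
    cov = centered.T @ centered / len(aligned)     # 182 x 182 covariance matrix
    vals, vecs = np.linalg.eigh(cov)               # eigen-decomposition
    order = np.argsort(vals)[::-1][:n_modes]       # largest Ls eigenvalues first
    return s_avg, vecs[:, order].T                 # modes: (Ls, 182)

def synthesize_shape(s_avg, modes, p):
    """Reconstruct a shape: S = S_avg + sum_m p_m * S_m."""
    return s_avg + p @ modes
```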
As shown in fig. 6, only the first two directions of variation are selected for verification (the first two principal component directions S_1, S_2 are those of largest model difference), and the change in ear shape can be seen intuitively; all ear models can be obtained by modifying the change weights, i.e., all ear shapes can be reconstructed by deforming the standard shape model.
Step 2, building a texture model;
Extracting texture features: first, the rotation, translation, and scaling obtained from the generalized Procrustes analysis are applied to the pictures (whatever similarity transform the feature points undergo, the picture undergoes as well), and a stable texture model is extracted. The texture extraction scheme crops the 20×20 region around each feature point, giving a 91×20×20 texture expression. The effect of stitching together the averaged texture features of all sample pictures is shown in fig. 7;
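A minimal sketch of this cropping step, assuming a grayscale image array and a (91, 2) array of (x, y) points; edge padding is added here as a safeguard for points near the border, which the text does not specify:

```python
import numpy as np

def extract_texture(image: np.ndarray, points: np.ndarray, half: int = 10):
    """Return a (91, 20, 20) stack of patches centred on the feature points."""
    padded = np.pad(image, half, mode="edge")        # guard border feature points
    patches = []
    for x, y in np.round(points).astype(int):
        patches.append(padded[y:y + 2 * half, x:x + 2 * half])
    return np.stack(patches)
```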
After the texture features of all samples are obtained, principal component analysis is performed (finding the average texture and the rules of texture variation), and the texture model is obtained by training: A(x) = A_avg(x) + Σ_i λ_i · A_i(x), where A(x) represents the texture at a feature point x, A_avg is the average texture model, A_i is a mode of texture variation obtained by principal component analysis, and λ_i is its weight. All textures can be obtained by this superposition; all pixels can be generated from the average texture plus the texture variations. As shown in fig. 8, with the shape fixed at the average, modifying the texture-change weights changes the model texture: the ear color changes as the weights are modified. Taking λ_1 in fig. 8 as an example (the effect of changing from 0 to 1.7 and to −2), viewed at the earlobe, the brightness around each feature point darkens overall as λ_1 goes 0 → 0.8 → 1.7, and the earlobe brightens as λ_1 goes from 0 to −2. Modifying the texture-change weights therefore makes it possible to fit the texture features of all ears.
Step three, determine the initial position of the ear in the measured picture, providing the initial position and size for the optimization iteration of step four;
The target is expressed as a one-dimensional vector through the traditional histogram of oriented gradients (HOG) plus support vector machine (SVM): an SVM classifier learns the two classes of target and non-target, where non-target samples are image patches easily confused with human ears, such as human skin and hair; the ear in the measured picture is then located using a sliding window and non-maximum suppression.
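The detector can be sketched as follows with scikit-image HOG features and a linear SVM; the window size, stride, decision threshold, HOG parameters, and single-scale search are all illustrative assumptions (the text fixes none of them):

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

WIN = (96, 64)                       # assumed detection window (height, width)

def hog_vec(window):
    """Express a window as a one-dimensional HOG feature vector."""
    return hog(window, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2))

def train_detector(pos_windows, neg_windows):
    """Learn target (ear) vs non-target (skin, hair, ...) with a linear SVM."""
    x = np.array([hog_vec(w) for w in list(pos_windows) + list(neg_windows)])
    y = np.array([1] * len(pos_windows) + [0] * len(neg_windows))
    return LinearSVC(C=1.0).fit(x, y)

def iou(a, b):
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union else 0.0

def nms(boxes, scores, iou_thresh=0.3):
    """Greedy non-maximum suppression: keep the best box, drop its overlaps."""
    order = list(np.argsort(scores)[::-1])
    keep = []
    while order:
        i = order.pop(0)
        keep.append(boxes[i])
        order = [j for j in order if iou(boxes[i], boxes[j]) < iou_thresh]
    return keep

def detect(image, clf, step=16, thresh=0.5):
    """Slide a window over the image and score each position with the SVM."""
    (h, w), boxes, scores = WIN, [], []
    for y in range(0, image.shape[0] - h, step):
        for x in range(0, image.shape[1] - w, step):
            s = clf.decision_function([hog_vec(image[y:y + h, x:x + w])])[0]
            if s > thresh:
                boxes.append((x, y, x + w, y + h))
                scores.append(s)
    return nms(boxes, scores)
```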
Step four, optimize the shape weights p and texture weights λ in the AAM model to complete the matching of the feature points of the measured picture and achieve the positioning of the auricular points.
Find the closest shape model, then the closest texture model, then again the closest shape model, solving iteratively until the result fully converges or a set number of iterations is reached.
The objective for the optimization of the shape-weight change Δp is:
Δp* = argmin_Δp || t(W(p)) − A_λ(W(Δp)) ||₂²
With the texture weights held fixed, the shape-weight change Δp is solved optimally; t denotes the test picture, W(p) rotates and scales the picture according to the rotation/scale information in p, and t(W(p)) extracts the local texture features of the transformed test image, i.e., the 20×20 region around each feature point. A_λ denotes the texture model under λ, and W(Δp) denotes the model after a small shape deformation. The first term is the texture feature of the test picture, and the subtracted term is the texture feature to be solved; the best Δp minimizes the error, and the optimal solution of the L2 error of the difference gives the optimal change step Δp of the shape under the given texture conditions.
The objective for the optimal solution of the texture-weight change Δλ is the analogous least-squares problem:
Δλ* = argmin_Δλ || t(W(p)) − A_{λ+Δλ} ||₂²
With the shape weights held fixed, the variable Δλ is the texture change to be solved;
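The alternating scheme can be sketched generically as follows, assuming a callable residual(p, λ) that returns the texture-difference vector of the two objectives above; the finite-difference Jacobian and Gauss-Newton update are one illustrative way to realize each least-squares subproblem, not a solver prescribed by the text:

```python
import numpy as np

def gauss_newton_step(residual, x, eps=1e-5):
    """One Gauss-Newton update for min ||residual(x)||^2 (numerical Jacobian)."""
    r0 = residual(x)
    jac = np.stack([(residual(x + eps * e) - r0) / eps
                    for e in np.eye(len(x))], axis=1)
    dx, *_ = np.linalg.lstsq(jac, -r0, rcond=None)
    return dx

def fit_aam(residual, p, lam, max_iters=10, tol=1e-4):
    """Alternate: best Δp with λ held fixed, then best Δλ with p held fixed."""
    for _ in range(max_iters):
        dp = gauss_newton_step(lambda q: residual(q, lam), p)
        p = p + dp
        dlam = gauss_newton_step(lambda m: residual(p, m), lam)
        lam = lam + dlam
        if np.linalg.norm(dp) < tol and np.linalg.norm(dlam) < tol:
            break                    # converged before the iteration cap
    return p, lam
```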
As shown in fig. 9, the feature-point search during the iterative process converges within about ten iterations; the convergence speed is high and the stability is good.
The final solution is obtained through the above iteration. This process achieves accurate positioning of the 91 feature points: with pictures taken by standard equipment the convergence success rate approaches 100%, and the average error of the calibrated points is below 5 pixels, far exceeding existing auricular point partitioning algorithms in positioning precision and convergence rate. Under unstable lighting with mobile-phone photographs the success rate remains at 95% or higher, and high accuracy is maintained when subjects in the data set wear glasses or ear studs, or when masks and hair cause small occlusions. For the very special cases of complex ears or large occluded portions, accurate positioning is not yet possible because the texture information differs too much; as the data set grows, the scheme's adaptability to such complex situations will keep strengthening and these problems will be resolved.
This human-ear feature-point positioning scheme based on the active appearance model performs well in precision, robustness, and accuracy: because the scheme models both the variation of ear shape and the variation of ear texture, its stability is well guaranteed even with few samples, and the running time can be kept within 10 seconds.
Claims (4)
1. An auricular point positioning method is characterized in that: the method specifically comprises the following steps:
step one, training sample picture preprocessing:
taking M ear photos as training sample pictures; acquiring N feature points on the ear of each photo, where N = 91, to form the input sample set; storing the sample pictures and the calibrated feature-point coordinates;
step two, training the AAM model: AAM model training comprises a shape model and a texture model;
Step 1, building a shape model:
Scaling the pictures preprocessed in step one so that the diagonal length of every picture is the same;
M pictures in the training set are used as training samples, and N feature points are calibrated for each picture with coordinates (x_i, y_i); the x_i, y_i are arranged in order of each point's position in the image into a vector s_j;
s_j represents the j-th sample shape. Generalized Procrustes analysis is first performed on the feature-point vectors, then principal component analysis, and the shape model S is obtained by training: S = S_avg + Σ_{m=1..Ls} p_m · S_m, where S_avg is the average model, S_m represents a mode of variation, and p_m is the shape-change weight;
Step 2, building a texture model;
First, the rotation, translation, and scaling obtained from the generalized Procrustes analysis are applied to the sample pictures to extract a stable texture model; then principal component analysis is performed, and the texture model is obtained by training: A(x) = A_avg(x) + Σ_i λ_i · A_i(x), where A(x) represents the texture at a feature point x of the shape model S, A_avg is the average texture model, A_i is a mode of texture variation obtained by principal component analysis, and λ_i is the weight of that texture variation;
Step three, determine the initial position of the ear in the measured picture, providing the initial position and size for the optimization iteration of step four;
Step four, optimize the shape weights p and texture weights λ in the AAM model: first find the closest shape model, then the closest texture model, then again the closest shape model, alternating until the result fully converges or a set number of iterations is reached; this completes the matching of the feature points of the measured picture and achieves the positioning of the auricular points.
2. The auricular point positioning method according to claim 1, characterized in that the specific calibration scheme of step one is as follows. Reserved points: most of the boundary points of the acupoint areas are selected as feature points, and the auricular point areas are obtained from these feature points. Added points: single auxiliary points are added by interpolation, and adding points to the set increases the constraints on the model. Deleted points: the boundary points inside the earlobe region are deleted.
3. The auricular point positioning method according to claim 1, characterized in that: in step three, the target is expressed as a one-dimensional vector via the traditional histogram of oriented gradients and a support vector machine; a support vector machine classifier learns the two classes of target and non-target, where non-target samples are taken from pictures easily confused with human ears, such as human skin and hair; the ear in the measured picture is then located using a sliding window and non-maximum suppression.
4. The auricular point positioning method according to claim 1, characterized in that the objective for the optimization of the shape-weight change Δp in step four is:
Δp* = argmin_Δp || t(W(p)) − A_λ(W(Δp)) ||₂²
With the texture weights held fixed, the shape-weight change Δp is solved optimally; t denotes the test picture, W(p) rotates and scales the picture according to the rotation/scale information in p, and t(W(p)) extracts local texture features from the test image after that transformation; A_λ denotes the texture model under texture weights λ, and W(Δp) denotes the model after a small shape deformation. The first term is the texture feature of the test picture, and the subtracted term is the texture feature to be solved; the best Δp minimizes the error, and the optimal solution of the L2 error of the difference gives the optimal step Δp of the shape change;
The objective for the optimal solution of the texture-weight change Δλ is the analogous least-squares problem:
Δλ* = argmin_Δλ || t(W(p)) − A_{λ+Δλ} ||₂²
With the shape weights held fixed, the texture-weight change is solved optimally, where Δλ represents the texture change.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210106612.3A CN114511882B (en) | 2022-01-28 | 2022-01-28 | Ear acupoint positioning method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210106612.3A CN114511882B (en) | 2022-01-28 | 2022-01-28 | Ear acupoint positioning method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114511882A CN114511882A (en) | 2022-05-17 |
CN114511882B true CN114511882B (en) | 2024-06-28 |
Family
ID=81551609
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210106612.3A Active CN114511882B (en) | 2022-01-28 | 2022-01-28 | Ear acupoint positioning method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114511882B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2024187227A1 (en) * | 2023-03-15 | 2024-09-19 | Masoud Jafarzadeh | An auricular light acupuncture therapy system |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101014967A (en) * | 2004-09-08 | 2007-08-08 | 皇家飞利浦电子股份有限公司 | Feature extraction algorithm for automatic ear recognition |
CN101369309B (en) * | 2008-09-26 | 2011-08-24 | 北京科技大学 | Human ear image normalization method based on active apparent model and outer ear long axis |
FR2971873B1 (en) * | 2011-02-22 | 2014-01-03 | Fittingbox | METHOD FOR DETECTING A PREDEFINED SET OF FACE CHARACTERISTIC POINTS |
CN103514442B (en) * | 2013-09-26 | 2017-02-08 | 华南理工大学 | Video sequence face identification method based on AAM model |
CN105718885B (en) * | 2016-01-20 | 2018-11-09 | 南京邮电大学 | A kind of Facial features tracking method |
CN106548521A (en) * | 2016-11-24 | 2017-03-29 | 北京三体高创科技有限公司 | A kind of face alignment method and system of joint 2D+3D active appearance models |
US10423823B1 (en) * | 2018-03-19 | 2019-09-24 | University Of South Florida | Unconstrained ear recognition using a combination of deep learning and handcrafted features |
CN109740426B (en) * | 2018-11-23 | 2020-11-06 | 成都品果科技有限公司 | Face key point detection method based on sampling convolution |
- 2022-01-28: CN application CN202210106612.3A (patent CN114511882B, en), status Active
Non-Patent Citations (2)
Title |
---|
Research on face recognition based on the AAM model and RS-SVM; Wang Lidong; Wang Yuhuai; Computer Engineering and Applications; 2009-08-01 (No. 22); full text *
Fast face detection and feature localization; Zhu Wenjia; Qi Feihu; Journal of Image and Graphics; 2005-12-30 (No. 11); full text *
Also Published As
Publication number | Publication date |
---|---|
CN114511882A (en) | 2022-05-17 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |