CN106600595A - Human body characteristic dimension automatic measuring method based on artificial intelligence algorithm - Google Patents

Human body characteristic dimension automatic measuring method based on artificial intelligence algorithm

Info

Publication number
CN106600595A
CN106600595A (application CN201611192069.4A)
Authority
CN
China
Prior art keywords
human body
training
method based
image
measuring method
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201611192069.4A
Other languages
Chinese (zh)
Inventor
张巍伟
张冬斌
邓建强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
XIAMEN KERUITE INFORMATION TECHNOLOGY Co Ltd
Original Assignee
XIAMEN KERUITE INFORMATION TECHNOLOGY Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by XIAMEN KERUITE INFORMATION TECHNOLOGY Co Ltd
Priority to CN201611192069.4A
Publication of CN106600595A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person

Abstract

The invention belongs to the field of artificial-intelligence image recognition and provides an automatic method for measuring human body feature dimensions based on an artificial-intelligence algorithm. The method comprises the following steps: S1, collecting two-dimensional images of a large number of users and cropping the different characteristic body parts from the two-dimensional images to form historical training samples; S2, training the historical training samples with a different artificial-intelligence classifier for each body part, and generating a positioning model for each body part from the training results; S3, using the positioning models formed in S2 to locate the different body parts in a two-dimensional image provided by a new user; S4, performing three-dimensional fitting based on the body-part positioning results of S3, thereby automatically measuring the new user's three-dimensional body feature dimensions. The method is the first in China to use an artificial-intelligence approach to automatically identify and measure real three-dimensional body dimensions from the two-dimensional body images uploaded by users, realizing a recognition process from a two-dimensional image to a three-dimensional structure.

Description

An automatic method for measuring human body feature dimensions based on an artificial-intelligence algorithm
Technical field
The invention belongs to the field of artificial-intelligence image recognition, and in particular relates to an automatic method for measuring human body feature dimensions based on an artificial-intelligence algorithm.
Background technology
In the footwear and clothing industry it is often necessary to measure three-dimensional body feature dimensions such as bust, height and waist circumference. In the prior art, these dimensions are usually obtained by acquiring a three-dimensional point cloud of the body with a three-dimensional body scanner and then computing the size of each body part with certain algorithms. Such measurement, however, requires a dedicated location and externally provided hardware (for example a three-dimensional body scanner). These factors make the measurement inconvenient and untimely, and the user experience suffers considerably.
Content of the invention
To solve the above problems, the present invention proposes an automatic method for measuring human body feature dimensions based on an artificial-intelligence algorithm, so that three-dimensional data of human body feature parts can be obtained automatically from nothing more than two-dimensional photographs of the body. The technical scheme is as follows:
An automatic method for measuring human body feature dimensions based on an artificial-intelligence algorithm, comprising:
S1, collecting two-dimensional images of a large number of users, and cropping the different characteristic body parts from the two-dimensional images to generate historical training samples;
S2, training the historical training samples with a different artificial-intelligence classifier for each body part, and generating a positioning model for each body part from the training results;
S3, using the positioning models generated in S2 to locate the different body parts in a two-dimensional image provided by a new user;
S4, performing three-dimensional fitting based on the body-part positioning results of S3, so as to automatically measure the new user's three-dimensional body feature dimensions.
Further, the user two-dimensional images collected in step S1 include at least a front image, a side image and a back image.
Further, in step S2, an AdaBoost algorithm is used to train the positioning of the body parts with a high recognition rate, and a CNN convolutional-neural-network algorithm is used to train the positioning of the other body parts.
Further, the AdaBoost algorithm is used to train the positioning of the nose, and the CNN convolutional-neural-network algorithm is used to train the positioning of the other body parts.
Further, when the AdaBoost algorithm is used to train the positioning of the nose, 60,000 cropped nose pictures are first used as positive samples and 150,000 pictures cropped from pixels other than the nose are used as negative samples, forming the sample set for training the AdaBoost cascade classifier.
Further, when the nose is located in step S3, a sliding window is slid over the image to be detected; each slide produces an output from the trained model, this output being a column vector whose maximum indicates the corresponding class, and when the window reaches a position where the maximum of the output vector corresponds to the nose label, the nose has been located successfully;
when the other body feature dimensions are calculated, positioning and measurement are carried out within the region obtained by translating the vertical line through the nose over a limited range along the positive and negative x-axis.
Further, when the three-dimensional bust dimension of a new user is measured in step S4, two edge points of the bust contour are located in the front image and the length between them is taken as the major diameter of an ellipse; two edge points of the bust contour are located in the side image and the length between them is taken as the minor diameter; an ellipse is then determined from the major and minor diameters, and computing its circumference gives the approximate bust length.
Further, when the three-dimensional bust dimension of a new user is measured in step S4, multiple points of the bust contour are located in the front, side and back portraits and used to fit a body bust contour, and the perimeter of the fitted contour is taken as the approximate bust length.
Further, when the multiple characteristic body parts are cropped from the two-dimensional image in step S1, the positions to be measured are marked by manual clicks; the coordinate point of each click is recorded and a square block of pixels centred on it is cropped, and the pictures formed by these pixels serve as the historical training samples for training the artificial-intelligence classifiers.
Further, at least 30 characteristic body parts of the human body are cropped from the two-dimensional image in step S1.
The automatic method for measuring human body feature dimensions based on an artificial-intelligence algorithm of the present invention can compute a number of three-dimensional body feature dimensions to high precision from nothing more than the two-dimensional photographs provided by the user, without any external hardware. By using a convolutional neural network algorithm together with a big-data scheme to analyse the pictures uploaded by users, and computing each body feature dimension of the person in the picture from this analysis, the invention can compute the feature dimension of any body part, and both the computation accuracy and the computation speed of each body feature dimension are greatly improved.
Description of the drawings
To explain the embodiments of the present invention or the technical schemes in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flow chart of the automatic method for measuring human body feature dimensions based on an artificial-intelligence algorithm of the present invention;
Fig. 2 is a schematic front photograph of the standardized photographing stance required when the method of the invention is used;
Fig. 3 is a schematic side photograph of the standardized photographing stance required when the method of the invention is used;
Fig. 4 is a schematic back photograph of the standardized photographing stance required when the method of the invention is used;
Fig. 5 is a schematic diagram of the bust fitting of embodiment one of the method of the invention;
Fig. 6 is a schematic diagram of the bust fitting of embodiment two of the method of the invention;
Fig. 7 plots the positioning accuracy of each algorithm of the invention against the number of training iterations.
Specific embodiment
To make the purpose, technical scheme and advantages of the embodiments of the present invention clearer, the technical schemes in the embodiments are described clearly and completely below with reference to the drawings of the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art without creative effort, based on the embodiments of the present invention, fall within the protection scope of the invention.
An embodiment of the present invention provides an automatic method for measuring human body feature dimensions based on an artificial-intelligence algorithm, comprising:
S1, collecting two-dimensional images of a large number of users, and cropping the different characteristic body parts from the two-dimensional images to generate historical training samples;
In this step, data for a large number of users are obtained through physical stores, all of it collected in a standardized way under the guidance of dedicated technicians. When the front, side and back two-dimensional images of a user are collected, the standardized stances are as shown in Figs. 2 to 4. Standardizing the photographing stance is what allows the subsequent artificial-intelligence algorithms to reach very high precision when locating body parts and when computing and measuring the body feature dimensions.
Once the standardized data of a large number of users have been obtained, the cropping of the body feature parts can begin. The positions to be measured are marked by manually clicking on the collected pictures. For example, to measure height, the annotator left-clicks at a suitable point on the hair of the head and at a suitable point on the feet of the person in the picture. The coordinate point of every left-click on each image is recorded, and a square region centred on each of these coordinate points is later cropped; the size of the square is adjusted for the different body parts. In this way a set of pixels surrounding each body feature part is cropped out, and the pictures formed by these pixels are used in the next step to train the artificial-intelligence classifiers.
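Purely as an illustration of this cropping step (and not the implementation used in the patent), the following Python sketch crops a square training patch around a clicked coordinate; the patch half-size, the image layout and the function name are assumptions.

import numpy as np

def crop_square_patch(image, click_xy, half=14):
    # image: H x W (x C) numpy array holding the full body photograph
    # click_xy: (x, y) pixel coordinate recorded from the annotator's left-click
    # half: half the side length of the square patch (28 x 28 by default)
    x, y = click_xy
    h, w = image.shape[:2]
    # clamp the square so that it stays inside the image borders
    top, left = max(0, y - half), max(0, x - half)
    bottom, right = min(h, y + half), min(w, x + half)
    return image[top:bottom, left:right]

# e.g. patch = crop_square_patch(front_photo, (512, 300)) becomes one historical training sample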
S2, training the historical training samples with a different artificial-intelligence classifier for each body part, and generating a positioning model for each body part from the training results;
In this step, the square pictures formed by the pixels surrounding each characteristic body part have already been obtained in the previous step. The square pictures of the same class of characteristic part from every person are grouped together, and each characteristic part is treated as one class; if, for example, 30 characteristic parts are chosen, these 30 classes are given labels 0 to 29, which in the code become distinct matrices. The choice of characteristic parts is determined by the amount of sample data: the larger the sample data volume, the smaller the picture of each characteristic part can be chosen. A smaller picture is somewhat harder for the machine to recognise but can be positioned more accurately, and a comparatively large data volume keeps the recognition rate up, so the algorithm selects smaller sample pictures and thereby improves the positioning accuracy without affecting the recognition rate. The picture size here does not mean compressing or enlarging a picture to a given size; it only refers to the size of the cropped picture relative to the whole body image.
The training can then use cascade-classifier training and convolutional-neural-network training together. It should be noted that either method used alone can also achieve the final desired effect, and both fall within the protection scope of the present patent.
Preferably, in step S2, the AdaBoost algorithm is used to train the positioning of the body parts with a high recognition rate, and the CNN convolutional-neural-network algorithm is used to train the positioning of the other parts. The AdaBoost algorithm achieves a very high recognition rate in regions with distinct block features, such as the nose, reaching 98% in the trained model, while the CNN gives a higher recognition rate than AdaBoost when positioning the other, non-block parts. In other words, different artificial-intelligence algorithms are selected to position different parts, which works better than using any single algorithm alone. A further reason for using two different artificial-intelligence algorithms to position different parts is that the amount of data collected in practice is unbalanced across parts: some parts have a large number of samples and others have few. AdaBoost trains poorly when the data volume is small; a CNN also trains less well with little data than with much, but with scarce data its recognition rate is still higher than that of AdaBoost, so the CNN is selected for parts with few data samples and AdaBoost for parts with many. Using the two algorithms side by side in one recognition process for different parts achieves an effect of 1+1 greater than 2.
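The part-dependent choice of classifier described above can be summarised in a small sketch; the sample-count threshold is an illustrative assumption, not a value given in the patent.

def choose_classifier(num_samples, threshold=50000):
    # Parts with abundant, block-like samples (e.g. the nose) are trained with AdaBoost;
    # parts with fewer samples go to the CNN, which degrades more gracefully on scarce data.
    # The threshold value is illustrative only.
    return "AdaBoost cascade" if num_samples >= threshold else "CNN"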
S3, using the positioning models generated in S2 to locate the different body parts in a two-dimensional image provided by a new user;
In this step, after the body-part positioning models have been generated in the previous step, a sliding window is slid over the image to be detected. Each slide produces an output from the trained model; this output is a column vector, and the row holding the maximum value corresponds to the predicted class. If, at some position of the window, the maximum of the output column vector corresponds to the label vector of a body feature part, that part has been located successfully. For example, once a model that can be used to locate the nose has been trained in this way, a sliding window is slid over the image to be detected, as shown in Fig. 2; each slide again produces an output column vector from the trained model, and when the window reaches a position where the maximum of this vector corresponds to the nose label, the nose has been located successfully.
The nose positioning of this step can be carried out first in every subsequent body-dimension measurement. When a body feature dimension is then calculated, positioning and measurement can be restricted to the region swept out by translating the vertical line through the nose over a limited range along the positive and negative x-axis; excluding the surrounding interference in this way improves both the accuracy and the recognition speed. For training the models, a large number of equally sized square pictures of each body part were cropped from the portraits; after training, the trained model is applied to the picture to be recognised. The mechanism is as follows: the pixels of the picture are first partitioned into regions of the same size as the training crops, called pixel blocks. During training each body part was given a label, which is also a column vector; for example, if the nose is the 5th class, the 5th row of its label vector is set to 1 and all other rows to 0, which defines the label of the nose. The trained model is then applied to every pixel block of the picture to be recognised; each application yields a column vector of N rows and 1 column, and the row in which this vector takes its maximum indicates the class. If the maximum lies in the 5th row, the block belongs to the 5th class, i.e. the nose label, so that pixel block represents the nose region and the nose has been located successfully. In practice the model is applied to the pixel blocks one after another, the output column vector of each is examined, and comparing it with the label of a given part locates that part.
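A minimal Python sketch of this sliding-window positioning follows; it assumes a trained model object exposing a predict method that returns an N x 1 column vector of class scores, which is not an interface defined by the patent.

import numpy as np

def locate_part(image, model, target_label, win=28, stride=4):
    # Slide a window over the image and report where the trained model assigns
    # the target label (e.g. the nose class) its highest score.
    best_score, best_pos = -np.inf, None
    h, w = image.shape[:2]
    for top in range(0, h - win + 1, stride):
        for left in range(0, w - win + 1, stride):
            out = model.predict(image[top:top + win, left:left + win])
            if int(np.argmax(out)) == target_label and float(out.max()) > best_score:
                best_score, best_pos = float(out.max()), (top, left)
    return best_pos  # None means the part was not found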
S4, performing three-dimensional fitting based on the body-part positioning results of S3, so as to automatically measure the new user's three-dimensional body feature dimensions.
In a concrete implementation of this step, the body shape is fitted from the shape contour of a given body part together with the positional information of that part obtained in step S3, and the perimeter of the fitted shape is then computed, which yields the three-dimensional body feature dimension.
Taking the measurement and calculation of the three-dimensional bust dimension as an example, two fitting approaches are possible. In the first, the two edge points A and B of the bust contour are located in the front image provided by the user (as shown in Fig. 2) and the length between them is taken as the major diameter of an ellipse; the two edge points C and F of the bust contour are located in the side image (as shown in Fig. 3) and the length between them is taken as the minor diameter. An ellipse is then determined from the major and minor diameters, as shown in Fig. 5, and computing its circumference gives the approximate bust length; a minimal sketch of this calculation is given below. This way of measuring is simple: only two pictures are needed to compute the approximate bust length, and extensive tests have verified that its error is below 2 mm, which fully meets the requirements of made-to-measure clothing.
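A minimal sketch of this first fit, assuming the two widths have already been measured from the located edge points, follows; the patent only says to compute the circumference of the ellipse, so Ramanujan's approximation is used here as one possible way to evaluate it.

import math

def bust_from_ellipse(front_width, side_width):
    # front_width: distance A-B in the front image, taken as the major diameter
    # side_width:  distance C-F in the side image, taken as the minor diameter
    a, b = front_width / 2.0, side_width / 2.0          # semi-axes of the fitted ellipse
    h = ((a - b) ** 2) / ((a + b) ** 2)
    # Ramanujan's approximation of the ellipse perimeter (illustrative choice)
    return math.pi * (a + b) * (1.0 + 3.0 * h / (10.0 + math.sqrt(4.0 - 3.0 * h)))

# e.g. bust_from_ellipse(0.32, 0.22) returns the approximate bust length in the same units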
In the second approach to measuring the new user's three-dimensional bust dimension, five points A, C, E, D and B of the bust contour are located in the front image as shown in Fig. 2, the two points C and F of the bust contour are then taken in the side image as shown in Fig. 3, and the points G, I and H of the bust contour are taken in the back image as shown in Fig. 4. As shown in Fig. 6, a bust contour is fitted from the points taken above, guided by the characteristic cross-section shape of the human bust, and the perimeter of the fitted contour is the approximate bust length; an illustrative perimeter computation is sketched below. The data obtained with this fitting approach are more accurate, with an error below 0.5 mm.
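The sketch below illustrates the second fit under stated assumptions: the located contour points are already ordered around the bust cross-section, and a closed periodic spline (a fitting method not named in the patent) stands in for the contour fitting; the perimeter is then summed along the fitted curve.

import numpy as np
from scipy.interpolate import splprep, splev

def bust_from_contour(points_xy, n_eval=2000):
    # points_xy: (x, y) cross-section points from the front, side and back images,
    # ordered around the bust; a closed periodic spline is fitted through them.
    pts = np.asarray(points_xy, dtype=float)
    pts = np.vstack([pts, pts[:1]])                     # close the contour
    tck, _ = splprep([pts[:, 0], pts[:, 1]], s=0, per=True)
    u = np.linspace(0.0, 1.0, n_eval)
    x, y = splev(u, tck)
    # approximate the perimeter of the fitted contour
    return float(np.sum(np.hypot(np.diff(x), np.diff(y))))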
In step S2, the AdaBoost algorithm is an iterative process that adaptively changes the distribution of the training samples so that the base classifiers concentrate on the samples that are difficult to classify. For example, to train the cascade classifier, 60,000 cropped nose pictures are used as positive samples and, by a reasonable cropping scheme, 150,000 pictures formed from pixels other than the nose are cropped as negative samples; together they form the sample set for training the cascade classifier. The iterative steps (sketched in code after the list) then include:
(1) given the training samples (x1, y1), ..., (xi, yi), ..., (xn, yn), where xi denotes the i-th sample, yi = 0 marks a negative sample, yi = 1 marks a positive sample, and n, the total number of training samples, is 150,000;
(2) initialise the weights of the training samples;
(3) iterate: in each round first train a weak classifier and compute its accuracy, then choose a suitable threshold so that the accuracy is highest, and finally update the sample weights again;
(4) after n such iterations, n weak classifiers have been obtained; weights are then assigned to these n weak classifiers according to the importance of each, and superimposing the weak classifiers with the assigned weights finally yields a strong classifier.
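A compact sketch of this iterative loop (sample weights, one weak classifier per round, weighted vote) is shown below; it uses decision stumps from scikit-learn purely for illustration and is not the cascade implementation referred to above.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost_train(X, y, rounds=50):
    # X: (n_samples, n_features) features of the cropped patches
    # y: labels, 1 for nose (positive sample) and 0 for background (negative sample)
    n = len(y)
    w = np.full(n, 1.0 / n)                      # (2) initialise the sample weights
    y_pm = np.where(y == 1, 1.0, -1.0)
    ensemble = []
    for _ in range(rounds):                      # (3) iterate
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X, y, sample_weight=w)         # train one weak classifier
        pred = np.where(stump.predict(X) == 1, 1.0, -1.0)
        err = max(float(np.sum(w * (pred != y_pm))), 1e-12)
        alpha = 0.5 * np.log((1.0 - err) / err)  # importance of this weak classifier
        w *= np.exp(-alpha * y_pm * pred)        # re-weight the hard samples
        w /= w.sum()
        ensemble.append((stump, alpha))
    return ensemble                              # (4) weighted superposition = strong classifier

def adaboost_predict(ensemble, X):
    score = sum(a * np.where(c.predict(X) == 1, 1.0, -1.0) for c, a in ensemble)
    return (score > 0).astype(int)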
In step S2, training the CNN convolutional neural network to locate the other body parts specifically includes:
(1) picture pre-processing: the body feature parts cropped in the previous step are rotated so that the characteristic part faces forward. The image is then denoised, with a different treatment used for each type of noise; the common types of noise in a picture are mainly the following (an illustrative sketch of these noise models follows the list):
(a) additive noise
The intensity of this noise has no relation to the original signal; the "channel noise" added to an image during transmission is an example. Adding this noise to the original ideal noise-free image forms the new image, i.e.:
g = f + n;
(b) multiplicative noise
The other class is noise related to the image signal. This type of noise changes as the signal changes and can be expressed as the original signal multiplied by a coefficient, so the new image can be expressed as:
g = f + f·n;
(c) salt-and-pepper noise
This type of noise usually appears during image segmentation; it often appears, for example, when the background of an image is removed, and it also often appears when the image undergoes a domain transform. Since the training samples go through the pre-processing above, the pictures that are later used for positioning, recognition and measurement of the body feature dimensions must go through exactly the same processing, to guarantee that the training pictures and the recognition pictures are on the same footing.
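The sketch below only illustrates the three noise models listed above and one common treatment; the patent does not prescribe a particular filter, so the median filter for salt-and-pepper noise is an assumption.

import numpy as np
from scipy.ndimage import median_filter

rng = np.random.default_rng(0)

def add_additive_noise(f, sigma=0.05):
    # g = f + n : the noise is independent of the signal (e.g. channel noise)
    return f + rng.normal(0.0, sigma, f.shape)

def add_multiplicative_noise(f, sigma=0.05):
    # g = f + f*n : the noise scales with the image signal
    return f + f * rng.normal(0.0, sigma, f.shape)

def remove_salt_and_pepper(g, size=3):
    # a median filter is one common treatment for the salt-and-pepper noise
    # introduced by segmentation or background removal (illustrative choice)
    return median_filter(g, size=size)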
(2) forward-propagation process: the pre-processed feature-part pictures enter the training stage of the convolutional neural network. The picture pixel values lie between 0 and 255; to ease the computation they are normalised by dividing by 255, so that all pixel values lie between 0 and 1. In addition the training is supervised, i.e. labelled, and each label can be regarded as a matrix of N rows and one column; in this algorithm there are 30 classes in total. By analogy with digit recognition over the digits 0 to 9: if the digit is 3, it belongs to the 4th class, and its label is a matrix of 30 rows and 1 column whose elements are all 0 except the element in the 4th row and 1st column, which is 1; this matrix is the label attached to the digit 3 that is to be trained. The result of the machine's training is compared against this label, which is what makes it possible for the training to proceed. The normalisation simply divides every pixel value by 255.
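A short sketch of the normalisation and label encoding just described follows; the array shapes are assumptions for illustration.

import numpy as np

def preprocess(patches, labels, num_classes=30):
    # patches: uint8 array of cropped pictures with pixel values 0..255
    # labels: integer class index per picture, 0..num_classes-1
    x = patches.astype(np.float32) / 255.0               # normalise pixels to [0, 1]
    # one-hot label: class k becomes a vector that is 1 in row k and 0 elsewhere
    y = np.zeros((len(labels), num_classes), dtype=np.float32)
    y[np.arange(len(labels)), labels] = 1.0
    return x, y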
(3) building the convolutional neural network model:
Designing the neural network in the longitudinal direction means designing the number of layers of the neural network: when the part to be learned is hard to recognise, or its features are very fine, the number of layers is increased. Designing the neural network in the lateral direction means designing the number of neurons in each layer: when the part to be learned is hard to recognise, or its features are very fine, the number of neurons is increased. Both adjustments also improve the final recognition rate and recognition speed.
A six-layer convolutional neural network is adopted in this algorithm. The first layer it contains is the input layer. The second layer is a convolutional layer; the number of convolution kernels in this layer is 6 and the kernel size is 5. The third layer is a pooling layer with a sampling ratio of 2. The fourth layer is a convolutional layer; the number of convolution kernels in this layer is 12 and the kernel size is 5. The fifth layer is a pooling layer with a sampling ratio of 2, and the last layer is the output layer. Once these settings are complete they are passed to the cnnsetup function, which actually builds a complete convolutional neural network. The network design may differ according to the actual problem, but the corresponding variations all fall within the protection scope of this patent.
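The six-layer configuration described above can also be written down as a plain list; this is only an illustrative data structure, not the cnnsetup call mentioned in the text.

cnn_layers = [
    {"type": "input"},                                    # layer 1: input layer
    {"type": "conv", "outputmaps": 6,  "kernelsize": 5},  # layer 2: 6 kernels of size 5
    {"type": "pool", "scale": 2},                         # layer 3: sampling ratio 2
    {"type": "conv", "outputmaps": 12, "kernelsize": 5},  # layer 4: 12 kernels of size 5
    {"type": "pool", "scale": 2},                         # layer 5: sampling ratio 2
    {"type": "output"},                                   # layer 6: output layer
]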
The network is now actually constructed. First assume there is only 1 input feature map (inputmap); this is different from the number of input images, since the training images are usually fed in by batch: there are 60,000 samples altogether and 50 samples of each batch are fed in simultaneously, giving 1,200 batches in total. The feature image obtained by filtering the original image with a convolution kernel is called a feature map. The elements of the convolution kernels are initialised next. Two variables, fan_out and fan_in, are defined:
fan_out = [kernelsize(l)]^2 × [outputmaps(l)]^2;
where kernelsize(l) is the size of the convolution kernels of layer l and outputmaps(l) is the number of convolution kernels of layer l.
fan_in = inputmaps × [kernelsize(l)]^2;
where inputmaps is the number of feature maps fed from the previous layer into this layer, and l denotes layer l.
Once the values of fan_in and fan_out have been computed, the elements of the convolution kernels in each layer can be initialised with a formula based on them.
What fan_out records is that, for each feature map of the previous layer, this layer has to extract outputmaps kinds of features from it: however many convolution kernels a layer has, that is how many outputmaps features it produces. Since a different convolution kernel is used to extract each kind of feature, fan_out records the number of parameters that this layer must learn in order to output its new features. The parameters to be learned are precisely w and b, because all of the effort goes into determining w and b. This is a one-to-many relation: the larger the convolution kernels and the more output features there are, the more parameters must be learned.
What fan_in records is that, in this layer, each neuron is connected to all of the feature maps of the previous layer, so fan_in in turn records the number of learning parameters required by those feature maps; that is, for each feature map of this layer, it is a many-to-one relation describing how many parameters connect it to the previous layer. mapsize denotes the size of the image after convolution and can be obtained from the following equation:
mapsize(l+1) = mapsize(l) - kernelsize(l) + 1;
where mapsize(l+1) on the left of the equals sign is the size of the image after the convolution of layer l, and mapsize(l) on the right of the equals sign is the size of the image before that convolution.
The convolution kernel weights and biases, i.e. the initial values of the convolution kernel elements, are defined next. The subscripts of k mean that the kernel of layer l connects the j-th input feature map to the i-th output feature map, and the biases are initialised to:
b(l){j} = 0;
where k(l){i}{j} is the weight of a convolution kernel and b(l){j} is the bias of a convolution kernel.
The number of input feature maps of the next layer equals the number of feature maps output by this layer:
inputmaps = outputmaps(l);
where inputmaps denotes the number of input feature maps of the next layer (layer l+1) and outputmaps(l) denotes the number of feature maps output by layer l.
The above constructs the convolutional layers; the pooling layers are built next:
mapsize(l+1) = mapsize(l) / scale(l);
b(l){j} = 0;
Here mapsize(l) is divided by scale, which equals 2; the result is the size mapsize(l+1) of the map after pooling. The pooling regions do not overlap, so if the input image size is 28×28 the image after pooling is 14×14, which can be computed with the formula above. This layer only needs b to be initialised, because it is not a convolutional layer and has no w.
The pooling layers of the convolutional neural network have now been constructed; the output layer of the convolutional neural network is built next:
The layer before the output layer is the layer obtained after pooling and contains inputmaps feature maps, each of size mapsize. The number of neurons in that layer is therefore inputmaps × (size of each feature map). Here mapsize = number of rows × number of columns of a feature map, so applying the prod function gives exactly the rows × columns of a feature map.
fvnum = prod(mapsize) × inputmaps;
where fvnum is the number of neurons in the layer before the output layer and prod computes the running product of the array elements.
onum is computed next:
onum = size(y, 1);
onum is the number of labels, i.e. the number of output-layer neurons: however many classes are to be distinguished, that is how many output neurons there are; this is the setting of the last layer of the neural network.
ffb is computed next:
ffb(l) = zeros(onum, 1);
where ffb is the base bias (bias b) corresponding to each output-layer neuron.
ffW is computed next, where ffW(l) is the weight matrix connecting the layer before the output layer to the output layer; these two layers are fully connected.
A convolutional neural network has thus been constructed.
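As a worked application of the mapsize and fvnum formulas above, the sketch below walks the six-layer configuration for a 28 x 28 input patch (the patch size used later in the test section; using it here is an assumption for illustration).

def feature_vector_length(input_size=28):
    # mapsize(l+1) = mapsize(l) - kernelsize(l) + 1 for a convolutional layer,
    # mapsize(l+1) = mapsize(l) / scale(l) for a pooling layer,
    # then fvnum = prod(mapsize) * inputmaps for the layer feeding the output layer.
    size, maps = input_size, 1
    for kind, param in [("conv", 5), ("pool", 2), ("conv", 5), ("pool", 2)]:
        if kind == "conv":
            size = size - param + 1
            maps = 6 if maps == 1 else 12    # 6 kernels in layer 2, 12 in layer 4
        else:
            size //= param
    return size * size * maps

# 28 -> 24 -> 12 -> 8 -> 4, so fvnum = 4 * 4 * 12 = 192
print(feature_vector_length())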
(4) training process of the convolutional neural network
First obtain the total number of samples, m = size(x, 3);
Each iteration is then divided into this many batches:
numbatches = m / batchsize;
where numbatches denotes the number of batches; for example, with 60,000 samples and 50 per batch there are 1,200 batches in total, and batchsize denotes the number of samples per batch, only one batch being trained at a time. The reason for training in batches, and this is a very important point, is that the weights are not updated one picture at a time but one batch at a time. Note also that the 50 pictures of a batch are convolved simultaneously, which the convolution function can do directly.
Each iteration covers the 1,200 batches, and 10 iterations or even more can be set.
The training is repeated iteration after iteration, each iteration representing one pass of training:
batch_x = x(:, :, kk((l-1)*batchsize+1 : l*batchsize));
batch_y = y(:, :, kk((l-1)*batchsize+1 : l*batchsize));
batch_x takes, after the sample order has been shuffled, the 50 samples of each batch (50 samples in this example) and feeds them into the convolutional neural network for training; batch_y takes the labels corresponding to those training samples.
The network output is then computed from the current network weights and the network input:
net = cnnff(net, batch_x);
The specific computation of the network output proceeds as follows:
When the network output is computed, if the sample passes through a convolutional layer, the following is done to the sample:
z = z + convn(a(l-1){i}, k(l){i}{j}, 'valid');
where a(l-1){i} denotes an output image of the previous layer and k(l){i}{j} denotes a convolution kernel of this layer. The output images of the previous layer are convolved with the kernels of this layer, and the results at corresponding positions of the different feature maps of the same image are added, giving the result z. The sigmoid function and the bias b{j} are then used to map z to another image, which in turn serves as the output fed to the next convolutional layer:
a(l+1){j} = sigm(z + b(l){j});
where a(l+1){j} is the output of convolutional layer l+1.
When the network output is computed, if the sample passes through a pooling layer, the following is done to the sample:
z = convn(a(l-1){j}, ones(scale(l))/(scale(l)^2), 'valid');
For example, to perform mean pooling over regions of size scale = 2, the image can be convolved with a 2×2 kernel whose every element is 1/4. Because the default convolution step of the convn function is 1 while the pooling regions do not overlap, the final pooling result must be read out of the convolution result above with a step of 2, skipping along to pick up the mean-pooled values:
a(l){j} = z(1:scale(l):end, 1:scale(l):end, :);
The feature maps obtained at the last layer are pulled into a vector, which is the finally extracted feature vector:
sa = size(a(n){j});
sa is the size of the j-th feature map.
fv = [fv; reshape(a(n){j}, sa(1)*sa(2), sa(3))];
The formula above pulls all the feature maps into one column vector; the remaining dimension indexes the samples, so each sample is one column and each column is the corresponding feature vector.
The final output value of the network is computed as sigmoid(w*X + b); note that the output values of all batchsize samples are computed at the same time:
o = sigm(net.ffW*net.fv + repmat(net.ffb, 1, size(net.fv, 2)));
where o denotes the output of the neural network.
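For illustration only, the numpy sketch below mirrors the forward propagation just described, i.e. valid convolution, sigmoid activation with a bias, mean pooling and a fully connected sigmoid output; it is a simplified stand-in, not the cnnff routine quoted above, and the parameter layout is an assumption.

import numpy as np

def sigm(x):
    return 1.0 / (1.0 + np.exp(-x))

def conv_valid(img, kernel):
    # 2-D 'valid' convolution of one feature map with one kernel
    kh, kw = kernel.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    k = kernel[::-1, ::-1]                     # true convolution flips the kernel
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

def mean_pool(img, scale=2):
    h, w = img.shape[0] // scale * scale, img.shape[1] // scale * scale
    return img[:h, :w].reshape(h // scale, scale, w // scale, scale).mean(axis=(1, 3))

def forward(x, conv1_k, conv1_b, conv2_k, conv2_b, ffW, ffb):
    # x: one normalised input patch (2-D array)
    # conv1_k, conv1_b: kernels and biases of layer 2 (6 kernels of size 5 assumed)
    # conv2_k[i][j]: kernel of layer 4 linking input map j to output map i
    # ffW, ffb: full connection from the flattened feature vector to the output layer
    maps1 = [mean_pool(sigm(conv_valid(x, k) + b)) for k, b in zip(conv1_k, conv1_b)]
    maps2 = []
    for kernels, b in zip(conv2_k, conv2_b):
        z = sum(conv_valid(m, k) for m, k in zip(maps1, kernels))
        maps2.append(mean_pool(sigm(z + b)))
    fv = np.concatenate([m.ravel() for m in maps2])      # the extracted feature vector
    return sigm(ffW @ fv + ffb)                          # output of the network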
After the network output above has been obtained, the derivative of the error with respect to the network weights (namely the elements of those convolution kernels) is obtained from the corresponding sample labels with the bp (back-propagation) algorithm:
net = cnnbp(net, batch_y);
The specific bp procedure is as follows:
The size of the feature maps of the last layer is obtained first; here the "last layer" always means the layer before the output layer:
sa = size(a{n}{1});
Because the feature maps of the last layer were pulled into a vector, the feature dimensionality of one sample is:
fvnum = sa(1)*sa(2);
fvd stores the feature vectors of all the samples (the feature maps were pulled into vectors in the cnnff function), so the form of the feature maps must be converted back here. d stores delta, i.e. the sensitivity or residual:
d(n){j} = reshape(net.fvd(((j-1)*fvnum+1):j*fvnum, :), sa(1), sa(2), sa(3));
Once the residual has been obtained, it is propagated backwards, and the weight-update functions are used to change the weights w and biases b in the convolution kernels.
If, during back-propagation, the layer is a convolutional layer, the weight k of a convolution kernel is updated as:
k(l){ii}{j} = k(l){ii}{j} - alpha*dk(l){ii}{j};
and the bias b of the convolutional layer is updated as:
b(l){j} = b(l){j} - alpha*db(l){j};
where dk stores the derivative of the error with respect to the convolution kernels and db stores the derivative of the error with respect to the biases. The common weight-update formula used above is:
W_new = W_old - alpha*dE/dW (the derivative of the error with respect to the weights)
The last layer, the output layer, is computed separately, because it is defined separately and is fully connected:
ffW = ffW - alpha*dffW;
ffb = ffb - alpha*dffb;
where ffW is the output-layer connection weight w after the update and ffb is the output-layer bias b after the update.
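The update rule above is the same for every learned quantity; a minimal sketch, assuming the parameters and their error derivatives are held in parallel dictionaries of arrays, is:

def sgd_step(params, grads, alpha=0.1):
    # apply W_new = W_old - alpha * dE/dW to every learned quantity
    # (convolution kernel weights k, biases b, ffW and ffb); alpha is the learning rate
    for name in params:
        params[name] -= alpha * grads[name]
    return params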
With the above steps, the positioning model for each human body feature part has been trained, and the trained models can then be used to locate the body features and measure them.
One of the cores of the present invention lies in step S2, where a different artificial-intelligence classifier is trained on the historical training samples for each body part. Taking the bust test as an example, 10,000 shuffled real human-body feature parts were used for recognition during testing, with pictures of size 28×28 to be recognised. The design of the neural network used in the recognition process is the same as the design of the neural network used in training, except that one direction of propagation (the backward pass) is omitted; the samples to be tested keep their labels so that the correctness of the recognition can be checked and the recognition rate of the samples finally computed.
The applicant tested the recognition rate of the samples with each algorithm in turn. Fig. 7 plots the positioning accuracy of each algorithm of the invention against the number of training iterations; it shows that the positioning accuracy of the model trained with the two artificial-intelligence algorithms reasonably combined is the highest, better than using a single artificial-intelligence algorithm alone, so that both the precision and the speed of the final body feature dimension measurements are improved, achieving an effect of 1+1 greater than 2.
Table 1 gives the test times of the respective algorithms; as shown in Table 1, the mixed CNN + AdaBoost model is far superior in average elapsed time to using either single algorithm on its own.
Table 1
Finally, it should be noted that the above embodiments are intended only to illustrate the technical scheme of the present invention, not to limit it. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical schemes described in the foregoing embodiments may still be modified, or some or all of their technical features replaced by equivalents, and that such modifications or replacements do not take the essence of the corresponding technical schemes outside the scope of the technical schemes of the embodiments of the present invention.

Claims (10)

1. An automatic method for measuring human body feature dimensions based on an artificial-intelligence algorithm, characterised in that it comprises:
S1, collecting two-dimensional images of a large number of users, and cropping the different characteristic body parts from the two-dimensional images to generate historical training samples;
S2, training the historical training samples with a different artificial-intelligence classifier for each body part, and generating a positioning model for each body part from the training results;
S3, using the positioning models generated in S2 to locate the different body parts in a two-dimensional image provided by a new user;
S4, performing three-dimensional fitting based on the body-part positioning results of S3, so as to automatically measure the new user's three-dimensional body feature dimensions.
2. The automatic method for measuring human body feature dimensions based on an artificial-intelligence algorithm according to claim 1, characterised in that the user two-dimensional images collected in step S1 include at least a front image, a side image and a back image.
3. The automatic method for measuring human body feature dimensions based on an artificial-intelligence algorithm according to claim 1, characterised in that, in step S2, an AdaBoost algorithm is used to train the positioning of the body parts with a high recognition rate, and a CNN convolutional-neural-network algorithm is used to train the positioning of the other body parts.
4. The automatic method for measuring human body feature dimensions based on an artificial-intelligence algorithm according to claim 3, characterised in that the AdaBoost algorithm is used to train the positioning of the nose, and the CNN convolutional-neural-network algorithm is used to train the positioning of the other body parts.
5. The automatic method for measuring human body feature dimensions based on an artificial-intelligence algorithm according to claim 4, characterised in that, when the AdaBoost algorithm is used to train the positioning of the nose, 60,000 cropped nose pictures are first used as positive samples and 150,000 pictures cropped from pixels other than the nose are used as negative samples, forming the sample set for training the AdaBoost cascade classifier.
6. The automatic method for measuring human body feature dimensions based on an artificial-intelligence algorithm according to claim 5, characterised in that, when the nose is located in step S3, a sliding window is slid over the image to be detected; each slide produces an output from the trained model, this output being a column vector whose maximum indicates the corresponding class, and when the window reaches a position where the maximum of the output vector corresponds to the nose label, the nose has been located successfully;
when the other body feature dimensions are calculated, positioning and measurement are carried out within the region obtained by translating the vertical line through the nose over a limited range along the positive and negative x-axis.
7. The automatic method for measuring human body feature dimensions based on an artificial-intelligence algorithm according to claim 1, characterised in that measuring a specific three-dimensional body feature dimension of the new user in step S4 comprises: fitting the body shape from the shape contour of the body part concerned and the positional information of that part obtained in step S3, and then computing the perimeter of the fitted shape, which yields the three-dimensional body feature dimension.
8. The automatic method for measuring human body feature dimensions based on an artificial-intelligence algorithm according to claim 7, characterised in that, when the three-dimensional bust dimension of the new user is measured in step S4, two edge points of the bust contour are located in the front image and the length between them is taken as the major diameter of an ellipse; two edge points of the bust contour are located in the side image and the length between them is taken as the minor diameter of the ellipse; an ellipse is determined from the major and minor diameters, and computing its circumference gives the approximate bust length.
9. The automatic method for measuring human body feature dimensions based on an artificial-intelligence algorithm according to claim 7, characterised in that, when the three-dimensional bust dimension of the new user is measured in step S4, multiple points of the bust contour are located in the front, side and back images and used to fit a body bust contour, and the perimeter of the fitted contour is taken as the approximate bust length.
10. The automatic method for measuring human body feature dimensions based on an artificial-intelligence algorithm according to claim 1, characterised in that, when the multiple characteristic body parts are cropped from the two-dimensional image in step S1, the positions to be measured are marked by manual clicks, the coordinate point of each click is recorded, a square block of pixels centred on it is cropped, and the pictures formed by these pixels serve as the historical training samples for training the artificial-intelligence classifiers.
CN201611192069.4A 2016-12-21 2016-12-21 Human body characteristic dimension automatic measuring method based on artificial intelligence algorithm Pending CN106600595A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611192069.4A CN106600595A (en) 2016-12-21 2016-12-21 Human body characteristic dimension automatic measuring method based on artificial intelligence algorithm

Publications (1)

Publication Number Publication Date
CN106600595A true CN106600595A (en) 2017-04-26

Family

ID=58600299

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611192069.4A Pending CN106600595A (en) 2016-12-21 2016-12-21 Human body characteristic dimension automatic measuring method based on artificial intelligence algorithm

Country Status (1)

Country Link
CN (1) CN106600595A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101228973A (en) * 2007-01-22 2008-07-30 殷实 Non-contact measurement method and system for human outside measurement
CN101013508A (en) * 2007-02-12 2007-08-08 西安工程大学 Method for constructing divisional composite three-dimensional parameterized digital mannequin
CN102426653A (en) * 2011-10-28 2012-04-25 西安电子科技大学 Static human body detection method based on second generation Bandelet transformation and star type model
CN102657531A (en) * 2012-04-28 2012-09-12 深圳泰山在线科技有限公司 Human body torso girth measurement method and device based on computer visual sense
CN103577792A (en) * 2012-07-26 2014-02-12 北京三星通信技术研究有限公司 Device and method for estimating body posture
US20160070989A1 (en) * 2014-09-10 2016-03-10 VISAGE The Global Pet Recognition Company Inc. System and method for pet face detection
CN105426867A (en) * 2015-12-11 2016-03-23 小米科技有限责任公司 Face identification verification method and apparatus

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
XIA MING et al.: "Fitting of human body characteristic cross-section shapes and prediction of girth dimensions", Journal of Textile Research (《纺织学报》) *
GUO LIE et al.: "Pedestrian detection method based on combined features of typical human body parts", Automotive Engineering (《汽车工程》) *

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107270829A (en) * 2017-06-08 2017-10-20 南京华捷艾米软件科技有限公司 A kind of human body measurements of the chest, waist and hips measuring method based on depth image
CN107270829B (en) * 2017-06-08 2020-06-19 南京华捷艾米软件科技有限公司 Human body three-dimensional measurement method based on depth image
CN107729854A (en) * 2017-10-25 2018-02-23 南京阿凡达机器人科技有限公司 A kind of gesture identification method of robot, system and robot
WO2019080203A1 (en) * 2017-10-25 2019-05-02 南京阿凡达机器人科技有限公司 Gesture recognition method and system for robot, and robot
CN108537323A (en) * 2018-03-30 2018-09-14 滁州学院 A kind of aluminium electrolutic capacitor core diameter calculation method based on artificial neural network
CN108537323B (en) * 2018-03-30 2022-04-19 滁州学院 Aluminum electrolytic capacitor roll core diameter calculation method based on artificial neural network
CN109934081A (en) * 2018-08-29 2019-06-25 厦门安胜网络科技有限公司 A kind of pedestrian's attribute recognition approach, device and storage medium based on deep neural network
CN110279433B (en) * 2018-09-21 2020-03-27 四川大学华西第二医院 Automatic and accurate fetal head circumference measuring method based on convolutional neural network
CN110279433A (en) * 2018-09-21 2019-09-27 四川大学华西第二医院 A kind of fetus head circumference automatic and accurate measurement method based on convolutional neural networks
CN109559373A (en) * 2018-10-25 2019-04-02 武汉亘星智能技术有限公司 A kind of method and system based on the 2D human body image amount of progress body
CN109685001A (en) * 2018-12-24 2019-04-26 石狮市森科智能科技有限公司 Human body measurements of the chest, waist and hips data acquisition method and intelligence sell clothing system and Intelligent unattended sells clothing machine
WO2020211255A1 (en) * 2019-04-17 2020-10-22 平安科技(深圳)有限公司 Human body shape and physique data acquisition method, device and storage medium
CN110135443A (en) * 2019-05-28 2019-08-16 北京智形天下科技有限责任公司 A kind of human body three-dimensional size prediction method based on machine learning
CN110264514A (en) * 2019-06-27 2019-09-20 杭州智珺智能科技有限公司 A kind of human body bust and waistline measurement method based on random optimizing strategy
CN111028239A (en) * 2019-08-10 2020-04-17 杭州屏行视界信息科技有限公司 Ellipse accurate identification method for special body measuring clothes
CN111696177A (en) * 2020-05-06 2020-09-22 广东康云科技有限公司 Method, device and medium for generating human three-dimensional model and simulated portrait animation

Similar Documents

Publication Publication Date Title
CN106600595A (en) Human body characteristic dimension automatic measuring method based on artificial intelligence algorithm
CN105809198B (en) SAR image target recognition method based on depth confidence network
CN110097103A (en) Based on the semi-supervision image classification method for generating confrontation network
CN104732240B (en) A kind of Hyperspectral imaging band selection method using neural network sensitivity analysis
CN108830188A (en) Vehicle checking method based on deep learning
CN103886342B (en) Hyperspectral image classification method based on spectrums and neighbourhood information dictionary learning
CN106446942A (en) Crop disease identification method based on incremental learning
CN104484681B (en) Hyperspectral Remote Sensing Imagery Classification method based on spatial information and integrated study
CN107316013A (en) Hyperspectral image classification method with DCNN is converted based on NSCT
CN106650830A (en) Deep model and shallow model decision fusion-based pulmonary nodule CT image automatic classification method
CN106682569A (en) Fast traffic signboard recognition method based on convolution neural network
CN106611423B (en) SAR image segmentation method based on ridge ripple filter and deconvolution structural model
CN107239514A (en) A kind of plants identification method and system based on convolutional neural networks
CN106683102B (en) SAR image segmentation method based on ridge ripple filter and convolutional coding structure learning model
CN105913081B (en) SAR image classification method based on improved PCAnet
CN107909109A (en) SAR image sorting technique based on conspicuousness and multiple dimensioned depth network model
CN107644415A (en) A kind of text image method for evaluating quality and equipment
CN104866868A (en) Metal coin identification method based on deep neural network and apparatus thereof
CN106295124A (en) Utilize the method that multiple image detecting technique comprehensively analyzes gene polyadenylation signal figure likelihood probability amount
CN109711401A (en) A kind of Method for text detection in natural scene image based on Faster Rcnn
CN104866871B (en) Hyperspectral image classification method based on projection structure sparse coding
CN105427309A (en) Multiscale hierarchical processing method for extracting object-oriented high-spatial resolution remote sensing information
CN107590515A (en) The hyperspectral image classification method of self-encoding encoder based on entropy rate super-pixel segmentation
CN104298999B (en) EO-1 hyperion feature learning method based on recurrence autocoding
CN106203444B (en) Classification of Polarimetric SAR Image method based on band wave and convolutional neural networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20170426