CN107280118A - Human body height information acquisition method and fitting cabinet system using the method - Google Patents

Human body height information acquisition method and fitting cabinet system using the method Download PDF

Info

Publication number
CN107280118A
CN107280118A (application CN201610193187.0A, granted as CN107280118B)
Authority
CN
China
Prior art keywords
human
human body
height
body head
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610193187.0A
Other languages
Chinese (zh)
Other versions
CN107280118B (en)
Inventor
阮仕涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Prafly Technology Co Ltd
Original Assignee
Shenzhen Prafly Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Prafly Technology Co Ltd
Priority to CN201610193187.0A
Publication of CN107280118A
Application granted
Publication of CN107280118B
Legal status: Expired - Fee Related
Anticipated expiration


Classifications

    • A HUMAN NECESSITIES
    • A41 WEARING APPAREL
    • A41H APPLIANCES OR METHODS FOR MAKING CLOTHES, e.g. FOR DRESS-MAKING OR FOR TAILORING, NOT OTHERWISE PROVIDED FOR
    • A41H1/00 Measuring aids or methods
    • A41H1/02 Devices for taking measurements on the human body
    • A HUMAN NECESSITIES
    • A41 WEARING APPAREL
    • A41H APPLIANCES OR METHODS FOR MAKING CLOTHES, e.g. FOR DRESS-MAKING OR FOR TAILORING, NOT OTHERWISE PROVIDED FOR
    • A41H1/00 Measuring aids or methods
    • A41H1/02 Devices for taking measurements on the human body
    • A41H1/04 Stands for taking measurements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/088 Non-supervised learning, e.g. competitive learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • General Physics & Mathematics (AREA)
  • Textile Engineering (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

A human body height information acquisition method and a fitting cabinet system using the method. The method analyzes a captured image to obtain human body height information; in the image, a marker post bearing multiple marker points stands where the subject's feet are placed. The method includes: S1, building, from sample data, a neural network model relating the head arc-top position to facial features; S2, determining, from the multiple marker points in the image, the mapping between each pixel and physical height; S3, locating the head arc-top position in the image with the neural network model; S4, determining the pixel distance between the head arc-top position and a selected marker point and, from the mapping and the known physical height of that marker point, computing the physical height of the head arc-top position. The head arc-top position can still be computed when hair or headwear occludes the crown, at lower cost than ultrasonic and similar measuring equipment.

Description

Human body height information acquisition method and fitting cabinet system using the method
Technical field
The present invention relates to the field of clothing services, and in particular to a human body height information acquisition method and a fitting cabinet system using the method.
Background technology
For a long time, tailoring has required the sizes of each part of the human body to be taken manually. In a tailoring shop, obtaining body size information manually raises labor costs, is time-consuming, and yields inconsistent measurements. Fitting cabinet systems emerged to solve these problems: such a system automatically presents garments of the right size for the customer to try on, removing the awkwardness of manual body-contact measurement, reducing labor costs, and moving garment customization toward standardization and intelligence. The system must select the garment size and open the corresponding cabinet door automatically based on size information including body height, shoulder width, waist circumference and hip circumference. Shoulder width, waist and hip circumference can all be obtained through a 3D sensor (which contains an image sensor), but because hair and headwear occlude the crown, the image sensor has difficulty locating the crown of the head, so body height information is hard to obtain.
Summary of the invention
In view of the above drawbacks of the prior art, the technical problem to be solved by the present invention is to provide a human body height information acquisition method and a fitting cabinet system using the method.
The technical solution adopted by the present invention to solve the technical problem is to construct a human body height information acquisition method for analyzing the image collected by an image acquisition module to obtain human body height information, a marker post bearing multiple marker points standing where the subject's feet are placed in the image. The method includes:
S1, building, from sample data, a neural network model relating the head arc-top position to facial features, the model's input variables being facial features and its output variable the head arc-top position;
S2, determining, from the multiple marker points in the image, the mapping between each pixel and physical height;
S3, locating the head arc-top position in the image with the neural network model;
S4, determining the pixel distance between the head arc-top position and a selected marker point and, using the mapping obtained in step S2 and the physical height of the selected marker point, computing the physical height of the head arc-top position.
In the human body height information acquisition method of the present invention, the facial features comprise multiple feature points;
step S3 includes:
S31, performing face detection on the image with a face detection algorithm to obtain the position of the face;
S32, extracting the feature points of the face with a facial feature point localization algorithm, inputting the position of each feature point into the neural network model, and taking the obtained output as the coordinate of the head arc-top position.
In the human body height information acquisition method of the present invention, step S2 includes: determining the pixel distance P_d between two marker points in the image and, from the known physical height S_d between those two marker points on the post, computing the mapping between each pixel and physical height as: the physical height of each pixel is s = S_d / P_d.
In the human body height information acquisition method of the present invention, let the pixel coordinate of the head arc-top position in the height direction be v and the pixel coordinate of the selected marker point in the height direction be v_s;
step S4 includes:
S41, determining the pixel distance between the head arc-top position and the selected marker point as |v_s - v|;
S42, computing the physical height of the head arc-top position with the following formula:
H = (v - v_s) × s + l
where s is the physical height of each pixel and l is the physical height of the selected marker point.
In the human body height information acquisition method of the present invention, step S1 includes:
S11, selecting the feature points used as inputs and taking the head arc-top position as the output;
S12, fixing the activation function f(x) = ln(1 + e^x); the established neural network model is then a^(3) = W^(2) f(W^(1) x + b^(1)) + b^(2),
where a^(3) is the output, x the input, n the number of feature points, z_i^(l) the weighted input sum of unit i in layer l, W_ij^(l) the connection weight between unit j of layer l and unit i of layer l+1, and b_i^(l) the bias of unit i of layer l+1; W_i^(l) is the row vector formed by the W_ij^(l), and b^(l) the column vector formed by the b_i^(l);
S13, training the neural network model with sample data to determine the values of the undetermined parameters of the model.
In the human body height information acquisition method of the present invention, the method further includes, after step S13:
S14, performing error analysis on the trained neural network model with another batch of sample data.
In the human body height information acquisition method of the present invention, the method further includes, before step S1:
S0, determining the relation between depth error and height error, ΔL = L·Δd/(d1 + Δd), where Δd is the depth error, ΔL the height error, L the object height in the image, and d1 the known measurement distance; analyzing from this relation the influence of the value of L on the depth-error/height-error relationship, and determining the camera's field-of-view selection principle for the human body according to that influence.
The invention also discloses a fitting cabinet system, including a cabinet with multiple cabinet doors, an image acquisition module and a control module; the control module is configured to determine size information including body height with the method described above and to open the corresponding cabinet door of the cabinet according to the size information.
Implementing the human body height information acquisition method of the present invention and the fitting cabinet system using it has the following beneficial effects: by applying the established neural network model to the acquired image and placing marker points on a vertical marker post, the head arc-top position can still be computed when hair or headwear occludes the crown, at lower cost than ultrasonic and similar measuring equipment.
Brief description of the drawings
The invention is further described below with reference to the drawings and embodiments, in which:
Fig. 1 is a structural diagram of the fitting cabinet system;
Fig. 2 is a schematic diagram of the marker point placement;
Fig. 3 is a flow chart of the human body height information acquisition method;
Fig. 4 is a schematic diagram for determining the relation between depth error and height error in step S0;
Fig. 5 shows facial feature points extracted with the ASM algorithm;
Fig. 6 shows facial feature points extracted with the shape regression algorithm;
Fig. 7 shows a specific embodiment of the feature points chosen as inputs to the neural network model;
Fig. 8 is a schematic diagram of the neural network model.
Detailed description of embodiments
In order that the technical features, objects and effects of the present invention may be understood more clearly, embodiments of the invention are now described in detail with reference to the accompanying drawings.
Referring to Figs. 1 and 2, the fitting cabinet system includes a fitting cabinet 1 with multiple cabinet doors 11, a control module 2, and an image acquisition module comprising an image sensor 5 (which may be mounted on the cabinet of fitting cabinet 1 or installed independently), a measuring platform 4 and a marker post 3. The person being fitted stands on the footprints of the platform 4, heels aligned with the heels of the footprint patterns on the platform surface and against the marker post; the image sensor 5 takes a picture and sends it to the control module 2 for analysis, from which the body height and other size information are computed; suitable garment sizes are then determined from this information and the corresponding cabinet door 11 is opened.
The human body height information acquisition method of the present invention is described in detail below in connection with this system. The method analyzes the image collected by the image acquisition module to obtain human body height information; in the image, a marker post bearing multiple marker points stands where the subject's feet are placed. Referring to Fig. 3, the method is broadly divided into the following main steps:
S1, building, from sample data, a neural network model relating the head arc-top position to facial features, the model's input variables being facial features and its output variable the head arc-top position;
S2, determining, from the multiple marker points in the image, the mapping between each pixel and physical height;
S3, locating the head arc-top position in the image with the neural network model;
S4, determining the pixel distance between the head arc-top position and a selected marker point and, using the mapping obtained in step S2 and the physical height of the selected marker point, computing the physical height of the head arc-top position.
Regarding step S4:
The principle of this step is to divide the body height into two parts. The first part is the physical height of the selected marker point, which is known in advance. The second part is the height between the selected marker point and the head arc-top position, obtained by converting the pixel distance between them in the image into a physical height.
Referring to Fig. 2, take the 9th marker point as the selected marker point, with physical height l. Only the second part, the physical height between the 9th marker point and the head arc-top position, then remains to be determined. Let the pixel coordinate of the head arc-top position in the height direction be v and that of the selected marker point be v_s; the pixel distance between the head arc-top position and the selected marker point is then |v_s - v|. Using the mapping between each pixel and physical height determined in step S2, this pixel distance can be converted into a physical height: with s denoting the physical height of each pixel from step S2, the physical height of the head arc-top position is
H = (v - v_s) × s + l (A)
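Steps S2 and S4 can be sketched in a few lines; the marker rows, spacing and heights below are invented illustrative values, not data from the patent, and the sign convention for formula (A) is noted in the comments.

```python
# Steps S2 and S4 in a few lines. Marker rows, spacing and heights are
# invented illustrative values.

def pixel_scale(v_a, v_b, S_d):
    """Step S2: physical height of one pixel, s = S_d / P_d, from two
    marker points at pixel rows v_a, v_b separated by S_d (e.g. mm)."""
    P_d = abs(v_a - v_b)          # pixel distance between the two markers
    return S_d / P_d

def body_height(v, v_s, s, l):
    """Step S4, formula (A): H = (v - v_s) * s + l.
    Assumes the height-direction pixel coordinate v increases upward;
    with raw image rows (growing downward) the sign would flip."""
    return (v - v_s) * s + l

s = pixel_scale(400, 500, 200.0)      # markers 100 px apart spanning 200 mm
H = body_height(930, 450, s, 1500.0)  # head top 480 px above a 1500 mm marker
print(s, H)                           # 2.0 2460.0
```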
Regarding step S2:
Because the actual physical distance between any two marker points is known, once the image has been captured, the pixel distance between the same two marker points in the image determines how much physical height each pixel corresponds to.
Let the pixel distance between two marker points in the image be P_d and the known physical height between those two marker points on the post be S_d; the mapping between each pixel and physical height is then: the physical height of each pixel is s = S_d / P_d.
Because the depths of the actual crown position and of the marker points are not identical, a computation error results, as shown in Fig. 4, where f is the focal length, O is the projection center and L is the object height; at depths (object or working distances) d1 and d2 the corresponding projections are l1 and l2, the object being parallel to the image plane in the ideal case. By similar triangles, l1 = f·L/d1. Assume d1 is the known measurement distance and L is known; then L/l1 = d1/f is the object size represented by unit length in the image plane. From l2 = f·L/d2, the height measured for L at depth d2 but converted with the d1 scale is L' = L·d1/d2. Writing d2 = d1 + Δd, where Δd is the depth error, the object height deviation caused by the depth error is ΔL = L - L' = L·Δd/(d1 + Δd). It follows that for a fixed d1, the smaller L is, the smaller the influence of Δd; the camera's field of view over the human body should therefore be chosen as small as possible while still covering the height measurement. As mentioned above, the final height computation in step S4 uses only one selected marker point, while computing s in step S2 requires at least two marker points, so in practice the selection principle must also ensure that at least two marker points appear in the image.
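A quick numerical check of the depth-error relation, assuming the pinhole model of Fig. 4; the depths and heights are invented values:

```python
# Numerical check of the depth-error relation under the pinhole model of
# Fig. 4: an object of true height L at depth d2 = d1 + dd, measured as if
# it stood at the calibrated depth d1, appears with height L * d1 / d2.

def height_error(L, d1, dd):
    d2 = d1 + dd
    measured = L * d1 / d2        # similar triangles, rescaled at d1
    return L - measured           # equals L * dd / (d1 + dd)

# Invented numbers: a 1700 mm person 50 mm behind a 2000 mm calibration
# plane, and the same depth offset with a smaller imaged height L.
err_large = height_error(1700.0, 2000.0, 50.0)
err_small = height_error(850.0, 2000.0, 50.0)
print(round(err_large, 2), round(err_small, 2))  # 41.46 20.73 -- smaller L, smaller error
```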
Regarding the placement of marker points on the post:
Because people's heights differ, the marker points captured in the picture may differ, so multiple marker points are placed on the vertical post starting near the feet, as shown in Fig. 2. Camera intrinsics are usually obtained with Zhang Zhengyou's planar calibration algorithm; the present invention assumes the intrinsics are known (only the focal length f from the intrinsics is used, to analyze the effect of depth error on the measurement). Specifically, as shown in Fig. 2, the spacing between marker-point centers on the post is t (in mm), and the distance from the topmost marker-point center to the bottom of the post is T. In camera space the depth of the heel position is close to that of the crown, so the post stands vertically on the measuring platform near the heels; because in practice it is difficult to make the post's depth match the crown's exactly, the depth-error-induced measurement error discussed above results.
Regarding steps S1 and S3:
In formula (A) above, v is the pixel coordinate of the head arc-top position in the height direction; it is determined by step S3, and the neural network model used in step S3 is established by step S1. The model's input variables are facial features and its output variable is the head arc-top position; that is, to obtain the head arc-top position, the facial features must first be determined. The facial features comprise multiple feature points. Step S3 therefore includes:
S31, performing face detection on the image with a face detection algorithm to obtain the position of the face;
S32, extracting the feature points of the face (eyes, eyebrows, nose, mouth, facial outline) with a facial feature point localization algorithm, inputting the position of each feature point into the neural network model, and taking the obtained output as the coordinate of the head arc-top position.
According to anatomical theory, the position of the crown is related to the positions of the eyes, eyebrows, mouth, nose and chin. To obtain the crown position, the positions of the face and of the facial features must therefore be known: the former is obtained with a face detection algorithm and the latter with a facial feature point localization algorithm.
Face detection can use the Viola-Jones algorithm, i.e. Adaboost over Haar features. The basic idea of facial feature point localization is to combine the texture features of the face with positional constraints between the feature points; an active shape model (ASM), AAM, or shape regression can be used.
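The speed of the Viola-Jones detector rests on evaluating Haar features through the integral image; the sketch below illustrates only that trick on a synthetic image (it is not a face detector):

```python
import numpy as np

# Illustration of the integral-image trick behind Viola-Jones Haar features:
# any rectangle sum costs four lookups, which is what makes the cascade fast.

def integral_image(img):
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] computed from the integral image ii."""
    total = ii[r1 - 1, c1 - 1]
    if r0 > 0:
        total -= ii[r0 - 1, c1 - 1]
    if c0 > 0:
        total -= ii[r1 - 1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1, c0 - 1]
    return total

def two_rect_haar(ii, r0, c0, h, w):
    """Left-minus-right two-rectangle Haar feature (a vertical-edge probe)."""
    left = rect_sum(ii, r0, c0, r0 + h, c0 + w // 2)
    right = rect_sum(ii, r0, c0 + w // 2, r0 + h, c0 + w)
    return left - right

img = np.zeros((6, 6))
img[:, :3] = 1.0                        # bright left half, dark right half
ii = integral_image(img)
print(rect_sum(ii, 0, 0, 6, 6))         # 18.0, the whole-image sum
print(two_rect_haar(ii, 0, 0, 6, 6))    # 18.0, a strong vertical-edge response
```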
The basic idea of ASM is as follows: 1) choose a set of training face images and describe the face shape with shape vectors; 2) align the samples in the training set so that the shapes are as similar as possible, then statistically model the aligned shape vectors with principal component analysis (PCA); 3) for a test sample with unknown shape vector, search with a local texture model for the optimal shape within the range described by the statistical model.
Specifically, ASM consists of two parts: model training and feature point search.
Model training involves generating shape vectors from the training samples, normalizing the shape vectors, and applying PCA to them. In the PCA analysis, any shape vector in the training set can be approximately represented by the mean shape and shape variation parameters:
x ≈ x̄ + P b (1)
where P = (P1, P2, …, Pt) consists of the first t eigenvectors of the covariance matrix of the shape vectors, b is the vector of shape parameters corresponding to those eigenvectors, and x̄ is the mean shape.
Feature point search involves establishing the initial shape model, searching for new position coordinates of the feature points, and adjusting the model parameters.
When searching for the new position of the i-th feature point in the model, L (L > m) pixels are selected on each side of it along the direction of the line through its two neighboring feature points, centered on the point; the derivatives of the gray values of these pixels are computed and normalized to give a local feature.
Within this range of 2L+1 pixels there are 2(L-m)+1 local gray-level texture windows of length 2m+1 around the original feature point; the window with the smallest Mahalanobis distance to the trained local texture model is found, and its center point is taken as the new position of the feature point, called the suggested point.
Once new positions have been computed for all feature points in the model, a suggested shape x is obtained. Each original feature point has a displacement to its newly computed suggested position; collecting the displacements of all points gives a movement vector dx:
dx = (dx1, dy1, dx2, dy2, …, dxK, dyK) (2)
where K is the number of feature points.
During model parameter adjustment, once a movement vector has been computed, the parameters of the current model can be adjusted: the rotation, scaling and translation are modified so that the position of the current model after adjustment is closest to the suggested shape x, i.e. the feature points of the model are moved closest to the suggested points found by the local gray-level model.
The parameter update db satisfies the following equation:
db = P^T dx (3)
where db is the change in b, P^T is the transpose of the eigenvector matrix P of formula (1) (its inverse on the model subspace, since the columns of P are orthonormal), and dx is the movement vector.
Any shape vector can be expressed with formulas (1) and (3).
The shape update Δx satisfies the following equation:
Δx = P db (4)
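Formulas (1), (3) and (4) can be checked numerically with a toy PCA shape model; the 50 "shapes" below are random 2-point contours rather than face data, and the check shows why P^T acts as the inverse of P on the model subspace:

```python
import numpy as np

# Toy check of the ASM shape model: x ≈ x_bar + P b (formula (1)),
# db = P^T dx (formula (3)), dx = P db (formula (4)).

rng = np.random.default_rng(0)
shapes = rng.normal(size=(50, 4))            # 50 aligned shape vectors, K = 2 points
x_bar = shapes.mean(axis=0)                  # mean shape
cov = np.cov(shapes - x_bar, rowvar=False)
eigval, eigvec = np.linalg.eigh(cov)         # eigh returns ascending eigenvalues
P = eigvec[:, ::-1][:, :2]                   # first t = 2 principal modes as columns

x = shapes[0]
b = P.T @ (x - x_bar)                        # shape parameters for this sample
x_approx = x_bar + P @ b                     # formula (1)

dx = x - x_approx                            # residual "movement vector"
db = P.T @ dx                                # formula (3)
# db vanishes here: the residual lies outside the model subspace, and P^T
# inverts P on that subspace because the columns of P are orthonormal.
print(np.allclose(db, 0.0, atol=1e-10))      # True
```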
From the above, the positions of the facial feature points (eyebrows, eyes, nose, mouth, chin) can be obtained. The results are shown in Fig. 5, where the round dots are facial feature points: Fig. 5(a) shows the mean feature points of the face, Fig. 5(b) the feature points after 3 ASM iterations, and Fig. 5(c) the converged ASM feature points. Similar methods include AAM.
Shape regression: the shape regression method predicts the facial shape S in cascaded fashion. Starting from an initial shape S0, S is progressively refined by estimating shape increments ΔS stage by stage. In its generic form, the shape increment ΔS_t at stage t (with T = 10 stages) is regressed as:
ΔS_t = W_t φ_t(I, S_{t-1}) (5)
where I is the input image, S_{t-1} the shape from the previous stage, φ_t the feature mapping function, and W_t a linear regression matrix. φ_t depends on both I and S_{t-1}; features acquired in this manner are called shape-indexed features. Adding ΔS_t to S_{t-1} carries the regression into the next stage.
To learn φ_t, a regularized method is used: φ_t is decomposed into a set of independent feature mapping functions, e.g. φ_t = (φ_t^1, …, φ_t^L), each φ_t^l being learned by independent regression over the region around the l-th landmark point. This regularization effectively filters out dominant noise and weakly discriminative features and reduces the complexity of learning, leading to better generalization. Each φ_t^l is learned by inducing binary features with regression-based tree ensembles; for a predicted landmark, these binary features encode the intuitive structural information of its region. After all the local binary features are concatenated into the feature mapping φ_t, W_t is learned for the discriminative shape estimate over the whole image.
With this approach, shape regression can regress the crown position well; the results are shown in Fig. 6, where the round dots are feature points: Fig. 6(a) shows the initial feature points (the crown position is included in the shape S), Fig. 6(b) the feature points after the 5th regression iteration of Fig. 6(a), and Fig. 6(c) after the 10th.
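A toy sketch of the cascade of formula (5), with a scalar "shape" per sample and a noisy view of the residual standing in for the learned local binary features; everything here is invented for illustration:

```python
import numpy as np

# Toy sketch of cascaded regression, formula (5): S_t = S_{t-1} + W_t * phi_t.
# Each stage fits its regressor W_t by least squares on an imperfect feature
# and refines the previous stage's estimate.

rng = np.random.default_rng(1)
targets = rng.uniform(0.0, 10.0, size=200)       # ground-truth "shapes"
S = np.full_like(targets, 5.0)                   # S_0: start at the mean guess
start_err = np.abs(targets - S).mean()

for t in range(10):                              # T = 10 stages, as in the text
    resid = targets - S
    phi = resid + rng.normal(0.0, 0.5, resid.shape)  # noisy stage feature
    W = (phi @ resid) / (phi @ phi)              # 1-D least-squares regressor W_t
    S = S + W * phi                              # formula (5): add the increment

end_err = np.abs(targets - S).mean()
print(end_err < start_err)                       # True: the residual shrinks
```

Because each W_t is the least-squares fit of the residual on the stage feature, the in-sample squared residual is non-increasing stage by stage, which is the intuition behind the cascade.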
The neural network model of the present invention is described in detail below.
Step S1 includes:
S11, selecting the feature points used as inputs and taking the head arc-top position as the output;
S12, fixing the activation function f(x) = ln(1 + e^x); the established neural network model is then
a^(3) = W^(2) f(W^(1) x + b^(1)) + b^(2)
where a^(3) is the output, x the input vector and n the number of feature points; z_i^(l) denotes the weighted input sum of unit i in layer l (including the bias unit), W_ij^(l) the connection weight between unit j of layer l and unit i of layer l+1, and b_i^(l) the bias of unit i of layer l+1; W_i^(l), the row vector formed by the W_ij^(l), collects the connection weights between all units of layer l and unit i of layer l+1, and b^(l), the column vector formed by the b_i^(l), is the bias vector of layer l+1;
S13, training the neural network model with sample data to determine the values of the undetermined parameters of the model.
Preferably, after S13 the method further includes:
S14, performing error analysis on the trained neural network model with another batch of sample data.
Referring to Fig. 7, for a head, the feature points are obtained with the ASM or shape regression described above; a rectangular coordinate system is set up with the X-axis along the horizontal line through the two inner eye corners and the Y-axis along the vertical line through the nasion. Starting from the anatomy of the skull, the input-layer features are chosen deliberately, as shown in Fig. 7:
x1 is the brow-to-eye distance (the vertical height between the outer brow peak and the inner eye corner, i.e. the distance from the brow peak to the X-axis); x2 is the middle face height (the vertical distance from the inner eye corner to the subnasal point, i.e. the distance from the subnasal point to the X-axis); x3 is the vertical distance from the subnasal point to the outer mouth-corner point; x4 is the lower face height (the vertical distance from the subnasal point to the chin point); x5 is the inner-eye spacing (the straight-line distance between the inner eye-corner points); x6 is the outer-eye spacing (the straight-line distance between the outer eye-corner points); x7 is the inter-brow distance (the distance between the two outer brow peaks); x8 is the mouth-corner distance (the straight-line distance between the left and right mouth corners); x9 is the nostril spacing (the straight-line distance between the left and right nostril corners); y is the height of the crown of the head (the head arc-top position, indicated by the arrow).
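The planar-distance features x5 through x9 can be computed directly from landmark coordinates in the face frame described above; all coordinates below are invented, and only the distance definitions follow the text:

```python
import numpy as np

# Computing the distance features x5..x9 from landmark coordinates in the
# face frame (X-axis through the inner eye corners, Y-axis through the
# nasion). All coordinates are hypothetical illustrative values.

pts = {
    "inner_eye_l": (-15.0, 0.0),   "inner_eye_r": (15.0, 0.0),
    "outer_eye_l": (-45.0, 1.0),   "outer_eye_r": (45.0, 1.0),
    "brow_peak_l": (-40.0, 20.0),  "brow_peak_r": (40.0, 20.0),
    "mouth_l":     (-25.0, -55.0), "mouth_r":     (25.0, -55.0),
    "nostril_l":   (-12.0, -35.0), "nostril_r":   (12.0, -35.0),
}

def dist(a, b):
    return float(np.hypot(a[0] - b[0], a[1] - b[1]))

x5 = dist(pts["inner_eye_l"], pts["inner_eye_r"])   # inner-eye spacing
x6 = dist(pts["outer_eye_l"], pts["outer_eye_r"])   # outer-eye spacing
x7 = dist(pts["brow_peak_l"], pts["brow_peak_r"])   # inter-brow distance
x8 = dist(pts["mouth_l"], pts["mouth_r"])           # mouth-corner distance
x9 = dist(pts["nostril_l"], pts["nostril_r"])       # nostril spacing
print([x5, x6, x7, x8, x9])                         # [30.0, 90.0, 80.0, 50.0, 24.0]
```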
As can be seen from Fig. 7, because of hair occlusion, measuring the crown position directly from the pixels can be inaccurate; the method of the invention instead captures the statistical relation between the crown position and the positions of the facial bones, so that the crown position can be determined accurately.
Suppose there are M samples (x, y), where x is an N-dimensional feature vector and y is the target value. The sample set can be written F = {(x^(1), y^(1)), (x^(2), y^(2)), …, (x^(M), y^(M))}. The (x, y) data can be obtained by pairing the image acquisition system with a standard height-measuring system (such as a mechanical one) and converting the crown caliper position of the mechanical measurement into the collected image. The data are typically split randomly into a training set (train_set) and a test set (test_set). Let X denote the training inputs and Y the training targets, and establish the linear regression model:
X θ = Y,
where x_j^(i) is the j-th feature of the i-th sample and y^(i) is the target value of the i-th sample; the regression coefficients θ can then be solved through the Normal Equation θ = (X^T X)^(-1) X^T Y.
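The Normal Equation can be verified on synthetic noiseless data; the 9 "features" and the coefficient values below are invented:

```python
import numpy as np

# The Normal Equation theta = (X^T X)^(-1) X^T Y on synthetic data: when the
# linear model holds exactly, the known coefficients are recovered.

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 9))              # 100 samples, 9 facial features
true_theta = np.arange(1.0, 10.0)          # invented regression coefficients
Y = X @ true_theta                         # noiseless targets, X theta = Y

# Solve the normal equations; np.linalg.solve is preferred over forming an
# explicit matrix inverse for numerical stability.
theta = np.linalg.solve(X.T @ X, X.T @ Y)
print(np.allclose(theta, true_theta))      # True
```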
But it is due to that relation between each bone in crown position and face is probably a kind of complicated non-linear relation, so The determination of crown position is realized with neutral net, therefore establishes the god of three layers of an input layer, hidden layer and output layer Through network model, such as Fig. 8.
For concreteness, the neural network model established in the specific embodiment is as follows: the input layer contains 9 neurons, the hidden layer contains 11 neurons, and the output layer contains a single output unit. To avoid the non-convergence caused by the gradient vanishing problem, the ReLU function is used as the activation function, with expression f(x) = max(0, x). Its smoothed version is:
f(x) = ln(1 + e^x)   (6)
Its derivative is exactly the sigmoid function commonly used as an activation function, i.e. f′(x) = 1 / (1 + e^(−x)).
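The relation between ReLU, its smoothed version (Eq. (6)) and the sigmoid can be checked numerically; this short sketch is illustrative and independent of the patent's implementation.

```python
import math

def relu(x):
    return max(0.0, x)

def softplus(x):
    # Smoothed ReLU of Eq. (6): f(x) = ln(1 + e^x)
    return math.log1p(math.exp(x))

def sigmoid(x):
    # 1 / (1 + e^-x), stated in the text to be the derivative of softplus
    return 1.0 / (1.0 + math.exp(-x))

# Central finite difference of softplus should match sigmoid.
x, h = 0.7, 1e-6
numeric = (softplus(x + h) - softplus(x - h)) / (2 * h)
print(abs(numeric - sigmoid(x)) < 1e-6)
```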
The BP (backpropagation) training process of the neural network model is as follows:
1) Perform the feedforward pass using the forward conduction formulas. Let z_i^(l) denote the weighted input sum of unit i in layer l (including the bias unit), let W_ij^(l) denote the weight between unit j of layer l and unit i of layer l+1, and let b_i^(l) denote the bias term of unit i of layer l+1. The hidden-layer input is then z_i^(2) = Σ_j W_ij^(1) x_j + b_i^(1); substituting z_i^(2) into formula (6) gives the hidden-layer activation a_i^(2) = f(z_i^(2)) (i.e. a_i in Fig. 8), and the output layer is a linear output of these activations, a^(3) = Σ_i W_i^(2) a_i^(2) + b^(2) (i.e. a in Fig. 8).
2) For the output layer, compute the residual: δ^(3) = −(y − a^(3)).
3) For the hidden layer, compute: δ_i^(2) = (W_i^(2) δ^(3)) · f′(z_i^(2)).
4) Compute the required partial derivatives: ∂J/∂W_ij^(l) = a_j^(l) δ_i^(l+1) and ∂J/∂b_i^(l) = δ_i^(l+1).
where J is the cost function for a sample (x^(i), y^(i)): J(W, b; x^(i), y^(i)) = (1/2) |a^(3) − y^(i)|².
Hence, for a given data set containing m samples {(x^(1), y^(1)), (x^(2), y^(2)), …, (x^(m), y^(m))}, the overall cost function can be defined as (where s_l denotes the number of neurons in layer l): J(W, b) = (1/m) Σ_{i=1}^{m} J(W, b; x^(i), y^(i)) + (λ/2) Σ_{l=1}^{2} Σ_{i=1}^{s_l} Σ_{j=1}^{s_{l+1}} (W_ji^(l))².
The second term in the formula is a regularization term (also called the weight decay term); its purpose is to reduce the magnitude of the weights and prevent overfitting.
5) Finally update the network parameters W and b of the neural network model: W_ij^(l) := W_ij^(l) − α ∂J(W, b)/∂W_ij^(l) and b_i^(l) := b_i^(l) − α ∂J(W, b)/∂b_i^(l), where α is the learning rate.
That is, by iterating the above optimization steps, the optimal crown position y can be obtained.
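Steps 1) to 5) can be sketched for the 9-11-1 network with the softplus activation of Eq. (6). This is a minimal NumPy illustration on synthetic data; all variable names (W1, b1, W2, b2, alpha, lam) and the data are assumptions, not the patent's own code.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 9))            # 9 facial features per sample
y = X @ rng.normal(size=9) + 0.1         # synthetic crown-position targets

W1, b1 = rng.normal(size=(11, 9)) * 0.1, np.zeros(11)  # input -> hidden
W2, b2 = rng.normal(size=11) * 0.1, 0.0                # hidden -> output
alpha, lam, m = 0.01, 1e-4, len(X)       # learning rate, weight decay

softplus = lambda z: np.log1p(np.exp(z))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))  # derivative of softplus

def forward(X):
    z2 = X @ W1.T + b1                   # step 1): feedforward
    a2 = softplus(z2)
    return z2, a2, a2 @ W2 + b2          # linear output unit

_, _, a3 = forward(X)
mse_before = np.mean((a3 - y) ** 2)

for _ in range(500):
    z2, a2, a3 = forward(X)
    d3 = -(y - a3)                       # step 2): output residual
    d2 = np.outer(d3, W2) * sigmoid(z2)  # step 3): hidden residual
    # steps 4)-5): averaged gradients with weight decay, then update
    W2 -= alpha * (d3 @ a2 / m + lam * W2)
    b2 -= alpha * d3.mean()
    W1 -= alpha * (d2.T @ X / m + lam * W1)
    b1 -= alpha * d2.mean(axis=0)

z2, a2, a3 = forward(X)
print(np.mean((a3 - y) ** 2) < mse_before)  # training error has decreased
```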
6) Error analysis of the neural network regression model: each sample x^(i) in test_set is input to the trained neural network model to obtain the predicted value ŷ^(i), and its error against the actual value is computed: e^(i) = ŷ^(i) − y^(i).
This yields a set of sample error values e^(1), e^(2), e^(3), …, e^(n), where n is the number of samples in test_set, from which the mean μ_e and standard deviation σ_e of the sample errors are obtained. For any new measured value y, the confidence interval of the measurement is [y + μ_e − 3σ_e, y + μ_e + 3σ_e] at a confidence level of 0.99, and [y + μ_e − 2σ_e, y + μ_e + 2σ_e] at a confidence level of 0.95. The measured value y so obtained can be converted from image pixel height to actual physical height as described earlier in this patent.
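The error statistics and confidence intervals of step 6) reduce to a few lines; the error values and the measured height below are made up for illustration.

```python
import statistics

# Hypothetical per-sample test errors e_i = yhat_i - y_i (cm).
errors = [0.4, -0.2, 0.1, 0.3, -0.1, 0.0, 0.2, -0.3]
mu_e = statistics.mean(errors)
sigma_e = statistics.pstdev(errors)  # population std; stdev() also defensible

y = 170.0  # a new measured value (cm), hypothetical
ci_99 = (y + mu_e - 3 * sigma_e, y + mu_e + 3 * sigma_e)
ci_95 = (y + mu_e - 2 * sigma_e, y + mu_e + 2 * sigma_e)

# The 0.95 interval is nested inside the 0.99 interval, both centred
# on the bias-corrected value y + mu_e.
print(ci_99[0] < ci_95[0] < ci_95[1] < ci_99[1])
```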
In summary, implementing the human height information acquisition method of the invention and the fitting cabinet system using this method has the following beneficial effects: with the human height information acquisition method used in the invention, by applying the established neural network model to the captured image and setting index points on a vertical marker post, the arc-top position of the human head can still be calculated even when hair or jewellery occludes it, which is more cost-effective than testing equipment such as ultrasonic devices.
The embodiments of the invention have been described above with reference to the accompanying drawings, but the invention is not limited to the above specific embodiments, which are merely illustrative rather than restrictive. Under the teaching of the invention, one of ordinary skill in the art may also make many variations without departing from the concept of the invention and the scope of the claimed protection, and all of these fall within the protection of the invention.

Claims (8)

1. A human height information acquisition method, characterised in that it is used to analyse an image captured by a fitting cabinet system to obtain human height information, a marker post provided with multiple index points being placed where the human feet stand in the image, the method comprising:
S1, establishing from sample data a neural network model between the human head arc-top position and the facial features, the input variables of the neural network model being the human facial features and the output variable being the human head arc-top position;
S2, determining from the multiple index points in the image the mapping relation between each pixel and physical height;
S3, determining the human head arc-top position in the image based on the neural network model;
S4, determining the pixel distance between the human head arc-top position and a selected index point, and calculating the physical height corresponding to the human head arc-top position according to the mapping relation obtained in step S2 and the physical height of the selected index point.
2. The human height information acquisition method according to claim 1, characterised in that the human facial features comprise multiple feature points;
step S3 comprises:
S31, performing face detection in the image based on a face detection algorithm to obtain the position of the face;
S32, extracting the feature points of the face based on a facial landmark localisation algorithm, inputting the position of each feature point into the neural network model, and taking the resulting output as the coordinate of the human head arc-top position.
3. The human height information acquisition method according to claim 1, characterised in that step S2 comprises: determining the pixel distance P_d between two index points in the image and, according to the known physical height S_d between the two index points on the marker post, calculating the mapping relation between each pixel and physical height as: the physical height of each pixel s = S_d / P_d.
4. The human height information acquisition method according to claim 3, characterised in that, if the pixel coordinate in the height direction of the human head arc-top position is v and the pixel coordinate in the height direction of the selected index point is v_s,
step S4 comprises:
S41, determining that the pixel distance between the human head arc-top position and the selected index point is |v_s − v|;
S42, calculating the physical height corresponding to the human head arc-top position based on the following formula:
H = (v − v_s) × s + l
where s denotes the physical height of each pixel and l denotes the physical height of the selected index point.
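The pixel-to-height conversion of claims 3 and 4 can be sketched directly. The coordinate convention and all sample numbers below are assumptions for illustration (v is taken to increase upward so that H = (v − v_s) × s + l comes out positive).

```python
def pixel_to_height(v, v_s, P_d, S_d, l):
    """Convert a crown pixel coordinate to a physical height (claims 3-4)."""
    s = S_d / P_d              # physical height per pixel (claim 3)
    return (v - v_s) * s + l   # claim 4, step S42

# Two index points 50 cm apart appear 200 px apart -> s = 0.25 cm/px.
# Crown at v = 900 px, selected index point at v_s = 300 px with l = 20 cm.
H = pixel_to_height(v=900, v_s=300, P_d=200, S_d=50, l=20)
print(H)  # 600 px * 0.25 cm/px + 20 cm = 170.0
```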
5. The human height information acquisition method according to claim 1, characterised in that step S1 comprises:
S11, selecting the feature points used as input and taking the human head arc-top position as output;
S12, determining the activation function f(x) = ln(1 + e^x); the neural network model established is then:
a^(3) = Σ_{i=1}^{n} w_i^(2) a_i^(2) + b^(2)
a_i^(2) = f(z_i^(2))
z_i^(2) = Σ_{j=1}^{n} w_ij^(1) x_j + b_i^(1)
where a^(3) denotes the output, x denotes the input, n denotes the number of feature points, z_i^(l) denotes the weighted input sum of unit i in layer l, w_ij^(l) denotes the weight between unit j of layer l and unit i of layer l+1, b_i^(l) is the bias term of unit i of layer l+1, w_i^(l) is the row vector composed of the w_ij^(l), and b^(l) is the column vector composed of the b_i^(l);
S13, training the neural network model with the sample data to determine the values of the undetermined parameters in the above model.
6. The human height information acquisition method according to claim 5, characterised in that after step S13 the method further comprises:
S14, performing error analysis on the trained neural network model using another batch of sample data.
7. The human height information acquisition method according to claim 1, characterised in that before step S1 the method further comprises:
S0, determining the relation between the depth error and the height error, where Δd is the depth error, ΔL denotes the height error, L denotes the object height in the image and d1 is the known measurement position; analysing, according to this relation, how the value of L influences the depth error and the height error, and then determining the field-of-view selection principle of the camera relative to the human body according to that influence.
8. A fitting cabinet system, characterised by comprising a cabinet with multiple cabinet doors, an image capture module and a control module, the control module being configured to determine human height information based on the method of any one of claims 1-7, determine the garment size information corresponding to the human height information, and open the corresponding cabinet door in the cabinet according to the size information.
CN201610193187.0A 2016-03-30 2016-03-30 A kind of Human Height information acquisition method and the fitting cabinet system using this method Expired - Fee Related CN107280118B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610193187.0A CN107280118B (en) 2016-03-30 2016-03-30 A kind of Human Height information acquisition method and the fitting cabinet system using this method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610193187.0A CN107280118B (en) 2016-03-30 2016-03-30 A kind of Human Height information acquisition method and the fitting cabinet system using this method

Publications (2)

Publication Number Publication Date
CN107280118A true CN107280118A (en) 2017-10-24
CN107280118B CN107280118B (en) 2019-11-12

Family

ID=60087659

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610193187.0A Expired - Fee Related CN107280118B (en) 2016-03-30 2016-03-30 A kind of Human Height information acquisition method and the fitting cabinet system using this method

Country Status (1)

Country Link
CN (1) CN107280118B (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110197165A (en) * 2019-06-04 2019-09-03 南京信息工程大学 A method of identification customer's figure
CN110569593A (en) * 2019-09-05 2019-12-13 武汉纺织大学 Method and system for measuring three-dimensional size of dressed human body, storage medium and electronic equipment
CN110604574A (en) * 2019-09-16 2019-12-24 河北微幼趣教育科技有限公司 Human body height measuring method based on video imaging principle
CN110782482A (en) * 2019-10-21 2020-02-11 深圳市网心科技有限公司 Motion evaluation method and device, computer equipment and storage medium
CN111387987A (en) * 2020-03-26 2020-07-10 苏州沃柯雷克智能系统有限公司 Height measuring method, device, equipment and storage medium based on image recognition
CN112418025A (en) * 2020-11-10 2021-02-26 广州富港万嘉智能科技有限公司 Weight detection method and device based on deep learning
CN112464747A (en) * 2020-11-10 2021-03-09 广州富港万嘉智能科技有限公司 Height detection method and device based on image acquisition equipment
CN112922888A (en) * 2019-12-06 2021-06-08 佛山市云米电器科技有限公司 Fan safety control method, fan and computer readable storage medium
CN113297882A (en) * 2020-02-21 2021-08-24 湖南超能机器人技术有限公司 Intelligent morning check robot, height measuring method and application
CN114375177A (en) * 2019-09-01 2022-04-19 Lg电子株式会社 Body measurement device and control method thereof

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040111909A1 (en) * 2002-12-16 2004-06-17 Unes Pourmanafzadeh Method and apparatus for leg length discrepancy measurement
US20040234108A1 (en) * 2003-05-22 2004-11-25 Motorola, Inc. Identification method and apparatus
CN101363722A (en) * 2008-09-25 2009-02-11 广州广电运通金融电子股份有限公司 Height measurement method and measurement device thereof
CN101512551A (en) * 2006-03-21 2009-08-19 阿菲克姆智能牧场管理系统公司 A method and a system for measuring an animal's height
CN103697820A (en) * 2013-12-17 2014-04-02 杭州华为数字技术有限公司 Method for measuring sizes based on terminal and terminal equipment
CN104257385A (en) * 2014-10-16 2015-01-07 辽宁省颅面复原技术重点实验室 Method for measuring height of human body in video images
CN104434113A (en) * 2014-12-01 2015-03-25 江西洪都航空工业集团有限责任公司 Stature measuring method
CN204671166U (en) * 2015-03-16 2015-09-30 查氏电子实业(深圳)有限公司 A kind of body height measuring device
CN105069837A (en) * 2015-07-30 2015-11-18 武汉变色龙数据科技有限公司 Garment fitting simulation method and device

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040111909A1 (en) * 2002-12-16 2004-06-17 Unes Pourmanafzadeh Method and apparatus for leg length discrepancy measurement
US20040234108A1 (en) * 2003-05-22 2004-11-25 Motorola, Inc. Identification method and apparatus
CN101512551A (en) * 2006-03-21 2009-08-19 阿菲克姆智能牧场管理系统公司 A method and a system for measuring an animal's height
CN101363722A (en) * 2008-09-25 2009-02-11 广州广电运通金融电子股份有限公司 Height measurement method and measurement device thereof
CN103697820A (en) * 2013-12-17 2014-04-02 杭州华为数字技术有限公司 Method for measuring sizes based on terminal and terminal equipment
CN104257385A (en) * 2014-10-16 2015-01-07 辽宁省颅面复原技术重点实验室 Method for measuring height of human body in video images
CN104434113A (en) * 2014-12-01 2015-03-25 江西洪都航空工业集团有限责任公司 Stature measuring method
CN204671166U (en) * 2015-03-16 2015-09-30 查氏电子实业(深圳)有限公司 A kind of body height measuring device
CN105069837A (en) * 2015-07-30 2015-11-18 武汉变色龙数据科技有限公司 Garment fitting simulation method and device

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110197165A (en) * 2019-06-04 2019-09-03 南京信息工程大学 A method of identification customer's figure
CN114375177A (en) * 2019-09-01 2022-04-19 Lg电子株式会社 Body measurement device and control method thereof
CN110569593A (en) * 2019-09-05 2019-12-13 武汉纺织大学 Method and system for measuring three-dimensional size of dressed human body, storage medium and electronic equipment
CN110604574A (en) * 2019-09-16 2019-12-24 河北微幼趣教育科技有限公司 Human body height measuring method based on video imaging principle
CN110782482A (en) * 2019-10-21 2020-02-11 深圳市网心科技有限公司 Motion evaluation method and device, computer equipment and storage medium
CN112922888A (en) * 2019-12-06 2021-06-08 佛山市云米电器科技有限公司 Fan safety control method, fan and computer readable storage medium
CN112922888B (en) * 2019-12-06 2022-09-30 佛山市云米电器科技有限公司 Fan safety control method, fan and computer readable storage medium
CN113297882A (en) * 2020-02-21 2021-08-24 湖南超能机器人技术有限公司 Intelligent morning check robot, height measuring method and application
CN111387987A (en) * 2020-03-26 2020-07-10 苏州沃柯雷克智能系统有限公司 Height measuring method, device, equipment and storage medium based on image recognition
CN112418025A (en) * 2020-11-10 2021-02-26 广州富港万嘉智能科技有限公司 Weight detection method and device based on deep learning
CN112464747A (en) * 2020-11-10 2021-03-09 广州富港万嘉智能科技有限公司 Height detection method and device based on image acquisition equipment

Also Published As

Publication number Publication date
CN107280118B (en) 2019-11-12

Similar Documents

Publication Publication Date Title
CN107280118B (en) A kind of Human Height information acquisition method and the fitting cabinet system using this method
CN108876879B (en) Method and device for realizing human face animation, computer equipment and storage medium
CN104850825B (en) A kind of facial image face value calculating method based on convolutional neural networks
Rae et al. Recognition of human head orientation based on artificial neural networks
CN105094300B (en) A kind of sight line tracking system and method based on standardization eye image
CN110110629A (en) Personal information detection method and system towards indoor environmental condition control
CN108926355A (en) X-ray system and method for object of standing
CN107610209A (en) Human face countenance synthesis method, device, storage medium and computer equipment
CN110223272A (en) Body imaging
CN106780591A (en) A kind of craniofacial shape analysis and Facial restoration method based on the dense corresponding points cloud in cranium face
CN106164978A (en) Parametrization deformable net is used to construct the method and system of personalized materialization
CN106796449A (en) Eye-controlling focus method and device
CN105426882B (en) The method of human eye is quickly positioned in a kind of facial image
JP2018055470A (en) Facial expression recognition method, facial expression recognition apparatus, computer program, and advertisement management system
CN109086723A (en) A kind of method, apparatus and equipment of the Face datection based on transfer learning
KR20210067913A (en) Data processing method using a learning model
EP3699929A1 (en) Patient weight estimation from surface data using a patient model
CN106778489A (en) The method for building up and equipment of face 3D characteristic identity information banks
CN106599785A (en) Method and device for building human body 3D feature identity information database
CN111176447A (en) Augmented reality eye movement interaction method fusing depth network and geometric model
CN109310475A (en) System and method for automatically generating facial repair capsule and application scheme to solve the facial deviation of observable
Esme et al. Effects of aging over facial feature analysis and face recognition
KR20220133834A (en) Data processing method using a learning model
CN116343325A (en) Intelligent auxiliary system for household body building
Huang et al. LNSMM: Eye gaze estimation with local network share multiview multitask

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20191112

Termination date: 20200330
