CN103413145A - Articulation point positioning method based on depth image - Google Patents

Articulation point positioning method based on depth image

Info

Publication number
CN103413145A
Authority
CN
China
Prior art keywords
point
feature
node
value
articulation point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2013103742367A
Other languages
Chinese (zh)
Other versions
CN103413145B (en)
Inventor
刘亚洲
张艳
孙权森
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Science and Technology filed Critical Nanjing University of Science and Technology
Priority to CN201310374236.7A
Publication of CN103413145A
Application granted
Publication of CN103413145B
Expired - Fee Related
Anticipated expiration

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses an articulation point positioning method based on a depth image. The method comprises a training process and a recognition process. The training process comprises: first, computing the random features of the training samples; second, training a decision tree classifier with those features. The recognition process comprises: third, computing the random features of the test sample; fourth, classifying the pixels of the target with the decision tree classifier to obtain the different parts of the target; fifth, computing the articulation point position of each part. The method reflects the local gradient information around each pixel, is computationally efficient, strengthens the rotation invariance of the features, and improves the accuracy of target recognition.

Description

Articulation point positioning method based on depth image
Technical field
The present invention relates to the fields of computer vision, pattern recognition and human-computer interaction, and more particularly to an articulation point positioning method based on a depth image.
Background technology
An articulation point positioning method based on a depth image determines, within a depth image that contains a target, the positions of the target's articulation points. The target here specifically refers to a human hand or a human body. Once the articulation point positions of the target are determined, its skeleton structure can be judged, the computer can respond to the target, and the goal of man-machine interaction or automatic computer processing and recognition is finally achieved.
A depth image takes the form of a two-dimensional gray image. Unlike a traditional gray-level image, however, the information carried by each pixel of a depth image is the distance of the target object from the camera, so the pixel value of a depth image is called the depth value. Depth images have the following advantages: 1) they are not affected by factors such as illumination and shadow; 2) they directly provide the three-dimensional information of the target object, which greatly simplifies problems such as three-dimensional reconstruction, recognition and localization of objects.
Articulation point positioning comprises two key steps: learning a classifier and locating the articulation points. Learning a classifier depends first on feature selection; whether the chosen features describe the current target well directly determines whether target recognition succeeds. Then, on the basis of the chosen features, a set of rules for classifying the current target is determined. Locating the articulation points means that, after the learned classifier has classified and recognized the target, the articulation point position is found within each part of the target.
In feature extraction from traditional visible-light images, gradient features and point features are the two common classes of features. Gradient features include the Canny operator, the Laplacian-of-Gaussian operator and the histogram of oriented gradients (HOG). The first two operators detect edge points in an image reasonably well, but they tend to split the image into several disconnected region blocks. HOG is a classic method in human detection and recognition; its advantages are high accuracy and good detection performance, but its dimensionality is high and its computational cost is large, so real-time processing is hard to guarantee. On the other hand, common point features such as corners and blobs have low dimensionality, but they can hardly adapt to the changeable shape of the human body against cluttered backgrounds, and they require additional operations such as clustering, which increases the difficulty of the problem and leads to low detection accuracy. Therefore, simply adopting gradient features or point features alone is not a good solution.
Summary of the invention
The technical problem to be solved by the present invention is the poor real-time performance or low accuracy of target recognition caused, in the above target recognition techniques, by using a single type of feature as the recognition basis. The invention proposes a method that extracts random features from depth data, trains a classifier with them, and finally completes the articulation point positioning.
The technical solution that realizes the object of the invention is as follows: the method comprises two processes, training and recognition.
The training process comprises the following steps:
1) compute the random features of the training samples;
2) train a decision tree classifier with those features.
The recognition process comprises the following steps:
3) compute the random features of the test sample;
4) classify each pixel of the target with the decision tree classifier to obtain the different parts of the target;
5) compute the articulation point position of each part.
In the above method, the training samples in step 1) are depth images that retain only the target and carry ground-truth annotations.
In the above method, step 1) comprises the following concrete steps:
11) compute the centroid c(cx, cy) of the target with formula (1):
cx = (1/k) · Σ_{i=1}^{k} x_i ,  cy = (1/k) · Σ_{i=1}^{k} y_i    (1)
where k is the total number of pixels on the target and (x_i, y_i) are the coordinates of each pixel on the target, i = 1, 2, ..., k;
12) take an annotated point as the starting point and generate two different reference points pointed to by random vectors, where the length of a random vector is rz = r1·α/valz, r1 is a randomly generated length, α is a coefficient, and valz is the depth value of the starting point; the angle of the random vector is β = θ + ο, where θ is the angle between the horizontal axis and the straight line connecting the starting point with the centroid, and ο is a randomly generated angle; if at least one of the two reference points is not on the image, the feature value of the starting point is 1; otherwise compute the depth difference of the two reference points: if the depth difference is greater than a threshold chosen at random from a self-defined threshold set, the feature value of the starting point is 1, otherwise it is 0;
13) for each annotated point, repeat step 12) fn times to generate fn features, and number the features 1 to fn in the order in which they are generated.
In the above method, step 2) comprises the following concrete steps:
21) take the root node of the decision tree classifier as the current node;
22) compute the information gain of each feature at the current node:
Gain(ε) = entropy(T) − Σ_{i=1}^{m} (|T_i| / |T|) · entropy(T_i),
where ε is the feature number, ε = 1, 2, ..., fn; T is the annotated-point sample set of the current node; T_i is a subset of the sample set; m is the number of subsets (the annotated points are split into two subsets according to whether the value of the feature numbered ε is 0 or 1, so m = 2); entropy(T) is the information entropy of the sample set, entropy(T) = − Σ_{j=1}^{s} p(C_j, T) · log2 p(C_j, T), where p(C_j, T) is the frequency of class C_j in the sample set T and s is the number of classes in T;
23) take the number of the feature with the maximum information gain as the number of the current node;
24) if the value of the feature numbered with the current node's number is 0 for an annotated point, assign that point to the left branch node of the current node, otherwise to the right branch node;
25) take each branch node as the current node; if the information entropy of the current node is less than the entropy threshold hτ, or the number of levels of the decision tree reaches the maximum depth, or the number of annotated points at the current node is less than the minimum sample count small, stop splitting and take the current node as a leaf node; otherwise repeat steps 22)–25);
26) compute the probability distribution of the annotated-point classes at each leaf node.
In the above method, the test sample in step 3) is a depth image from which the background has been removed, so that only the target remains.
In the above method, step 3) comprises the following concrete steps:
31) compute the centroid of the target with formula (1);
32) take a point on the target as the starting point and generate two different reference points pointed to by random vectors, where the length of a random vector is rz = r1·α/valz, r1 is a randomly generated length, α is a coefficient, and valz is the depth value of the starting point; the angle of the random vector is β = θ + ο, where θ is the angle between the horizontal axis and the straight line connecting the starting point with the centroid, and ο is a randomly generated angle; if at least one of the two reference points is not on the image, the feature value of the starting point is 1; otherwise compute the depth difference of the two reference points: if the depth difference is greater than a threshold chosen at random from a self-defined threshold set, the feature value of the starting point is 1, otherwise it is 0;
33) for each point on the target, repeat step 32) fn times to generate fn features, and number the features 1 to fn in the order in which they are generated.
In the above method, step 4) comprises the following concrete steps:
41) take the root node of the decision tree as the current node;
42) if the value of the pixel's feature numbered with the current node's number is 0, assign the pixel to the left branch node of the current node, otherwise to the right branch node;
43) take the branch node to which the pixel was assigned as the current node and repeat steps 42)–43) until the pixel reaches a leaf node;
44) if the maximum class probability at the leaf node is greater than the probability threshold pτ, take the class with the maximum probability as the class of the pixel; otherwise discard the pixel.
In the above method, step 5) comprises the following concrete steps:
51) starting from each point q_i of the same class, find the corresponding articulation point candidate position p_i, i = 1, 2, ..., r, where r is the total number of points of that class;
52) screen all articulation point candidate positions to find the articulation point position.
In the above method, step 51) comprises the following concrete steps:
511) generate a rectangular feature region of size w × h centered at q_i;
512) compute, with formula (1), the centroid of the points in the feature region that belong to the same class as the center point;
513) compute the distance between the center point and the centroid;
514) if the distance is not greater than the distance threshold dτ, take the centroid as the articulation point candidate position; otherwise generate a new rectangular feature region of size w × h centered at the centroid and repeat steps 512)–514); if after a sufficient number of repetitions, for example 30, no articulation point candidate position has been found, take the last centroid obtained as the articulation point candidate position p_i.
In the above method, step 52) comprises the following concrete steps:
521) take p_1 as the initial scoring object and repeat the next step for p_i in the order i = 2, 3, ..., r;
522) in the order in which they became scoring objects, compute the distance between each scoring object and p_i; when the distance between a scoring object and p_i is less than the threshold disτ, add 1 to the score of that scoring object and do not compute the distances between p_i and the subsequent scoring objects; if the distances between p_i and all scoring objects are not less than disτ, take p_i as the next scoring object;
523) select the scoring object with the highest score tops; if tops is greater than the score threshold scoτ, that scoring object is the articulation point position; otherwise reduce scoτ until the articulation point position is found.
Compared with the prior art, the present invention has notable advantages: it exploits the nature of the depth image and proposes to use, as the feature of a pixel, the depth difference between two random points around it. Such a feature reflects the local gradient information around the pixel and can be regarded as a good combination of a point feature and a gradient feature. It involves only simple arithmetic on pixel values, so it is efficient to compute, which favors real-time processing. In addition, the deviation angle of the pixel with respect to the target centroid is added to the random angle of the feature, which strengthens the rotation invariance of the feature and improves the accuracy of target recognition.
Brief description of the drawings
Fig. 1 is a flow chart of the articulation point positioning method based on a depth image.
Fig. 2 is a schematic diagram of a hand annotated with ground-truth labels.
Fig. 3 is a schematic diagram of reference point generation.
Fig. 4 is a schematic diagram of classifying pixels with the decision tree classifier.
Fig. 5 is a schematic diagram of the hand parts after classification.
Fig. 6 is a schematic diagram of the articulation point positions of the hand.
Embodiment
The overall workflow of the present invention is shown in Fig. 1. The specific embodiments of the present invention are described in further detail below with reference to the accompanying drawings. The data source of the present invention is a depth image obtained from an image acquisition device, which may be a binocular vision device or a structured-light projection device. The value of each pixel in the depth image represents the distance from the corresponding object point to the projection center of the camera. From the depth image, the shape and three-dimensional position of the object can be obtained.
The present invention uses depth maps from which the background has been removed, so that only the hand target remains, as samples. Depth maps with ground-truth annotations are used as training samples, and depth maps without ground-truth annotations as test samples. Eleven articulation points of the hand are annotated: the junction of the palm and the wrist, and the fingertip and finger root of each of the five fingers. The labels 0–10 denote these 11 articulation points, i.e. the different classes, and a sufficient number of points are annotated for each articulation point, as shown in Fig. 2.
The articulation point positioning method of the present invention comprises two main steps: training the classifier and applying the classifier to recognize the target.
Training the classifier means learning the classification rules of the target from known training samples; it comprises two processes: computing the random features of the training samples and training the decision tree classifier with those features.
Step 1: compute the random features of the training samples.
Step 11: compute the centroid of the target and, for every point on the target, the angle between the horizontal axis and the straight line connecting that point with the centroid.
Compute the centroid c(cx, cy) of the target with formula (1):
cx = (1/k) · Σ_{i=1}^{k} x_i ,  cy = (1/k) · Σ_{i=1}^{k} y_i    (1)
where k is the total number of pixels on the target and (x_i, y_i) are the coordinates of each pixel on the target, i = 1, 2, ..., k.
Compute, with formula (2), the angle between the horizontal axis and the straight line connecting each point p(x, y) of the target with c(cx, cy):
θ=arctan[(y-cy)/(x-cx)] (2)
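For clarity, the sketch below shows how the centroid of formula (1) and the per-pixel angle of formula (2) might be computed. It is a minimal NumPy sketch, not the patent's implementation; it assumes the depth image is a 2-D array in which background pixels are 0, and the function and variable names are illustrative.

```python
import numpy as np

def centroid_and_angles(depth):
    """Centroid c(cx, cy) of the target (formula 1) and, for every target pixel,
    the angle theta to the centroid (formula 2). Background pixels are assumed 0."""
    ys, xs = np.nonzero(depth)             # coordinates of the k pixels on the target
    cx, cy = xs.mean(), ys.mean()          # formula (1)
    theta = np.arctan2(ys - cy, xs - cx)   # formula (2); arctan2 keeps the quadrant
    return (cx, cy), xs, ys, theta
```

Using arctan2 rather than a plain arctan avoids the quadrant ambiguity of formula (2) when x − cx is negative.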
Step 12: read the articulation point annotations of the target. For each annotated point l(x, y), compute fn features, where fn may range from 1000 to 10000. Each feature is computed as follows.
Step 121: take l(x, y) as the starting point and generate two reference points at random.
Each generated reference point is denoted q(qx, qy); it is the point reached from the starting point l(x, y) along a vector of random length rz and random angle β, i.e.
qx = x + rz · cos β ,  qy = y + rz · sin β    (3)
where rz = r1·α/valz, r1 is a randomly generated length that may take any value in 0–1 each time, α is a coefficient whose value may range from 1000 to 10000, and valz is the depth value of l(x, y); β = θ + ο, where θ is the angle between the horizontal axis and the straight line connecting l(x, y) with the centroid c(cx, cy), computed with formula (2), and ο is a randomly generated angle that may take any value in 0°–360° each time. A schematic of the reference points is shown in Fig. 3.
Step 122: determine the feature value of l(x, y) from the situation of the reference points.
If both reference points are on the image, let their depth values be d_1 and d_2 and compute the depth difference dd = d_1 − d_2. The feature value f of l(x, y) is then determined by:
f = 1 if dd > t, and f = 0 otherwise    (4)
where t is a randomly generated threshold, chosen at random from a self-defined threshold set that can be fixed here in advance.
If at least one of the two reference points is not on the image, f = 1.
Step 123: if this is the ε-th feature computed, the feature of l(x, y) is numbered ε.
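The sketch below illustrates Steps 121–123 for a single starting point: two random reference points are generated with formula (3) and the feature value is set with formula (4). The parameter defaults and the example threshold set are assumptions chosen within the ranges stated above; `depth` is a 2-D NumPy array and `theta` the angle computed with formula (2).

```python
import math
import random

def random_feature(depth, x, y, theta, alpha=2000.0, thresholds=(5.0, 10.0, 20.0)):
    """One random feature of the starting point (x, y), as in Steps 121-122."""
    h, w = depth.shape
    refs = []
    for _ in range(2):                                    # two random reference points
        r1 = random.random()                              # random length in 0..1
        rz = r1 * alpha / float(depth[y, x])              # rz = r1 * alpha / valz
        beta = theta + math.radians(random.uniform(0.0, 360.0))   # beta = theta + random angle
        refs.append((int(round(x + rz * math.cos(beta))),          # formula (3)
                     int(round(y + rz * math.sin(beta)))))
    if any(not (0 <= qx < w and 0 <= qy < h) for qx, qy in refs):
        return 1                                          # a reference point left the image
    (q1x, q1y), (q2x, q2y) = refs
    dd = float(depth[q1y, q1x]) - float(depth[q2y, q2x])  # depth difference dd = d1 - d2
    t = random.choice(thresholds)                         # threshold from a self-defined set
    return 1 if dd > t else 0                             # formula (4)
```

For the feature numbered ε to mean the same thing at every point and in both training and testing, the random parameters (r1, ο, t) of that feature would presumably be drawn once and then reused rather than redrawn per call, as is done here for brevity.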
Step 2: train the decision tree classifier with the features extracted in Step 1.
The root node of the decision tree classifier corresponds to the training samples, i.e. the annotated-point sample set, and each branch node corresponds to a subset of that sample set. Take the root node as the current node and perform the following steps.
Step 21: compute the information gain of each feature at the current node with the following formula:
Gain(ε) = entropy(T) − Σ_{i=1}^{m} (|T_i| / |T|) · entropy(T_i)    (5)
where ε is the feature number, ε = 1, 2, ..., fn; T is the annotated-point sample set of the current node; T_i is a subset of the sample set; m is the number of subsets (the annotated points are split into two subsets according to whether the value of the feature numbered ε is 0 or 1, so m = 2); entropy(T) is the information entropy of the sample set, entropy(T) = − Σ_{j=1}^{s} p(C_j, T) · log2 p(C_j, T), where p(C_j, T) is the frequency of class C_j in the sample set T and s is the number of classes in T, here s = 11.
Step 22: split the current node into branch nodes according to the feature with the maximum information gain.
Take the number of the feature with the maximum information gain as the number of the current node. If the value of the feature numbered with the current node's number is 0 for an annotated point, assign that point to the left branch node of the current node, otherwise to the right branch node.
Step 23: take each branch node of the current node as the current node in turn and check whether it satisfies any of the following stopping conditions:
a) the information entropy of the node is less than the entropy threshold hτ; here hτ may be taken as 0.5;
b) the number of levels of the decision tree reaches the maximum depth; here depth may range from 10 to 30;
c) the number of annotated points at the node is less than the minimum sample count small; here small may range from 100 to 1000.
If the current node does not satisfy any stopping condition, repeat Steps 22–23; if it does, the current node is a leaf node. Compute the probability distribution of the classes of the annotated points at each leaf node.
Suppose the trained decision tree classifier has k leaf nodes, and let n_i denote the number of annotated points at the i-th leaf node, i = 1, 2, ..., k, and n_ij the number of annotated points of class j at the i-th leaf node, j = 0, 1, ..., 10. The probability of class j at the i-th leaf node is P_ij = n_ij / n_i, and the maximum class probability of this leaf node is P(i, j_max) = max{P_ij, j = 0, 1, ..., 10}, where j_max denotes the class with the maximum probability.
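To make Step 2 concrete, here is a minimal sketch of the node-splitting logic: the information gain of formula (5) is evaluated for each feature over the 0/1 split, the best feature becomes the node number, and the stopping conditions of Step 23 turn a node into a leaf holding class probabilities. The feature matrix layout `features[point, feature_number]` and the default parameters are assumptions, not the patent's implementation.

```python
import numpy as np

def entropy(labels):
    """entropy(T) = -sum_j p(C_j, T) * log2 p(C_j, T)."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def best_feature(features, labels):
    """Feature number with the maximum information gain, formula (5)."""
    base, n = entropy(labels), len(labels)
    gains = []
    for eps in range(features.shape[1]):
        parts = [labels[features[:, eps] == v] for v in (0, 1)]          # m = 2 subsets
        cond = sum(len(part) / n * entropy(part) for part in parts if len(part))
        gains.append(base - cond)
    return int(np.argmax(gains))

def leaf(labels):
    """Step 26: class probability distribution at a leaf node."""
    classes, counts = np.unique(labels, return_counts=True)
    return {'leaf': True,
            'probs': dict(zip(classes.tolist(), (counts / counts.sum()).tolist()))}

def grow(features, labels, level=0, h_tau=0.5, max_depth=20, small=100):
    """Steps 21-26: recursively split until a stopping condition of Step 23 holds."""
    if entropy(labels) < h_tau or level >= max_depth or len(labels) < small:
        return leaf(labels)
    eps = best_feature(features, labels)                                 # Steps 21-23
    left = features[:, eps] == 0                                         # Step 24: 0 -> left
    if left.all() or not left.any():                                     # degenerate split
        return leaf(labels)
    return {'leaf': False, 'feature': eps,
            'left': grow(features[left], labels[left], level + 1, h_tau, max_depth, small),
            'right': grow(features[~left], labels[~left], level + 1, h_tau, max_depth, small)}
```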
Next, the decision tree classifier is used to classify and recognize the test sample. Classification and recognition comprises three processes: computing the random features of the test sample, classifying the image with the decision tree classifier, and computing the positions of the articulation points.
Step 3: compute the random features of the test sample.
Step 31: compute the centroid of the target with formula (1), and compute with formula (2) the angle between the horizontal axis and the straight line connecting each pixel of the target with the centroid.
Step 32: for each pixel p(x, y) of the target, compute fn features. Each feature is computed as follows.
Step 321: take p(x, y) as the starting point and generate two reference points at random.
Each generated reference point is denoted q(qx, qy); it is the point reached from the starting point p(x, y) along a vector of random length rz and random angle β, computed with formula (3), where rz = r1·α/valz, r1 is a randomly generated length that may take any value in 0–1 each time, α is a coefficient whose value may range from 1000 to 10000, and valz is the depth value of p(x, y); β = θ + ο, where θ is the angle between the horizontal axis and the straight line connecting p(x, y) with the centroid c(cx, cy), computed with formula (2), and ο is a randomly generated angle that may take any value in 0°–360° each time.
Step 322: determine the feature value of p(x, y) from the situation of the reference points, in the same way as in Step 122.
Step 323: if this is the ε-th feature computed, the feature of p(x, y) is numbered ε.
Step 4: classify each pixel p(x, y) of the target with the decision tree classifier; the classification process is shown in Fig. 4.
Step 41: take the root node of the decision tree as the current node and perform the following steps.
Step 42: according to whether the value of the pixel's feature numbered with the current node's number is 0 or 1, assign the pixel to the left or right branch node of the current node, and take that branch node as the current node.
Step 43: repeat Step 42 until the current node is a leaf node. If P(i, j_max) of this leaf node is greater than the probability threshold pτ, j_max is the class label of the pixel; otherwise discard the pixel.
After all pixels of the target have been classified, the pixels belonging to the same class form one part of the hand. For pixels near the boundary between different parts the class is usually not clear-cut, so Step 43 may set pτ ≥ 0.7; this removes pixels with low class certainty, ensures to a greater extent that pixels of the same part belong to the same class, keeps pixels of the same class within the same part, and simplifies the subsequent determination of the articulation point position of each part. Fig. 5 shows the hand parts after classification; the black lines at the part boundaries and the regions without digit labels indicate pixels whose class was uncertain and which were discarded.
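The following sketch shows Steps 41–44 on top of the tree structure from the training sketch above: the pixel's feature vector is routed down the tree and the leaf's most probable class is accepted only if its probability exceeds pτ. The dictionary-based tree layout is the same assumption as before.

```python
def classify_pixel(tree, pixel_features, p_tau=0.7):
    """Return the class label of one pixel, or None if the leaf is too uncertain."""
    node = tree
    while not node['leaf']:                                  # Steps 41-43: descend the tree
        eps = node['feature']                                # feature number of the node
        node = node['left'] if pixel_features[eps] == 0 else node['right']
    label, prob = max(node['probs'].items(), key=lambda kv: kv[1])
    return label if prob > p_tau else None                   # Step 44: discard uncertain pixels
```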
Step 5: compute the concrete position of the articulation point of each part.
Step 51: starting from each point q_i of the same class, find the corresponding articulation point candidate position p_i, i = 1, 2, ..., r, where r is the total number of points of that class.
Step 511: generate a rectangular feature region W(x, y, w, h) of size w × h centered at q_i.
If W(x, y, w, h) does not lie entirely on the image, take the overlap of the image and W(x, y, w, h) as the feature region.
Step 512: within the feature region, compute with formula (1) the centroid c(cx, cy) of all points belonging to the same class as the center point q_i.
Step 513: compute the distance between q_i and c(cx, cy): dis = √((cx − x)² + (cy − y)²).
Step 514: if dis ≤ dτ, take c(cx, cy) as the articulation point candidate position p_i; if dis > dτ, generate the rectangular feature region W(cx, cy, w, h) centered at c(cx, cy) and repeat Steps 512–514. If after a sufficient number of repetitions, for example 30, no articulation point candidate position has been found, take the last centroid obtained as the articulation point candidate position p_i. Here dτ is a distance threshold whose value may range from 0.1 to 0.3.
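Step 51 behaves like a mean-shift style iteration over the pixels of one class. A minimal sketch is given below; it assumes the class's pixel coordinates are provided as NumPy arrays `xs`, `ys`, and uses an illustrative window size of 20 × 20 pixels together with the dτ range and the 30-repetition cap mentioned above.

```python
import numpy as np

def joint_candidate(xs, ys, qx, qy, w=20, h=20, d_tau=0.3, max_iter=30):
    """Steps 511-514: shift a w x h window from (qx, qy) toward the local centroid
    of same-class points until the shift is at most d_tau or max_iter is reached."""
    cx, cy = float(qx), float(qy)
    for _ in range(max_iter):
        inside = (np.abs(xs - cx) <= w / 2) & (np.abs(ys - cy) <= h / 2)   # region W
        if not inside.any():
            break                                      # empty region: keep the last centre
        nx, ny = xs[inside].mean(), ys[inside].mean()  # formula (1) inside the region
        dis = float(np.hypot(nx - cx, ny - cy))        # Step 513
        cx, cy = nx, ny
        if dis <= d_tau:                               # Step 514: converged
            break
    return cx, cy                                      # candidate position p_i
```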
Step 52: screen all articulation point candidate positions p_i to find the articulation point position.
Step 521: take p_1 as the initial scoring object and repeat the next step for p_i in the order i = 2, 3, ..., r.
Step 522: in the order in which they became scoring objects, compute the distance between each scoring object and p_i; when the distance between a scoring object and p_i is less than the threshold disτ, add 1 to the score of that scoring object and do not compute the distances between p_i and the subsequent scoring objects. If the distances between p_i and all scoring objects are not less than disτ, take p_i as the next scoring object. Here disτ may be taken as 2–4.
Step 523: select the scoring object with the highest score tops; if tops is greater than the score threshold scoτ, that scoring object is the articulation point position; otherwise reduce scoτ until the articulation point position is found. Here scoτ may be taken as 2–4.
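Finally, a sketch of the screening of Step 52: each candidate votes for the first earlier scoring object within disτ, or otherwise becomes a new scoring object, and the highest-scoring object is returned. The function name and the list-of-tuples input are illustrative assumptions; per Step 523, lowering scoτ until a winner is found amounts to accepting the best-scoring object.

```python
import math

def screen_candidates(candidates, dis_tau=3.0):
    """Steps 521-523: vote candidate positions into scoring objects and pick the winner."""
    objs = [{'pos': candidates[0], 'score': 0}]              # p_1 is the first scoring object
    for p in candidates[1:]:
        for obj in objs:                                     # in creation order (Step 522)
            if math.dist(obj['pos'], p) < dis_tau:
                obj['score'] += 1                            # vote and stop searching
                break
        else:
            objs.append({'pos': p, 'score': 0})              # p becomes a new scoring object
    # Step 523: if the top score does not exceed sco_tau, the text lowers sco_tau until it
    # does, which in effect still selects the highest-scoring object.
    return max(objs, key=lambda o: o['score'])['pos']
```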
Fig. 6 shows the positions of all the articulation points finally found.

Claims (10)

1. An articulation point positioning method based on a depth image, characterized by comprising a training process and a recognition process:
the steps of the training process are as follows:
1) compute the random features of the training samples;
2) train a decision tree classifier with the random features;
the steps of the recognition process are as follows:
3) compute the random features of the test sample;
4) classify each pixel of the target with the decision tree classifier to obtain the different parts of the target;
5) compute the articulation point position of each part.
2. The articulation point positioning method based on a depth image according to claim 1, characterized in that the training samples in step 1) are a set of depth images that retain only the target and carry ground-truth annotations.
3. The articulation point positioning method based on a depth image according to claim 1 or 2, characterized in that the concrete steps of step 1) are as follows:
11) compute the centroid c(cx, cy) of the target with formula (1):
cx = (1/k) · Σ_{i=1}^{k} x_i ,  cy = (1/k) · Σ_{i=1}^{k} y_i    (1)
where k is the total number of pixels on the target and (x_i, y_i) are the coordinates of each pixel on the target, i = 1, 2, ..., k;
12) take an annotated point as the starting point and generate two different reference points pointed to by random vectors, where the length of a random vector is rz = r1·α/valz, r1 is a randomly generated length, α is a coefficient, and valz is the depth value of the starting point; the angle of the random vector is β = θ + ο, where θ is the angle between the horizontal axis and the straight line connecting the starting point with the centroid, and ο is a randomly generated angle; if at least one of the two reference points is not on the image, the feature value of the starting point is 1; if both reference points are on the image, compute the depth difference of the two reference points: if the depth difference is greater than a threshold chosen at random from a self-defined threshold set, the feature value of the starting point is 1, otherwise it is 0;
13) for each annotated point, repeat step 12) fn times to generate fn features, and number the features 1 to fn in the order in which they are generated.
4. The articulation point positioning method based on a depth image according to claim 1, characterized in that step 2) comprises the following concrete steps:
21) take the root node of the decision tree classifier as the current node;
22) compute the information gain of each feature at the current node:
Gain(ε) = entropy(T) − Σ_{i=1}^{m} (|T_i| / |T|) · entropy(T_i),
where ε is the feature number, ε = 1, 2, ..., fn; T is the annotated-point sample set of the current node; T_i is a subset of the sample set; m is the number of subsets (the annotated points are split into two subsets according to whether the value of the feature numbered ε is 0 or 1, so m = 2); entropy(T) is the information entropy of the sample set, entropy(T) = − Σ_{j=1}^{s} p(C_j, T) · log2 p(C_j, T), where p(C_j, T) is the frequency of class C_j in the sample set T and s is the number of classes in T;
23) take the number of the feature with the maximum information gain as the number of the current node;
24) if the value of the feature numbered with the current node's number is 0 for an annotated point, assign that point to the left branch node of the current node, otherwise to the right branch node;
25) take each branch node as the current node; if the information entropy of the current node is less than the entropy threshold hτ, or the number of levels of the decision tree reaches the maximum depth, or the number of annotated points at the current node is less than the minimum sample count small, stop splitting and take the current node as a leaf node; otherwise repeat steps 22)–25);
26) compute the probability distribution of the annotated-point classes at each leaf node.
5. The articulation point positioning method based on a depth image according to claim 1, characterized in that the test sample in step 3) is a set of depth images that retain only the target.
6. The articulation point positioning method based on a depth image according to claim 1 or 5, characterized in that step 3) comprises the following concrete steps:
31) compute the centroid of the target with formula (1);
32) take a point on the target as the starting point and generate two different reference points pointed to by random vectors, where the length of a random vector is rz = r1·α/valz, r1 is a randomly generated length, α is a coefficient, and valz is the depth value of the starting point; the angle of the random vector is β = θ + ο, where θ is the angle between the horizontal axis and the straight line connecting the starting point with the centroid, and ο is a randomly generated angle; if at least one of the two reference points is not on the image, the feature value of the starting point is 1; if both reference points are on the image, compute the depth difference of the two reference points: if the depth difference is greater than a threshold chosen at random from a self-defined threshold set, the feature value of the starting point is 1, otherwise it is 0;
33) for each point, repeat step 32) fn times to generate fn features, and number the features 1 to fn in the order in which they are generated.
7. The articulation point positioning method based on a depth image according to claim 1, characterized in that step 4) comprises the following concrete steps:
41) take the root node of the decision tree classifier as the current node;
42) if the value of the pixel's feature numbered with the current node's number is 0, assign the pixel to the left branch node of the current node, otherwise to the right branch node;
43) take the branch node to which the pixel was assigned as the current node and repeat steps 42)–43) until the pixel reaches a leaf node;
44) if the maximum class probability at the leaf node is greater than the probability threshold pτ, take the class with the maximum probability as the class of the pixel; otherwise discard the pixel.
8. The articulation point positioning method based on a depth image according to claim 1, characterized in that step 5) comprises the following concrete steps:
51) starting from each point q_i of the same class, find the corresponding articulation point candidate position p_i, i = 1, 2, ..., r, where r is the total number of points of that class;
52) screen all articulation point candidate positions to find the articulation point position.
9. The articulation point positioning method based on a depth image according to claim 1 or 8, characterized in that step 51) comprises the following concrete steps:
511) generate a rectangular feature region of size w × h centered at q_i;
512) compute, with formula (1), the centroid of the points in the feature region that belong to the same class as the center point;
513) compute the distance between the center point and the centroid;
514) if the distance is not greater than the distance threshold dτ, take the centroid as the articulation point candidate position; otherwise generate a new rectangular feature region of size w × h centered at the centroid and repeat steps 512)–514); if after a sufficient number of repetitions no articulation point candidate position has been found, take the last centroid obtained as the articulation point candidate position p_i, where the sufficient number of repetitions is greater than or equal to 30.
10. The articulation point positioning method based on a depth image according to claim 1 or 8, characterized in that step 52) comprises the following concrete steps:
521) take p_1 as the initial scoring object and repeat the next step for p_i in the order i = 2, 3, ..., r;
522) in the order in which they became scoring objects, compute the distance between each scoring object and p_i; when the distance between a scoring object and p_i is less than the threshold disτ, add 1 to the score of that scoring object and do not compute the distances between p_i and the subsequent scoring objects; if the distances between p_i and all scoring objects are not less than disτ, take p_i as the next scoring object;
523) select the scoring object with the highest score tops; if tops is greater than the score threshold scoτ, that scoring object is the articulation point position; otherwise reduce scoτ until the articulation point position is found.
CN201310374236.7A 2013-08-23 2013-08-23 Articulation point positioning method based on depth image Expired - Fee Related CN103413145B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310374236.7A CN103413145B (en) 2013-08-23 2013-08-23 Articulation point positioning method based on depth image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310374236.7A CN103413145B (en) 2013-08-23 2013-08-23 Articulation point positioning method based on depth image

Publications (2)

Publication Number Publication Date
CN103413145A true CN103413145A (en) 2013-11-27
CN103413145B CN103413145B (en) 2016-09-21

Family

ID=49606152

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310374236.7A Expired - Fee Related CN103413145B (en) 2013-08-23 2013-08-23 Articulation point positioning method based on depth image

Country Status (1)

Country Link
CN (1) CN103413145B (en)



Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101359372A (en) * 2008-09-26 2009-02-04 腾讯科技(深圳)有限公司 Training method and device of classifier, and method apparatus for recognising sensitization picture
CN102411711A (en) * 2012-01-04 2012-04-11 山东大学 Finger vein recognition method based on individualized weight

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104091152A (en) * 2014-06-30 2014-10-08 南京理工大学 Method for detecting pedestrians in big data environment
CN104050460A (en) * 2014-06-30 2014-09-17 南京理工大学 Pedestrian detection method with multi-feature fusion
CN104050460B (en) * 2014-06-30 2017-08-04 南京理工大学 The pedestrian detection method of multiple features fusion
CN105389569B (en) * 2015-11-17 2019-03-26 北京工业大学 A kind of estimation method of human posture
CN105389569A (en) * 2015-11-17 2016-03-09 北京工业大学 Human body posture estimation method
CN105893970A (en) * 2016-03-31 2016-08-24 杭州电子科技大学 Nighttime road vehicle detection method based on luminance variance characteristics
CN107436679A (en) * 2016-05-27 2017-12-05 富泰华工业(深圳)有限公司 Gestural control system and method
CN107203756B (en) * 2016-06-06 2020-08-28 亮风台(上海)信息科技有限公司 Method and equipment for recognizing gesture
CN107203756A (en) * 2016-06-06 2017-09-26 亮风台(上海)信息科技有限公司 A kind of method and apparatus for recognizing gesture
CN106096551A (en) * 2016-06-14 2016-11-09 湖南拓视觉信息技术有限公司 The method and apparatus of face part Identification
CN106096551B (en) * 2016-06-14 2019-05-21 湖南拓视觉信息技术有限公司 The method and apparatus of face position identification
CN106558071A (en) * 2016-11-10 2017-04-05 张昊华 A kind of method and terminal for obtaining human synovial information
CN106558071B (en) * 2016-11-10 2019-04-23 张昊华 A kind of method and terminal obtaining human synovial information
CN106846403B (en) * 2017-01-04 2020-03-27 北京未动科技有限公司 Method and device for positioning hand in three-dimensional space and intelligent equipment
CN106846403A (en) * 2017-01-04 2017-06-13 北京未动科技有限公司 The method of hand positioning, device and smart machine in a kind of three dimensions
CN109484935A (en) * 2017-09-13 2019-03-19 杭州海康威视数字技术股份有限公司 A kind of lift car monitoring method, apparatus and system
CN107766848A (en) * 2017-11-24 2018-03-06 广州鹰瞰信息科技有限公司 The pedestrian detection method and storage medium of vehicle front
CN108345869A (en) * 2018-03-09 2018-07-31 南京理工大学 Driver's gesture recognition method based on depth image and virtual data
CN110598510A (en) * 2018-06-13 2019-12-20 周秦娜 Vehicle-mounted gesture interaction technology
CN110598510B (en) * 2018-06-13 2023-07-04 深圳市点云智能科技有限公司 Vehicle-mounted gesture interaction technology

Also Published As

Publication number Publication date
CN103413145B (en) 2016-09-21


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20160921

Termination date: 20200823
