CN102800126A - Method for recovering real-time three-dimensional body posture based on multimodal fusion - Google Patents

Method for recovering real-time three-dimensional body posture based on multimodal fusion

Info

Publication number
CN102800126A
Authority
CN
China
Prior art keywords
pixel
human body
shoulder
depth
scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2012102308982A
Other languages
Chinese (zh)
Inventor
肖俊
刘彬
庄越挺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN2012102308982A priority Critical patent/CN102800126A/en
Publication of CN102800126A publication Critical patent/CN102800126A/en
Pending legal-status Critical Current


Abstract

The invention relates to a method for recovering real-time three-dimensional body posture based on multimodal fusion. The method recovers the three-dimensional skeleton of a human body by combining depth-map analysis, color identification and face detection to obtain the real-world coordinates of the body's main joints. Based on scene depth images and scene color images acquired synchronously at successive moments, the position of the head is obtained by face detection; the positions of the limb endpoints, which carry color markers, are obtained by color identification; the positions of the elbows and knees are computed from the limb-endpoint positions and the mapping between the color maps and the depth maps; and the resulting skeleton is smoothed using temporal information so that the motion of the human body is reconstructed in real time. Compared with the conventional technique of recovering three-dimensional body posture with near-infrared equipment, the method improves recovery stability and makes the motion-capture process more convenient.

Description

Method for real-time three-dimensional human pose recovery based on multimodal fusion
Technical field
The present invention relates to a method for real-time three-dimensional human pose recovery, and in particular to a method for recovering the three-dimensional pose of a human body in real time using depth maps and color markers.
Background technology
Three-dimensional human pose recovery refers to capturing the motion data of a real human body with equipment, including the three-dimensional spatial coordinates of the main joints, and then computing and rendering these data to establish the pose of a character in a virtual scene. With such technology, the motion of a real human body can be bound to the motion of a character in the virtual world, so that the virtual character is driven by the real motion. At present, three-dimensional human pose recovery is widely used in film, animation and game production; compared with traditional computer-animation modeling, it is more efficient and can run in real time.
There are many techniques for three-dimensional human pose recovery, falling mainly into optical and non-optical systems. Non-optical equipment generally obtains human motion data through accelerometers or auxiliary mechanical devices and is not widely used. Most current optical systems rely on near-infrared equipment: several infrared cameras identify the positions of markers (made of highly reflective material), and a calibration algorithm converts the marker coordinates into coordinates in three-dimensional space. The advantage of this approach is that the recovered pose is accurate and the system is robust; the disadvantages are that the motion-capture workflow is complicated and the cost is high.
Others use multiple ordinary cameras to provide multi-view information, extract feature values from the silhouette at each view, and then retrieve a similar pose from a database. The advantage of this approach is low hardware cost, but it needs the support of a specific dataset and imposes considerable restrictions on the actions that can be captured.
With Microsoft's release of the new-generation interactive device Kinect, the technology of three-dimensional human pose recovery has made a new breakthrough. A Kinect device can capture a scene depth map, in which each pixel corresponds to a position in the scene and its value represents the distance from a reference position to that scene location (in other words, the depth map has the form of an image, but its pixel values convey geometric information about the objects in the scene rather than brightness or color). Jamie Shotton et al. describe a machine-learning method for recovering human pose in their paper "Real-Time Human Pose Recognition in Parts from Single Depth Images". PrimeSense, an Israeli company, has also developed a heuristic technique that recovers three-dimensional human skeleton information by applying background subtraction and scene reconstruction to the depth map. With these methods, the pose of a moving human body can be recovered in real time with a single Kinect device and without any markers, a great improvement over traditional optical systems. At the same time, the cost of three-dimensional human pose recovery is greatly reduced, allowing the technology to enter the home-entertainment market.
However, these methods still fall short of conventional optical equipment in stability, and they are difficult to implement.
Summary of the invention
The purpose of this invention is to provide a method for real-time three-dimensional human pose recovery based on multimodal fusion.
The method for real-time three-dimensional human pose recovery based on multimodal fusion comprises the following steps:
1) Synchronously receive a scene depth-map sequence and a scene color-map sequence containing a human body, at a frame rate of no less than 25 frames per second. Each frame of the depth-map sequence consists of a pixel matrix in which the value of each pixel represents the distance from the corresponding scene position to a reference position, i.e. the depth value of that pixel. Each frame of the color-map sequence consists of a pixel matrix in which the value of each pixel represents the color information of the corresponding scene position, expressed as an RGB color value.
2) Segment the depth map into background and foreground pixels; the foreground pixels are the region of the depth map that represents the human body.
3) Process the foreground pixels of the depth map and label the pixels that represent the torso, head and limbs.
4) Using face detection, identify the face position in the scene color map; through the mapping between the color map and the depth map, obtain the projective coordinates of the head in the depth map and convert them into three-dimensional real-world coordinates. The projective coordinates are a three-dimensional vector (X, Y, Z), where (X, Y) indexes a pixel of the depth map and Z is the depth value of that pixel.
5) From the projective coordinates of the head in the depth map, compute the projective coordinates of the neck and shoulders in the depth map and convert them into three-dimensional real-world coordinates.
6) Using colored markers worn on the ends of the limbs, obtain the projective coordinates of the hands and feet in the depth map and convert them into three-dimensional real-world coordinates.
7) From the three-dimensional coordinates of the hands and shoulders, compute the projective coordinates of the elbow joints in the depth map and convert them into three-dimensional real-world coordinates.
8) From the three-dimensional coordinates of the feet and hips, compute the projective coordinates of the knee joints in the depth map and convert them into three-dimensional real-world coordinates.
9) Apply steps 2) to 8) to each frame of the depth-map and color-map sequences; assemble the three-dimensional coordinates of the body parts captured in each frame into a skeleton model according to the structure of the human body and output it; set a constraint space and a confidence level for the three-dimensional coordinates of each captured body part; and smooth the skeleton model. The constraint space represents the maximum displacement allowed for each captured body part between two adjacent frames.
The computation in step 5) is as follows:
A) After the three-dimensional coordinates of the head are obtained, the actual neck length L_Real_neck is computed from a preset neck reference length L_neck, a neck reference depth D_neck, and the actual head depth D_head:
L_Real_neck = D_head * L_neck / D_neck
The neck is then located on the line segment connecting the head and the torso according to the actual neck length L_Real_neck.
B) After the three-dimensional coordinates of the neck are obtained, the actual shoulder width W_Real_shoulder is computed from a preset shoulder reference width W_shoulder, a shoulder reference depth D_shoulder, and the actual neck depth R_neck:
W_Real_shoulder = R_neck * W_shoulder / D_shoulder
The left and right shoulders are then located on the horizontal segment through the neck according to the actual shoulder width W_Real_shoulder.
C) When computing the shoulder positions, note that the user may turn sideways; in that case the projected shoulder width must be adjusted. This step requires the depths of the left and right shoulder positions, D_left and D_right, and the preset shoulder width W_shoulder; the adjusted shoulder width W_projected is then:
W_projected = sqrt(W_shoulder^2 - (D_left - D_right)^2)
With W_projected, the positions of the left and right shoulders are computed as in step B).
D) A local search guarantees that the shoulder coordinates obtained in steps A), B) and C) lie within the foreground pixels. Taking the left shoulder as an example: if the estimated left-shoulder pixel (x, y) falls in the background, the pixels (x+t, y+t) to the right of the estimate are searched, where t is the search-range threshold, and the foreground pixel nearest to the estimate within that range is selected. If no foreground point is found, t is increased step by step to enlarge the search range until the nearest foreground pixel is found.
The acquisition method in step 6) is as follows:
A) The user wears colored markers on the hands and feet to assist in identifying their positions; the color of the markers should be distinguishable from the colors of the rest of the user's body.
B) Each frame of the scene color map is converted from the RGB color space to the HSV color space; the HSV color features of the hand and foot markers are extracted and used as thresholds, and the image is filtered with these thresholds so that pixels that do not match the color features are removed, yielding a color-threshold map. Erosion and dilation operations then remove the noise in the color-threshold map.
C) This processing produces a binary image in which the positions of the hand and foot markers are represented by corresponding blobs; the center of each blob is taken as the position of the limb endpoint, and coordinate conversion then yields the three-dimensional real-world coordinates of the hands and feet.
The computation in step 7) is as follows:
A) This computation requires labeling the arm pixels among the foreground pixels of the depth map. First, the pixels representing the torso are labeled using the positions of the left and right shoulders; the remaining regions connected to the torso are then labeled as limbs and head. When an arm occludes the torso from directly in front, the depth value of the torso "centroid" is computed and compared with the depth value of each pixel on the torso: if the depth difference exceeds a threshold, the pixel is labeled as arm; otherwise it belongs to the torso. The "centroid" of a region refers to its mean depth and can be obtained by computing the histogram of the region's depth values and taking the depth value with the highest frequency (or the mean of two or more depth values sharing the highest frequency) as the depth of the region centroid.
B) After the arm pixels are labeled, the depth map is traversed starting from the hand: every pixel labeled as arm whose distance to the hand satisfies the forearm-length constraint is marked as a potential elbow point. Then, starting from the shoulder, the arm is searched again for pixels whose distance to the shoulder satisfies the upper-arm-length constraint. The intersection of these points with the previously marked elbow points gives the estimated elbow region, and its midpoint is labeled as the elbow position.
The computation in step 8) is as follows:
A) This computation requires labeling the leg pixels among the foreground pixels of the depth map. First, the pixels representing the torso are labeled using the positions of the left and right shoulders; the remaining regions connected to the torso are then labeled as limbs and head.
B) After the leg pixels are labeled, the depth map is traversed starting from the foot: every pixel labeled as leg whose distance to the foot satisfies the lower-leg-length constraint is marked as a potential knee point. Then, starting from the hip, the leg is searched again for pixels whose distance to the hip satisfies the thigh-length constraint. The intersection of these points with the previously marked knee points gives the estimated knee region, and its midpoint is labeled as the knee position.
The processing in step 9) is as follows:
A) For each body part, define a constraint length D and a confidence level C. The constraint length D describes a constraint range: a sphere centered on the body part with radius D, describing the maximum displacement allowed for the part between two adjacent frames. The sizes of the constraint spaces of different body parts may differ; for example, the constraint space of a hand may be larger than that of a shoulder.
B) The confidence level represents the accuracy of the current coordinates of the body part: the higher the value of C, the more accurate the position. The confidence level of every body part is initially set to 0. In a new frame, if the new position of a body part lies within the constraint space of that part in the previous frame, its confidence level is increased slightly; if the confidence level has already reached the maximum, it is left unchanged. Otherwise, if the new position lies outside the constraint space of the previous frame, the part is moved only a distance of Length/C toward the new position, where Length is the length of the segment between the original position and the new position, and the confidence level of the part is then decreased slightly.
The present invention uses a combination of depth-map analysis, color identification and face detection to obtain, in real time, the real-world coordinates of the main joints of the human body and thereby recover its three-dimensional skeleton. Compared with traditional near-infrared equipment for three-dimensional pose recovery, the method improves recovery stability, reduces cost, and makes the motion-capture process easier, providing a new solution for bringing three-dimensional human pose recovery into home entertainment.
Description of drawings
The present invention is further described below with reference to the accompanying drawings and embodiments.
Fig. 1 is a flowchart of the method for real-time three-dimensional human pose recovery based on multimodal fusion;
Fig. 2 is a scene depth map used in the present invention;
Fig. 3 shows the three-dimensional human skeleton recovered by the present invention.
Embodiment
With reference to Fig. 1, the method for real-time three-dimensional human pose recovery based on multimodal fusion comprises the following steps:
1) Obtain the scene depth images and scene color images
The method obtains scene depth images and scene color images of 640 x 480 pixels at a frame rate of no less than 25 frames per second. Each frame of the depth-map sequence (as shown in Fig. 2) consists of a pixel matrix in which the value of each pixel represents the distance from the corresponding scene position to a reference position, i.e. the depth value of that pixel. The depth map and the color map are synchronized at every moment, and the pixels of the depth image and the color image are aligned.
2) Remove the background from the depth image
Segment the depth map into background and foreground pixels; the foreground pixels are the region of the depth map that represents the human body. For human pose tracking we are interested only in the foreground pixels (the user), so background pixels must be excluded. In practice there are many different background-removal approaches. The approach realized by the present invention first identifies a blob in the depth map as the body of the subject and then removes the other blobs, whose depth values differ markedly. This requires first identifying a blob of minimum size, and determining that size involves the conversion between the real-world coordinate system and the projective coordinate system. The depth map provides the projective coordinates of an object; to obtain its actual coordinates, the following formulas convert the (x, y, depth) coordinates of the object into "real-world" coordinates (X_r, Y_r, depth):
X_r = (X - fovx/2) * pixel_size * depth / reference_depth
Y_r = (Y - fovy/2) * pixel_size * depth / reference_depth
Here, fovx and fovy are the fields of view of the depth map (in pixels) in the x and y directions, and pixel_size is the length subtended by a pixel at a given distance from the camera (the reference depth). With real-world coordinates, Euclidean distances can be computed directly, avoiding the perspective error by which nearby objects appear large and distant ones small. After the blob sizes are determined in real-world coordinates, the blob nearest the camera with the largest size can be selected and taken as the human region.
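For illustration, the conversion above can be expressed as a short function. This is a minimal sketch under assumed calibration values; fov_x, fov_y, pixel_size and reference_depth are placeholders, not constants taken from the patent:

    import numpy as np

    def projective_to_real(x, y, depth, fov_x=640, fov_y=480,
                           pixel_size=0.001, reference_depth=1.0):
        """Convert projective depth-map coordinates (x, y, depth) into
        real-world coordinates (X_r, Y_r, depth), per the formulas above.

        fov_x/fov_y are the fields of view in pixels; pixel_size is the
        length a pixel subtends at reference_depth. All default values
        here are placeholder assumptions, not calibrated constants.
        """
        x_r = (x - fov_x / 2.0) * pixel_size * depth / reference_depth
        y_r = (y - fov_y / 2.0) * pixel_size * depth / reference_depth
        return np.array([x_r, y_r, depth])

    # Euclidean distance between two joints in real-world coordinates:
    # head = projective_to_real(320, 100, 2.1)
    # hand = projective_to_real(420, 260, 1.8)
    # dist = np.linalg.norm(head - hand)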
A simpler alternative is to set thresholds heuristically: pixels with depth values above a threshold are set to background, as are blobs smaller than a size threshold. This is easier to implement but less accurate.
3) Label each region of the human body in the depth map
Process the foreground pixels of the depth map and label the regions that represent the torso, head and limbs.
4) Compute the head position by face detection
In this step, the present invention uses the Haar cascade classifier (Haar Cascade Classifier) provided by OpenCV to perform face detection and obtain, in real time, the pixels of the user's head from the scene color map. Through the mapping between the color map and the depth map, the projective coordinates of the head in the depth map are obtained and converted into three-dimensional real-world coordinates by the coordinate-conversion method described in step 2). The projective coordinates are a three-dimensional vector (X, Y, Z), where (X, Y) indexes a pixel of the depth map and Z is the depth value of that pixel.
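A minimal sketch of this step with OpenCV's stock frontal-face Haar cascade; since step 1) guarantees that the color and depth images are pixel-aligned, mapping into the depth map simply reads the aligned depth pixel. The function name and parameters are illustrative, not from the patent:

    import cv2

    # Stock frontal-face cascade shipped with OpenCV.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def detect_head(color_frame, depth_frame):
        """Return the head's projective coordinates (X, Y, Z): (X, Y) is
        the face center in the (aligned) depth map and Z its depth value.
        Returns None if no face is found."""
        gray = cv2.cvtColor(color_frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.2,
                                         minNeighbors=5)
        if len(faces) == 0:
            return None
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # largest face
        cx, cy = x + w // 2, y + h // 2
        return cx, cy, float(depth_frame[cy, cx])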
5) Compute the shoulder coordinates from the head position
A) After the three-dimensional coordinates of the head are obtained, the method computes the actual neck length L_Real_neck from a preset neck reference length L_neck, a neck reference depth D_neck, and the actual head depth D_head:
L_Real_neck = D_head * L_neck / D_neck
The neck is then located on the line segment connecting the head and the torso according to the actual neck length L_Real_neck.
B) After the three-dimensional coordinates of the neck are obtained, the method computes the actual shoulder width W_Real_shoulder from a preset shoulder reference width W_shoulder, a shoulder reference depth D_shoulder, and the actual neck depth R_neck:
W_Real_shoulder = R_neck * W_shoulder / D_shoulder
The left and right shoulders are then located on the horizontal segment through the neck according to the actual shoulder width W_Real_shoulder.
C) When computing the shoulder positions, note that the user may turn sideways; in that case the projected shoulder width must be adjusted. This step requires the depths of the left and right shoulder positions, D_left and D_right, and the preset shoulder width W_shoulder; the adjusted shoulder width W_projected is then:
W_projected = sqrt(W_shoulder^2 - (D_left - D_right)^2)
With W_projected, the positions of the left and right shoulders are computed as in step B).
D) A local search guarantees that the shoulder coordinates obtained in steps A), B) and C) lie within the foreground pixels. Taking the left shoulder as an example: if the estimated left-shoulder pixel (x, y) falls in the background, the pixels (x+t, y+t) to the right of the estimate are searched, where t is the search-range threshold, and the foreground pixel nearest to the estimate within that range is selected. If no foreground point is found, t is increased step by step to enlarge the search range until the nearest foreground pixel is found.
6) Using the colored markers worn on the ends of the limbs, obtain the projective coordinates of the hands and feet in the depth map and convert them into three-dimensional real-world coordinates:
A) In the present invention, the user wears colored markers on the hands and feet to assist in identifying their positions; the color of the markers should be distinguishable from the colors of the rest of the user's body.
B) Each frame of the scene color map is converted from the RGB color space to the HSV color space; the HSV color features of the hand and foot markers are extracted and used as thresholds, and the image is filtered with these thresholds so that pixels that do not match the color features are removed, yielding a color-threshold map. Erosion and dilation operations then remove the noise in the color-threshold map.
C) This processing produces a binary image in which the positions of the hand and foot markers are represented by corresponding blobs; the center of each blob is taken as the position of the limb endpoint, and coordinate conversion then yields the three-dimensional real-world coordinates of the hands and feet.
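A minimal sketch of the marker detection with OpenCV (4.x); the HSV bounds here are placeholder values, whereas in practice the thresholds are extracted from the markers actually worn by the user:

    import cv2
    import numpy as np

    # HSV range of the marker color -- placeholder assumption.
    LOWER = np.array([50, 100, 100])
    UPPER = np.array([70, 255, 255])

    def marker_centers(color_frame, min_area=50):
        """Steps A)-C): threshold the frame in HSV, denoise with erosion
        and dilation, and return the centers of the remaining blobs."""
        hsv = cv2.cvtColor(color_frame, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, LOWER, UPPER)           # color-threshold map
        kernel = np.ones((5, 5), np.uint8)
        mask = cv2.dilate(cv2.erode(mask, kernel), kernel)  # remove noise
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        centers = []
        for c in contours:
            m = cv2.moments(c)
            if m["m00"] >= min_area:                    # blob center of mass
                centers.append((int(m["m10"] / m["m00"]),
                                int(m["m01"] / m["m00"])))
        return centers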
7) From the three-dimensional coordinates of the hands and shoulders, compute the projective coordinates of the elbow joints in the depth map and convert them into three-dimensional real-world coordinates:
A) This computation requires labeling the arm pixels among the foreground pixels of the depth map. First, the pixels representing the torso are labeled using the positions of the left and right shoulders; the remaining regions connected to the torso are then labeled as limbs and head. Note the case where an arm occludes the torso from directly in front: to judge whether a pixel in front of the torso represents torso or arm, the depth value of the torso "centroid" is computed and compared with the depth of the pixel; if the depth difference exceeds a threshold, the pixel is labeled as arm, otherwise it belongs to the torso. The "centroid" of a region refers to its mean depth and can be obtained by computing the histogram of the region's depth values and taking the depth value with the highest frequency (or the mean of two or more depth values sharing the highest frequency) as the depth of the region centroid.
B) After the arm pixels are labeled, the depth map is traversed starting from the hand: every pixel labeled as arm whose distance to the hand satisfies the forearm-length constraint is marked as a potential elbow point. Then, starting from the shoulder, the arm is searched again for pixels whose distance to the shoulder satisfies the upper-arm-length constraint. The intersection of these points with the previously marked elbow points gives the estimated elbow region, and its midpoint is labeled as the elbow position.
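A sketch of the intersection in step B), assuming the arm pixels have already been converted to real-world coordinates; the limb lengths and the tolerance band are placeholder assumptions:

    import numpy as np

    FOREARM, UPPER_ARM = 0.26, 0.30  # length constraints in meters (assumed)
    TOL = 0.03                       # tolerance band around each constraint

    def locate_elbow(arm_pixels, hand, shoulder):
        """Step B): intersect the arm pixels at forearm distance from the
        hand with those at upper-arm distance from the shoulder, and take
        the midpoint. `arm_pixels` is an (N, 3) array of real-world
        coordinates; `hand` and `shoulder` are 3-vectors."""
        d_hand = np.linalg.norm(arm_pixels - hand, axis=1)
        d_shoulder = np.linalg.norm(arm_pixels - shoulder, axis=1)
        near_hand = np.abs(d_hand - FOREARM) < TOL        # potential elbows
        near_shoulder = np.abs(d_shoulder - UPPER_ARM) < TOL
        candidates = arm_pixels[near_hand & near_shoulder]  # intersection
        if len(candidates) == 0:
            return None
        return candidates.mean(axis=0)  # midpoint of the estimated region

    # The knee in step 8) is located the same way, substituting the foot,
    # the hip, and the lower-leg/thigh length constraints.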
8) From the three-dimensional coordinates of the feet and hips, compute the projective coordinates of the knee joints in the depth map and convert them into three-dimensional real-world coordinates:
A) This computation requires labeling the leg pixels among the foreground pixels of the depth map. First, the pixels representing the torso are labeled using the positions of the left and right shoulders; the remaining regions connected to the torso are then labeled as limbs and head.
B) After the leg pixels are labeled, the depth map is traversed starting from the foot: every pixel labeled as leg whose distance to the foot satisfies the lower-leg-length constraint is marked as a potential knee point. Then, starting from the hip, the leg is searched again for pixels whose distance to the hip satisfies the thigh-length constraint. The intersection of these points with the previously marked knee points gives the estimated knee region, and its midpoint is labeled as the knee position.
9) Apply steps 2) to 8) to each frame of the depth-map and color-map sequences; assemble the three-dimensional coordinates of the body parts captured in each frame into the human skeleton model shown in Fig. 3 according to the structure of the human body and output it; set a constraint space and a confidence level for the three-dimensional coordinates of each captured body part; and smooth the skeleton model. The constraint space represents the maximum displacement allowed for each captured body part between two adjacent frames:
A) For each body part, define a constraint length D and a confidence level C. The constraint length D describes a constraint range: a sphere centered on the body part with radius D, describing the maximum displacement allowed for the part between two adjacent frames. The sizes of the constraint spaces of different body parts may differ; for example, the constraint space of a hand may be larger than that of a shoulder.
B) The confidence level represents the accuracy of the current coordinates of the body part: the higher the value of C, the more accurate the position. The confidence level of every body part is initially set to 0. In a new frame, if the new position of a body part lies within the constraint space of that part in the previous frame, its confidence level is increased slightly; if the confidence level has already reached the maximum, it is left unchanged. Otherwise, if the new position lies outside the constraint space of the previous frame, the part is moved only a distance of Length/C toward the new position, where Length is the length of the segment between the original position and the new position, and the confidence level of the part is then decreased slightly.
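A minimal per-joint sketch of this smoothing rule; the radius, the confidence cap, and the guard against division by zero when C is 0 are assumptions, not values from the patent:

    import numpy as np

    class JointSmoother:
        """Step 9) sketch: per-joint constraint sphere (radius D) and
        confidence level C."""

        def __init__(self, d=0.15, c_max=10):
            self.d = d            # constraint radius for this joint (m)
            self.c = 0            # confidence, initially 0
            self.c_max = c_max
            self.pos = None       # smoothed position

        def update(self, new_pos):
            new_pos = np.asarray(new_pos, dtype=float)
            if self.pos is None:
                self.pos = new_pos
                return self.pos
            length = np.linalg.norm(new_pos - self.pos)
            if length <= self.d:
                # Inside the constraint sphere: accept, grow confidence.
                self.pos = new_pos
                self.c = min(self.c + 1, self.c_max)
            else:
                # Outside: move only Length/C toward the new position,
                # then reduce confidence.
                step = length / max(self.c, 1)
                self.pos = self.pos + (new_pos - self.pos) * min(step / length, 1.0)
                self.c = max(self.c - 1, 0)
            return self.pos

In use, one smoother is kept per joint and updated with that joint's raw position in every frame.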

Claims (6)

1. A method for real-time three-dimensional human pose recovery based on multimodal fusion, characterized in that its steps are as follows:
1) synchronously receiving a scene depth-map sequence and a scene color-map sequence containing a human body, at a frame rate of no less than 25 frames per second, each frame of the depth-map sequence consisting of a pixel matrix in which the value of each pixel represents the distance from the corresponding scene position to a reference position, i.e. the depth value of that pixel, and each frame of the color-map sequence consisting of a pixel matrix in which the value of each pixel represents the color information of the corresponding scene position, expressed as an RGB color value;
2) segmenting the depth map into background and foreground pixels, the foreground pixels being the region of the depth map that represents the human body;
3) processing the foreground pixels of the depth map and labeling the pixels that represent the torso, head and limbs;
4) identifying, by face detection, the face position in the scene color map, obtaining the projective coordinates of the head in the depth map through the mapping between the color map and the depth map, and converting them into three-dimensional real-world coordinates, the projective coordinates being a three-dimensional vector (X, Y, Z) in which (X, Y) indexes a pixel of the depth map and Z is the depth value of that pixel;
5) computing, from the projective coordinates of the head in the depth map, the projective coordinates of the neck and shoulders in the depth map, and converting them into three-dimensional real-world coordinates;
6) obtaining, by means of colored markers worn on the ends of the limbs, the projective coordinates of the hands and feet in the depth map, and converting them into three-dimensional real-world coordinates;
7) computing, from the three-dimensional coordinates of the hands and shoulders, the projective coordinates of the elbow joints in the depth map, and converting them into three-dimensional real-world coordinates;
8) computing, from the three-dimensional coordinates of the feet and hips, the projective coordinates of the knee joints in the depth map, and converting them into three-dimensional real-world coordinates;
9) applying steps 2) to 8) to each frame of the depth-map and color-map sequences, assembling the three-dimensional coordinates of the body parts captured in each frame into a skeleton model according to the structure of the human body and outputting it, setting a constraint space and a confidence level for the three-dimensional coordinates of each captured body part, and smoothing the skeleton model, the constraint space representing the maximum displacement allowed for each captured body part between two adjacent frames.
2. The method for real-time three-dimensional human pose recovery based on multimodal fusion according to claim 1, characterized in that the computation in step 5) is:
A) after the three-dimensional coordinates of the head are obtained, computing the actual neck length L_Real_neck from a preset neck reference length L_neck, a neck reference depth D_neck, and the actual head depth D_head:
L_Real_neck = D_head * L_neck / D_neck
and locating the neck on the line segment connecting the head and the torso according to the actual neck length L_Real_neck;
B) after the three-dimensional coordinates of the neck are obtained, computing the actual shoulder width W_Real_shoulder from a preset shoulder reference width W_shoulder, a shoulder reference depth D_shoulder, and the actual neck depth R_neck:
W_Real_shoulder = R_neck * W_shoulder / D_shoulder
and locating the left and right shoulders on the horizontal segment through the neck according to the actual shoulder width W_Real_shoulder;
C) when the user takes a sideways pose, adjusting the projected shoulder width, the adjustment requiring the depths of the left and right shoulder positions, D_left and D_right, and the preset shoulder width W_shoulder, the adjusted shoulder width W_projected being:
W_projected = sqrt(W_shoulder^2 - (D_left - D_right)^2)
with which the positions of the left and right shoulders are computed as in step B);
D) guaranteeing, by a local search, that the shoulder coordinates obtained in steps A), B) and C) lie within the foreground pixels, the local search being as follows: when locating the left shoulder, if the estimated left-shoulder pixel (x, y) falls in the background, the pixels (x+t, y+t) to the right of the estimate are searched, where t is the search-range threshold, and the foreground pixel nearest to the estimate within that range is selected; if no foreground point is found, t is increased step by step to enlarge the search range until the nearest foreground pixel is found.
3. The method for real-time three-dimensional human pose recovery based on multimodal fusion according to claim 1, characterized in that the acquisition method in step 6) is:
A) the user wears colored markers on the hands and feet to assist in identifying their positions, the color of the markers being distinguishable from the colors of the rest of the user's body;
B) each frame of the scene color map is converted from the RGB color space to the HSV color space, the HSV color features of the hand and foot markers are extracted and used as thresholds, and the image is filtered with these thresholds so that pixels that do not match the color features are removed, yielding a color-threshold map, after which erosion and dilation operations remove the noise in the color-threshold map;
C) this processing produces a binary image in which the positions of the hand and foot markers are represented by corresponding blobs, the center of each blob is taken as the position of the limb endpoint, and coordinate conversion then yields the three-dimensional real-world coordinates of the hands and feet.
4. The method for real-time three-dimensional human pose recovery based on multimodal fusion according to claim 1, characterized in that the computation in step 7) is:
A) labeling the arm pixels among the foreground pixels of the depth map: first, the pixels representing the torso are labeled using the positions of the left and right shoulders, and the remaining regions connected to the torso are labeled as limbs and head; when an arm occludes the torso from directly in front, the depth value of the torso "centroid" is computed and compared with the depth value of each pixel on the torso, and if the depth difference exceeds a threshold the pixel is labeled as arm, otherwise it belongs to the torso; the "centroid" of a region refers to its mean depth and can be obtained by computing the histogram of the region's depth values and taking the depth value with the highest frequency, or the mean of two or more depth values sharing the highest frequency, as the depth of the region centroid;
B) after the arm pixels are labeled, traversing the depth map starting from the hand and marking every pixel labeled as arm whose distance to the hand satisfies the forearm-length constraint as a potential elbow point; then, starting from the shoulder, searching the arm again for pixels whose distance to the shoulder satisfies the upper-arm-length constraint; the intersection of these points with the previously marked elbow points gives the estimated elbow region, whose midpoint is labeled as the elbow position.
5. The method for real-time three-dimensional human pose recovery based on multimodal fusion according to claim 1, characterized in that the computation in step 8) is:
A) labeling the leg pixels among the foreground pixels of the depth map: first, the pixels representing the torso are labeled using the positions of the left and right shoulders, and the remaining regions connected to the torso are labeled as limbs and head;
B) after the leg pixels are labeled, traversing the depth map starting from the foot and marking every pixel labeled as leg whose distance to the foot satisfies the lower-leg-length constraint as a potential knee point; then, starting from the hip, searching the leg again for pixels whose distance to the hip satisfies the thigh-length constraint; the intersection of these points with the previously marked knee points gives the estimated knee region, whose midpoint is labeled as the knee position.
6. The method for real-time three-dimensional human pose recovery based on multimodal fusion according to claim 1, characterized in that the processing in step 9) is:
A) defining, for each body part, a constraint length D and a confidence level C, the constraint length D describing a constraint range: a sphere centered on the body part with radius D, describing the maximum displacement allowed for the part between two adjacent frames; the sizes of the constraint spaces of different body parts may differ, and the constraint space of a hand may be larger than that of a shoulder;
B) the confidence level representing the accuracy of the current coordinates of the body part, where the higher the value of C, the more accurate the position; the confidence level of every body part is initially set to 0; in a new frame, if the new position of a body part lies within the constraint space of that part in the previous frame, its confidence level is increased slightly, and if the confidence level has already reached the maximum it is left unchanged; otherwise, if the new position lies outside the constraint space of the previous frame, the part is moved only a distance of Length/C toward the new position, where Length is the length of the segment between the original position and the new position, and the confidence level of the part is then decreased slightly.
CN2012102308982A 2012-07-04 2012-07-04 Method for recovering real-time three-dimensional body posture based on multimodal fusion Pending CN102800126A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2012102308982A CN102800126A (en) 2012-07-04 2012-07-04 Method for recovering real-time three-dimensional body posture based on multimodal fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2012102308982A CN102800126A (en) 2012-07-04 2012-07-04 Method for recovering real-time three-dimensional body posture based on multimodal fusion

Publications (1)

Publication Number Publication Date
CN102800126A true CN102800126A (en) 2012-11-28

Family

ID=47199222

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2012102308982A Pending CN102800126A (en) 2012-07-04 2012-07-04 Method for recovering real-time three-dimensional body posture based on multimodal fusion

Country Status (1)

Country Link
CN (1) CN102800126A (en)

Cited By (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103336953A (en) * 2013-07-05 2013-10-02 深圳市中视典数字科技有限公司 Movement judgment method based on body sensing equipment
CN103745226A (en) * 2013-12-31 2014-04-23 国家电网公司 Dressing safety detection method for worker on working site of electric power facility
CN104167016A (en) * 2014-06-16 2014-11-26 西安工业大学 Three-dimensional motion reconstruction method based on RGB color and depth image
CN104573612A (en) * 2013-10-16 2015-04-29 北京三星通信技术研究有限公司 Equipment and method for estimating postures of multiple overlapped human body objects in range image
CN105407774A (en) * 2013-07-29 2016-03-16 三星电子株式会社 Auto-cleaning system, cleaning robot and method of controlling the cleaning robot
CN105574525A (en) * 2015-12-18 2016-05-11 天津中科智能识别产业技术研究院有限公司 Method and device for obtaining complex scene multi-mode biology characteristic image
CN106535759A (en) * 2014-04-09 2017-03-22 拜耳消费者保健股份公司 Method, apparatus, and computer-readable medium for generating a set of recommended orthotic products
CN106846324A (en) * 2017-01-16 2017-06-13 河海大学常州校区 A kind of irregular object height measurement method based on Kinect
CN107169262A (en) * 2017-03-31 2017-09-15 百度在线网络技术(北京)有限公司 Recommend method, device, equipment and the computer-readable storage medium of body shaping scheme
CN107230226A (en) * 2017-05-15 2017-10-03 深圳奥比中光科技有限公司 Determination methods, device and the storage device of human body incidence relation
CN107481286A (en) * 2017-07-11 2017-12-15 厦门博尔利信息技术有限公司 Dynamic 3 D schematic capture algorithm based on passive infrared reflection
CN107808128A (en) * 2017-10-16 2018-03-16 深圳市云之梦科技有限公司 A kind of virtual image rebuilds the method and system of human body face measurement
CN108295469A (en) * 2017-12-04 2018-07-20 成都思悟革科技有限公司 Game visual angle effect method based on motion capture technology
CN108542021A (en) * 2018-03-18 2018-09-18 江苏特力威信息系统有限公司 A kind of gym suit and limbs measurement method and device based on vitta identification
CN109353907A (en) * 2017-09-05 2019-02-19 日立楼宇技术(广州)有限公司 The security prompt method and system of elevator operation
CN110342252A (en) * 2019-07-01 2019-10-18 芜湖启迪睿视信息技术有限公司 A kind of article automatically grabs method and automatic grabbing device
CN110781820A (en) * 2019-10-25 2020-02-11 网易(杭州)网络有限公司 Game character action generating method, game character action generating device, computer device and storage medium
CN110909580A (en) * 2018-09-18 2020-03-24 北京市商汤科技开发有限公司 Data processing method and device, electronic equipment and storage medium
CN111144207A (en) * 2019-11-21 2020-05-12 东南大学 Human body detection and tracking method based on multi-mode information perception
CN111640176A (en) * 2018-06-21 2020-09-08 华为技术有限公司 Object modeling movement method, device and equipment
CN111680670A (en) * 2020-08-12 2020-09-18 长沙小钴科技有限公司 Cross-mode human head detection method and device
CN112037319A (en) * 2020-08-19 2020-12-04 上海佑久健康科技有限公司 Human body measuring method, system and computer readable storage medium
CN112150448A (en) * 2020-09-28 2020-12-29 杭州海康威视数字技术股份有限公司 Image processing method, device and equipment and storage medium
CN113065532A (en) * 2021-05-19 2021-07-02 南京大学 Sitting posture geometric parameter detection method and system based on RGBD image
CN113158910A (en) * 2021-04-25 2021-07-23 北京华捷艾米科技有限公司 Human skeleton recognition method and device, computer equipment and storage medium
CN113343925A (en) * 2021-07-02 2021-09-03 厦门美图之家科技有限公司 Face three-dimensional reconstruction method and device, electronic equipment and storage medium
CN113384844A (en) * 2021-06-17 2021-09-14 郑州万特电气股份有限公司 Fire extinguishing action detection method based on binocular vision and fire extinguisher safety practical training system
CN113591726A (en) * 2021-08-03 2021-11-02 电子科技大学 Cross mode evaluation method for Taijiquan training action
WO2021253777A1 (en) * 2020-06-19 2021-12-23 北京市商汤科技开发有限公司 Attitude detection and video processing methods and apparatuses, electronic device, and storage medium
CN116563952A (en) * 2023-07-07 2023-08-08 厦门医学院 Dynamic capture missing data recovery method combining graph neural network and bone length constraint

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101253373A (en) * 2005-08-30 2008-08-27 东芝开利株式会社 Indoor machine of air conditioner
CN101657825A (en) * 2006-05-11 2010-02-24 普莱姆传感有限公司 Modeling of humanoid forms from depth maps
CN102350700A (en) * 2011-09-19 2012-02-15 华南理工大学 Method for controlling robot based on visual sense

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101253373A (en) * 2005-08-30 2008-08-27 东芝开利株式会社 Indoor machine of air conditioner
CN101657825A (en) * 2006-05-11 2010-02-24 普莱姆传感有限公司 Modeling of humanoid forms from depth maps
CN102350700A (en) * 2011-09-19 2012-02-15 华南理工大学 Method for controlling robot based on visual sense

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HIMANSHU PRAKASH JAIN ET AL: "Real-Time Upper-Body Human Pose Estimation Using a Depth Camera", Computer Vision/Computer Graphics Collaboration Techniques, Lecture Notes in Computer Science *
ZHOU Juan et al.: "Multimodal face recognition based on intensity maps and depth maps", Computer Engineering and Applications *

Cited By (53)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103336953B (en) * 2013-07-05 2016-06-01 深圳市中视典数字科技有限公司 A kind of method passed judgment on based on body sense equipment action
CN103336953A (en) * 2013-07-05 2013-10-02 深圳市中视典数字科技有限公司 Movement judgment method based on body sensing equipment
US10265858B2 (en) 2013-07-29 2019-04-23 Samsung Electronics Co., Ltd. Auto-cleaning system, cleaning robot and method of controlling the cleaning robot
CN105407774B (en) * 2013-07-29 2018-09-18 三星电子株式会社 Automatic sweeping system, sweeping robot and the method for controlling sweeping robot
CN105407774A (en) * 2013-07-29 2016-03-16 三星电子株式会社 Auto-cleaning system, cleaning robot and method of controlling the cleaning robot
CN104573612A (en) * 2013-10-16 2015-04-29 北京三星通信技术研究有限公司 Equipment and method for estimating postures of multiple overlapped human body objects in range image
CN104573612B (en) * 2013-10-16 2019-10-22 北京三星通信技术研究有限公司 The device and method of the posture for the multiple human objects being overlapped in estimating depth image
CN103745226A (en) * 2013-12-31 2014-04-23 国家电网公司 Dressing safety detection method for worker on working site of electric power facility
CN106535759A (en) * 2014-04-09 2017-03-22 拜耳消费者保健股份公司 Method, apparatus, and computer-readable medium for generating a set of recommended orthotic products
CN104167016A (en) * 2014-06-16 2014-11-26 西安工业大学 Three-dimensional motion reconstruction method based on RGB color and depth image
CN104167016B (en) * 2014-06-16 2017-10-03 西安工业大学 A kind of three-dimensional motion method for reconstructing based on RGB color and depth image
CN105574525A (en) * 2015-12-18 2016-05-11 天津中科智能识别产业技术研究院有限公司 Method and device for obtaining complex scene multi-mode biology characteristic image
CN105574525B (en) * 2015-12-18 2019-04-26 天津中科虹星科技有限公司 A kind of complex scene multi-modal biological characteristic image acquiring method and its device
CN106846324A (en) * 2017-01-16 2017-06-13 河海大学常州校区 A kind of irregular object height measurement method based on Kinect
CN106846324B (en) * 2017-01-16 2020-05-01 河海大学常州校区 Irregular object height measuring method based on Kinect
CN107169262B (en) * 2017-03-31 2021-11-23 百度在线网络技术(北京)有限公司 Method, device, equipment and computer storage medium for recommending body shaping scheme
CN107169262A (en) * 2017-03-31 2017-09-15 百度在线网络技术(北京)有限公司 Recommend method, device, equipment and the computer-readable storage medium of body shaping scheme
CN107230226A (en) * 2017-05-15 2017-10-03 深圳奥比中光科技有限公司 Determination methods, device and the storage device of human body incidence relation
CN107481286A (en) * 2017-07-11 2017-12-15 厦门博尔利信息技术有限公司 Dynamic 3 D schematic capture algorithm based on passive infrared reflection
CN109353907A (en) * 2017-09-05 2019-02-19 日立楼宇技术(广州)有限公司 The security prompt method and system of elevator operation
CN109353907B (en) * 2017-09-05 2020-09-15 日立楼宇技术(广州)有限公司 Safety prompting method and system for elevator operation
CN107808128B (en) * 2017-10-16 2021-04-02 深圳市云之梦科技有限公司 Method and system for measuring five sense organs of human body through virtual image reconstruction
CN107808128A (en) * 2017-10-16 2018-03-16 深圳市云之梦科技有限公司 A kind of virtual image rebuilds the method and system of human body face measurement
CN108295469B (en) * 2017-12-04 2021-03-26 成都思悟革科技有限公司 Game visual angle conversion method based on motion capture technology
CN108295469A (en) * 2017-12-04 2018-07-20 成都思悟革科技有限公司 Game visual angle effect method based on motion capture technology
CN108542021A (en) * 2018-03-18 2018-09-18 江苏特力威信息系统有限公司 A kind of gym suit and limbs measurement method and device based on vitta identification
CN111640176A (en) * 2018-06-21 2020-09-08 华为技术有限公司 Object modeling movement method, device and equipment
US11436802B2 (en) 2018-06-21 2022-09-06 Huawei Technologies Co., Ltd. Object modeling and movement method and apparatus, and device
CN110909580B (en) * 2018-09-18 2022-06-10 北京市商汤科技开发有限公司 Data processing method and device, electronic equipment and storage medium
JP2021513175A (en) * 2018-09-18 2021-05-20 ベイジン センスタイム テクノロジー デベロップメント カンパニー, リミテッド Data processing methods and devices, electronic devices and storage media
WO2020057121A1 (en) * 2018-09-18 2020-03-26 北京市商汤科技开发有限公司 Data processing method and apparatus, electronic device and storage medium
CN110909580A (en) * 2018-09-18 2020-03-24 北京市商汤科技开发有限公司 Data processing method and device, electronic equipment and storage medium
US11238273B2 (en) 2018-09-18 2022-02-01 Beijing Sensetime Technology Development Co., Ltd. Data processing method and apparatus, electronic device and storage medium
CN110342252A (en) * 2019-07-01 2019-10-18 芜湖启迪睿视信息技术有限公司 A kind of article automatically grabs method and automatic grabbing device
CN110781820B (en) * 2019-10-25 2022-08-05 网易(杭州)网络有限公司 Game character action generating method, game character action generating device, computer device and storage medium
CN110781820A (en) * 2019-10-25 2020-02-11 网易(杭州)网络有限公司 Game character action generating method, game character action generating device, computer device and storage medium
CN111144207A (en) * 2019-11-21 2020-05-12 东南大学 Human body detection and tracking method based on multi-mode information perception
WO2021253777A1 (en) * 2020-06-19 2021-12-23 北京市商汤科技开发有限公司 Attitude detection and video processing methods and apparatuses, electronic device, and storage medium
CN111680670A (en) * 2020-08-12 2020-09-18 长沙小钴科技有限公司 Cross-mode human head detection method and device
CN111680670B (en) * 2020-08-12 2020-12-01 长沙小钴科技有限公司 Cross-mode human head detection method and device
CN112037319A (en) * 2020-08-19 2020-12-04 上海佑久健康科技有限公司 Human body measuring method, system and computer readable storage medium
CN112150448B (en) * 2020-09-28 2023-09-26 杭州海康威视数字技术股份有限公司 Image processing method, device and equipment and storage medium
CN112150448A (en) * 2020-09-28 2020-12-29 杭州海康威视数字技术股份有限公司 Image processing method, device and equipment and storage medium
CN113158910A (en) * 2021-04-25 2021-07-23 北京华捷艾米科技有限公司 Human skeleton recognition method and device, computer equipment and storage medium
CN113065532A (en) * 2021-05-19 2021-07-02 南京大学 Sitting posture geometric parameter detection method and system based on RGBD image
CN113065532B (en) * 2021-05-19 2024-02-09 南京大学 Sitting posture geometric parameter detection method and system based on RGBD image
CN113384844B (en) * 2021-06-17 2022-01-28 郑州万特电气股份有限公司 Fire extinguishing action detection method based on binocular vision and fire extinguisher safety practical training system
CN113384844A (en) * 2021-06-17 2021-09-14 郑州万特电气股份有限公司 Fire extinguishing action detection method based on binocular vision and fire extinguisher safety practical training system
CN113343925A (en) * 2021-07-02 2021-09-03 厦门美图之家科技有限公司 Face three-dimensional reconstruction method and device, electronic equipment and storage medium
CN113343925B (en) * 2021-07-02 2023-08-29 厦门美图宜肤科技有限公司 Face three-dimensional reconstruction method and device, electronic equipment and storage medium
CN113591726A (en) * 2021-08-03 2021-11-02 电子科技大学 Cross mode evaluation method for Taijiquan training action
CN116563952A (en) * 2023-07-07 2023-08-08 厦门医学院 Dynamic capture missing data recovery method combining graph neural network and bone length constraint
CN116563952B (en) * 2023-07-07 2023-09-15 厦门医学院 Dynamic capture missing data recovery method combining graph neural network and bone length constraint

Similar Documents

Publication Publication Date Title
CN102800126A (en) Method for recovering real-time three-dimensional body posture based on multimodal fusion
US10417775B2 (en) Method for implementing human skeleton tracking system based on depth data
CN109544636B (en) Rapid monocular vision odometer navigation positioning method integrating feature point method and direct method
CN105631861B (en) Restore the method for 3 D human body posture from unmarked monocular image in conjunction with height map
CN106055091B (en) A kind of hand gestures estimation method based on depth information and correcting mode
CN101604447B (en) No-mark human body motion capture method
CN102982557B (en) Method for processing space hand signal gesture command based on depth camera
CN109166149A (en) A kind of positioning and three-dimensional wire-frame method for reconstructing and system of fusion binocular camera and IMU
CN102074034B (en) Multi-model human motion tracking method
Yilmaz et al. A differential geometric approach to representing the human actions
CN102184541B (en) Multi-objective optimized human body motion tracking method
CN103714322A (en) Real-time gesture recognition method and device
CN110008913A (en) The pedestrian's recognition methods again merged based on Attitude estimation with viewpoint mechanism
CN102075686B (en) Robust real-time on-line camera tracking method
CN102622766A (en) Multi-objective optimization multi-lens human motion tracking method
CN109344694B (en) Human body basic action real-time identification method based on three-dimensional human body skeleton
CN102609683A (en) Automatic labeling method for human joint based on monocular video
CN104715493A (en) Moving body posture estimating method
CN111027432B (en) Gait feature-based visual following robot method
CN102682452A (en) Human movement tracking method based on combination of production and discriminant
JP2019096113A (en) Processing device, method and program relating to keypoint data
CN103559505A (en) 3D skeleton modeling and hand detecting method
CN102156994B (en) Joint positioning method for single-view unmarked human motion tracking
Zou et al. Automatic reconstruction of 3D human motion pose from uncalibrated monocular video sequences based on markerless human motion tracking
Darujati et al. Facial motion capture with 3D active appearance models

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20121128