CN103593679A - Visual human-hand tracking method based on online machine learning - Google Patents
- Publication number
- CN103593679A (publication number); CN201210292291.7A (application number)
- Authority
- CN
- China
- Prior art keywords
- target
- feature
- point
- pixel
- window
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Image Analysis (AREA)
Abstract
The invention discloses a visual human-hand tracking method based on online machine learning, belonging to the field of intelligent human-machine interaction. The method comprises: 1) determining a human-hand target in a hand image; 2) extracting features from the image and training a Hough forest detector, while detecting feature points inside the target window to initialize the feature-point group of a sub-tracker; 3) extracting features from each subsequently captured frame and using the Hough forest detector to detect the hand within a local search window, thereby determining the target window of the current frame; 4) if detection fails, adopting the positions on the current frame, estimated by the sub-tracker, of the feature points that lay on the target in the previous frame, to obtain the successfully tracked feature points; if the ratio of successfully tracked feature points to the total number of feature points in the group exceeds a set threshold, the tracking is considered valid, otherwise the sub-tracker's tracking is invalid. The method makes the tracker and the detector complement each other and thereby obtains a more robust tracking result.
Description
Technical field
The invention belongs to the fields of visual target tracking and intelligent human-machine interaction, and specifically relates to a robust vision-based human-hand tracking method using online machine learning.
Background art
Visual hand tracking is a key technology that merges many fields, such as image processing, pattern recognition and artificial intelligence. It is widely used wherever human-machine interaction is needed, such as video surveillance, smart televisions and intelligent robots. Because of these broad application prospects, research on visual hand tracking is flourishing both domestically and internationally.
In human-machine interaction environments, illumination varies greatly under the influence of daylight and artificial light; static and dynamic distractors in the background take many shapes and move unpredictably; and the mutual motion between the hand and other objects in the scene is complex, so the hand is easily occluded. In the face of these difficulties, achieving stable hand tracking, and thereby more intelligent and stable human-machine interaction, is of significant research value.
Conventional visual hand-tracking techniques can be divided into appearance-based methods and model-based methods. Appearance-based methods extract features from the image and match them against features specific to the hand; examples include optical flow, mean shift and maximally stable extremal regions. Model-based methods use a 3D or 2D hand model to predict hand features and match them against the observed features; examples include particle filters, geometric hand models and graphical models. For robustness in a specific environment, these classic methods depend on the fusion of many features and lack a reliable theoretical foundation. In recent years machine learning has been studied extensively in computer vision, and classifier-based object detection provides higher robustness for target tracking. Pure detection, however, lacks robustness to simple appearance changes such as illumination variation and rapid motion. Combining the advantages of detection and tracking to reach higher robustness therefore has important theoretical and practical value.
Summary of the invention
To address the technical problems in the prior art, the object of the present invention is to provide a visual human-hand tracking method based on online machine learning. The invention combines classifier-based detection with motion-continuity-based tracking through online learning, to achieve hand tracking that is robust in real-world application scenes. A Hough forest classifier (the detector) classifies the pixels in the search region, giving a conservative but stable estimate of the target; a tracker based on optical flow (called the sub-tracker) gives a more adaptive but less stable estimate; and a semi-supervised learning mechanism combines the two, using the resulting tracking output to generate new samples that update the Hough forest detector online. The tracker and the detector thus complement each other, yielding a more robust tracking result.
Technical content of the invention:
A robust human-hand tracking method based on online machine learning, mainly comprising the following steps:
1. Initialization: facing the camera with an upright fist, the human-hand target is delineated in the image either by simple skin-color-based hand detection or by manual selection. Using the window containing the hand target (the target window) and the extracted 16-channel image features, positive and negative samples are generated and used to train the Hough forest detector. At the same time, Good-Features-to-Track feature points for optical-flow tracking are detected inside the target window to initialize the tracker's feature-point group.
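The Good-Features-to-Track criterion used to seed the feature-point group selects pixels where the smaller eigenvalue of the local structure tensor is large. Below is a minimal NumPy sketch of that response; the function name and window size are illustrative, and a real implementation would typically call a library routine such as OpenCV's goodFeaturesToTrack instead.

```python
import numpy as np

def shi_tomasi_response(gray, k=1):
    """Minimum-eigenvalue corner response behind Good-Features-to-Track.

    Returns an array the same shape as `gray`; large values mark pixels
    that are good feature points for optical-flow tracking.
    """
    Iy, Ix = np.gradient(gray.astype(float))   # central-difference gradients
    def box(a):                                # sum over a (2k+1)^2 window
        s = np.zeros_like(a)
        for dy in range(-k, k + 1):
            for dx in range(-k, k + 1):
                s += np.roll(np.roll(a, dy, axis=0), dx, axis=1)
        return s
    A, B, C = box(Ix * Ix), box(Iy * Iy), box(Ix * Iy)
    # Smaller eigenvalue of the structure tensor [[A, C], [C, B]].
    return 0.5 * ((A + B) - np.sqrt((A - B) ** 2 + 4.0 * C ** 2))
```

Thresholding this response (and enforcing a minimum spacing between selected maxima) yields the initial feature-point group.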
2. Each time a new frame is obtained from the camera, features are extracted first. A neighborhood of the previous frame's target window then serves as the local search window of the current frame. The Hough forest detector classifies the features of every pixel in the local search window and returns the probability that the pixel belongs to each class. If a pixel's probability of belonging to the hand target exceeds a threshold, it is considered hand; otherwise it is considered background. Every pixel belonging to the hand casts a single vote for the hand's centre according to the generalized Hough transform. The extreme point of the final vote map closest to the previous centre position is taken as the detected target centre, and its vote value is recorded. If this vote value is below a threshold, detection is considered to have failed and the result of step 3 is checked further; otherwise detection succeeds and the extreme point is the valid centre position. The new target window of the current frame is determined from the displacement of the current centre relative to the previous frame's centre.
3. In parallel with detection, and based on a semi-supervised learning mechanism, a flock-of-features sub-tracker based on the LK optical flow method estimates, on the current frame, the positions of the previous frame's feature points on the target, yielding the successfully tracked feature points. Lost feature points are replenished by feature-point detection, producing a new feature-point group. If the ratio of points successfully tracked by optical flow to the total number of points in the group exceeds a threshold, the sub-tracker's result is considered valid; otherwise it is invalid, and if the aforementioned detection succeeded, new feature points are generated inside the detected target window to restart the tracker (reinitializing the tracker's target window).
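The sub-tracker's per-point LK step and the validity test of this step can be sketched as follows. This is a single-iteration, pure-NumPy illustration under a small-displacement assumption; all names and thresholds are hypothetical, and a practical system would use a pyramidal LK implementation.

```python
import numpy as np

def lk_track_point(prev, curr, pt, k=2):
    """One Lucas-Kanade step: estimate the displacement of `pt` between
    two frames by solving the local optical-flow normal equations over a
    (2k+1)x(2k+1) window. Returns (dx, dy, ok); ok is False when the
    system has no effective least-squares solution (a tracking loss)."""
    y, x = pt
    win = np.s_[y - k:y + k + 1, x - k:x + k + 1]
    Iy, Ix = np.gradient(prev.astype(float))
    It = curr.astype(float) - prev.astype(float)
    A = np.stack([Ix[win].ravel(), Iy[win].ravel()], axis=1)
    G = A.T @ A                        # 2x2 structure tensor of the window
    if np.linalg.eigvalsh(G)[0] < 1e-6:
        return 0.0, 0.0, False         # degenerate window -> point lost
    dx, dy = np.linalg.solve(G, -A.T @ It[win].ravel())
    return dx, dy, True

def tracking_valid(ok_flags, ratio_thresh=0.5):
    """Step-3 validity test: fraction of successfully tracked points."""
    return np.mean(ok_flags) > ratio_thresh
```

Points whose window yields a well-conditioned system are tracked stably; the others count as losses, and the surviving fraction decides whether the sub-tracker's result is valid.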
4. To generate accurate samples for the online training of the Hough forest detector, the target must be segmented. If detection succeeded, the detected centre position is back-projected through the Hough votes to obtain the seed points of a graph-cut segmentation, which produces the target's segmentation. Positive and negative samples are defined from this segmentation, and the Hough forest classifier is trained online. If detection failed, the feature points successfully tracked by the tracker serve as the graph-cut seed points for segmentation and online training. If both detection and tracking fail, no online training is done on the current frame; instead, full-image detection is run on subsequent frames until the target position is detected successfully, whereupon the detected target is used to restart the system's tracking process.
Specifically, in the initialization phase the invention extracts the initial hand target using a skin-color model derived from face detection. An Adaboost face detector based on Haar-like features locates the face, from which the skin color is extracted. Shape analysis of every non-face skin-color region then locates the initial hand target.
The feature extraction module extracts the 16-channel features used for target detection (the last 9 channels are our improvement) as follows:
1) The RGB image acquired by the camera is converted to Lab space, giving the features of the L, a and b channels;
2) The RGB image acquired by the camera is converted to a gray-scale image, whose first derivatives in the x direction (horizontal) and y direction (vertical) are computed; second derivatives are computed from the first derivatives, giving the features of four more channels, denoted Ix, Iy, Ixx, Iyy. The derivative operator used is the 3x3 Sobel operator.
3) For each pixel of the gray-scale image, the histogram-of-oriented-gradients (HOG) feature is computed over its local window (an image block of preset size centred on the pixel). HOG is computed as follows: the first derivatives Ix and Iy extracted in 2) give the gradient direction Ang and magnitude Mag:
Ang = atan2(Ix, Iy), Mag = sqrt(Ix^2 + Iy^2).
The gradient directions between 0 and pi are divided into 9 equal bins; inside the local window, each point casts a vote into the histogram bin of its gradient direction, weighted by its gradient magnitude Mag. Computing the gradient direction of every point yields 9 gradient-direction images, Ang_1 to Ang_9, where the value of each pixel of channel image II_c (channel c, c = 1, 2, ..., 9) is the vote value (i.e. frequency) of bin c of that local region. The integral image of each direction image is computed, giving 9 histogram integral images; with channel II_c of the histogram integral image, the gradient histogram of the c-th angular bin over any region can be computed quickly.
The 9-channel gradient histograms of all pixels of the whole image form the other 9 channels, denoted h1, h2, ..., h9.
Through the above steps, a 16-dimensional feature is obtained for each pixel.
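The derivative and orientation-histogram channels described above can be sketched in NumPy. The Lab channels, obtained by a standard color-space conversion, are omitted; np.gradient stands in for the Sobel operator, and the conventional atan2(Iy, Ix) argument order is used for the gradient direction (the text writes the arguments the other way round). All names are illustrative.

```python
import numpy as np

def derivative_channels(gray):
    """Channels 4-7 of the descriptor: Ix, Iy, Ixx, Iyy.

    np.gradient (central differences) stands in for the 3x3 Sobel operator.
    """
    Iy, Ix = np.gradient(gray.astype(float))   # gradients along rows, columns
    Iyy, _ = np.gradient(Iy)                   # second derivative in y
    _, Ixx = np.gradient(Ix)                   # second derivative in x
    return Ix, Iy, Ixx, Iyy

def hog_channels(gray, bins=9):
    """Channels 8-16: per-pixel orientation votes, one channel per bin.

    Each pixel votes into the bin of its gradient direction (folded into
    [0, pi)), weighted by its gradient magnitude; summing a channel over a
    window via an integral image then yields that window's HOG entry.
    """
    Ix, Iy, _, _ = derivative_channels(gray)
    ang = np.arctan2(Iy, Ix) % np.pi           # fold direction into [0, pi)
    mag = np.hypot(Ix, Iy)
    idx = np.minimum((ang / np.pi * bins).astype(int), bins - 1)
    chans = np.zeros((bins,) + gray.shape)
    for b in range(bins):
        chans[b][idx == b] = mag[idx == b]
    return chans
```

Cumulative sums of each orientation channel (integral images) then let a window's 9-bin histogram be read off in constant time, as the text describes.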
In the detection phase, using the above features, the concrete method for obtaining the aforementioned Hough-vote extreme point as the detected target position is:
1) Each pixel in the current image frame is classified by the Hough forest, giving the probability that the point belongs to the hand target. The decision function at an internal node of each decision tree is a binary test on the pixel's local window: (p, q) and (r, s) are two randomly selected points in the window, and I_a is the feature of a randomly selected channel a among the 16 channels; the branch is chosen by comparing I_a(p, q) with I_a(r, s). According to the structure of the Hough forest, each leaf node of every random decision tree stores the probability that features reaching that leaf belong to the target; if this probability exceeds a threshold, the leaf also stores voting vectors toward the target centre. For each pixel the forest returns the probability that it belongs to the target and, if so, the pixel's voting vector toward the target centre, which adds a weight proportional to the probability at the corresponding position of the confidence map in the Hough parameter space. All random decision trees are traversed to produce the full set of votes.
2) After all pixels are traversed, a confidence map of the target position is obtained, whose extreme point is the detected target position. If the vote value of this extreme point is not less than a threshold H, the detection is considered valid.
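The node test and the vote accumulation of the detection phase can be illustrated with a toy sketch. Here prob_map and vote_map stand in for the forest's per-pixel outputs; the names, data layout and threshold are assumptions for illustration only.

```python
import numpy as np

def node_test(features, a, pq, rs):
    """Internal-node decision: compare channel a at two window offsets
    (p, q) and (r, s), as in the decision function of the text."""
    return features[a][pq] < features[a][rs]

def hough_confidence(prob_map, vote_map, prob_thresh=0.5):
    """Accumulate Hough votes into a confidence map (detection-stage sketch).

    prob_map[y, x] : forest-estimated probability that the pixel is hand.
    vote_map[y, x] : offset (dy, dx) the pixel casts toward the centre.
    Pixels above prob_thresh vote with weight proportional to their
    probability; the argmax of the accumulator is the detected centre.
    """
    H, W = prob_map.shape
    acc = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            p = prob_map[y, x]
            if p > prob_thresh:
                cy, cx = y + vote_map[y, x, 0], x + vote_map[y, x, 1]
                if 0 <= cy < H and 0 <= cx < W:
                    acc[cy, cx] += p   # weight proportional to probability
    peak = np.unravel_index(np.argmax(acc), acc.shape)
    return acc, peak
```

If the accumulator value at the peak falls below the threshold H, the detection is treated as a failure and the sub-tracker's result is consulted instead.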
In the tracking phase, the basic tracker proceeds as follows:
1) Each feature point of the previous frame's flock is tracked with the LK sparse optical flow method. For some feature points, the local optical-flow equation system has a least-squares solution with sufficiently small error, and those points are tracked stably. For other feature points, the local system has no effective least-squares solution, and tracking is lost.
2) Feature points lost by the LK sparse optical flow tracking are replenished. A replacement point is obtained by randomly sampling pixels inside the target window, resampling if necessary to guarantee a minimum distance from the other points, and its position is adjusted to an appropriate location according to its distance to the other points (following the flocking rule).
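The replenishment step can be sketched as rejection sampling inside the target window. The final flock-rule position adjustment is only noted in a comment, and all names and parameters are illustrative.

```python
import numpy as np

def replenish_flock(points, window, n_needed, min_dist=3.0, rng=None, tries=50):
    """Replenish lost flock points by randomly sampling inside the target
    window, rejecting candidates that land too close to existing points.
    (The subsequent flock-rule position adjustment is omitted here.)

    window = (y0, x0, y1, x1); returns the augmented list of (y, x) points.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    pts = [tuple(p) for p in points]
    y0, x0, y1, x1 = window
    for _ in range(n_needed):
        for _ in range(tries):
            cand = (rng.uniform(y0, y1), rng.uniform(x0, x1))
            if all((cand[0] - py) ** 2 + (cand[1] - px) ** 2 >= min_dist ** 2
                   for py, px in pts):
                pts.append(cand)       # accepted: keeps the flock spread out
                break
    return pts
```

Keeping a minimum spacing prevents the flock from collapsing onto one strong feature, which is the point of the flocking rule.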
The Hough forest is updated online as follows:
1) If detection succeeded, the target centre position is back-projected through the Hough votes to obtain the "support points" of that position: the pixels that cast votes near the target position. These support points serve as seed points for a target segmentation based on the graph-cut algorithm.
2) If detection failed, the reliable result of the basic tracker guides the segmentation: the feature points stably tracked by the LK optical flow method serve as the seed points that start the graph-cut target segmentation.
3) Based on the segmentation obtained in 1) or 2), pixels segmented as foreground are taken as positive samples and pixels segmented as background as negative samples, and the Hough forest classifier is trained online.
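Seed-point generation by vote back-projection, and the labelling of training samples from the segmentation mask, can be sketched as follows. Here votes is a toy stand-in for the per-pixel Hough votes, and the graph-cut itself is assumed to be supplied by a library; names and the radius are illustrative.

```python
import numpy as np

def support_points(votes, center, radius=2.0):
    """Back-project the Hough votes: pixels whose vote lands near the
    detected centre are the 'support points' used as foreground seeds
    for the graph-cut segmentation.

    votes: mapping pixel (y, x) -> voted centre (y, x)."""
    cy, cx = center
    return [p for p, (vy, vx) in votes.items()
            if (vy - cy) ** 2 + (vx - cx) ** 2 <= radius ** 2]

def training_samples(seg_mask):
    """Label pixels from the segmentation mask: foreground pixels become
    positive samples, background pixels negative samples."""
    pos = list(zip(*np.nonzero(seg_mask)))
    neg = list(zip(*np.nonzero(~seg_mask)))
    return pos, neg
```

When detection fails, the stably tracked feature points would be passed in place of the back-projected support points as the foreground seeds.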
Compared with the prior art, the technical effects of the present invention are:
The invention achieves robust vision-based hand tracking: the Hough-forest-based target detection provides robustness to occlusion and distraction, while the combination of the LK-optical-flow flock sub-tracker with the classifier strengthens robustness to illumination variation and rapid motion. The framework also accommodates different trackers and classifiers, so it can meet more application demands.
Brief description of the drawings
Fig. 1 is the overall framework diagram of the invention;
Fig. 2 is the flow chart of the feature extraction method;
Fig. 3 is the schematic diagram of the training and detection of the Hough forest classifier;
Fig. 4 is the schematic diagram of combining the Hough forest detection result with the tracker's tracking result.
Embodiment
With reference to Fig. 1, the specific design of the hand tracking system of the invention, based on online machine learning, is as follows:
(1) System function:
The program acquires images through a USB camera. After image features are extracted and the initial hand target is detected, initial Hough forest training is performed, yielding the initial Hough forest classifier (detector). The program then switches from the hand-detection phase to the hand-tracking phase: in every frame subsequently acquired by the camera, image features are extracted for the Hough forest classifier, a neighborhood of the previous frame's target window determines the search window, and hand detection and tracking are carried out separately within it. If the Hough forest classifier (detector) successfully detects the target centre, that centre is back-projected to generate the seed points of the graph-cut method, which segments the target. If detection fails, the feature points tracked by the optical flow method serve as seed points to start the graph-cut segmentation; the segmented target is the target the system finally tracks. To keep improving the detector through online learning, the online training of the Hough forest classifier is carried out on the basis of this segmentation. If detection and tracking fail simultaneously, no online training is done on the current frame; full-image hand detection is run on subsequent frames until the target position is successfully detected, whereupon the detected target restarts the system's tracking process.
(2) System input:
The RGB images obtained by the camera.
(3) System output:
The delineated human-hand target, comprising its centre position and enclosing window.
(4) Specific implementation:
First, the classifier (detector) and the basic tracker independently complete the detection and basic tracking tasks. Using the extracted 16-channel features, the Hough forest classifier (detector) classifies every pixel of the search window and obtains its Hough vote information; the accumulated votes form a confidence map whose extreme point is the detected target position. If the extreme point's vote value is too small, detection is considered to have failed; otherwise detection succeeds, and the support pixels of the detected target position are computed and used as seed points to start the graph-cut segmentation. If, on the other hand, detection failed, the successfully tracked feature points start the graph-cut segmentation as seed points. The centre of gravity of the finally segmented target is returned as the system's target centre. Finally, positive and negative samples produced from the final segmentation train the Hough forest classifier online, for the target detection of the next frame.
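The control flow just described can be condensed into one per-frame routine, with the detector, sub-tracker, segmenter and online trainer injected as callables. This is a structural sketch of the loop, not the patented implementation; all names are hypothetical.

```python
def process_frame(detect, track, segment, train_online, full_detect):
    """One iteration of the main loop (Fig. 1). Returns the frame's target,
    or the full-image detection result when both the detector and the
    sub-tracker have failed."""
    center = detect()                  # Hough-forest detection in the window
    tracked_pts = track()              # LK flock tracking (empty = invalid)
    if center is not None:
        target = segment(seeds_from=center)        # seeds by back-projection
    elif tracked_pts:
        target = segment(seeds_from=tracked_pts)   # seeds from tracked points
    else:
        return full_detect()           # both failed: full-image re-detection
    train_online(target)               # update the Hough forest online
    return target
```

Note that online training runs only when a segmentation was produced, matching the rule that a frame with simultaneous detection and tracking failure contributes no training samples.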
With reference to Fig. 2, the steps of feature extraction are as follows:
(1) RGB is converted to Lab;
(2) RGB is converted to a gray-scale image;
(3) the first and second derivatives in the x and y directions are computed: Ix, Iy, Ixx, Iyy;
(4) the gradient histogram of each sub-window is computed, giving the 9-dimensional HOG feature within the sub-window.
Combining all these features yields a 16-dimensional feature describing one pixel. The pixel can be a key point or interest point, such as a corner, or an ordinary pixel.
With reference to Fig. 3, the schematic diagram of Hough forest detection and online training is explained as follows:
Because the testing and training steps of the Hough forest are similar, they are described in one figure. For each pixel in the search window, its feature is classified by the Hough forest. Each random tree classifies a feature vector as follows: at each internal node, the randomly chosen feature channel and the aforementioned decision function determine which branch the feature vector enters; the leaf node finally reached stores the probability that feature vectors reaching it belong to the target, which becomes the probability for the feature vector under test. The probabilities obtained from all random trees are averaged; if the average exceeds a threshold, the pixel is considered to belong to the target and all of its voting vectors cast votes in the Hough parameter space. The accumulated votes of all pixels in the search window form a confidence map of the target centre, whose extreme point is the detected centre. Online training proceeds similarly, except that its input is the target centre position and the training samples: when a feature vector reaches a leaf node, the frequency of the corresponding class stored at that leaf is incremented by 1 and the class probability distribution stored there is updated, achieving online learning. The detected centre is used to back-project the pixels inside the target, giving the segmentation seed points; a graph-cut segmentation is run from these seeds, and the centre of gravity of the segmentation result is the corrected detected target centre. Finally, pixels segmented as foreground serve as positive samples and pixels segmented as background as negative samples to train the Hough forest.
With reference to Fig. 4, the schematic diagram of combining the Hough forest detection result with the tracker's tracking result is explained as follows:
The main purpose of combining the Hough forest detection result with the tracker's result is to improve the detector's classification ability through the tracker's adaptability and, conversely, to improve the tracker's stability through the detector's stability, raising the whole system's tracking performance through their complementarity. When the LK optical flow method fails to track the flock, the successful detection result resets it: corner detection and flock processing inside the target window regenerate a flock distributed over the target. Conversely, when detection fails, the feature points successfully tracked by the LK optical flow method serve as seed points to start the graph-cut segmentation, whose result trains the Hough forest detector online, improving the detector's subsequent performance in the system. The complementarity of the two greatly improves the system's overall tracking performance.
The above examples illustrate the invention. Although the preferred embodiments and drawings of the invention are disclosed for the purpose of illustration, those skilled in the art will appreciate that various replacements, variations and modifications are possible without departing from the spirit and scope of the invention and the appended claims. Therefore, the invention should not be limited to the content disclosed by the preferred embodiments and drawings.
Claims (7)
1. A visual human-hand tracking method based on online machine learning, the steps of which are:
1) delineating the human-hand target in a captured hand image, and taking the window containing the hand as the target window;
2) extracting features from the captured image and training a Hough forest detector; simultaneously detecting feature points inside the target window to initialize the feature-point group of a sub-tracker;
3) for each subsequently captured frame, extracting features, then taking a set neighborhood of the previous frame's target window as the local search window of the current frame, and using the Hough forest detector to detect the hand within the local search window, thereby determining the target window of the current frame; if detection fails, performing step 4);
4) adopting the positions on the current frame, estimated by the sub-tracker, of the previous frame's feature points on the target, to obtain the successfully tracked feature points; if the ratio of successfully tracked feature points to the total number of feature points in the group exceeds a set threshold, the tracking is considered valid, otherwise the sub-tracker's tracking is considered invalid;
wherein, if the detection of step 3) succeeds, new feature points are generated inside the current frame's target window to restart the sub-tracker's tracking; the target is segmented, and the Hough forest classifier is trained online on the basis of that segmentation; if detection fails, the target is segmented using the feature points successfully tracked in step 4), and the online training of the Hough forest classifier is carried out on the basis of that segmentation; if both detection and tracking fail, full-image detection is carried out on subsequent frames until the target position is successfully detected, and the detected target is then tracked.
2. The method of claim 1, characterized in that the features of the captured image are extracted as follows:
21) converting the captured RGB image to Lab space, giving the features of the L, a and b channels;
22) converting the captured RGB image to a gray-scale image, computing its first derivatives in the x direction (horizontal) and y direction (vertical), and computing second derivatives from the first derivatives, giving four more channels, denoted Ix, Iy, Ixx, Iyy;
23) for each pixel of the gray-scale image, computing the histogram-of-oriented-gradients feature in its local window: the gradient direction Ang and magnitude Mag;
24) dividing the gradient directions between 0 and pi into 9 equal bins, and letting each pixel of the local window cast a vote into the histogram bin of its gradient direction, weighted by its gradient magnitude Mag; computing the gradient direction of every pixel yields the 9-channel gradient-direction images Ang_1 to Ang_9;
25) computing the integral image of each direction image to obtain 9 histogram integral images, and using them to compute the 9-channel gradient histogram of each pixel;
26) the 9-channel gradient histograms of all pixels of the whole image form the other 9 channels, denoted h1, h2, ..., h9.
3. The method of claim 1 or 2, characterized in that the hand is detected in the local search window with the Hough forest detector and the current frame's target window is determined as follows: the Hough forest detector classifies the features of each pixel in the local search window and returns the probability that the pixel belongs to each class; if a pixel's probability of belonging to the hand target exceeds a set threshold it is considered to belong to the hand target, otherwise to the background; each pixel belonging to the hand target casts a single vote for the hand target's centre; the extreme point of the final vote map closest to the previous centre position is then the detected target centre, and its vote value is recorded; if this vote value is less than a threshold H, the detection is considered to have failed; otherwise detection succeeds, and the current frame's target window is determined from the displacement of the current centre relative to the previous frame's centre position.
4. The method of claim 3, characterized in that the extreme point serving as the detected target centre is obtained as follows:
41) each pixel of the current frame is classified by the Hough forest classifier, giving the probability that the point belongs to the hand target;
42) after all pixels are traversed, a confidence map of the target position is obtained, whose extreme point is the detected target position; if the vote value of this extreme point is not less than the set threshold H, the detection is considered valid and the extreme point is the detected target centre.
5. The method of claim 1, characterized in that the sub-tracker replenishes lost feature points by the method of feature-point detection, obtaining a new feature-point group.
6. The method of claim 1, characterized in that the Hough forest classifier is trained online as follows:
61) if detection succeeded, the target centre is back-projected through the Hough votes to obtain the support points of that position, which serve as seed points for target segmentation; the support points are the pixels that cast votes near the target centre;
62) if detection failed, the successfully tracked feature points guide the segmentation: the feature points stably tracked by the LK optical flow method serve as seed points for the target segmentation;
63) based on the segmentation obtained in 61) or 62), pixels segmented as foreground are taken as positive samples and pixels segmented as background as negative samples, and the Hough forest classifier is trained online.
7. The method of claim 1, characterized in that the sub-tracker is a tracker based on the optical flow method.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201210292291.7A CN103593679A (en) | 2012-08-16 | 2012-08-16 | Visual human-hand tracking method based on online machine learning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN103593679A true CN103593679A (en) | 2014-02-19 |
Family
ID=50083811
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201210292291.7A Pending CN103593679A (en) | 2012-08-16 | 2012-08-16 | Visual human-hand tracking method based on online machine learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103593679A (en) |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103985137A (en) * | 2014-04-25 | 2014-08-13 | 北京大学深圳研究院 | Moving object tracking method and system applied to human-computer interaction |
CN104809455A (en) * | 2015-05-19 | 2015-07-29 | 吉林大学 | Action recognition method based on distinguishable binary tree voting |
CN106683113A (en) * | 2016-10-27 | 2017-05-17 | 纳恩博(北京)科技有限公司 | Characteristic point tracking method and device |
CN107886057A (en) * | 2017-10-30 | 2018-04-06 | 南京阿凡达机器人科技有限公司 | Detection method of waving, system and a kind of robot of a kind of robot |
CN108062861A (en) * | 2017-12-29 | 2018-05-22 | 潘彦伶 | A kind of intelligent traffic monitoring system |
CN108229282A (en) * | 2017-05-05 | 2018-06-29 | 商汤集团有限公司 | Critical point detection method, apparatus, storage medium and electronic equipment |
CN109712171A (en) * | 2018-12-28 | 2019-05-03 | 上海极链网络科技有限公司 | A kind of Target Tracking System and method for tracking target based on correlation filter |
CN109978801A (en) * | 2019-03-25 | 2019-07-05 | 联想(北京)有限公司 | A kind of image processing method and image processing apparatus |
CN110097578A (en) * | 2019-05-09 | 2019-08-06 | 电子科技大学 | Plastic grains tracking |
US10372228B2 (en) | 2016-07-20 | 2019-08-06 | Usens, Inc. | Method and system for 3D hand skeleton tracking |
US10733474B2 (en) | 2018-07-03 | 2020-08-04 | Sony Corporation | Method for 2D feature tracking by cascaded machine learning and visual tracking |
CN113033256A (en) * | 2019-12-24 | 2021-06-25 | 武汉Tcl集团工业研究院有限公司 | Training method and device for fingertip detection model |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101630363A (en) * | 2009-07-13 | 2010-01-20 | The 709th Research Institute of China Shipbuilding Industry Corporation | Rapid detection method of face in color image under complex background |
2012
- 2012-08-16 CN CN201210292291.7A patent/CN103593679A/en active Pending
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101630363A (en) * | 2009-07-13 | 2010-01-20 | The 709th Research Institute of China Shipbuilding Industry Corporation | Rapid detection method of face in color image under complex background |
Non-Patent Citations (4)
Title |
---|
HONG LIU ET AL.: "Robust Hand Tracking with Hough Forest and Multi-cue Flocks of Features", 8TH INTERNATIONAL SYMPOSIUM ON VISUAL COMPUTING * |
MATHIAS KOLSCH ET AL.: "Fast 2D Hand Tracking with Flocks of Features and Multi-Cue Integration", PROCEEDINGS OF THE 2004 IEEE COMPUTER SOCIETY CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS * |
CUI WENHUAN: "Human Hand Motion Tracking Algorithm Based on Online Machine Learning", NSTL NATIONAL SCIENCE AND TECHNOLOGY LIBRARY * |
HU WENJING: "Research on Automatic Face Recognition Technology and Its Implementation in a Personnel Identity Authentication System", CHINA DOCTORAL DISSERTATIONS FULL-TEXT DATABASE * |
Cited By (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2015161697A1 (en) * | 2014-04-25 | 2015-10-29 | 深港产学研基地 | Method and system for tracking moving object applied to man-machine interaction |
CN103985137B (en) * | 2014-04-25 | 2017-04-05 | 深港产学研基地 | It is applied to the moving body track method and system of man-machine interaction |
CN103985137A (en) * | 2014-04-25 | 2014-08-13 | 北京大学深圳研究院 | Moving object tracking method and system applied to human-computer interaction |
CN104809455A (en) * | 2015-05-19 | 2015-07-29 | 吉林大学 | Action recognition method based on distinguishable binary tree voting |
CN104809455B (en) * | 2015-05-19 | 2017-12-19 | 吉林大学 | Action identification method based on the ballot of discriminability binary tree |
US10372228B2 (en) | 2016-07-20 | 2019-08-06 | Usens, Inc. | Method and system for 3D hand skeleton tracking |
CN106683113A (en) * | 2016-10-27 | 2017-05-17 | 纳恩博(北京)科技有限公司 | Characteristic point tracking method and device |
CN106683113B (en) * | 2016-10-27 | 2020-02-04 | 纳恩博(北京)科技有限公司 | Feature point tracking method and device |
CN108229282A (en) * | 2017-05-05 | 2018-06-29 | 商汤集团有限公司 | Critical point detection method, apparatus, storage medium and electronic equipment |
CN107886057A (en) * | 2017-10-30 | 2018-04-06 | 南京阿凡达机器人科技有限公司 | Detection method of waving, system and a kind of robot of a kind of robot |
CN107886057B (en) * | 2017-10-30 | 2021-03-30 | 南京阿凡达机器人科技有限公司 | Robot hand waving detection method and system and robot |
CN108062861B (en) * | 2017-12-29 | 2021-01-15 | 北京安自达科技有限公司 | Intelligent traffic monitoring system |
CN108062861A (en) * | 2017-12-29 | 2018-05-22 | Pan Yanling | A kind of intelligent traffic monitoring system |
US10733474B2 (en) | 2018-07-03 | 2020-08-04 | Sony Corporation | Method for 2D feature tracking by cascaded machine learning and visual tracking |
CN109712171A (en) * | 2018-12-28 | 2019-05-03 | 上海极链网络科技有限公司 | A kind of Target Tracking System and method for tracking target based on correlation filter |
CN109712171B (en) * | 2018-12-28 | 2023-09-01 | 厦门瑞利特信息科技有限公司 | Target tracking system and target tracking method based on correlation filter |
CN109978801A (en) * | 2019-03-25 | 2019-07-05 | 联想(北京)有限公司 | A kind of image processing method and image processing apparatus |
CN109978801B (en) * | 2019-03-25 | 2021-11-16 | 联想(北京)有限公司 | Image processing method and image processing device |
CN110097578A (en) * | 2019-05-09 | 2019-08-06 | 电子科技大学 | Plastic grains tracking |
CN110097578B (en) * | 2019-05-09 | 2021-08-17 | 电子科技大学 | Plastic particle tracking method |
CN113033256A (en) * | 2019-12-24 | 2021-06-25 | 武汉Tcl集团工业研究院有限公司 | Training method and device for fingertip detection model |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103593679A (en) | Visual human-hand tracking method based on online machine learning | |
CN103824070B (en) | A kind of rapid pedestrian detection method based on computer vision | |
CN106327526A (en) | Image object tracking method and image object tracking system | |
CN104183127B (en) | Traffic surveillance video detection method and device | |
CN103198493B (en) | A kind ofly to merge and the method for tracking target of on-line study based on multiple features self-adaptation | |
CN105260749B (en) | Real-time target detection method based on direction gradient binary pattern and soft cascade SVM | |
CN102609720B (en) | Pedestrian detection method based on position correction model | |
CN103886325B (en) | Cyclic matrix video tracking method with partition | |
CN103426179B (en) | A kind of method for tracking target based on mean shift multiple features fusion and device | |
CN101847265A (en) | Method for extracting moving objects and partitioning multiple objects used in bus passenger flow statistical system | |
CN103164858A (en) | Adhered crowd segmenting and tracking methods based on superpixel and graph model | |
CN103208008A (en) | Fast adaptation method for traffic video monitoring target detection based on machine vision | |
CN110084165A (en) | The intelligent recognition and method for early warning of anomalous event under the open scene of power domain based on edge calculations | |
CN106355604A (en) | Target image tracking method and system | |
CN105160649A (en) | Multi-target tracking method and system based on kernel function unsupervised clustering | |
CN105608417A (en) | Traffic signal lamp detection method and device | |
CN104615986A (en) | Method for utilizing multiple detectors to conduct pedestrian detection on video images of scene change | |
CN105404894A (en) | Target tracking method used for unmanned aerial vehicle and device thereof | |
CN103605971A (en) | Method and device for capturing face images | |
CN104123714B (en) | A kind of generation method of optimal objective detection yardstick in people flow rate statistical | |
CN105184229A (en) | Online learning based real-time pedestrian detection method in dynamic scene | |
CN103426008A (en) | Vision human hand tracking method and system based on on-line machine learning | |
CN113378649A (en) | Identity, position and action recognition method, system, electronic equipment and storage medium | |
CN104463909A (en) | Visual target tracking method based on credibility combination map model | |
CN115620090A (en) | Model training method, low-illumination target re-recognition method and device and terminal equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication | ||
Application publication date: 20140219 |