CN103530613B - Target person hand gesture interaction method based on monocular video sequence - Google Patents


Info

Publication number
CN103530613B
CN103530613B (application CN201310481745.XA)
Authority
CN
China
Prior art keywords
palm
target person
model
target
color
Prior art date
Legal status
Active
Application number
CN201310481745.XA
Other languages
Chinese (zh)
Other versions
CN103530613A (en)
Inventor
黄飞
侯立民
田泽康
谢建
许永喜
张琦
Current Assignee
Tianchen Times Technology Co.,Ltd.
Original Assignee
Yi Teng Teng Polytron Technologies Inc
Priority date
Filing date
Publication date
Application filed by Yi Teng Teng Polytron Technologies Inc filed Critical Yi Teng Teng Polytron Technologies Inc
Priority to CN201310481745.XA priority Critical patent/CN103530613B/en
Publication of CN103530613A publication Critical patent/CN103530613A/en
Application granted granted Critical
Publication of CN103530613B publication Critical patent/CN103530613B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Abstract

The invention provides a target person hand gesture interaction method based on a monocular video sequence, comprising the following steps: first, acquire an image from the monocular video frame sequence, extract a motion foreground mask with a motion detection algorithm, detect the minimum enclosing rectangle of the palm with a palm classifier, and screen out the target person's palm; extract a color histogram model from the target person's palm image, compute the back-projection image of the palm, and build an area model of the target person's palm; for the tracked target region, compute the back-projection image with the color model and measure the hand area in the current frame to judge the static gesture of the target person's hand: fist or palm; use the fist and palm postures for click or move interaction control. The method can screen out the target person's hand against a complex background and accomplishes the tracking of an arbitrary hand and the recognition of any preset trajectory. It is simple, fast and stable, and can be applied on embedded platforms with low computing power.

Description

Target person gesture interaction method based on a monocular video sequence
Technical field
The present invention relates to human-computer interaction technology, and in particular to a target person gesture interaction method based on a monocular video sequence.
Background technology
Motion-sensing control based on gesture recognition has become an important means of human-computer interaction. A common camera captures the user's motion; a pattern-recognition algorithm detects and locates the hand in the image and identifies its shape; and the recognition result is converted into an operation signal that is fed back to a terminal such as a smart TV to trigger the corresponding command, for example switching TV programs, adjusting the volume, or simple game interaction. Gesture recognition relies only on the camera already provided on the smart terminal plus recognition software installed on it, so it has great advantages in hardware cost and mode of operation, and is increasingly becoming a standard module of smart TVs.
According to the research on gesture recognition and the evolution of its applications, the technology can be roughly divided into the following approaches:
(1) Based on data gloves or accessories: the user wears special gloves or markers that are recognized through the camera. The gloves are specially designed and have distinctive features, but this wearable mode of operation clearly cannot meet the needs of natural human-computer interaction, so the method has never been widely used;
(2) Based on 3D depth cameras: a three-dimensional scanning device acquires a dynamic 3D model of the operator. Because it works in 3D space, it avoids many of the difficulties present in 2D space, such as color interference and image segmentation. However, 3D scanning devices are bulky, their hardware cost is high, and they demand substantial computing power, so they are difficult to integrate into popular smart terminals such as TVs and mobile phones;
(3) Based on 2D image recognition with a common camera: because this technology is realized with an ordinary camera, it also has the greatest potential for large-scale application, but its shortcomings are equally clear: a) for gesture detection based purely on skin-color features, changes in ambient illumination easily alter the color characteristics of the hand and make detection difficult; b) for gesture detection based on shape features, similar objects in a complex background easily cause false detections; c) gesture detection algorithms that fuse multiple features improve detection accuracy, but they still cannot fundamentally solve the problems of illumination and background; more importantly, as algorithm complexity rises, the computational load increases sharply, which clearly cannot meet the needs of existing terminals such as smart TVs.
In summary, because of their inherent complexity and the large amount of video processing they require, current gesture recognition algorithms can hardly run smoothly on the embedded platforms of existing smart terminals such as smart TVs.
How to develop a simple, fast and stable gesture recognition algorithm that can be applied on embedded platforms with low computing power has therefore become an urgent problem, and for any gesture interaction system, the detection, tracking and recognition of gestures are the critical parts.
Content of the invention
The present invention proposes a target person gesture interaction method based on a monocular video sequence, to solve the prior-art problem that gesture recognition algorithms, owing to their complexity and the large amount of video processing required, can hardly run smoothly on the embedded platforms of existing smart terminals such as smart TVs.
The technical solution of the present invention is achieved as follows:
A target person gesture interaction method based on a monocular video sequence comprises the following steps:
S1, target person palm selection, performed before the interactive system starts: acquire a monocular video frame sequence image, extract the motion foreground mask with a motion detection algorithm, detect the minimum enclosing rectangle of the palm with a palm classifier, and screen out the target person's palm according to how the foreground mask overlaps the palm rectangles;
S2, target person palm feature modeling: extract a color histogram model from the target person's palm image, compute the back-projection image of the palm with the color histogram model, and accumulate an area model of the target person's palm;
S3, target person palm tracking: track with a hand target tracking algorithm, starting from the palm position of the previous frame, to find the target palm position in the current frame; if no palm can be detected, track the palm jointly with the back-projection image of the color model and the motion foreground mask;
S4, target person hand recognition: includes motion trajectory recognition, establishment of a mapping coordinate system, and static gesture recognition;
S5, interaction: use the two postures, fist and palm, for click or move interaction control.
Preferably, in step S1 the motion detection algorithm uses a statistical background model to obtain the foreground mask; if the previous frame detected or tracked the target palm, then when the background model is updated the hand rectangle is refreshed with the background model data while the region outside it is refreshed with the current frame data.
Preferably, when the palm classifier in step S1 detects multiple palms, they are screened in turn by overlap count, rectangle size, rectangle position and the color of the target inside the rectangle, to confirm the target person's palm.
Preferably, in the HSV color-space histogram of step S2, the numbers of bins of the H, S and V components are {64, 32, 32} respectively.
Preferably, the pixels used to extract the color histogram model in step S2 are taken from a region slightly below the middle of the palm detection box.
Preferably, when in step S2 the color histogram model is used to compute and binarize the back-projection of the palm image and to extract the area model of the target palm, the effective size of the palm is taken as the number of valid skin-color pixels of the target palm.
Preferably, step S3 comprises the following specific steps:
(1) if no palm can be detected, track the hand directly with the back-projection image of the color model and the motion foreground mask; otherwise go to step (2);
(2) in the region near the previous frame's target palm, compute the back-projection of this region with the target palm color model and binarize it, obtaining the color mask maskhsv;
(3) combine maskhsv and maskfg to exclude color and motion distractors, obtaining masktrack;
(4) run the hand target tracking algorithm on masktrack, starting from the previous frame's hand position, to find the target palm position in the current frame.
Preferably, the hand target tracking algorithm is the meanshift algorithm.
Preferably, the static gesture recognition in step S4 comprises the following specific steps:
(1) for the tracked target region, compute the back-projection with the color model, count the valid pixels in the back-projection image and extract the area of the target palm; compare it with the area model: smaller than the area model is recognized as a fist, larger as a palm;
(2) when no palm can be detected, compute the palm region area after back-projection in the current frame and divide it by the target palm area model; a ratio larger than the threshold is considered a palm, otherwise a fist, the threshold being the pixel count in maskhsv.
Preferably, establishing the mapping coordinate system in step S4 comprises the following specific steps:
(1) collect statistics over the cached multi-frame palm positions to obtain the center, width and height of the comfortable manipulation zone of the hand;
(2) map this center to the center of the display device and the comfort-zone width to the display width, keeping the height-to-width ratio of the comfort zone consistent with that of the display, thereby establishing the mapping coordinate system, and show the mouse cursor.
The target person gesture interaction method based on a monocular video sequence provided by the present invention includes target hand selection, tracking, motion trajectory recognition, static posture recognition, adaptive mapping between the operating area and the display area, and interaction control. It has the following advantages:
1. the target hand can be selected against a complex background;
2. an arbitrary hand can be tracked and any preset trajectory recognized;
3. the comfortable manipulation zone of the hand is mapped adaptively;
4. the method runs smoothly on the embedded platforms of existing smart terminals such as smart TVs.
Brief description
To explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; a person of ordinary skill in the art can obtain other drawings from them without creative labor.
Fig. 1 is a block diagram of the workflow of the present invention;
Fig. 2 is a structural block diagram of the human-computer interaction system in Fig. 1;
Fig. 3 is a schematic flow chart of the target person palm selection process in the present invention;
Fig. 4 is a schematic flow chart of the target person palm feature modeling process in the present invention;
Fig. 5 is a schematic flow chart of the target person palm tracking process in the present invention;
Fig. 6 is a schematic flow chart of the motion trajectory recognition process in the present invention;
Fig. 7 is a schematic flow chart of establishing the mapping coordinate system in the present invention;
Fig. 8 is a schematic flow chart of the static gesture recognition process in the present invention;
Fig. 9 is a schematic flow chart of the interaction process in the present invention;
Fig. 10 is a schematic flow chart of the overall operation of the present invention.
Specific embodiment
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without creative labor fall within the protection scope of the present invention.
To help clarify the description of the subsequent embodiments, some terms are explained before the detailed description; the following explanations apply to this specification and the claims:
maskhsv: color mask, the binarized back-projection in the color space;
maskfg: motion foreground mask;
masktrack: tracking mask;
haar-adaboost: AdaBoost algorithm based on Haar features;
lbp-adaboost: AdaBoost algorithm based on LBP features;
hog-boost: boosting algorithm based on HOG features.
As shown in Fig. 1, Fig. 2, Fig. 3 and Fig. 10, when performing the hand selection process the present invention first starts a capture device such as a camera, shoots the gesture video stream and processes it: the gesture video stream is segmented and converted into image frames, a gesture template is built from those frames, and finally the recognized gesture is converted into the corresponding mouse operation.
Before the interactive system starts, the hand selection proceeds as follows:
(1) acquire the monocular video sequence data;
(2) extract the motion foreground mask maskfg with the motion detection algorithm;
(3) judge whether the number of pixels contained in maskfg exceeds area, the pixel-count threshold of a hand image; if so, go to step (4); otherwise update directly with the background model data, and there is no target palm;
(4) judge whether the palm classifier detects a palm bounding rectangle; if so, go to step (5); otherwise update directly with the background model data, and there is no target palm;
(5) judge whether a palm among the multiple palm bounding rectangles contains the most foreground pixels; if so, update the background model of the motion detection algorithm with the current frame data, and the selection of the target palm is complete; otherwise update directly with the background model data, and there is no target palm.
The background model used to update the target palm region may also be a linear combination of multiple background models, and the palm classifier may be any one of haar-adaboost, lbp-adaboost and hog-boost, or another classifier.
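A minimal sketch of this selection step, assuming OpenCV in Python; the cascade file name and the area threshold are hypothetical stand-ins, since the patent leaves the concrete classifier and thresholds open:

```python
import cv2

bg_model = cv2.createBackgroundSubtractorMOG2()           # statistical background model
palm_cascade = cv2.CascadeClassifier("palm_cascade.xml")  # hypothetical haar-adaboost palm classifier
MIN_HAND_AREA = 1500                                      # hypothetical pixel threshold ("area")

def select_target_palm(frame):
    """Return the bounding rect (x, y, w, h) of the target palm, or None."""
    mask_fg = bg_model.apply(frame)                       # motion foreground mask (maskfg)
    if cv2.countNonZero(mask_fg) < MIN_HAND_AREA:
        return None                                       # too little motion: no target palm
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    rects = palm_cascade.detectMultiScale(gray, 1.1, 3)   # palm bounding rectangles
    if len(rects) == 0:
        return None

    def fg_pixels(r):                                     # foreground pixels inside a rect
        x, y, w, h = r
        return cv2.countNonZero(mask_fg[y:y + h, x:x + w])

    # The target palm is the candidate covering the most foreground pixels.
    best = max(rects, key=fg_pixels)
    return tuple(best) if fg_pixels(best) > 0 else None
```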
As shown in Fig. 4, when the palm classifier detects the target palm, the target palm feature modeling process is performed. The target palm feature model includes a size model and a color histogram model, and the specific steps are as follows:
(1) judge whether the target palm rectangle[i] is detected; if so, compute, in the lower-middle region of rectangle[i], the histogram distribution histpalm[i] of the pixels in the HSV color space, which is the color model of the target palm; H, S and V denote hue, saturation and value (brightness) respectively;
(2) in the rectangle[i] region, compute the back-projection image of histpalm[i] and binarize it to obtain the color mask maskhsv; the pixel count in maskhsv is the size model.
The invention is not restricted to the HSV color space; the RGB color space, the YUV color space and others are also possible.
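This modeling step maps naturally onto OpenCV's histogram API. A sketch under that assumption, using the {64, 32, 32} HSV bins from step S2; the binarization threshold of 50 is a hypothetical choice:

```python
import cv2

def build_palm_model(frame, rect):
    """Return (histpalm, area_model) for the detected palm rectangle."""
    x, y, w, h = rect
    # Sample pixels from the lower-middle region of the detection box.
    roi = frame[y + h // 2 : y + h, x + w // 4 : x + 3 * w // 4]
    hsv = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1, 2], None, [64, 32, 32],
                        [0, 180, 0, 256, 0, 256])         # histpalm, {64, 32, 32} bins
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    # Back-project over the whole detection box and binarize to get maskhsv.
    hsv_box = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
    backproj = cv2.calcBackProject([hsv_box], [0, 1, 2], hist,
                                   [0, 180, 0, 256, 0, 256], 1)
    _, mask_hsv = cv2.threshold(backproj, 50, 255, cv2.THRESH_BINARY)
    area_model = cv2.countNonZero(mask_hsv)               # skin-pixel count = size model
    return hist, area_model
```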
As shown in Fig. 5, after the hand is selected, target hand tracking is performed with the following specific steps:
(1) judge whether the color model histpalm[i] of the tracked target palm exists; if so, in the region near the previous frame's target hand, compute the back-projection image in the HSV color space with the target palm color model histpalm[i] and binarize it, obtaining the color mask maskhsv;
(2) combine maskhsv and maskfg to exclude color and motion distractors, obtaining masktrack;
(3) run the hand tracking algorithm as a local optimum search: starting from the previous frame's hand position, track on masktrack to find the target hand position in the current frame.
The hand tracking algorithm may be the meanshift algorithm, a stable method for finding local peaks in the density distribution of the relevant data; the invention is not restricted to the meanshift iterative optimization search algorithm, and other similar algorithms are also possible.
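A sketch of this tracking step, again assuming OpenCV: the color mask and the motion mask are combined into masktrack, and cv2.meanShift searches from the previous window. Here hist and prev_window would come from the modeling and selection sketches above:

```python
import cv2

def track_palm(frame, fg_mask, hist, prev_window):
    """Return the palm window (x, y, w, h) in the current frame."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    backproj = cv2.calcBackProject([hsv], [0, 1, 2], hist,
                                   [0, 180, 0, 256, 0, 256], 1)
    _, mask_hsv = cv2.threshold(backproj, 50, 255, cv2.THRESH_BINARY)
    # masktrack keeps only pixels that are both skin-colored and moving,
    # which excludes color and motion distractors.
    mask_track = cv2.bitwise_and(mask_hsv, fg_mask)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
    _, new_window = cv2.meanShift(mask_track, prev_window, criteria)
    return new_window
```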
Target hand recognition includes motion trajectory recognition, establishment of a mapping coordinate system, and static gesture recognition. The present invention pre-defines gesture templates, extracts the template features, builds a decision function for the gesture features of the current input, trains it with a machine learning method, and takes the predicted matching result as the final recognition result to interpret the current gesture. The gesture templates are monochrome bitmaps; at present the two gestures, palm and fist, are sufficient. After the gesture in an image frame is recognized, the corresponding mouse position and action are set and converted into the corresponding system mouse event.
As shown in Fig. 6, before system start-up the motion trajectory recognition process runs as follows:
First, judge whether the hand position of this frame has been tracked; if not, the system is not activated; otherwise cache this frame's hand position. After multiple frames of hand positions have been cached, resample all the cached positions so that they are comparable with the preset trajectory in point count and distribution, then compare the resampled position data with the preset trajectory, i.e. perform similarity matching between the two trajectories; if the similarity exceeds a threshold, activate the system, otherwise do not start it.
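A sketch of this trajectory-matching activation, assuming NumPy; the index-based resampling and the normalized distance score are illustrative choices, since the patent only requires that the cached and preset trajectories be comparable in point count and distribution:

```python
import numpy as np

def resample(points, n):
    """Resample a 2-D polyline to n points, evenly spaced along its index."""
    pts = np.asarray(points, dtype=float)
    idx = np.linspace(0, len(pts) - 1, n)
    xs = np.interp(idx, np.arange(len(pts)), pts[:, 0])
    ys = np.interp(idx, np.arange(len(pts)), pts[:, 1])
    return np.stack([xs, ys], axis=1)

def trajectory_similarity(cached, preset):
    """Score in (0, 1]; 1 means the trajectories coincide after normalization."""
    a = resample(cached, len(preset))
    b = np.asarray(preset, dtype=float)
    a = (a - a.mean(axis=0)) / (a.std() + 1e-9)   # remove position and scale
    b = (b - b.mean(axis=0)) / (b.std() + 1e-9)
    return 1.0 / (1.0 + np.linalg.norm(a - b) / len(b))

# The system is activated only when the score exceeds a preset threshold.
```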
As shown in Fig. 7, when the system enters the started state for the first time, the mapping coordinate system is established: statistics over the cached multi-frame hand positions give the center and width of the comfortable manipulation zone of the hand; this center is mapped to the center of the display device and this width to the display width, with the height-to-width ratio kept consistent with the display, so that the mapping coordinate system is established and the mouse cursor is shown. The specific steps are as follows:
First, judge whether the system has started. If so, compute the hand position center centerpalm of the cache and map centerpalm to the display center; compute the hand position boundary width widthpalm of the cache and map widthpalm to the display width, with the ratio of the height of the hand movement region to widthpalm equal to the display's aspect ratio; map this frame's hand position to the corresponding position on the display, establish the mapping coordinate system, and show the mouse cursor. Otherwise, do not show the mouse cursor.
The one-to-one mapping between the hand movement region and the display control area is carried out with statistics of the tracked hand trajectory; it is not limited to center and width statistics, and the height, the vertices or the like may also be used.
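A sketch of the mapping under these statistics, with helper names mirroring the embodiment's centerpalm and widthpalm; the comfort-zone height is fixed by the display's aspect ratio as the embodiment describes:

```python
import numpy as np

def build_mapping(cached_positions, disp_w, disp_h):
    """Map hand positions in the comfort zone to display coordinates."""
    pts = np.asarray(cached_positions, dtype=float)
    center_palm = pts.mean(axis=0)                          # centerpalm of the cached track
    width_palm = max(pts[:, 0].max() - pts[:, 0].min(), 1)  # widthpalm, guarded against 0
    height_palm = width_palm * disp_h / disp_w              # height fixed by display aspect

    def to_screen(hand_xy):
        sx = (hand_xy[0] - center_palm[0]) / width_palm * disp_w + disp_w / 2
        sy = (hand_xy[1] - center_palm[1]) / height_palm * disp_h + disp_h / 2
        return int(np.clip(sx, 0, disp_w - 1)), int(np.clip(sy, 0, disp_h - 1))

    return to_screen
```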
As shown in Fig. 8, static gesture recognition is performed after system start-up: compute the back-projection image of the region with the color model, count the valid pixels in the image, and compare the count with the area model; below the threshold the gesture is recognized as a fist, above it as a palm. The static gesture recognition steps are as follows:
First, judge whether the hand position of this frame has been tracked. If so, compute maskhsv inside the tracking box with histpalm, count the pixel count area in maskhsv, and judge: compute the back-projection image of this region with the color model, count the valid pixels in the image, and compare the count with the area model; below the threshold the gesture is recognized as a fist, above the threshold as a palm. Otherwise, the target hand is lost.
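A sketch of the fist/palm decision, assuming OpenCV and the color model from step S2; the 0.5 ratio split stands in for the threshold the patent leaves open:

```python
import cv2

def classify_gesture(frame, window, hist, area_model):
    """Return 'palm', 'fist', or 'lost' for the tracked window."""
    x, y, w, h = window
    hsv = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
    backproj = cv2.calcBackProject([hsv], [0, 1, 2], hist,
                                   [0, 180, 0, 256, 0, 256], 1)
    _, mask = cv2.threshold(backproj, 50, 255, cv2.THRESH_BINARY)
    area = cv2.countNonZero(mask)             # valid pixels = current hand area
    if area == 0:
        return "lost"                         # target hand lost
    # A fist exposes fewer skin pixels than the enrolled open palm.
    return "palm" if area > 0.5 * area_model else "fist"
```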
As shown in Fig. 9, human-computer interaction requires a pre-defined gesture mapping model: according to the gesture recognition result, a correspondence between gestures and controls is established and converted into the corresponding system command, e.g. mouse click or mouse idle, and then the specific system action is driven as required to simulate the corresponding system mouse event for human-computer interaction. After system start-up, if the gesture is a palm, the mouse cursor is mapped to the new position; otherwise judge whether it is a fist, and if so, map the mouse cursor to the new position and start a click; if it is neither, compare the number of frames for which the target hand has been lost with the threshold wait: if it is smaller, the mouse cursor stays still, otherwise the system is terminated.
The invention is not restricted to click and move control with the two static hand postures, fist and palm; drag operations are also possible.
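A sketch of this interaction loop, with emit_mouse as a hypothetical hook into the platform's input system and WAIT standing in for the lost-frame threshold wait:

```python
WAIT = 30                                  # hypothetical lost-frame tolerance ("wait")

def emit_mouse(action, xy=None):           # hypothetical hook into the input system
    print(action, xy)

def interact(gesture, screen_xy, state):
    if gesture == "palm":                  # palm: map the cursor to the new position
        state["lost"] = 0
        emit_mouse("move", screen_xy)
    elif gesture == "fist":                # fist: move the cursor and start a click
        state["lost"] = 0
        emit_mouse("move", screen_xy)
        emit_mouse("click", screen_xy)
    else:                                  # hand lost this frame
        state["lost"] += 1
        if state["lost"] > WAIT:
            state["running"] = False       # terminate the interactive session
```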
The above are only preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent substitution, improvement and the like made within the spirit and principle of the present invention shall be included in the protection scope of the present invention.

Claims (10)

1. A target person gesture interaction method based on a monocular video sequence, characterized in that it comprises the following steps:
S1, target person palm selection, performed before the interactive system starts: acquire a monocular video frame sequence image, extract the motion foreground mask with a motion detection algorithm, detect the minimum enclosing rectangle of the palm with a palm classifier, and screen out the target person's palm according to how the foreground mask maskfg overlaps the palm rectangles;
S2, target person palm feature modeling: extract a color histogram model from the target person's palm image, compute the back-projection image of the palm with the color histogram model, and accumulate an area model of the target person's palm;
S3, target person palm tracking: track with a hand target tracking algorithm, starting from the palm position of the previous frame, to find the target palm position in the current frame; if no palm can be detected, track the palm jointly with the back-projection image of the color model and the motion foreground mask;
S4, target person hand recognition: includes motion trajectory recognition, establishment of a mapping coordinate system, and static gesture recognition;
S5, interaction: use the two postures, fist and palm, for click or move interaction control.
2. The target person gesture interaction method based on a monocular video sequence according to claim 1, characterized in that in step S1 the motion detection algorithm uses a statistical background model to obtain the foreground mask; if the previous frame detected or tracked the target palm, then when the background model is updated the hand rectangle is refreshed with the background model data while the region outside it is refreshed with the current frame data.
3. The target person gesture interaction method based on a monocular video sequence according to claim 1, characterized in that when the palm classifier in step S1 detects multiple palms, they are screened in turn by overlap count, rectangle size, rectangle position and the color of the target inside the rectangle, to confirm the target person's palm.
4. The target person gesture interaction method based on a monocular video sequence according to claim 1, characterized in that in the HSV color-space histogram of step S2, the numbers of bins of the H, S and V components are {64, 32, 32} respectively.
5. The target person gesture interaction method based on a monocular video sequence according to claim 1, characterized in that the pixels used to extract the color histogram model in step S2 are taken from a region slightly below the middle of the palm detection box.
6. The target person gesture interaction method based on a monocular video sequence according to claim 1, characterized in that when in step S2 the color histogram model is used to compute and binarize the back-projection of the palm image and to extract the area model of the target palm, the effective size of the palm is taken as the number of valid skin-color pixels of the target palm.
7. The target person gesture interaction method based on a monocular video sequence according to claim 1, characterized in that step S3 comprises the following specific steps:
(1) if no palm can be detected, track the hand directly with the back-projection image of the color model and the motion foreground mask; otherwise go to step (2);
(2) in the region near the previous frame's target palm, compute the back-projection image of this region with the target palm color model and binarize it, obtaining the color mask maskhsv;
(3) combine maskhsv and maskfg to exclude color and motion distractors, obtaining masktrack;
(4) run the hand target tracking algorithm on masktrack, starting from the previous frame's hand position, to find the target palm position in the current frame.
8. The target person gesture interaction method based on a monocular video sequence according to claim 1 or 7, characterized in that the hand target tracking algorithm is the meanshift algorithm.
9. The target person gesture interaction method based on a monocular video sequence according to claim 1, characterized in that the static gesture recognition in step S4 comprises the following specific steps:
(1) for the tracked target region, compute the back-projection with the color model, count the valid pixels in the back-projection image and extract the area of the target palm; compare it with the area model: smaller than the area model is recognized as a fist, larger as a palm;
(2) when no palm can be detected, compute the palm region area after back-projection in the current frame and divide it by the target palm area model; a ratio larger than the threshold is considered a palm, otherwise a fist, the threshold being the pixel count in maskhsv.
10. The target person gesture interaction method based on a monocular video sequence according to claim 1 or 7, characterized in that establishing the mapping coordinate system in step S4 comprises the following specific steps:
(1) collect statistics over the cached multi-frame palm positions to obtain the center, width and height of the comfortable manipulation zone of the hand;
(2) map this center to the center of the display device and the comfort-zone width to the display width, keeping the height-to-width ratio of the comfort zone consistent with that of the display, thereby establishing the mapping coordinate system, and show the mouse cursor.
CN201310481745.XA 2013-10-15 2013-10-15 Target person hand gesture interaction method based on monocular video sequence Active CN103530613B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310481745.XA CN103530613B (en) 2013-10-15 2013-10-15 Target person hand gesture interaction method based on monocular video sequence

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310481745.XA CN103530613B (en) 2013-10-15 2013-10-15 Target person hand gesture interaction method based on monocular video sequence

Publications (2)

Publication Number Publication Date
CN103530613A CN103530613A (en) 2014-01-22
CN103530613B true CN103530613B (en) 2017-02-01

Family

ID=49932612

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310481745.XA Active CN103530613B (en) 2013-10-15 2013-10-15 Target person hand gesture interaction method based on monocular video sequence

Country Status (1)

Country Link
CN (1) CN103530613B (en)

Families Citing this family (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104731323B (en) * 2015-02-13 2017-07-04 北京航空航天大学 A kind of gesture tracking method of many direction of rotation SVM models based on HOG features
CN104794737B (en) * 2015-04-10 2017-12-15 电子科技大学 A kind of depth information Auxiliary Particle Filter tracking
CN104992171A (en) * 2015-08-04 2015-10-21 易视腾科技有限公司 Method and system for gesture recognition and man-machine interaction based on 2D video sequence
CN105354551B (en) * 2015-11-03 2019-07-16 北京英梅吉科技有限公司 Gesture identification method based on monocular cam
US10599919B2 (en) * 2015-12-31 2020-03-24 Microsoft Technology Licensing, Llc Detection of hand gestures using gesture language discrete values
CN105825170B (en) * 2016-03-10 2019-07-02 浙江生辉照明有限公司 Toward the detection method and device of renaturation gesture
CN106599771B (en) * 2016-10-21 2019-11-22 上海未来伙伴机器人有限公司 A kind of recognition methods and system of images of gestures
CN107015636A (en) * 2016-10-27 2017-08-04 蔚来汽车有限公司 The aobvious equipment gestural control method of virtual reality
CN108230353A (en) * 2017-03-03 2018-06-29 北京市商汤科技开发有限公司 Method for tracking target, system and electronic equipment
CN107688391B (en) * 2017-09-01 2020-09-04 广州大学 Gesture recognition method and device based on monocular vision
CN109697394B (en) * 2017-10-24 2021-12-28 京东方科技集团股份有限公司 Gesture detection method and gesture detection device
CN107886541B (en) * 2017-11-13 2021-03-26 天津市勘察设计院集团有限公司 Real-time monocular moving target pose measuring method based on back projection method
CN108446073A (en) * 2018-03-12 2018-08-24 阿里巴巴集团控股有限公司 A kind of method, apparatus and terminal for simulating mouse action using gesture
CN108549489B (en) * 2018-04-27 2019-12-13 哈尔滨拓博科技有限公司 gesture control method and system based on hand shape, posture, position and motion characteristics
CN108989553A (en) 2018-06-29 2018-12-11 北京微播视界科技有限公司 The method, apparatus and electronic equipment of scene manipulation
CN110298298B (en) * 2019-06-26 2022-03-08 北京市商汤科技开发有限公司 Target detection and target detection network training method, device and equipment
CN110347266B (en) * 2019-07-23 2020-05-22 哈尔滨拓博科技有限公司 Space gesture control device based on machine vision
CN110780735B (en) * 2019-09-25 2023-07-21 上海芯龙光电科技股份有限公司 Gesture interaction AR projection method and device
CN110728229B (en) * 2019-10-09 2023-07-18 百度在线网络技术(北京)有限公司 Image processing method, device, equipment and storage medium
CN111367415B (en) * 2020-03-17 2024-01-23 北京明略软件系统有限公司 Equipment control method and device, computer equipment and medium
CN114510142B (en) * 2020-10-29 2023-11-10 舜宇光学(浙江)研究院有限公司 Gesture recognition method based on two-dimensional image, gesture recognition system based on two-dimensional image and electronic equipment
CN113052019A (en) * 2021-03-10 2021-06-29 南京创维信息技术研究院有限公司 Target tracking method and device, intelligent equipment and computer storage medium
CN116719419B (en) * 2023-08-09 2023-11-03 世优(北京)科技有限公司 Intelligent interaction method and system for meta universe

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103020656A (en) * 2012-12-19 2013-04-03 中山大学 Device and method for identifying gestures through compressed infrared sensing

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102402680B (en) * 2010-09-13 2014-07-30 株式会社理光 Hand and indication point positioning method and gesture confirming method in man-machine interactive system

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103020656A (en) * 2012-12-19 2013-04-03 中山大学 Device and method for identifying gestures through compressed infrared sensing

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"基于空间直方图的CamShift目标跟踪算法";冀治航,等;《微电子学与计算机》;20090731;第26卷(第7期);第194-197页 *
"基于视觉的目标跟踪算法研究";赵运基;《中国博士学位论文全文数据库》;20121115(第11期);论文第2章 *

Also Published As

Publication number Publication date
CN103530613A (en) 2014-01-22

Similar Documents

Publication Publication Date Title
CN103530613B (en) Target person hand gesture interaction method based on monocular video sequence
CN106598227B (en) Gesture identification method based on Leap Motion and Kinect
CN107168527B (en) The first visual angle gesture identification and exchange method based on region convolutional neural networks
CN106201173B (en) A kind of interaction control method and system of user's interactive icons based on projection
CN103941866B (en) Three-dimensional gesture recognizing method based on Kinect depth image
CN102200834B (en) Television control-oriented finger-mouse interaction method
CN102081918B (en) Video image display control method and video image display device
CN101719015B (en) Method for positioning finger tips of directed gestures
CN102831439B (en) Gesture tracking method and system
CN106547356B (en) Intelligent interaction method and device
CN101477631B (en) Method, equipment for extracting target from image and human-machine interaction system
CN102096471B (en) Human-computer interaction method based on machine vision
CN103150019A (en) Handwriting input system and method
Wu et al. Robust fingertip detection in a complex environment
CN106200971A (en) Man-machine interactive system device based on gesture identification and operational approach
WO2020082275A1 (en) Method and device for processing drawn content on terminal apparatus, and terminal apparatus
CN102222342A (en) Tracking method of human body motions and identification method thereof
CN103106388B (en) Method and system of image recognition
CN110032932A (en) A kind of human posture recognition method based on video processing and decision tree given threshold
CN104199548B (en) A kind of three-dimensional man-machine interactive operation device, system and method
Hartanto et al. Real time hand gesture movements tracking and recognizing system
CN109643165A (en) Gesture decision maker, gesture operation device and gesture determination method
CN104123008B (en) A kind of man-machine interaction method and system based on static gesture
Brancati et al. Robust fingertip detection in egocentric vision under varying illumination conditions
Xu et al. Bare hand gesture recognition with a single color camera

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C53 Correction of patent of invention or patent application
CB02 Change of applicant information

Address after: Room 701, Whale D Building, Wuxi Software Park, No. 111 Linghu Road, Wuxi National Hi-Tech Industrial Development Zone, Wuxi City, Jiangsu Province, 214000

Applicant after: YST TECHNOLOGY Co.,Ltd.

Address before: Room 602, Whale D Building, Wuxi Software Park, No. 18 Zhenze Road, Wuxi National Hi-Tech Industrial Development Zone, Wuxi City, Jiangsu Province, 214000

Applicant before: WUXI YSTEN TECHNOLOGY Co.,Ltd.

CB02 Change of applicant information

Address after: Room 701, Whale D Building, Wuxi Software Park, No. 111 Linghu Road, New District, Wuxi City, Jiangsu Province, 214000

Applicant after: YSTEN TECHNOLOGY CO.,LTD.

Address before: Room 701, Whale D Building, Wuxi Software Park, No. 111 Linghu Road, Wuxi National Hi-Tech Industrial Development Zone, Wuxi City, Jiangsu Province, 214000

Applicant before: YST TECHNOLOGY Co.,Ltd.

COR Change of bibliographic data
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20200512

Address after: Room 402, building C, Liye building, Southeast University Science Park, No. 20, Qingyuan Road, Xinwu District, Wuxi City, Jiangsu Province

Patentee after: Easy Star Technology Wuxi Co.,Ltd.

Address before: Room 701, Whale D Building, Wuxi Software Park, No. 111 Linghu Road, New District, Wuxi City, Jiangsu Province, 214000

Patentee before: YSTEN TECHNOLOGY Co.,Ltd.

TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20221031

Address after: Room 407, Building C, Science Park, Southeast University, No. 20 Qingyuan Road, Xinwu District, Wuxi City, Jiangsu Province, 214028

Patentee after: Shijia Tianchen Technology Co.,Ltd.

Address before: Room 402, building C, Liye building, Southeast University Science Park, No.20 Qingyuan Road, Xinwu District, Wuxi City, Jiangsu Province, 214028

Patentee before: Easy Star Technology Wuxi Co.,Ltd.

CP03 Change of name, title or address
CP03 Change of name, title or address

Address after: 5C-1, No. 118 Jinghui East Road, Xinwu District, Wuxi City, Jiangsu Province, 214111

Patentee after: Tianchen Times Technology Co.,Ltd.

Address before: Room 407, Building C, Science Park, Southeast University, No. 20 Qingyuan Road, Xinwu District, Wuxi City, Jiangsu Province, 214028

Patentee before: Shijia Tianchen Technology Co.,Ltd.