CN103105924A - Man-machine interaction method and device - Google Patents

Man-machine interaction method and device

Info

Publication number
CN103105924A
CN103105924A (publication) · CN201110361120A (application)
Authority
CN
China
Prior art keywords
image
target image
testing image
negative sample
decision
Prior art date
Legal status
Granted
Application number
CN2011103611200A
Other languages
Chinese (zh)
Other versions
CN103105924B (en)
Inventor
郑锋
赵颜果
宋展
Current Assignee
Shenzhen Institute of Advanced Technology of CAS
Original Assignee
Shenzhen Institute of Advanced Technology of CAS
Priority date
Filing date
Publication date
Application filed by Shenzhen Institute of Advanced Technology of CAS
Priority to CN201110361120.0A
Publication of CN103105924A
Application granted
Publication of CN103105924B
Legal status: Active

Landscapes

  • Image Analysis (AREA)

Abstract

A human-computer interaction method comprises the steps of: obtaining a target image specified by the user and training, based on a random forest, classifiers each comprising a plurality of decision trees; storing the classifiers together with a set of positive and negative samples; obtaining a test image; calculating the probability that the test image is identical to the target image; judging, against a preset decision threshold, whether the test image is the target image; analysing the correlation between the test image and the positive/negative sample set; judging, against a preset first correlation threshold, whether the test image is the target image; finally deciding that the test image is the target image when it passes both the decision threshold and the first correlation threshold; and using the test image to adjust the classifier parameters and supplement the positive/negative sample set. A human-computer interaction device is also provided. With the method and device, the user can define the target image, and recognition accuracy and system stability are continuously strengthened in use.

Description

Man-machine interaction method and device
[technical field]
The present invention relates to the field of image analysis, and in particular to a human-computer interaction method and device.
[background technology]
In recent years, with the spread of intelligent terminals, finding a more natural and simpler mode of human-computer interaction has become a hot topic in both scientific research and industry. Over the history of human-computer interaction, input has evolved from the mouse, keyboard and remote control toward contactless modes such as vision, voice and gesture, with vision the most important of these: a camera captures images, intelligent image analysis infers the operator's action and intention, and the machine is controlled accordingly. The greatest difficulty this approach faces is the complexity and uncertainty of the environment, which has kept the technology from full maturity. With the development of 3D technology, Microsoft released the Kinect system, which uses dynamic three-dimensional reconstruction to extend human-computer interaction from the 2D image space to true 3D space. The depth information of 3D space effectively solves the background segmentation problem, which is rather difficult in 2D, bringing the technology close to maturity; it has been applied as an external human-computer interaction device in televisions, game consoles and similar equipment.
However, gesture and body-sensing control based on 3D technology, of which Microsoft's Kinect is representative, achieves real-time three-dimensional reconstruction of the scene through dynamic 3D reconstruction and moves visual detection from 2D into 3D space. While this lowers the difficulty of recognition, it increases hardware cost and computational load, and the product is bulky and difficult to embed in existing intelligent terminals.
Moreover, conventional techniques all preset a specific target image, so that in use the target the user provides must be restricted to the preset image, which limits flexibility.
[summary of the invention]
In view of the above deficiencies of the conventional art, it is necessary to provide a human-computer interaction method and device that allow a user-defined target image.
A human-computer interaction method comprises the following steps:
Step S201: receiving a learning command input by the user and starting a learning mode;
Step S202: obtaining a target image specified by the user, collecting positive and negative samples, and training, based on a random forest, classifiers comprising a plurality of decision trees;
Step S203: storing the classifiers comprising the plurality of decision trees, and storing the positive and negative samples to form a positive/negative sample set;
Step S204: receiving a detection command input by the user and starting a detection mode;
Step S205: obtaining a test image;
Step S206: calculating, with the classifiers of the plurality of decision trees, the probability that the test image is identical to the target image, and outputting the corresponding plurality of probability values;
Step S207: judging, according to the plurality of probability values and a preset decision threshold, whether the test image is the target image;
Step S208: analysing the correlation between the test image and the positive/negative sample set;
Step S209: judging, according to the correlation and a preset first correlation threshold, whether the test image is the target image;
Step S210: when the test image is judged identical to the target image under both the decision threshold and the first correlation threshold, finally deciding that the test image is the target image;
Step S212: using the test image finally decided to be identical to the target image to adjust the parameters of the classifiers of the plurality of decision trees;
Step S214: when the correlation of a test image finally decided identical to the target image satisfies a preset second correlation threshold, adding the test image to the positive/negative sample set as a positive sample; when the probability value of a test image decided different from the target image reaches a preset correction threshold, adding the test image to the positive/negative sample set as a negative sample.
In a preferred embodiment, collecting positive and negative samples in step S202 comprises rotating, projecting, scaling or translating the target image, and collecting the results as positive samples.
In a preferred embodiment, step S206 first extracts the variance of the test image, uses a preset variance threshold to exclude test images that do not meet the requirement, and only then calculates, for the test images that satisfy the variance threshold, the probability of being identical to the target image.
In a preferred embodiment, the preset decision threshold is set to the largest probability value obtained when verifying the negative samples during training in step S202.
In a preferred embodiment, the human-computer interaction method further comprises the steps of:
Step S301: extracting and recording the coordinate information of a test image judged identical to the target image;
Step S302: when the dwell time of a test image judged identical to the target image reaches a preset time threshold for the first time, starting to record the motion trajectory of the test image;
Step S303: when the dwell time reaches the preset time threshold for the second time, stopping the recording of the trajectory;
Step S304: performing character recognition on the recorded trajectory.
A human-computer interaction device comprises:
a learning command receiving unit, for receiving a learning command input by the user and starting a learning mode;
a training unit, for responding to the learning command, obtaining a target image specified by the user, collecting positive and negative samples, and training, based on a random forest, classifiers comprising a plurality of decision trees;
a storage unit, for storing the classifiers comprising the plurality of decision trees, and storing the positive and negative samples to form a positive/negative sample set;
a detection command receiving unit, for receiving a detection command input by the user and starting a detection mode;
an image acquisition unit, for obtaining a test image;
a recognition unit, for calculating with the classifiers of the plurality of decision trees the probability that the test image is identical to the target image, outputting the corresponding probability values, and judging from those values and a preset decision threshold whether the test image is the target image;
a comparison unit, for analysing the correlation between the test image and the positive/negative sample set, and judging from the correlation and a preset first correlation threshold whether the test image is the target image;
a decision unit, for finally deciding that the test image is the target image when it is judged identical to the target image under both the decision threshold and the first correlation threshold;
an updating unit, for using the test image finally decided identical to the target image to adjust the parameters of the classifiers; for adding the test image to the positive/negative sample set as a positive sample when its correlation satisfies a preset second correlation threshold; and for adding a test image decided different from the target image to the set as a negative sample when its probability value reaches a preset correction threshold.
In a preferred embodiment, the training unit collects positive and negative samples by rotating, projecting, scaling or translating the target image, and collecting the results as positive samples.
In a preferred embodiment, the recognition unit extracts the variance of the test image, uses a preset variance threshold to exclude test images that do not meet the requirement, and calculates the probability of being identical to the target image only for test images that satisfy the variance threshold.
In a preferred embodiment, the preset decision threshold is set to the largest probability value obtained by the training unit when verifying the negative samples during training.
The above human-computer interaction method and device accept a user-defined target image as the recognition object, giving the user great flexibility. They also use test images to revise the classifiers and supplement the positive and negative samples of the target image, so that the stability of the interaction is continuously strengthened in use and better interaction is achieved.
[description of drawings]
Fig. 1 is a flowchart of the human-computer interaction method of an embodiment;
Fig. 2 is a flowchart of an action-writing method based on the human-computer interaction method;
Fig. 3 is a functional block diagram of the human-computer interaction device of an embodiment.
[embodiment]
To solve the low flexibility the user faces in the conventional art, a human-computer interaction method and device allowing a user-defined target image are proposed.
As shown in Fig. 1, the human-computer interaction method of an embodiment comprises the following steps:
Step S201: receiving a learning command input by the user and starting a learning mode.
Step S202: obtaining a target image specified by the user, collecting positive and negative samples, and training, based on a random forest, classifiers comprising a plurality of decision trees.
Obtaining the target image specified by the user means the target image is user-defined. Suppose the user wishes to interact using a palm: under the learning mode, a palm image can be provided through the camera as the target image. To make the collected positive samples more comprehensive, the target image is rotated, projected, scaled and translated to obtain a richer set of positive samples.
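The sample-collection step above can be sketched as follows. This is a minimal illustration only: the patent does not specify the perturbation parameters, so the shift range, the 90-degree rotations standing in for general rotation, and the stride-2 down-scaling are all assumptions.

```python
import numpy as np

def augment_target(target, n_shifts=2):
    """Generate extra positive samples from a single user-supplied target
    image by applying simple geometric perturbations (stand-ins for the
    rotation/projection/scaling/translation named in step S202)."""
    samples = [target]
    # small translations
    for dx in range(-n_shifts, n_shifts + 1):
        for dy in range(-n_shifts, n_shifts + 1):
            if dx == 0 and dy == 0:
                continue
            samples.append(np.roll(np.roll(target, dx, axis=1), dy, axis=0))
    # coarse rotations (multiples of 90 degrees as a cheap proxy)
    for k in (1, 2, 3):
        samples.append(np.rot90(target, k))
    # 2x down-scale by striding (proxy for the scaling step)
    samples.append(target[::2, ::2])
    return samples
```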
The classifiers of the plurality of decision trees are built by describing features of the target image and then constructing a preset number of decision trees (for example 10) into classifiers for image recognition, which realize the calculation of the similarity probability.
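The classifier structure can be illustrated with a small ensemble in the spirit of random ferns/forests. The pixel-pair comparison features, the Laplace-smoothed leaf counts and all parameter values here are assumptions; the patent only states that each of the (for example 10) trees produces a probability that is later combined.

```python
import numpy as np

rng = np.random.default_rng(0)

class PixelComparisonTree:
    """One 'tree' realised as a fern of random pixel-pair comparisons.
    The pair locations are assumptions; the patent only says each tree
    yields a feature description of the patch."""
    def __init__(self, patch_shape, n_pairs=8):
        h, w = patch_shape
        self.pairs = rng.integers(0, h * w, size=(n_pairs, 2))
        n_codes = 2 ** n_pairs
        self.pos = np.ones(n_codes)   # Laplace-smoothed counts
        self.neg = np.ones(n_codes)

    def code(self, patch):
        flat = patch.ravel()
        bits = (flat[self.pairs[:, 0]] > flat[self.pairs[:, 1]]).astype(int)
        return int(bits.dot(2 ** np.arange(len(bits))))

    def train(self, patch, is_positive):
        c = self.code(patch)
        (self.pos if is_positive else self.neg)[c] += 1

    def prob(self, patch):
        c = self.code(patch)
        return self.pos[c] / (self.pos[c] + self.neg[c])

class Forest:
    """~10 trees as in the patent's example; the score used in step S207
    averages the per-tree probabilities."""
    def __init__(self, patch_shape, n_trees=10):
        self.trees = [PixelComparisonTree(patch_shape) for _ in range(n_trees)]

    def train(self, patch, is_positive):
        for t in self.trees:
            t.train(patch, is_positive)

    def prob(self, patch):
        return float(np.mean([t.prob(patch) for t in self.trees]))
```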
Step S203: storing the classifiers comprising the plurality of decision trees, and storing the positive and negative samples to form a positive/negative sample set.
Step S204: receiving a detection command input by the user and starting a detection mode.
Step S205: obtaining a test image.
A test image may be obtained by capturing an image with the camera and then exhaustively scanning the captured image with moving windows of different sizes.
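The exhaustive multi-scale scan can be sketched as below; the window sizes and step fraction are illustrative, as the patent does not specify them.

```python
def sliding_windows(img_h, img_w, sizes=(24, 48, 96), step_frac=0.25):
    """Enumerate candidate patch rectangles (x, y, w, h) at several scales,
    the exhaustive-search scheme the patent describes for extracting test
    images from a camera frame."""
    windows = []
    for s in sizes:
        step = max(1, int(s * step_frac))
        for y in range(0, img_h - s + 1, step):
            for x in range(0, img_w - s + 1, step):
                windows.append((x, y, s, s))
    return windows
```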
Step S206: calculating, with the classifiers of the plurality of decision trees, the probability that the test image is identical to the target image, and outputting the corresponding plurality of probability values.
To improve detection efficiency, in one embodiment step S206 first extracts the variance of the test image, directly excludes with a preset variance threshold the test images that do not meet the requirement, and calculates the probability of being identical to the target image only for the test images that satisfy the variance threshold.
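A minimal version of this variance pre-filter follows; how the threshold is chosen (e.g. as a fraction of the target image's own variance) is not stated in the patent and is left to the caller.

```python
import numpy as np

def variance_prefilter(patches, var_threshold):
    """Cheap rejection stage from step S206: patches whose grey-level
    variance falls below the threshold are discarded before the forest
    of decision trees is evaluated."""
    return [p for p in patches if np.var(p) >= var_threshold]
```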
Step S207: judging, according to the plurality of probability values and a preset decision threshold, whether the test image is the target image.
The judgement may first average the plurality of probability values, then judge whether the test image is identical to the target image according to whether the average probability exceeds the preset decision threshold. In a preferred embodiment, the preset decision threshold is set to the largest probability value obtained when verifying the negative samples during training in step S202.
Step S208: analysing the correlation between the test image and the positive/negative sample set.
Step S209: judging, according to the correlation and a preset first correlation threshold, whether the test image is the target image.
After a test image is judged to be the target image, the command to be executed can be set according to circumstances, for example controlling the mouse pointer or another action.
Step S210: when the test image is judged identical to the target image under both the decision threshold and the first correlation threshold, finally deciding that the test image is the target image.
Step S212: using the test image finally decided identical to the target image to adjust the parameters of the classifiers of the plurality of decision trees.
Step S214: when the correlation of a test image finally decided identical to the target image satisfies a preset second correlation threshold, adding the test image to the positive/negative sample set as a positive sample; when the probability value of a test image decided different from the target image reaches a preset correction threshold, adding the test image to the set as a negative sample.
Because the number of target-image samples the user can provide when setting the target is limited, to further improve recognition accuracy the method uses, during recognition, test images judged to be the target image to supplement the positive samples and to adjust the classifier parameters, and uses test images judged not to be the target image but very close to it to supplement the negative samples.
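The online update of steps S212/S214 can be summarised as a sketch; the tuple layout and threshold names are assumptions introduced for illustration.

```python
def update_model(detections, second_corr_thr, correction_thr):
    """Sketch of steps S212/S214: confirmed detections with a high enough
    correlation become new positive samples; windows the forest scored
    highly but that failed the final decision become negative samples.
    `detections` holds tuples (patch, prob, corr, confirmed)."""
    new_pos, new_neg = [], []
    for patch, prob, corr, confirmed in detections:
        if confirmed and corr >= second_corr_thr:
            new_pos.append(patch)          # step S214, positive branch
        elif not confirmed and prob >= correction_thr:
            new_neg.append(patch)          # step S214, negative branch
    return new_pos, new_neg
```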
The above human-computer interaction method accepts a user-defined target image as the recognition object, giving the user great flexibility; it also uses test images to revise the classifiers and supplement the positive and negative samples of the target image, so that the stability of the interaction is continuously strengthened in use and better interaction is achieved.
As shown in Fig. 2, the action-writing method based on the above human-computer interaction method comprises the following steps:
Step S301: extracting and recording the coordinate information of a test image judged identical to the target image.
The coordinate information of the test image may be obtained by clustering the test images with identical decision results and taking a weighted mean of their coordinates.
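One simple realisation of the cluster-and-weighted-mean coordinate estimate, reduced here to a single cluster since the patent does not name the clustering algorithm:

```python
import numpy as np

def fuse_detections(boxes, probs):
    """Fuse accepted windows into one coordinate (step S301): average the
    window centres weighted by the classifier's probability values."""
    boxes = np.asarray(boxes, dtype=float)   # rows of (x, y, w, h)
    weights = np.asarray(probs, dtype=float)
    centres = boxes[:, :2] + boxes[:, 2:] / 2.0
    return tuple(np.average(centres, axis=0, weights=weights))
```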
Step S302: when the dwell time of a test image judged identical to the target image reaches a preset time threshold for the first time, starting to record the motion trajectory of the test image.
Step S303: when the dwell time reaches the preset time threshold for the second time, stopping the recording of the trajectory.
Step S304: performing character recognition on the recorded trajectory, so that the user can write by moving the self-defined target image (the palm), realising a handwriting input function.
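Steps S301-S304 can be illustrated with a small state machine. Measuring the dwell in frames rather than seconds, and the stillness radius, are simplifications introduced here.

```python
class DwellRecorder:
    """First pause of `dwell_frames` frames starts trajectory recording
    (step S302); the second pause stops it (step S303)."""
    def __init__(self, dwell_frames=10, still_eps=3.0):
        self.dwell_frames = dwell_frames
        self.still_eps = still_eps
        self.recording = False
        self.done = False
        self.track = []
        self._last = None
        self._still = 0

    def feed(self, x, y):
        """Consume one per-frame target coordinate."""
        if self._last is not None:
            dx, dy = x - self._last[0], y - self._last[1]
            moved = (dx * dx + dy * dy) ** 0.5 > self.still_eps
            self._still = 0 if moved else self._still + 1
        self._last = (x, y)
        if self._still >= self.dwell_frames:
            self._still = 0
            if not self.recording and not self.done:
                self.recording = True       # first pause: start (S302)
            elif self.recording:
                self.recording = False      # second pause: stop (S303)
                self.done = True
        if self.recording:
            self.track.append((x, y))
```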
Recording the trajectory requires selecting a target point within the test image judged identical to the target image. In one embodiment, the target point is determined as follows:
Initialize the points to be tracked.
Compute the optical-flow pyramids of the two frames, and from the optical flow between them compute the points in the current frame corresponding to the initialized points.
Swap the previous and current frames (and their pyramids), and from the optical flow between them compute, from the points in the current frame, the corresponding points back in the previous frame.
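The forward-backward validation of tracked points can be illustrated with a toy translation-only flow estimator. The real method uses pyramidal Lucas-Kanade optical flow (e.g. OpenCV's calcOpticalFlowPyrLK); the integer-shift search below only stands in for it.

```python
import numpy as np

def shift_flow(prev, curr, max_shift=3):
    """Toy global-translation 'optical flow': find the integer (dx, dy)
    that best maps prev onto curr by exhaustive search."""
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(prev, dy, axis=0), dx, axis=1)
            err = np.mean((shifted - curr) ** 2)
            if err < best_err:
                best, best_err = (dx, dy), err
    return best

def forward_backward_ok(prev, curr, tol=0):
    """A tracked point is kept only if flowing forward and then backward
    (frames swapped, as in the patent) returns close to the start."""
    fx, fy = shift_flow(prev, curr)
    bx, by = shift_flow(curr, prev)
    return abs(fx + bx) <= tol and abs(fy + by) <= tol
```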
As shown in Fig. 3, the human-computer interaction device 40 of an embodiment comprises: a learning command receiving unit 400, a training unit 402, a storage unit 404, a detection command receiving unit 406, an image acquisition unit 408, a recognition unit 410, a comparison unit 412, a decision unit 414 and an updating unit 416.
The learning command receiving unit 400 receives a learning command input by the user and starts a learning mode.
The training unit 402 responds to the learning command, obtains a target image specified by the user, collects positive and negative samples, and trains, based on a random forest, classifiers comprising a plurality of decision trees.
Obtaining the target image specified by the user means the target image is user-defined. Suppose the user wishes to interact using a palm: under the learning mode, a palm image can be provided through the camera as the target image. To make the collected positive samples more comprehensive, the training unit 402 rotates, projects, scales and translates the target image to obtain a richer set of positive samples. The classifiers of the plurality of decision trees are built by describing features of the target image and then constructing a preset number of decision trees (for example 10) into classifiers for image recognition, which realize the calculation of the similarity probability.
The storage unit 404 stores the classifiers comprising the plurality of decision trees, and stores the positive and negative samples to form a positive/negative sample set.
The detection command receiving unit 406 receives a detection command input by the user and starts a detection mode.
The image acquisition unit 408 obtains a test image.
A test image may be obtained by capturing an image with the camera and then exhaustively scanning the captured image with moving windows of different sizes.
The recognition unit 410 calculates, with the classifiers of the plurality of decision trees, the probability that the test image is identical to the target image, outputs the corresponding probability values, and judges from those values and a preset decision threshold whether the test image is the target image.
To improve detection efficiency, in one embodiment the recognition unit 410 extracts the variance of the test image, directly excludes with a preset variance threshold the test images that do not meet the requirement, and calculates the probability only for test images that satisfy the variance threshold.
The judgement may first average the probability values, then compare the average with the preset decision threshold. In a preferred embodiment, the preset decision threshold is set to the largest probability value obtained by the training unit 402 when verifying the negative samples during training.
The comparison unit 412 analyses the correlation between the test image and the positive/negative sample set, and judges from the correlation and a preset first correlation threshold whether the test image is the target image.
The decision unit 414 finally decides that the test image is the target image when it is judged identical to the target image under both the decision threshold and the first correlation threshold.
The updating unit 416 uses the test image finally decided identical to the target image to adjust the parameters of the classifiers; when the correlation of such a test image satisfies a preset second correlation threshold, it adds the test image to the positive/negative sample set as a positive sample; when the probability value of a test image decided different from the target image reaches a preset correction threshold, it adds the test image to the set as a negative sample.
Because the number of samples the training unit 402 can collect when the user sets the target image is limited, to further improve recognition accuracy the updating unit 416 uses, during recognition, test images judged to be the target image to supplement the positive samples and adjust the classifier parameters, and uses test images judged not to be the target image but very close to it to supplement the negative samples.
The above human-computer interaction device accepts a user-defined target image as the recognition object, giving the user great flexibility; it also uses test images to revise the classifiers and supplement the positive and negative samples, so that the stability of the interaction is continuously strengthened in use and better interaction is achieved.
The above embodiments express only several implementations of the present invention, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the claims. It should be pointed out that a person of ordinary skill in the art can make various modifications and improvements without departing from the concept of the invention, all of which fall within its protection scope. The protection scope of this patent is therefore defined by the appended claims.

Claims (9)

1. A human-computer interaction method, characterized in that it comprises the following steps:
Step S201: receiving a learning command input by the user and starting a learning mode;
Step S202: obtaining a target image specified by the user, collecting positive and negative samples, and training, based on a random forest, classifiers comprising a plurality of decision trees;
Step S203: storing the classifiers comprising the plurality of decision trees, and storing the positive and negative samples to form a positive/negative sample set;
Step S204: receiving a detection command input by the user and starting a detection mode;
Step S205: obtaining a test image;
Step S206: calculating, with the classifiers of the plurality of decision trees, the probability that the test image is identical to the target image, and outputting the corresponding plurality of probability values;
Step S207: judging, according to the plurality of probability values and a preset decision threshold, whether the test image is the target image;
Step S208: analysing the correlation between the test image and the positive/negative sample set;
Step S209: judging, according to the correlation and a preset first correlation threshold, whether the test image is the target image;
Step S210: when the test image is judged identical to the target image under both the decision threshold and the first correlation threshold, finally deciding that the test image is the target image;
Step S212: using the test image finally decided identical to the target image to adjust the parameters of the classifiers of the plurality of decision trees;
Step S214: when the correlation of a test image finally decided identical to the target image satisfies a preset second correlation threshold, adding the test image to the positive/negative sample set as a positive sample; when the probability value of a test image decided different from the target image reaches a preset correction threshold, adding the test image to the set as a negative sample.
2. The human-computer interaction method according to claim 1, characterized in that collecting positive and negative samples in step S202 comprises rotating, projecting, scaling or translating the target image, and collecting the results as positive samples.
3. The human-computer interaction method according to claim 1, characterized in that step S206 first extracts the variance of the test image, uses a preset variance threshold to exclude test images that do not meet the requirement, and then calculates, for the test images that satisfy the variance threshold, the probability of being identical to the target image.
4. The human-computer interaction method according to claim 1, characterized in that the preset decision threshold is set to the largest probability value obtained when verifying the negative samples during training in step S202.
5. The human-computer interaction method according to claim 1, characterized in that the method further comprises the steps of:
Step S301: extracting and recording the coordinate information of a test image judged identical to the target image;
Step S302: when the dwell time of a test image judged identical to the target image reaches a preset time threshold for the first time, starting to record the motion trajectory of the test image;
Step S303: when the dwell time reaches the preset time threshold for the second time, stopping the recording of the trajectory;
Step S304: performing character recognition on the recorded trajectory.
6. A man-machine interaction device, characterized in that it comprises:
a learning command receiving unit, configured to receive a learning command input by a user and start a learning mode;
a training unit, configured to respond to said learning command, obtain a target image specified by the user, collect positive samples and negative samples, and establish, based on random forest training, classifiers comprising a plurality of decision trees;
a storage unit, configured to store said classifiers comprising the plurality of decision trees, and to store said positive samples and negative samples to form a positive and negative sample set;
a detection command receiving unit, configured to receive a detection command input by the user and start a detection mode;
an image acquisition unit, configured to obtain a test image;
a recognition unit, configured to use the classifiers of said plurality of decision trees to calculate the probability that said test image is identical to said target image and output a plurality of corresponding probability values, and to determine, according to said plurality of probability values and a preset decision threshold, whether said test image is said target image;
a comparison unit, configured to analyze and obtain the correlation between said test image and the positive and negative sample set, and to determine, according to said correlation and a first preset correlation threshold, whether said test image is said target image;
a determination unit, configured to finally determine that said test image is said target image when said test image is determined to be identical to said target image under both said decision threshold and said first correlation threshold;
an updating unit, configured to use the test image finally determined to be identical to said target image to adjust the parameters of the classifiers of said plurality of decision trees; to add said test image to said positive and negative sample set as a positive sample when the correlation of the test image finally determined to be identical to said target image satisfies a second preset correlation threshold; and to add said test image to said positive and negative sample set as a negative sample when the probability value of a test image finally determined to be different from said target image reaches a preset correction threshold.
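The decision and update logic of claim 6 combines two independent tests: the forest's averaged vote against the decision threshold, and the correlation with the stored sample set against the first correlation threshold; a window counts as the target only when both agree. Confident detections are then recycled as new positives, and confidently rejected windows that the forest nearly accepted become corrective negatives. A hedged sketch of that flow (all names and thresholds are illustrative, and tree-parameter adjustment is omitted):

```python
def final_decision(forest_probs, correlation, decision_thr, corr_thr):
    # Determination unit: accept only when the averaged forest vote
    # and the sample-set correlation both clear their thresholds.
    avg = sum(forest_probs) / len(forest_probs)
    return avg >= decision_thr and correlation >= corr_thr

def update_sample_set(is_target, correlation, avg_prob, patch,
                      pos_set, neg_set, corr_thr2, correction_thr):
    # Updating unit: grow the positive/negative sample set online.
    if is_target and correlation >= corr_thr2:
        pos_set.append(patch)           # strong detection -> new positive
    elif (not is_target) and avg_prob >= correction_thr:
        neg_set.append(patch)           # near false alarm -> new negative
```

This online loop is what lets the recognition precision and stability improve with use, as the abstract states.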
7. The man-machine interaction device according to claim 6, characterized in that collecting the positive samples and negative samples by said training unit comprises rotating, projecting, scaling, or translating the target image, and collecting the resulting images as positive samples.
8. The man-machine interaction device according to claim 6, characterized in that said recognition unit is configured to extract a variance value of said test image, use a preset variance threshold to exclude test images that do not satisfy it, and calculate, for the test images that satisfy the variance threshold, the probability that said test image is identical to said target image.
9. The man-machine interaction device according to claim 6, characterized in that said preset decision threshold is set to the maximum probability value obtained when said training unit verifies the negative samples during training.
CN201110361120.0A 2011-11-15 2011-11-15 Man-machine interaction method and device Active CN103105924B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201110361120.0A CN103105924B (en) 2011-11-15 2011-11-15 Man-machine interaction method and device

Publications (2)

Publication Number Publication Date
CN103105924A true CN103105924A (en) 2013-05-15
CN103105924B CN103105924B (en) 2015-09-09

Family

ID=48313850

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201110361120.0A Active CN103105924B (en) 2011-11-15 2011-11-15 Man-machine interaction method and device

Country Status (1)

Country Link
CN (1) CN103105924B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1828632A (en) * 2005-02-28 2006-09-06 株式会社东芝 Object detection apparatus, learning apparatus, object detection system, object detection method
WO2009018161A1 (en) * 2007-07-27 2009-02-05 Gesturetek, Inc. Enhanced camera-based input
US20090183125A1 (en) * 2008-01-14 2009-07-16 Prime Sense Ltd. Three-dimensional user interface
CN102012740A (en) * 2010-11-15 2011-04-13 中国科学院深圳先进技术研究院 Man-machine interaction method and system

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104281852A (en) * 2013-07-11 2015-01-14 上海瀛联体感智能科技有限公司 Target tracking algorithm based on fusion 2D detection
CN105069470A (en) * 2015-07-29 2015-11-18 腾讯科技(深圳)有限公司 Classification model training method and device
CN105427129A (en) * 2015-11-12 2016-03-23 腾讯科技(深圳)有限公司 Information delivery method and system
CN106295531A (en) * 2016-08-01 2017-01-04 乐视控股(北京)有限公司 A kind of gesture identification method and device and virtual reality terminal
CN107797666A (en) * 2017-11-21 2018-03-13 出门问问信息科技有限公司 Gesture identification method, device and electronic equipment
WO2020147598A1 (en) * 2019-01-15 2020-07-23 北京字节跳动网络技术有限公司 Model action method and apparatus, speaker having screen, electronic device, and storage medium
CN110013197A (en) * 2019-04-16 2019-07-16 上海天诚通信技术股份有限公司 A kind of sweeping robot object identification method
CN110363074A (en) * 2019-06-03 2019-10-22 华南理工大学 One kind identifying exchange method for complicated abstract class of things peopleization
CN112817563A (en) * 2020-03-26 2021-05-18 腾讯科技(深圳)有限公司 Target attribute configuration information determination method, computer device, and storage medium
CN112817563B (en) * 2020-03-26 2023-09-29 腾讯科技(深圳)有限公司 Target attribute configuration information determining method, computer device, and storage medium

Also Published As

Publication number Publication date
CN103105924B (en) 2015-09-09

Similar Documents

Publication Publication Date Title
CN109919251B (en) Image-based target detection method, model training method and device
TWI786313B (en) Method, device, storage medium, and apparatus of tracking target
US10198823B1 (en) Segmentation of object image data from background image data
CN103105924B (en) Man-machine interaction method and device
CN107808143B (en) Dynamic gesture recognition method based on computer vision
CN107767405B (en) Nuclear correlation filtering target tracking method fusing convolutional neural network
US10902056B2 (en) Method and apparatus for processing image
Zhang et al. Pedestrian detection method based on Faster R-CNN
CN103353935B (en) A kind of 3D dynamic gesture identification method for intelligent domestic system
CN104317391B (en) A kind of three-dimensional palm gesture recognition exchange method and system based on stereoscopic vision
EP2374089B1 (en) Method, apparatus and computer program product for providing hand segmentation for gesture analysis
CN102257511B (en) Method, apparatus and computer program product for providing adaptive gesture analysis
WO2020078017A1 (en) Method and apparatus for recognizing handwriting in air, and device and computer-readable storage medium
CN103226835B (en) Based on method for tracking target and the system of online initialization gradient enhancement regression tree
CN104049760B (en) The acquisition methods and system of a kind of man-machine interaction order
CN102831439A (en) Gesture tracking method and gesture tracking system
CN111259751A (en) Video-based human behavior recognition method, device, equipment and storage medium
WO2016025713A1 (en) Three-dimensional hand tracking using depth sequences
US20130155026A1 (en) New kind of multi-touch input device
CN111444764A (en) Gesture recognition method based on depth residual error network
CN112995757B (en) Video clipping method and device
CN113516113A (en) Image content identification method, device, equipment and storage medium
She et al. A real-time hand gesture recognition approach based on motion features of feature points
CN113378770A (en) Gesture recognition method, device, equipment, storage medium and program product
CN103106388A (en) Method and system of image recognition

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant