CN110414393A - Natural interaction method and terminal based on deep learning - Google Patents

Natural interaction method and terminal based on deep learning

Info

Publication number
CN110414393A
Authority
CN
China
Prior art keywords
click
deep learning
area
clicking
key point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910636847.1A
Other languages
Chinese (zh)
Inventor
胡宏波 (Hu Hongbo)
熊伟 (Xiong Wei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fuzhou Rockchip Electronics Co Ltd
Original Assignee
Fuzhou Rockchip Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fuzhou Rockchip Electronics Co Ltd filed Critical Fuzhou Rockchip Electronics Co Ltd
Priority to CN201910636847.1A priority Critical patent/CN110414393A/en
Publication of CN110414393A publication Critical patent/CN110414393A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/24 - Classification techniques
    • G06F 18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/20 - Image preprocessing
    • G06V 10/22 - Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G06V 10/235 - Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition based on user input or interaction
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/20 - Movements or behaviour, e.g. gesture recognition

Abstract

The present invention provides a natural interaction method and terminal based on deep learning. A deep learning model is used to identify, in an image containing a click-implementing body and a click area, the key points of the click-implementing body; the action of the click-implementing body is determined according to the key points; the depth difference between the region where the action of the click-implementing body occurs and the click area is computed, and whether the click-implementing body has clicked the click area is determined according to the depth difference. No preset feature texture needs to be applied to the click-implementing body, so whether a featureless click-implementing body has performed a physical click can be detected, achieving the effect of natural interaction. The method is well suited to human-computer interaction terminals such as point readers and game machines, and improves the convenience of human-computer interaction.

Description

Natural interaction method and terminal based on deep learning
Technical field
The present invention relates to the field of human-computer interaction, and more particularly to a natural interaction method and terminal based on deep learning.
Background art
Existing point-reading terminals generally realize point reading with a reading pen or with the user's finger. Both the pen-based and finger-based approaches share a common requirement: the reading pen or finger must carry a preset feature texture. The point-reading terminal identifies the feature texture on the reading pen or finger to determine the specific position the pen or finger points to, judges whether the pen or finger is in contact with that position, and responds accordingly.
However, with the above point-reading approach, it must be ensured before every reading session that the reading pen or finger carries the preset texture features, and that those texture features are intact; otherwise reading errors result, which inevitably causes some inconvenience to the operator.
Summary of the invention
The technical problem to be solved by the present invention is to provide a natural interaction method and terminal based on deep learning with which point reading can be realized without preset feature textures, improving the convenience of human-computer interaction.
In order to solve the above technical problem, one technical solution adopted by the present invention is:
A natural interaction method based on deep learning, comprising the steps of:
S1: using a deep learning model, identifying the key points of a click-implementing body in an image containing the click-implementing body and a click area;
S2: determining the action of the click-implementing body according to the key points;
S3: computing the depth difference between the region where the action of the click-implementing body occurs and the click area, and determining, according to the depth difference, whether the click-implementing body has clicked the click area.
Further, before step S1 the method comprises:
training the deep learning model with click-implementing bodies annotated with key point information, so that the deep learning model can locate the key points of the click-implementing body.
Further, step S2 comprises:
determining the action of the click-implementing body according to the key points using a classifier of the deep learning model.
Further, step S2 further comprises:
judging whether the action of the click-implementing body is a click action, and if so, executing step S3.
Further, step S3 comprises:
converting the image into a corresponding depth map through a deep convolutional neural network to obtain depth information;
computing, according to the depth information, the depth difference between the region where the action of the click-implementing body occurs and the click area;
determining, according to the depth difference, whether the click-implementing body is in contact with the click area; if so, the click-implementing body has clicked the click area; otherwise, the click-implementing body has not clicked the click area.
Further, after step S3 the method further comprises:
if the click-implementing body has clicked the click area, converting the click of the click-implementing body on the click area into a recognizable click action mode.
In order to solve the above technical problem, another technical solution adopted by the present invention is:
A natural interaction terminal based on deep learning, comprising a memory, a processor, and a computer program stored on the memory and runnable on the processor, wherein the processor, when executing the computer program, performs the steps of:
S1: using a deep learning model, identifying the key points of a click-implementing body in an image containing the click-implementing body and a click area;
S2: determining the action of the click-implementing body according to the key points;
S3: computing the depth difference between the region where the action of the click-implementing body occurs and the click area, and determining, according to the depth difference, whether the click-implementing body has clicked the click area.
Further, before step S1 the following is performed:
training the deep learning model with click-implementing bodies annotated with key point information, so that the deep learning model can locate the key points of the click-implementing body.
Further, step S2 comprises:
determining the action of the click-implementing body according to the key points using a classifier of the deep learning model.
Further, step S2 further comprises:
judging whether the action of the click-implementing body is a click action, and if so, executing step S3.
Further, step S3 comprises:
converting the image into a corresponding depth map through a deep convolutional neural network to obtain depth information;
computing, according to the depth information, the depth difference between the region where the action of the click-implementing body occurs and the click area;
determining, according to the depth difference, whether the click-implementing body is in contact with the click area; if so, the click-implementing body has clicked the click area; otherwise, the click-implementing body has not clicked the click area.
Further, after step S3 the following is performed:
if the click-implementing body has clicked the click area, converting the click of the click-implementing body on the click area into a recognizable click action mode.
The beneficial effects of the present invention are as follows: the key points of the click-implementing body are identified automatically by a deep learning model, the action of the click-implementing body is determined according to the identified key points, and whether the click-implementing body has clicked the click area is determined from that action together with the depth difference between different regions of the image. No preset feature texture needs to be applied to the click-implementing body, so whether a featureless click-implementing body has performed a physical click can be detected, achieving the effect of natural interaction. The method is well suited to human-computer interaction terminals such as point readers and game machines, and improves the convenience of human-computer interaction.
Brief description of the drawings
Fig. 1 is a flow chart of the steps of a natural interaction method based on deep learning according to an embodiment of the present invention;
Fig. 2 is a structural schematic diagram of a natural interaction terminal based on deep learning according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of the training data used to train the deep learning model according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of an embodiment of the present invention successfully detecting whether a finger is in contact with a book;
Fig. 5 is a schematic diagram of a case in which an embodiment of the present invention cannot determine whether a finger is in contact with a book;
Fig. 6 is a schematic diagram of the relative depth information between different pixels in an image according to an embodiment of the present invention;
Reference numerals:
1: natural interaction terminal based on deep learning; 2: memory; 3: processor.
Detailed description of the embodiments
To explain the technical content, objectives, and effects of the present invention in detail, the following description is given in conjunction with embodiments and the accompanying drawings.
The natural interaction method and terminal based on deep learning proposed by the present invention are applicable to any scenario requiring human-computer interaction, such as point readers, game machines, virtual keyboards, virtual mice, interactive books, VR scenes, AR scenes, and MR scenes. The following description refers to specific application scenarios.
Referring to Fig. 1, a natural interaction method based on deep learning comprises the steps of:
S1: using a deep learning model, identifying the key points of a click-implementing body in an image containing the click-implementing body and a click area;
Here, the click area containing the click-implementing body is photographed by a camera to obtain the corresponding image, which is then captured and may be post-processed by an ISP module; specifically, 4A effect processing may be applied to the image. The click-implementing body may be any physical object with which a click can be performed, such as a pen, a hand, or a game controller, and the click area may be any interface on which human-computer interaction can be realized, such as a book, an interactive game interface, a wall, or a desktop;
To detect the region where the click-implementing body moves, a candidate region frame may be cut out of the image, and recognition is then performed on the image within the candidate region frame to identify the key points of the click-implementing body;
To cut out the candidate region, the corner points of the continuous region are first found, and the rectangle enclosing all of the corner points is then computed; this rectangle is the candidate region frame;
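As a minimal sketch of this candidate-region step, assuming an OpenCV pipeline: the corner detector, its parameters, and the padding margin below are illustrative choices, not details fixed by the description.

```python
import cv2
import numpy as np

def candidate_region(gray_frame, max_corners=200, margin=10):
    """Return an (x, y, w, h) box enclosing the detected corner points."""
    corners = cv2.goodFeaturesToTrack(gray_frame, maxCorners=max_corners,
                                      qualityLevel=0.01, minDistance=5)
    if corners is None:
        return None  # no continuous region found in this frame
    pts = corners.reshape(-1, 2).astype(np.int32)
    x, y, w, h = cv2.boundingRect(pts)  # smallest rectangle around all corners
    # Pad the box slightly so the whole click-implementing body stays inside.
    x, y = max(x - margin, 0), max(y - margin, 0)
    return x, y, w + 2 * margin, h + 2 * margin
```

Key point recognition then runs only on the cropped sub-image, which keeps the detector's input small.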
The deep learning model learns the features of the training data (click pictures annotated with feature points) through multiple convolutional layers; the learned data features are then used for recognition (in the present invention, click recognition);
A single-layer network is an N*N weight matrix, and a multi-layer network is M layers of such weight matrices, where the matrices of the individual layers need not be the same size;
The training process adjusts the weights of the N*N*M weight matrices to a local or global optimum;
A multi-layer network can learn low-level, mid-level, and high-level features of the data;
After feature recognition, a classifier, i.e. a weighted synthesis of the features, determines whether the action is a click action;
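A minimal PyTorch sketch of this layered structure, with stacked convolutional stages of unequal size feeding a small classifier head; the layer widths and the two-class output are illustrative assumptions rather than the patent's fixed architecture.

```python
import torch
import torch.nn as nn

class ClickFeatureNet(nn.Module):
    def __init__(self, num_actions=2):
        super().__init__()
        # Each stage learns progressively higher-level features,
        # and the per-layer weight matrices differ in size.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),   # low-level
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # mid-level
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # high-level
            nn.AdaptiveAvgPool2d(1),
        )
        # Classifier head: the "weighted synthesis" that decides whether
        # the recognized features correspond to a click action.
        self.classifier = nn.Linear(64, num_actions)

    def forward(self, x):
        f = self.features(x).flatten(1)
        return self.classifier(f)
```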
Before the deep learning model is used to identify the key points of the click-implementing body, the model must be trained with click-implementing bodies annotated with key point information, so that it can locate the key points of the click-implementing body. Fig. 3 shows the training data used to train the deep learning model, comprising various postures of a hand serving as the click-implementing body together with the key point information on the hand;
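A hedged sketch of this key-point supervision, assuming the model regresses normalized (x, y) coordinates of the annotated key points; the loss function and optimizer below are assumptions, since the description does not fix the training objective.

```python
import torch
import torch.nn as nn

def train_keypoints(model, loader, epochs=10, lr=1e-3):
    """Fit a keypoint model on images annotated with key point information."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()  # regress the labeled (x, y) keypoint coordinates
    model.train()
    for _ in range(epochs):
        for images, keypoints in loader:          # keypoints: (B, K, 2)
            pred = model(images).view_as(keypoints)  # reshape flat output
            loss = loss_fn(pred, keypoints)
            opt.zero_grad()
            loss.backward()
            opt.step()
```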
Specifically, the deep learning model includes a convolutional neural network model. The convolutional neural network may be a model trained in advance for the key-point localization task of the click action in a supervised, semi-supervised, unsupervised, or similar manner; this embodiment does not limit the specific training method:
For example, the convolutional neural network model may be trained in advance in a supervised manner, such as pre-training it with annotated click action data;
The network structure of the convolutional neural network model can be designed flexibly according to the click action localization task, and this embodiment does not limit it: for example, the convolutional neural network model may include, but is not limited to, convolutional layers, linear ReLU layers, pooling layers, fully connected layers, and the like, and the more layers there are, the deeper the network; as another example, the network structure may adopt, but is not limited to, MobileNet, a deep residual network (Deep Residual Network, ResNet), or VGGNet (Visual Geometry Group Network);
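Illustratively, such a key-point model could be assembled on one of the standard backbones the description names. The sketch below assumes torchvision backbones and a 21-key-point hand head; the head size and backbone choices are assumptions.

```python
import torch.nn as nn
import torchvision.models as models

def make_keypoint_model(backbone="mobilenet", num_keypoints=21):
    """Build a keypoint regressor on a standard backbone (illustrative)."""
    if backbone == "mobilenet":
        net = models.mobilenet_v2(weights=None)
        # Replace the ImageNet classifier with a (K * 2)-way coordinate head.
        net.classifier[-1] = nn.Linear(net.last_channel, num_keypoints * 2)
    elif backbone == "resnet":
        net = models.resnet18(weights=None)
        net.fc = nn.Linear(net.fc.in_features, num_keypoints * 2)
    else:
        raise ValueError(f"unsupported backbone: {backbone}")
    return net
```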
S2: determining the action of the click-implementing body according to the key points;
Using the classifier of the deep learning model, the action of the click-implementing body can be recognized from the identified key points; for example, if the click-implementing body is a hand, the specific posture of the hand can be determined from the recognized key points. The classifier here can be selected according to actual needs and may also be another classifier, such as Sigmoid, softmax, or a weighted classifier;
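One plausible form such a classifier could take, sketched under assumptions: a small MLP over the K detected key-point coordinates with a softmax output, which is one of the options the description names; the layer sizes and action classes are illustrative.

```python
import torch
import torch.nn as nn

class ActionClassifier(nn.Module):
    """Classify the posture (e.g. click vs. non-click) from keypoints."""
    def __init__(self, num_keypoints=21, num_actions=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_keypoints * 2, 64), nn.ReLU(),
            nn.Linear(64, num_actions),
        )

    def forward(self, keypoints):              # keypoints: (B, K, 2)
        logits = self.net(keypoints.flatten(1))
        return torch.softmax(logits, dim=-1)   # per-action probabilities
```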
S3: computing the depth difference between the region where the action of the click-implementing body occurs and the click area, and determining, according to the depth difference, whether the click-implementing body has clicked the click area;
Since there is only one camera, the picture must be converted into its corresponding depth map by a specific deep convolutional neural network to obtain depth information, which is used to judge whether the click-implementing body is in contact with the surface to be measured, such as a desktop or paper;
Specifically, the relative depth information within the original image, i.e. the relative near-far relationships between the individual pixels of the picture, can be obtained by training a neural network, as shown in Fig. 6. When the distance between different objects in the picture is small, their depth information is also close, which appears as similar colors in the depth map: the closer the objects, the more nearly identical the colors; when their distance is larger, the color difference is larger. From this it can be judged whether different objects are in contact;
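A hedged sketch of this contact test: compare the median relative depth around the fingertip with that of the surrounding click area in the predicted depth map. The region radii and the threshold are assumptions; the description only requires that close relative depths indicate contact.

```python
import numpy as np

def is_contact(depth_map, tip_xy, radius=5, ring=20, threshold=0.05):
    """True if the fingertip's relative depth matches the surface around it."""
    x, y = tip_xy
    h, w = depth_map.shape                     # assumed (H, W) relative depths
    ys, xs = np.ogrid[:h, :w]
    dist = np.hypot(xs - x, ys - y)            # pixel distance to the fingertip
    tip_depth = np.median(depth_map[dist <= radius])
    area_depth = np.median(depth_map[(dist > radius) & (dist <= ring)])
    # Near-identical relative depths ("similar colors" in the depth map)
    # are taken to mean the two regions are in contact.
    return abs(tip_depth - area_depth) < threshold
```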
In another alternative embodiment, step S2 further includes judging whether the action of the click-implementing body is a click action, and if so, executing step S3;
That is, it is first judged whether the action of the click-implementing body is an action that can produce a click operation, and only if so is step S3 executed to further judge whether the click-implementing body is in contact with the surface to be measured; Figs. 4 and 5 respectively show a case in which whether the click-implementing body contacts the surface to be measured can be correctly detected, and a case in which this cannot be determined. By first judging the posture of the click-implementing body, whether a click operation can be produced is pre-judged; if the action cannot produce a click operation, the subsequent contact judgment is not executed, which not only improves recognition efficiency but also reduces resource consumption;
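A sketch of the resulting two-stage pipeline, in which the cheap posture check gates the more expensive depth comparison; the class index, the fingertip key-point index, and the helper `is_contact` are the illustrative assumptions carried over from the sketches above.

```python
CLICK_ACTION = 1  # assumed index of the "click" class in the classifier output

def detect_click(frame, keypoint_model, action_model, depth_model):
    """S1-S3 in sequence: keypoints -> posture gate -> depth contact test."""
    keypoints = keypoint_model(frame)                 # S1: locate key points
    action = int(action_model(keypoints).argmax(-1))  # S2: classify the posture
    if action != CLICK_ACTION:
        return False       # posture cannot produce a click, so skip step S3
    depth_map = depth_model(frame)                    # S3: relative depth map
    tip_xy = keypoints[0, -1].tolist()                # assumed fingertip index
    return is_contact(depth_map, tip_xy)              # contact = physical click
```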
In another alternative embodiment, step S3 comprises:
converting the image into a corresponding depth map through a deep convolutional neural network to obtain depth information;
computing, according to the depth information, the depth difference between the region where the action of the click-implementing body occurs and the click area;
determining, according to the depth difference, whether the click-implementing body is in contact with the click area; if so, the click-implementing body has clicked the click area; otherwise, the click-implementing body has not clicked the click area.
In another alternative embodiment, after it is judged that the click-implementing body contacts the click area, the physical click needs to be associated with the specific content of the click area (for example an interactive book or an interactive game), so that the corresponding human-computer interaction device can make the matching response, for example a point reader producing sound after the click, or a game machine making the corresponding response;
Specifically, if the click-implementing body clicks the click area, the click of the click-implementing body on the click area is converted into a recognizable click action mode. A recognizable click action mode here refers to a click action mode that an external system, such as a point reader, a game machine, the device connected to a virtual keyboard, or the device connected to a virtual mouse, can recognize, so that the external system can perform the corresponding response operation according to the recognized click action mode. For example, for a point reader, if it is determined that a finger has touched a certain English word in a book, the contact between the finger and that word is converted into an instruction to read the word aloud; after the point reader recognizes this instruction, it reads the corresponding English word. As another example, for a virtual keyboard, when it is recognized that a finger has touched a certain letter, the contact between the finger and that letter is converted into an instruction to output the letter; when the device connected to the virtual keyboard recognizes this instruction, it outputs the corresponding letter;
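An illustrative sketch of this conversion step: a detected physical click is wrapped into an event that an external system can consume. The event fields, device names, and handler protocol below are assumptions, not an interface defined by the patent.

```python
from dataclasses import dataclass

@dataclass
class ClickEvent:
    x: int
    y: int
    target: str  # e.g. the word under the fingertip, or a virtual key

def to_click_action_mode(event: ClickEvent, device: str) -> dict:
    """Translate a physical click into a mode the external system recognizes."""
    if device == "point_reader":
        return {"action": "read_aloud", "word": event.target}
    if device == "virtual_keyboard":
        return {"action": "type", "key": event.target}
    return {"action": "click", "x": event.x, "y": event.y}
```

For example, a point reader receiving `{"action": "read_aloud", "word": "apple"}` would read the word aloud, while a virtual keyboard receiving `{"action": "type", "key": "a"}` would output the letter.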
By converting the physical click mode into a click action mode that external systems can recognize in this way, the versatility of the proposed natural interaction method is improved, so that it can be applied to the various application scenarios requiring human-computer interaction.
Referring to Fig. 2, a natural interaction terminal 1 based on deep learning comprises a memory 2, a processor 3, and a computer program stored on the memory 2 and runnable on the processor 3; when the processor 3 executes the computer program, the operating steps corresponding to each of the method embodiments described above are carried out.
In conclusion a kind of natural interactive method and terminal based on deep learning provided by the invention, is based on depth It practises model automatic identification and clicks the key point for implementing body, and determined according to the key point identified and click the movement for implementing body, In It determines to click the movement for implementing body to pass through different zones in image based on the movement for clicking implementation body after click action Between depth difference determine to click and implement whether body clicks the click on area, and the physical points blow mode is converted to and can be known Other click action mode, so that human-computer interaction device such as point reader, interactive game machine etc. can identify the click action Mode simultaneously makes corresponding response, does not need clicking the default feature texture of implementation body setting, can easily and efficiently detect not Click with feature implements whether body has carried out physics click, and human-computer interaction device is made according to physics click Corresponding response, achievees the effect that natural interaction, is very suitable to the terminal of the human-computer interactions such as point reader, game machine, improves people The convenience of machine interaction.
The above are merely embodiments of the present invention and do not thereby limit the patent scope of the present invention. All equivalent transformations made using the contents of the specification and drawings of the present invention, whether applied directly or indirectly in related technical fields, are likewise included within the patent protection scope of the present invention.

Claims (12)

1. A natural interaction method based on deep learning, characterized by comprising the steps of:
S1: using a deep learning model, identifying the key points of a click-implementing body in an image containing the click-implementing body and a click area;
S2: determining the action of the click-implementing body according to the key points;
S3: computing the depth difference between the region where the action of the click-implementing body occurs and the click area, and determining, according to the depth difference, whether the click-implementing body has clicked the click area.
2. The natural interaction method based on deep learning according to claim 1, characterized in that before step S1 the method comprises:
training the deep learning model with click-implementing bodies annotated with key point information, so that the deep learning model can locate the key points of the click-implementing body.
3. The natural interaction method based on deep learning according to claim 1, characterized in that step S2 comprises:
determining the action of the click-implementing body according to the key points using a classifier of the deep learning model.
4. The natural interaction method based on deep learning according to claim 1 or 3, characterized in that step S2 further comprises:
judging whether the action of the click-implementing body is a click action, and if so, executing step S3.
5. The natural interaction method based on deep learning according to claim 1, characterized in that step S3 comprises:
converting the image into a corresponding depth map through a deep convolutional neural network to obtain depth information;
computing, according to the depth information, the depth difference between the region where the action of the click-implementing body occurs and the click area;
determining, according to the depth difference, whether the click-implementing body is in contact with the click area; if so, the click-implementing body has clicked the click area; otherwise, the click-implementing body has not clicked the click area.
6. The natural interaction method based on deep learning according to claim 1 or 5, characterized in that after step S3 the method further comprises:
if the click-implementing body has clicked the click area, converting the click of the click-implementing body on the click area into a recognizable click action mode.
7. A natural interaction terminal based on deep learning, comprising a memory, a processor, and a computer program stored on the memory and runnable on the processor, characterized in that the processor, when executing the computer program, implements the following steps:
S1: using a deep learning model, identifying the key points of a click-implementing body in an image containing the click-implementing body and a click area;
S2: determining the action of the click-implementing body according to the key points;
S3: computing the depth difference between the region where the action of the click-implementing body occurs and the click area, and determining, according to the depth difference, whether the click-implementing body has clicked the click area.
8. The natural interaction terminal based on deep learning according to claim 7, characterized in that before step S1 the following is performed:
training the deep learning model with click-implementing bodies annotated with key point information, so that the deep learning model can locate the key points of the click-implementing body.
9. The natural interaction terminal based on deep learning according to claim 7, characterized in that step S2 comprises:
determining the action of the click-implementing body according to the key points using a classifier of the deep learning model.
10. The natural interaction terminal based on deep learning according to claim 7 or 9, characterized in that step S2 further comprises:
judging whether the action of the click-implementing body is a click action, and if so, executing step S3.
11. The natural interaction terminal based on deep learning according to claim 7, characterized in that step S3 comprises:
converting the image into a corresponding depth map through a deep convolutional neural network to obtain depth information;
computing, according to the depth information, the depth difference between the region where the action of the click-implementing body occurs and the click area;
determining, according to the depth difference, whether the click-implementing body is in contact with the click area; if so, the click-implementing body has clicked the click area; otherwise, the click-implementing body has not clicked the click area.
12. The natural interaction terminal based on deep learning according to claim 7 or 11, characterized in that after step S3:
if the click-implementing body has clicked the click area, the click of the click-implementing body on the click area is converted into a recognizable click action mode.
CN201910636847.1A 2019-07-15 2019-07-15 Natural interaction method and terminal based on deep learning Pending CN110414393A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910636847.1A CN110414393A (en) 2019-07-15 2019-07-15 Natural interaction method and terminal based on deep learning

Publications (1)

Publication Number Publication Date
CN110414393A true CN110414393A (en) 2019-11-05

Family

ID=68361428

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910636847.1A Pending CN110414393A (en) Natural interaction method and terminal based on deep learning

Country Status (1)

Country Link
CN (1) CN110414393A (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150104075A1 (en) * 2013-10-16 2015-04-16 Qualcomm Incorporated Z-axis determination in a 2d gesture system
CN104571482A (en) * 2013-10-22 2015-04-29 中国传媒大学 Digital device control method based on somatosensory recognition
CN104750286A (en) * 2013-12-26 2015-07-01 联想(北京)有限公司 Data acquisition method and electronic device
CN105787485A (en) * 2014-12-25 2016-07-20 联想(北京)有限公司 Click operation recognition device and click operation recognition method
CN107357414A (en) * 2016-05-09 2017-11-17 株式会社理光 Click action recognition method and click action recognition device
CN106547356A (en) * 2016-11-17 2017-03-29 科大讯飞股份有限公司 Intelligent interaction method and device
CN108227912A (en) * 2017-11-30 2018-06-29 北京市商汤科技开发有限公司 Device control method and device, electronic equipment, and computer storage medium
CN108399367A (en) * 2018-01-31 2018-08-14 深圳市阿西莫夫科技有限公司 Hand motion recognition method and apparatus, computer device, and readable storage medium
CN108389226A (en) * 2018-02-12 2018-08-10 北京工业大学 Unsupervised depth prediction method based on convolutional neural networks and binocular parallax
CN108921129A (en) * 2018-07-20 2018-11-30 网易(杭州)网络有限公司 Image processing method, system, medium and electronic equipment
CN109725724A (en) * 2018-12-29 2019-05-07 百度在线网络技术(北京)有限公司 Gesture control method and device for a device with a screen

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111027533A (en) * 2019-12-12 2020-04-17 广东小天才科技有限公司 Click-to-read coordinate conversion method and system, terminal device and storage medium
CN111027533B (en) * 2019-12-12 2024-02-23 广东小天才科技有限公司 Click-to-read coordinate transformation method, system, terminal equipment and storage medium
CN114115693A (en) * 2020-08-26 2022-03-01 华为技术有限公司 Method and device for inputting text based on virtual keyboard
WO2022042449A1 (en) * 2020-08-26 2022-03-03 华为技术有限公司 Method and apparatus for inputting text on the basis of virtual keyboard

Similar Documents

Publication Publication Date Title
Xu A real-time hand gesture recognition and human-computer interaction system
CN111563502B (en) Image text recognition method and device, electronic equipment and computer storage medium
CN108351828A Techniques for device-independent automated application testing
Nai et al. Fast hand posture classification using depth features extracted from random line segments
Dhawan et al. Implementation of hand detection based techniques for human computer interaction
Conseil et al. Comparison of Fourier descriptors and Hu moments for hand posture recognition
Nair et al. Hand gesture recognition system for physically challenged people using IOT
CN109919077B (en) Gesture recognition method, device, medium and computing equipment
CN111680594A (en) Augmented reality interaction method based on gesture recognition
CN111078552A Method and device for detecting page display abnormality, and storage medium
CN112001394A (en) Dictation interaction method, system and device based on AI vision
CN113762269A (en) Chinese character OCR recognition method, system, medium and application based on neural network
CN111399638A Computer for the blind and auxiliary control method for a smartphone adapted thereto
CN112257665A (en) Image content recognition method, image recognition model training method, and medium
CN110414393A (en) A kind of natural interactive method and terminal based on deep learning
WO2023165616A1 (en) Method and system for detecting concealed backdoor of image model, storage medium, and terminal
US11157765B2 (en) Method and system for determining physical characteristics of objects
Baumgartl et al. Vision-based hand gesture recognition for human-computer interaction using MobileNetV2
CN108628455B (en) Virtual sand painting drawing method based on touch screen gesture recognition
CN108108648A New gesture recognition system, device and method
WO2023102723A1 (en) Image processing method and system
CN113158870B Adversarial training method, system and medium for a 2D multi-person pose estimation network
Dhamanskar et al. Human computer interaction using hand gestures and voice
CN115223239A (en) Gesture recognition method and system, computer equipment and readable storage medium
Jeong et al. Hand gesture user interface for transforming objects in 3d virtual space

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20191105