CN103135745B - Non-contact control method, information equipment and system based on depth image - Google Patents
- Publication number
- CN103135745B (application number CN201110382053.0A)
- Authority
- CN
- China
- Prior art keywords
- forearm
- user
- locus
- information
- relative
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Landscapes
- User Interface Of Digital Computer (AREA)
Abstract
A non-contact control method, information device, and system based on depth images are disclosed. The method comprises the steps of: detecting a user's forearm in a captured live depth video, so as to output information representing the spatial position and pointing direction of the user's forearm; and converting the information representing the spatial position and pointing direction of the user's forearm into a command executable by a target system. With the above structure and method of the present invention, robust and reliable control of the target system can be achieved in a non-contact manner.
Description
Technical field
The present invention relates to non-contact control, and in particular to a non-contact control method, information device, and system based on depth images.
Background technology
It is well known that, in the field of system control, intelligent non-contact control is one of the most promising directions. Among the various non-contact control methods, those based on visual information are particularly important, because visual information can provide machines with a way of perceiving the world similar to that of humans.
Furthermore, owing to the rapid development of manufacturing technology, camera devices have become ever cheaper while their performance has grown ever stronger. Cameras have now become standard accessories of numerous information devices, from mobile phones to notebook computers, and from automatic teller machines to bulletin boards. All of this provides a solid foundation for applications based on visual information. At present, however, cameras in many cases serve only simple purposes; in an automatic teller machine, for example, the camera is used merely to record visual information. It is therefore necessary to develop more methods based on visual information, so as to expand the applications of camera-equipped electronic devices.
Patent document 1 (US5594469) proposes a gesture-based machine control system. In this system, raw data are captured by a camera and then decoded into an image sequence. Background removal is performed on the images in the sequence so as to recognize a trigger gesture, which is identified using a correlation technique. Once the trigger gesture is recognized, its motion is tracked and used to control a display.
However, the system of patent document 1 has the following problems. Because the background is removed using background segmentation techniques, recognition errors occur when the background is complex or changing. In addition, the user must not stand in front of the camera when the system starts, because the system first determines the background image; otherwise a large amount of noise is introduced and background segmentation fails.
Summary of the invention
The object of the present invention is to propose a non-contact control method, information device, and system based on depth images.
In a first aspect of the present invention, a method for controlling a contactless system is proposed, comprising the steps of: detecting a user's forearm in a captured live depth video, so as to output information representing the spatial position and pointing direction of the user's forearm; and converting the information representing the spatial position and pointing direction of the user's forearm into a command executable by a target system.
In a second aspect of the present invention, an information device is proposed, comprising: an object detection unit that detects a user's forearm in a captured live depth video, so as to output information representing the spatial position and pointing direction of the user's forearm; and a signal conversion unit that converts the information representing the spatial position and pointing direction of the user's forearm into a command executable by a target system.
In a third aspect of the present invention, a contactless system comprising the above information device is proposed.
With the above structure and method of the present invention, robust and reliable control of the target system can be achieved in a non-contact manner.
Brief description of the drawings
The above features and advantages of the invention will become apparent from the following detailed description taken in conjunction with the accompanying drawings, in which:
Fig. 1 shows a schematic diagram of a non-contact control system according to an embodiment of the present invention;
Fig. 2 shows a schematic block diagram of a non-contact control system according to an embodiment of the present invention;
Fig. 3 is a flowchart describing the process of a control method according to an embodiment of the present invention;
Fig. 4 is a flowchart describing the process of detecting the user's forearm;
Fig. 5 is a flowchart describing the signal conversion process; and
Fig. 6 is a schematic diagram describing a practical application of the method according to an embodiment of the present invention.
Embodiment
Below, preferred embodiments of the present invention are described in detail with reference to the accompanying drawings. Throughout the drawings, the same reference numerals are used to denote identical or similar components, even when they appear in different figures. For clarity and conciseness, detailed descriptions of well-known functions and structures incorporated herein are omitted, since they would otherwise obscure the subject matter of the present invention.
Fig. 1 shows a schematic diagram of a non-contact control system according to an embodiment of the present invention. As shown in Fig. 1, the non-contact control system according to this embodiment comprises an information device 100 such as a computer, a depth camera 110, and a display screen 120. According to another embodiment of the present invention, the camera 110 may be integrated into the information device 100. According to yet another embodiment of the present invention, the connection between the camera 110 and the information device 100 shown in Fig. 1 is schematic; it does not imply that a wired connection is necessarily required, and the connection may be established in a manner configured by the user. For example, the user may configure, via keyboard input, the correspondence between existing gestures and the corresponding commands, thereby establishing the connection relationship between the camera and the information device.
Certain depth image information of the user's forearm 130, such as spatial positions and pointing directions together with the control commands corresponding to them, is stored in the information device. When the system is in operation, a video of the depth image of the user's forearm is captured by the camera 110 and input into the information device 100. According to another embodiment of the present invention, the depth camera is an infrared camera, so that requirements regarding illumination can be avoided.
The information device 100 can detect the spatial position and pointing direction of the user's forearm from the depth video. Then, according to whether the actually detected spatial position and pointing direction of the user's forearm (represented, for example, by a spatial angle relative to a predetermined plane such as the display screen) match the preset positions and pointing directions, the information device 100 converts the position and pointing direction into a corresponding control command, such as a left click or right click, or moving the cursor up, down, left, or right. In this way, the user 150 can wave a forearm to control the target system connected to the information device, or a target system within the information device itself.
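The "spatial angle relative to a predetermined plane" mentioned above can be made concrete with a small numerical sketch. The following Python fragment is an illustration only, not part of the patent: the elbow/wrist point names and the assumption that the display lies in the x-y plane (with z the depth axis) are ours.

```python
import math

def forearm_angles(elbow, wrist):
    """Return (azimuth, elevation) in degrees of the forearm direction,
    relative to a display assumed to lie in the x-y plane (z = depth).
    `elbow` and `wrist` are (x, y, z) points from the depth camera."""
    dx, dy, dz = (w - e for w, e in zip(wrist, elbow))
    # Azimuth: rotation of the pointing direction within the screen plane.
    azimuth = math.degrees(math.atan2(dy, dx))
    # Elevation: how steeply the forearm tilts toward the screen.
    elevation = math.degrees(math.atan2(-dz, math.hypot(dx, dy)))
    return azimuth, elevation

# A forearm tilted toward the screen yields a positive elevation:
print(forearm_angles((0.0, 0.0, 1.2), (0.3, 0.0, 1.0)))
```

A real system would obtain the elbow and wrist points by segmenting the forearm in the depth frame; here they are supplied directly for illustration.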
Fig. 2 shows a schematic block diagram of the non-contact control system according to an embodiment of the present invention. As shown in Fig. 2, when the user 150 waves a forearm, the depth camera 110 captures a live depth video, rather than a conventional video, and inputs the captured depth video into the information device 100. The information device 100 detects the spatial position and pointing direction of the arm from the live depth video, then generates the control command corresponding to the arm-waving process and sends it to the target system 140, thereby controlling the operation of the target system. As described above, the target system 140 may also be a part of the information device.
The memory unit of the information device 100 stores preset spatial positions and pointing directions of the user's forearm together with the corresponding control commands, such as 'left-button click', 'right-button click', and 'double click'.
As shown in Fig. 2, the forearm detection unit 101 provided in the information device 100 receives the live depth video from the camera and matches each depth frame of the live video against a preset image or model of the user's forearm. If a matching frame exists, it is concluded that an image of the user's forearm is present in the live depth video, and the spatial position and pointing direction of the user's forearm are then determined.
In addition, the information device 100 is provided with a signal conversion unit 104, which converts the detected spatial position and pointing direction into a command suitable for execution by the target system 140.
As shown in Fig. 2, the information device 100 is further provided with a definition unit 106 that allows the user 150 to define custom control commands. When the user 150 wants to define a distinctive forearm waving position and pointing direction, the camera 110 captures a depth image of that forearm, the spatial position and pointing direction of the forearm are determined, and they are stored, together with the corresponding control command, in the memory unit 105.
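The cooperation between the definition unit 106 and the memory unit 105 described above can be pictured as a small gesture registry. The sketch below is our illustration only: the class name, tolerance values, and matching rule are assumptions, not taken from the patent.

```python
class GestureRegistry:
    """Toy stand-in for memory unit 105: maps (position, angle) -> command."""

    def __init__(self, pos_tol=0.05, ang_tol=10.0):
        self.entries = []          # list of (position, angle, command)
        self.pos_tol = pos_tol     # metres of allowed position error
        self.ang_tol = ang_tol     # degrees of allowed angle error

    def define(self, position, angle, command):
        """Called by the definition unit when the user records a gesture."""
        self.entries.append((position, angle, command))

    def lookup(self, position, angle):
        """Return the command whose stored gesture matches, else None."""
        for pos, ang, cmd in self.entries:
            close_pos = all(abs(a - b) <= self.pos_tol
                            for a, b in zip(position, pos))
            if close_pos and abs(angle - ang) <= self.ang_tol:
                return cmd
        return None

reg = GestureRegistry()
reg.define((0.1, 0.2, 1.0), 45.0, "left-button click")
print(reg.lookup((0.12, 0.21, 1.0), 41.0))  # -> left-button click
```

The tolerance-based lookup reflects the fact that a user never reproduces a recorded forearm pose exactly; the tolerances themselves would be tuned per deployment.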
The specific operation of the control method and of the units of the information device according to embodiments of the present invention is described below with reference to the flowcharts. Fig. 3 is a flowchart describing the process of the control method according to an embodiment of the present invention.
In step S31, the user 150 waves a forearm in front of the depth camera in order to control the movement of the cursor on the screen of the information device 100. In step S32, the depth camera 110 of the information device 100, such as a computer, captures a live video of the forearm and inputs it into the forearm detection unit 101 of the information device.
Next, in step S33, the forearm detection unit 101 receives the live video from the camera, detects the forearm, and determines the spatial position and pointing direction of the user's forearm by analyzing the depth images. If a forearm is present and its spatial position and pointing direction have been obtained, the signal conversion unit 104 converts the detected spatial position and pointing direction into the corresponding control command in step S34. In step S35, the control command is sent to the target system to control it.
For example, after the forearm detection unit detects the spatial position and pointing direction of the forearm, it determines that the forearm points at, say, a certain position on the screen; the cursor is then moved to that position, or the icon at that position is clicked, thereby achieving control of the target system.
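Determining which on-screen position the forearm points at amounts to intersecting the forearm's pointing ray with the screen plane. A minimal sketch follows, under assumptions that are ours, not the patent's: the screen lies in the plane z = 0, and the wrist position and direction vector come from the depth camera in metres.

```python
def pointed_screen_position(wrist, direction):
    """Intersect the ray wrist + t*direction with the screen plane z = 0.
    Returns the (x, y) hit point, or None if the ray points away."""
    wx, wy, wz = wrist
    dx, dy, dz = direction
    if dz >= 0:          # pointing away from (or parallel to) the screen
        return None
    t = -wz / dz         # ray parameter at which the ray reaches z = 0
    return (wx + t * dx, wy + t * dy)

# Wrist half a metre in front of the screen, pointing straight at it:
print(pointed_screen_position((0.2, 0.3, 0.5), (0.0, 0.0, -1.0)))  # (0.2, 0.3)
```

The resulting (x, y) point would then be mapped from camera coordinates to pixel coordinates before moving the cursor; that calibration step is omitted here.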
According to an embodiment of the present invention, additional information such as images of the face, eyes, or nose may also be used as an auxiliary judgment. The object detection unit may detect the user's forearm and its spatial position and pointing direction from the depth images using the AdaBoost algorithm or other object recognition algorithms.
Fig. 4 is a flowchart describing the forearm detection process. As shown in Fig. 4, in step S41 the forearm detection unit 101 uses a technique such as template matching or texture matching, based on the forearm templates stored in the memory unit 105, to examine the captured live video and determine the spatial position and pointing direction of any detected forearm; in step S42 it judges whether a forearm is present. If not, the flow returns to step S41 and detection continues; otherwise, in step S43, the forearm detection unit 101 outputs the spatial position and pointing direction of the forearm.
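Steps S41 to S43 can be sketched as a sliding-window template match over each depth frame. The fragment below is a bare-bones illustration using a sum-of-squared-differences score on NumPy arrays; the toy arrays, threshold, and window scheme are our assumptions, and a production system would more likely use an optimized matcher such as OpenCV's `matchTemplate`.

```python
import numpy as np

def find_forearm(frame, template, threshold=0.5):
    """Slide `template` over `frame` (both 2-D depth maps) and return the
    (row, col) of the best sum-of-squared-differences match, or None if
    even the best score exceeds `threshold` (step S42's 'no forearm')."""
    fh, fw = frame.shape
    th, tw = template.shape
    best_score, best_pos = float("inf"), None
    for r in range(fh - th + 1):
        for c in range(fw - tw + 1):
            window = frame[r:r + th, c:c + tw]
            score = np.mean((window - template) ** 2)
            if score < best_score:
                best_score, best_pos = score, (r, c)
    return best_pos if best_score <= threshold else None

template = np.array([[1.0, 1.0], [1.0, 1.0]])   # toy forearm depth patch
frame = np.full((4, 4), 3.0)                    # background at depth 3 m
frame[1:3, 2:4] = 1.0                           # embed the 'forearm'
print(find_forearm(frame, template))            # -> (1, 2)
```

Because depth values are metric, the match location together with the depth at that location directly yields the forearm's spatial position; the pointing direction would come from the orientation of the matched template.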
As described above, before forearm detection is carried out, it may be necessary to predefine the control commands corresponding to specific forearm spatial positions and pointing directions.
Fig. 5 is a flowchart describing the signal conversion process. The signal output from the forearm detection unit 101 corresponds to the spatial position and pointing direction of the forearm of the user 150. However, it cannot yet be executed by the target system 140, because the target system 140 does not understand these actions. Therefore, in the signal conversion unit 104, the position and pointing signals obtained by analyzing the forearm's spatial position are acquired in step S71, and all the signals are converted into suitable commands in step S72. After this, in step S73, the signal conversion unit 104 outputs the commands.
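Steps S71 to S73 reduce to looking the analysed pointing signal up in a predefined table and emitting the matching command. A compressed sketch follows; the command table, the four-way quantization, and the function names are our invention for illustration, not part of the patent.

```python
# Hypothetical command table: quantized pointing direction -> command.
COMMANDS = {
    "up": "cursor up",
    "down": "cursor down",
    "left": "cursor left",
    "right": "cursor right",
}

def quantize_direction(azimuth):
    """Map a pointing azimuth in degrees to one of four coarse directions."""
    sector = round(azimuth / 90.0) % 4
    return ("right", "up", "left", "down")[sector]

def convert_signal(azimuth):
    """Steps S71-S73: take the analysed pointing signal, convert it into a
    command the target system understands, and output it."""
    return COMMANDS[quantize_direction(azimuth)]

print(convert_signal(85.0))   # -> cursor up
print(convert_signal(-95.0))  # -> cursor down
```

Quantizing into coarse sectors is one way to make the conversion robust to the small pointing jitter a human arm inevitably produces.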
Fig. 6 is a schematic diagram describing a practical application of the method according to an embodiment of the present invention. As shown in the figure, the forearm spatial position and pointing direction detected by the forearm detection unit 101 indicate that the user's forearm points at the "up" item in the on-screen menu; the cursor then moves to that item and clicks it, thereby operating the menu item. Afterwards, the user moves the forearm to point at "close"; once the forearm detection unit 101 detects that the user's forearm points at "close", the signal conversion unit 104 converts the signal representing the spatial position and pointing direction of the user's forearm into a command that closes the target system.
As described above, the apparatus and method of the present invention can be applied to camera-equipped information devices such as desktop PCs, laptop PCs, mobile phones, PDAs, electronic whiteboards, remote controls, and monitoring devices.
The above description serves only to realize embodiments of the present invention. It should be appreciated by those skilled in the art that any modification or partial replacement that does not depart from the scope of the present invention falls within the scope defined by the claims; therefore, the protection scope of the present invention is defined by the appended claims.
Claims (8)
1. A method for controlling a contactless system, comprising the steps of:
detecting a user's forearm in a captured live depth video, so as to output information representing the spatial position and pointing direction of the user's forearm relative to a display screen, wherein the pointing direction is represented by a spatial angle of the user's forearm relative to the display screen; and
converting the information representing the spatial position and pointing direction of the user's forearm relative to the display screen into a command executable by a target system.
2. The method according to claim 1, wherein the detecting step comprises:
reading a prestored user forearm template;
performing matching between the user forearm template and each frame of the captured live video; and
in the event of a match, outputting the spatial position and pointing direction, relative to the display screen, of the forearm in each frame of the live video.
3. The method according to claim 2, further comprising, before the step of detecting the forearm, a step of defining, according to the needs of the user, the control commands corresponding to spatial positions and pointing directions of the forearm relative to the display screen.
4. An information device, comprising:
an object detection unit that detects a user's forearm in a captured live depth video, so as to output information representing the spatial position and pointing direction of the user's forearm relative to a display screen, wherein the pointing direction is represented by a spatial angle of the user's forearm relative to the display screen; and
a signal conversion unit that converts the information representing the spatial position and pointing direction of the user's forearm relative to the display screen into a command executable by a target system.
5. The information device according to claim 4, wherein the forearm detection unit reads a prestored user forearm template, performs matching between the user forearm template and each frame of the captured live video, and, in the event of a match, outputs the spatial position and pointing direction, relative to the display screen, of the forearm in each frame of the live video.
6. The information device according to claim 4, wherein the live depth video is captured by an infrared camera.
7. The information device according to claim 5, further comprising a definition unit that defines, according to the needs of the user, the control commands corresponding to spatial positions and pointing directions of the forearm relative to the display screen.
8. A contactless system, comprising the information device according to any one of claims 4 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201110382053.0A CN103135745B (en) | 2011-11-25 | 2011-11-25 | Non-contact control method, information equipment and system based on depth image |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201110382053.0A CN103135745B (en) | 2011-11-25 | 2011-11-25 | Non-contact control method, information equipment and system based on depth image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103135745A CN103135745A (en) | 2013-06-05 |
CN103135745B true CN103135745B (en) | 2018-01-02 |
Family
ID=48495687
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201110382053.0A Expired - Fee Related CN103135745B (en) | 2011-11-25 | 2011-11-25 | Non-contact control method, information equipment and system based on depth image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103135745B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106598422B (en) * | 2016-10-25 | 2019-12-13 | 深圳奥比中光科技有限公司 | hybrid control method, control system and electronic equipment |
CN111191083B (en) * | 2019-09-23 | 2021-01-01 | 牧今科技 | Method and computing system for object identification |
US10614340B1 (en) | 2019-09-23 | 2020-04-07 | Mujin, Inc. | Method and computing system for object identification |
CN111601129B (en) * | 2020-06-05 | 2022-04-01 | 北京字节跳动网络技术有限公司 | Control method, control device, terminal and storage medium |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101699370A (en) * | 2009-11-10 | 2010-04-28 | 北京思比科微电子技术有限公司 | Depth detection based body identification control device |
US20110289455A1 (en) * | 2010-05-18 | 2011-11-24 | Microsoft Corporation | Gestures And Gesture Recognition For Manipulating A User-Interface |
US8457353B2 (en) * | 2010-05-18 | 2013-06-04 | Microsoft Corporation | Gestures and gesture modifiers for manipulating a user-interface |
-
2011
- 2011-11-25 CN CN201110382053.0A patent/CN103135745B/en not_active Expired - Fee Related
Also Published As
Publication number | Publication date |
---|---|
CN103135745A (en) | 2013-06-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8897490B2 (en) | Vision-based user interface and related method | |
US20210294429A1 (en) | Apparatus, method and recording medium for controlling user interface using input image | |
CN107643828B (en) | Vehicle and method of controlling vehicle | |
CN104956292B (en) | The interaction of multiple perception sensing inputs | |
US10095033B2 (en) | Multimodal interaction with near-to-eye display | |
US6901561B1 (en) | Apparatus and method for using a target based computer vision system for user interaction | |
US20150077329A1 (en) | Eye tracking-based user interface method and apparatus | |
CN102339125A (en) | Information equipment and control method and system thereof | |
KR20130106833A (en) | Use camera to augment input for portable electronic device | |
US20110250929A1 (en) | Cursor control device and apparatus having same | |
CN103135746B (en) | Non-contact control method, system and equipment based on static posture and dynamic posture | |
CN106415472A (en) | Gesture control method, device, terminal apparatus and storage medium | |
CN104935812A (en) | Self-shot mode turn-on control method and device | |
EP2550578A2 (en) | Associated file | |
CN103135745B (en) | Non-contact control method, information equipment and system based on depth image | |
CN103207678A (en) | Electronic equipment and unblocking method thereof | |
CN104636648A (en) | Iris unlocking system and method thereof | |
EP3843368A1 (en) | Terminal controlling method and apparatus, mobile terminal and storage medium | |
CN106791421A (en) | Filming control method and imaging control device | |
WO2019134606A1 (en) | Terminal control method, device, storage medium, and electronic apparatus | |
CN116149477A (en) | Interaction method, interaction device, electronic equipment and storage medium | |
CN106547337A (en) | Using the photographic method of gesture, system and electronic installation | |
CN218214058U (en) | Virtual reality interaction equipment based on gesture recognition | |
KR102369621B1 (en) | Method, apparatus and recovering medium for controlling user interface using a input image | |
KR102289497B1 (en) | Method, apparatus and recovering medium for controlling user interface using a input image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | ||
Granted publication date: 20180102 Termination date: 20181125 |