WO2014169566A1 - Gesture control method, apparatus and system - Google Patents

Gesture control method, apparatus and system

Info

Publication number
WO2014169566A1
WO2014169566A1 (PCT/CN2013/083690)
Authority
WO
WIPO (PCT)
Prior art keywords
gesture
controlled device
control
control center
module
Prior art date
Application number
PCT/CN2013/083690
Other languages
English (en)
French (fr)
Inventor
刘海军
林立东
周云军
黄峥
缪川扬
Original Assignee
中兴通讯股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中兴通讯股份有限公司 filed Critical 中兴通讯股份有限公司
Priority to JP2016506752A priority Critical patent/JP6126738B2/ja
Priority to EP13882158.2A priority patent/EP2988210B1/en
Priority to KR1020157032564A priority patent/KR101969624B1/ko
Priority to US14/784,435 priority patent/US10013067B2/en
Priority to DK13882158.2T priority patent/DK2988210T3/en
Publication of WO2014169566A1 publication Critical patent/WO2014169566A1/zh

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/0304Detection arrangements using opto-electronic means
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/0346Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
    • GPHYSICS
    • G08SIGNALLING
    • G08CTRANSMISSION SYSTEMS FOR MEASURED VALUES, CONTROL OR SIMILAR SIGNALS
    • G08C2201/00Transmission systems of control signals via wireless link
    • G08C2201/30User interface
    • G08C2201/32Remote control based on movements, attitude of remote control device
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/436Interfacing a local distribution network, e.g. communicating with another STB or one or more peripheral devices inside the home
    • H04N21/43615Interfacing a Home Network, e.g. for connecting the client to a plurality of peripherals

Definitions

  • The present invention relates to the field of communications, and in particular to a gesture control method, apparatus, and system.
  • The traditional way of controlling multiple devices is usually to use each device's own remote controller; these remote controllers are generally not interchangeable, and most of them have no network capability, as with traditional televisions and stereos.
  • There are also some network-capable remote controls, for example software supporting an interworking protocol loaded on a device with computing and network capability (such as a cell phone or PAD), which controls another device on the basis of that software.
  • The above device control approaches are not convenient enough. This shows in several ways: people need to pick out the remote controller of the corresponding device from a pile of remote controllers, and must keep switching remote controllers as the device to be controlled changes; or only people familiar with computer operation can operate PADs, cell phones, etc. for device control; or only a single device can be controlled with simple gestures. It can be seen that, in order to control different devices, people often have to learn how to use different control tools, and the operation is too cumbersome. People would rather control devices in a simpler and more natural way. Gesture control arose from this demand. Gesture control is a relatively novel control method: during gesture control, a camera on a device monitors and recognizes gesture actions, and the device is controlled according to the control command corresponding to the recognized gesture.
  • At present, to implement gesture control, the controlled device has to be equipped with a camera configured to perform the visual recognition involved in gesture control.
  • In a practical environment, multiple gesture-capable devices may each have their own camera and gesture recognition software, which consumes redundant resources and easily causes misoperation during gesture recognition;
  • for example, a set-top box may interpret a gesture intended for the television as a manipulation of itself. In addition, many devices have no camera or gesture recognition software installed and cannot implement gesture control at all.
  • the main objective of the embodiments of the present invention is to provide a gesture control method, apparatus, and system to achieve uniformity of gesture control.
  • a gesture control method includes:
  • the gesture control center identifies a gesture for the controlled device and transmits the manipulation information corresponding to the recognized gesture to the controlled device; the controlled device performs a corresponding operation according to the received manipulation information.
  • Before recognizing the gesture, the gesture control center also identifies the controlled devices within its visual range.
  • When the gesture control center identifies the controlled devices within the visual range, at least one of the device identifier, device address and device location of each controlled device is identified and recorded.
  • When the gesture control center recognizes the gesture directed at the controlled device, the manipulation command corresponding to the gesture and the controlled device to which the gesture points are recognized.
  • When sending the manipulation information, the gesture control center sends a manipulation command, or sends gesture motion feature data; and/or,
  • when the manipulation information received by the controlled device is a manipulation command, the operation corresponding to the manipulation command is executed;
  • when the manipulation information received by the controlled device is gesture motion feature data, the received gesture motion feature data is analyzed to obtain the corresponding manipulation command, and the operation corresponding to that command is executed.
  • the controlled device establishes a connection with the gesture control center, and performs control by using a message in a session manner based on the connection.
  • a gesture control device comprising: a video capture module, an identification module, and a control module; wherein the video capture module is configured to capture a gesture for the controlled device;
  • the identification module is configured to identify the gesture
  • the control module is configured to send, to the controlled device, the manipulation information corresponding to the recognized gesture.
  • the apparatus further includes a ranging module configured to identify the controlled device within the visual range in conjunction with the video acquisition module, and calculate a distance between the device and the controlled device.
  • the device further includes a data storage module configured to record at least one of a device identifier, a device address, and a device location of the identified controlled device when the controlled device within the visual range is identified.
  • the identification module includes an image recognition module and a gesture recognition module, where the image recognition module is configured to recognize a manipulation command corresponding to the gesture;
  • the gesture recognition module is configured to identify a controlled device to which the gesture is directed.
  • When recognizing the controlled device to which the gesture points, the gesture recognition module is configured to: perform the calculation using the angle between the gesture and the video capture module; or, after the distances between the gesture, the controlled device and the apparatus have been measured, perform the calculation using trigonometric formulas.
  • the control module is configured to send a manipulation command or send gesture action feature data when transmitting the manipulation information.
  • the device includes a network service module configured to establish a connection with the controlled device, and the control is completed by using a message in a session manner based on the connection.
  • a gesture control system including a gesture control center, a controlled device;
  • the gesture control center is configured to recognize a gesture directed at the controlled device and to send to the controlled device the manipulation information corresponding to the recognized gesture;
  • the controlled device is configured to perform a corresponding operation according to the received manipulation information.
  • the gesture control center is configured to send a manipulation command or send gesture motion feature data when transmitting the manipulation information; and/or,
  • when the manipulation information received by the controlled device is a manipulation command, the controlled device is configured to perform the operation corresponding to the manipulation command; and when the manipulation information received by the controlled device is gesture motion feature data, the controlled device is configured to analyze the received gesture motion feature data to obtain the corresponding manipulation command and perform the operation corresponding to that command.
  • When the controlled device performs an operation according to the received manipulation information, it is configured to establish a connection with the gesture control center, and the manipulation is completed by using messages in a session manner based on the connection.
  • The gesture control technology of the embodiments of the present invention ensures that a single set of gesture recognition equipment, such as the gesture control center, is sufficient to implement gesture control of multiple devices, thereby unifying gesture control and avoiding the misoperation that different devices may produce during gesture recognition; it also avoids the consumption of duplicated resources, provides a convenient control method for devices that do not support gesture recognition manipulation, and saves devices the cost of adding gesture recognition components, all of which can effectively improve user satisfaction.
  • FIG. 1 is a schematic diagram of a gesture control system according to an embodiment of the present invention.
  • FIG. 2 is a schematic diagram of a gesture control system according to another embodiment of the present invention.
  • FIG. 3 is a schematic diagram of a barcode used in implementing gesture control according to an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of a two-dimensional code used in implementing gesture control according to an embodiment of the present invention
  • FIG. 5 is a flowchart of gesture control according to an embodiment of the present invention
  • FIG. 6 is a schematic diagram of the message flow between the gesture control center and a controlled device according to an embodiment of the present invention;
  • FIG. 7 is a simplified schematic diagram of a gesture control process according to an embodiment of the present invention;
  • FIG. 8 is a schematic diagram of the principle of determining the controlled device to which a gesture points according to an embodiment of the present invention.
  • The technical problem to be solved by the embodiments of the present invention is to control controlled devices by using an apparatus capable of controlling them in a unified way.
  • The apparatus capable of controlling the controlled devices in a unified way is a gesture control center, which has gesture recognition capability and network capability, and can recognize a gesture, recognize the controlled device targeted by the gesture, and convert the gesture into a manipulation command or store the gesture motion features;
  • the gesture control center is also capable of interconnecting with the controlled devices and transmitting to a controlled device a message containing the manipulation command or the gesture motion features.
  • The gesture control center can perform an operation that includes the following steps:
  • Step 1: the gesture control center identifies the controlled devices within its visual range;
  • Step 2: the gesture control center recognizes a gesture;
  • Step 3: the gesture control center sends manipulation information to the controlled device to which the gesture points;
  • Step 4: the controlled device performs the corresponding operation according to the received manipulation information.
  • In Step 1, when the gesture control center identifies the controlled devices in the visual range, at least one of the device identifier, device address, device location and similar information of each controlled device may be identified, and this information recorded.
  • In Step 2, when the gesture control center recognizes the gesture, the manipulation command corresponding to the gesture and the controlled device to which the gesture points may be identified, for example by analyzing the motion features of the gesture.
  • In Step 3, the gesture control center may send a manipulation command, or send gesture motion feature data.
  • the sent gesture action feature data can be further analyzed by the controlled device to obtain a corresponding manipulation command.
  • In Step 4, the controlled device can establish a connection with the gesture control center, and based on that connection the manipulation is completed using messages in a session manner.
  • The manipulation command and the gesture motion feature data may be protocol instructions, that is, protocol messages and corresponding parameters specified in a particular application protocol.
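  • As an illustration of what such a protocol instruction can look like, the following minimal Python sketch issues the standard UPnP AVTransport "Play" action over HTTP/SOAP. The device's control URL is an assumed placeholder that would have been learned during discovery; it is not a value taken from this patent.

```python
import urllib.request

SERVICE = "urn:schemas-upnp-org:service:AVTransport:1"

# SOAP body for the standard AVTransport "Play" action.
PLAY_BODY = f"""<?xml version="1.0" encoding="utf-8"?>
<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/"
            s:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
  <s:Body>
    <u:Play xmlns:u="{SERVICE}">
      <InstanceID>0</InstanceID>
      <Speed>1</Speed>
    </u:Play>
  </s:Body>
</s:Envelope>"""

def send_play(control_url: str) -> int:
    """POST the Play action to a device's AVTransport control URL."""
    request = urllib.request.Request(
        control_url,
        data=PLAY_BODY.encode("utf-8"),
        headers={
            "Content-Type": 'text/xml; charset="utf-8"',
            "SOAPACTION": f'"{SERVICE}#Play"',
        },
        method="POST",
    )
    with urllib.request.urlopen(request, timeout=5) as response:
        return response.status  # 200 means the device accepted the command

# Hypothetical control URL learned during discovery:
# send_play("http://192.1.1.1:49152/AVTransport/control")
```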
  • Referring to FIG. 1, there is shown a device scenario in accordance with an embodiment of the present invention, including the devices and their interrelationships. FIG. 1 shows four devices from left to right: a home storage server, a DVD player, a television, and the gesture control center; they are placed so that nothing blocks the line between the gesture control center and any other device, and the light emitted from the gesture control center can reach the devices directly.
  • The above four devices are equipped with a network interface (for example, a network interface supporting IEEE 802.11b/g/n or a network interface supporting IEEE 802.3), so that they can be connected to a communication network such as an IP network.
  • Each device includes a communication module with service capability, configured to discover and connect with other devices, to send and receive messages with other devices, and to process manipulation commands or forward manipulation commands.
  • The above service capability can be implemented using the existing Universal Plug and Play (UPnP) technology, or using the Multicast Domain Name System (mDNS) and DNS-based Service Discovery (DNS-SD) technologies, which can be used in IP networks to respond to queries and provide function calls in unicast and multicast query modes according to a predefined message format.
  • For example, UPnP specifies how media display devices (such as a TV) and servers (such as a DVD player or a home storage server) respond to queries and which callable functions they provide.
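  • As an illustration of this kind of discovery, the following minimal Python sketch sends an SSDP M-SEARCH multicast query (the discovery step of UPnP) and prints the responses; the search target, timeout and printing are illustrative assumptions rather than behavior described in the patent.

```python
import socket

# SSDP (UPnP discovery) multicast address and port.
SSDP_ADDR, SSDP_PORT = "239.255.255.250", 1900

# M-SEARCH request asking all UPnP root devices to announce themselves.
MSEARCH = (
    "M-SEARCH * HTTP/1.1\r\n"
    f"HOST: {SSDP_ADDR}:{SSDP_PORT}\r\n"
    'MAN: "ssdp:discover"\r\n'
    "MX: 2\r\n"
    "ST: upnp:rootdevice\r\n"
    "\r\n"
)

def discover(timeout=3.0):
    """Broadcast an M-SEARCH and collect (address, raw response) pairs."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.settimeout(timeout)
    sock.sendto(MSEARCH.encode("ascii"), (SSDP_ADDR, SSDP_PORT))
    found = []
    try:
        while True:
            data, addr = sock.recvfrom(65507)
            found.append((addr[0], data.decode("ascii", errors="replace")))
    except socket.timeout:
        pass
    finally:
        sock.close()
    return found

if __name__ == "__main__":
    for ip, reply in discover():
        print(ip, reply.splitlines()[0])
```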
  • the gesture control center also includes a video capture module with image and video capture capabilities (such as a camera, with a camera as an example), and a ranging module.
  • the gesture control center also includes an identification module, a data storage module, a control module, a network service module, and the like.
  • The functions that the camera of the gesture control center can perform include: photographing the devices within the camera's visual range and identifying the information on the device labels; and capturing the user's gesture actions and identifying the corresponding operation target, manipulation command, or gesture motion feature data.
  • The ranging module is similar to a handheld laser infrared rangefinder: it uses the propagation and reflection of infrared or other light to calculate the distance between the gesture control center and the controlled device.
  • Typically, a ranging module with an accuracy of about 2 mm can be selected.
  • the control module is capable of transmitting the manipulation information corresponding to the recognized gesture to the controlled device, so that the controlled device can perform a corresponding operation according to the received manipulation information.
  • the recognition module may include an image recognition module and a gesture recognition module to implement image recognition and gesture recognition, respectively.
  • The gesture control center can identify the devices within its visual range; this is implemented by having the camera and the ranging module rotate in three-dimensional space to capture images, and by searching the captured images for predefined label patterns and analyzing them.
  • As shown in FIG. 1, the camera of the gesture control center captures an image by taking a picture; the image contains three controlled devices (a television, a DVD player and a home storage server) on which labels such as barcodes or two-dimensional codes are pasted, printed or embedded.
  • After the image recognition module of the gesture control center analyzes the image, it recognizes that there are several labels in the image, identifies each label and stores the identified label information, then measures the distance to the controlled device where each label is located, and stores the measured distance together with the label information.
  • Similar to FIG. 1, FIG. 2 also contains three controlled devices (a television, a DVD player and a home storage server) together with three cameras that have image and video capture capability and belong to the gesture control center. The gesture control center in FIG. 2 does not have a ranging function.
  • All the devices in FIG. 2 are placed so that there is no occlusion between the gesture control center and any other device, that is, the light from the gesture control center can directly reach the controlled devices.
  • the mutual spatial positional relationship between the above three cameras is determined, that is, the gesture control center records the mutual distance between the three cameras and the mutual angle of the camera direction.
  • the three cameras are not on the same line, and the mutual angles between them cannot be parallel or exceed 90 degrees.
  • the three cameras can communicate with each other, and can exchange captured images, videos, or send their respective captured images and videos to a specified device.
  • Based on the above positional relationship (the mutual distances and angles between the three cameras), once the cameras have captured the relative angle (for example, the angle relative to the horizontal) of a controlled device or a gesture falling within their capture range, the positions of the controlled device and the gesture, as well as the gesture direction, can be calculated by mathematical coordinate transformation and trigonometric formulas.
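  • To make the coordinate-transformation idea concrete, the following sketch triangulates a 2D position from two cameras with a known baseline and measured bearing angles; the camera positions and angles are invented example values, and the real system would extend this to three cameras and three dimensions.

```python
import math

def triangulate_2d(cam1, angle1, cam2, angle2):
    """Intersect two bearing rays (angles measured from the +x axis, in radians).

    cam1, cam2: (x, y) camera positions with a known baseline between them.
    Returns the (x, y) point where the two rays cross, or None if parallel.
    """
    # Ray i: p = cam_i + t_i * (cos(angle_i), sin(angle_i)), with t_i >= 0.
    d1 = (math.cos(angle1), math.sin(angle1))
    d2 = (math.cos(angle2), math.sin(angle2))
    denom = d1[0] * d2[1] - d1[1] * d2[0]   # cross product of the directions
    if abs(denom) < 1e-9:                   # rays are (nearly) parallel
        return None
    dx, dy = cam2[0] - cam1[0], cam2[1] - cam1[1]
    t1 = (dx * d2[1] - dy * d2[0]) / denom
    return (cam1[0] + t1 * d1[0], cam1[1] + t1 * d1[1])

# Hypothetical example: two cameras 2 m apart both sight the same gesture.
print(triangulate_2d((0.0, 0.0), math.radians(60), (2.0, 0.0), math.radians(120)))
# -> roughly (1.0, 1.73): the gesture sits about 1.73 m in front of the baseline.
```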
  • the above barcode is shown in Figure 3.
  • the information on the barcode label is "dvdplayer-192.1.1.1", indicating that the controlled device corresponding to the barcode label is a DVD player, and the network address is 192.1.1.1.
  • the barcode label can be pasted or printed and embedded on a DVD player.
  • the above two-dimensional code is shown in Fig. 4.
  • the information on the two-dimensional code label is "tv-192.1.1.2", indicating that the controlled device corresponding to the two-dimensional code label is a TV, and the network address is 192.1.1.2.
  • the two-dimensional code label can be pasted or printed and embedded on the television.
  • the tag information may contain more content, such as a short name of the controlled device, a custom name, and the like.
  • the text is directly marked on the controlled device, and the gesture control center identifies the controlled device based on the text marked on the controlled device.
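  • As a simple illustration of how the decoded label text could be turned into a stored device record, here is a hedged Python sketch; the "name-address" label format follows the examples above, while the DeviceRecord fields and the parsing helper are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class DeviceRecord:
    """One entry the control center keeps per identified controlled device."""
    device_id: str        # e.g. "dvdplayer" or "tv"
    address: str          # network address read from the label
    distance_mm: int = 0  # filled in later by the ranging module

def parse_label(label_text: str) -> DeviceRecord:
    """Split label text of the form '<device id>-<network address>'."""
    device_id, _, address = label_text.partition("-")
    if not device_id or not address:
        raise ValueError(f"unrecognized label: {label_text!r}")
    return DeviceRecord(device_id=device_id, address=address)

# The label examples from FIG. 3 and FIG. 4:
print(parse_label("dvdplayer-192.1.1.1"))  # DVD player at 192.1.1.1
print(parse_label("tv-192.1.1.2"))         # television at 192.1.1.2
```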
  • In order to recognize controlled devices, gestures and the controlled device to which a gesture points, a three-dimensional mathematical coordinate system can be established in the recognition module of the gesture control center. The coordinate origin can be chosen at the gesture control center, or at a custom point in space, which of course must then be stored in the gesture control center. For the situation shown in FIG. 2, the coordinate origin can also be chosen at one of the three cameras.
  • the identification module in the gesture control center is responsible for measuring, identifying, calculating the position of the controlled device, measuring, identifying, and calculating the controlled device to which the gesture points.
  • the gesture control center identifies the controlled device.
  • At startup, or periodically, the gesture control center collects images within its visual range through the camera, and then recognizes the captured images to check for identifiable controlled devices. If the specific identification method involves the barcode and two-dimensional code shown in FIG. 3 and FIG. 4, it may first determine whether there is a barcode or two-dimensional code in the image and, after the barcode or two-dimensional code region has been determined, identify the information indicated by the barcode or two-dimensional code.
  • the gesture control center confirms with the identified controlled device. After the controlled device is identified, the gesture control center can interact with the controlled device through the network, for example, using the existing UPnP protocol or DNS-SD to find the controlled device to confirm the address, function and the like of the controlled device.
  • the gesture control center is ready to recognize the gesture.
  • the video capture module (such as the camera) of the gesture control center monitors the image in the video area for gesture collection.
  • The gesture action can be recognized by a histogram method or a hidden Markov model method. First, the user's gesture action must fall within the capture range of the camera, so that the camera can generate a gesture video and hand it to the recognition module. The recognition module recognizes the position of the hand from the gesture images of the received gesture video by means of color, contour and structured-light analysis, detects and segments the gesture object, extracts the gesture features, and tracks the gesture motion; it then processes the finger direction and the sequence of motion directions and finally recognizes the gesture action completely, at which point methods such as comparison with a predefined gesture action space can be used to determine the user's gesture action intention.
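  • The segmentation step above could look roughly like the following OpenCV sketch, which isolates a hand-colored region and returns its largest contour; the HSV skin-color thresholds are rough illustrative values, and the rest of the pipeline (feature extraction, HMM classification) is omitted.

```python
import cv2
import numpy as np

# Rough HSV skin-color bounds; real systems tune these or learn them per user.
SKIN_LOW = np.array([0, 40, 60], dtype=np.uint8)
SKIN_HIGH = np.array([25, 255, 255], dtype=np.uint8)

def segment_hand(frame_bgr):
    """Return the largest skin-colored contour in a video frame, or None."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, SKIN_LOW, SKIN_HIGH)
    # Smooth the mask so the contour follows the hand rather than noise.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    return max(contours, key=cv2.contourArea)

# Tracking the contour's centroid over successive frames yields the motion
# sequence that the recognition module would feed to an HMM or template match.
```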
  • the above recognition of the gesture action further includes identifying the controlled device to which the gesture is directed.
  • One method of determining the controlled device to which a gesture points in this embodiment is to perform the calculation using the angle between the gesture and the controlled device. For example, with the user as the coordinate center, each controlled device forms an angle with the user's gesture. When the gesture control center recognizes the gesture, it can identify the angle and the distance between the extension line of the user's gesture and each controlled device.
  • the gesture control center can first recognize the arm and use the elbow as the coordinate origin.
  • Taking a scene without any controlled device as an example: if the palm moves from left to right, the angle varies from 0° to 360°; if the palm moves from top to bottom, the angle varies from 0° to 180°.
  • What the recognition module has to calculate is, in the triangle formed by the three points of the controlled device, the palm and the elbow, the angle between the elbow-controlled device line and the elbow-palm (i.e. arm) line. The smaller this angle between the arm and a given controlled device, the more the direction of the gesture tends towards that controlled device.
  • The specific meaning of the size of the angle is as follows: 0° to 90° indicates that the gesture may be directed at a controlled device the user wishes to manipulate; if the angle is 0°, it can be concluded that the gesture is directed at that particular controlled device;
  • 90° to 180° indicates that what the gesture is directed at is most likely not the controlled device the user wishes to control. After calculating the angle between each controlled device and the gesture, the gesture control center determines the controlled device corresponding to the smallest angle and judges that this device is the controlled device being manipulated by the user.
  • In practical application, after the distances between the gesture, the controlled devices and the gesture control center have been measured, the lengths of the following three lines can also be calculated: in the triangle formed by the three points of the controlled device, the palm and the elbow, the elbow-controlled device line, the elbow-palm (arm) line, and the palm-controlled device line.
  • For the length calculation, the recognition module can measure the lengths of the above three lines directly, or calculate them proportionally after measuring a reference distance, and then use trigonometric formulas to calculate the above angle and make a judgment.
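  • A minimal sketch of that trigonometric judgment follows, assuming the three side lengths of each elbow-palm-device triangle are already known; the device names and distances are invented example values, not measurements from the patent.

```python
import math

def pointing_angle(elbow_palm, elbow_device, palm_device):
    """Angle (degrees) at the elbow between the arm and the elbow-device line.

    Uses the law of cosines on the triangle formed by elbow, palm and device:
    cos(theta) = (a^2 + b^2 - c^2) / (2ab), with the palm-device side opposite.
    """
    a, b, c = elbow_palm, elbow_device, palm_device
    cos_theta = (a * a + b * b - c * c) / (2 * a * b)
    cos_theta = max(-1.0, min(1.0, cos_theta))   # guard against rounding error
    return math.degrees(math.acos(cos_theta))

def pick_target(side_lengths):
    """side_lengths: {device: (elbow_palm, elbow_device, palm_device)} in metres.

    Returns the device with the smallest arm/device angle, i.e. the one the
    gesture is judged to point at (angles above 90 degrees are unlikely targets).
    """
    angles = {dev: pointing_angle(*sides) for dev, sides in side_lengths.items()}
    return min(angles, key=angles.get), angles

# Hypothetical measurements for the FIG. 8 scenario:
target, angles = pick_target({
    "dvdplayer-192.1.1.1": (0.35, 2.0, 1.66),  # small angle: arm points roughly at it
    "tv-192.1.1.2":        (0.35, 2.2, 2.35),  # angle above 90 degrees: pointing away
})
print(target, angles)
```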
  • Figure 8 shows how to determine the controlled device to which the gesture points:
  • the palm-arm line points in one direction, while the palm-arm line forms an angle with the elbow-device line, as shown in the figure.
  • the angle between the elbow-DVD player line and the palm-arm line is indicated, and the angle between the elbow-television line and the palm-arm line is also indicated.
  • the gesture control center compares the two angles to determine that the user wants to control the DVD player by gesture.
  • the aforementioned triangles (two triangles in total) are also illustrated in Fig. 8.
  • As another way of calculating the angles, trigonometric formulas can be applied to the two triangles separately.
  • the method shown in Figure 8 can be applied to gesture control for more controlled devices.
  • the center point of the measurement object (such as the palm, arm, and controlled device), which can be determined by mathematical operations.
  • After the gesture control center recognizes the manipulation command corresponding to the gesture action, a manipulation command message is sent to the corresponding controlled device.
  • the corresponding relationship between the gesture action and the instruction message is defined and stored in the gesture control center.
  • After the gesture control center has determined, through recognition, the manipulation command and the controlled device corresponding to that command, the manipulation command can be sent to the controlled device through the network connection with the controlled device.
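  • The mapping and sending steps might look like the following sketch on the control-center side; the gesture names, command strings, port number and JSON message shape are all assumptions for illustration, since the patent only requires that some session-based message carry the command.

```python
import json
import socket

# Illustrative correspondence between recognized gesture actions and commands.
GESTURE_TO_COMMAND = {
    "swipe_right": "next_channel",
    "swipe_left": "previous_channel",
    "palm_up": "volume_up",
    "fist": "power_off",
}

CONTROL_PORT = 5700  # assumed port on which controlled devices listen

def send_command(device_address: str, gesture: str) -> None:
    """Map a recognized gesture to a command and send it to the target device."""
    command = GESTURE_TO_COMMAND[gesture]
    message = json.dumps({"type": "command", "name": command}) + "\n"
    with socket.create_connection((device_address, CONTROL_PORT), timeout=2.0) as conn:
        conn.sendall(message.encode("utf-8"))

# e.g. the gesture was judged to point at the DVD player identified earlier:
# send_command("192.1.1.1", "swipe_right")
```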
  • The manipulation command can be a general-purpose instruction, such as playing media or shutting down, a device-specific instruction, such as changing the channel or increasing the volume, or a protocol instruction, that is, an instruction specified in an application protocol, such as media content sharing in protocols like UPnP.
  • the controlled device that receives the manipulation command message executes the manipulation command.
  • The controlled device receives the manipulation command over the network connection and executes it. As described in step 4, according to the specific content of the instruction, the controlled device executes an internal program, or communicates and cooperates with other devices to complete the manipulation command.
  • If what the instruction message contains is the feature data of the gesture action, the controlled device analyzes and computes the feature data, obtains the corresponding instruction, and executes it.
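  • On the controlled-device side, a matching sketch could dispatch the received message either directly as a command or by first classifying raw feature data; again, the port, message format and handler names are illustrative assumptions.

```python
import json
import socketserver

def classify_features(features):
    """Placeholder for the device-side analysis of raw gesture feature data."""
    # A real device would match the features against its own gesture templates.
    return "next_channel" if features.get("direction") == "right" else "unknown"

class CommandHandler(socketserver.StreamRequestHandler):
    def handle(self):
        message = json.loads(self.rfile.readline().decode("utf-8"))
        if message["type"] == "command":
            command = message["name"]                         # already a command
        else:
            command = classify_features(message["features"])  # raw feature data
        print("executing:", command)                          # stand-in for the real action

if __name__ == "__main__":
    with socketserver.TCPServer(("", 5700), CommandHandler) as server:
        server.serve_forever()
```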
  • In general, in the message flow between the gesture control center and the controlled device shown in FIG. 6, the operation of the gesture control center is divided into two major steps: completing the visual discovery of the controlled devices, and sending the manipulation command.
  • Specifically, first the gesture control center scans the controlled devices within its visual range and identifies and stores the labels of the scanned controlled devices. The network is not needed at this point.
  • the gesture control center can interact with the controlled device through the network, for example, using the existing UPnP protocol or DNS-SD to find the controlled device to confirm the address, function, and the like of the controlled device.
  • Next, when the gesture control center recognizes a gesture action, the gesture action is analyzed, the action object and the action intention are determined, and the action intention is mapped to a manipulation command or to gesture motion feature data.
  • The gesture control center then sends the manipulation command to the controlled device to which the gesture points. This requires a network connection between the gesture control center and that controlled device, and the network message can be sent by using protocol packets of protocols such as UPnP.
  • If the gesture is a predefined cooperative gesture, for example consecutive gesture operations on the television and the DVD player, the gesture control center recognizes each gesture action separately and transmits the corresponding manipulation command to the corresponding controlled device.
  • the controlled device that receives the manipulation command performs the operation corresponding to the manipulation command.
  • the manipulation command may be a general-purpose instruction, such as playing media, shutting down, etc., or may be a device-specific instruction, such as changing a channel, increasing a volume, etc., or may be a protocol instruction, that is, an instruction specified in an application protocol. For example, media content sharing in protocols such as UPnP, DVD players playing content on television, and the like.
  • the controlled device is not limited to the above-mentioned TV, player, storage server, etc., and may be a computer, an audio, a speaker, a projector, a set top box, etc., or even an automobile, a machine tool, a ship, or the like in the industrial field.
  • the controlled device may be a device equipped with a camera in the prior art, capable of independently implementing gesture control based on visual recognition, or other device without a camera installed.
  • the camera of the gesture control center can be of various specifications, for example, it can be a fixed focal length or a variable focal length, and the rotating space can be up, down, left, and right, or only support left and right angles.
  • The ranging module can use laser infrared ranging, or light of other wavebands. The aforementioned three-camera ranging can be used, or ranging with more cameras (for example, using methods such as weighted adjustment).
  • The above communication module may be a transceiver; the ranging module may be a rangefinder; the recognition module may be a single-chip microcomputer, a processor or the like; the image recognition module may be a single-chip microcomputer, processor or the like capable of processing images; the gesture recognition module may be a single-chip microcomputer, processor or the like capable of processing gestures; the data storage module may be a memory; the control module may be an integrated circuit capable of data processing and control, such as a CPU; and the network service module may be a network server.
  • The gesture control technology of the embodiments of the present invention can be represented by the flow shown in FIG. 7, which includes the following steps:
  • Step 710: the gesture control center recognizes the gesture directed at a controlled device and the controlled device to which the gesture points;
  • Step 720: the gesture control center sends the manipulation information to the controlled device to which the gesture points, and the controlled device performs the corresponding operation according to the received manipulation information.
  • The gesture control technology of the embodiments of the present invention ensures that a single set of gesture recognition equipment, such as the gesture control center, is sufficient to implement gesture control of multiple devices, thereby unifying gesture control and avoiding the misoperation that different devices may produce during gesture recognition; it also avoids the consumption of duplicated resources, provides a convenient control method for devices that do not support gesture recognition manipulation, and saves devices the cost of adding gesture recognition components, which can effectively improve user satisfaction.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Multimedia (AREA)
  • User Interface Of Digital Computer (AREA)
  • Selective Calling Equipment (AREA)
  • Details Of Television Systems (AREA)
  • Image Analysis (AREA)

Abstract

The present invention discloses a gesture control method, apparatus and system. During gesture control, a gesture control center recognizes a gesture directed at a controlled device and sends to the controlled device the manipulation information corresponding to the recognized gesture; the controlled device performs a corresponding operation according to the received manipulation information. With the gesture control technology of the present invention, a single set of gesture recognition equipment such as the gesture control center is sufficient to perform gesture manipulation on multiple devices; this unifies gesture control, avoids the misoperation that different devices might produce during gesture recognition, and avoids the consumption of duplicated resources. It also provides a convenient manipulation method for devices that do not support gesture recognition, and saves devices the cost of adding gesture recognition components, all of which can effectively improve user satisfaction.

Description

Gesture control method, apparatus and system

Technical Field

The present invention relates to the field of communications, and in particular to a gesture control method, apparatus and system.

Background

The development of digital multimedia and networks has enriched the entertainment experience in people's daily lives. People can control playback on a device by means of a remote controller, gestures and the like, for example switching to the previous or next channel.

The traditional way of controlling multiple devices is usually to use each device's own remote controller. These remote controllers are generally not interchangeable, and most of them have no network capability, as with traditional televisions and stereos. There are also some network-capable remote controls, for example software supporting an interworking protocol loaded on a device with computing and network capability (such as a mobile phone or PAD), which controls another device on the basis of that software.

The above device control approaches are not convenient enough. This shows in several ways: people need to pick out the remote controller of the corresponding device from a pile of remote controllers, and must keep switching remote controllers as the device to be controlled changes; or only people familiar with computer operation can operate a PAD, mobile phone or the like for device control; or only a single device can be controlled with simple gestures. It can be seen that, in order to control different devices, people often have to learn how to use different control tools, and the operation is too cumbersome. People would rather control devices in a simpler and more natural way. Gesture control arose from this demand. Gesture control is a relatively novel control method: during gesture control, a camera on a device monitors and recognizes gesture actions, and the device is controlled according to the control command corresponding to the recognized gesture action.

At present, in order to implement gesture control, a controlled device has to be equipped with a camera configured to perform the visual recognition involved in gesture control. In a practical application environment there may then be multiple gesture-capable devices, each with its own camera and gesture recognition software, which both consumes redundant resources and easily causes misoperation during gesture recognition; for example, a set-top box may interpret a gesture intended for the television as a manipulation of itself. In addition, many devices have no camera or gesture recognition software installed, and such devices cannot implement gesture control.

Summary

In view of this, the main objective of the embodiments of the present invention is to provide a gesture control method, apparatus and system, so as to unify gesture control.

To achieve the above objective, the technical solution of the embodiments of the present invention is implemented as follows:

A gesture control method includes:

a gesture control center recognizing a gesture directed at a controlled device, and sending to the controlled device the manipulation information corresponding to the recognized gesture; and the controlled device performing a corresponding operation according to the received manipulation information.
Before recognizing the gesture, the gesture control center may further identify the controlled devices within its visual range.

When identifying the controlled devices within the visual range, the gesture control center identifies and records at least one of the device identifier, device address and device location of each controlled device.

When recognizing the gesture directed at the controlled device, the gesture control center recognizes the manipulation command corresponding to the gesture and the controlled device to which the gesture points.

When recognizing the controlled device to which the gesture points, the gesture control center performs the calculation using the angle between the gesture and the video capture module of the gesture control center; or, after the distances between the gesture, the controlled device and the gesture control center have been measured, performs the calculation using trigonometric formulas.

When sending the manipulation information, the gesture control center sends a manipulation command, or sends gesture motion feature data; and/or,

when the manipulation information received by the controlled device is a manipulation command, the controlled device performs the operation corresponding to the manipulation command; when the manipulation information received by the controlled device is gesture motion feature data, the controlled device analyzes the received gesture motion feature data to obtain the corresponding manipulation command and performs the operation corresponding to that command.

The controlled device establishes a connection with the gesture control center, and the manipulation is completed using messages in a session manner based on that connection.
A gesture control apparatus includes a video capture module, a recognition module and a control module, wherein the video capture module is configured to capture a gesture directed at a controlled device;

the recognition module is configured to recognize the gesture;

the control module is configured to send to the controlled device the manipulation information corresponding to the recognized gesture. The apparatus further includes a ranging module configured to identify, together with the video capture module, the controlled devices within the visual range, and to calculate the distance between the apparatus and a controlled device.

The apparatus further includes a data storage module configured to record, when the controlled devices within the visual range are identified, at least one of the device identifier, device address and device location of each identified controlled device.

The recognition module includes an image recognition module and a gesture recognition module, wherein the image recognition module is configured to recognize the manipulation command corresponding to a gesture;

the gesture recognition module is configured to recognize the controlled device to which the gesture points.

When recognizing the controlled device to which the gesture points, the gesture recognition module is configured to: perform the calculation using the angle between the gesture and the video capture module; or,

after the distances between the gesture, the controlled device and the apparatus have been measured, perform the calculation using trigonometric formulas.

When sending the manipulation information, the control module is configured to send a manipulation command, or to send gesture motion feature data.

The apparatus includes a network service module configured to establish a connection with the controlled device; based on that connection, the manipulation is completed using messages in a session manner.
A gesture control system includes a gesture control center and a controlled device, wherein

the gesture control center is configured to recognize a gesture directed at the controlled device and send to the controlled device the manipulation information corresponding to the recognized gesture;

the controlled device is configured to perform a corresponding operation according to the received manipulation information.

When sending the manipulation information, the gesture control center is configured to send a manipulation command, or to send gesture motion feature data; and/or,

when the manipulation information received by the controlled device is a manipulation command, the controlled device is configured to perform the operation corresponding to the manipulation command; when the manipulation information received by the controlled device is gesture motion feature data, the controlled device is configured to analyze the received gesture motion feature data to obtain the corresponding manipulation command and perform the operation corresponding to that command.

When performing an operation according to the received manipulation information, the controlled device is configured to establish a connection with the gesture control center, and based on that connection the manipulation is completed using messages in a session manner.

With the gesture control technology of the embodiments of the present invention, a single set of gesture recognition equipment such as the gesture control center is sufficient to perform gesture manipulation on multiple devices; this unifies gesture control, avoids the misoperation that different devices might produce during gesture recognition, and avoids the consumption of duplicated resources. It also provides a convenient manipulation method for devices that do not support gesture recognition, and saves devices the cost of adding gesture recognition components, all of which can effectively improve user satisfaction.

Brief Description of the Drawings
FIG. 1 is a schematic diagram of a gesture control system according to an embodiment of the present invention;

FIG. 2 is a schematic diagram of a gesture control system according to another embodiment of the present invention;

FIG. 3 is a schematic diagram of a barcode used when implementing gesture control according to an embodiment of the present invention;

FIG. 4 is a schematic diagram of a two-dimensional code used when implementing gesture control according to an embodiment of the present invention;

FIG. 5 is a flowchart of gesture control according to an embodiment of the present invention;

FIG. 6 is a schematic diagram of the message flow between the gesture control center and a controlled device according to an embodiment of the present invention;

FIG. 7 is a simplified flowchart of gesture control according to an embodiment of the present invention;

FIG. 8 is a schematic diagram of the principle of determining the controlled device to which a gesture points according to an embodiment of the present invention.

Detailed Description
The technical problem to be solved by the embodiments of the present invention is to control controlled devices by means of an apparatus capable of controlling them in a unified way.

The above apparatus capable of controlling controlled devices in a unified way is a gesture control center, which has gesture recognition capability and network capability, can recognize a gesture and the controlled device at which the gesture is directed, and can convert the gesture into a manipulation command or store the gesture motion features; the gesture control center can also connect to the controlled devices and send them messages containing the manipulation command or the gesture motion features.

The gesture control center may perform an operation including the following steps:

Step 1: the gesture control center identifies the controlled devices within its visual range;

Step 2: the gesture control center recognizes a gesture;

Step 3: the gesture control center sends manipulation information to the controlled device to which the gesture points;

Step 4: the controlled device performs a corresponding operation according to the received manipulation information.

Further, in Step 1, when the gesture control center identifies the controlled devices within the visual range, it may identify at least one of the device identifier, device address, device location and similar information of each controlled device, and record this information.

Further, in Step 2, when the gesture control center recognizes the gesture, it may recognize the manipulation command corresponding to the gesture and the controlled device to which the gesture points, for example by analyzing the motion features of the gesture.

Further, in Step 3, the gesture control center may send a manipulation command, or send gesture motion feature data. The gesture motion feature data sent may be further analyzed by the controlled device to obtain the corresponding manipulation command.

Further, in Step 4, the controlled device may establish a connection with the gesture control center, and based on that connection the manipulation is completed using messages in a session manner.

Further, the manipulation command and the gesture motion feature data may be protocol instructions, that is, protocol messages and corresponding parameters specified in a particular application protocol.

The present invention is described in detail below with reference to the accompanying drawings and embodiments.
Referring to FIG. 1, FIG. 1 shows a device scenario according to an embodiment of the present invention, including the devices and their interrelationships.

Four devices are shown from left to right in FIG. 1: a home storage server, a DVD player, a television and the gesture control center. All the devices are placed such that nothing blocks the line between the gesture control center and any other device; in this case, the light emitted from the gesture control center can reach the devices directly without being blocked. Of course, the light emitted from the gesture control center is not limited here to a single angle.

All four devices are provided with a network interface (for example a network interface supporting IEEE 802.11b/g/n, or a network interface supporting IEEE 802.3), so that they can be connected to a communication network such as an IP network. Each device contains a communication module with service capability, configured to discover and connect to the other devices, to exchange messages with them, and to process or forward manipulation commands. The above service capability can be implemented using the existing Universal Plug and Play (UPnP) technology, or using the Multicast Domain Name System (mDNS) and DNS-based Service Discovery (DNS-SD) technologies; it can be used in IP networks to respond to queries and provide function calls in unicast and multicast query modes according to predefined message formats. For example, UPnP specifies how media display devices (such as a TV) and servers (such as a DVD player or a home storage server) respond to queries and which callable functions they provide.

The gesture control center also contains a video capture module with image and video acquisition capability (such as a camera; a camera is taken as the example below) and a ranging module. The gesture control center further includes a recognition module, a data storage module, a control module, a network service module and so on. The functions that the camera of the gesture control center can perform include: photographing the devices within the camera's visual range and identifying the information on the device labels; and capturing the user's gesture actions and identifying the corresponding operation target, manipulation command or gesture motion feature data. The ranging module is similar to a handheld laser infrared rangefinder, and uses the propagation and reflection of infrared or other light to calculate the distance between the gesture control center and a controlled device. Typically, a ranging module with an accuracy of about 2 mm can be selected. The control module can send to the controlled device the manipulation information corresponding to the recognized gesture, so that the controlled device can perform the corresponding operation according to the received manipulation information. The recognition module may include an image recognition module and a gesture recognition module, implementing image recognition and gesture recognition respectively.

The gesture control center can identify the devices within its visual range; this is implemented by having the camera and the ranging module rotate in three-dimensional space to capture images, and by searching the captured images for predefined label graphics and analyzing them. As shown in FIG. 1, the camera of the gesture control center captures an image by taking a photograph; the image contains three controlled devices, a television, a DVD player and a home storage server, on which labels such as barcodes or two-dimensional codes are pasted, printed or embedded. After the image recognition module of the gesture control center analyzes the image, it recognizes that there are several labels in the image, identifies each label and stores the identified label information, then measures the distance to the controlled device where each label is located, and stores the measured distance together with the label information.
Similar to FIG. 1, FIG. 2 also contains three controlled devices: a television, a DVD player and a home storage server; in addition it contains three cameras with image and video acquisition capability, and these three cameras belong to the gesture control center. The gesture control center in FIG. 2 does not have a ranging function. All the devices in FIG. 2 are placed such that nothing blocks the line between the gesture control center and any other device, i.e. the light emitted from the gesture control center can reach the controlled devices directly.

The spatial positions of the above three cameras relative to one another are determined, i.e. the gesture control center records the mutual distances between the three cameras and the mutual angles of the camera directions. Normally, the three cameras are not located on the same straight line, and the angles between them must neither be parallel nor exceed 90 degrees. The three cameras can communicate with one another and can exchange the images and video they capture, or send the images and video they each capture to a designated device.

Based on the above positional relationship (the mutual distances and angles between the three cameras), once the cameras have captured the relative angle (for example, the angle relative to the horizontal) of a controlled device or a gesture falling within the capture range of the three cameras, the positions of the controlled device and the gesture, as well as the gesture direction, can be calculated through mathematical coordinate transformation and trigonometric formulas.

The above barcode is shown in FIG. 3. The information on the barcode label is "dvdplayer-192.1.1.1", indicating that the controlled device corresponding to this barcode label is a DVD player whose network address is 192.1.1.1. The barcode label can be pasted, printed or embedded on the DVD player.

The above two-dimensional code is shown in FIG. 4. The information on the two-dimensional code label is "tv-192.1.1.2", indicating that the controlled device corresponding to this two-dimensional code label is a television whose network address is 192.1.1.2. The two-dimensional code label can be pasted, printed or embedded on the television.

Besides the above barcode and two-dimensional code, the label information may contain more content, for example an abbreviated name or a custom name of the controlled device. For instance, text may be marked directly on the controlled device, and the gesture control center identifies the controlled device according to the text marked on it.

In order to recognize the controlled devices, gestures and the controlled device to which a gesture points, a three-dimensional mathematical coordinate system can be established in the recognition module of the gesture control center. The coordinate origin may be chosen at the gesture control center, or at a custom point in space, which of course must then be stored in the gesture control center. For the situation shown in FIG. 2, the coordinate origin may also be chosen at one of the three cameras. The recognition module in the gesture control center is responsible for measuring, identifying and calculating the positions of the controlled devices, and for measuring, identifying and calculating the controlled device to which a gesture points.
The workflow of the gesture control center is described below with reference to FIG. 1 and FIG. 5.

1. The gesture control center identifies the controlled devices.

At startup, or periodically, the gesture control center captures images within its visual range through the camera, and then recognizes the captured images to check whether there are identifiable controlled devices. If the specific identification method involves the barcode and two-dimensional code shown in FIG. 3 and FIG. 4, it may first determine whether there is a barcode or two-dimensional code in the image and, after the barcode or two-dimensional code region has been determined, identify the information indicated by the barcode or two-dimensional code.

2. The gesture control center confirms with the identified controlled devices. After a controlled device is identified, the gesture control center can interact with it over the network, for example using the existing UPnP protocol or DNS-SD to look up the controlled device, so as to confirm information such as the address and functions of the controlled device.

3. The gesture control center prepares to recognize gestures.

The video capture module of the gesture control center (such as the camera) monitors the images within the video area and collects gestures.

Gesture actions may be recognized using a histogram method or a hidden Markov model method. First, the user's gesture action must fall within the capture range of the camera, so that the camera can generate a gesture video and hand it to the recognition module. The recognition module recognizes the position of the hand from the gesture images of the received gesture video by means of color, contour and structured-light analysis, detects and segments the gesture object, extracts the gesture features, and tracks the gesture motion; it then processes the finger direction and the sequence of motion directions, and finally recognizes the gesture action completely. At this point, methods such as comparison with a predefined gesture action space can be used to determine the user's gesture intention.

The above gesture action recognition also includes recognizing the controlled device to which the gesture points.
One method of determining the controlled device to which a gesture points in this embodiment is to perform the calculation using the angle between the gesture and the controlled device. For example, with the user as the coordinate center, there is an angle between each controlled device and the user's gesture. When recognizing the gesture, the gesture control center can determine the angle and the distance between the extension line of the user's gesture and each controlled device.

Specifically, the gesture control center may first recognize the arm and take the elbow as the coordinate origin. Taking a scene without any controlled device as an example: if the palm moves from left to right, the angle varies from 0° to 360°; if the palm moves from top to bottom, the angle varies from 0° to 180°. What the recognition module has to calculate here is, in the triangle formed by the three points of the controlled device, the palm and the elbow, the angle between the elbow-controlled device line and the elbow-palm (i.e. arm) line. The smaller this angle between the arm and a given controlled device, the more the direction of the gesture tends towards that controlled device. The specific meaning of the size of the angle is as follows: 0° to 90°: the gesture may point to a controlled device that the user wishes to manipulate; if the angle is 0°, it can be concluded that the gesture points to that particular controlled device;

90° to 180°: what the gesture points to is most likely not the controlled device the user wishes to manipulate. After calculating the angle between each controlled device and the gesture, the gesture control center determines the controlled device corresponding to the smallest angle, and judges that this device is the controlled device being manipulated by the user.

In practical application, after the distances between the gesture, the controlled devices and the gesture control center have been measured, the lengths of the following three lines can also be calculated: in the triangle formed by the three points of the controlled device, the palm and the elbow, the elbow-controlled device line, the elbow-palm (arm) line, and the palm-controlled device line. For the length calculation, the recognition module can measure the lengths of the above three lines directly, or calculate them proportionally after measuring a reference distance, and then use trigonometric formulas to calculate the above angle and make a judgment.

FIG. 8 illustrates how the controlled device to which the gesture points is determined: the palm-arm line points in one direction, and at the same time the palm-arm line forms an angle with each elbow-device line. FIG. 8 indicates the angle between the elbow-DVD player line and the palm-arm line, and also the angle between the elbow-television line and the palm-arm line. By comparing these two angles, the gesture control center determines that the user wants to control the DVD player by gesture. FIG. 8 also shows the aforementioned triangles (two triangles in total); as another way of calculating the angles, trigonometric formulas can be applied to these two triangles respectively. The method shown in FIG. 8 can be applied to gesture control for more controlled devices.

In practical application, further issues such as choosing the center point of a measured object (such as the palm, the arm or a controlled device) may also be involved; these can all be determined through mathematical operations.
4. After recognizing the manipulation command corresponding to the gesture action, the gesture control center sends a manipulation command message to the corresponding controlled device.

The correspondence between gesture actions and instruction messages is defined and stored in the gesture control center. After the gesture control center has determined, through recognition, the manipulation command and the controlled device corresponding to that command, it can send the manipulation command to the controlled device through the network connection with the controlled device.

The manipulation command may be a general-purpose instruction, such as playing media or powering off, a device-specific instruction, such as changing the channel or increasing the volume, or a protocol instruction, i.e. an instruction specified in a particular application protocol, for example media content sharing in protocols such as UPnP.

5. The controlled device that receives the manipulation command message executes the manipulation command.

The controlled device receives the manipulation command over the network connection and executes it. As described in step 4, according to the specific content of the instruction, the controlled device executes an internal program, or communicates and cooperates with other devices to complete the manipulation command.

If what the instruction message contains is the feature data of the gesture action, the controlled device analyzes and computes the feature data, obtains the corresponding instruction and executes it.
In general, in the message flow of interaction between the gesture control center and the controlled device shown in FIG. 6, the operation of the gesture control center is divided into two major steps: completing the visual discovery of the controlled devices, and sending the manipulation command.

Specifically, first the gesture control center scans the controlled devices within its visual range and identifies and stores the labels of the scanned controlled devices. The network is not needed at this point.

After a controlled device is identified, the gesture control center can interact with it over the network, for example using the existing UPnP protocol or DNS-SD to look up the controlled device, so as to confirm information such as the address and functions of the controlled device.

Next, when the gesture control center recognizes a gesture action, it analyzes the gesture action, determines the action object and the action intention, and maps the action intention to a manipulation command or to gesture motion feature data.

The gesture control center then sends the manipulation command to the controlled device to which the gesture points. This requires a network connection between the gesture control center and the controlled device to which the gesture points, and the network message can be sent using protocol packets of protocols such as UPnP.

If the above gesture is a predefined cooperative gesture, for example consecutive gesture operations on the television and the DVD player, the gesture control center recognizes each gesture action separately and sends the corresponding manipulation commands to the corresponding controlled devices.

Finally, the controlled device that receives a manipulation command performs the operation corresponding to the manipulation command.

The manipulation command may be a general-purpose instruction, such as playing media or powering off, a device-specific instruction, such as changing the channel or increasing the volume, or a protocol instruction, i.e. an instruction specified in a particular application protocol, for example media content sharing in protocols such as UPnP, the DVD player playing content on the television, and so on.
It should be noted that the controlled devices are not limited to the above television, player, storage server and the like; they may also be computers, audio systems, loudspeakers, projectors, set-top boxes and so on, or even cars, machine tools, ships and the like in the industrial field. Moreover, a controlled device may be a prior-art device equipped with a camera that can independently implement gesture control based on visual recognition, or another device with no camera installed.

In addition, the camera of the gesture control center may be of various specifications; for example, it may have a fixed or variable focal length, and its rotation space may cover up, down, left and right, or only support left-right angles. The ranging module may use laser infrared ranging, or ranging with light of other wavebands. The aforementioned three-camera ranging may be used, or ranging with more cameras (for example, using methods such as weighted adjustment).

Furthermore, the above communication module may be a transceiver; the ranging module may be a rangefinder; the recognition module may be a single-chip microcomputer, a processor or the like; the image recognition module may be a single-chip microcomputer, processor or the like capable of processing images; the gesture recognition module may be a single-chip microcomputer, processor or the like capable of processing gestures; the data storage module may be a memory; the control module may be an integrated circuit capable of data processing and control, such as a CPU; and the network service module may be a network server.

As can be seen from the above description, the gesture control technology of the embodiments of the present invention can be represented by the flow shown in FIG. 7, which includes the following steps:

Step 710: the gesture control center recognizes the gesture directed at a controlled device and the controlled device to which the gesture points;

Step 720: the gesture control center sends manipulation information to the controlled device to which the gesture points, and the controlled device performs the corresponding operation according to the received manipulation information.

In summary, with the gesture control technology of the embodiments of the present invention, a single set of gesture recognition equipment such as the gesture control center is sufficient to perform gesture manipulation on multiple devices; this unifies gesture control, avoids the misoperation that different devices might produce during gesture recognition, and avoids the consumption of duplicated resources. It also provides a convenient manipulation method for devices that do not support gesture recognition, and saves devices the cost of adding gesture recognition components, all of which can effectively improve user satisfaction.

The above are only preferred embodiments of the present invention and are not intended to limit the protection scope of the present invention.

Claims

Claims

1. A gesture control method, comprising:

a gesture control center recognizing a gesture directed at a controlled device, and sending to the controlled device manipulation information corresponding to the recognized gesture; and the controlled device performing a corresponding operation according to the received manipulation information.

2. The method according to claim 1, wherein, before recognizing the gesture, the gesture control center further identifies controlled devices within a visual range.

3. The method according to claim 2, wherein, when identifying the controlled devices within the visual range, the gesture control center identifies and records at least one of a device identifier, a device address and a device location of each controlled device.

4. The method according to any one of claims 1 to 3, wherein, when recognizing the gesture directed at the controlled device, the gesture control center recognizes a manipulation command corresponding to the gesture and the controlled device to which the gesture points.

5. The method according to claim 4, wherein, when recognizing the controlled device to which the gesture points, the gesture control center

performs a calculation using an angle between the gesture and a video capture module of the gesture control center; or, after distances between the gesture, the controlled device and the gesture control center are measured, performs the calculation using trigonometric formulas.

6. The method according to claim 4, wherein,

when sending the manipulation information, the gesture control center sends a manipulation command, or sends gesture motion feature data; and/or,

when the manipulation information received by the controlled device is a manipulation command, the controlled device performs an operation corresponding to the manipulation command; when the manipulation information received by the controlled device is gesture motion feature data, the controlled device analyzes the received gesture motion feature data to obtain a corresponding manipulation command, and performs the operation corresponding to the manipulation command.

7. The method according to claim 1, wherein the controlled device establishes a connection with the gesture control center, and the manipulation is completed using messages in a session manner based on the connection.
8. A gesture control apparatus, comprising a video capture module, a recognition module and a control module, wherein:

the video capture module is configured to capture a gesture directed at a controlled device;

the recognition module is configured to recognize the gesture;

the control module is configured to send to the controlled device manipulation information corresponding to the recognized gesture.

9. The apparatus according to claim 8, further comprising a ranging module configured to identify, together with the video capture module, controlled devices within a visual range, and to calculate a distance between the apparatus and a controlled device.

10. The apparatus according to claim 9, further comprising a data storage module configured to record, when the controlled devices within the visual range are identified, at least one of a device identifier, a device address and a device location of each identified controlled device.

11. The apparatus according to any one of claims 8 to 10, wherein the recognition module comprises an image recognition module and a gesture recognition module, wherein:

the image recognition module is configured to recognize a manipulation command corresponding to the gesture;

the gesture recognition module is configured to recognize the controlled device to which the gesture points.

12. The apparatus according to claim 11, wherein, when recognizing the controlled device to which the gesture points, the gesture recognition module is configured to:

perform a calculation using an angle between the gesture and the video capture module; or,

after distances between the gesture, the controlled device and the apparatus are measured, perform the calculation using trigonometric formulas.

13. The apparatus according to claim 11, wherein the control module is configured, when sending the manipulation information, to send a manipulation command or to send gesture motion feature data.

14. The apparatus according to claim 8, wherein the apparatus comprises a network service module configured to establish a connection with the controlled device, and the manipulation is completed using messages in a session manner based on the connection.
15. A gesture control system, comprising a gesture control center and a controlled device, wherein: the gesture control center is configured to recognize a gesture directed at the controlled device and send to the controlled device manipulation information corresponding to the recognized gesture;

the controlled device is configured to perform a corresponding operation according to the received manipulation information.

16. The system according to claim 15, wherein:

the gesture control center is configured, when sending the manipulation information, to send a manipulation command or to send gesture motion feature data; and/or,

when the manipulation information received by the controlled device is a manipulation command, the controlled device is configured to perform an operation corresponding to the manipulation command; when the manipulation information received by the controlled device is gesture motion feature data, the controlled device is configured to analyze the received gesture motion feature data to obtain a corresponding manipulation command and perform the operation corresponding to the manipulation command.

17. The system according to claim 15 or 16, wherein, when performing an operation according to the received manipulation information, the controlled device is configured to establish a connection with the gesture control center, and the manipulation is completed using messages in a session manner based on the connection.
PCT/CN2013/083690 2013-04-15 2013-09-17 一种手势控制方法、装置和系统 WO2014169566A1 (zh)

Priority Applications (5)

Application Number Priority Date Filing Date Title
JP2016506752A JP6126738B2 (ja) 2013-04-15 2013-09-17 手振り制御方法、装置およびシステム
EP13882158.2A EP2988210B1 (en) 2013-04-15 2013-09-17 Gesture control method, apparatus and system
KR1020157032564A KR101969624B1 (ko) 2013-04-15 2013-09-17 제스처 제어 방법, 장치 및 시스템
US14/784,435 US10013067B2 (en) 2013-04-15 2013-09-17 Gesture control method, apparatus and system
DK13882158.2T DK2988210T3 (en) 2013-04-15 2013-09-17 Process, device and system for gesture management

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201310130673.4A CN104102335B (zh) 2013-04-15 2013-04-15 一种手势控制方法、装置和系统
CN201310130673.4 2013-04-15

Publications (1)

Publication Number Publication Date
WO2014169566A1 true WO2014169566A1 (zh) 2014-10-23

Family

ID=51670543

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2013/083690 WO2014169566A1 (zh) 2013-04-15 2013-09-17 一种手势控制方法、装置和系统

Country Status (8)

Country Link
US (1) US10013067B2 (zh)
EP (1) EP2988210B1 (zh)
JP (1) JP6126738B2 (zh)
KR (1) KR101969624B1 (zh)
CN (1) CN104102335B (zh)
DK (1) DK2988210T3 (zh)
HU (1) HUE043509T2 (zh)
WO (1) WO2014169566A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105681859A (zh) * 2016-01-12 2016-06-15 东华大学 基于人体骨骼追踪控制智能电视的人机交互方法
EP3062196A1 (en) * 2015-02-26 2016-08-31 Xiaomi Inc. Method and apparatus for operating and controlling smart devices with hand gestures

Families Citing this family (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160162039A1 (en) * 2013-07-21 2016-06-09 Pointgrab Ltd. Method and system for touchless activation of a device
CN105589550A (zh) * 2014-10-21 2016-05-18 中兴通讯股份有限公司 信息发布方法、信息接收方法、装置及信息共享系统
CN105630142A (zh) * 2014-11-07 2016-06-01 中兴通讯股份有限公司 一种发布和传递识别信息的方法和装置及信息识别系统
CN105807549A (zh) * 2014-12-30 2016-07-27 联想(北京)有限公司 一种电子设备
US10484827B2 (en) * 2015-01-30 2019-11-19 Lutron Technology Company Llc Gesture-based load control via wearable devices
ES2748608T3 (es) 2015-12-14 2020-03-17 Signify Holding Bv Método para controlar un dispositivo de iluminación
CN106886279A (zh) * 2015-12-16 2017-06-23 华为技术有限公司 一种控制方法及装置
CN107436678B (zh) * 2016-05-27 2020-05-19 富泰华工业(深圳)有限公司 手势控制系统及方法
EP3529675B1 (de) 2016-10-21 2022-12-14 Trumpf Werkzeugmaschinen GmbH + Co. KG Innenraum-personenortung-basierte fertigungssteuerung in der metallverarbeitenden industrie
JP2019537786A (ja) 2016-10-21 2019-12-26 トルンプフ ヴェルクツォイクマシーネン ゲゼルシャフト ミット ベシュレンクテル ハフツング ウント コンパニー コマンディートゲゼルシャフトTrumpf Werkzeugmaschinen GmbH + Co. KG 金属処理産業における製造工程の内部位置特定に基づく制御
US11678950B2 (en) 2017-09-04 2023-06-20 Hiroki Kajita Multiple-viewpoint video image viewing system and camera system
CN108259671B (zh) * 2018-01-30 2020-01-03 深圳市科迈爱康科技有限公司 翻译内容传送方法和系统
US10296102B1 (en) * 2018-01-31 2019-05-21 Piccolo Labs Inc. Gesture and motion recognition using skeleton tracking
WO2019176441A1 (ja) * 2018-03-15 2019-09-19 ソニー株式会社 情報処理装置、情報処理方法、およびプログラム
CN108389459A (zh) * 2018-03-15 2018-08-10 湖南工业大学 一种基于虚拟现实的印刷模拟系统
US11003629B2 (en) * 2018-10-31 2021-05-11 EMC IP Holding Company LLC Dual layer deduplication for application specific file types in an information processing system
CN109934152B (zh) * 2019-03-08 2021-02-09 浙江理工大学 一种针对手语图像的改进小弯臂图像分割方法
US11543888B2 (en) 2019-06-27 2023-01-03 Google Llc Intent detection with a computing device
CN112698716A (zh) * 2019-10-23 2021-04-23 上海博泰悦臻电子设备制造有限公司 基于手势识别的车内设置、控制方法、系统、介质及设备
US11144128B2 (en) * 2019-11-20 2021-10-12 Verizon Patent And Licensing Inc. Systems and methods for controlling video wall content using air gestures
CN111880422B (zh) * 2020-07-20 2024-01-05 Oppo广东移动通信有限公司 设备控制方法及装置、设备、存储介质
US11798201B2 (en) 2021-03-16 2023-10-24 Snap Inc. Mirroring device with whole-body outfits
US11734959B2 (en) 2021-03-16 2023-08-22 Snap Inc. Activating hands-free mode on mirroring device
US11809633B2 (en) * 2021-03-16 2023-11-07 Snap Inc. Mirroring device with pointing based navigation
US11978283B2 (en) 2021-03-16 2024-05-07 Snap Inc. Mirroring device with a hands-free mode
US11908243B2 (en) 2021-03-16 2024-02-20 Snap Inc. Menu hierarchy navigation on electronic mirroring devices
CN113223344B (zh) * 2021-05-25 2022-08-23 湖南汽车工程职业学院 一种基于大数据的艺术设计用专业教学展示系统
US20230076716A1 (en) * 2021-09-03 2023-03-09 Apple Inc. Multi-device gesture control
CN115778046B (zh) * 2023-02-09 2023-06-09 深圳豪成通讯科技有限公司 基于数据分析的智能安全帽调节控制方法和系统

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102778954A (zh) * 2012-06-29 2012-11-14 Tcl集团股份有限公司 一种手势操作管理方法及装置
CN102810023A (zh) * 2011-06-03 2012-12-05 联想(北京)有限公司 识别手势动作的方法及终端设备
CN102915111A (zh) * 2012-04-06 2013-02-06 寇传阳 一种腕上手势操控系统和方法

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE60042156D1 (de) * 2000-08-24 2009-06-18 Sony Deutschland Gmbh Fernbedienungsgeber
US20060125968A1 (en) * 2004-12-10 2006-06-15 Seiko Epson Corporation Control system, apparatus compatible with the system, and remote controller
DE202006020369U1 (de) * 2005-03-04 2008-05-21 Apple Inc., Cupertino Multifunktionale handgehaltene Vorrichtung
WO2007004134A2 (en) * 2005-06-30 2007-01-11 Philips Intellectual Property & Standards Gmbh Method of controlling a system
US8599132B2 (en) * 2008-06-10 2013-12-03 Mediatek Inc. Methods and systems for controlling electronic devices according to signals from digital camera and sensor modules
JP5544570B2 (ja) * 2010-08-27 2014-07-09 公立大学法人首都大学東京 情報提示システム、情報提示プログラム、及び情報提示方法
US20120068857A1 (en) * 2010-09-22 2012-03-22 Apple Inc. Configurable remote control
US9423878B2 (en) * 2011-01-06 2016-08-23 Blackberry Limited Electronic device and method of displaying information in response to a gesture
US9778747B2 (en) * 2011-01-19 2017-10-03 Hewlett-Packard Development Company, L.P. Method and system for multimodal and gestural control
US9152376B2 (en) * 2011-12-01 2015-10-06 At&T Intellectual Property I, L.P. System and method for continuous multimodal speech and gesture interaction

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102810023A (zh) * 2011-06-03 2012-12-05 联想(北京)有限公司 识别手势动作的方法及终端设备
CN102915111A (zh) * 2012-04-06 2013-02-06 寇传阳 一种腕上手势操控系统和方法
CN102778954A (zh) * 2012-06-29 2012-11-14 Tcl集团股份有限公司 一种手势操作管理方法及装置

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3062196A1 (en) * 2015-02-26 2016-08-31 Xiaomi Inc. Method and apparatus for operating and controlling smart devices with hand gestures
US10007354B2 (en) 2015-02-26 2018-06-26 Xiaomi Inc. Method and apparatus for controlling smart device
CN105681859A (zh) * 2016-01-12 2016-06-15 东华大学 基于人体骨骼追踪控制智能电视的人机交互方法

Also Published As

Publication number Publication date
EP2988210B1 (en) 2018-11-28
US10013067B2 (en) 2018-07-03
EP2988210A1 (en) 2016-02-24
KR101969624B1 (ko) 2019-04-16
CN104102335B (zh) 2018-10-02
DK2988210T3 (en) 2019-02-25
CN104102335A (zh) 2014-10-15
EP2988210A4 (en) 2016-05-25
US20160266653A1 (en) 2016-09-15
KR20150143724A (ko) 2015-12-23
JP6126738B2 (ja) 2017-05-10
HUE043509T2 (hu) 2019-08-28
JP2016518657A (ja) 2016-06-23

Similar Documents

Publication Publication Date Title
WO2014169566A1 (zh) 一种手势控制方法、装置和系统
JP6374052B2 (ja) デジタルメディアコンテンツ再生を転送する方法、装置およびシステム
CN111417028B (zh) 信息处理方法、装置、存储介质及电子设备
CN104866083B (zh) 手势识别方法、装置和系统
CN104866084B (zh) 手势识别方法、装置和系统
WO2015137740A1 (ko) 로봇을 이용한 홈 네트워크 시스템 및 그 제어방법
CN104662558A (zh) 用于手势输入的指尖定位
WO2016095641A1 (zh) 数据交互方法及系统、移动终端
WO2017020671A1 (zh) 视频交互方法、装置及视频源设备
US20160117553A1 (en) Method, device and system for realizing visual identification
CN109388238A (zh) 一种电子设备的控制方法及装置
WO2016062191A1 (zh) 信息发布方法、信息接收方法、装置及信息共享系统
CN115243085A (zh) 显示设备及设备互联方法
WO2016070827A1 (zh) 一种发布和传递识别信息的方法和装置及信息识别系统
CN115378748B (zh) 一种电子家居设备及定位方法
CN114257821B (zh) 一种视频的播放方法和电子设备
WO2024021036A1 (zh) 模型控制方法、装置、设备、系统以及计算机存储介质
TW202211001A (zh) 頭戴式顯示器和其控制方法
CN116055774A (zh) 一种显示设备及显示设备的控制方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13882158

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2016506752

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 2013882158

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 20157032564

Country of ref document: KR

Kind code of ref document: A

WWE Wipo information: entry into national phase

Ref document number: 14784435

Country of ref document: US