WO2015127786A1 - Gesture recognition method, apparatus, system and computer storage medium - Google Patents
- Publication number
- WO2015127786A1 (application PCT/CN2014/089167)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- gesture
- recognition device
- gesture recognition
- information
- recognized
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F13/00—Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
- G06F13/38—Information transfer, e.g. on bus
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/002—Specific input/output arrangements not covered by G06F3/01 - G06F3/16
- G06F3/005—Input arrangements through a video camera
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
- G06V40/28—Recognition of hand or arm movements, e.g. recognition of deaf sign language
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/28—Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
- H04L12/2803—Home automation networks
Definitions
- the invention relates to a gesture recognition technology in the field of communication and information, in particular to a gesture recognition method, device, system and computer storage medium.
- the development of digital multimedia and network enriches the entertainment experience in the daily life of users and also facilitates the operation of home appliances.
- the current technology allows users to watch HDTV at home.
- the source of TV programs may come from digital discs, cable TV, the Internet, etc., with stereo, 5.1-channel, 7.1-channel and even more realistic sound effects; users can also realize these experiences using handheld multimedia consumer devices such as tablets (PADs) and mobile phones.
- related technologies allow users to transfer content between different devices over the network and to control the playback of a device through remote controls and gestures, for example switching to the previous or next channel, or operating home appliances such as air conditioners and lights over the network to control temperature and lighting.
- in the traditional control of multiple devices, users commonly use each device's own remote controller, and these remote controllers are often not interchangeable; most of them, such as those for traditional televisions and audio equipment, do not have network functions.
- there are also remote controls that support network functions, for example by loading software that supports interworking protocols on devices with computing and network capabilities (such as mobile phones and PADs) to control another device.
- Gesture control is a relatively novel control technology.
- the control method is as follows: a camera on one device monitors the gesture, analyzes and recognizes it, converts it into a control command of the device, and executes it; alternatively, the user wears a device on the hand, arm or body, such as a ring, a watch or a vest, and the wearable device recognizes the user's action, converts it into a control command of the device, and executes the command.
- a user may manipulate a device by gesture, for example by adding a camera to a television, collecting and recognizing the user's gesture, determining the command corresponding to the captured gesture according to the correspondence between gestures and manipulation commands, and executing that command; the operations that have been implemented include changing channels, changing the volume, and the like.
- the device to be manipulated is required to have a camera to collect gestures.
- in one place, multiple devices are often gathered together, and each of these devices may integrate a gesture recognition camera.
- to control a device by gesture, the user must face the camera of that device and make a control action; to control multiple devices, the user needs to move between them to perform the corresponding gestures, which is cumbersome and time-consuming and affects the user experience.
- the embodiment of the invention provides a gesture recognition method, device, system and computer storage medium, which enable a gesture recognition device to quickly determine whether to accept the control of a user's gesture, thereby enabling the target manipulation device to respond to the gesture quickly.
- An embodiment of the present invention provides a gesture recognition method, where the method includes:
- the first gesture recognition device determines whether to accept the control of the gesture according to the information of the gesture recognized by itself and the information of the gesture recognized by the second gesture recognition device.
- the embodiment of the invention further provides a gesture recognition method, the method comprising:
- the first gesture recognition device issues information of the gesture recognized by itself and receives information of the gesture recognized by the second gesture recognition device.
- the embodiment of the present invention further provides a first gesture recognition apparatus, where the gesture recognition apparatus includes:
- a first identifying unit configured to recognize information of the gesture
- the first determining unit is configured to determine whether to accept the control of the gesture according to the information of the gesture recognized by the first recognition unit and the information of the gesture provided by the second gesture recognition device.
- the embodiment of the present invention further provides a first gesture recognition apparatus, where the gesture recognition apparatus includes:
- the second issuing unit is configured to publish information of the gesture recognized by the second identifying unit in the network, and receive information of the gesture provided by the second gesture recognition device.
- An embodiment of the present invention further provides a gesture recognition system, including at least one of the first gesture recognition devices described above.
- the embodiment of the invention further provides a computer storage medium, wherein the computer storage medium stores computer executable instructions, and the computer executable instructions are used to execute any one of the gesture recognition methods described above.
- the first gesture recognition device may be disposed in a plurality of devices to be manipulated, so that in a scene with multiple devices to be controlled, the first gesture recognition device can determine whether to accept the control of the gesture based on the information of the gesture recognized by itself and the information of the gesture recognized by the second gesture recognition device; the recognition is more automatic and user-friendly, conforms to the user's usage habits, and makes it easier for the user to control devices by gesture.
- FIG. 1a is a schematic diagram 1 of a gesture recognition method according to an embodiment of the present invention.
- FIG. 1b is a schematic diagram 1 of a scene for gesture recognition according to an embodiment of the present invention.
- FIG. 2 is a second schematic diagram of a gesture recognition method according to an embodiment of the present invention.
- FIG. 3 is a schematic diagram 1 of a first gesture recognition apparatus according to an embodiment of the present invention.
- FIG. 4 is a schematic diagram 2 of a first gesture recognition apparatus according to an embodiment of the present invention.
- FIG. 5 is a second schematic diagram of a gesture recognition scenario according to an embodiment of the present invention.
- FIG. 7 is a schematic diagram of message interaction between gesture recognition devices according to an embodiment of the present invention.
- the inventors have found that control information is transmitted between different devices through a network, and related technologies can already realize mutual discovery and control between devices, such as Universal Plug and Play (UPnP).
- the technology stipulates how to send and receive network messages between devices to realize discovery and control between devices.
- the technology uses network address and digital code as the identifier of the device, which is a kind of machine identification.
- the final control still requires the user to select and operate according to the device's machine identification; if a gesture recognition method, device and system can be provided that make it simpler and more convenient for the user to manipulate multiple devices and that assist in implementing coordinated gesture control, the user's entertainment life will be easier and more enjoyable, users will not need to learn complicated usage, and companies can produce products that are more popular with consumers.
- the first gesture recognition device determines whether to accept the control of the gesture according to the recognized gesture and the position of the gesture, and the gesture recognized by the second gesture recognition device and the position of the gesture.
- the first gesture recognition device and the second gesture recognition device described in the embodiments of the present invention are gesture recognition devices that implement the same function; in actual applications, they may be disposed in multiple devices to be controlled. For convenience of description, the gesture recognition device disposed in one device is described as the first gesture recognition device, and the gesture recognition devices disposed in the other devices are described as second gesture recognition devices; that is, the number of second gesture recognition devices is at least one.
- the embodiment of the present invention describes a gesture recognition method; as shown in FIG. 1a, the first gesture recognition apparatus determines whether to accept the control of the gesture according to the gesture and the position of the gesture recognized by itself, and the gesture and the position of the gesture recognized by the second gesture recognition apparatus.
- the information of the gesture includes at least one of the following: a shape of the gesture, a location of the gesture; and a shape of the gesture includes at least one of the following information: a gesture number, a gesture text description information, and digital graphic information;
- the shape of the gesture may be described in the form of gesture number, gesture text description information or digital graphic information;
- the position of the gesture includes at least one of the following: a spatial coordinate parameter of the gesture, image data of the gesture with depth information, and a positioning parameter of the gesture relative to an absolute origin;
- the first gesture recognition device and the second gesture recognition device may be disposed in the devices to be manipulated and perform gesture recognition using their own visual recognition capabilities, or they may perform gesture recognition as wearable devices; accordingly, the types of gestures include a computer vision recognition gesture and a wearable device gesture; correspondingly, the position of the gesture recognized by the first gesture recognition device characterizes the relative position between the gesture and the first gesture recognition device, and the position of the gesture recognized by the second gesture recognition device characterizes the relative position between the gesture and the second gesture recognition device.
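For concreteness only, the "information of the gesture" described above could be modelled as a small record; the field names and the type labels below are illustrative assumptions, not defined by the patent:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class GestureInfo:
    """Gesture information exchanged between recognition devices (sketch)."""
    shape: str                    # e.g. a gesture number or a text description
    gesture_type: str             # "computer-vision" or "wearable" (assumed labels)
    # Position relative to the recognizing device, e.g. (x, y, z) coordinates.
    position: Optional[Tuple[float, float, float]] = None

# Example record for a gesture recognized by a camera-equipped device.
g = GestureInfo(shape="gesture-12", gesture_type="computer-vision",
                position=(0.4, 1.1, 2.3))
```

A record like this could be serialized and published in the network, and compared against the records received from peer devices.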
- when the information of the gesture includes the shape of the gesture, the first gesture recognition device determines whether to accept the control of the gesture according to the information of the gesture recognized by itself and the information of the gesture recognized by the second gesture recognition device; the following method can be used:
- when the first gesture recognition device determines, according to the shape of the gesture recognized by itself, that it supports the gesture, it determines to accept the control of the gesture; otherwise, it determines not to accept the control of the gesture.
- when the information of the gesture includes the position of the gesture, the first gesture recognition device determines whether to accept the control of the gesture according to the information of the gesture recognized by itself and the information of the gesture recognized by the second gesture recognition device;
- the following method can be used:
- the first gesture recognition device determines whether the gesture is closest to the first gesture recognition device; if so, it determines to accept the control of the gesture; otherwise, it determines not to accept the control of the gesture.
- when the information of the gesture includes the shape of the gesture and the position of the gesture, the first gesture recognition device determines whether to accept the control of the gesture according to the information of the gesture recognized by itself and the information of the gesture recognized by the second gesture recognition device, as follows:
- the first gesture recognition device determines, according to the shape of the gesture recognized by itself, that it supports the gesture, and determines, according to the position of the gesture recognized by itself and the position of the gesture recognized by the second gesture recognition device, whether the gesture satisfies a preset condition; if so, it determines to accept the control of the gesture; otherwise, it determines not to accept the control of the gesture.
- the preset condition includes at least one of the following conditions:
- the distance between the gesture and the first gesture recognition device is smaller than the distance between the gesture and the second gesture recognition device
- the angle at which the gesture deviates from the first gesture recognition device is smaller than the angle at which the gesture deviates from the second gesture recognition device.
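As an illustrative sketch (not part of the patent), the acceptance decision described above can be expressed as follows, assuming the device has already obtained its own distance and angle measurements and those published by the peer device(s); all names, and the treatment of the two preset conditions as alternatives, are assumptions:

```python
def accept_gesture(own_shape, supported_shapes,
                   own_distance, own_angle,
                   peer_distances, peer_angles):
    """Decide whether this (first) gesture recognition device should
    accept control of a gesture, per the preset conditions above.

    own_distance/own_angle: gesture as measured by this device.
    peer_distances/peer_angles: measurements published by the second
    gesture recognition device(s). Names are illustrative.
    """
    # The device must support the recognized gesture shape at all.
    if own_shape not in supported_shapes:
        return False
    # Condition 1: the gesture is closer to this device than to any peer.
    closest = all(own_distance < d for d in peer_distances)
    # Condition 2: the gesture deviates from this device by a smaller
    # angle than from any peer (the user faces this device most directly).
    most_facing = all(own_angle < a for a in peer_angles)
    # The patent says "at least one of the following conditions",
    # hence the OR here.
    return closest or most_facing
```

With real measurements, the distances and angles would come from the device's visual recognition and from the gesture information published by the second device.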
- both the first gesture recognition device and the second gesture recognition device acquire the gesture and respectively recognize the shape of the gesture and the position of the gesture, and the second gesture recognition device transmits the recognized gesture shape and gesture position to the first gesture recognition device; the first gesture recognition device determines to support the gesture according to the shape of the gesture recognized by itself (of course, the first gesture recognition device may also determine to support the gesture according to the shape recognized by itself and/or the shape recognized by the second gesture recognition device), and when the angle at which the gesture deviates from the first gesture recognition device is smaller than the angle at which the gesture deviates from the second gesture recognition device, it determines to accept the control of the gesture.
- a schematic diagram of such a scene is shown in FIG. 1b.
- the first gesture recognition device is disposed in the home storage server, and the second gesture recognition device is disposed in the DVD player.
- the first gesture recognition device determines, according to the shape of the gesture recognized by itself, that the gesture implemented by the current user is supported, and determines, according to the position of the gesture recognized by itself and the position of the gesture recognized by the second gesture recognition device, that the angle θ1 by which the user-implemented gesture deviates from the first gesture recognition device is smaller than the angle θ2 by which the gesture deviates from the second gesture recognition device; that is, the gesture performed by the user faces the first gesture recognition device most directly, and therefore the first gesture recognition device determines that it accepts the control of the user's gesture; in an actual application, considering the user's usage habits, when the user uses a gesture to control a device, the gesture is often performed directly in front of that device; therefore, when the first gesture recognition device determines that the angle by which the user-implemented gesture deviates from itself is the smallest, it can conclude that the gesture is intended for it.
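For illustration only, the angle comparison in the scene above (θ1 < θ2) could be computed from 2-D positions as follows; the device positions, facing vectors, and function names are hypothetical and not taken from the patent:

```python
import math

def deviation_angle(device_pos, device_facing, gesture_pos):
    """Angle, in degrees, between the device's facing direction and
    the direction from the device to the gesture."""
    dx = gesture_pos[0] - device_pos[0]
    dy = gesture_pos[1] - device_pos[1]
    fx, fy = device_facing
    cos_a = (dx * fx + dy * fy) / (math.hypot(dx, dy) * math.hypot(fx, fy))
    # Clamp to guard against floating-point drift before acos.
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))

# Illustrative layout: gesture directly in front of the first device.
theta1 = deviation_angle((0.0, 0.0), (0.0, 1.0), (0.0, 2.0))  # e.g. storage server
theta2 = deviation_angle((3.0, 0.0), (0.0, 1.0), (0.0, 2.0))  # e.g. DVD player
```

Here theta1 is 0 degrees (the gesture is directly in front of the first device) while theta2 is larger, so under the θ1 < θ2 rule the first device would accept the control of the gesture, matching the FIG. 1b scenario.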
- the first gesture recognition device issues the information of the gesture recognized by itself, so that the second gesture recognition device determines whether to accept the control of the gesture according to this information and the information of the gesture recognized by the second gesture recognition device itself.
- the first gesture recognition device publishes the information of the recognized user-implemented gesture in the network, so that the second gesture recognition device determines, according to the information of the gesture recognized by the first gesture recognition device and the information of the gesture recognized by the second gesture recognition device itself, whether the gesture implemented by the user satisfies a preset condition; in FIG. 1b, since the angle by which the user's gesture deviates from the first gesture recognition device is the smallest, the second gesture recognition device determines not to accept the control of the gesture implemented by the user.
- the first gesture recognition device may determine to accept the control of the gesture when it does not receive a message indicating that the second gesture recognition device has determined to accept the gesture control, and publish a message accepting the gesture control in the network; the consideration here is that when no second gesture recognition device determines to accept the gesture control, the gesture is necessarily intended for the first gesture recognition device, and thus the first gesture recognition device needs to accept the control of the gesture;
- the first gesture recognition device may re-publish the information of the recognized gesture when it does not receive, within a preset time, a message indicating that the second gesture recognition device has determined to accept the gesture control; this prevents the problem that the second gesture recognition device did not receive the information of the recognized gesture previously issued by the first gesture recognition device and therefore cannot determine whether to accept the gesture control.
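A minimal sketch of this timeout behaviour, combining the accept-by-default and re-publish variants described above; the callables, return labels, and parameters are placeholders, as the patent does not define them:

```python
def publish_and_decide(publish, wait_for_peer_accept,
                       timeout=2.0, max_republish=3):
    """Publish the recognized gesture, wait for a peer 'accept'
    message, and either re-publish or accept control locally.

    publish(): sends the gesture information into the network.
    wait_for_peer_accept(timeout): returns True if a peer device
    announced that it accepts the gesture control within `timeout`.
    Both are placeholder callables for this sketch.
    """
    for _ in range(max_republish):
        publish()
        if wait_for_peer_accept(timeout):
            return "peer-accepted"   # another device took the gesture
    # No peer accepted within the preset time on any attempt: the
    # gesture must have been aimed at this device, so accept it.
    return "self-accepted"
```

In a real device the waiting step would listen on the network for the "accept gesture control" message described in the text.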
- the first gesture recognition device releases the information of the recognized gesture in the form of a message; for example, the information may be distributed in the network in the form of a broadcast, multicast or unicast message.
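As a non-authoritative sketch, publishing the recognized-gesture information as a multicast datagram might look like the following; the multicast address, port, and JSON message layout are assumptions, since the patent does not specify any message format:

```python
import json
import socket

MULTICAST_GROUP = ("239.0.0.100", 30000)  # illustrative address and port

def encode_gesture_message(shape, position):
    # Serialize the gesture information; the field names are assumed.
    return json.dumps({"gesture": shape, "position": position}).encode("utf-8")

def publish_gesture(shape, position, group=MULTICAST_GROUP):
    # Send the encoded information as a single UDP multicast datagram.
    msg = encode_gesture_message(shape, position)
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        # TTL 1 keeps the datagram on the local network segment.
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
        sock.sendto(msg, group)
    finally:
        sock.close()
    return msg
```

A broadcast or unicast variant would differ only in the destination address and socket options; peer devices would join the same group to receive the published gesture information.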
- the embodiment of the present invention further describes a gesture recognition method.
- the first gesture recognition apparatus issues information of a gesture recognized by itself, and receives information of the gesture recognized by the second gesture recognition apparatus.
- the information of the gesture includes at least one of the following: a shape of the gesture, a location of the gesture; and a shape of the gesture includes at least one of the following information: a gesture number, a gesture text description information, and digital graphics information;
- the location of the gesture includes at least one of the following information: a spatial coordinate parameter of the gesture, image data of the gesture with depth information, and a positioning parameter of the gesture relative to an absolute origin.
- the first gesture recognition device and the second gesture recognition device may be disposed in the devices to be manipulated and perform gesture recognition using the devices' visual recognition capabilities, or they may perform gesture recognition as wearable devices; correspondingly, the types of gestures include: computer vision recognition gestures and wearable device gestures.
- the first gesture recognition device further determines whether to accept the control of the gesture according to the information of the gesture recognized by itself and the information of the gesture recognized by the second gesture recognition device; for example, when the information includes the shape of the gesture and the position of the gesture, the first gesture recognition device determines, according to the shape of the gesture recognized by itself, that it supports the gesture, and determines, according to the position of the gesture recognized by itself and the position of the gesture recognized by the second gesture recognition device, whether the gesture satisfies a preset condition; if so, it determines to accept the control of the gesture; otherwise, it determines not to accept the control of the gesture.
- the preset condition includes at least one of the following conditions:
- the distance between the gesture and the first gesture recognition device is smaller than the distance between the gesture and the second gesture recognition device
- the angle at which the gesture deviates from the first gesture recognition device is smaller than the angle at which the gesture deviates from the second gesture recognition device.
- the first gesture recognition device determines to accept the control of the gesture when it does not receive, within a preset time, a message indicating that the second gesture recognition device has determined to accept the gesture control, and issues a message accepting the gesture control; or,
- the first gesture recognition device re-publishes the information of the gesture recognized by itself when it does not receive, within a preset time, a message indicating that the second gesture recognition device has determined to accept the gesture control; the publishing includes: publishing the information in the form of a unicast, multicast, or broadcast message on the network.
- when the first gesture recognition device determines to accept the control of the gesture, the first gesture recognition device may also issue a message accepting the gesture control in the network.
- in this way, the second gesture recognition device can be caused not to perform the operation of determining whether to accept the gesture control.
- when the first gesture recognition device issues the information of the gesture recognized by itself, any of the following methods can be used:
- the first gesture recognition device issues the information of the gesture recognized by itself in the form of a message; for example, the information may be distributed in the network in the form of a broadcast, multicast or unicast message.
- the embodiment of the invention further describes a computer storage medium, wherein the computer storage medium stores executable instructions, and the executable instructions are used to execute the gesture recognition method shown in FIG. 1a or FIG. 2.
- the embodiment of the present invention further describes a first gesture recognition apparatus.
- the first gesture recognition apparatus includes:
- the first identifying unit 31 is configured to recognize information of the gesture
- the first determining unit 32 is configured to determine whether to accept the control of the gesture according to the information of the gesture recognized by the first recognition unit 31 and the information of the gesture provided by the second gesture recognition device.
- the information of the gesture includes at least one of the following: a shape of the gesture, and a location of the gesture.
- the first determining unit 32 is further configured to determine, according to the shape of the gesture recognized by the first identifying unit 31, that the gesture is supported, and to determine, according to the position of the gesture recognized by the first identifying unit 31 and the position of the gesture recognized by the second gesture recognition device, whether the gesture satisfies a preset condition; if so, to determine to accept the control of the gesture; otherwise, to determine not to accept the control of the gesture.
- the preset condition includes at least one of the following conditions:
- the distance between the gesture and the gesture recognition device is less than the distance between the gesture and the second gesture recognition device
- the angle at which the gesture deviates from the gesture recognition device is smaller than the angle at which the gesture deviates from the second gesture recognition device.
- the gesture recognition device further includes:
- the first issuing unit 33 is configured to publish information of the gesture recognized by the first identifying unit 31 in the network.
- the first issuing unit 33 is further configured to advertise the gesture recognized by the first identifying unit 31 and the location of the gesture in the form of a message.
- the first determining unit 32 is further configured to: when the first issuing unit 33 does not receive, within a preset time, a message indicating that the second gesture recognition device has determined to accept the gesture control, determine to accept the control of the gesture and trigger the first issuing unit 33 to issue a message accepting the gesture control; or, when the first issuing unit 33 does not receive such a message within a preset time, trigger the first issuing unit 33 to re-publish the information of the gesture recognized by the first identifying unit 31.
- the first issuing unit 33 is further configured to issue a message accepting the gesture control when the first determining unit 32 determines to accept the control of the gesture.
- the shape of the gesture includes at least one of the following information: a gesture number, a gesture text description information, and digital graphic information.
- the location of the gesture includes at least one of the following: a spatial coordinate parameter of the gesture, image data of the gesture with depth information, and a positioning parameter of the gesture relative to an absolute origin.
- the type of the gesture includes: a computer vision recognition gesture and a wearable device gesture.
- in practical applications, the first recognition unit 31 may be implemented by a camera having visual acquisition capability; when the gesture recognition device is a wearable device, the first recognition unit 31 may be implemented by a gravity sensor or a gyroscope; the first determining unit 32 may be implemented by a central processing unit (CPU), a digital signal processor (DSP), or a field programmable gate array (FPGA) in the gesture recognition device; the first issuing unit 33 may be implemented by a functional chip in the gesture recognition apparatus supporting a network protocol such as IEEE 802.11b/g/n or IEEE 802.3.
- the embodiment of the present invention further describes a first gesture recognition apparatus.
- the first gesture recognition apparatus includes:
- the second publishing unit 42 is configured to publish information of the gesture recognized by the second identification unit 41 in the network, and receive information of the gesture provided by the second gesture recognition device.
- the gesture recognition device further includes:
- the second determining unit 43 is configured to determine whether to accept the control of the gesture according to the information of the gesture recognized by the second recognition unit 41 and the information of the gesture recognized by the second gesture recognition device.
- the information of the gesture includes at least one of the following: a shape of the gesture, and a location of the gesture.
- the second determining unit 43 is further configured to, when the information of the gesture includes the shape of the gesture and the location of the gesture, determine, according to the shape of the gesture recognized by the second identifying unit 41, that the gesture is supported, and determine, according to the location of the gesture recognized by the second identifying unit 41 and the location of the gesture recognized by the second gesture recognition device, that the gesture meets a preset condition, and then determine to accept the control of the gesture.
- the preset condition includes at least one of the following conditions:
- the distance between the gesture and the gesture recognition device is smaller than the distance between the gesture and the second gesture recognition device
- the angle at which the gesture deviates from the gesture recognition device is smaller than the angle at which the gesture deviates from the second gesture recognition device.
- the second determining unit 43 is further configured to: when a message indicating that the second gesture recognition device has determined to accept the gesture control is not received within a preset time, determine to accept the control of the gesture and publish a message accepting the gesture control in the network; or,
- the information of the gesture recognized by itself is re-published in the network when the second gesture recognition device determines that the message accepting the gesture control is not received within the preset time.
- the second publishing unit 42 is further configured to publish, in the network, a message accepting the gesture control when the second determining unit 43 determines to accept control of the gesture.
- the second publishing unit 42 is further configured to publish, in the network, the information of the gesture recognized by the second recognition unit 41 in the form of broadcast, multicast, or unicast messages;
- or, when a query request is received, to publish, in the network, the information of the gesture recognized by the second recognition unit 41 in the form of broadcast, multicast, or unicast messages.
- the shape of the gesture includes at least one of the following information: a gesture number, gesture text description information, and digital graphic information.
- the location of the gesture includes at least one of the following: spatial coordinate parameters of the gesture, spatial coordinates of the gesture, image data of the gesture with depth information, and positioning parameters of the gesture relative to an absolute origin.
- the types of gestures include: computer-vision-recognized gestures and wearable-device gestures.
- when the gesture recognition device is disposed in a device to be controlled, the second recognition unit 41 may be implemented by a camera having visual acquisition capability; when the gesture recognition device is a wearable device, the second recognition unit 41 may be implemented by a gravity sensor or a gyroscope; the second determining unit 43 may be implemented by a CPU, DSP, or FPGA in the gesture recognition device; the second publishing unit 42 may be implemented by a function chip in the gesture recognition device supporting a network protocol such as IEEE 802.11b/g/n or IEEE 802.3.
- the embodiment of the invention further describes a gesture recognition system, including the first gesture recognition device shown in FIG. 3 and/or the first gesture recognition device shown in FIG. 4; it should be noted that the first gesture recognition devices included in the system may be disposed on a single device, with the gesture recognition methods shown in FIG. 1 and FIG. 2 implemented by that single device, or may be disposed on multiple devices, with the gesture recognition methods shown in FIG. 1 and FIG. 2 implemented by those multiple devices.
- FIG. 5 is a schematic diagram 2 of a scenario for gesture recognition according to an embodiment of the present invention.
- FIG. 5 shows three devices, namely, a television, a DVD player, and a home storage server.
- the television, the DVD player, and the home storage server are each provided with a gesture recognition device, and the gesture recognition devices use cameras to recognize gestures; for convenience of description only, each gesture recognition device is assumed to have the same gesture recognition capability, as well as the ability to measure the position of a recognized gesture.
- each gesture recognition device is provided with a network interface (corresponding to the network unit); for example, the network interface may support IEEE 802.11b/g/n or IEEE 802.3, so that it can connect to a network supporting the Internet Protocol (IP); each gesture recognition device supports, through its network unit, message interaction with the gesture recognition devices in the other devices, as well as processing manipulation commands or handing over manipulation commands.
- the ability of the gesture recognition devices to discover and connect to one another and to send and receive messages on the network can be implemented using related technologies such as Universal Plug and Play (UPnP), or using multicast Domain Name System (mDNS) and Domain Name System-based Service Discovery (DNS-SD) technologies. This class of technology is used in IP networks to respond to unicast and multicast queries, and to provide function calls, according to predefined message formats.
- for example, the UPnP technology specifies how media display devices (such as televisions) and servers (such as DVD players and home storage servers) respond to queries, and which calling functions they provide.
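In the same spirit as these UPnP/mDNS-style query-response exchanges, a gesture recognition device can answer a query by publishing its recognized gesture information in message form, as described elsewhere in this embodiment. The message shapes and field names below are illustrative assumptions, not part of any of the named protocols:

```python
def handle_network_message(msg, own_gesture_info, send):
    """On receiving a query, publish this device's recognized gesture
    information in message form; other message types are ignored here.
    Returns True when a response was sent."""
    if msg.get("type") == "query":
        send({"type": "gesture_info", "info": own_gesture_info})
        return True
    return False
```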
- in one embodiment, the gesture recognition device includes a camera with image and video capture capability (corresponding to the recognition unit in the gesture recognition device) to capture gestures, and recognizes the position of a gesture by means of infrared ranging.
- in another embodiment, the gesture recognition device may also be a wearable device, such as a ring-type device worn on the hand or a watch-type device worn on the arm; such a wearable device can recognize the user's finger and arm movements through a gyroscope, a gravity sensor, or the like, and also has networking capability.
- the wearable device can exchange information with the devices to be controlled, such as the television, the DVD player, and the home storage server, through its own network unit.
- in this embodiment, the gesture recognition device can recognize gestures within its visual range; for example, it can capture images with the camera, perform position measurement in three-dimensional space using infrared, and analyze image information in the captured images; the gesture recognition device also performs distance measurement on gestures such as finger gestures, palm gestures, and arm gestures, and stores the measurements together with the recognized gesture information.
- in this embodiment, when the gesture recognition devices in the television, the DVD player, and the home storage server recognize a gesture, they send a message in multicast mode; the message includes:
- a unique identifier of the gesture recognition device, such as a network address;
- the number of the recognized gesture, representing the shape of the gesture; for example, 1 represents five fingers spread, 2 represents two fingers, 10 represents a fist, 20 represents a waving arm, and so on;
- the position information of the gesture, which may take the form of spatial coordinates, image data of the gesture with depth information, or positioning parameters of the gesture relative to an absolute origin.
- the message may also include gesture parameters, such as the duration of the gesture.
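The multicast report above can be sketched as a small message codec plus a send helper. The JSON encoding, the multicast group address, and the port are illustrative assumptions; the embodiment does not specify a wire format:

```python
import json
import socket

MCAST_GROUP, MCAST_PORT = "239.255.40.40", 40400  # illustrative, not specified

def encode_gesture_message(device_id, gesture_number, position, duration_ms=None):
    """Build the message described above: the device's unique identifier
    (e.g. a network address), the gesture number representing the shape,
    the position information, and an optional gesture parameter."""
    msg = {"id": device_id, "gesture": gesture_number, "position": position}
    if duration_ms is not None:
        msg["duration_ms"] = duration_ms
    return json.dumps(msg).encode("utf-8")

def decode_gesture_message(payload):
    return json.loads(payload.decode("utf-8"))

def publish(payload, group=MCAST_GROUP, port=MCAST_PORT):
    """Send the payload to the multicast group in one UDP datagram."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    sock.sendto(payload, (group, port))
    sock.close()
```

Each device would call `publish(encode_gesture_message(...))` when it recognizes a gesture, and the other devices would join the same group to receive the reports.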
- FIG. 6 is a flowchart of gesture recognition in the embodiment of the present invention; the flowchart in FIG. 6 refers to the devices in FIG. 5. As shown in FIG. 6, gesture recognition includes the following steps:
- Step 601: each gesture recognition device recognizes information of the gesture.
- each gesture recognition device recognizes the gesture independently; the recognition operation includes recognition of one or more of the gesture shape, the fingers, and the arm, and also includes measurement of the position of the fingers and the arm.
- Step 602: each gesture recognition device publishes the information of the recognized gesture.
- in this embodiment, the information of the gesture includes the shape of the gesture and the position of the gesture.
- the shape and position of the gesture can be published in the network by multicast or similar means.
- Step 603: each gesture recognition device receives the information of gestures published by the other gesture recognition devices.
- taking the processing of the gesture recognition device in the television as an example: the gesture recognition device in the television recognizes the shape and position of the gesture performed by the user, and receives the shape and position of the user's gesture as recognized by the gesture recognition device in the DVD player, as well as the shape and position of the user's gesture as recognized by the home storage server; that is, when the user performs a gesture, the gesture recognition devices in the three devices of FIG. 5 each recognize the shape and position of the user's gesture and publish the recognized shape and position via the network.
- after receiving the information of a gesture, a gesture recognition device needs to store the received gesture information; the gesture recognition device may set a timer to retain the received gesture information for a preset time, after which the stored gesture information may be discarded.
- Step 604: each gesture recognition device determines whether to accept control of the recognized gesture; if so, steps 605 and 606 are performed; otherwise, step 607a or step 607b is performed.
- taking the processing of the gesture recognition device in the television as an example: the gesture recognition device in the television recognizes the shape and position of the gesture performed by the user, and judges whether to accept control of the user's gesture according to the shape and position of the gesture recognized by the gesture recognition devices in the DVD player and the home storage server; with reference to FIG. 5, the gesture recognition device in the television can determine that the angle by which the user's gesture deviates from the television (and from the gesture recognition device in the television) is the smallest, and therefore the gesture recognition device in the television determines to accept control of the gesture performed by the user.
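The decision in step 604 can be sketched as choosing the device that observes the smallest deviation angle among the reports exchanged in steps 602 and 603. The report layout and the set of supported gesture numbers are illustrative assumptions:

```python
def accepts_control(own_id, reports, supported_gestures):
    """reports maps a device identifier to the (gesture_number,
    deviation_angle) pair that the device recognized and published.
    A device accepts control only if it supports the gesture shape and
    observes the strictly smallest deviation angle among all devices."""
    gesture_number, own_angle = reports[own_id]
    if gesture_number not in supported_gestures:
        return False
    return all(own_angle < angle
               for device, (_, angle) in reports.items() if device != own_id)
```

In the FIG. 5 example, the television's report would carry the smallest angle, so only the television's device returns True.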
- Step 605: the gesture recognition device publishes, in the network, a message accepting the gesture control.
- the gesture recognition device determines, according to the gesture performed by the user, the instruction corresponding to the gesture, and controls the corresponding device to execute the instruction.
- Step 606: the gesture recognition device publishes, in the network, a message indicating that the response to the gesture control has finished.
- Step 607a: when the gesture recognition device does not receive, within a preset time, a message that another gesture recognition device has determined to accept the gesture performed by the user, it determines to accept control of the gesture performed by the user.
- correspondingly, when the gesture recognition device determines to accept control of the gesture performed by the user, it may also publish, in the network, a message accepting control of the user's gesture.
- Step 607b: when the gesture recognition device does not receive, within the preset time, a message that another gesture recognition device has determined to accept the gesture performed by the user, it re-publishes the information of the recognized gesture in the network.
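Steps 607a and 607b share the same timeout skeleton: poll for a peer's acceptance message until the preset time elapses, then fall back. A minimal sketch in which `poll_inbox` stands in for the device's network receive path; the function names and message shape are assumptions:

```python
import time

def wait_for_acceptance(poll_inbox, timeout_s=0.5, strategy="accept"):
    """Poll for an 'accept' message from another device; on timeout,
    either claim control of the gesture (step 607a) or signal that the
    recognized gesture info should be re-published (step 607b)."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        msg = poll_inbox()
        if msg and msg.get("type") == "accept":
            return "peer_accepted"
        time.sleep(0.01)
    return "accept_control" if strategy == "accept" else "republish"
```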
- the gesture recognition process described above is further explained in conjunction with the message interaction diagram between the gesture recognition devices shown in FIG. 7; as shown in FIG. 7, the process includes the following steps:
- Step 701: each gesture recognition device in devices 1 to 3 recognizes a gesture action made by the user.
- a gesture recognition device may recognize the gesture performed by the user through a camera; of course, the gesture recognition device itself may also be a wearable device (for example, the user simultaneously wears the wearable devices corresponding to each device to be controlled), thereby recognizing the gesture performed by the user.
- Step 702: the gesture recognition devices in devices 1 to 3 each publish, in the network, the recognized gesture and the position of the gesture, so that every gesture recognition device receives the gestures and gesture positions recognized by the other gesture recognition devices.
- Step 703: the gesture recognition devices in devices 1 to 3 determine whether to accept control of the gesture performed by the user.
- Step 704: the gesture recognition device that has determined to accept the gesture control publishes a message indicating that it accepts the gesture control.
- assuming the gesture recognition device in device 1 determines to accept control of the gesture, the gesture recognition device in device 1 publishes in the network a message that device 1 accepts the gesture control.
- Step 705: after controlling the corresponding device to respond to the gesture control, the gesture recognition device that accepted the gesture control publishes, in the network, a message that the gesture control has been responded to.
- after the gesture recognition device in device 1 controls device 1 to respond to the gesture control, it publishes in the network a message that device 1 has responded to the control of the gesture.
- it can be seen that, through the embodiments of the present invention, the target controlled device of a gesture performed by the user is determined in a multi-device scenario, thereby implementing control of the target device.
- in the above embodiments, the television, the DVD player, and the storage server are used as the controlled devices, but the present invention is not limited to the controlled devices mentioned in the embodiments: computers, audio systems, speakers, projectors, set-top boxes, and the like can all serve as controlled devices, and even other industrial devices such as automobiles, machine tools, and ships can be controlled by visual recognition and discovery control devices.
- in the above embodiments, the camera of the gesture recognition device may be of various specifications; for example, it may have a fixed or variable focal length, and its rotation range may cover up, down, left, and right, or only left and right; only one camera with the capabilities described in the embodiments needs to be configured. When the gesture recognition device recognizes the position of a gesture, laser infrared or light of other wavebands may be used; of course, three cameras may also be used to compute the range, or more than three cameras may be used with a weighted-adjustment method to determine the position of the recognized gesture.
- the above embodiments, where network-related, are applicable to IP networks supported by communication networks such as IEEE 802.3, IEEE 802.11b/g/n, powerline networks (POWERLINE), cable (CABLE), the Public Switched Telephone Network (PSTN), 3rd Generation Partnership Project (3GPP) networks, and 3GPP2 networks; the operating system of each device may be a UNIX-like, WINDOWS-like, ANDROID-like, or IOS operating system, and the consumer interface may use the JAVA language interface, among others.
- in the several embodiments provided by the present invention, it should be understood that the disclosed apparatus and methods may be implemented in other manners.
- the device embodiments described above are merely illustrative.
- the division of the units is only a logical function division; in actual implementation there may be other division manners, for example: multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
- the coupling, direct coupling, or communication connections between the components shown or discussed may be indirect coupling or communication connections through interfaces, devices, or units, and may be electrical, mechanical, or of other forms.
- the units described above as separate components may or may not be physically separated, and components displayed as units may or may not be physical units, that is, they may be located in one place or distributed across multiple network units; some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
- each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may serve separately as one unit, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.
- those of ordinary skill in the art will understand that all or part of the steps of the above method embodiments may be accomplished by hardware related to program instructions; the foregoing program may be stored in a computer-readable storage medium, and when executed, the program performs the steps of the above method embodiments.
- the foregoing storage medium includes various media that can store program code, such as a removable storage device, a read-only memory (ROM), a magnetic disk, or an optical disk.
- alternatively, if the above integrated unit of the present invention is implemented in the form of a software functional module and sold or used as a standalone product, it may also be stored in a computer-readable storage medium.
- based on such an understanding, the technical solutions of the embodiments of the present invention, in essence or in the part contributing to the related art, may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions to cause a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the methods described in the various embodiments of the present invention.
- the foregoing storage medium includes various media that can store program codes, such as a mobile storage device, a ROM, a magnetic disk, or an optical disk.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Computer Networks & Wireless Communication (AREA)
- Automation & Control Theory (AREA)
- Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Health & Medical Sciences (AREA)
- Psychiatry (AREA)
- Social Psychology (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
Embodiments of the present invention disclose a gesture recognition method, apparatus, and system. A gesture recognition method disclosed by the present invention includes: a first gesture recognition device determines whether to accept control of a gesture according to the gesture it recognizes and the position of the gesture, as well as the gesture and the position of the gesture recognized by a second gesture recognition device. Another gesture recognition method disclosed by the present invention includes: the first gesture recognition device publishes information of the gesture it recognizes, and receives information of the gesture recognized by the second gesture recognition device.
Description
The present invention relates to gesture recognition technology in the field of communications and information, and in particular to gesture recognition methods, devices, systems, and computer storage media.
The development of digital multimedia and networks has enriched the entertainment experience in users' daily lives and made operating home appliances more convenient. Current technology allows users to watch high-definition television at home, with TV programs sourced from digital discs, cable television, the Internet, and so on; users can experience stereo, 5.1-channel, 7.1-channel, and even more realistic sound effects, and can also enjoy these experiences on handheld multimedia consumer electronics, tablets (PADs), and mobile phones. Related technologies also allow users to transfer the playback of content between different devices over a network, to control a device's playback with a remote control or gestures (for example, switching to the previous or next channel), and to operate home appliances such as air conditioners and lights over the network so as to control temperature, lighting, and so on.
In traditional control of multiple devices, it is common for a user to control each device with its own remote control; these remote controls are generally not interchangeable, and most lack network capability, such as the remote controls of traditional televisions and audio systems. There are also some remote controls that support network functions, for example loading software supporting an interworking protocol on a device with computing and networking capability (such as a mobile phone or PAD) to control another device.
With the development of technology, the demand for sharing and transferring content between multiple devices keeps growing, which requires controlling numerous home appliances individually. The control approaches include: the user picks out the remote control of the corresponding device from a pile of remote controls and must keep switching between them; a user familiar with computer operation uses a PAD or mobile phone to control the devices; or a single device is controlled with simple gestures.
It can be seen that the above control approaches are very inconvenient; to use different devices, one often has to learn different control tools. Users would rather use a simpler, more natural way to control more devices within a relatively small area.
Gesture control is a relatively novel control technology at present. The control approaches include: a camera on a device monitors gesture actions, analyzes and recognizes them, and converts them into control instructions for the device to execute; or the user uses wearable devices, wearing ring-, watch-, or vest-like devices on the hand, arm, or body, and the devices recognize the user's actions, convert them into control instructions, and execute the instructions.
Related technology allows users to control devices with gestures; for example, by adding a camera to a television, the user's gestures are captured and recognized, and according to the correspondence between gestures and control commands, the instruction corresponding to the captured gesture is determined and executed, achieving the effect of controlling the television through gestures. Controls already implemented include changing channels and adjusting volume.
However, the related technology requires the controlled device to have a camera to capture gestures. In some environments, such as a home, many devices are often gathered together, and these devices may each integrate a gesture recognition camera. When the user controls a device with gestures, the user needs to face the camera of the device to be controlled and make the control action. To control multiple devices, the user has to move between different devices to perform the control gestures, which is cumbersome and time-consuming and degrades the user experience.
In the related technology, there is as yet no effective solution for helping a user control multiple devices quickly and conveniently with gestures.
Summary of the Invention
Embodiments of the present invention provide a gesture recognition method, apparatus, system, and computer storage medium, which enable a gesture recognition device to quickly determine whether to accept control by a gesture performed by a user, so that the target controlled device can respond quickly to the gesture.
An embodiment of the present invention provides a gesture recognition method, the method including: a first gesture recognition device determines whether to accept control of a gesture according to information of the gesture recognized by itself and information of the gesture recognized by a second gesture recognition device.
An embodiment of the present invention further provides a gesture recognition method, the method including: a first gesture recognition device publishes information of a gesture recognized by itself, and receives information of the gesture recognized by a second gesture recognition device.
An embodiment of the present invention further provides a first gesture recognition device, the gesture recognition device including: a first recognition unit configured to recognize information of a gesture; and a first determining unit configured to determine whether to accept control of the gesture according to the information of the gesture recognized by the first recognition unit and information of the gesture provided by a second gesture recognition device.
An embodiment of the present invention further provides a first gesture recognition device, the gesture recognition device including a second recognition unit and a second publishing unit, wherein the second publishing unit is configured to publish, in a network, information of the gesture recognized by the second recognition unit, and to receive information of the gesture provided by a second gesture recognition device.
An embodiment of the present invention further provides a gesture recognition system including at least one of the first gesture recognition devices described above.
An embodiment of the present invention further provides a computer storage medium storing computer-executable instructions for performing any of the gesture recognition methods described above.
In the technical solutions of the embodiments of the present invention, the first gesture recognition device can be disposed in multiple devices to be controlled. In a scenario with multiple devices to be controlled, the first gesture recognition device can determine whether to accept control of a gesture according to the information of the gesture recognized by itself and the information of the gesture recognized by the second gesture recognition device; recognition is thus more automated and user-friendly, conforms to users' usage habits, and makes it convenient for users to control devices with gestures.
FIG. 1a is a first schematic diagram of a gesture recognition method in an embodiment of the present invention;
FIG. 1b is a first schematic diagram of a gesture recognition scenario in an embodiment of the present invention;
FIG. 2 is a second schematic diagram of a gesture recognition method in an embodiment of the present invention;
FIG. 3 is a first schematic diagram of the composition of a first gesture recognition device in an embodiment of the present invention;
FIG. 4 is a second schematic diagram of the composition of a first gesture recognition device in an embodiment of the present invention;
FIG. 5 is a second schematic diagram of a gesture recognition scenario in an embodiment of the present invention;
FIG. 6 is a flowchart of gesture recognition in an embodiment of the present invention;
FIG. 7 is a schematic diagram of message interaction between gesture recognition devices in an embodiment of the present invention.
In the course of implementing the present invention, the inventors found that related technologies already exist for transferring control information between different devices over a network and for mutual discovery and control between devices. For example, Universal Plug and Play (UPnP) technology specifies how devices send and receive network messages to discover and control each other; it uses network addresses and digital codes as device identifiers, which are machine identifiers, and the final control requires the user to select and operate based on the machine identifier of a device. If a gesture recognition method, apparatus, and system could be provided to make it simpler and more convenient for users to control multiple devices, and to assist in coordinated gesture control, users' entertainment life would become easier and more enjoyable, users would not need to learn and master more usage methods, and enterprises would be helped to produce products that consumers like better.
In the embodiments of the present invention, a first gesture recognition device determines whether to accept control of a gesture according to the gesture it recognizes and the position of the gesture, as well as the gesture and the position of the gesture recognized by a second gesture recognition device.
The first gesture recognition device and the second gesture recognition device described in the embodiments of the present invention are gesture recognition devices implementing the same functions. In practical applications, they can be disposed in multiple devices to be controlled; for convenience of description, the gesture recognition device disposed in one of the devices is described as the first gesture recognition device, and the gesture recognition devices disposed in the other devices are described as second gesture recognition devices; that is, there is at least one second gesture recognition device.
The present invention is described in further detail below with reference to the accompanying drawings and embodiments.
An embodiment of the present invention describes a gesture recognition method. As shown in FIG. 1a, a first gesture recognition device determines whether to accept control of a gesture according to the gesture it recognizes and the position of the gesture, as well as the gesture and the position of the gesture recognized by a second gesture recognition device.
The information of the gesture includes at least one of the following: the shape of the gesture and the position of the gesture. The shape of the gesture includes at least one of the following: a gesture number, gesture text description information, and digital graphic information; that is, the shape of the gesture can be described in the form of a gesture number, gesture text description information, or digital graphic information. The position of the gesture includes at least one of the following: spatial coordinate parameters of the gesture, spatial coordinates of the gesture, image data of the gesture with depth information, and positioning parameters of the gesture relative to an absolute origin. The first and second gesture recognition devices can be disposed in devices to be controlled and perform gesture recognition using their own visual recognition capability, or they can perform gesture recognition as wearable devices; accordingly, the types of gestures include computer-vision-recognized gestures and wearable-device gestures. Correspondingly, the position of the gesture recognized by the first gesture recognition device characterizes the relative position between the gesture and the first gesture recognition device, and the position of the gesture recognized by the second gesture recognition device characterizes the relative position between the gesture and the second gesture recognition device.
In one embodiment, when the information of the gesture includes only the shape of the gesture, the first gesture recognition device may determine whether to accept control of the gesture, according to the information of the gesture recognized by itself and the information of the gesture recognized by the second gesture recognition device, in the following manner: when the first gesture recognition device determines, according to the shape of the gesture it recognizes, that it supports the gesture, it determines to accept control of the gesture; otherwise, it determines not to accept control of the gesture.
In one embodiment, when the information of the gesture includes the position of the gesture, the first gesture recognition device may determine whether to accept control of the gesture in the following manner: the first gesture recognition device judges whether the gesture is closest to the first gesture recognition device; if so, it determines to accept control of the gesture; otherwise, it determines not to accept control of the gesture.
In one embodiment, when the information of the gesture includes the shape and the position of the gesture, the first gesture recognition device may determine whether to accept control of the gesture in the following manner: the first gesture recognition device determines, according to the shape of the gesture it recognizes, that it supports the gesture, and judges, according to the position of the gesture it recognizes and the position of the gesture recognized by the second gesture recognition device, whether the gesture meets a preset condition; if so, it determines to accept control of the gesture; otherwise, it determines not to accept control of the gesture.
The preset condition includes at least one of the following conditions:
the distance between the gesture and the first gesture recognition device is smaller than the distance between the gesture and the second gesture recognition device;
the angle by which the gesture deviates from the first gesture recognition device is smaller than the angle by which the gesture deviates from the second gesture recognition device.
For example, when a user performs a gesture, both the first and second gesture recognition devices capture the gesture and each recognize its shape and position; the second gesture recognition device sends the recognized gesture shape and position to the first gesture recognition device. When the first gesture recognition device determines, according to the shape of the gesture it recognizes, that it supports the gesture (of course, the first gesture recognition device may also determine support for the gesture according to the shape it recognizes and/or the shape recognized by the second gesture recognition device), and the angle by which the gesture deviates from the first gesture recognition device is smaller than the angle by which the gesture deviates from the second gesture recognition device, it determines to accept control of the gesture. A schematic diagram of one scenario is shown in FIG. 1b: the first gesture recognition device is disposed in a home storage server, and the second gesture recognition device is disposed in a DVD player. When the user performs the gesture shown in FIG. 1b, the first gesture recognition device determines, according to the shape of the gesture it recognizes, that it supports the gesture currently performed by the user, and determines, according to the position of the gesture it recognizes and the position of the gesture recognized by the second gesture recognition device, that among the first and second gesture recognition devices, the angle φ1 by which the user's gesture deviates from the first gesture recognition device is smaller than the angle φ2 by which the gesture deviates from the second gesture recognition device; in other words, the user's gesture most directly faces the first gesture recognition device, and therefore the first gesture recognition device determines to accept control by the gesture performed by the user. In practical applications, considering users' habits, when a user controls a device with a gesture, the user usually performs the gesture directly in front of the device; therefore, when the first gesture recognition device determines that the angle by which the user's gesture deviates from the first gesture recognition device is smaller than the angle by which it deviates from the second gesture recognition device, it determines to accept control of the gesture. Correspondingly, the first gesture recognition device determines the instruction corresponding to the recognized gesture and controls the home storage server to execute the determined instruction; in this way, the user controls the home storage server by performing the gesture.
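The comparison of the deviation angles φ1 and φ2 in the scenario above presumes that each device can measure the angle between its own facing direction and the direction toward the gesture. One way to compute such an angle from a gesture position expressed in the device's own coordinate frame; the axis convention (camera looking along +z) is an illustrative assumption:

```python
import math

def deviation_angle(gesture_pos, device_axis=(0.0, 0.0, 1.0)):
    """Angle in degrees between the device's facing axis and the direction
    to the gesture, both in the device's own coordinate frame; a gesture
    directly in front of the device yields 0 degrees."""
    dot = sum(g * a for g, a in zip(gesture_pos, device_axis))
    norm_g = math.sqrt(sum(g * g for g in gesture_pos))
    norm_a = math.sqrt(sum(a * a for a in device_axis))
    return math.degrees(math.acos(dot / (norm_g * norm_a)))
```

Each device would compute this angle locally and publish it (or the raw position) so the devices can compare φ1 and φ2.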
In one embodiment, the first gesture recognition device publishes the information of the gesture it recognizes, so that the second gesture recognition device can determine whether to accept control of the gesture according to that information and the information of the gesture recognized by the second gesture recognition device itself.
Taking FIG. 1b as an example: when the user performs a gesture, the first gesture recognition device publishes the information of the recognized gesture in the network, so that the second gesture recognition device can determine, according to the information of the gesture recognized by the first gesture recognition device and the information of the gesture recognized by the second gesture recognition device itself, whether the user's gesture meets the preset condition. In FIG. 1b, since the angle by which the user's gesture deviates from the first gesture recognition device is the smallest, the second gesture recognition device determines not to accept control by the user's gesture.
In one embodiment, when the first gesture recognition device does not receive, within a preset time, a message that the second gesture recognition device has determined to accept the gesture control, the first gesture recognition device may also determine to accept control of the gesture and publish, in the network, a message accepting the gesture control. The consideration here is that when no second gesture recognition device has determined to accept the gesture control, the gesture must have been performed toward the first gesture recognition device, and therefore the first gesture recognition device needs to accept control of the gesture.
Alternatively, when the first gesture recognition device does not receive, within the preset time, a message that the second gesture recognition device has determined to accept the gesture control, the first gesture recognition device may re-publish the information of the recognized gesture, avoiding the problem that the second gesture recognition device did not previously receive the gesture information published by the first gesture recognition device and therefore cannot determine whether to accept the gesture control.
The first gesture recognition device may publish the information of the recognized gesture in the following manners:
the first gesture recognition device publishes the information of the recognized gesture in message form;
or, when the first gesture recognition device receives a query request, it publishes the information of the recognized gesture in message form; for example, the information can be published in the network in the form of broadcast, multicast, or unicast messages.
An embodiment of the present invention further describes a gesture recognition method. As shown in FIG. 2, a first gesture recognition device publishes information of a gesture it recognizes, and receives information of the gesture recognized by a second gesture recognition device.
The information of the gesture includes at least one of the following: the shape of the gesture and the position of the gesture. The shape of the gesture includes at least one of the following: a gesture number, gesture text description information, and digital graphic information. The position of the gesture includes at least one of the following: spatial coordinate parameters of the gesture, spatial coordinates of the gesture, image data of the gesture with depth information, and positioning parameters of the gesture relative to an absolute origin.
The first and second gesture recognition devices can be disposed in devices to be controlled and perform gesture recognition using their own visual recognition capability, or they can perform gesture recognition as wearable devices; accordingly, the types of gestures include computer-vision-recognized gestures and wearable-device gestures.
In one embodiment, the first gesture recognition device further determines whether to accept control of the gesture according to the information of the gesture recognized by itself and the information of the gesture recognized by the second gesture recognition device. Taking the case where the information of the gesture includes the shape and the position of the gesture as an example: the first gesture recognition device determines, according to the shape of the gesture it recognizes, that it supports the gesture, and judges, according to the position of the gesture it recognizes and the position of the gesture recognized by the second gesture recognition device, whether the gesture meets a preset condition; if so, it determines to accept control of the gesture; otherwise, it determines not to accept control of the gesture.
The preset condition includes at least one of the following conditions:
the distance between the gesture and the first gesture recognition device is smaller than the distance between the gesture and the second gesture recognition device;
the angle by which the gesture deviates from the first gesture recognition device is smaller than the angle by which the gesture deviates from the second gesture recognition device.
In one embodiment, when the first gesture recognition device does not receive, within a preset time, a message that the second gesture recognition device has determined to accept the gesture control, it determines to accept control of the gesture and publishes a message accepting the gesture control; or, when the first gesture recognition device does not receive, within the preset time, a message that the second gesture recognition device has determined to accept the gesture control, it re-publishes the information of the gesture it recognized. Here, the manner of publishing information includes publishing the information in the network in unicast, multicast, or broadcast form.
In one embodiment, when the first gesture recognition device determines to accept control of the gesture, it may also publish, in the network, a message accepting the gesture control. In this way, the second gesture recognition device can be spared from performing the operation of determining whether to accept the gesture control.
In one embodiment, the first gesture recognition device may publish the information of the gesture it recognizes in either of the following manners:
the first gesture recognition device publishes the information of the gesture it recognizes in message form;
or, when the first gesture recognition device receives a query request, it publishes the information of the gesture it recognizes in message form; for example, the information can be published in the network in the form of broadcast, multicast, or unicast messages.
An embodiment of the present invention further describes a computer storage medium storing executable instructions for performing the gesture recognition method shown in FIG. 1 or FIG. 2.
An embodiment of the present invention further describes a first gesture recognition device. As shown in FIG. 3, the first gesture recognition device includes:
a first recognition unit 31, configured to recognize information of a gesture;
a first determining unit 32, configured to determine whether to accept control of the gesture according to the information of the gesture recognized by the first recognition unit 31 and information of the gesture provided by a second gesture recognition device.
The information of the gesture includes at least one of the following: the shape of the gesture and the position of the gesture.
The first determining unit 32 is further configured to determine, according to the shape of the gesture recognized by the first recognition unit 31, that the gesture is supported, and to judge, according to the position of the gesture recognized by the first recognition unit 31 and the position of the gesture recognized by the second gesture recognition device, whether the gesture meets a preset condition; if so, to determine to accept control of the gesture; otherwise, to determine not to accept control of the gesture.
The preset condition includes at least one of the following conditions:
the distance between the gesture and the gesture recognition device is smaller than the distance between the gesture and the second gesture recognition device;
the angle by which the gesture deviates from the gesture recognition device is smaller than the angle by which the gesture deviates from the second gesture recognition device.
The gesture recognition device further includes:
a first publishing unit 33, configured to publish, in a network, the information of the gesture recognized by the first recognition unit 31.
The first publishing unit 33 is further configured to publish, in message form, the gesture recognized by the first recognition unit 31 and the position of the gesture;
or, when a query request is received, to publish, in message form, the gesture recognized by the first recognition unit 31 and the position of the gesture.
The first determining unit 32 is further configured to: when the first publishing unit 33 does not receive, within a preset time, a message that the second gesture recognition device has determined to accept the gesture control, determine to accept control of the gesture and trigger the first publishing unit 33 to publish a message accepting the gesture control; or, when the first publishing unit 33 does not receive, within the preset time, a message that the second gesture recognition device has determined to accept the gesture control, trigger the first publishing unit 33 to re-publish the information of the gesture recognized by the first recognition unit 31.
The first publishing unit 33 is further configured to publish a message accepting the gesture control when the first determining unit 32 determines to accept control of the gesture.
The shape of the gesture includes at least one of the following: a gesture number, gesture text description information, and digital graphic information.
The position of the gesture includes at least one of the following: spatial coordinate parameters of the gesture, spatial coordinates of the gesture, image data of the gesture with depth information, and positioning parameters of the gesture relative to an absolute origin.
The types of gestures include: computer-vision-recognized gestures and wearable-device gestures.
When the gesture recognition device is disposed in a device to be controlled, the first recognition unit 31 may be implemented by a camera with visual acquisition capability; when the gesture recognition device is a wearable device, the first recognition unit 31 may be implemented by a gravity sensor or a gyroscope; the first determining unit 32 may be implemented by a Central Processing Unit (CPU), Digital Signal Processor (DSP), or Field Programmable Gate Array (FPGA) in the gesture recognition device; the first publishing unit 33 may be implemented by a function chip in the gesture recognition device supporting a network protocol such as IEEE 802.11b/g/n or IEEE 802.3.
An embodiment of the present invention further describes a first gesture recognition device. As shown in FIG. 4, the first gesture recognition device includes:
a second recognition unit 41 and a second publishing unit 42; wherein,
the second publishing unit 42 is configured to publish, in a network, information of the gesture recognized by the second recognition unit 41, and to receive information of the gesture provided by a second gesture recognition device.
The gesture recognition device further includes:
a second determining unit 43, configured to determine whether to accept control of the gesture according to the information of the gesture recognized by the second recognition unit 41 and the information of the gesture recognized by the second gesture recognition device.
The information of the gesture includes at least one of the following: the shape of the gesture and the position of the gesture.
The second determining unit 43 is further configured to, when the information of the gesture includes the shape and the position of the gesture, determine that the gesture is supported according to the shape of the gesture recognized by the second recognition unit 41, and determine to accept control of the gesture when it determines, according to the position of the gesture recognized by the second recognition unit 41 and the position of the gesture recognized by the second gesture recognition device, that the gesture meets a preset condition.
The preset condition includes at least one of the following conditions:
the distance between the gesture and the gesture recognition device is smaller than the distance between the gesture and the second gesture recognition device;
the angle by which the gesture deviates from the gesture recognition device is smaller than the angle by which the gesture deviates from the second gesture recognition device.
The second determining unit 43 is further configured to: when no message that the second gesture recognition device has determined to accept the gesture control is received within a preset time, determine to accept control of the gesture and publish, in the network, a message accepting the gesture control; or, when no message that the second gesture recognition device has determined to accept the gesture control is received within the preset time, re-publish, in the network, the information of the gesture recognized by the device itself.
The second publishing unit 42 is further configured to publish, in the network, a message accepting the gesture control when the second determining unit 43 determines to accept control of the gesture.
The second publishing unit 42 is further configured to publish, in the network, the information of the gesture recognized by the second recognition unit 41 in the form of broadcast, multicast, or unicast messages; or, when the second publishing unit 42 receives a query request, to publish, in the network, the information of the gesture recognized by the second recognition unit 41 in the form of broadcast, multicast, or unicast messages.
The shape of the gesture includes at least one of the following: a gesture number, gesture text description information, and digital graphic information.
The position of the gesture includes at least one of the following: spatial coordinate parameters of the gesture, spatial coordinates of the gesture, image data of the gesture with depth information, and positioning parameters of the gesture relative to an absolute origin.
The types of gestures include: computer-vision-recognized gestures and wearable-device gestures.
When the gesture recognition device is disposed in a device to be controlled, the second recognition unit 41 may be implemented by a camera with visual acquisition capability; when the gesture recognition device is a wearable device, the second recognition unit 41 may be implemented by a gravity sensor or a gyroscope; the second determining unit 43 may be implemented by a CPU, DSP, or FPGA in the gesture recognition device; the second publishing unit 42 may be implemented by a function chip in the gesture recognition device supporting a network protocol such as IEEE 802.11b/g/n or IEEE 802.3.
An embodiment of the present invention further describes a gesture recognition system, including the first gesture recognition device shown in FIG. 3 and/or the first gesture recognition device shown in FIG. 4. It should be noted that the first gesture recognition devices included in the system may be disposed on a single device, with the gesture recognition methods shown in FIG. 1 and FIG. 2 implemented by that single device, or may be disposed on multiple devices, with the gesture recognition methods shown in FIG. 1 and FIG. 2 implemented by those multiple devices.
The following description is given with reference to a usage scenario. FIG. 5 is a second schematic diagram of a gesture recognition scenario in an embodiment of the present invention. FIG. 5 shows three devices: a television, a DVD player, and a home storage server. The television, the DVD player, and the home storage server are each provided with a gesture recognition device, and the gesture recognition devices use cameras to recognize gestures; for convenience of description only, each gesture recognition device is assumed to have the same gesture recognition capability, as well as the ability to measure the position of a recognized gesture.
Each gesture recognition device is provided with a network interface (corresponding to the network unit); for example, the network interface may support IEEE 802.11b/g/n or IEEE 802.3, so that it can connect to a network supporting the Internet Protocol (IP). Each gesture recognition device supports, through its network unit, message interaction with the gesture recognition devices in the other devices, as well as processing manipulation commands or handing over manipulation commands.
The ability of the gesture recognition devices to discover and connect to one another and to send and receive messages on the network can be implemented using related technologies such as Universal Plug and Play (UPnP), or using multicast Domain Name System (mDNS) and Domain Name System-based Service Discovery (DNS-SD) technologies. This class of technology is used in IP networks to respond to unicast and multicast queries and to provide function calls according to predefined message formats. For example, UPnP technology specifies how media display devices (such as televisions) and servers (such as DVD players and home storage servers) respond to queries and which calling functions they provide.
In one embodiment, the gesture recognition device includes a camera with image and video capture capability (corresponding to the recognition unit in the gesture recognition device) to capture gestures, and recognizes the position of a gesture by means of infrared ranging.
In another embodiment, the gesture recognition device may also be a wearable device, such as a ring-type device worn on the hand or a watch-type device worn on the arm; such a wearable device can recognize the user's finger and arm movements through a gyroscope, a gravity sensor, or the like, and also has networking capability. The wearable device can exchange information with the above devices to be controlled, such as the television, the DVD player, and the home storage server, through its own network unit.
In this embodiment, the gesture recognition device can recognize gestures within its visual range; for example, it can capture images with the camera, perform position measurement in three-dimensional space using infrared, and analyze image information in the captured images. The gesture recognition device also performs distance measurement on gestures such as finger gestures, palm gestures, and arm gestures, and stores the measurements together with the recognized gesture information.
In this embodiment, when the gesture recognition devices in the television, the DVD player, and the home storage server recognize a gesture, they send a message in multicast mode; the message includes:
a unique identifier of the gesture recognition device, such as a network address;
the number of the recognized gesture, representing the shape of the gesture; for example, 1 represents five fingers spread, 2 represents two fingers, 10 represents a fist, 20 represents a waving arm, and so on;
the position information of the gesture, which may take the form of spatial coordinates, image data of the gesture with depth information, or positioning parameters of the gesture relative to an absolute origin.
The message may also include gesture parameters, such as the duration of the gesture.
FIG. 6 is a flowchart of gesture recognition in an embodiment of the present invention; the flowchart in FIG. 6 refers to the devices in FIG. 5. As shown in FIG. 6, gesture recognition includes the following steps:
Step 601: each gesture recognition device recognizes information of the gesture.
Each gesture recognition device recognizes the gesture independently; the recognition operation includes recognition of one or more of the gesture shape, the fingers, and the arm, and also includes measurement of the position of the fingers and the arm.
Step 602: each gesture recognition device publishes the information of the recognized gesture.
In this embodiment, the information of the gesture includes the shape of the gesture and the position of the gesture.
The shape and position of the gesture can be published in the network by multicast or similar means.
Step 603: each gesture recognition device receives the information of gestures published by the other gesture recognition devices.
Taking the processing of the gesture recognition device in the television as an example: the gesture recognition device in the television recognizes the shape and position of the gesture performed by the user, and receives the shape and position of the user's gesture as recognized by the gesture recognition device in the DVD player, as well as the shape and position of the user's gesture as recognized by the home storage server. That is, when the user performs a gesture, the gesture recognition devices in the three devices of FIG. 5 each recognize the shape and position of the user's gesture and publish the recognized shape and position via the network.
After receiving the information of a gesture, a gesture recognition device needs to store the received gesture information. The gesture recognition device may set a timer to retain the received gesture information for a preset time, after which the stored gesture information may be discarded.
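The timed retention described in step 603 can be sketched as a small cache that drops entries older than the preset time; the class and field names are illustrative, not part of the embodiment:

```python
import time

class GestureCache:
    """Store gesture reports received from peers and discard them after a
    preset retention time, matching the timer behaviour of step 603."""
    def __init__(self, retention_s=2.0):
        self.retention_s = retention_s
        self._entries = {}  # device id -> (timestamp, gesture info)

    def store(self, device_id, info, now=None):
        timestamp = now if now is not None else time.monotonic()
        self._entries[device_id] = (timestamp, info)

    def snapshot(self, now=None):
        """Drop expired entries, then return the surviving gesture info."""
        now = now if now is not None else time.monotonic()
        self._entries = {d: (t, i) for d, (t, i) in self._entries.items()
                         if now - t < self.retention_s}
        return {d: i for d, (t, i) in self._entries.items()}
```

The `now` parameter is only there to make the expiry deterministic in examples; a device would normally rely on the monotonic clock.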
Step 604: each gesture recognition device determines whether to accept control of the recognized gesture; if so, steps 605 and 606 are performed; otherwise, step 607a or step 607b is performed.
Taking the processing of the gesture recognition device in the television as an example: the gesture recognition device in the television recognizes the shape and position of the gesture performed by the user, and judges whether to accept control of the user's gesture according to the shape and position of the user's gesture recognized by the gesture recognition device in the DVD player and the shape and position of the user's gesture recognized by the home storage server. With reference to FIG. 5, the gesture recognition device in the television can determine that the angle by which the user's gesture deviates from the television (and from the gesture recognition device in the television) is the smallest; therefore, the gesture recognition device in the television determines to accept control by the gesture performed by the user.
Step 605: the gesture recognition device publishes, in the network, a message accepting the gesture control.
The gesture recognition device determines, according to the gesture performed by the user, the instruction corresponding to the gesture, and controls the corresponding device to execute the instruction.
Step 606: the gesture recognition device publishes, in the network, a message indicating that the response to the gesture control has finished.
Step 607a: when the gesture recognition device does not receive, within a preset time, a message that another gesture recognition device has determined to accept the gesture performed by the user, it determines to accept control of the gesture performed by the user.
Correspondingly, when the gesture recognition device determines to accept control of the gesture performed by the user, it may also publish, in the network, a message accepting control of the user's gesture.
Step 607b: when the gesture recognition device does not receive, within the preset time, a message that another gesture recognition device has determined to accept the gesture performed by the user, it re-publishes the information of the recognized gesture in the network.
The above gesture recognition flow is described below in conjunction with the schematic diagram of message interaction between gesture recognition devices shown in FIG. 7. As shown in FIG. 7, the flow includes the following steps:
Step 701: each gesture recognition device in devices 1 to 3 recognizes a gesture action made by the user.
A gesture recognition device may recognize the gesture performed by the user through a camera; of course, the gesture recognition device itself may also be a wearable device (for example, the user simultaneously wears the wearable devices corresponding to each device to be controlled), thereby recognizing the gesture performed by the user.
Step 702: the gesture recognition devices in devices 1 to 3 each publish, in the network, the recognized gesture and the position of the gesture, so that every gesture recognition device receives the gestures and gesture positions recognized by the other gesture recognition devices.
Step 703: the gesture recognition devices in devices 1 to 3 determine whether to accept control of the gesture performed by the user.
Step 704: the gesture recognition device that has determined to accept the gesture control publishes a message indicating that it accepts the gesture control.
Assuming the gesture recognition device in device 1 determines to accept control of the gesture, the gesture recognition device in device 1 publishes in the network a message that device 1 accepts the gesture control.
Step 705: after controlling the corresponding device to respond to the gesture control, the gesture recognition device that accepted the gesture control publishes, in the network, a message that the gesture control has been responded to.
After the gesture recognition device in device 1 controls device 1 to respond to the gesture control, it publishes in the network a message that device 1 has responded to the control of the gesture.
It can be seen that, through the embodiments of the present invention, the target controlled device of a gesture performed by the user is determined in a multi-device scenario, thereby implementing control of the target device.
In the above embodiments, the television, the DVD player, and the storage server are used as the controlled devices, but the present invention is not limited to the controlled devices mentioned in the embodiments: computers, audio systems, speakers, projectors, set-top boxes, and the like can all serve as controlled devices, and even other industrial devices such as automobiles, machine tools, and ships can be controlled by visual recognition and discovery control devices.
In the above embodiments, the camera of the gesture recognition device may be of various specifications; for example, it may have a fixed or variable focal length, and its rotation range may cover up, down, left, and right, or only left and right; only one camera with the capabilities described in the embodiments needs to be configured. When the gesture recognition device recognizes the position of a gesture, laser infrared or light of other wavebands may be used; of course, three cameras may also be used to compute the range, or more than three cameras may be used with a weighted-adjustment method to determine the position of the recognized gesture.
For the sake of clarity, not all conventional features of the devices are shown and described in the embodiments of the present invention. It should, of course, be understood that in the development of any actual device, implementation-specific decisions must be made to achieve the developer's specific goals, such as complying with application- and business-related constraints, and that these specific goals vary with different implementations and different developers. Moreover, it should be understood that such development work is complex and time-consuming, but is nevertheless routine technical work for those of ordinary skill inspired by the disclosure of the embodiments of the present invention.
According to the subject matter described herein, various components, systems, devices, processing steps, and/or data structures can be manufactured, operated, and/or executed using various types of operating systems, computing platforms, computer programs, and/or general-purpose machines. In addition, those of ordinary skill in the art will appreciate that less general-purpose devices may also be used without departing from the scope and spirit of the inventive concepts disclosed herein. The methods included herein are executed by a computer, device, or machine, and can be stored as machine-readable instructions on defined media, such as computer storage devices, including but not limited to ROM (e.g., read-only memory, FLASH memory, transfer devices, etc.), magnetic storage media (e.g., magnetic tapes, disk drives, etc.), optical storage media (e.g., CD-ROM, DVD-ROM, paper cards, paper tapes, etc.), and other well-known types of program memory. In addition, it should be recognized that the methods can be executed by a human operator using a selection of software tools without requiring human or creative judgment.
The above embodiments, where network-related, are applicable to IP networks supported by communication networks such as IEEE 802.3, IEEE 802.11b/g/n, powerline networks (POWERLINE), cable (CABLE), the Public Switched Telephone Network (PSTN), 3rd Generation Partnership Project (3GPP) networks, and 3GPP2 networks; the operating system of each device may be a UNIX-like, WINDOWS-like, ANDROID-like, or IOS operating system, and the consumer interface may use the JAVA language interface, among others.
In the several embodiments provided by the present invention, it should be understood that the disclosed devices and methods may be implemented in other manners. The device embodiments described above are merely illustrative; for example, the division of the units is only a logical function division, and in actual implementation there may be other division manners, such as: multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the coupling, direct coupling, or communication connections between the components shown or discussed may be indirect coupling or communication connections through interfaces, devices, or units, and may be electrical, mechanical, or of other forms.
The units described above as separate components may or may not be physically separated, and components displayed as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention may all be integrated into one processing unit, or each unit may serve separately as one unit, or two or more units may be integrated into one unit; the above integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.
Those of ordinary skill in the art will understand that all or part of the steps implementing the above method embodiments may be accomplished by hardware related to program instructions; the foregoing program may be stored in a computer-readable storage medium, and when executed, the program performs the steps of the above method embodiments. The foregoing storage medium includes various media that can store program code, such as a removable storage device, a Read-Only Memory (ROM), a magnetic disk, or an optical disk.
Alternatively, if the above integrated unit of the present invention is implemented in the form of a software functional module and sold or used as a standalone product, it may also be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of the embodiments of the present invention, in essence or in the part contributing to the related art, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions to cause a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the methods described in the various embodiments of the present invention. The foregoing storage medium includes various media that can store program code, such as a removable storage device, a ROM, a magnetic disk, or an optical disk.
The above are only specific implementations of the present invention, but the protection scope of the present invention is not limited thereto; any changes or substitutions readily conceivable by those skilled in the art within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (47)
- A gesture recognition method, the method comprising: a first gesture recognition device determines whether to accept control of a gesture according to information of the gesture recognized by itself and information of the gesture recognized by a second gesture recognition device.
- The gesture recognition method according to claim 1, wherein the information of the gesture comprises at least one of the following: the shape of the gesture and the position of the gesture.
- The gesture recognition method according to claim 2, wherein, when the information of the gesture comprises the shape of the gesture and the position of the gesture, the first gesture recognition device determining whether to accept control of the gesture according to the information of the gesture recognized by itself and the information of the gesture recognized by the second gesture recognition device comprises: the first gesture recognition device determines, according to the shape of the gesture recognized by itself, that it supports the gesture, and judges, according to the position of the gesture recognized by itself and the position of the gesture recognized by the second gesture recognition device, whether the gesture meets a preset condition; if so, it determines to accept control of the gesture; otherwise, it determines not to accept control of the gesture.
- The gesture recognition method according to claim 3, wherein the preset condition comprises at least one of the following conditions: the distance between the gesture and the first gesture recognition device is smaller than the distance between the gesture and the second gesture recognition device; the angle by which the gesture deviates from the first gesture recognition device is smaller than the angle by which the gesture deviates from the second gesture recognition device.
- The gesture recognition method according to claim 1, wherein the method further comprises: the first gesture recognition device publishes information of the gesture recognized by itself.
- The gesture recognition method according to claim 5, wherein the first gesture recognition device publishing information of the gesture recognized by itself comprises: the first gesture recognition device publishes, in message form, information of the gesture recognized by itself; or, when the first gesture recognition device receives a query request, it publishes, in message form, information of the gesture recognized by itself.
- The gesture recognition method according to claim 1, wherein the method further comprises: when the first gesture recognition device does not receive, within a preset time, a message that the second gesture recognition device has determined to accept the gesture control, the first gesture recognition device determines to accept control of the gesture and publishes a message accepting the gesture control; or, when the first gesture recognition device does not receive, within the preset time, a message that the second gesture recognition device has determined to accept the gesture control, it re-publishes information of the gesture recognized by itself.
- The gesture recognition method according to claim 1, wherein, when the first gesture recognition device determines to accept control of the gesture, the method further comprises: the first gesture recognition device publishes a message accepting the gesture control.
- The gesture recognition method according to claim 1, wherein the shape of the gesture comprises at least one of the following: a gesture number, gesture text description information, and digital graphic information.
- The gesture recognition method according to claim 1, wherein the position of the gesture comprises at least one of the following: spatial coordinate parameters of the gesture, spatial coordinates of the gesture, image data of the gesture with depth information, and positioning parameters of the gesture relative to an absolute origin.
- The gesture recognition method according to any one of claims 1 to 10, wherein the types of the gesture comprise: computer-vision-recognized gestures and wearable-device gestures.
- A gesture recognition method, the method comprising: publishing, by a first gesture recognition device, information on a gesture recognized by itself, and receiving information on the gesture recognized by a second gesture recognition device.
- The gesture recognition method according to claim 12, wherein the method further comprises: determining, by the first gesture recognition device, whether to accept control of the gesture according to the information on the gesture recognized by itself and the information on the gesture recognized by the second gesture recognition device.
- The gesture recognition method according to claim 13, wherein the information on the gesture comprises at least one of the following: a shape of the gesture and a position of the gesture.
- The gesture recognition method according to claim 14, wherein, when the information on the gesture comprises the shape of the gesture and the position of the gesture, determining, by the first gesture recognition device, whether to accept control of the gesture according to the information on the gesture recognized by itself and the information on the gesture recognized by the second gesture recognition device comprises: determining, by the first gesture recognition device according to the shape of the gesture recognized by itself, that the gesture is supported, and judging, according to the position of the gesture recognized by itself and the position of the gesture recognized by the second gesture recognition device, whether the gesture satisfies a preset condition; if so, determining to accept control of the gesture; otherwise, determining not to accept control of the gesture.
- The gesture recognition method according to claim 15, wherein the preset condition comprises at least one of the following: the distance between the gesture and the first gesture recognition device is smaller than the distance between the gesture and the second gesture recognition device; the angle by which the gesture deviates from the first gesture recognition device is smaller than the angle by which the gesture deviates from the second gesture recognition device.
- The gesture recognition method according to claim 13, wherein the method further comprises: when the first gesture recognition device does not receive, within a preset time, a message indicating that the second gesture recognition device has determined to accept control of the gesture, determining to accept control of the gesture and publishing a message of accepting control of the gesture; or, when the first gesture recognition device does not receive, within a preset time, a message indicating that the second gesture recognition device has determined to accept control of the gesture, re-publishing the information on the gesture recognized by itself.
- The gesture recognition method according to claim 13, wherein, when the first gesture recognition device determines to accept control of the gesture, the method further comprises: publishing, by the first gesture recognition device, a message of accepting control of the gesture.
- The gesture recognition method according to claim 12, wherein publishing, by the first gesture recognition device, the information on the gesture recognized by itself comprises: publishing, by the first gesture recognition device, the information on the gesture recognized by itself in the form of a message; or, publishing the information on the gesture recognized by itself in the form of a message when the first gesture recognition device receives a query request.
- The gesture recognition method according to claim 12, wherein the shape of the gesture comprises at least one of the following: a gesture number, gesture text description information, and digital graphic information.
- The gesture recognition method according to claim 12, wherein the position of the gesture comprises at least one of the following: spatial coordinate parameters of the gesture, spatial coordinates of the gesture, image data of the gesture with depth information, and positioning parameters of the gesture relative to an absolute origin.
- The gesture recognition method according to any one of claims 12 to 22, wherein the type of the gesture comprises: a computer-vision-recognized gesture and a wearable-device gesture.
- A first gesture recognition device, the gesture recognition device comprising: a first recognition unit configured to recognize information on a gesture; and a first determination unit configured to determine whether to accept control of the gesture according to the information on the gesture recognized by the first recognition unit and information on the gesture provided by a second gesture recognition device.
- The first gesture recognition device according to claim 23, wherein the information on the gesture comprises at least one of the following: a shape of the gesture and a position of the gesture.
- The first gesture recognition device according to claim 24, wherein the first determination unit is further configured to determine, according to the shape of the gesture recognized by the first recognition unit, that the gesture is supported, and to judge, according to the position of the gesture recognized by the first recognition unit and the position of the gesture recognized by the second gesture recognition device, whether the gesture satisfies a preset condition; if so, to determine to accept control of the gesture; otherwise, to determine not to accept control of the gesture.
- The first gesture recognition device according to claim 25, wherein the preset condition comprises at least one of the following: the distance between the gesture and the gesture recognition device is smaller than the distance between the gesture and the second gesture recognition device; the angle by which the gesture deviates from the gesture recognition device is smaller than the angle by which the gesture deviates from the second gesture recognition device.
- The first gesture recognition device according to claim 23, wherein the first gesture recognition device further comprises: a first publishing unit configured to publish, in a network, the information on the gesture recognized by the first recognition unit.
- The first gesture recognition device according to claim 27, wherein the first publishing unit is further configured to publish, in the form of a message, the gesture recognized by the first recognition unit and the position of the gesture; or, upon receiving a query request, to publish, in the form of a message, the gesture recognized by the first recognition unit and the position of the gesture.
- The first gesture recognition device according to claim 23, wherein the first determination unit is further configured to: when the first publishing unit does not receive, within a preset time, a message indicating that the second gesture recognition device has determined to accept control of the gesture, determine to accept control of the gesture and trigger the first publishing unit to publish a message of accepting control of the gesture; or, when the first publishing unit does not receive, within a preset time, a message indicating that the second gesture recognition device has determined to accept control of the gesture, trigger the first publishing unit to re-publish the information on the gesture recognized by the first recognition unit.
- The first gesture recognition device according to claim 23, wherein the first publishing unit is further configured to publish, in the network, a message of accepting control of the gesture when the first determination unit determines to accept control of the gesture.
- The first gesture recognition device according to claim 23, wherein the shape of the gesture comprises at least one of the following: a gesture number, gesture text description information, and digital graphic information.
- The first gesture recognition device according to claim 23, wherein the position of the gesture comprises at least one of the following: spatial coordinate parameters of the gesture, spatial coordinates of the gesture, image data of the gesture with depth information, and positioning parameters of the gesture relative to an absolute origin.
- The first gesture recognition device according to any one of claims 23 to 32, wherein the type of the gesture comprises: a computer-vision-recognized gesture and a wearable-device gesture.
- A first gesture recognition device, the gesture recognition device comprising: a second recognition unit and a second publishing unit, wherein the second publishing unit is configured to publish, in a network, information on a gesture recognized by the second recognition unit, and to receive information on the gesture provided by a second gesture recognition device.
- The first gesture recognition device according to claim 34, wherein the gesture recognition device further comprises: a second determination unit configured to determine whether to accept control of the gesture according to the information on the gesture recognized by the second recognition unit and the information on the gesture recognized by the second gesture recognition device.
- The first gesture recognition device according to claim 35, wherein the information on the gesture comprises at least one of the following: a shape of the gesture and a position of the gesture.
- The first gesture recognition device according to claim 36, wherein the second determination unit is further configured to: when the information on the gesture comprises the shape of the gesture and the position of the gesture, determine, according to the shape of the gesture recognized by the second recognition unit, that the gesture is supported, and determine to accept control of the gesture upon determining, according to the position of the gesture recognized by the second recognition unit and the position of the gesture recognized by the second gesture recognition device, that the gesture satisfies a preset condition.
- The first gesture recognition device according to claim 37, wherein the preset condition comprises at least one of the following: the distance between the gesture and the first gesture recognition device is smaller than the distance between the gesture and the second gesture recognition device; the angle by which the gesture deviates from the first gesture recognition device is smaller than the angle by which the gesture deviates from the second gesture recognition device.
- The first gesture recognition device according to claim 35, wherein the second determination unit is further configured to: when no message indicating that the second gesture recognition device has determined to accept control of the gesture is received within a preset time, determine to accept control of the gesture and publish, in the network, a message of accepting control of the gesture; or, when no message indicating that the second gesture recognition device has determined to accept control of the gesture is received within a preset time, re-publish, in the network, the information on the gesture recognized by the device itself.
- The first gesture recognition device according to claim 35, wherein the second publishing unit is further configured to publish, in the network, a message of accepting control of the gesture when the second determination unit determines to accept control of the gesture.
- The first gesture recognition device according to claim 34, wherein the second publishing unit is further configured to publish, in the form of a message, the information on the gesture recognized by the second recognition unit; or, when the second publishing unit receives a query request, to publish, in the form of a message, the information on the gesture recognized by the second recognition unit.
- The first gesture recognition device according to claim 34, wherein the shape of the gesture comprises at least one of the following: a gesture number, gesture text description information, and digital graphic information.
- The first gesture recognition device according to claim 34, wherein the position of the gesture comprises at least one of the following: spatial coordinate parameters of the gesture, spatial coordinates of the gesture, image data of the gesture with depth information, and positioning parameters of the gesture relative to an absolute origin.
- The first gesture recognition device according to any one of claims 34 to 43, wherein the type of the gesture comprises: a computer-vision-recognized gesture and a wearable-device gesture.
- A gesture recognition system, the gesture recognition system comprising the first gesture recognition device according to any one of claims 23 to 33 and/or the first gesture recognition device according to any one of claims 34 to 44.
- A computer storage medium storing computer-executable instructions, the computer-executable instructions being configured to perform the gesture recognition method according to any one of claims 1 to 11.
- A computer storage medium storing computer-executable instructions, the computer-executable instructions being configured to perform the gesture recognition method according to any one of claims 12 to 22.
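The arbitration rule of claims 3 and 4 above (accept control only if the gesture shape is supported and the gesture is nearer to this device than to the peer device) can be illustrated in code. This is a minimal sketch, not part of the patent: the function name `accepts_control` and the representation of positions as 3-D coordinates in a shared frame are assumptions.

```python
import math

def distance(p, q):
    # Euclidean distance between two 3-D points (x, y, z).
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def accepts_control(own_shape, supported_shapes,
                    gesture_pos, own_device_pos, peer_device_pos):
    """Decide whether this device accepts control of the gesture.

    Illustrates the preset condition of claim 4 in its distance form:
    accept only if the gesture shape is supported AND the gesture is
    closer to this device than to the peer device.
    """
    if own_shape not in supported_shapes:
        return False
    return (distance(gesture_pos, own_device_pos)
            < distance(gesture_pos, peer_device_pos))

# The device nearer to the waving hand wins the arbitration:
print(accepts_control("wave", {"wave", "fist"},
                      gesture_pos=(1.0, 0.0, 2.0),
                      own_device_pos=(0.0, 0.0, 0.0),
                      peer_device_pos=(5.0, 0.0, 0.0)))  # True
```

The angle-based variant of claim 4 would be analogous, comparing the gesture's angular deviation from each device's facing direction instead of distances.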
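Claims 5 and 6 (and their counterparts in claim 19) describe two publication modes: unsolicited publication as a message, and publication only in response to a query request. A hedged sketch follows; all names (`GesturePublisher`, the message dictionary layout) are hypothetical and not an API prescribed by the patent.

```python
class GesturePublisher:
    """Two publication modes from claims 5-6 (illustrative only)."""

    def __init__(self, send):
        self.send = send          # callback that delivers a message to the network
        self.last_gesture = None  # most recently recognized gesture info

    def publish(self, gesture_info):
        # Mode 1: publish recognized gesture info as a message immediately.
        self.last_gesture = gesture_info
        self.send({"type": "gesture-info", "info": gesture_info})

    def on_query(self):
        # Mode 2: publish only when a query request is received.
        if self.last_gesture is not None:
            self.send({"type": "gesture-info", "info": self.last_gesture})

out = []
p = GesturePublisher(out.append)
p.publish({"shape": "wave", "position": (1.0, 0.0, 2.0)})
p.on_query()
print(len(out))  # 2
```

Either mode lets the peer device obtain the gesture information it needs for the arbitration in claims 3-4.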
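The timeout behavior of claim 7 (and claim 17) amounts to a small state machine: publish, wait for the peer's acceptance message, and on timeout either claim control or re-publish. The sketch below is illustrative only; the class and method names are invented, and the polling loop is one of many possible implementations.

```python
import time

class GestureArbiter:
    """Illustrative timeout logic for claim 7 (names are hypothetical)."""

    def __init__(self, publish, preset_timeout=1.0, retry=False):
        self.publish = publish            # callback that publishes a message
        self.preset_timeout = preset_timeout
        self.retry = retry                # False: claim control on timeout; True: re-publish
        self.peer_accepted = False

    def on_peer_accept(self):
        # Called when the second device's acceptance message arrives.
        self.peer_accepted = True

    def wait_and_decide(self, gesture_info):
        deadline = time.monotonic() + self.preset_timeout
        while time.monotonic() < deadline:
            if self.peer_accepted:
                return "peer controls"
            time.sleep(0.01)
        if self.retry:
            self.publish(("gesture-info", gesture_info))   # re-publish own info
            return "republished"
        self.publish(("accept-control", gesture_info))     # claim control
        return "accepted control"

msgs = []
arb = GestureArbiter(msgs.append, preset_timeout=0.05)
print(arb.wait_and_decide({"shape": "wave"}))  # accepted control
```

In a real deployment the acceptance message would arrive over the network described in the embodiments, and the preset time would be tuned to the expected message latency.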
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/121,592 US10198083B2 (en) | 2014-02-25 | 2014-10-22 | Hand gesture recognition method, device, system, and computer storage medium |
EP14883571.3A EP3112983B1 (en) | 2014-02-25 | 2014-10-22 | Hand gesture recognition method, device, system, and computer storage medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410065540.8 | 2014-02-25 | ||
CN201410065540.8A CN104866083B (zh) | 2014-02-25 | 2014-02-25 | Gesture recognition method, device and system |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2015127786A1 (zh) | 2015-09-03 |
Family
ID=53911967
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2014/089167 WO2015127786A1 (zh) | 2014-10-22 | Gesture recognition method, device, system and computer storage medium |
Country Status (4)
Country | Link |
---|---|
US (1) | US10198083B2 (zh) |
EP (1) | EP3112983B1 (zh) |
CN (1) | CN104866083B (zh) |
WO (1) | WO2015127786A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021109068A1 (zh) * | 2019-12-05 | 2021-06-10 | 深圳市大疆创新科技有限公司 | Gesture control method and movable platform |
Families Citing this family (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105589550A (zh) * | 2014-10-21 | 2016-05-18 | 中兴通讯股份有限公司 | Information publishing method, information receiving method, device, and information sharing system |
US10484827B2 (en) | 2015-01-30 | 2019-11-19 | Lutron Technology Company Llc | Gesture-based load control via wearable devices |
WO2017004241A1 (en) * | 2015-07-02 | 2017-01-05 | Krush Technologies, Llc | Facial gesture recognition and video analysis tool |
CN106648039B (zh) * | 2015-10-30 | 2019-07-16 | 富泰华工业(深圳)有限公司 | Gesture control system and method |
TWI597656B (zh) * | 2016-05-27 | 2017-09-01 | 鴻海精密工業股份有限公司 | Gesture control system and method |
TWI598809B (zh) * | 2016-05-27 | 2017-09-11 | 鴻海精密工業股份有限公司 | Gesture control system and method |
CN106369737A (zh) * | 2016-08-19 | 2017-02-01 | 珠海格力电器股份有限公司 | Air conditioner control processing method and device |
CN109754821B (zh) | 2017-11-07 | 2023-05-02 | 北京京东尚科信息技术有限公司 | Information processing method and system, computer system and computer-readable medium |
CN110377145B (zh) | 2018-04-13 | 2021-03-30 | 北京京东尚科信息技术有限公司 | Electronic device determination method, system, computer system and readable storage medium |
US11106223B2 (en) * | 2019-05-09 | 2021-08-31 | GEOSAT Aerospace & Technology | Apparatus and methods for landing unmanned aerial vehicle |
CN111443802B (zh) * | 2020-03-25 | 2023-01-17 | 维沃移动通信有限公司 | Measurement method and electronic device |
CN111382723A (zh) * | 2020-03-30 | 2020-07-07 | 北京云住养科技有限公司 | Method, device and system for distress call recognition |
CN113411139B (zh) * | 2021-07-23 | 2022-11-22 | 北京小米移动软件有限公司 | Control method, device and readable storage medium |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101667089A (zh) * | 2008-09-04 | 2010-03-10 | 比亚迪股份有限公司 | Touch gesture recognition method and device |
CN102096469A (zh) * | 2011-01-21 | 2011-06-15 | 中科芯集成电路股份有限公司 | Multifunctional gesture interaction system |
Family Cites Families (28)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100746995B1 (ko) * | 2005-09-22 | 2007-08-08 | 한국과학기술원 | System based on intuitive real-space aiming, and identification and communication methods therefor |
US20100083189A1 (en) * | 2008-09-30 | 2010-04-01 | Robert Michael Arlein | Method and apparatus for spatial context based coordination of information among multiple devices |
CN101777250B (zh) * | 2010-01-25 | 2012-01-25 | 中国科学技术大学 | Universal remote control device and method for household appliances |
KR101690117B1 (ko) * | 2011-01-19 | 2016-12-27 | 휴렛-팩커드 디벨롭먼트 컴퍼니, 엘.피. | Method and system for multimodal and gestural control |
CN103105926A (zh) * | 2011-10-17 | 2013-05-15 | 微软公司 | Multi-sensor gesture recognition |
CN103294374A (zh) | 2012-02-23 | 2013-09-11 | 中兴通讯股份有限公司 | Touch screen unlocking method and device |
US9389690B2 (en) * | 2012-03-01 | 2016-07-12 | Qualcomm Incorporated | Gesture detection based on information from multiple types of sensors |
CN102662462B (zh) | 2012-03-12 | 2016-03-30 | 中兴通讯股份有限公司 | Electronic device, gesture recognition method and gesture application method |
CN102625173B (zh) | 2012-03-21 | 2014-12-10 | 中兴通讯股份有限公司 | Method, system and related device for switching video programs |
CN102722321A (zh) | 2012-05-22 | 2012-10-10 | 中兴通讯股份有限公司 | Method and device for switching between dual cameras |
CN103455136A (zh) | 2012-06-04 | 2013-12-18 | 中兴通讯股份有限公司 | Gesture-control-based input method, device and system |
CN103488406B (zh) | 2012-06-11 | 2016-09-07 | 中兴通讯股份有限公司 | Method and device for adjusting the on-screen keyboard of a mobile terminal, and mobile terminal |
CN102799355A (zh) | 2012-06-18 | 2012-11-28 | 中兴通讯股份有限公司 | Information processing method and device |
CN103543926A (zh) | 2012-07-17 | 2014-01-29 | 中兴通讯股份有限公司 | Method for entering a function interface when a mobile terminal is in lock-screen state, and mobile terminal |
CN103577793B (zh) | 2012-07-27 | 2017-04-05 | 中兴通讯股份有限公司 | Gesture recognition method and device |
CN103576966A (zh) | 2012-08-09 | 2014-02-12 | 中兴通讯股份有限公司 | Electronic device, and method and device for controlling the electronic device |
CN103677591A (zh) | 2012-08-30 | 2014-03-26 | 中兴通讯股份有限公司 | Method for customizing gestures on a terminal, and terminal |
CN102866777A (zh) | 2012-09-12 | 2013-01-09 | 中兴通讯股份有限公司 | Method, playback device and system for transferring digital media content playback |
CN102929508B (zh) | 2012-10-11 | 2015-04-01 | 中兴通讯股份有限公司 | Electronic map touch control method and device |
US20140157209A1 (en) * | 2012-12-03 | 2014-06-05 | Google Inc. | System and method for detecting gestures |
CN102984592B (zh) * | 2012-12-05 | 2018-10-19 | 中兴通讯股份有限公司 | Method, device and system for transferring digital media content playback |
WO2014106862A2 (en) * | 2013-01-03 | 2014-07-10 | Suman Saurav | A method and system enabling control of different digital devices using gesture or motion control |
CN103472796B (zh) * | 2013-09-11 | 2014-10-22 | 厦门狄耐克电子科技有限公司 | Smart home system based on gesture recognition |
CN105849710B (zh) * | 2013-12-23 | 2021-03-09 | 英特尔公司 | Method for using a magnetometer together with gestures to send content to a wireless display |
US10222868B2 (en) * | 2014-06-02 | 2019-03-05 | Samsung Electronics Co., Ltd. | Wearable device and control method using gestures |
US9721566B2 (en) * | 2015-03-08 | 2017-08-01 | Apple Inc. | Competing devices responding to voice triggers |
TWI598809B (zh) * | 2016-05-27 | 2017-09-11 | 鴻海精密工業股份有限公司 | Gesture control system and method |
TWI597656B (zh) * | 2016-05-27 | 2017-09-01 | 鴻海精密工業股份有限公司 | Gesture control system and method |
2014
- 2014-02-25: CN application CN201410065540.8A filed (granted as CN104866083B, status: Active)
- 2014-10-22: EP application EP14883571.3A filed (granted as EP3112983B1, status: Active)
- 2014-10-22: US application US15/121,592 filed (granted as US10198083B2, status: Active)
- 2014-10-22: WO application PCT/CN2014/089167 filed (published as WO2015127786A1, status: Application Filing)
Also Published As
Publication number | Publication date |
---|---|
US20170277267A1 (en) | 2017-09-28 |
CN104866083B (zh) | 2020-03-17 |
US10198083B2 (en) | 2019-02-05 |
EP3112983B1 (en) | 2019-11-20 |
EP3112983A1 (en) | 2017-01-04 |
CN104866083A (zh) | 2015-08-26 |
EP3112983A4 (en) | 2017-03-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2015127786A1 (zh) | Gesture recognition method, device, system and computer storage medium | |
JP6374052B2 (ja) | Method, device and system for transferring digital media content playback | |
WO2015127787A1 (zh) | Gesture recognition method, device, system and computer storage medium | |
JP6379103B2 (ja) | Multi-device pairing and sharing via gestures | |
US10013067B2 (en) | Gesture control method, apparatus and system | |
JP2016015150A5 (zh) | | |
US20150185856A1 (en) | Method for Transferring Playing of Digital Media Contents and Playing Device and System | |
TW201624304A (zh) | Docking system | |
WO2015062247A1 (zh) | Display device and control method therefor, gesture recognition method, and head-mounted display device | |
JP7126008B2 (ja) | Multi-point SLAM capture | |
CN106464976B (zh) | Display device, user terminal device, server and control methods thereof | |
WO2016062191A1 (zh) | Information publishing method, information receiving method, device, and information sharing system | |
WO2016095641A1 (zh) | Data interaction method and system, and mobile terminal | |
US20190096130A1 (en) | Virtual mobile terminal implementing system in mixed reality and control method thereof | |
TW201642177A (zh) | Device control method and device control system | |
JP6556703B2 (ja) | Method, device and system for realizing visual identification | |
CN106358064A (zh) | Method and device for controlling a television | |
WO2017113528A1 (zh) | Method, apparatus, device and system for matching smart home devices | |
CN109213307A (zh) | Gesture recognition method, device and system | |
WO2016070827A1 (zh) | Method and device for publishing and transferring identification information, and information identification system | |
TWI470968B (zh) | Audio-visual surveillance system and network connection setting method thereof | |
CN109213309A (zh) | Gesture recognition method, device and system | |
TW202211001A (zh) | Head-mounted display and control method thereof |
Legal Events
Code | Title | Description
---|---|---
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 14883571; Country of ref document: EP; Kind code of ref document: A1
NENP | Non-entry into the national phase | Ref country code: DE
WWE | Wipo information: entry into national phase | Ref document number: 15121592; Country of ref document: US
REEP | Request for entry into the european phase | Ref document number: 2014883571; Country of ref document: EP
WWE | Wipo information: entry into national phase | Ref document number: 2014883571; Country of ref document: EP