WO2016167331A1 - Gesture recognition device, gesture recognition method, and information processing device - Google Patents
Gesture recognition device, gesture recognition method, and information processing device
- Publication number
- WO2016167331A1 (application PCT/JP2016/062052)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- gesture
- body part
- motion
- information
- unit
- Prior art date
Classifications
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/0304—Detection arrangements using opto-electronic means
- G06T7/20—Analysis of motion
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/20—Movements or behaviour, e.g. gesture recognition
Definitions
- the present invention relates to a gesture recognition apparatus and gesture recognition method for quickly recognizing a user's gesture, and an information processing apparatus including the gesture recognition apparatus.
- a gesture UI (User Interface) was first used mainly in game and entertainment devices, but recently it has been used in various devices such as broadcast receivers, PCs (Personal Computers), car navigation systems, and information communication terminals.
- the gesture UI uses a user's gesture to operate the device.
- the gestures recognized by the gesture UI include, for example, the movement and shape of a part of the user's body (hand movement and shape, finger movement and shape, etc.) and the movement and shape of the entire user's body.
- the gesture UI acquires imaging data (image data) of the user through an imaging device such as a camera, recognizes the user's gesture from a plurality of frame images (imaging data), and transmits an input signal (control signal) corresponding to the recognized gesture to the device to be operated (control target).
- the operation of the device is controlled in accordance with the user's gesture, so the user experiences excellent operability.
- Patent Document 1 describes starting the recognition process for a gesture executed by the user (for example, the action of waving a hand in a certain direction, called a "swipe") after detecting that the user's hand has started moving at a certain speed or more, that the movement has continued, and that the movement has stopped or switched to movement in the reverse direction.
- that is, Patent Document 1 starts the gesture recognition process after detecting the start of the gesture, the continuation of the gesture, and the completion of the gesture; after this recognition process is completed, the control target device executes a process based on the result of the recognition process (for example, switching of the display screen).
- since the gesture recognition process is started only after the user's gesture is completed, there is a problem that the time from the start of the gesture to the start of processing in the control target device is long; in other words, the response of the control target device to an input operation by the user's gesture is slow.
- the present invention has been made to solve the above problems of the prior art, and an object thereof is to provide a gesture recognition apparatus and a gesture recognition method capable of quickly recognizing a user's gesture, and an information processing apparatus including the gesture recognition apparatus.
- the gesture recognition apparatus according to the present invention includes: a motion information generation unit that generates body part motion information from the motion of a user's body part, obtained by detecting and tracking the body part in each of a plurality of frames of imaging data acquired by photographing the user; a prediction processing unit that stores in advance gesture pre-motion model information indicating a reference motion of the user's body part for a gesture pre-motion performed immediately before a gesture, performs a first comparison between the body part motion information generated by the motion information generation unit and the gesture pre-motion model information, and generates, based on the result of the first comparison, a prediction result for the gesture pre-motion indicated by the motion of the body part detected by the motion information generation unit; and a recognition processing unit that stores in advance gesture model information indicating a reference motion of the user's body part during a gesture, performs a second comparison between the body part motion information generated by the motion information generation unit and the gesture model information, and generates, based on the prediction result and the result of the second comparison, a recognition result of the gesture indicated by the motion of the body part detected by the motion information generation unit.
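As a rough illustration of this two-stage design, the following Python sketch matches observed body part motion against pre-motion models (the first comparison) and then against gesture models (the second comparison), trying the predicted gesture first. The function names, the dictionary-of-trajectories model format, the mean point-distance score, and the acceptance threshold are all hypothetical simplifications, not the matching method defined by the invention.

```python
import numpy as np

def predict_gesture(motion_info, pre_motion_models):
    """First comparison: match the observed trajectory (N x 2 array of
    positions) against gesture pre-motion models of the same shape and
    return the label of the best-matching gesture."""
    best_label, best_score = None, float("inf")
    for label, model in pre_motion_models.items():
        # mean distance between observed and model trajectory points
        score = np.mean(np.linalg.norm(motion_info - model, axis=1))
        if score < best_score:
            best_label, best_score = label, score
    return best_label

def recognize_gesture(motion_info, prediction, gesture_models, threshold=1.0):
    """Second comparison: check candidates against the gesture models,
    trying the predicted gesture first so recognition can finish early."""
    ordered = ([prediction] if prediction in gesture_models else []) + \
              [g for g in gesture_models if g != prediction]
    for label in ordered:
        score = np.mean(np.linalg.norm(motion_info - gesture_models[label], axis=1))
        if score < threshold:  # assumed acceptance threshold
            return label
    return None  # no gesture recognized
```

In this sketch the prediction result only reorders the candidates; the specification additionally uses it to start the recognition process before the gesture is completed.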
- the gesture recognition method according to the present invention is performed by a gesture recognition apparatus that stores in advance gesture pre-motion model information indicating a reference motion of the user's body part for a gesture pre-motion performed immediately before a gesture, and gesture model information indicating a reference motion of the user's body part during a gesture. The method includes: a motion information generating step of generating body part motion information from the motion of the body part, obtained by detecting and tracking the user's body part in each of a plurality of frames of imaging data acquired by photographing the user; a prediction processing step of performing a first comparison between the body part motion information generated in the motion information generating step and the gesture pre-motion model information, and generating a prediction result based on the result of the first comparison; and a recognition processing step of performing a second comparison between the body part motion information and the gesture model information, and generating, based on the prediction result and the result of the second comparison, a recognition result of the gesture indicated by the motion of the body part detected in the motion information generating step.
- the information processing apparatus according to the present invention includes the gesture recognition apparatus, an imaging apparatus that transmits the imaging data to the gesture recognition apparatus, and a control target device that is controlled according to the recognition result of the gesture.
- according to the present invention, the gesture executed following a gesture pre-motion is predicted from the gesture pre-motion executed immediately before the start of the gesture, and the gesture recognition process is executed using this prediction result. The gesture recognition process can therefore be started before the gesture starts and completed before the gesture is completed. As a result, the time from the start of the gesture to the output of the control signal corresponding to the gesture can be shortened, and consequently the time from the start of the gesture to the start of the operation of the control target device can be shortened.
- FIG. 4 is a flowchart showing the body part detection phase in the gesture recognition apparatus according to Embodiment 1.
- FIG. 5 is a flowchart showing the body part tracking phase in the gesture recognition apparatus according to Embodiment 1.
- FIG. 6 is a flowchart showing the gesture prediction phase in the gesture recognition apparatus according to Embodiment 1.
- FIG. 7 is a flowchart showing the gesture identification phase in the gesture recognition apparatus according to Embodiment 1.
- FIGS. 8(a) and 8(b) are diagrams showing an example of the body part detection process (body part detection phase) in the gesture recognition apparatus according to Embodiment 1.
- FIGS. 9(a) to 9(f) are diagrams showing an example of the body part tracking process (body part tracking phase) in the gesture recognition apparatus according to Embodiment 1.
- FIG. 10 is a diagram showing an example of the body part motion information handled by the gesture recognition apparatus according to Embodiment 1.
- FIGS. 11(a) to 11(f) are diagrams showing an example of the gesture pre-motion detected by the gesture recognition apparatus according to Embodiment 1.
- FIG. 12 is a diagram showing an example of the gesture pre-motion model information stored in the gesture recognition apparatus according to Embodiment 1.
- FIGS. 13(a) to 13(f) are diagrams showing an example of a gesture pre-motion detected by the gesture recognition apparatus according to Embodiment 1, and FIGS. 13(g) to 13(k) are diagrams showing an example of the gesture it detects.
- FIG. 14 is a diagram showing, on a time axis, the start time of the gesture pre-motion, the end time of the gesture pre-motion (the start time of the gesture), the completion time of the gesture, and the completion time of the gesture recognition process in the gesture recognition apparatus according to Embodiment 1.
- FIGS. 15(a) to 15(f) are diagrams showing another example of a gesture pre-motion immediately before the gesture detected by the gesture recognition apparatus according to Embodiment 1 (an example of creating a state in which the palm of the left hand is opened).
- FIG. 16 is a block diagram schematically showing the configuration of the gesture recognition apparatus according to Embodiment 2 of the present invention.
- FIG. 17 is a block diagram schematically showing the configuration of the gesture recognition apparatus according to Embodiment 3 of the present invention.
- FIG. 18 is a hardware block diagram showing a modification of Embodiments 1 to 3 of the present invention.
- FIG. 1 is a diagram schematically showing an example of the configuration of an information processing apparatus to which the present invention can be applied.
- the information processing apparatus shown in FIG. 1 includes a gesture recognition device 1, a camera 4 as an imaging device connected to the gesture recognition device 1, and a display 5 as an image display unit connected to the gesture recognition device 1.
- the gesture recognition apparatus 1 and the camera 4 constitute a gesture UI.
- the gesture recognition apparatus 1 recognizes a gesture pre-operation (gesture preliminary operation) PG and a gesture executed by the user U using a plurality of frame images (imaging data) generated by the camera 4 that captures the user U, and the display 5 displays an image according to the recognition result.
- the information processing apparatus in FIG. 1 can employ the gesture recognition apparatus in the first to third embodiments and the modifications described below as the gesture recognition apparatus 1.
- FIG. 2 is a diagram schematically showing another example of the configuration of the information processing apparatus to which the present invention is applicable.
- the information processing apparatus shown in FIG. 2 includes a gesture recognition device 7 having a display as an image display unit, and a camera 4 as an imaging device connected to the gesture recognition device 7.
- the gesture UI is configured by the gesture recognition device 7 and the camera 4.
- the gesture recognition device 7 recognizes a gesture pre-operation PG and a gesture executed by the user U using a plurality of frame images (imaging data) generated by the camera 4 that captures the user U, and its display displays an image according to the recognition result.
- the information processing apparatus in FIG. 2 is the same as the information processing apparatus in FIG. 1 except that the gesture recognition device 7 has a display.
- FIG. 3 is a block diagram schematically showing the configuration of the gesture recognition apparatus 1 according to Embodiment 1 of the present invention.
- the gesture recognition device 1 is a device that can perform the gesture recognition method according to the first embodiment.
- the camera 4 and the display control unit 51 shown in FIG. 1 are connected to the gesture recognition device 1, and the display information storage unit 52 and the display 5 as a control target device are connected to the gesture recognition device 1 via the display control unit 51.
- FIG. 3 shows an information processing apparatus including the gesture recognition apparatus 1.
- the gesture recognition device 1 is a device that identifies (specifies) a gesture executed by a user and outputs a signal (control signal) corresponding to the gesture.
- the gesture recognition device 1 includes a motion information generation unit 10, a prediction processing unit 20, and a recognition processing unit 30.
- the gesture recognition device 1 includes a device control unit 41 that transmits a signal based on a gesture executed by a user to the display control unit 51 of the display 5 as a control target device.
- the motion information generation unit 10 generates body part motion information from the motion of the user's body part, obtained by detecting and tracking the body part in each of a plurality of frames of frame image data (also referred to as "frame images" or "imaging data") acquired by photographing the user with the camera 4.
- the prediction processing unit 20 stores in advance gesture pre-motion model information PGM indicating the motion of the user's body part in the gesture pre-motion PG performed immediately before the gesture.
- the prediction processing unit 20 performs a first comparison between the body part motion information generated by the motion information generation unit 10 and the gesture pre-motion model information PGM, and identifies (specifies), based on the result of the first comparison, the gesture pre-motion PG indicated by the motion of the body part detected by the motion information generation unit 10.
- by identifying the gesture pre-operation PG performed immediately before a gesture, the prediction processing unit 20 can predict, before the start point of the gesture (time t2 in FIG. 14 described later), the gesture that is expected to be performed.
- the device control unit 41 can output a control signal corresponding to the gesture predicted to be performed thereafter.
- the recognition processing unit 30 stores in advance gesture model information GM indicating the movement of the user's body part during the gesture.
- the recognition processing unit 30 performs a second comparison between the body part motion information generated by the motion information generation unit 10 and the gesture model information GM, and identifies, based on the result of the second comparison, the gesture indicated by the motion of the body part detected by the motion information generation unit 10.
- the device control unit 41 outputs a control signal corresponding to the specified gesture.
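The role of the device control unit 41 can be pictured as a lookup from an identified (or predicted) gesture to a control command for the control target device; the gesture labels and command names below are hypothetical examples, not values from the specification.

```python
# hypothetical mapping from recognized gestures to device commands
GESTURE_COMMANDS = {
    "right_swipe": "NEXT_PAGE",
    "left_swipe": "PREV_PAGE",
}

def control_signal_for(gesture_id):
    """Return the control command for a recognized gesture,
    or None when the gesture has no associated device operation."""
    return GESTURE_COMMANDS.get(gesture_id)
```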
- the recognition processing unit 30 can thus start processing according to the predicted gesture at an early time (time t2 in FIG. 14 described later), and can complete the gesture recognition process at an earlier time (time t3 in FIG. 14 described later).
- the motion information generation unit 10 includes, for example, an imaging control unit 11, an imaging data storage unit 12, a body part detection unit 13, a body part tracking unit 14, a body part motion information generation unit 15, and a body part motion information storage unit 16.
- the prediction processing unit 20 includes, for example, a gesture preliminary motion model storage unit (first storage unit) 21, a gesture preliminary motion prediction unit 22, and a gesture preliminary motion prediction result storage unit 23.
- the recognition processing unit 30 includes, for example, a gesture recognition unit 31 and a gesture model storage unit (second storage unit) 32.
- a camera 4, a display 5, a display control unit 51, and a display information storage unit 52 are connected to the gesture recognition device 1.
- the display 5, the display control unit 51, and the display information storage unit 52 constitute a display device. Although one camera 4 is shown in the figure, a plurality of cameras may be connected.
- in Embodiment 1, the left hand of one user U included in the frame image FI, which is imaging data acquired by the camera 4, is detected as the body part, and the gesture executed by the user is identified (recognized) from the motion of the left hand.
- the detection of the left hand means detection of a specific position on the left hand, for example, detection of the center-of-gravity position of the palm region or detection of the fingertip position.
- the gesture recognized by the gesture recognition device 1 is a gesture called “right swipe” performed with the left hand.
- “Right swipe” is an operation in which the user swings his or her hand (or arm) in the right direction.
- the “right direction” in this case is the direction of “right” when the user U faces the camera 4.
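One of the detection options mentioned above, the center-of-gravity position of the palm region, can be sketched as a centroid over a binary palm mask; how the mask itself is obtained (for example by skin color segmentation) is left open here, and the function name is an assumption.

```python
import numpy as np

def palm_centroid(mask):
    """Return the centre-of-gravity (x, y) of a binary palm mask,
    or None if no palm pixels were found (detection failure)."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return float(xs.mean()), float(ys.mean())
```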
- when the camera 4 receives a control signal indicating the start of imaging from the imaging control unit 11, it starts moving image shooting (or capturing of a plurality of continuous still images), and when it receives a control signal indicating the end of imaging from the imaging control unit 11, it ends the moving image shooting (or the capturing of the plurality of continuous still images).
- the imaging control unit 11 transmits a control signal indicating the start or end of imaging to the camera 4 and receives a frame image FI that is imaging data obtained by imaging of the camera 4.
- the imaging data storage unit 12 receives the frame image FI as imaging data from the imaging control unit 11, and stores the frame image FI together with the frame number FN.
- the frame image FI and the frame number FN stored in the imaging data storage unit 12 are read by the body part detection unit 13 and the body part tracking unit 14.
- the body part detection unit 13 acquires a frame image FI and its frame number FN as imaging data from the imaging data storage unit 12.
- the body part detection unit 13 transmits the left hand coordinate LHP of the user included in the acquired frame image FI and the frame number FN of the acquired frame image FI to the body part motion information storage unit 16.
- the body part tracking unit 14 reads out and acquires the frame image FI and the frame number FN from the imaging data storage unit 12. It also acquires the left hand coordinate LHP(-1) in the frame image FI(-1) one frame before from the body part motion information storage unit 16. Further, the body part tracking unit 14 transmits the left hand coordinate LHP(-1) of the frame image FI(-1) one frame before and the left hand coordinate LHP(0) of the current frame image FI(0) to the body part motion information generation unit 15. When tracking fails, the body part tracking unit 14 transmits a control signal for deleting the stored body part motion information 150 to the body part motion information storage unit 16.
- the body part motion information generation unit 15 receives the left hand coordinate LHP(-1) in the frame image FI(-1) one frame before and the left hand coordinate LHP(0) in the current frame image FI(0) from the body part tracking unit 14, generates the body part motion information 150 from them, and transmits the body part motion information 150 to the body part motion information storage unit 16.
- the body part movement information storage unit 16 receives the left hand coordinate LHP and the frame number FN from the body part detection unit 13 and stores them.
- the body part motion information storage unit 16 receives the body part motion information 150 from the body part motion information generation unit 15 and stores the body part motion information 150.
- the body part tracking unit 14 reads the coordinates LHP (-1) of the left hand in the frame image FI (-1) one frame before stored in the body part motion information storage unit 16.
- upon receiving from the body part tracking unit 14 a control signal for deleting the stored body part motion information 150, the body part motion information storage unit 16 deletes the body part motion information 150 to be deleted.
- the body part motion information 150 stored in the body part motion information storage unit 16 (an example of which is shown in FIG. 10, described later) is read by the gesture pre-motion prediction unit 22.
- the body part motion information 150 stored in the body part motion information storage unit 16 is read by the gesture recognition unit 31.
- the gesture pre-motion model storage unit 21 stores gesture pre-motion model information PGM.
- the gesture pre-motion model information PGM stored in the gesture pre-motion model storage unit 21 is read by the gesture pre-motion prediction unit 22.
- the gesture pre-motion prediction unit 22 acquires the body part motion information 150 from the body part motion information storage unit 16 and the gesture pre-motion model information PGM from the gesture pre-motion model storage unit 21, and transmits the gesture pre-motion prediction result PGR to the gesture pre-motion prediction result storage unit 23.
- the gesture pre-motion prediction result storage unit 23 receives the gesture pre-motion prediction result PGR from the gesture pre-motion prediction unit 22 and stores it, and transmits the gesture pre-motion prediction result PGR to the gesture recognition unit 31.
- the gesture recognition unit 31 acquires the body part motion information 150 from the body part motion information storage unit 16, the gesture pre-motion prediction result PGR from the gesture pre-motion prediction result storage unit 23, and the gesture model information GM from the gesture model storage unit 32, and transmits the gesture identification result GR to the device control unit 41.
- the gesture model storage unit 32 stores gesture model information GM.
- the gesture model information GM in the gesture model storage unit 32 is read by the gesture recognition unit 31.
- the device control unit 41 receives the gesture identification result GR from the gesture recognition unit 31. In addition, the device control unit 41 transmits a control signal instructing the display control unit 51 to execute a device operation corresponding to the gesture identification result GR.
- the display control unit 51 receives a control signal instructing execution of device operation from the device control unit 41. In addition, the display control unit 51 acquires display data to be displayed on the display 5 from the display information storage unit 52 and causes the display 5 to display an image corresponding to the display data.
- the display 5 receives display data from the display control unit 51 and displays an image based on the display data.
- FIG. 4 is a flowchart showing the body part detection phase S1 of the gesture recognition process in the gesture recognition apparatus 1 according to the first embodiment.
- FIG. 5 is a flowchart showing the body part tracking phase S2 of the gesture recognition process in the gesture recognition apparatus 1 according to the first embodiment.
- FIG. 6 is a flowchart showing the gesture prediction phase S3 of the gesture recognition process in the gesture recognition apparatus 1 according to the first embodiment.
- FIG. 7 is a flowchart showing the gesture identification phase S4 of the gesture recognition process in the gesture recognition apparatus 1 according to the first embodiment.
- when a new frame image FI that has not yet been subjected to the body part detection process is stored in the imaging data storage unit 12, the body part detection unit 13 reads the new frame image FI and its frame number FN from the imaging data storage unit 12 (step S12 in FIG. 4). When no such frame image is stored, the body part detection unit 13 does not perform the process of the body part detection phase S1 and waits until a new frame image FI that has not been subjected to the body part detection process is stored in the imaging data storage unit 12.
- when storing the frame image FI as imaging data, the imaging data storage unit 12 stores not only the pixel values of the pixels constituting each frame image but also the frame number FN assigned to each frame in order to identify the frame.
- to determine whether the latest frame image FI is stored in the imaging data storage unit 12, the body part detection unit 13 compares the frame number FN of the frame image FI that underwent the previous body part detection process with the frame number FN stored in the imaging data storage unit 12, and checks whether the stored frame number FN is greater by 1 or more than the frame number FN of the frame image FI subjected to the previous body part detection process. If so, the body part detection unit 13 reads the new frame image FI and its frame number FN from the imaging data storage unit 12 (step S12 in FIG. 4).
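The frame number check described above amounts to a simple comparison; a minimal sketch (the function name is assumed):

```python
def has_new_frame(last_processed_fn, stored_fn):
    """A new, unprocessed frame exists when the stored frame number is
    at least one greater than the last frame number that was processed."""
    return stored_fn >= last_processed_fn + 1
```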
- FIGS. 8A and 8B are diagrams illustrating an example of the body part detection process (body part detection phase S1) in the gesture recognition apparatus 1 according to the first embodiment.
- FIG. 8A shows imaging data (input image) input to the gesture recognition device 1
- FIG. 8B shows a detection result of the left hand LH.
- the body part detection unit 13 detects the left hand LH, as shown in FIG. 8B, in the plurality of frame images FI read from the imaging data storage unit 12 (step S13 in FIG. 4).
- methods of detecting a specific part of the body, such as a hand, as an object in imaging data are known, and the body part detection unit 13 can freely select a body part detection method from among them.
- the body part detection unit 13 transmits the detected left hand coordinate LHP and the frame number FN to the body part motion information storage unit 16.
- the body part movement information storage unit 16 receives the left hand coordinate LHP and the frame number FN from the body part detection unit 13, and stores them (step S15 in FIG. 4).
- the body part detection unit 13 then waits until a new frame image FI is again stored in the imaging data storage unit 12.
- the body part tracking phase S2 shown in FIG. 5 is executed. That is, not the body part detection unit 13 but the body part tracking unit 14 reads the frame image FI and the frame number FN and executes the tracking process.
- when a new frame image FI that has not been subjected to the body part tracking process is stored in the imaging data storage unit 12 (YES in step S21 in FIG. 5), the body part tracking unit 14 reads and acquires the new frame image FI and its frame number FN from the imaging data storage unit 12 (step S22 in FIG. 5). When no new frame image FI is stored in the imaging data storage unit 12 (NO in step S21 in FIG. 5), the body part tracking unit 14 does not perform the subsequent body part tracking process and waits until a new frame image FI is stored.
- FIGS. 9A to 9F are diagrams illustrating an example of the body part tracking process (body part tracking phase S2) in the gesture recognition apparatus 1 according to the first embodiment.
- FIGS. 9A to 9C are frame images (input images) of the respective frames based on the imaging data, and FIGS. 9D to 9F show the position of the left hand LH in FIGS. 9A to 9C, respectively, with a star-shaped mark.
- as shown in FIGS. 9D to 9F, the body part tracking unit 14 tracks the left hand LH in the plurality of frame images FI (FIGS. 9A to 9C) read from the imaging data storage unit 12 (step S23 in FIG. 5).
- for example, the body part tracking unit 14 acquires the left hand coordinate LHP(-1) in the frame image FI(-1) one frame before from the body part motion information storage unit 16, and detects the left hand LH in the latest frame image FI(0) among the coordinates surrounding it; that is, it is possible to employ a method that tracks the left hand LH across successive frames as the same body part.
- as the method of tracking the left hand LH as the same body part, a known method can be used, such as tracking using a feature amount based on the luminance distribution.
- the body part tracking unit 14 can freely select a body part tracking method from known methods.
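As one concrete instance of such a freely chosen method, the sketch below tracks the hand by matching the luminance patch around its previous position within a small search window, using the sum of absolute differences (SAD). The function name and the patch and window sizes are arbitrary assumptions, not parameters from the specification.

```python
import numpy as np

def track_in_window(prev_frame, cur_frame, prev_pos, patch=8, search=16):
    """Track a body part between two grayscale frames by SAD template
    matching of the luminance patch around its previous position (y, x)."""
    py, px = prev_pos
    template = prev_frame[py - patch:py + patch, px - patch:px + patch]
    best_pos, best_sad = None, float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = py + dy, px + dx
            if y - patch < 0 or x - patch < 0:
                continue  # window would fall outside the frame (top/left)
            cand = cur_frame[y - patch:y + patch, x - patch:x + patch]
            if cand.shape != template.shape:
                continue  # window clipped at the right/bottom edge
            sad = int(np.abs(cand.astype(np.int32) - template.astype(np.int32)).sum())
            if sad < best_sad:
                best_pos, best_sad = (y, x), sad
    return best_pos  # None signals tracking failure
```

A production tracker would also threshold `best_sad` so that a poor match is reported as a tracking failure, which in the specification triggers a return to the body part detection phase S1.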
- when the tracking succeeds, the body part tracking unit 14 transmits the left hand coordinate LHP(-1) in the frame image FI(-1) one frame before and the left hand coordinate LHP(0) in the current frame image FI(0) to the body part motion information generation unit 15.
- when it is determined that the tracking has failed, the body part tracking unit 14 transmits to the body part motion information storage unit 16 a control signal instructing it to erase the body part motion information 150 stored up to that point. The body part motion information storage unit 16 that has received this control signal deletes the stored body part motion information 150 in accordance with the instruction of the received control signal (step S27 in FIG. 5). After a tracking failure, the process of the body part detection phase S1 is executed for the next frame image FI (the process shifts from A in FIG. 5 to A in FIG. 4).
- the body part motion information generation unit 15 acquires the left hand coordinate LHP(-1) in the frame image FI(-1) one frame before and the left hand coordinate LHP(0) in the current frame image FI(0) from the body part tracking unit 14, and generates the body part motion information 150 from these left hand coordinates LHP as shown in FIG. 10 (step S25 in FIG. 5).
- FIG. 10 is a diagram illustrating an example of body part motion information handled by the gesture recognition apparatus 1 according to the first embodiment.
- the body part motion information 150 includes the frame number FN, the body part coordinate BPP (x-y coordinate position), the body part motion amount BPM (motion amount mag), and the body part motion direction BPD (angle with respect to the reference direction).
- the body part motion information 150 is stored in the body part motion information storage unit 16.
- the body part motion information generation unit 15 transmits the body part motion information 150 generated in this way to the body part motion information storage unit 16.
- the body part motion information storage unit 16 stores the body part motion information 150 (step S26).
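- one row of such body part motion information can be derived from two consecutive hand coordinates; the sketch below assumes the reference direction for BPD is the positive x axis, and the function and key names are illustrative, not from the patent:

```python
import math

def motion_entry(frame_number, lhp_prev, lhp_curr):
    """Build one row of body part motion information 150 from LHP(-1) and LHP(0)."""
    dx = lhp_curr[0] - lhp_prev[0]
    dy = lhp_curr[1] - lhp_prev[1]
    return {
        "FN": frame_number,                       # frame number
        "BPP": lhp_curr,                          # body part coordinate (x, y)
        "BPM": math.hypot(dx, dy),                # body part motion amount (mag)
        "BPD": math.degrees(math.atan2(dy, dx)),  # angle w.r.t. reference direction
    }
```

- accumulating one such entry per frame yields a time-ordered sequence matching the fields of FIG. 10.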
- when body part motion information 150 is stored in the body part motion information storage unit 16 (YES in step S31 of FIG. 6), the gesture pre-motion prediction unit 22 reads the body part motion information 150 from the body part motion information storage unit 16 (step S32 in FIG. 6). When the body part motion information 150 is not stored in the body part motion information storage unit 16 (NO in step S31 in FIG. 6), the gesture pre-motion prediction unit 22 waits until the body part motion information 150 is stored in the body part motion information storage unit 16.
- the gesture pre-motion prediction unit 22 reads the body part motion information 150 from the body part motion information storage unit 16, and reads the gesture pre-motion model information PGM stored in advance from the gesture pre-motion model storage unit 21 (step S33 in FIG. 6).
- FIGS. 11A to 11F are diagrams illustrating an example of the gesture pre-operation PG detected by the gesture recognition device 1 according to the first embodiment.
- before the user U performs a right swipe as a gesture with the left hand LH, the user U performs the operation of raising the left hand LH toward the upper left of the user (upper right in FIG. 11), in the order of FIGS. 11A to 11F.
- the movement of the body part to be executed before the actual gesture is defined as a gesture pre-movement PG.
- FIG. 12 is a diagram illustrating an example of gesture pre-motion model information PGM stored in advance by the gesture recognition apparatus 1 according to the first embodiment.
- the gesture pre-motion model information PGM is body part pre-motion information 151 describing body part motion information related to the gesture pre-motion PG.
- the gesture pre-motion model information PGM includes a body part motion amount BPM and a body part motion direction BPD.
- the body part motion amount BPM and the body part motion direction BPD in the gesture pre-motion model information PGM are expressed as the averages of the movement amount and the movement direction of the left hand coordinate LHP in the swipe pre-motions of one or more right swipes collected in advance. That is, the body part motion amount BPM and the body part motion direction BPD describe, in time order, what kind of movement is performed on average as the pre-motion of a right swipe.
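- such averaged model content could be produced from pre-collected samples roughly as follows (this sketch assumes the samples have already been aligned to a common frame count; the function and key names are illustrative assumptions):

```python
def build_pre_motion_model(samples):
    """Build gesture pre-motion model information PGM as the framewise average
    of the motion amount (mag) and motion direction (dir) over collected
    pre-motion samples of the same gesture."""
    n = len(samples)
    length = len(samples[0])
    return [
        {
            "mag": sum(s[i]["mag"] for s in samples) / n,
            "dir": sum(s[i]["dir"] for s in samples) / n,
        }
        for i in range(length)
    ]
```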
- the gesture pre-motion prediction unit 22 calculates the similarity (first similarity) between the body part motion information 150 read from the body part motion information storage unit 16 and the gesture pre-motion model information PGM read from the gesture pre-motion model storage unit 21. For the similarity, the body part motion amount BPM and the body part motion direction BPD of each frame are regarded as vectors, the Euclidean distances and the correlation coefficients between corresponding vectors of the body part motion information 150 and the gesture pre-motion model information PGM are obtained in time order, and the average of the Euclidean distances and the average of the correlation coefficients can be used as the similarity.
- a larger similarity value indicates that the motions are more similar. Even if two motions are similar, when the time required for each differs, the similarity calculated with a one-to-one frame correspondence along the time axis may be small; for example, this occurs when the user U slowly performs the gesture pre-motion PG of swinging up the left hand. For this reason, when calculating the similarity, the body part motion amount BPM and the body part motion direction BPD of the body part motion information 150 or the gesture pre-motion model information PGM may be sampled at different intervals and then compared.
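- a minimal sketch of such a similarity calculation, using only the correlation-coefficient part of the description plus linear resampling to absorb speed differences (the resampling length, averaging scheme, and all names are illustrative assumptions, not the patent's exact formula):

```python
import math

def resample(seq, n):
    """Linearly resample a sequence to n points (absorbs different gesture speeds)."""
    if len(seq) == 1:
        return [seq[0]] * n
    out = []
    for i in range(n):
        pos = i * (len(seq) - 1) / (n - 1)
        lo = int(math.floor(pos))
        hi = min(lo + 1, len(seq) - 1)
        frac = pos - lo
        out.append(seq[lo] * (1 - frac) + seq[hi] * frac)
    return out

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    if sa == 0 or sb == 0:
        return 0.0
    return cov / (sa * sb)

def first_similarity(motion, model, n=16):
    """First similarity: mean correlation of the motion-amount (BPM) and
    motion-direction (BPD) sequences, resampled to a common length."""
    bpm = pearson(resample([f["mag"] for f in motion], n),
                  resample([f["mag"] for f in model], n))
    bpd = pearson(resample([f["dir"] for f in motion], n),
                  resample([f["dir"] for f in model], n))
    return 0.5 * (bpm + bpd)
```

- with this sketch, identical motions score 1.0 and opposed motions score negatively, matching the rule that a larger value means more similar.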
- the gesture pre-motion prediction unit 22 determines whether or not the calculated similarity is equal to or greater than a predetermined threshold (first threshold) MTH1.
- the gesture pre-motion prediction unit 22 predicts that the user U is performing the swipe pre-motion, that is, the gesture pre-motion PG before a right swipe, when the calculated similarity is equal to or greater than the threshold MTH1, and predicts that the user U is not performing the swipe pre-motion before a right swipe when it is less than the threshold MTH1.
- the gesture pre-motion prediction unit 22 stores the gesture pre-motion prediction result PGR indicating that in the gesture pre-motion prediction result storage unit 23.
- the gesture pre-motion prediction result storage unit 23 stores the gesture pre-motion prediction result PGR.
- the gesture pre-motion prediction result PGR may be, for example, a simple flag; when it is predicted that the user U is performing the swipe pre-motion before a right swipe, "1" may be written and stored.
- when the body part motion information 150 is stored in the body part motion information storage unit 16 (YES in step S41 of FIG. 7), the gesture recognizing unit 31 acquires the body part motion information 150 (step S42 in FIG. 7).
- when the body part motion information 150 is not stored (NO in step S41 of FIG. 7), the gesture recognizing unit 31 waits until the body part motion information 150 is stored in the body part motion information storage unit 16.
- the gesture recognizing unit 31 reads the body part motion information 150 from the body part motion information storage unit 16 and reads the gesture model information GM from the gesture model storage unit 32 (step S43 in FIG. 7).
- if the gesture pre-motion prediction result storage unit 23 stores the gesture pre-motion prediction result PGR (YES in step S44 in FIG. 7), the gesture recognizing unit 31 reads it (step S45 in FIG. 7). If the gesture pre-motion prediction result storage unit 23 does not store the gesture pre-motion prediction result PGR (NO in step S44 in FIG. 7), the gesture recognizing unit 31 determines whether or not the body part motion information matches the gesture model information (step S47 in FIG. 7).
- when the gesture recognizing unit 31 reads the gesture pre-motion prediction result PGR from the gesture pre-motion prediction result storage unit 23, it calculates the similarity between the information for all frames of the read body part motion information 150 and the information for all frames of the gesture model information GM, or the similarity between the information for a partial range of frames of the body part motion information 150 and the information for the corresponding partial range of frames of the gesture model information GM.
- the gesture recognizing unit 31 determines whether or not the calculated similarity (second similarity) is equal to or greater than a threshold (second threshold) MTH2. If it is equal to or greater than the threshold MTH2, the gesture recognizing unit 31 recognizes that the user U is executing (or has executed) a right swipe; if it is less than the threshold MTH2, it recognizes that the user U is not executing a right swipe. When it recognizes that the user U is executing (or has executed) a right swipe, the gesture recognizing unit 31 transmits the gesture identification result GR to the device control unit 41 (YES in step S46 in FIG. 7); when it recognizes that a right swipe has not been executed, it does nothing (NO in step S46 in FIG. 7).
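- this decision step can be sketched as follows; the early-frame fraction used when a PGR is stored, the threshold value, and all names are illustrative assumptions rather than values from the patent:

```python
MTH2 = 0.8  # second threshold (illustrative value)

def identify_gesture(motion_frames, model_frames, pgr_stored, sim_fn):
    """If a gesture pre-motion prediction result (PGR) is stored, compare only
    the early frames of the gesture; otherwise compare all frames. The gesture
    is identified as executed when the second similarity reaches MTH2."""
    if pgr_stored:
        k = max(2, len(model_frames) // 3)  # early-frame window (assumption)
        s = sim_fn(motion_frames[:k], model_frames[:k])
    else:
        s = sim_fn(motion_frames, model_frames)
    return s >= MTH2
```

- `sim_fn` stands in for whatever similarity measure is used for the second comparison; when the PGR is stored, only the first part of the gesture needs to match, which is what shortens the recognition time.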
- FIGS. 13A to 13F are diagrams illustrating an example of the gesture pre-operation PG detected by the gesture recognition apparatus 1 according to the first embodiment.
- FIGS. 13G to 13K are diagrams illustrating an example of the gesture detected by the gesture recognition apparatus 1 according to the first embodiment.
- if the relatively early first half of the swing-out operation can be recognized, the gesture can be recognized.
- FIG. 14 is a diagram showing the gesture pre-operation PG start time t1, the gesture pre-operation PG end time (gesture start time) t2, the gesture completion time t4, and the gesture recognition processing completion time t3 in the gesture recognition apparatus 1 according to the first embodiment.
- the gesture recognition unit 31 can transmit the gesture identification result GR to the device control unit 41 within a period between the end time (gesture start time) t2 and the gesture end time t4.
- when the gesture recognizing unit 31 does not acquire the gesture pre-motion prediction result PGR from the gesture pre-motion prediction result storage unit 23, it calculates the similarity (second similarity) between the information for all or part of the frames of the body part motion information 150 and the information for all frames of the gesture model information GM.
- the gesture model information GM describes a body part motion amount BPM and a body part motion direction BPD in a gesture.
- when the gesture pre-motion PG cannot be recognized, the gesture is determined from the motion information of the left hand LH between the gesture pre-motion PG end time (gesture start time) and the gesture end time.
- the gesture recognition unit 31 transmits a gesture identification result GR to the device control unit 41 after the end of the gesture.
- when the gesture pre-motion PG can be predicted (recognition processing completed at time t3 in FIG. 14), gesture recognition is possible in a shorter time than when the gesture pre-motion PG is not predicted and the gesture is recognized from the gesture alone (recognition processing completed at time t5 in FIG. 14).
- when the gesture pre-motion PG can be predicted, it is possible to perform gesture recognition using only the gesture pre-motion prediction result PGR; however, in order to increase the recognition rate, it is preferable to transmit the gesture identification result GR after additionally recognizing the first part of the gesture.
- when the gesture model storage unit 32 stores gesture model information GM for a plurality of types of gestures, the gestures with a high probability of actually being performed next can be narrowed down, which shortens the recognition time and improves the recognition rate. This effect becomes greater as the number of gesture types increases.
- the device control unit 41 receives the gesture identification result GR from the gesture recognition unit 31, and transmits a control signal to the display control unit 51 so that the information processing apparatus 100 performs a device operation corresponding to the result.
- the device control unit 41 changes the image displayed on the display 5 every time a right swipe is recognized.
- the changed image is, for example, the next image stored in the display information storage unit 52 in the folder in which the image currently displayed on the display 5 is stored (the order of the images is determined, for example, by creation date).
- the display control unit 51 that has received a control signal from the device control unit 41 reads the information to be displayed from the display information storage unit 52 and transmits it to the display 5 to display an image.
- the gesture executed by the user U can be recognized before the time when the gesture is completed (time t4 in FIG. 14), for example at time t3 in FIG. 14. Therefore, device control based on the gesture recognition result can be executed earlier (for example, at time t5 in FIG. 14). Thereby, the time lag from gesture execution to device control felt by the user U can be shortened, enabling stress-free gesture operation for the user U.
- the imaging data that is the source of gesture recognition is not limited to data from a camera that captures visible light (as a color signal); data captured by an infrared camera, or data captured by two or more cameras, can also be used. Further, when a camera capable of ranging is used, it is easy for the body part detection unit 13 to detect the body part, because, for example, it becomes easy to distinguish a person included in the image data from the rest (background) based on the distance information.
- the case of a right swipe of the left hand has been described as the gesture recognizable by the gesture recognition device 1, but the present invention can also be applied to recognizing other gestures such as a left swipe of the left hand, a left swipe of the right hand, a right swipe of the right hand, an upper swipe of the left hand, an upper swipe of the right hand, a lower swipe of the left hand, a lower swipe of the right hand, and the like.
- FIGS. 15(a) to 15(f) are diagrams showing a gesture pre-operation PG (an operation of gradually opening the palm while moving the left hand) and the gesture immediately after it (an operation of opening the palm of the left hand) detected by the gesture recognition apparatus 1 according to the first embodiment.
- FIGS. 16(a) to 16(f) are diagrams showing a gesture pre-operation PG (an operation of gradually extending one finger while moving the left hand) and the gesture immediately after it (an operation of pointing with one finger of the left hand) detected by the gesture recognition apparatus 1 according to the first embodiment.
- the gesture to which the present invention can be applied may be an operation of opening the palm as shown in FIGS. 15 (e) and 15 (f).
- the gesture to which the present invention is applicable may be an operation in which one finger is stretched to point to something as shown in FIGS. 16 (e) and 16 (f).
- in this way, the gesture recognition device 1 can also handle gesture pre-motions in which the shape of the hand changes gradually while the hand is being raised.
- the gesture pre-motion prediction unit 22 predicts whether or not the user U has executed the gesture pre-motion PG and, when it has been executed, transmits a gesture pre-motion prediction result PGR indicating that fact to the gesture pre-motion prediction result storage unit 23.
- the gesture preliminary motion prediction result storage unit 23 stores the gesture preliminary motion prediction result PGR.
- instead of storing the gesture pre-motion prediction result PGR in the gesture pre-motion prediction result storage unit 23 only when the user U executes the gesture pre-motion PG, the gesture pre-motion prediction unit 22 may store the similarity calculation result itself in the gesture pre-motion prediction result storage unit 23 as a gesture pre-motion prediction probability PGPR.
- the gesture recognizing unit 31 may read the gesture pre-motion prediction result PGR expressed as a probability in this way from the gesture pre-motion prediction result storage unit 23 and, when determining the similarity between the body part motion information 150 and the gesture model information GM, change the number of frames to be compared in accordance with the probability, thereby changing (for example, shortening) the time required for recognition.
- in other words, in accordance with the first similarity between the body part motion information 150 and the gesture pre-motion model information (gesture pre-motion prediction result PGR), the gesture recognizing unit 31 may change the time span of the imaging data used for the second comparison (that is, change the time corresponding to the imaging data by changing the amount of imaging data to be compared in the second comparison).
- for example, when the gesture pre-motion prediction probability PGPR is high, the probability that the gesture is executed is high, so the gesture recognizing unit 31 identifies the gesture as executed when a relatively small number of the first frames of the actual gesture match the gesture model information GM. In other words, when the first similarity between the body part motion information 150 and the gesture pre-motion model information (gesture pre-motion prediction result PGR) is equal to or greater than a predetermined first threshold, the gesture recognizing unit 31 may shorten the time span of the imaging data of the plurality of frames used for the second comparison (that is, reduce the time corresponding to the imaging data by reducing the amount of imaging data to be compared in the second comparison).
- conversely, when the gesture pre-motion prediction probability PGPR is low, the probability that the gesture is executed is low, and the gesture is identified as executed only when a relatively larger number of the first frames of the actual gesture match the gesture model information GM. In this way, the gesture can be recognized even if, for example, the user U raises the left hand LH in a manner slightly different from the gesture pre-motion model information PGM and then performs a right swipe.
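- the adjustment just described can be sketched as a simple mapping from the prediction probability PGPR to the number of frames used in the second comparison (the threshold and the one-quarter window are illustrative assumptions):

```python
def frames_for_second_comparison(pgpr, total_frames, mth1=0.7):
    """Choose how much imaging data the second comparison uses, based on the
    gesture pre-motion prediction probability PGPR: a high probability means
    a few initial frames suffice, a low probability requires the full gesture."""
    if pgpr >= mth1:
        return max(2, total_frames // 4)  # few initial frames (assumption)
    return total_frames                   # compare the whole gesture
```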
- the right swipe has been described as a gesture recognized by the gesture recognition device 1, but a plurality of gestures including other gestures can be simultaneously recognized.
- the gesture pre-motion model storage unit 21 stores in advance all or part of the gesture pre-motion model information PGM of the gesture to be recognized.
- the gesture pre-motion prediction unit 22 is configured to be able to predict those gesture pre-motion PG, and the gesture model storage unit 32 stores the gesture model information GM in advance.
- the gesture recognition unit 31 is configured to recognize these gestures, and transmits a control signal for executing device control according to the recognized gesture to the device control unit 41.
- the user U can input a plurality of types of gestures, and can execute various types of device control.
- instead of storing the gesture pre-motion prediction result PGR only when the user U executes the gesture pre-motion PG, the gesture pre-motion prediction unit 22 may store the similarity calculation result itself for each gesture as a gesture pre-motion prediction probability PGPR.
- the gesture recognizing unit 31 may read the gesture pre-motion prediction result PGR expressed as a probability in this way from the gesture pre-motion prediction result storage unit 23 and, when determining the similarity between the body part motion information 150 and the gesture model information GM, shorten the time required for recognition by changing the number of frames to be compared for each gesture in accordance with the probability. By doing so, even when a plurality of types of gestures are to be recognized, the gesture recognizing unit 31 can recognize a gesture even if the actual gesture is performed after a motion different from the gesture pre-motion model information PGM.
- in this way, the gesture recognition device 1 can simultaneously recognize a plurality of gestures including other gestures.
- the gesture pre-operation PG may be predicted and the gesture may be recognized in consideration of usage conditions such as the device and the outside world, time, and the past usage frequency of the gesture.
- the device is a control target device connected to the gesture recognition device 1.
- the outside world is the installation environment of the gesture recognition device 1 or device.
- the time information may be acquired from the outside or may be acquired from a built-in clock.
- the gesture recognition device 1 can narrow down the gestures to be recognized according to the state of the device.
- the gesture recognition device 1 can collect and store in advance time-zone information and usage-frequency information, such as the fact that a certain gesture is frequently executed in a specific time zone of the day, and narrow down the range of candidate gestures prior to the gesture identification processing. By doing so, the prediction probability of the gesture pre-operation PG and the recognition rate of gesture recognition can be improved.
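- such usage-frequency narrowing could be implemented with a simple per-hour counter; the class name, key layout, and top-N cutoff below are illustrative assumptions:

```python
from collections import defaultdict

class GestureUsageStats:
    """Count gesture executions per hour of day and use the counts to
    narrow the candidate gestures before identification."""
    def __init__(self):
        self.counts = defaultdict(int)  # (hour, gesture) -> execution count

    def record(self, hour, gesture):
        self.counts[(hour, gesture)] += 1

    def candidates(self, hour, gestures, top_n=3):
        # Rank known gestures by how often they occur in this time zone.
        ranked = sorted(gestures, key=lambda g: -self.counts[(hour, g)])
        return ranked[:top_n]
```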
- in Embodiment 1, the case where the left hand LH is the body part to be recognized has been described, but other body parts can also be targeted; for example, body parts such as the tip of the index finger, an elbow, a knee, or a shoulder may be recognized. Furthermore, since different types of gestures generally involve different motions of the body parts, the recognition rate can be improved by setting, for each gesture, the body part that performs a characteristic movement as the recognition target.
- in Embodiment 1, the case where the left hand LH is the body part to be recognized has been described, but the body part need not be a single place and may be two or more places.
- in Embodiment 1, since the gesture is recognized by combining the gesture pre-operation and the actual gesture operation, the amount of imaging data usable for the comparison increases relative to using only the gesture operation for gesture recognition, even when the user U performs the gesture operation at high speed. Thereby, Embodiment 1 also has the effect of improving the gesture recognition rate.
- Embodiment 2. FIG. 17 is a block diagram schematically showing the configuration of the gesture recognition apparatus 2 according to Embodiment 2 of the present invention.
- the gesture recognition device 2 is a device that can perform the gesture recognition method according to the second embodiment.
- in FIG. 17, components that are the same as or correspond to those shown in FIG. 3 are given the same reference numerals as in FIG. 3.
- the gesture recognition device 2 (FIG. 17) according to the second embodiment differs from the gesture recognition device 1 (FIG. 3) according to the first embodiment in that data is sent from the gesture recognition unit 31a to the gesture pre-motion model storage unit 21a; in other respects, the second embodiment is the same as the first embodiment. Since the gesture recognition device 2 according to the second embodiment sequentially updates the gesture pre-motion model information PGM stored in the gesture pre-motion model storage unit 21a, the prediction probability of the user U's gesture pre-motion PG can be improved.
- the gesture recognizing unit 31a sends the body part motion information 150 to the gesture pre-motion model storage unit 21a.
- the gesture pre-motion model storage unit 21a receives the body part motion information 150 from the gesture recognition unit 31a and stores the body part motion information 150.
- the gesture recognizing unit 31a calculates the similarity between the information for all frames of the body part movement information 150 and the information for all frames of the gesture model information GM, and identifies the gesture.
- the gesture recognizing unit 31a calculates the similarity between the information for a part of the body part motion information 150 and the information for the part of the frame of the gesture model information GM in the gesture identification phase S4, and identifies the gesture. .
- when the gesture recognizing unit 31a recognizes that the user U is executing (or has executed) a gesture, it transmits the gesture identification result GR to the device control unit 41; when it recognizes that the user U is not executing a gesture, it does nothing.
- when the gesture recognition unit 31a recognizes that the user U is executing (or has executed) the gesture, it extracts from the body part motion information 150 the information preceding the information related to the gesture. That is, the gesture recognizing unit 31a extracts the gesture pre-motion PG of the gesture identified this time. The gesture recognizing unit 31a then transmits the body part motion information 150 regarding the gesture pre-motion PG to the gesture pre-motion model storage unit 21a.
- the gesture pre-motion model storage unit 21a stores the body part motion information 150 received from the gesture recognition unit 31a as gesture pre-motion model information PGM. At this time, if the gesture pre-motion model storage unit 21a has already stored gesture pre-motion model information PGM, it either stores the average of the existing information and the newly received information, or newly stores the gesture pre-motion model information PGM received from the gesture recognition unit 31a.
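- the sequential model update can be sketched as a blend of the stored model with the newly extracted pre-motion; the blending weight and names are illustrative assumptions (storing the observation directly, as the text also allows, corresponds to alpha = 1):

```python
def update_pre_motion_model(stored, observed, alpha=0.5):
    """Sequentially update gesture pre-motion model information PGM: blend the
    stored per-frame values with the newly extracted pre-motion; if nothing is
    stored yet, adopt the observation as the initial model."""
    if stored is None:
        return list(observed)
    return [(1 - alpha) * s + alpha * o for s, o in zip(stored, observed)]
```

- repeated updates let the model track changes in how the user U performs the pre-motion, which is the stated purpose of feeding the recognition result back to the storage unit.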
- the gesture executed by the user U can be recognized before the time point at which the gesture is completed.
- device control can thus be performed more quickly than before. Thereby, the time lag from gesture execution to device control felt by the user U can be shortened, enabling stress-free gesture operation for the user U.
- the gesture pre-motion model information PGM can be sequentially updated, changes related to the manner of gesture due to the user U's habit and familiarity can be sequentially reflected.
- the prediction probability of the gesture pre-operation PG can be improved.
- Embodiment 3. FIG. 18 is a block diagram schematically showing the configuration of the gesture recognition apparatus 3 according to Embodiment 3 of the present invention.
- the gesture recognition device 3 is a device that can perform the gesture recognition method according to the third embodiment. In FIG. 18, components that are the same as or correspond to those shown in FIG. 17 are given the same reference numerals as in FIG. 17. In the third embodiment, a case where the camera and the display shown in FIG. 1 are connected to the gesture recognition device 3 will be described.
- the gesture recognition device 3 (FIG. 18) according to the third embodiment differs from the gesture recognition device 2 (FIG. 17) according to the second embodiment in that it includes a person identification processing unit 17 that identifies the person executing a gesture.
- the gesture recognition apparatus 3 according to Embodiment 3 performs gesture recognition in consideration of user information. According to the gesture recognition device 3 according to Embodiment 3, it is possible to improve the prediction probability of the user U's gesture pre-movement PG.
- the person identification processing unit 17 of the motion information generation unit 10a acquires the frame image FI from the imaging data storage unit 12, and transmits the person identification result HDR for the person included in the frame image FI to the body part detection unit 13, the body part tracking unit 14, and the gesture pre-motion prediction unit 22a.
- the body part detection unit 13 acquires the frame image FI and its frame number FN from the imaging data storage unit 12, and transmits the coordinate LHP of the left hand included in the frame image FI together with the frame number FN to the body part motion information storage unit 16. In addition, the body part detection unit 13 receives the person identification result HDR from the person identification processing unit 17, and transmits it together with the set of the left hand coordinate LHP and the frame number FN to the body part motion information storage unit 16.
- the body part tracking unit 14 acquires the frame image FI and its frame number FN from the imaging data storage unit 12. Further, the body part tracking unit 14 acquires the coordinate LHP of the left hand in the frame image FI one frame before the specific user from the body part movement information storage unit 16. Further, the body part tracking unit 14 receives the person identification result HDR from the person identification processing unit 17. Further, the body part tracking unit 14 sends the left hand coordinate LHP in the frame image FI one frame before, the left hand coordinate LHP in the current frame image FI, and the person identification result HDR to the body part motion information generation unit 15. Send. The body part tracking unit 14 transmits a control signal for deleting the stored body part motion information 150 to the body part motion information storage unit 16.
- the body part motion information generation unit 15 includes the left hand coordinate LHP (-1) in the frame image FI (-1) one frame before the body part tracking unit 14 and the left hand coordinate LHP in the current frame image FI (0). (0) and the person identification result HDR are received, and the body part motion information 150 is transmitted to the body part motion information storage unit 16.
- the gesture pre-motion model storage unit 21a stores gesture pre-motion model information PGM for each user.
- the gesture pre-motion model storage unit 21a transmits the gesture pre-motion model information PGM of a specific user to the gesture pre-motion prediction unit 22a.
- the gesture pre-motion prediction unit 22a reads the body part motion information 150 of a specific user (a user who has performed a body part motion in the latest frame image FI) from the body part motion information storage unit 16. In addition, the gesture pre-motion prediction unit 22a receives the user's gesture pre-motion model information PGM from the gesture pre-motion model storage unit 21a. In addition, the gesture pre-motion prediction unit 22a transmits the gesture pre-motion prediction result PGR of the user to the gesture pre-motion prediction result storage unit 23.
- the gesture pre-motion prediction result storage unit 23 receives the gesture pre-motion prediction result PGR of a specific user from the gesture pre-motion prediction unit 22a and stores it. In addition, the gesture preliminary motion prediction result storage unit 23 transmits the gesture preliminary motion prediction result PGR of a specific user to the gesture recognition unit 31a.
- the gesture recognition unit 31a receives the body part motion information 150 of a specific user from the body part motion information storage unit 16. In addition, the gesture recognition unit 31a receives the gesture pre-motion prediction result PGR of the specific user from the gesture pre-motion prediction result storage unit 23. In addition, the gesture recognition unit 31a reads the gesture model information GM of the specific user from the gesture model storage unit 32. In addition, the gesture recognition unit 31a transmits the gesture identification result GR to the device control unit 41.
- the gesture model storage unit 32 stores gesture model information GM for each user.
- the gesture recognition unit 31a acquires the gesture model information GM of a specific user from the gesture model storage unit 32.
- the person identification processing unit 17 receives the frame image FI from the imaging data storage unit 12, and executes the person identification process for the user included in the frame image FI.
- Examples of the person specifying process include face detection.
- the person specifying process unit 17 transmits the person specifying result HDR to the body part detecting unit 13, the body part tracking unit 14, and the gesture pre-motion predicting unit 22a.
- the body part detection unit 13 detects the body part and transmits detection result information to the body part movement information storage unit 16. At this time, the body part detection unit 13 transmits the person identification result HDR received from the person identification processing unit 17 to the body part movement information storage unit 16.
- the body part tracking unit 14 tracks the body part across the plurality of frame images FI read from the imaging data storage unit 12. At this time, the body part tracking unit 14 tracks the body part of the same person based on the person identification result HDR received from the person identification processing unit 17. For example, using the person identification result HDR, the body part tracking unit 14 tracks the first user's hand and the second user's hand as separate body parts.
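Tracking each user's body part separately, as described above, can be sketched by keying every track on the pair (person identification result, body part). This is a minimal illustration with hypothetical names, not the patent's actual implementation:

```python
# Sketch of per-user body part tracking (hypothetical names): each
# track is keyed by the person identification result plus the body
# part label, so the first user's hand and the second user's hand
# are never mixed together.

class BodyPartTracker:
    def __init__(self):
        self.tracks = {}  # (person_id, part) -> list of (x, y) per frame

    def update(self, person_id, part, coord):
        """Append the detected coordinate for this person's body part."""
        self.tracks.setdefault((person_id, part), []).append(coord)

    def last_two(self, person_id, part):
        """Return the coordinates in the previous and current frames,
        e.g. (LHP(-1), LHP(0)), or None if fewer than two frames exist."""
        history = self.tracks.get((person_id, part), [])
        return tuple(history[-2:]) if len(history) >= 2 else None

tracker = BodyPartTracker()
tracker.update("user1", "left_hand", (100, 200))
tracker.update("user2", "left_hand", (400, 220))
tracker.update("user1", "left_hand", (110, 195))
# user1's track is unaffected by user2's detections
```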
- the body part tracking unit 14 transmits the left-hand coordinates LHP(-1) in the frame image FI(-1) one frame before, the left-hand coordinates LHP(0) in the current frame image FI(0), and the person identification result HDR to the body part motion information generation unit 15.
- the body part motion information generation unit 15 receives, from the body part tracking unit 14, the left-hand coordinates LHP(-1) in the frame image FI(-1) one frame before, the left-hand coordinates LHP(0) in the current frame image FI(0), and the person identification result HDR, and generates the body part motion information 150 from these pieces of information.
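One way to picture the body part motion information 150 built from LHP(-1) and LHP(0) is as a per-frame displacement record. The fields below are illustrative assumptions, since the patent does not fix the exact representation:

```python
import math

def make_motion_info(prev_coord, curr_coord, person_id):
    """Sketch of body part motion information generated from the
    left-hand coordinates LHP(-1) and LHP(0); field names are
    illustrative only, not the patent's actual format."""
    dx = curr_coord[0] - prev_coord[0]
    dy = curr_coord[1] - prev_coord[1]
    return {
        "person": person_id,              # from the person identification result HDR
        "displacement": (dx, dy),         # motion between consecutive frames
        "speed": math.hypot(dx, dy),      # pixels per frame
        "direction": math.degrees(math.atan2(dy, dx)),
    }

info = make_motion_info((100, 200), (103, 196), "user1")
# displacement (3, -4), speed 5.0
```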
- the body part motion information 150 is described in a separate file for each user.
- the body part motion information 150 may instead be described in a single file. In this case, the body part motion information 150 is described so that it can be distinguished for each user.
- the body part motion information storage unit 16 stores body part motion information 150.
- the gesture preliminary motion model storage unit 21a stores gesture preliminary motion model information PGM for each user. Since the gesture preliminary motion model information PGM is stored for each user, motion characteristics unique to each user, such as individual habits, can be reflected.
- the gesture preliminary motion model information PGM may be acquired and registered in advance based on gestures executed by the user, or, as in the second embodiment, it may be stored and sequentially updated using recognition results fed back from the gesture recognition unit 31a.
- the gesture preliminary motion prediction unit 22a acquires the gesture preliminary motion model information PGM of a specific user from the gesture preliminary motion model storage unit 21a.
- the gesture preliminary motion prediction unit 22a acquires the body part motion information 150 of a specific user (a user who performed a body part motion in the latest frame image FI) from the body part motion information storage unit 16. In addition, the gesture preliminary motion prediction unit 22a acquires that user's gesture preliminary motion model information PGM from the gesture preliminary motion model storage unit 21a and calculates the similarity between them. The gesture preliminary motion prediction unit 22a compares the calculated similarity with the threshold MTH1, predicts whether the user has executed the gesture preliminary motion PG, and transmits the gesture preliminary motion prediction result PGR to the gesture preliminary motion prediction result storage unit 23, which stores it.
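The prediction step above, computing a similarity against the per-user model and comparing it with the threshold MTH1, might be sketched as follows. The choice of cosine similarity is an assumption; the patent does not specify a particular similarity measure:

```python
def cosine_similarity(a, b):
    """Cosine similarity between two motion-feature vectors (an assumed
    measure; the patent leaves the similarity computation unspecified)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def predict_preliminary_motion(motion_vec, user_model_vec, mth1=0.8):
    """Return a PGR-like prediction result: the preliminary motion PG is
    predicted to have occurred when the similarity reaches MTH1."""
    sim = cosine_similarity(motion_vec, user_model_vec)
    return {"similarity": sim, "pg_predicted": sim >= mth1}

result = predict_preliminary_motion([1.0, 0.1], [1.0, 0.0])
```

The threshold value 0.8 for MTH1 is a placeholder; in practice it would be tuned per deployment.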
- the gesture recognition unit 31a acquires the body part motion information 150 of a specific user from the body part motion information storage unit 16 and reads the gesture model information GM from the gesture model storage unit 32. Furthermore, when the gesture preliminary motion prediction result storage unit 23 stores the gesture preliminary motion prediction result PGR of that user, the gesture recognition unit 31a receives the gesture preliminary motion prediction result PGR; when the storage unit 23 does not store it, the gesture recognition unit 31a does nothing. Thereafter, the gesture recognition unit 31a calculates the similarity between the acquired body part motion information 150 and the gesture model information GM, and identifies the gesture based on the similarity.
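The recognition flow above, which proceeds only when a prediction result PGR is stored for the user and then selects the most similar gesture model, might look like this. The model store, similarity function, and all names here are simplified assumptions, not the patent's implementation:

```python
def recognize_gesture(motion_info, prediction_store, gesture_models, user):
    """Sketch of gesture recognition gated on the preliminary motion
    prediction result PGR. gesture_models maps gesture name -> feature
    vector for this user; similarity is a simple negative squared
    distance (higher means closer), an illustrative choice only."""
    pgr = prediction_store.get(user)
    if pgr is None or not pgr.get("pg_predicted"):
        return None  # no stored prediction result for this user: do nothing

    def similarity(a, b):
        return -sum((x - y) ** 2 for x, y in zip(a, b))

    best = max(gesture_models.items(),
               key=lambda kv: similarity(motion_info, kv[1]))
    return best[0]

models = {"swipe_left": [-1.0, 0.0], "swipe_right": [1.0, 0.0]}
store = {"user1": {"pg_predicted": True}}
gesture = recognize_gesture([0.9, 0.1], store, models, "user1")
# "swipe_right" is the closest model
```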
- the gesture model storage unit 32 stores gesture model information GM for each user.
- the gesture recognition unit 31a acquires the gesture model information GM of a specific user from the gesture model storage unit 32.
- in the gesture recognition device 3 configured as described above, a gesture executed by the user U can be recognized before the gesture is completed, so device control based on the gesture recognition result can be performed more quickly than in conventional devices. This shortens the time lag between gesture execution and device control perceived by the user U, enabling stress-free gesture operation for the user U.
- the gesture recognition device 3 recognizes the gesture preliminary motion PG and the gesture for each user. Since differences in the way gestures are performed due to the user U's habits and the like can thus be absorbed, the prediction probability of the gesture preliminary motion PG and the gesture recognition rate can be improved.
- methods other than face detection can also be used for the person identification process.
- in the third embodiment, the gestures to be recognized can be narrowed down for each user. In this way, the prediction probability of the gesture preliminary motion PG and the gesture recognition rate can be improved.
- FIG. 19 is a hardware configuration diagram illustrating a modified example of the gesture recognition apparatus according to Embodiments 1 to 3 of the present invention.
- the gesture recognition device 1 shown in FIG. 3 can be realized (for example, by a computer) using a memory 91 as a storage device that stores a program as software, and a processor 92 as an information processing unit that executes the program stored in the memory 91.
- a part of the gesture recognition device 1 shown in FIG. 3 may be realized by the memory 91 shown in FIG. 19 and the processor 92 that executes a program.
- the gesture recognition device 2 shown in FIG. 17 can be realized (for example, by a computer) using a memory 91 as a storage device that stores a program as software, and a processor 92 as an information processing unit that executes the program stored in the memory 91.
- the imaging data storage unit 12, the body part motion information storage unit 16, the gesture preliminary motion model storage unit 21a, the gesture preliminary motion prediction result storage unit 23, and the gesture model storage unit 32 in FIG. 17 correspond to the memory 91 in FIG. 19.
- a part of the gesture recognition device 2 shown in FIG. 17 may be realized by the memory 91 shown in FIG. 19 and the processor 92 that executes a program.
- the gesture recognition device 3 shown in FIG. 18 can be realized (for example, by a computer) using a memory 91 as a storage device that stores a program as software, and a processor 92 as an information processing unit that executes the program stored in the memory 91.
- the imaging data storage unit 12, the body part motion information storage unit 16, the gesture preliminary motion model storage unit 21a, the gesture preliminary motion prediction result storage unit 23, and the gesture model storage unit 32 in FIG. 18 correspond to the memory 91 in FIG. 19.
- the device control unit 41 corresponds to the processor 92 that executes a program.
- a part of the gesture recognition device 3 shown in FIG. 18 may be realized by the memory 91 shown in FIG. 19 and the processor 92 that executes a program.
- the gesture recognition apparatus, gesture recognition method, and information processing apparatus can be applied to various electronic devices such as a broadcast receiver, a PC, a car navigation system, and an information communication terminal.
- for example, the gesture recognition device, the gesture recognition method, and the information processing device can be applied to a broadcast receiver that changes the viewing channel according to a gesture executed by a user, or to a broadcast receiver that changes the display of an electronic program guide according to the gesture executed by the user.
Abstract
Description
FIG. 3 is a block diagram schematically showing the configuration of the gesture recognition device 1 according to Embodiment 1 of the present invention. The gesture recognition device 1 is a device capable of carrying out the gesture recognition method according to Embodiment 1. In Embodiment 1, the camera 4 and the display control unit 51 shown in FIG. 1 are connected to the gesture recognition device 1, and the display information storage unit 52 and the display 5 as a controlled device are connected to the gesture recognition device 1 via the display control unit 51. FIG. 3 shows an information processing device including the gesture recognition device 1.
FIG. 17 is a block diagram schematically showing the configuration of the gesture recognition device 2 according to Embodiment 2 of the present invention. The gesture recognition device 2 is a device capable of carrying out the gesture recognition method according to Embodiment 2. In FIG. 17, components identical or corresponding to those shown in FIG. 3 are given the same reference signs as in FIG. 3. Embodiment 2 describes the case where the camera and the display shown in FIG. 1 are connected to the gesture recognition device 2. The gesture recognition device 2 according to Embodiment 2 (FIG. 17) differs from the gesture recognition device 1 according to Embodiment 1 (FIG. 3) in that data is sent from the gesture recognition unit 31a to the gesture preliminary motion model storage unit 21a. Except for this point, Embodiment 2 is the same as Embodiment 1. Since the gesture recognition device 2 according to Embodiment 2 sequentially updates the gesture preliminary motion model information PGM stored in the gesture preliminary motion model storage unit 21, the prediction probability of the gesture preliminary motion PG of the user U can be improved.
FIG. 18 is a block diagram schematically showing the configuration of the gesture recognition device 3 according to Embodiment 3 of the present invention. The gesture recognition device 3 is a device capable of carrying out the gesture recognition method according to Embodiment 3. In FIG. 18, components identical or corresponding to those shown in FIG. 17 are given the same reference signs as in FIG. 17. Embodiment 3 describes the case where the camera and the display shown in FIG. 1 are connected to the gesture recognition device 3. The gesture recognition device 3 according to Embodiment 3 (FIG. 18) differs from the gesture recognition device 2 according to Embodiment 2 (FIG. 17) in that it includes a person identification processing unit 17 that identifies the person performing a gesture. Except for this point, Embodiment 3 is the same as Embodiment 2. The gesture recognition device 3 according to Embodiment 3 performs gesture recognition taking user information into account. According to the gesture recognition device 3 of Embodiment 3, the prediction probability of the gesture preliminary motion PG of the user U can be improved.
FIG. 19 is a hardware configuration diagram showing a modification of the gesture recognition devices according to Embodiments 1 to 3 of the present invention. The gesture recognition device 1 shown in FIG. 3 can be realized (for example, by a computer) using a memory 91 as a storage device that stores a program as software, and a processor 92 as an information processing unit that executes the program stored in the memory 91. In this case, the imaging data storage unit 12, the body part motion information storage unit 16, the gesture preliminary motion model storage unit 21, the gesture preliminary motion prediction result storage unit 23, and the gesture model storage unit 32 in FIG. 3 correspond to the memory 91 in FIG. 19. The imaging control unit 11, the body part detection unit 13, the body part tracking unit 14, the body part motion information generation unit 15, the gesture preliminary motion prediction unit 22a, the gesture recognition unit 31, and the device control unit 41 in FIG. 3 correspond to the processor 92 that executes a program. A part of the gesture recognition device 1 shown in FIG. 3 may be realized by the memory 91 shown in FIG. 19 and the processor 92 that executes a program.
Claims (10)
- a motion information generation unit that generates body part motion information from the motion of a body part of a user, the motion being obtained by detecting and tracking the body part in each of a plurality of frames of imaging data acquired by photographing the user;
a prediction processing unit that stores in advance gesture preliminary motion model information indicating a reference motion of the user's body part for a gesture preliminary motion performed immediately before a gesture, performs a first comparison between the body part motion information generated by the motion information generation unit and the gesture preliminary motion model information, and generates, based on a result of the first comparison, a prediction result for the gesture preliminary motion indicated by the motion of the body part detected by the motion information generation unit; and
a recognition processing unit that stores in advance gesture model information indicating a reference motion of the user's body part during a gesture, performs a second comparison between the body part motion information generated by the motion information generation unit and the gesture model information, and generates, based on the prediction result and a result of the second comparison, a recognition result of the gesture indicated by the motion of the body part detected by the motion information generation unit,
a gesture recognition device comprising the units above. - The gesture recognition device according to claim 1, wherein the recognition processing unit changes, according to a first similarity between the body part motion information and the gesture preliminary motion model information, the time span of the imaging data used for the second comparison among the plurality of frames of imaging data.
- The gesture recognition device according to claim 1 or 2, wherein the recognition processing unit shortens the time span of the imaging data used for the second comparison among the plurality of frames of imaging data when the first similarity between the body part motion information and the gesture preliminary motion model information is equal to or greater than a predetermined first threshold.
- The gesture recognition device according to claim 1, wherein the recognition processing unit narrows down the range of the gesture model information used for the second comparison according to the first similarity between the body part motion information and the gesture preliminary motion model information.
- The gesture recognition device according to claim 1 or 4, wherein the recognition processing unit narrows down the range of the gesture model information used for the second comparison when the first similarity between the body part motion information and the gesture preliminary motion model information is equal to or greater than a predetermined first threshold.
- The gesture recognition device according to any one of claims 1 to 5, wherein the prediction processing unit narrows down the gesture preliminary motion model information subjected to the first comparison according to at least one of the type of the controlled device connected to the gesture recognition device, the installation environment of the gesture recognition device, the time of day, and the frequency of gesture use per time period.
- The gesture recognition device according to any one of claims 1 to 5, wherein the recognition processing unit narrows down the gesture model information subjected to the second comparison according to at least one of the type of the controlled device connected to the gesture recognition device, the installation environment of the gesture recognition device, the time of day, and the frequency of gesture use per time period.
- The gesture recognition device according to any one of claims 1 to 7, wherein the recognition processing unit transmits the body part motion information immediately preceding the recognized gesture to the prediction processing unit, and
the prediction processing unit stores the body part motion information immediately preceding the recognized gesture as the gesture preliminary motion model information.
- A gesture recognition method performed by a gesture recognition device that stores in advance gesture preliminary motion model information indicating a reference motion of a user's body part for a gesture preliminary motion performed immediately before a gesture, and gesture model information indicating a reference motion of the user's body part during a gesture, the method comprising:
a motion information generation step of generating body part motion information from the motion of a body part of the user, the motion being obtained by detecting and tracking the body part in each of a plurality of frames of imaging data acquired by photographing the user;
a prediction processing step of performing a first comparison between the body part motion information generated in the motion information generation step and the gesture preliminary motion model information, and generating, based on a result of the first comparison, a prediction result for the gesture preliminary motion indicated by the motion of the body part detected in the motion information generation step; and
a recognition processing step of performing a second comparison between the body part motion information generated in the motion information generation step and the gesture model information, and generating, based on the prediction result and a result of the second comparison, a recognition result of the gesture indicated by the motion of the body part detected in the motion information generation step,
a gesture recognition method comprising the steps above. - The gesture recognition device according to any one of claims 1 to 8;
an imaging device that transmits the imaging data to the gesture recognition device; and
a controlled device that is controlled according to the recognition result of the gesture,
an information processing device comprising the elements above.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/560,716 US10156909B2 (en) | 2015-04-17 | 2016-04-15 | Gesture recognition device, gesture recognition method, and information processing device |
CN201680022114.3A CN107533363B (zh) | 2015-04-17 | 2016-04-15 | 手势识别装置、手势识别方法以及信息处理装置 |
DE112016001794.4T DE112016001794T5 (de) | 2015-04-17 | 2016-04-15 | Gestenerkennungsvorrichtung, Gestenerkennungsverfahren und Informationsverarbeitungsvorrichtung |
JP2017512586A JP6355829B2 (ja) | 2015-04-17 | 2016-04-15 | ジェスチャー認識装置、ジェスチャー認識方法、及び情報処理装置 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2015-084853 | 2015-04-17 | ||
JP2015084853 | 2015-04-17 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2016167331A1 true WO2016167331A1 (ja) | 2016-10-20 |
Family
ID=57127116
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2016/062052 WO2016167331A1 (ja) | 2015-04-17 | 2016-04-15 | ジェスチャー認識装置、ジェスチャー認識方法、及び情報処理装置 |
Country Status (5)
Country | Link |
---|---|
US (1) | US10156909B2 (ja) |
JP (1) | JP6355829B2 (ja) |
CN (1) | CN107533363B (ja) |
DE (1) | DE112016001794T5 (ja) |
WO (1) | WO2016167331A1 (ja) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2018156316A (ja) * | 2017-03-16 | 2018-10-04 | 日立造船株式会社 | 情報処理装置、情報処理方法、および情報処理プログラム |
WO2019235274A1 (ja) * | 2018-06-07 | 2019-12-12 | ソニー株式会社 | 制御装置、制御方法、プログラム、および移動体 |
CN112001248A (zh) * | 2020-07-20 | 2020-11-27 | 北京百度网讯科技有限公司 | 主动交互的方法、装置、电子设备和可读存储介质 |
JP2021507434A (ja) * | 2017-12-13 | 2021-02-22 | ヒューマニシング オートノミー リミテッド | 歩行者の意図を予測するためのシステムおよび方法 |
JP2022522319A (ja) * | 2019-12-23 | 2022-04-18 | 商▲湯▼国▲際▼私人有限公司 | 目標追跡方法、装置、電子デバイス、及び記録媒体 |
Families Citing this family (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10810289B2 (en) * | 2016-08-15 | 2020-10-20 | Fisher-Rosemount Systems, Inc. | Apparatuses, systems, and methods for providing access security in a process control system |
JP6293953B1 (ja) * | 2017-04-04 | 2018-03-14 | 京セラ株式会社 | 電子機器、プログラムおよび制御方法 |
CN109144268A (zh) * | 2018-09-04 | 2019-01-04 | 爱笔(北京)智能科技有限公司 | 一种电子设备的状态切换方法及装置 |
CN109858380A (zh) * | 2019-01-04 | 2019-06-07 | 广州大学 | 可扩展手势识别方法、装置、系统、手势识别终端和介质 |
US11256342B2 (en) | 2019-04-03 | 2022-02-22 | Facebook Technologies, Llc | Multimodal kinematic template matching and regression modeling for ray pointing prediction in virtual reality |
US10824247B1 (en) * | 2019-04-03 | 2020-11-03 | Facebook Technologies, Llc | Head-coupled kinematic template matching for predicting 3D ray cursors |
CN110059661B (zh) * | 2019-04-26 | 2022-11-22 | 腾讯科技(深圳)有限公司 | 动作识别方法、人机交互方法、装置及存储介质 |
US11144128B2 (en) * | 2019-11-20 | 2021-10-12 | Verizon Patent And Licensing Inc. | Systems and methods for controlling video wall content using air gestures |
JP7322824B2 (ja) * | 2020-07-01 | 2023-08-08 | トヨタ自動車株式会社 | 情報処理装置、情報処理方法、および制御システム |
CN112486317B (zh) * | 2020-11-26 | 2022-08-09 | 湖北鼎森智能科技有限公司 | 基于手势的数字阅读方法及系统 |
CN113221745B (zh) * | 2021-05-12 | 2023-09-01 | 北京百度网讯科技有限公司 | 举手识别方法、装置、电子设备及存储介质 |
CN114167993B (zh) * | 2022-02-10 | 2022-05-24 | 北京优幕科技有限责任公司 | 信息处理方法及装置 |
Family Cites Families (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3603919B2 (ja) | 1996-06-11 | 2004-12-22 | 日本電気株式会社 | ジェスチャ認識装置および方法 |
JP4035610B2 (ja) | 2002-12-18 | 2008-01-23 | 独立行政法人産業技術総合研究所 | インタフェース装置 |
US7971156B2 (en) * | 2007-01-12 | 2011-06-28 | International Business Machines Corporation | Controlling resource access based on user gesturing in a 3D captured image stream of the user |
US20090243998A1 (en) * | 2008-03-28 | 2009-10-01 | Nokia Corporation | Apparatus, method and computer program product for providing an input gesture indicator |
JP2010009484A (ja) | 2008-06-30 | 2010-01-14 | Denso It Laboratory Inc | 車載機器制御装置および車載機器制御方法 |
EP2399243A4 (en) | 2009-02-17 | 2013-07-24 | Omek Interactive Ltd | METHOD AND SYSTEM FOR RECOGNIZING GESTURE |
JP5318189B2 (ja) | 2009-02-18 | 2013-10-16 | 株式会社東芝 | インターフェース制御装置、及びその方法 |
JP2011204019A (ja) * | 2010-03-25 | 2011-10-13 | Sony Corp | ジェスチャ入力装置、ジェスチャ入力方法およびプログラム |
WO2012011263A1 (ja) | 2010-07-20 | 2012-01-26 | パナソニック株式会社 | ジェスチャ入力装置およびジェスチャ入力方法 |
JP5617581B2 (ja) | 2010-12-08 | 2014-11-05 | オムロン株式会社 | ジェスチャ認識装置、ジェスチャ認識方法、制御プログラム、および、記録媒体 |
CN103380405A (zh) * | 2010-12-30 | 2013-10-30 | 汤姆逊许可公司 | 用于手势识别的用户界面、装置和方法 |
JP6316540B2 (ja) * | 2012-04-13 | 2018-04-25 | 三星電子株式会社Samsung Electronics Co.,Ltd. | カメラ装置及びその制御方法 |
JP6057755B2 (ja) | 2013-02-08 | 2017-01-11 | 株式会社東海理化電機製作所 | ジェスチャ操作装置及びジェスチャ操作プログラム |
JP5980879B2 (ja) * | 2014-10-28 | 2016-08-31 | レノボ・シンガポール・プライベート・リミテッド | ジェスチャの認識方法、ジェスチャ入力システムおよび電子機器 |
- 2016
- 2016-04-15 WO PCT/JP2016/062052 patent/WO2016167331A1/ja active Application Filing
- 2016-04-15 JP JP2017512586A patent/JP6355829B2/ja active Active
- 2016-04-15 US US15/560,716 patent/US10156909B2/en active Active
- 2016-04-15 DE DE112016001794.4T patent/DE112016001794T5/de active Pending
- 2016-04-15 CN CN201680022114.3A patent/CN107533363B/zh active Active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2010182014A (ja) * | 2009-02-04 | 2010-08-19 | Toshiba Corp | ジェスチャ認識装置、その方法及びそのプログラム |
JP2012008772A (ja) * | 2010-06-24 | 2012-01-12 | Sony Corp | ジェスチャ認識装置、ジェスチャ認識方法およびプログラム |
JP2012022458A (ja) * | 2010-07-13 | 2012-02-02 | Canon Inc | 情報処理装置およびその制御方法 |
JP2012079138A (ja) * | 2010-10-04 | 2012-04-19 | Olympus Corp | ジェスチャ認識装置 |
JP2012123608A (ja) * | 2010-12-08 | 2012-06-28 | Nippon Syst Wear Kk | ジェスチャー認識装置、方法、プログラム、および該プログラムを格納したコンピュータ可読媒体 |
Non-Patent Citations (2)
Title |
---|
AKIHIRO MORI ET AL.: "Early Recognition and Prediction of Gestures for Embodied Proactive Human Interface", JOURNAL OF THE ROBOTICS SOCIETY OF JAPAN, vol. 24, no. 8, 15 November 2006 (2006-11-15), pages 66 - 75, XP055321846, DOI: doi:10.7210/jrsj.24.954 * |
SEIICHI UCHIDA ET AL.: "Early Recognition and Prediction of Gestures for Proactive Human- Machine Interface", IEICE TECHNICAL REPORT, vol. 104, no. 449, 11 November 2004 (2004-11-11), pages 7 - 12 * |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2018156316A (ja) * | 2017-03-16 | 2018-10-04 | 日立造船株式会社 | 情報処理装置、情報処理方法、および情報処理プログラム |
JP2021507434A (ja) * | 2017-12-13 | 2021-02-22 | ヒューマニシング オートノミー リミテッド | 歩行者の意図を予測するためのシステムおよび方法 |
WO2019235274A1 (ja) * | 2018-06-07 | 2019-12-12 | ソニー株式会社 | 制御装置、制御方法、プログラム、および移動体 |
JPWO2019235274A1 (ja) * | 2018-06-07 | 2021-06-17 | ソニーグループ株式会社 | 制御装置、制御方法、プログラム、および移動体 |
JP7331846B2 (ja) | 2018-06-07 | 2023-08-23 | ソニーグループ株式会社 | 制御装置、制御方法、プログラム、および移動体 |
JP2022522319A (ja) * | 2019-12-23 | 2022-04-18 | 商▲湯▼国▲際▼私人有限公司 | 目標追跡方法、装置、電子デバイス、及び記録媒体 |
JP7212154B2 (ja) | 2019-12-23 | 2023-01-24 | 商▲湯▼国▲際▼私人有限公司 | 目標追跡方法、装置、電子デバイス、及び記録媒体 |
CN112001248A (zh) * | 2020-07-20 | 2020-11-27 | 北京百度网讯科技有限公司 | 主动交互的方法、装置、电子设备和可读存储介质 |
JP2022020588A (ja) * | 2020-07-20 | 2022-02-01 | ベイジン バイドゥ ネットコム サイエンス テクノロジー カンパニー リミテッド | アクティブインタラクションの方法、装置、電子デバイス及び可読記憶媒体 |
US11734392B2 (en) | 2020-07-20 | 2023-08-22 | Beijing Baidu Netcom Science And Technology Co., Ltd. | Active interaction method, electronic device and readable storage medium |
CN112001248B (zh) * | 2020-07-20 | 2024-03-01 | 北京百度网讯科技有限公司 | 主动交互的方法、装置、电子设备和可读存储介质 |
Also Published As
Publication number | Publication date |
---|---|
DE112016001794T5 (de) | 2018-02-08 |
CN107533363B (zh) | 2020-06-30 |
US20180052521A1 (en) | 2018-02-22 |
JPWO2016167331A1 (ja) | 2017-09-28 |
CN107533363A (zh) | 2018-01-02 |
JP6355829B2 (ja) | 2018-07-11 |
US10156909B2 (en) | 2018-12-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6355829B2 (ja) | ジェスチャー認識装置、ジェスチャー認識方法、及び情報処理装置 | |
US9940507B2 (en) | Image processing device and method for moving gesture recognition using difference images | |
US8379098B2 (en) | Real time video process control using gestures | |
CN108900767B (zh) | 相机装置及其控制方法 | |
AU2010366331B2 (en) | User interface, apparatus and method for gesture recognition | |
CN102141839B (zh) | 图像处理设备和方法 | |
US20140132515A1 (en) | System and method for inputting user commands to a processor | |
US9240047B2 (en) | Recognition apparatus, method, and computer program product | |
JP2011253292A (ja) | 情報処理装置および方法、並びにプログラム | |
JP6465600B2 (ja) | 映像処理装置および映像処理方法 | |
US20140118244A1 (en) | Control of a device by movement path of a hand | |
US20180321754A1 (en) | Remote control of a desktop application via a mobile device | |
CN109040524B (zh) | 伪影消除方法、装置、存储介质及终端 | |
CN109254662A (zh) | 移动设备操作方法、装置、计算机设备及存储介质 | |
JP6575845B2 (ja) | 画像処理システム、画像処理方法及びプログラム | |
JP6021488B2 (ja) | 制御装置、制御方法、および制御プログラム | |
JP2007179338A (ja) | 情報処理装置 | |
CN112788244B (zh) | 拍摄方法、拍摄装置和电子设备 | |
US9761009B2 (en) | Motion tracking device control systems and methods | |
CN111913636A (zh) | 图像的显示处理方法、装置、设备及计算机可读存储介质 | |
CN113448427B (zh) | 设备控制方法、装置及系统 | |
US20230179867A1 (en) | Control apparatus, control method, and non-transitory storage medium | |
JP6857537B2 (ja) | 情報処理装置 | |
KR100694283B1 (ko) | 이미지 프로세싱을 이용한 pc 기반의 영상 인식 방법 | |
JP2005332061A (ja) | 操作装置、操作入力端末、操作方法、および、操作プログラム |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 16780122 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2017512586 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 15560716 Country of ref document: US |
|
WWE | Wipo information: entry into national phase |
Ref document number: 112016001794 Country of ref document: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 16780122 Country of ref document: EP Kind code of ref document: A1 |