WO2020192215A1 - Interaction method and wearable interaction device - Google Patents

Interaction method and wearable interaction device

Info

Publication number
WO2020192215A1
WO2020192215A1 (PCT/CN2019/128643)
Authority
WO
WIPO (PCT)
Prior art keywords
information
feature information
image feature
control system
main control
Prior art date
Application number
PCT/CN2019/128643
Other languages
English (en)
Chinese (zh)
Inventor
更藏多杰
Original Assignee
更藏多杰
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 更藏多杰
Publication of WO2020192215A1

Links

Images

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487: Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16: Sound input; Sound output

Definitions

  • the present invention relates to the field of communication technology, in particular to an interaction method and a wearable interaction device.
  • Existing smart watches still rely mainly on the touch screen for interaction and operation. Because the touch screen of a smart watch is small and cannot be operated with one hand, it does not reflect the convenience a smart watch should offer, and the experience is not sufficiently differentiated from that of a smartphone, which leaves smart watches feeling dispensable.
  • the present invention provides an interaction method and a wearable interaction device, aiming to improve the convenience of interaction with a smart watch.
  • the present invention provides an interaction method, which includes the following steps:
  • the identification control unit obtains the depth-of-field image and extracts features to obtain image feature information;
  • The wake-up instruction is obtained as follows: collect sound information in real time, and if it is valid sound information, the sound information is the wake-up instruction; collect geographic location information in real time, and if it falls within a designated area, the geographic location information is the wake-up instruction.
  • Whether the image feature information is gesture information is judged by comparing it one by one with several gesture feature images pre-stored in the main control system.
  • Whether the image feature information includes a device to be interacted with is judged by comparing the image feature information with several device images pre-stored in the main control system, or by uploading the image feature information to the cloud through the main control system and using big data for identification and judgment.
  • the method further includes:
  • The present invention also provides a wearable interaction device. The wearable interaction device includes a main control system provided with an identification control unit, and further includes:
  • a collection module, used to collect sound information and geographic location information in real time;
  • a first analysis module, connected to the collection module, used to analyze the sound information and the geographic location information, determine whether the sound information is valid sound information and/or whether the geographic location information falls within a designated area, and obtain a first judgment result;
  • a first execution module, connected to the first analysis module, configured to wake up the main control system and start the identification control unit when the first judgment result is yes;
  • the identification control unit includes:
  • a depth-of-field image acquisition module, used to acquire depth-of-field images;
  • an extraction module, connected to the depth-of-field image acquisition module, used to extract features from the depth-of-field image to obtain image feature information;
  • a second analysis module, connected to the extraction module, used to compare the image feature information with a number of gesture feature images pre-stored in the main control system, determine whether the image feature information is gesture feature information, and obtain a second judgment result;
  • a second execution module, connected to the second analysis module, configured, when the second judgment result is yes, to call and execute a control instruction matching the gesture feature information based on the gesture feature information;
  • a third analysis module, connected to the second analysis module, used, when the second judgment result is no, to compare the image feature information with a number of device images pre-stored in the main control system, or to upload the image feature information to the cloud through the main control system and use big data for identification, judge whether the image feature information includes a device to be interacted with, and obtain a third judgment result;
  • a third execution module, connected to the third analysis module, used to output link prompt information when the third judgment result is yes.
  • the collection module includes a bone conduction microphone and a positioning device, and the sound information is the sound of a thumb clicking the fingertip of the middle finger.
  • the valid sound information is the sound of the thumb tapping the fingertip of the middle finger twice or three times in succession.
  • the depth-of-field image acquisition module is a ToF (time-of-flight) camera.
  • the identification control unit further includes:
  • a fourth execution module, connected to the third analysis module, used, when the third judgment result is no, to identify the image feature information using big data and to obtain and output the item information represented by the image feature information.
  • The interaction method and wearable interaction device provided by the present invention wake up the main control system by acquiring a wake-up instruction, and then interconnect with and control other devices by collecting and recognizing the wearer's gestures. This not only simplifies operation for the wearer but also enables one-handed operation, improving the wearer's experience.
  • FIG. 1 is a method flowchart of an interaction method provided by an embodiment of the present invention
  • FIG. 2 is a structural block diagram of a wearable device provided by an embodiment of the present invention.
  • terms indicating orientation or positional relationships are based on the orientations or positional relationships shown in the drawings, on the orientation or position in which the product of the invention is customarily placed when used, or on the orientation or position commonly understood by those skilled in the art. They are used only for convenience in describing and simplifying the present invention, and do not indicate or imply that the device or element referred to must have a specific orientation or be constructed and operated in a specific orientation; they therefore cannot be understood as limiting the present invention.
  • the terms “first” and “second” are only used for distinguishing description, and cannot be understood as indicating or implying relative importance.
  • an embodiment of the present invention provides an interaction method.
  • the interaction method includes the following steps:
  • Step S1: acquire a wake-up instruction;
  • Step S2: wake up the main control system based on the wake-up instruction, and start the identification control unit of the main control system;
  • Step S3: the identification control unit obtains a depth-of-field image and extracts features to obtain image feature information;
  • Step S4: determine whether the image feature information is gesture feature information, and obtain a first judgment result;
  • Step S5: when the first judgment result is yes, call and execute, based on the gesture feature information, a control instruction matching the gesture feature information;
  • Step S6: when the first judgment result is no, judge whether the image feature information includes a device to be interacted with, and obtain a second judgment result;
  • Step S7: when the second judgment result is yes, output link prompt information.
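For orientation, steps S1 to S7 can be sketched as a short Python flow. This is an illustrative sketch only, not part of the patent disclosure; every function and field name below is a hypothetical stand-in.

```python
# Minimal sketch of steps S1-S7; all names are hypothetical stand-ins.
from dataclasses import dataclass, field

@dataclass
class MainControlSystem:
    awake: bool = False
    gesture_templates: dict = field(default_factory=dict)  # features -> control instruction
    device_templates: dict = field(default_factory=dict)   # features -> device name

def acquire_wake_up_instruction() -> bool:
    return True  # Step S1: stands in for a valid tap sound or a designated-area location

def acquire_depth_image() -> str:
    return "depth_frame"  # Step S3a: stands in for a depth-of-field frame

def extract_features(depth_image: str) -> str:
    return "pinch_gesture"  # Step S3b: stands in for feature extraction

def interact(mcs: MainControlSystem) -> None:
    if not acquire_wake_up_instruction():                # Step S1
        return
    mcs.awake = True                                     # Step S2: wake the main control system
    features = extract_features(acquire_depth_image())   # Step S3
    instruction = mcs.gesture_templates.get(features)    # Step S4: is it a known gesture?
    if instruction is not None:                          # Step S5: execute the matched instruction
        print("executing:", instruction)
    elif features in mcs.device_templates:               # Step S6: is it a device to interact with?
        print("link prompt:", mcs.device_templates[features])  # Step S7

interact(MainControlSystem(gesture_templates={"pinch_gesture": "answer_call"}))
```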
  • The main control system in the embodiment of the present invention is the general control system found in existing smart devices and has the usual functions of such a control system: it can work independently using its own resources, and it can also communicate with the cloud and use cloud resources, which will not be repeated here.
  • There are many types of wake-up instruction for step S1, such as voice wake-up instructions and touch wake-up instructions.
  • The embodiment of the present invention uses geographic location information and the sound of the thumb clicking the fingertip of the middle finger as the wake-up instruction.
  • The specific acquisition method is: collect sound information in real time, and if it is valid sound information, take the sound information as the wake-up instruction; collect geographic location information in real time, and if it falls within a designated area, take the geographic location information as the wake-up instruction.
  • The valid sound information is the sound of the thumb tapping the fingertip of the middle finger twice or three times in succession.
  • The designated area is defined by the user, such as the home or the office.
  • In this way, the main control system works at full capacity when in use and stays in standby when not in use, which greatly reduces energy consumption.
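The dual trigger described above (a valid tap sound or presence in a designated area) can be illustrated with a small sketch. The coordinates, the radius, and the flat-earth distance approximation are assumptions made purely for illustration.

```python
# Illustrative wake-up check: either trigger alone wakes the main control system.
DESIGNATED_AREAS = {"home": (39.90, 116.40, 0.5)}  # name -> (lat, lon, radius_km), user-defined

def in_designated_area(lat: float, lon: float) -> bool:
    """Crude geofence; ~111 km per degree is adequate for a sketch."""
    for a_lat, a_lon, radius_km in DESIGNATED_AREAS.values():
        if ((lat - a_lat) ** 2 + (lon - a_lon) ** 2) ** 0.5 * 111 <= radius_km:
            return True
    return False

def should_wake(valid_tap_sound: bool, lat: float, lon: float) -> bool:
    return valid_tap_sound or in_designated_area(lat, lon)

print(should_wake(False, 39.901, 116.401))  # True: inside the "home" geofence
```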
  • There are many methods for judging whether the image feature information is gesture information in step S4. The embodiment of the present invention specifically compares the image feature information one by one with several gesture feature images pre-stored in the main control system.
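The patent does not specify the comparison metric, so as one plausible reading, the one-by-one comparison could be a nearest-template search over extracted feature vectors; the cosine-similarity metric and the 0.9 threshold below are assumptions.

```python
# Hypothetical one-by-one template comparison over feature vectors.
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def match_gesture(features: list[float], templates: dict, threshold: float = 0.9):
    """Compare the features against each pre-stored gesture template in turn;
    return the best-matching gesture name, or None if no template passes."""
    best_name, best_score = None, threshold
    for name, template in templates.items():
        score = cosine_similarity(features, template)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

templates = {"pinch": [1.0, 0.0, 0.5], "swipe": [0.0, 1.0, 0.2]}
print(match_gesture([0.98, 0.05, 0.48], templates))  # -> "pinch"
```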
  • the method for determining whether the image feature information includes the device to be interacted in step S6 is specifically: comparing the image feature information with a number of device images pre-stored in the main control system, or comparing The image feature information is uploaded to the cloud through the main control system, and big data is used for identification and judgment.
  • Uploading the image feature information to the cloud through the main control system and using big data for identification and judgment is possible only when the main control system is connected to the cloud. If the main control system is not connected to the cloud, whether the image feature information includes a device to be interacted with can only be judged by comparing the image feature information with the several device images pre-stored in the main control system. When the main control system is connected to the cloud, the image feature information is first uploaded to the cloud through the main control system, and big data is used for identification and judgment.
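The connectivity-dependent fallback can be summarized in a few lines. The cloud client and its identify call are hypothetical stand-ins; the patent only requires cloud identification when connected and local comparison otherwise.

```python
# Sketch of the connectivity-dependent identification path.
def identify_device(features, local_device_templates: dict, cloud=None):
    """Use cloud/big-data identification when connected; otherwise fall back
    to comparison against locally pre-stored device images."""
    if cloud is not None and cloud.connected:
        return cloud.identify(features)      # hypothetical cloud API
    for device_name, template in local_device_templates.items():
        if features == template:             # placeholder comparison
            return device_name
    return None

print(identify_device("tv_outline", {"living_room_tv": "tv_outline"}))  # -> living_room_tv
```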
  • the interaction method provided in the embodiment of the present invention further includes:
  • Use big data to identify the image feature information, and obtain and output the item information represented by the image feature information. For example, if big-data identification finds that the image feature information is clothing, information related to the clothing is output, such as similar items and their prices; if it finds that the image feature information is beer, the relevant information corresponding to that beer is output, such as brand, alcohol content and selling price.
  • an embodiment of the present invention also provides a wearable interactive device.
  • the wearable interactive device includes a main control system 100, and the main control system 100 is provided with an identification control unit 200,
  • the wearable interactive device also includes:
  • the collection module 300 is used to collect sound information and geographic location information in real time;
  • the first analysis module 400, connected to the collection module 300, is used to analyze the sound information and the geographic location information, determine whether the sound information is valid sound information and/or whether the geographic location information falls within a designated area, and obtain a first judgment result;
  • the first execution module 500 is connected to the first analysis module 400 and is used to wake up the main control system 100 and start the identification control unit 200 when the first judgment result is yes;
  • the identification control unit 200 includes:
  • the depth-of-field image acquisition module 210 is used to acquire a depth-of-field image;
  • the extraction module 220, connected to the depth-of-field image acquisition module 210, is used to extract features from the depth-of-field image to obtain image feature information;
  • the second analysis module 230, connected to the extraction module 220, is used to compare the image feature information with a number of gesture feature images pre-stored in the main control system, determine whether the image feature information is gesture feature information, and obtain a second judgment result;
  • the second execution module 240 is connected to the second analysis module 230, and is configured to call and execute a control instruction matching the gesture feature information based on the gesture feature information when the second judgment result is yes;
  • the third analysis module 250, connected to the second analysis module 230, is used, when the second judgment result is no, to compare the image feature information with a number of device images pre-stored in the main control system, or to upload the image feature information to the cloud through the main control system and use big data for identification, judge whether the image feature information includes a device to be interacted with, and obtain a third judgment result;
  • the third execution module 260 is connected to the third analysis module 250 and is configured to output link prompt information when the third judgment result is yes.
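To make the wiring of modules 300, 400 and 500 concrete, here is a stubbed sketch; the sample data and the trivially simplified analysis are illustrative assumptions, not the patent's logic.

```python
# Stubbed collection -> analysis -> execution chain (modules 300, 400, 500).
class CollectionModule:                                    # module 300
    def collect(self) -> dict:
        # stands in for the bone conduction microphone + positioning device
        return {"tap_times": [0.0, 0.3], "location": (39.901, 116.401)}

class FirstAnalysisModule:                                 # module 400
    def analyze(self, sample: dict) -> bool:
        valid_sound = len(sample["tap_times"]) in (2, 3)   # placeholder tap check
        in_area = sample["location"] is not None           # placeholder geofence
        return valid_sound or in_area

class FirstExecutionModule:                                # module 500
    def execute(self, judgment: bool, main_control: dict) -> None:
        if judgment:
            main_control["awake"] = True                   # wake 100, start 200

main_control = {"awake": False}
chain = (CollectionModule(), FirstAnalysisModule(), FirstExecutionModule())
chain[2].execute(chain[1].analyze(chain[0].collect()), main_control)
print(main_control)  # {'awake': True}
```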
  • the wearable interaction device provided by the embodiment of the present invention may be a smart watch or a smart wristband, which is not limited here.
  • The sound information collected in the embodiment of the present invention is the sound made by the thumb clicking the fingertip of the middle finger. Accordingly, the collection module 300 specifically includes a bone conduction microphone and a positioning device: the bone conduction microphone collects the sound of the thumb clicking the middle fingertip in real time, and the positioning device obtains geographic location information in real time.
  • Bone conduction microphones feature a short sound-collection distance and low loss: the short collection distance avoids interference from external environmental sound, and the low loss allows very faint sounds to be captured, ensuring that the sound of the thumb tapping the middle finger is reliably obtained.
  • The positioning device in the embodiment of the present invention may be a GPS positioning chip, a BeiDou positioning chip, or another device capable of positioning, such as a Bluetooth direction-finding device, which is not limited here.
  • The valid sound information in the embodiment of the present invention is specifically the sound of the thumb tapping the fingertip of the middle finger twice or three times in succession. Treating only these tap patterns as valid effectively prevents an accidental touch of the thumb against the middle finger from falsely waking the device, further saving energy.
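A sketch of the double/triple-tap filter: group tap events into a short window and accept only two- or three-tap patterns. The one-second window is an assumed value, not taken from the patent.

```python
# Hypothetical tap-pattern filter against accidental single touches.
def count_taps(event_times: list[float], window_s: float = 1.0) -> int:
    """Count tap events falling in one cluster starting at the first event."""
    if not event_times:
        return 0
    start = event_times[0]
    return sum(1 for t in event_times if t - start <= window_s)

def is_wake_pattern(event_times: list[float]) -> bool:
    return count_taps(event_times) in (2, 3)

print(is_wake_pattern([0.00, 0.25]))          # True: double tap
print(is_wake_pattern([0.00]))                # False: single accidental touch
print(is_wake_pattern([0.0, 0.2, 0.4, 0.6]))  # False: too many taps
```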
  • The depth-of-field image acquisition module 210 in the embodiment of the present invention specifically adopts a ToF (time-of-flight) camera.
  • A ToF camera obtains richer positional relationships between objects through per-pixel distance information and can quickly identify and track a target.
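One way to see why per-pixel distance helps: a hand held near the camera can be separated from the background with a bare depth threshold. The 0.6 m threshold and the toy depth map below are illustrative assumptions.

```python
# Hypothetical depth-threshold segmentation of the near hand.
def segment_hand(depth_map: list[list[float]], max_hand_distance_m: float = 0.6):
    """Return a binary mask marking pixels closer than the threshold."""
    return [[1 if d <= max_hand_distance_m else 0 for d in row] for row in depth_map]

depth = [
    [0.45, 0.50, 2.10],  # hand pixels at ~0.5 m, background at ~2 m
    [0.48, 2.05, 2.12],
]
print(segment_hand(depth))  # [[1, 1, 0], [1, 0, 0]]
```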
  • the identification control unit 200 in the embodiment of the present invention further includes:
  • the fourth execution module 270, connected to the third analysis module 250, is used, when the third judgment result is no, to identify the image feature information using big data and to obtain and output the item information represented by the image feature information. For example, if big-data identification finds that the image feature information is clothing, information related to the clothing is output, such as similar items and their prices; if it finds that the image feature information is beer, the relevant information corresponding to that beer is output, such as brand, alcohol content and selling price.
  • the fourth execution module greatly expands the functions of the wearable interactive device, and further improves the user experience.
  • The interaction method and wearable interaction device provided by the present invention wake up the main control system by acquiring a wake-up instruction, and then interconnect with and control other devices by collecting and recognizing the wearer's gestures. This not only simplifies operation for the wearer but also enables one-handed operation, improving the wearer's experience.
  • The embodiments of the present invention may be provided as methods, systems, or computer program products. Therefore, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) containing computer-usable program code.
  • These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing equipment to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device that implements the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • These computer program instructions may also be loaded onto a computer or other programmable data processing equipment, so that a series of operational steps is executed on the computer or other programmable equipment to produce computer-implemented processing; the instructions executed on the computer or other programmable equipment thus provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present invention belongs to the technical field of communications and relates to an interaction method and a wearable interaction device. The interaction method comprises: acquiring a wake-up instruction; waking a main control system on the basis of the wake-up instruction, and starting an identification control unit of the main control system; acquiring, by means of the identification control unit, a depth-of-field image, and extracting features to obtain image feature information; determining whether the image feature information is gesture feature information, so as to obtain a first determination result; when the first determination result is yes, calling and executing, on the basis of the gesture feature information, a control instruction matching the gesture feature information; when the first determination result is no, determining whether the image feature information contains a device to interact with, so as to obtain a second determination result; and outputting link prompt information when the second determination result is yes. The interaction method and the wearable interaction device of the present invention connect with and control other devices by collecting and identifying the gestures of a wearer, which simplifies the wearer's operations and improves the wearer's experience.
PCT/CN2019/128643 2019-03-28 2019-12-26 Interaction method and wearable interaction device WO2020192215A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910245411.X 2019-03-28
CN201910245411.XA CN109917922A (zh) 2019-03-28 2019-03-28 Interaction method and wearable interaction device

Publications (1)

Publication Number Publication Date
WO2020192215A1 true WO2020192215A1 (fr) 2020-10-01

Family

ID=66967447

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/128643 WO2020192215A1 (fr) 2019-03-28 2019-12-26 Interaction method and wearable interaction device

Country Status (2)

Country Link
CN (1) CN109917922A (fr)
WO (1) WO2020192215A1 (fr)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109917922A (zh) * 2019-03-28 2019-06-21 更藏多杰 Interaction method and wearable interaction device
CN110780743A (zh) * 2019-11-05 2020-02-11 聚好看科技股份有限公司 VR interaction method and VR device
CN111080537B (zh) * 2019-11-25 2023-09-12 厦门大学 Intelligent control method, medium, device and system for an underwater robot
CN114785954A (zh) * 2022-04-27 2022-07-22 深圳影目科技有限公司 Processor wake-up method and apparatus, system, storage medium, and AR glasses

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
SE0201434L (sv) * 2002-05-10 2003-10-14 Henrik Dryselius Device for inputting control signals to an electronic apparatus
CN103226443A (zh) * 2013-04-02 2013-07-31 百度在线网络技术(北京)有限公司 Control method and apparatus for smart glasses, and smart glasses
CN104410883B (zh) * 2014-11-29 2018-04-27 华南理工大学 Mobile wearable non-contact interaction system and method
CN104484037A (zh) * 2014-12-12 2015-04-01 三星电子(中国)研发中心 Method for intelligent control by means of a wearable device, and the wearable device
CN105101565A (zh) * 2015-09-01 2015-11-25 广西南宁智翠科技咨询有限公司 Method for turning on a vehicle ambience light
CN105204742B (zh) * 2015-09-28 2019-07-09 小米科技有限责任公司 Control method and apparatus for an electronic device, and terminal
CN107450717B (zh) * 2016-05-31 2021-05-18 联想(北京)有限公司 Information processing method and wearable device
CN106775206B (zh) * 2016-11-24 2020-05-22 广东小天才科技有限公司 Screen wake-up method and apparatus for a user terminal, and user terminal
CN106774850B (zh) * 2016-11-24 2020-06-30 深圳奥比中光科技有限公司 Mobile terminal and interaction control method therefor
CN106777071B (zh) * 2016-12-12 2021-03-05 北京奇虎科技有限公司 Method and apparatus for obtaining reference information through image recognition
CN107172744A (zh) * 2017-06-02 2017-09-15 单广会 Finger-snap sound-controlled bedroom ambience light and working method thereof

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180067644A1 (en) * 2015-11-24 2018-03-08 International Business Machines Corporation Gesture recognition and control based on finger differentiation
CN106095178A (zh) * 2016-06-14 2016-11-09 广州视睿电子科技有限公司 Input device recognition method and system, and input instruction recognition method and system
CN107517313A (zh) * 2017-08-22 2017-12-26 珠海市魅族科技有限公司 Wake-up method and apparatus, terminal, and readable storage medium
CN208547816U (zh) * 2018-08-20 2019-02-26 更藏多杰 Smart watch
CN109917922A (zh) * 2019-03-28 2019-06-21 更藏多杰 Interaction method and wearable interaction device

Also Published As

Publication number Publication date
CN109917922A (zh) 2019-06-21

Similar Documents

Publication Publication Date Title
  • WO2020192215A1 (fr) Interaction method and wearable interaction device
US10796694B2 (en) Optimum control method based on multi-mode command of operation-voice, and electronic device to which same is applied
US10778830B2 (en) Electronic device and method for performing task using external device by electronic device
  • EP3120298B1 (fr) Methods and devices for establishing a communication connection between electronic devices
  • CN108023934B (zh) Electronic device and control method thereof
  • TWI665584B (zh) Voice control system and method
  • KR102453603B1 (ko) Electronic device and control method thereof
  • EP3258423B1 (fr) Method and apparatus for handwriting recognition
US10825453B2 (en) Electronic device for providing speech recognition service and method thereof
  • WO2018000200A1 (fr) Terminal for controlling an electronic device and processing method therefor
  • EP2680110B1 (fr) Method and apparatus for processing multiple inputs
  • WO2017143948A1 (fr) Method for activating an intelligent robot, and intelligent robot
  • CN102932212A (zh) Smart home control system based on multi-channel interaction
US10991372B2 (en) Method and apparatus for activating device in response to detecting change in user head feature, and computer readable storage medium
  • CN110730115B (zh) Voice control method and apparatus, terminal, and storage medium
  • JP2019532543A (ja) Control system, and control processing method and apparatus
US11720814B2 (en) Method and system for classifying time-series data
  • WO2019214442A1 (fr) Device control method, apparatus, control device, and storage medium
  • CN113671846B (zh) Smart device control method and apparatus, wearable device, and storage medium
  • WO2020135334A1 (fr) Television application theme switching method, television, readable storage medium, and device
  • WO2017070971A1 (fr) Electronic device and facial recognition method
  • CN107870674B (zh) Program starting method and mobile terminal
US20150153827A1 (en) Controlling connection of input device to electronic devices
US11620995B2 (en) Voice interaction processing method and apparatus
  • WO2015131590A1 (fr) Method for controlling blank-screen gesture processing, and terminal

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19921712

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19921712

Country of ref document: EP

Kind code of ref document: A1