WO2018023513A1 - Home control method based on motion recognition - Google Patents

Home control method based on motion recognition

Info

Publication number
WO2018023513A1
WO2018023513A1 (PCT/CN2016/093159, CN2016093159W)
Authority
WO
WIPO (PCT)
Prior art keywords
recognition
motion
emotion recognition
emotion
information
Prior art date
Application number
PCT/CN2016/093159
Other languages
French (fr)
Chinese (zh)
Inventor
易晓阳
Original Assignee
易晓阳
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 易晓阳 filed Critical 易晓阳
Priority to PCT/CN2016/093159 priority Critical patent/WO2018023513A1/en
Publication of WO2018023513A1 publication Critical patent/WO2018023513A1/en

Classifications

    • G – PHYSICS
    • G05 – CONTROLLING; REGULATING
    • G05B – CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B15/00 – Systems controlled by a computer
    • G05B15/02 – Systems controlled by a computer electric


Abstract

A home control method based on motion recognition, comprising: a motion recognition module (40), an emotion recognition and determination module (30), an audio signal collection module (10), a video signal collection module (20), an integrated information processing module (50), a control module (60), and a wireless transceiver module (70). The motion recognition module (40) and the emotion recognition and determination module (30) are electrically connected to the integrated information processing module (50); the audio signal collection module (10) is connected to the emotion recognition and determination module (30); the video signal collection module (20) is connected to the motion recognition module (40) and the emotion recognition and determination module (30); and the integrated information processing module (50) is electrically connected to the wireless transceiver module (70) by means of the control module (60). The integrated information processing module (50) performs integrated processing on the motion recognition result and the emotion recognition result to obtain integrated determination information, and controls, according to the integrated determination information, each of the smart home appliances to perform a corresponding operation. The invention provides the user with various options, enabling more user-friendly use and control of the smart home.

Description

Home control method based on motion recognition
Technical field
The invention relates to the field of smart home technology, and more particularly to a home control method based on motion recognition.
Background art
The smart home is an embodiment of the Internet of Things under the influence of the Internet. A smart home connects the various devices in a home through IoT technology and provides home appliance control, lighting control, telephone remote control, indoor and outdoor remote control, burglar alarms, environmental monitoring, HVAC control, infrared forwarding, programmable timing control, and other functions and means. Compared with an ordinary home, a smart home not only retains traditional living functions but also integrates building, network communication, information appliances, and equipment automation, combining systems, structure, services, and management into an efficient, comfortable, safe, convenient, and environmentally friendly living environment. It provides comprehensive information interaction functions, keeps the household in touch with the outside world, optimizes people's lifestyles, helps people arrange their time effectively, enhances the safety of home life, and even saves money on various energy costs.
As smart homes become increasingly popular, a single recognition control mode can no longer meet users' needs.
Summary of the invention
The technical problem to be solved by the present invention is to provide, in view of the above-mentioned drawbacks of the prior art, a home control method based on motion recognition.
The technical solution adopted by the present invention to solve its technical problem is as follows:
A home control method based on motion recognition is constructed, comprising the steps of:
acquiring external voice information, performing voice tone emotion recognition on the voice information to generate a first emotion recognition result, converting the voice information into text information and then performing semantic emotion recognition on the text information to generate a second emotion recognition result, and generating a user emotion recognition result from the first emotion recognition result and the second emotion recognition result according to a predetermined emotion recognition result judgment method;
acquiring external video information, performing image recognition on the video information, and determining the input motion graphic or motion trajectory;
acquiring the setting parameters input by the user and generating current recognition state information, wherein the current recognition state includes a first recognition state in which the judgment is made only according to motion recognition, a second recognition state in which the judgment is made only according to emotion recognition, and a third recognition state in which the judgment is made according to both motion recognition and emotion recognition;
when the current recognition state is the third recognition state, using the motion recognition result as a first parameter and the emotion recognition result as a second parameter, and outputting comprehensive judgment information according to a preset judgment formula;
controlling the home according to the comprehensive judgment information.
In the home control method based on motion recognition of the present invention, the step of determining the input motion graphic specifically includes:
generating a simulated motion graphic from the acquired motion images and comparing it with the motion graphics corresponding to the standard motion control modes.
In the home control method based on motion recognition of the present invention, the step of determining the input motion trajectory specifically includes:
generating a motion trajectory from the acquired video signal and comparing it with the motion trajectories corresponding to the standard motion control modes.
In the home control method based on motion recognition of the present invention, when the current recognition state is the first recognition state, the comprehensive judgment information is output directly according to the motion recognition result.
In the home control method based on motion recognition of the present invention, when the current recognition state is the second recognition state, the comprehensive judgment information is output directly according to the emotion recognition result.
In the home control method based on motion recognition of the present invention, the emotion recognition includes commendatory (positive) emotion recognition and derogatory (negative) emotion recognition.
The home control method based on motion recognition of the present invention further includes the step of:
performing image recognition judgment on the facial image information acquired by the video signal acquisition module to generate a third emotion recognition result.
In the home control method based on motion recognition of the present invention, the method for performing voice tone emotion recognition includes the steps of:
selecting a number of commendatory seed words and a number of derogatory seed words to generate a sentiment dictionary;
calculating the word similarity between the words in the text information and, respectively, the commendatory seed words and the derogatory seed words in the sentiment dictionary;
generating the second emotion recognition result by a preset semantic sentiment analysis method according to the word similarity.
The beneficial effect of the invention is that smart home control is performed through a recognition approach that combines emotion recognition and motion recognition, which provides the user with multiple options and makes the use of the smart home more user-friendly.
Brief description of the drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the present invention is further described below with reference to the accompanying drawings and embodiments. The drawings in the following description are only some embodiments of the present invention; a person of ordinary skill in the art can derive other drawings from them without creative effort.
FIG. 1 is a flow chart of a home control method based on motion recognition according to a preferred embodiment of the present invention;
FIG. 2 is a schematic block diagram of a home control system based on motion recognition according to a preferred embodiment of the present invention.
Detailed description
In order to make the objectives, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
The flow of the home control method based on motion recognition according to the preferred embodiment of the present invention is shown in FIG. 1 and includes the steps of:
Step S101: acquire external voice information, perform voice tone emotion recognition on the voice information to generate a first emotion recognition result, convert the voice information into text information and then perform semantic emotion recognition on the text information to generate a second emotion recognition result, and generate a user emotion recognition result from the first emotion recognition result and the second emotion recognition result according to a predetermined emotion recognition result judgment method;
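The patent does not disclose the predetermined emotion recognition result judgment method. The following is a minimal Python sketch of how the voice-tone result and the semantic result could be fused, assuming each recognizer returns a commendatory/derogatory label with a confidence score and that the fusion is a simple weighted vote; the class, function name, and weight are illustrative assumptions, not taken from the source.

```python
from dataclasses import dataclass

@dataclass
class EmotionResult:
    label: str    # "commendatory" (positive) or "derogatory" (negative)
    score: float  # confidence in [0.0, 1.0]

def fuse_emotion_results(first: EmotionResult, second: EmotionResult,
                         tone_weight: float = 0.5) -> EmotionResult:
    """Combine the voice-tone result (first emotion recognition result) and the
    semantic/text result (second emotion recognition result) into a single user
    emotion recognition result, using an assumed weighted-vote rule."""
    sign = {"commendatory": 1.0, "derogatory": -1.0}
    combined = (tone_weight * sign[first.label] * first.score
                + (1.0 - tone_weight) * sign[second.label] * second.score)
    label = "commendatory" if combined >= 0 else "derogatory"
    return EmotionResult(label=label, score=abs(combined))

# Example: the tone sounds positive, but the wording is mildly negative.
print(fuse_emotion_results(EmotionResult("commendatory", 0.8),
                           EmotionResult("derogatory", 0.3)))
```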
Step S102: acquire external video information, perform image recognition on the video information, and determine the input motion graphic or motion trajectory;
Step S103: acquire the setting parameters input by the user and generate current recognition state information, wherein the current recognition state includes a first recognition state in which the judgment is made only according to motion recognition, a second recognition state in which the judgment is made only according to emotion recognition, and a third recognition state in which the judgment is made according to both motion recognition and emotion recognition;
Step S104: when the current recognition state is the third recognition state, use the motion recognition result as the first parameter and the emotion recognition result as the second parameter, and output comprehensive judgment information according to the preset judgment formula.
In the above home control method based on motion recognition, the step of determining the input motion graphic specifically includes: generating a simulated motion graphic from the acquired motion images and comparing it with the motion graphics corresponding to the standard motion control modes. Determining the input motion trajectory specifically includes: generating a motion trajectory from the acquired video signal and comparing it with the motion trajectories corresponding to the standard motion control modes.
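The comparison against the standard motion control modes is left open in the patent. A minimal sketch follows, assuming a trajectory is a list of 2-D points and the best match is the standard template with the smallest mean point-wise Euclidean distance after resampling; the template names, point counts, and threshold are illustrative assumptions.

```python
import math

# Hypothetical standard motion trajectories (templates), one per control mode.
STANDARD_TRAJECTORIES = {
    "lights_on":  [(0.0, 0.0), (0.5, 0.5), (1.0, 1.0)],   # diagonal swipe up
    "lights_off": [(1.0, 1.0), (0.5, 0.5), (0.0, 0.0)],   # diagonal swipe down
    "circle":     [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0), (0.0, -1.0)],
}

def resample(points, n):
    """Linearly resample a polyline to n evenly spaced points."""
    if len(points) == 1:
        return points * n
    out = []
    for i in range(n):
        t = i * (len(points) - 1) / (n - 1)
        j, frac = int(t), t - int(t)
        j2 = min(j + 1, len(points) - 1)
        out.append((points[j][0] + frac * (points[j2][0] - points[j][0]),
                    points[j][1] + frac * (points[j2][1] - points[j][1])))
    return out

def match_trajectory(observed, n=16, threshold=0.35):
    """Return the best-matching standard control mode, or None if no template
    is close enough (mean point-wise Euclidean distance above threshold)."""
    obs = resample(observed, n)
    best_mode, best_dist = None, float("inf")
    for mode, template in STANDARD_TRAJECTORIES.items():
        tpl = resample(template, n)
        dist = sum(math.dist(a, b) for a, b in zip(obs, tpl)) / n
        if dist < best_dist:
            best_mode, best_dist = mode, dist
    return best_mode if best_dist <= threshold else None

print(match_trajectory([(0.0, 0.05), (0.4, 0.45), (0.9, 1.0)]))  # -> "lights_on"
```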
In the above home control method based on motion recognition, when the current recognition state is the first recognition state, the comprehensive judgment information is output directly according to the motion recognition result; when the current recognition state is the second recognition state, the comprehensive judgment information is output directly according to the emotion recognition result.
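The preset judgment formula for the third recognition state is not given in the patent. The sketch below assumes the motion result selects the command and the emotion result modulates it, which is only one plausible reading; the state constants, command names, and mapping are hypothetical.

```python
FIRST_STATE, SECOND_STATE, THIRD_STATE = 1, 2, 3

def comprehensive_judgment(state, motion_result=None, emotion_result=None):
    """Return comprehensive judgment information for the current recognition state.

    First state:  judge only from the motion recognition result.
    Second state: judge only from the emotion recognition result.
    Third state:  combine both (motion result = first parameter,
                  emotion result = second parameter)."""
    if state == FIRST_STATE:
        return {"command": motion_result}
    if state == SECOND_STATE:
        return {"command": "play_soothing_music" if emotion_result == "derogatory"
                           else "keep_current_scene"}
    if state == THIRD_STATE:
        # Assumed formula: motion picks the device action, emotion picks its intensity.
        intensity = "soft" if emotion_result == "derogatory" else "normal"
        return {"command": motion_result, "intensity": intensity}
    raise ValueError(f"unknown recognition state: {state}")

print(comprehensive_judgment(THIRD_STATE, motion_result="lights_on",
                             emotion_result="derogatory"))
```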
In the above home control method based on motion recognition, the motion recognition enhancement glove has a built-in infrared light source, and the outer surface of the motion recognition enhancement glove is provided with a plurality of small holes through which the infrared light emitted by the infrared light source can pass, thereby enhancing the motion recognition module's recognition of the user's motion.
In the above home control method based on motion recognition, the emotion recognition includes commendatory emotion recognition and derogatory emotion recognition.
The home control method based on motion recognition of the present invention further includes the step of:
performing image recognition judgment on the facial image information acquired by the video signal acquisition module to generate a third emotion recognition result.
In the home control method based on motion recognition of the present invention, the method for performing voice tone emotion recognition includes the steps of:
selecting a number of commendatory seed words and a number of derogatory seed words to generate a sentiment dictionary;
calculating the word similarity between the words in the text information and, respectively, the commendatory seed words and the derogatory seed words in the sentiment dictionary; and generating the second emotion recognition result by a preset semantic sentiment analysis method according to the word similarity.
In the above home control method based on motion recognition, the word similarity between the words in the text information and the commendatory seed words, and the word similarity between the words in the text information and the derogatory seed words, are calculated separately according to a semantic similarity calculation method.
A schematic block diagram of the home control system based on motion recognition according to the preferred embodiment of the present invention is shown in FIG. 2 and includes: a motion recognition module, an emotion recognition judgment module, an audio signal acquisition module, a video signal acquisition module, an integrated information processing module, a control module, and a wireless transceiver module. The motion recognition module and the emotion recognition judgment module are electrically connected to the integrated information processing module; the audio signal acquisition module is connected to the emotion recognition judgment module; the video signal acquisition module is connected to the motion recognition module and the emotion recognition judgment module; and the integrated information processing module is electrically connected to the wireless transceiver module through the control module. The integrated information processing module is configured to comprehensively process the motion recognition result and the emotion recognition result to obtain comprehensive judgment information, and to control each smart home device to perform the corresponding operation according to the comprehensive judgment information. In this embodiment, smart home control is performed through a recognition approach that combines emotion recognition and motion recognition, which provides the user with multiple options and makes the use of the smart home more user-friendly.
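A minimal sketch of how the modules of FIG. 2 could be wired together in software; the patent specifies only the connections between modules, so every class name, method, and placeholder return value here is an illustrative assumption.

```python
class AudioSignalAcquisitionModule:
    def capture(self):
        return b"<raw audio frame>"          # stand-in for microphone data

class VideoSignalAcquisitionModule:
    def capture(self):
        return b"<raw video frame>"          # stand-in for camera data

class EmotionRecognitionJudgmentModule:
    def recognize(self, audio, video):
        return "commendatory"                # placeholder emotion result

class MotionRecognitionModule:
    def recognize(self, video):
        return "lights_on"                   # placeholder motion result

class WirelessTransceiverModule:
    def send(self, judgment):
        print("sending to appliances:", judgment)

class ControlModule:
    def __init__(self, transceiver):
        self.transceiver = transceiver

    def execute(self, judgment):
        self.transceiver.send(judgment)

class IntegratedInformationProcessingModule:
    """Combines the motion and emotion results into comprehensive judgment
    information and drives the control module, mirroring the FIG. 2 connections."""
    def __init__(self, audio, video, emotion, motion, control):
        self.audio, self.video = audio, video
        self.emotion, self.motion = emotion, motion
        self.control = control

    def run_once(self):
        audio_frame = self.audio.capture()
        video_frame = self.video.capture()
        judgment = {
            "command": self.motion.recognize(video_frame),
            "emotion": self.emotion.recognize(audio_frame, video_frame),
        }
        self.control.execute(judgment)

system = IntegratedInformationProcessingModule(
    AudioSignalAcquisitionModule(), VideoSignalAcquisitionModule(),
    EmotionRecognitionJudgmentModule(), MotionRecognitionModule(),
    ControlModule(WirelessTransceiverModule()))
system.run_once()
```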
In the above home control system based on motion recognition, the integrated information processing module includes: a recognition state judgment unit, configured to determine the current recognition state according to the setting parameters input by the user, wherein the current recognition state includes a first recognition state in which the judgment is made only according to motion recognition, a second recognition state in which the judgment is made only according to emotion recognition, and a third recognition state in which the judgment is made according to both motion recognition and emotion recognition.
In the above home control system based on motion recognition, the integrated information processing module further includes: a comprehensive judgment information generating unit, configured to, when the current recognition state is the third recognition state, use the motion recognition result as the first parameter and the emotion recognition result as the second parameter and output comprehensive judgment information according to the preset judgment formula.
In the above home control system based on motion recognition, the emotion recognition judgment module includes: a first emotion recognition unit, configured to perform voice tone emotion recognition on the voice information to generate a first emotion recognition result; a second emotion recognition unit, configured to convert the voice information into text information and then perform semantic emotion recognition on the text information to generate a second emotion recognition result; and an emotion recognition result output unit, configured to generate the user emotion recognition result from the first emotion recognition result and the second emotion recognition result according to the predetermined emotion recognition result judgment method.
In the above home control system based on motion recognition, the emotion recognition includes commendatory emotion recognition and derogatory emotion recognition. For example, a number of commendatory seed words and a number of derogatory seed words are selected to generate a sentiment dictionary; the word similarity between the words in the text information and, respectively, the commendatory seed words and the derogatory seed words in the sentiment dictionary is calculated; and the second emotion recognition result is generated by a preset semantic sentiment analysis system according to the word similarity. Specifically, the word similarity between the words in the text information and the commendatory seed words and the word similarity between the words in the text information and the derogatory seed words may be calculated separately according to a semantic similarity calculation system.
In the above embodiment, generating the second emotion recognition result by the preset semantic sentiment analysis system according to the word similarity specifically proceeds as follows: a word sentiment tendency value is calculated by a word sentiment tendency formula; when the word sentiment tendency value is greater than a predetermined threshold, the word in the text information is judged to express commendatory (positive) emotion; when the word sentiment tendency value is less than the predetermined threshold, the word in the text information is judged to express derogatory (negative) emotion.
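The word sentiment tendency formula itself is not specified. A common choice, assumed in the sketch below, is the mean similarity to the commendatory seed words minus the mean similarity to the derogatory seed words, with a toy character-overlap measure standing in for a real semantic similarity (for example, a HowNet- or embedding-based score); the seed words and threshold are illustrative.

```python
# Hypothetical seed words forming the sentiment dictionary.
COMMENDATORY_SEEDS = ["good", "happy", "comfortable", "bright"]
DEROGATORY_SEEDS   = ["bad", "angry", "tired", "dark"]

def similarity(w1: str, w2: str) -> float:
    """Toy stand-in for a semantic similarity measure: character-set overlap in [0, 1]."""
    a, b = set(w1), set(w2)
    return len(a & b) / len(a | b) if (a or b) else 0.0

def sentiment_tendency(word: str) -> float:
    """Assumed word sentiment tendency formula: mean similarity to the
    commendatory seeds minus mean similarity to the derogatory seeds."""
    pos = sum(similarity(word, s) for s in COMMENDATORY_SEEDS) / len(COMMENDATORY_SEEDS)
    neg = sum(similarity(word, s) for s in DEROGATORY_SEEDS) / len(DEROGATORY_SEEDS)
    return pos - neg

def classify_word(word: str, threshold: float = 0.0) -> str:
    """Judge the word as commendatory when its tendency value exceeds the threshold,
    otherwise as derogatory."""
    return "commendatory" if sentiment_tendency(word) > threshold else "derogatory"

for w in ["good", "bad"]:
    print(w, round(sentiment_tendency(w), 3), classify_word(w))
```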
In the above home control system based on motion recognition, the emotion recognition judgment module includes: a third emotion recognition unit, configured to perform image recognition judgment on the facial image information acquired by the video signal acquisition module to generate a third emotion recognition result.
It should be understood that those skilled in the art can make improvements or modifications based on the above description, and all such improvements and modifications shall fall within the protection scope of the appended claims of the present invention.

Claims (8)

  1. A home control method based on motion recognition, characterized by comprising the steps of:
    acquiring external voice information, performing voice tone emotion recognition on the voice information to generate a first emotion recognition result, converting the voice information into text information and then performing semantic emotion recognition on the text information to generate a second emotion recognition result, and generating a user emotion recognition result from the first emotion recognition result and the second emotion recognition result according to a predetermined emotion recognition result judgment method;
    acquiring external video information, performing image recognition on the video information, and determining the input motion graphic or motion trajectory;
    acquiring the setting parameters input by the user and generating current recognition state information, wherein the current recognition state includes a first recognition state in which the judgment is made only according to motion recognition, a second recognition state in which the judgment is made only according to emotion recognition, and a third recognition state in which the judgment is made according to both motion recognition and emotion recognition;
    when the current recognition state is the third recognition state, using the motion recognition result as a first parameter and the emotion recognition result as a second parameter, and outputting comprehensive judgment information according to a preset judgment formula;
    controlling the home according to the comprehensive judgment information.
  2. The home control method based on motion recognition according to claim 1, characterized in that the step of determining the input motion graphic specifically comprises:
    generating a simulated motion graphic from the acquired motion images and comparing it with the motion graphics corresponding to the standard motion control modes.
  3. The home control method based on motion recognition according to claim 1, characterized in that the step of determining the input motion trajectory specifically comprises:
    generating a motion trajectory from the acquired video signal and comparing it with the motion trajectories corresponding to the standard motion control modes.
  4. The home control method based on motion recognition according to claim 1, characterized in that when the current recognition state is the first recognition state, the comprehensive judgment information is output directly according to the motion recognition result.
  5. The home control method based on motion recognition according to claim 1, characterized in that when the current recognition state is the second recognition state, the comprehensive judgment information is output directly according to the emotion recognition result.
  6. The home control method based on motion recognition according to claim 1, characterized in that the emotion recognition comprises commendatory emotion recognition and derogatory emotion recognition.
  7. The home control method based on motion recognition according to claim 1, characterized by further comprising the step of:
    performing image recognition judgment on the facial image information acquired by the video signal acquisition module to generate a third emotion recognition result.
  8. The home control method based on motion recognition according to claim 1, characterized in that the method for performing voice tone emotion recognition comprises the steps of:
    selecting a number of commendatory seed words and a number of derogatory seed words to generate a sentiment dictionary;
    calculating the word similarity between the words in the text information and, respectively, the commendatory seed words and the derogatory seed words in the sentiment dictionary;
    generating the second emotion recognition result by a preset semantic sentiment analysis method according to the word similarity.
PCT/CN2016/093159 2016-08-04 2016-08-04 Home control method based on motion recognition WO2018023513A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/093159 WO2018023513A1 (en) 2016-08-04 2016-08-04 Home control method based on motion recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/093159 WO2018023513A1 (en) 2016-08-04 2016-08-04 Home control method based on motion recognition

Publications (1)

Publication Number Publication Date
WO2018023513A1 true WO2018023513A1 (en) 2018-02-08

Family

ID=61072334

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/093159 WO2018023513A1 (en) 2016-08-04 2016-08-04 Home control method based on motion recognition

Country Status (1)

Country Link
WO (1) WO2018023513A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109283849A (en) * 2018-09-10 2019-01-29 缙云县科耳沃自动化科技有限公司 A kind of Intelligent house system
CN112286062A (en) * 2020-09-23 2021-01-29 青岛经济技术开发区海尔热水器有限公司 Intelligent household appliance linkage control method and device, electronic equipment and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102819751A (en) * 2012-08-21 2012-12-12 长沙纳特微视网络科技有限公司 Man-machine interaction method and device based on action recognition
CN102932212A (en) * 2012-10-12 2013-02-13 华南理工大学 Intelligent household control system based on multichannel interaction manner
CN104102181A (en) * 2013-04-10 2014-10-15 海尔集团公司 Intelligent home control method, device and system
WO2015088141A1 (en) * 2013-12-11 2015-06-18 Lg Electronics Inc. Smart home appliances, operating method of thereof, and voice recognition system using the smart home appliances
CN104765278A (en) * 2015-04-20 2015-07-08 宇龙计算机通信科技(深圳)有限公司 Smart home equipment control method and electronic equipment
CN105334743A (en) * 2015-11-18 2016-02-17 深圳创维-Rgb电子有限公司 Intelligent home control method and system based on emotion recognition

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102819751A (en) * 2012-08-21 2012-12-12 长沙纳特微视网络科技有限公司 Man-machine interaction method and device based on action recognition
CN102932212A (en) * 2012-10-12 2013-02-13 华南理工大学 Intelligent household control system based on multichannel interaction manner
CN104102181A (en) * 2013-04-10 2014-10-15 海尔集团公司 Intelligent home control method, device and system
WO2015088141A1 (en) * 2013-12-11 2015-06-18 Lg Electronics Inc. Smart home appliances, operating method of thereof, and voice recognition system using the smart home appliances
CN104765278A (en) * 2015-04-20 2015-07-08 宇龙计算机通信科技(深圳)有限公司 Smart home equipment control method and electronic equipment
CN105334743A (en) * 2015-11-18 2016-02-17 深圳创维-Rgb电子有限公司 Intelligent home control method and system based on emotion recognition

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109283849A (en) * 2018-09-10 2019-01-29 缙云县科耳沃自动化科技有限公司 A kind of Intelligent house system
CN112286062A (en) * 2020-09-23 2021-01-29 青岛经济技术开发区海尔热水器有限公司 Intelligent household appliance linkage control method and device, electronic equipment and storage medium
CN112286062B (en) * 2020-09-23 2023-02-28 青岛经济技术开发区海尔热水器有限公司 Intelligent household appliance linkage control method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
JP6690031B2 (en) System control method, system, and program
CN112863547B (en) Virtual resource transfer processing method, device, storage medium and computer equipment
CN105118257B (en) Intelligent control system and method
US9781575B1 (en) Autonomous semantic labeling of physical locations
CN105471705B (en) Intelligent control method, equipment and system based on instant messaging
CN109188927A (en) Appliance control method, device, gateway and storage medium
WO2020244573A1 (en) Voice instruction processing method and device, and control system
CN112051743A (en) Device control method, conflict processing method, corresponding devices and electronic device
WO2019184300A1 (en) Ai control element, smart household control system and control method
CN109951363B (en) Data processing method, device and system
WO2018023515A1 (en) Gesture and emotion recognition home control system
WO2018006374A1 (en) Function recommending method, system, and robot based on automatic wake-up
WO2017166462A1 (en) Method and system for reminding about change in environment, and head-mounted vr device
US11875571B2 (en) Smart hearing assistance in monitored property
CN106228989A (en) A kind of interactive voice identification control method
WO2018023523A1 (en) Motion and emotion recognizing home control system
WO2018023514A1 (en) Home background music control system
WO2018023513A1 (en) Home control method based on motion recognition
CN106251871A (en) A kind of Voice command music this locality playing device
WO2018023518A1 (en) Smart terminal for voice interaction and recognition
CN106019977A (en) Gesture and emotion recognition home control system
WO2018023517A1 (en) Voice interactive recognition control system
WO2018023512A1 (en) Furniture control method using multi-dimensional recognition
WO2018023516A1 (en) Voice interaction recognition and control method
WO2018023524A1 (en) Intelligent door-opening method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16911110

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 28/06/2019)

122 Ep: pct application non-entry in european phase

Ref document number: 16911110

Country of ref document: EP

Kind code of ref document: A1