WO2018014432A1 - Voice application trigger control method, device and terminal - Google Patents

Voice application trigger control method, device and terminal

Info

Publication number
WO2018014432A1
Authority
WO
WIPO (PCT)
Prior art keywords
terminal
acceleration vector
motion
voice application
motion mode
Prior art date
Application number
PCT/CN2016/097714
Other languages
English (en)
French (fr)
Inventor
张翀
Original Assignee
ZTE Corporation (中兴通讯股份有限公司)
Priority date
Filing date
Publication date
Application filed by ZTE Corporation (中兴通讯股份有限公司)
Publication of WO2018014432A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs

Definitions

  • the present invention relates to the field of intelligent terminals, and in particular, to a voice application trigger control method, apparatus, and terminal.
  • various voice wake-up solutions need to add a dedicated DSP (Digital Signal Processing) chip to the underlying hardware of the mobile phone.
  • the chip is physically connected to the CPU (Central Processing Unit) of the mobile phone.
  • a recording monitor program runs continuously on the DSP chip to collect the user's voice, so as to meet the always-on requirement of the system without affecting the normal sleep of the mobile phone CPU.
  • the DSP then continuously analyzes the entered audio data until the corresponding wake-up condition is met to wake up the corresponding voice application.
  • the various existing voice wake-up solutions described above have the following problems:
  • the voice application trigger control method, device, and terminal provided by the embodiments of the present invention mainly address the technical problem that waking up a voice application in a terminal through an existing voice wake-up solution involves high cost, high power consumption, and poor versatility.
  • an embodiment of the present invention provides a voice application trigger control method, which is applied to a sensor hub module of a terminal, and includes:
  • waking up the processor of the terminal to execute the voice application when the current motion mode of the terminal matches the motion mode corresponding to triggering a certain voice application.
  • the embodiment of the present invention further provides a voice application trigger control device, which is applied to a sensor hub module of a terminal, and includes:
  • a data acquisition module configured to acquire motion data of the terminal
  • a data processing module configured to determine a current motion mode of the terminal according to the motion data
  • a wake-up control module configured to wake up the processor of the terminal to execute the voice application when the current motion mode of the terminal matches the motion mode corresponding to triggering a certain voice application.
  • the embodiment of the invention further provides a terminal, which comprises the voice application trigger control device as described above.
  • the embodiment of the invention further provides a computer storage medium, wherein the computer storage medium stores computer executable instructions, and the computer executable instructions are used to execute the foregoing voice application trigger control method.
  • a storage medium is also provided.
  • the storage medium is arranged to store program code for performing the following steps:
  • the sensor hub module (Sensor Hub) of the terminal is used directly to acquire motion data of the terminal, the current motion mode of the terminal is then determined according to the collected motion data, and, when the current motion mode of the terminal matches the motion mode corresponding to triggering a certain voice application, the processor of the terminal is woken up to execute the voice application. Because the sensor hub module is built into all kinds of smart terminals, no additional chip is needed for data collection and analysis, and the motion sensor is likewise a standard component of smart terminals, so the solution is lower in cost and more versatile than existing voice wake-up schemes. In addition, the sensor hub module is inherently low-power by design, and because motion data rather than audio data is collected and analyzed, the data analysis process is faster and simpler, which further reduces system power consumption and improves user satisfaction.
  • FIG. 1 is a schematic diagram of an independent setting of a sensor hub module according to Embodiment 1 of the present invention
  • FIG. 2 is a schematic diagram of an integrated arrangement of a sensor hub module according to Embodiment 1 of the present invention
  • FIG. 3 is a schematic flowchart of a voice application trigger control method according to Embodiment 1 of the present invention.
  • FIG. 4 is a schematic flowchart of a process for separating terminal acceleration vector data according to Embodiment 1 of the present invention
  • FIG. 5 is a schematic diagram of a coordinate system of a terminal acceleration sensor according to Embodiment 1 of the present invention.
  • FIG. 6 is a schematic structural diagram of a voice application trigger control apparatus according to Embodiment 2 of the present invention.
  • Embodiment 1:
  • the embodiment of the present invention is based on the Sensor Hub, a standard hardware configuration at the bottom layer of intelligent terminals, and controls the triggering of the corresponding voice application in the terminal by detecting the motion mode of the terminal.
  • the term Sensor Hub may be translated as sensor hub module or coprocessor.
  • the main function of the Sensor Hub is to process various data from various sensors, and wake up the CPU (processor) of the terminal when necessary to reduce the operating load of the main processor, thereby reducing system power consumption.
  • in other words, the Sensor Hub architecture inherently reduces power consumption; it is a relatively mature power-reduction technology.
  • the Sensor Hub is now standard equipment in all kinds of smart terminals. For example, FIG. 1 and FIG. 2 show two ways of arranging the sensor hub module: in FIG. 1 the Sensor Hub is arranged independently, while in FIG. 2 the Sensor Hub is integrated into the application processor.
  • the voice application trigger control method provided by the embodiment of the present invention is applied to the sensor hub module of the terminal, as shown in FIG. 3, including:
  • S301: Acquire motion data of the terminal.
  • the frequency at which the motion data collected by the motion sensor is acquired may be the same as or different from the frequency at which the motion sensor collects the data.
  • for example, the data may be acquired only after the motion sensor has collected a certain amount of data.
  • S302: Determine a current motion mode of the terminal according to the collected motion data.
  • S303: When the current motion mode of the terminal matches the motion mode corresponding to triggering a certain voice application, wake up the processor of the terminal to execute the voice application.
  • in this embodiment, a corresponding motion mode is set for each voice application to be controlled in the terminal (for example, voice answering, recording, or mute setting). It should be understood that the controlled object of this embodiment may also be any other type of application in the terminal besides voice applications.
  • the voice applications and the motion modes may be in one-to-one correspondence, or a plurality of associated voice applications may correspond to one motion mode.
  • a motion mode is represented by a motion mode vector, and the motion mode vector in this embodiment includes motion elements and posture elements of the terminal. That is, in this embodiment, the control of the voice application is implemented by combining the motion characteristics and posture characteristics of the terminal.
  • in this embodiment, the posture element of the terminal includes the angle change amount of the terminal, and the motion element includes at least one of the maximum value, minimum value, amplitude, change period, peak value and valley value of the linear acceleration, the distance between the terminal and the user, and the dwell time of the terminal at the user's ear.
  • the elements actually included can be set flexibly according to the different voice applications and the specific application scenario. It should be understood that different voice applications in this embodiment may include the same elements or different ones.
  • the somatosensory recognition technology of current smart terminals requires a nine-axis sensor (3-axis accelerometer, 3-axis gyroscope, 3-axis magnetometer), and the properties of the phone must be projected into a global coordinate system before posture, motion and other characteristics can be analyzed. Low-end phones, however, often have only an accelerometer and a distance sensor, so such schemes cannot be applied to them, which typically leads to inaccurate recognition, misrecognition or even no recognition at all and brings great inconvenience to users. Such a scheme is not universally applicable, and using it would significantly increase the production cost of the phone.
  • in this embodiment, the collection of the terminal data and the analysis and confirmation of the terminal motion mode can be completed with an acceleration sensor and a distance sensor.
  • the motion data in this embodiment includes a terminal acceleration vector acquired by the acceleration sensor, and determining the current motion mode of the terminal according to the motion data includes:
  • performing separation processing on the terminal acceleration vector to obtain the actual gravity acceleration vector and the actual linear acceleration vector of the terminal; determining the motion elements of the terminal's current motion mode according to the obtained actual linear acceleration vector, and obtaining the posture elements of the terminal's current motion mode according to the actual gravity acceleration vector.
  • the terminal acceleration vector data is separated to obtain the actual gravity acceleration vector and the actual linear acceleration vector of the terminal, as shown in FIG. 4, including:
  • the terminal acceleration vector measured by the acceleration sensor is the composite data A of the gravitational acceleration G and the linear acceleration A_L, i.e. A = G + A_L.
  • the acceleration data consists of three components, A: [A_x, A_y, A_z], which represent the projections of the acceleration onto the X, Y and Z axes of the device coordinate system of the mobile device. See FIG. 5 for a schematic diagram of the coordinate system of the terminal acceleration sensor.
  • the theoretical gravity acceleration vector Gravity: [G_x, G_y, G_z] and the theoretical linear acceleration vector Acceleration: [A_x, A_y, A_z] can be separated out by S401.
  • S401 is not limited to separating the linear acceleration and the gravity acceleration by a filtering operation; any processing manner capable of separating the linear acceleration from the gravity acceleration falls within the scope of the present invention.
  • pre-processing the theoretical gravity acceleration vector to obtain the actual gravity acceleration vector includes: calculating a scalar scale of the theoretical gravity acceleration vector [G_x, G_y, G_z], and multiplying the theoretical gravity acceleration vector [G_x, G_y, G_z] by the scalar scale to obtain the actual gravity acceleration vector Gravity′ = scale*[G_x, G_y, G_z].
  • the obtained actual linear acceleration vector may also be pre-processed, for example by smooth fitting with least squares to remove irregular points from the data, which does not destroy the shape of the data and makes it closer to the real situation.
  • the corresponding posture features and motion features can be extracted, that is, the corresponding motion elements and posture elements are obtained.
  • the posture element of the terminal includes the angle change amount of the terminal; and in this embodiment, the posture element is obtained according to the actual gravity acceleration vector, which specifically includes:
  • the angular velocity of the terminal's current angle change is obtained from the variation between the currently obtained actual gravity acceleration vector and a previously obtained actual gravity acceleration vector, and the angle change amount of the terminal is then obtained from this angular velocity and the acquisition period of the acceleration sensor.
  • the motion element in this embodiment includes at least one of a maximum value, a minimum value, an amplitude, a change period, a peak value, and a bottom value of the linear acceleration.
  • the above motion element is obtained from the obtained actual linear acceleration vector.
  • the maximum and minimum values of the linear acceleration corresponding to the actual linear acceleration vector acquired during the detection period can be calculated;
  • the actual linear accelerations collected during the detection period can be plotted as a waveform function to obtain the amplitude, change period, peak value, valley value, and so on.
  • the motion data in this embodiment may further include at least one of a terminal-to-user distance value collected by the distance sensor and the dwell time of the terminal at the user's ear; correspondingly, the motion element may also include at least one of the distance between the terminal and the user and the dwell time of the terminal at the user's ear. These values can simply be extracted from the data collected by the distance sensor and added.
  • after the motion mode vector of the terminal's current motion mode is obtained through the above process, determining whether it matches the motion mode vector of the motion mode corresponding to a voice application may specifically include: calculating the similarity between the motion mode vector of the terminal's current motion mode and the motion mode vector of the motion mode corresponding to the voice application.
  • when the similarity is greater than the preset similarity threshold, it is determined that the current motion mode of the terminal matches the motion mode corresponding to triggering a certain voice application.
  • the specific setting of the similarity threshold in this embodiment may also be flexibly selected according to a specific application scenario.
  • the acceleration data collected by the acceleration sensor and the data collected by the distance sensor are used to analyze the real-time attitude characteristics and motion characteristics of the terminal, that is, the current motion mode of the terminal is analyzed.
  • the corresponding voice application is triggered once the specific action is met.
  • the embodiment of the present invention can acquire the attitude characteristics and the motion characteristics of the terminal through the acceleration sensor and the distance sensor, without using other additional sensors, and the cost is low.
  • the filtering, normalized operation and smooth fitting of data preprocessing can obtain more accurate acceleration data and analyze the precise motion mode of the terminal.
  • according to test reports, the recognition success rate of the existing algorithm library for terminal motion modes is 90%, whereas the recognition success rate of the above algorithm is 96%; the misrecognition rate of the existing algorithm library reaches a staggering 70%, whereas that of the above algorithm is only 20%.
  • Embodiment 2:
  • the embodiment provides a voice application trigger control device, and the device is disposed in the terminal, and may be specifically disposed in a sensor hub module of the terminal.
  • with this voice application trigger control device, the present invention detects the user's motion through the motion sensor until the condition for waking up the voice application is satisfied.
  • the embodiment of the invention requires no additional hardware cost and needs only the data of a three-axis acceleration sensor plus a distance sensor to realize motion recognition; the data analysis is simple, and power consumption can be greatly reduced while the always-on requirement of the system is still met.
  • the voice application trigger control apparatus in this embodiment includes:
  • the data acquisition module 61 is configured to acquire the terminal motion data.
  • in this embodiment, the frequency at which the data acquisition module 61 acquires the motion data collected by the motion sensor may be the same as or different from the frequency at which the motion sensor collects the data.
  • for example, the data may be acquired only after the motion sensor has collected a certain amount of data.
  • the data processing module 62 is configured to determine, according to the motion data acquired by the data acquiring module 61, the current motion mode of the terminal;
  • the wake-up control module 63 is configured to wake up the processor of the terminal to execute the voice application when the current motion mode of the terminal matches the motion mode corresponding to triggering a certain voice application.
  • the corresponding motion modes may be separately set in advance for each voice application to be controlled in the terminal.
  • the controlled object of this embodiment may be any other type of application in the terminal in addition to the voice application.
  • each voice application may have a one-to-one correspondence with each motion mode, or multiple associated voice applications may correspond to one motion mode.
  • a motion mode is represented by a motion mode vector, and the motion mode vector in this embodiment includes motion elements and posture elements of the terminal; compared with existing schemes that wake up a voice application by collecting and matching audio data, collecting and analyzing motion data is faster and simpler, so the power consumption is lower and the accuracy is better.
  • the data acquisition module 61 in this embodiment is configured to specifically acquire data collected by the acceleration sensor and the distance sensor, and submit the data to the data processing module 62 for analysis and processing.
  • the data processing module 62 includes a data pre-processing unit 621 and a feature extraction unit 622.
  • the data pre-processing unit 621 is configured to perform separation processing on the terminal acceleration vector to obtain the actual gravity acceleration vector and the actual linear acceleration vector of the terminal; the specific process is as follows:
  • the data pre-processing unit 621 performs filtering processing on the terminal acceleration vector to separate the theoretical gravity acceleration vector and the theoretical linear acceleration vector.
  • the data pre-processing unit 621 separates out the theoretical gravity acceleration vector Gravity: [G_x, G_y, G_z] and the theoretical linear acceleration vector Acceleration: [A_x, A_y, A_z], and then pre-processes the theoretical gravity acceleration vector Gravity: [G_x, G_y, G_z] to obtain the actual gravity acceleration vector.
  • the specific pre-processing includes: calculating a scalar scale of the theoretical gravity acceleration vector [G_x, G_y, G_z], and multiplying the theoretical gravity acceleration vector [G_x, G_y, G_z] by the scalar scale to obtain the actual gravity acceleration vector Gravity′ = scale*[G_x, G_y, G_z].
  • the data pre-processing unit 621 then subtracts the actual gravity acceleration vector Gravity′ from the terminal acceleration vector to obtain the actual linear acceleration vector.
  • the data pre-processing unit 621 may also pre-process the obtained actual linear acceleration vector, for example by smooth fitting with least squares to remove irregular points from the data, which does not destroy the shape of the data and makes it closer to the real situation.
  • after the actual gravity acceleration vector and the actual linear acceleration vector are obtained, the feature extraction unit 622 can extract the corresponding posture features and motion features, that is, obtain the corresponding motion elements and posture elements.
  • the posture element of the terminal in this embodiment includes the angle change amount of the terminal; and the feature extraction unit 622 obtains the posture element according to the actual gravity acceleration vector in the embodiment, which specifically includes:
  • the angular velocity of the terminal's current angle change is obtained from the variation between the currently obtained actual gravity acceleration vector and previously obtained actual gravity acceleration vectors, and the angle change amount of the terminal is then obtained from this angular velocity and the acquisition period of the acceleration sensor.
  • the motion element in this embodiment includes at least one of a maximum value, a minimum value, an amplitude, a change period, a peak value, and a bottom value of the linear acceleration.
  • the feature extraction unit 622 obtains these motion elements from the obtained actual linear acceleration vector. For example, the maximum and minimum values of the linear acceleration corresponding to the actual linear acceleration vectors collected during the detection period can be calculated, and the actual linear accelerations collected during the detection period can be plotted as a waveform function to obtain the amplitude, change period, peak value, valley value, and so on.
  • the motion data in this embodiment may further include at least one of a terminal-to-user distance value collected by the distance sensor and the dwell time of the terminal at the user's ear; correspondingly, the motion element may also include at least one of the distance between the terminal and the user and the dwell time of the terminal at the user's ear.
  • the feature extraction unit 622 is also configured to extract these values from the data collected by the distance sensor and add them.
  • after the motion mode vector of the terminal's current motion mode is obtained through the above process, the wake-up control module 63 determines whether it matches the motion mode vector of the motion mode corresponding to a voice application, which may specifically include:
  • the wake-up control module 63 calculates the similarity between the motion mode vector of the terminal's current motion mode and the motion mode vector of the motion mode corresponding to the voice application; when the similarity is greater than the preset similarity threshold, it determines that the current motion mode of the terminal matches the motion mode corresponding to triggering that voice application.
  • the specific setting of the similarity threshold in this embodiment may also be flexibly selected according to a specific application scenario.
  • in the embodiment of the present invention, the above functions of the data acquisition module 61, the data processing module 62, and the wake-up control module 63 can be implemented by the Sensor Hub chip, that is, these modules can be constructed in the Sensor Hub.
  • while the terminal is asleep, the voice application trigger control device in the Sensor Hub obtains the acceleration data collected by the acceleration sensor and the data collected by the distance sensor to analyze the real-time posture characteristics and motion characteristics of the terminal, that is, the motion mode the terminal is currently in.
  • the corresponding voice application is triggered once the specific action is satisfied.
  • the voice application trigger control device of the embodiment of the present invention can acquire the posture feature and the motion characteristic of the terminal through the acceleration sensor and the distance sensor, without using other additional sensors, and the cost is low.
  • the filtering, normalized operation and smooth fitting of data preprocessing can obtain more accurate acceleration data and analyze the precise motion mode of the terminal.
  • the modules or steps of the above embodiments of the present invention can be implemented by a general-purpose computing device; they may be concentrated on a single computing device or distributed over a network of multiple computing devices.
  • optionally, they may be implemented by program code executable by the computing device, so that they may be stored in a computer storage medium (ROM/RAM, magnetic disk, optical disk) and executed by the computing device; in some cases the steps shown or described may be performed in an order different from that herein, or they may be separately fabricated into individual integrated circuit modules, or a plurality of the modules or steps may be fabricated into a single integrated circuit module. Therefore, the invention is not limited to any particular combination of hardware and software.
  • Embodiments of the present invention also provide a storage medium.
  • the foregoing storage medium may be configured to store program code for performing the following steps:
  • the foregoing storage medium may include, but is not limited to, various media that can store program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a mobile hard disk, a magnetic disk, or an optical disk.
  • the modules or steps of the present invention described above can be implemented by a general-purpose computing device; they may be centralized on a single computing device or distributed over a network of multiple computing devices. Optionally, they may be implemented by program code executable by the computing device, so that they may be stored in a storage device and executed by the computing device; in some cases the steps shown or described may be performed in an order different from that herein, or they may be separately fabricated into individual integrated circuit modules, or a plurality of the modules or steps may be fabricated into a single integrated circuit module.
  • thus, the invention is not limited to any specific combination of hardware and software.
  • the sensor hub module (Sensor Hub) of the terminal is used directly to acquire motion data of the terminal, and the current motion mode of the terminal is then determined according to the collected motion data; when the current motion mode of the terminal matches the motion mode corresponding to triggering a certain voice application, the processor of the terminal is woken up to execute the voice application.
  • since the sensor hub module is built into all kinds of intelligent terminals, there is no need for an additional chip for data collection and analysis; the motion sensor is also a standard component of intelligent terminals, so the cost is lower and the versatility better than with existing voice wake-up schemes.
  • in addition, the sensor hub module is inherently low-power by design.
  • at the same time, the invention collects and analyzes motion data, whose analysis is faster and simpler than that of audio data, thereby further reducing system power consumption and improving user satisfaction.

Abstract

A voice application trigger control method, device and terminal. The sensor hub module of the terminal is used directly to acquire motion data of the terminal (S301), the current motion mode of the terminal is then determined according to the collected motion data (S302), and, when the current motion mode of the terminal matches the motion mode corresponding to triggering a certain voice application, the processor of the terminal is woken up to execute the voice application (S303). Since the sensor hub module is built into all kinds of smart terminals, no additional chip needs to be provided for data collection and analysis, and the motion sensor is also a standard component of smart terminals, so the solution is lower in cost and more versatile than existing voice wake-up schemes. In addition, the sensor hub module is inherently low-power, and since motion data rather than audio data is collected and analyzed, the data analysis process is faster and simpler, which can further reduce system power consumption and improve user satisfaction.

Description

Voice application trigger control method, device and terminal
Technical Field
The present invention relates to the field of intelligent terminals, and in particular to a voice application trigger control method, device and terminal.
Background
With the popularization of smart phones, the voice applications on smart phones have become increasingly rich, and a variety of voice wake-up schemes have been devised for them. All of the existing voice wake-up schemes require a dedicated DSP (Digital Signal Processing) chip to be added to the underlying hardware of the phone. Architecturally, this chip is independent of the phone's CPU (Central Processing Unit); a recording monitor program runs continuously on the DSP chip to collect the user's voice, so as to meet the always-on requirement of the system without affecting the normal sleep of the phone CPU. The DSP then continuously analyzes the recorded audio data until the corresponding wake-up condition is met and the corresponding voice application is woken up. The existing voice wake-up schemes described above have the following problems:
An additional DSP chip has to be provided, which increases cost. For low-end phones without this DSP chip, the above schemes for waking up the corresponding voice application by voice are no longer applicable, so versatility is poor. Moreover, the complexity of the audio files themselves makes the data analysis process very complicated, and after a series of complex computations the power consumption also rises, resulting in high power consumption.
Summary
The voice application trigger control method, device and terminal provided by the embodiments of the present invention mainly address the following technical problem: waking up a voice application in a terminal through existing voice wake-up schemes suffers from high cost, high power consumption and poor versatility.
To solve the above technical problem, an embodiment of the present invention provides a voice application trigger control method, applied to a sensor hub module of a terminal, including:
acquiring motion data of the terminal;
determining a current motion mode of the terminal according to the motion data;
when the current motion mode of the terminal matches a motion mode corresponding to triggering a certain voice application, waking up a processor of the terminal to execute the voice application.
An embodiment of the present invention further provides a voice application trigger control device, applied in a sensor hub module of a terminal, including:
a data acquisition module, configured to acquire motion data of the terminal;
a data processing module, configured to determine a current motion mode of the terminal according to the motion data;
a wake-up control module, configured to wake up a processor of the terminal to execute the voice application when the current motion mode of the terminal matches a motion mode corresponding to triggering a certain voice application.
An embodiment of the present invention further provides a terminal, which includes the voice application trigger control device described above.
An embodiment of the present invention further provides a computer storage medium, in which computer-executable instructions are stored, the computer-executable instructions being used to execute the foregoing voice application trigger control method.
According to yet another embodiment of the present invention, a storage medium is also provided. The storage medium is arranged to store program code for performing the following steps:
acquiring motion data of the terminal; determining a current motion mode of the terminal according to the motion data; and, when the current motion mode of the terminal matches a motion mode corresponding to triggering a certain voice application, waking up a processor of the terminal to execute the voice application.
The beneficial effects of the embodiments of the present invention are as follows:
According to the voice application trigger control method, device, terminal and storage medium provided by the embodiments of the present invention, the sensor hub module (Sensor Hub) of the terminal is used directly to acquire the motion data of the terminal, the current motion mode of the terminal is then determined according to the collected motion data, and, when the current motion mode of the terminal matches the motion mode corresponding to triggering a certain voice application, the processor of the terminal is woken up to execute the voice application. Since the sensor hub module is built into all kinds of smart terminals, no additional chip needs to be provided for data collection and analysis, and the motion sensor is likewise a standard component of smart terminals, so the solution is lower in cost and more versatile than existing voice wake-up schemes. In addition, the sensor hub module is inherently low-power, and what is collected and analyzed in the present invention is motion data, whose analysis is faster and simpler than that of audio data, so system power consumption can be further reduced and user satisfaction improved.
Brief Description of the Drawings
The drawings described here are provided for a further understanding of the present invention and constitute a part of this application; the illustrative embodiments of the present invention and their description are used to explain the present invention and do not unduly limit it. In the drawings:
FIG. 1 is a schematic diagram of an independently arranged sensor hub module in Embodiment 1 of the present invention;
FIG. 2 is a schematic diagram of an integrated arrangement of the sensor hub module in Embodiment 1 of the present invention;
FIG. 3 is a schematic flowchart of the voice application trigger control method in Embodiment 1 of the present invention;
FIG. 4 is a schematic flowchart of the process of separating the terminal acceleration vector data in Embodiment 1 of the present invention;
FIG. 5 is a schematic diagram of the coordinate system of the terminal acceleration sensor in Embodiment 1 of the present invention;
FIG. 6 is a schematic structural diagram of the voice application trigger control device in Embodiment 2 of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some of the embodiments of the present invention rather than all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative work fall within the scope of protection of the present invention.
Embodiment 1:
The embodiment of the present invention is based on the Sensor Hub, a standard hardware configuration at the bottom layer of intelligent terminals, and controls the triggering of the corresponding voice application in the terminal by detecting the motion mode of the terminal. Sensor Hub may be translated as sensor hub module or coprocessor. In system design, the main function of the Sensor Hub is to process the various data coming from the individual sensors and to wake up the terminal's CPU (processor) only when necessary, so as to reduce the operating load of the main processor and thereby reduce system power consumption. In other words, the Sensor Hub architecture inherently reduces power consumption; it is a relatively mature power-reduction technology and is now standard equipment in all kinds of smart terminals. For example, FIG. 1 and FIG. 2 show two ways of arranging the sensor hub module: in FIG. 1 the Sensor Hub is arranged independently, while in FIG. 2 the Sensor Hub is integrated into the application processor.
As above, the voice application trigger control method provided by the embodiment of the present invention is applied to the sensor hub module of the terminal and, as shown in FIG. 3, includes:
S301: Acquire motion data of the terminal.
In this embodiment, the frequency at which the motion data collected by the motion sensor is acquired may be the same as or different from the frequency at which the motion sensor collects the data. For example, the data may be acquired only after the motion sensor has collected a certain amount of data.
S302: Determine the current motion mode of the terminal according to the collected motion data.
S303: When the current motion mode of the terminal matches the motion mode corresponding to triggering a certain voice application, wake up the processor of the terminal to execute the voice application, that is, control the processor to trigger execution of the voice application, as sketched below.
In this embodiment, a corresponding motion mode is set for each voice application to be controlled in the terminal (for example, voice answering, recording, or mute setting). It should be understood that the controlled object of this embodiment may also be any other type of application in the terminal besides voice applications.
In this embodiment, the voice applications and the motion modes may be in one-to-one correspondence, or a plurality of associated voice applications may correspond to one motion mode. In this embodiment, a motion mode is represented by a motion mode vector, and the motion mode vector includes motion elements and posture elements of the terminal; that is, in this embodiment the control of the voice application is implemented by combining the motion characteristics and posture characteristics of the terminal. Compared with existing schemes that wake up a voice application by collecting and matching audio data, collecting and analyzing motion data is faster and simpler, so the power consumption is lower and the accuracy is better. In this embodiment, the posture element of the terminal includes the angle change amount of the terminal, and the motion element includes at least one of the maximum value, minimum value, amplitude, change period, peak value and valley value of the linear acceleration, the distance between the terminal and the user, and the dwell time of the terminal at the user's ear. The elements actually included can be set flexibly according to the different voice applications and the specific application scenario; it should be understood that different voice applications in this embodiment may include the same elements or different ones.
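As a purely illustrative sketch (not taken from the patent text), one possible representation of such a motion mode vector is shown below in Python; the field names and the example values registered for a hypothetical "raise to answer" gesture are assumptions.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class MotionModeVector:
    # posture element
    angle_change_deg: float = 0.0           # angle change amount of the terminal
    # motion elements derived from the linear-acceleration waveform
    accel_max: float = 0.0
    accel_min: float = 0.0
    amplitude: float = 0.0
    change_period_s: float = 0.0
    peak: float = 0.0
    valley: float = 0.0
    # optional elements from the distance (proximity) sensor
    distance_to_user_cm: Optional[float] = None
    dwell_time_at_ear_s: Optional[float] = None


# hypothetical registered mode: lift the phone to the ear to answer a call by voice
RAISE_TO_ANSWER = MotionModeVector(
    angle_change_deg=80.0, accel_max=6.0, accel_min=-2.0, amplitude=8.0,
    change_period_s=1.2, peak=6.0, valley=-2.0,
    distance_to_user_cm=2.0, dwell_time_at_ear_s=1.0,
)
```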
At present, the somatosensory recognition technology of smart terminals requires a nine-axis sensor (3-axis accelerometer, 3-axis gyroscope, 3-axis magnetometer), and the various properties of the phone must be projected into a global coordinate system before posture, motion and other characteristics can be analyzed. Low-end phones, however, often have only an acceleration sensor and a distance sensor, so such schemes are not compatible with low-end phones; this usually leads to inaccurate recognition, misrecognition or even no recognition at all, which brings great inconvenience to users. Such a scheme is not universally applicable, and using it would greatly increase the production cost of the phone.
Therefore, although the existing approach of collecting and analyzing the terminal's motion data with a nine-axis sensor to control voice applications in the terminal is also applicable to the present invention, this embodiment further provides a terminal motion mode detection method that is more accurate, lower in cost and more versatile.
Specifically, in this embodiment the collection of the terminal data and the analysis and confirmation of the terminal motion mode can be completed with an acceleration sensor and a distance sensor.
Therefore, the motion data in this embodiment includes the terminal acceleration vector collected by the acceleration sensor, and determining the current motion mode of the terminal according to the motion data includes:
performing separation processing on the terminal acceleration vector to obtain the actual gravity acceleration vector and the actual linear acceleration vector of the terminal;
determining the motion elements of the terminal's current motion mode according to the obtained actual linear acceleration vector, and obtaining the posture elements of the terminal's current motion mode according to the actual gravity acceleration vector.
The separation processing of the terminal acceleration vector data to obtain the actual gravity acceleration vector and the actual linear acceleration vector of the terminal is shown in FIG. 4 and includes:
S401: Filter the terminal acceleration vector to separate out the theoretical gravity acceleration vector and the theoretical linear acceleration vector. In this step, various filter functions H(raw) = [Gravity, Acceleration] may be used to separate the terminal acceleration vector.
The terminal acceleration vector measured by the acceleration sensor is the composite data A of the gravitational acceleration G and the linear acceleration A_L:
A = G + A_L
The acceleration data consists of three components, A: [A_x, A_y, A_z], which represent the projections of the acceleration onto the X, Y and Z axes of the device coordinate system of the mobile device. A schematic diagram of the coordinate system of the terminal acceleration sensor is shown in FIG. 5.
Through S401, the theoretical gravity acceleration vector Gravity: [G_x, G_y, G_z] and the theoretical linear acceleration vector Acceleration: [A_x, A_y, A_z] can be separated out. It should be understood, however, that S401 of this embodiment is not limited to separating the linear acceleration and the gravity acceleration by a filtering operation; any processing manner capable of separating the linear acceleration from the gravity acceleration falls within the scope of the present invention.
S402: Pre-process the theoretical gravity acceleration vector Gravity: [G_x, G_y, G_z] to obtain the actual gravity acceleration vector.
Pre-processing the theoretical gravity acceleration vector to obtain the actual gravity acceleration vector includes:
calculating a scalar scale of the theoretical gravity acceleration vector [G_x, G_y, G_z];
multiplying the theoretical gravity acceleration vector [G_x, G_y, G_z] by the scalar scale to obtain the actual gravity acceleration vector Gravity′ = scale*[G_x, G_y, G_z].
S403: Subtract the actual gravity acceleration vector Gravity′ from the terminal acceleration vector to obtain the actual linear acceleration vector.
In this embodiment, the obtained actual linear acceleration vector may also be pre-processed, for example by smooth fitting with least squares to remove irregular points from the data, which does not destroy the shape of the data and makes it closer to the real situation.
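The following Python sketch is given only as an illustration of S401-S403 plus the optional least-squares smoothing. The low-pass filter coefficient, the use of 9.8 m/s² as the normalization target for the scalar scale, and the polynomial degree of the fit are all assumptions, since the patent leaves the concrete filter function and the definition of scale open.

```python
from typing import Tuple
import numpy as np

G_STANDARD = 9.8  # assumed normalization target (m/s^2)


def separate_gravity(accel: np.ndarray, alpha: float = 0.8) -> Tuple[np.ndarray, np.ndarray]:
    """S401: low-pass filter the raw acceleration samples (N x 3) to estimate the
    theoretical gravity vector; the residual is the theoretical linear acceleration."""
    gravity = np.empty_like(accel)
    gravity[0] = accel[0]
    for i in range(1, len(accel)):
        gravity[i] = alpha * gravity[i - 1] + (1.0 - alpha) * accel[i]
    return gravity, accel - gravity


def normalize_gravity(gravity: np.ndarray) -> np.ndarray:
    """S402: multiply the theoretical gravity vector by a scalar 'scale' so that its
    magnitude matches standard gravity (assumed definition of the scale factor)."""
    norm = np.linalg.norm(gravity, axis=1, keepdims=True)
    scale = G_STANDARD / np.where(norm == 0.0, 1.0, norm)
    return scale * gravity


def actual_linear_acceleration(accel: np.ndarray, gravity_actual: np.ndarray) -> np.ndarray:
    """S403: subtract the actual gravity vector from the measured acceleration vector."""
    return accel - gravity_actual


def smooth_least_squares(signal: np.ndarray, degree: int = 3) -> np.ndarray:
    """Optional pre-processing: per-axis least-squares polynomial fit to remove
    irregular points without destroying the overall shape of the data."""
    t = np.arange(len(signal))
    return np.column_stack(
        [np.polyval(np.polyfit(t, signal[:, k], degree), t) for k in range(signal.shape[1])]
    )
```

Under these assumptions, normalize_gravity returns Gravity′ = scale*[G_x, G_y, G_z], and the remaining functions follow the order of FIG. 4.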
After the actual gravity acceleration vector and the actual linear acceleration vector are obtained, the corresponding posture features and motion features can be extracted, that is, the corresponding motion elements and posture elements are obtained.
In this embodiment, the posture element of the terminal includes the angle change amount of the terminal, and the posture element is obtained according to the actual gravity acceleration vector, which specifically includes:
obtaining the angular velocity of the terminal's current angle change according to the variation between the currently obtained actual gravity acceleration vector and a previously obtained actual gravity acceleration vector (for example, the actual gravity acceleration vector obtained last time);
then obtaining the angle change amount of the terminal according to the obtained angular velocity and the acquisition period of the acceleration sensor, as sketched below.
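A minimal sketch of this posture computation follows, for illustration only; deriving the angle from the angle between successive actual gravity vectors is one possible reading of the step above, and the function name is hypothetical.

```python
import numpy as np


def angle_change_amount(gravity_history: np.ndarray, sample_period_s: float) -> float:
    """Estimate the terminal's angle change over a detection window from successive
    actual gravity vectors (N x 3): each consecutive pair yields an angular velocity
    (angle between the vectors divided by the acquisition period), and integrating
    that angular velocity over the period gives the accumulated angle change."""
    total = 0.0
    for prev, curr in zip(gravity_history[:-1], gravity_history[1:]):
        cos_theta = np.dot(prev, curr) / (np.linalg.norm(prev) * np.linalg.norm(curr))
        step_angle = np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))
        angular_velocity = step_angle / sample_period_s   # degrees per second
        total += angular_velocity * sample_period_s       # integrate over one acquisition period
    return float(total)
```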
As indicated above, the motion elements in this embodiment include at least one of the maximum value, minimum value, amplitude, change period, peak value and valley value of the linear acceleration. These motion elements are obtained from the obtained actual linear acceleration vector. For example, the maximum and minimum values of the linear acceleration corresponding to the actual linear acceleration vectors collected during the detection period can be calculated, and the actual linear accelerations collected during the detection period can be plotted as a waveform function to obtain the amplitude, change period, peak value, valley value, and so on.
As described above, the motion data in this embodiment may further include at least one of the terminal-to-user distance value collected by the distance sensor and the dwell time of the terminal at the user's ear; correspondingly, the motion elements in this embodiment may also include at least one of the distance between the terminal and the user and the dwell time of the terminal at the user's ear. Specifically, these values can be extracted from the data collected by the distance sensor and added, as sketched below.
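The sketch below, illustrative only, shows how the motion elements described above could be extracted from the actual linear acceleration collected over a detection period, with the optional distance-sensor elements passed in separately; the function name, the returned field names, and the zero-crossing estimate of the change period are assumptions.

```python
from typing import Optional
import numpy as np


def extract_motion_elements(
    linear_accel: np.ndarray,                  # actual linear acceleration over the detection period (N x 3)
    sample_period_s: float,
    distance_to_user_cm: Optional[float] = None,
    dwell_time_at_ear_s: Optional[float] = None,
) -> dict:
    """Derive maximum, minimum, amplitude, change period, peak and valley of the
    linear-acceleration waveform, optionally adding the distance-sensor elements."""
    magnitude = np.linalg.norm(linear_accel, axis=1)        # waveform of |a_linear|
    peak = float(magnitude.max())
    valley = float(magnitude.min())
    # rough change-period estimate from zero crossings of the mean-removed waveform
    centered = magnitude - magnitude.mean()
    crossings = np.where(np.diff(np.signbit(centered)))[0]
    period_s = (
        2.0 * sample_period_s * float(np.mean(np.diff(crossings))) if len(crossings) > 1 else 0.0
    )
    elements = {
        "accel_max": float(linear_accel.max()),
        "accel_min": float(linear_accel.min()),
        "amplitude": peak - valley,
        "change_period_s": period_s,
        "peak": peak,
        "valley": valley,
    }
    if distance_to_user_cm is not None:
        elements["distance_to_user_cm"] = distance_to_user_cm
    if dwell_time_at_ear_s is not None:
        elements["dwell_time_at_ear_s"] = dwell_time_at_ear_s
    return elements
```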
After the motion mode vector of the terminal's current motion mode is obtained through the above process, determining whether it matches the motion mode vector of the motion mode corresponding to a voice application may specifically include:
calculating the similarity between the motion mode vector of the terminal's current motion mode and the motion mode vector of the motion mode corresponding to the voice application. The process of calculating the similarity between two vectors is not described in detail here; any similarity calculation method may be used.
When the similarity is greater than a preset similarity threshold, it is determined that the current motion mode of the terminal matches the motion mode corresponding to triggering that voice application. The specific setting of the similarity threshold in this embodiment can also be chosen flexibly according to the specific application scenario.
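Since the patent leaves the similarity measure open (any similarity calculation method may be used), the sketch below picks cosine similarity as one arbitrary choice; the 0.9 threshold is likewise only an example value.

```python
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """One possible similarity measure between two motion mode vectors."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(np.dot(a, b) / denom) if denom else 0.0


def matches(current_mode: np.ndarray, registered_mode: np.ndarray, threshold: float = 0.9) -> bool:
    """Declare a match when the similarity exceeds the preset threshold, which may be
    chosen flexibly per application scenario."""
    return cosine_similarity(current_mode, registered_mode) > threshold
```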
In the embodiment of the present invention, while a smart terminal such as a mobile phone is asleep, the acceleration data collected by the acceleration sensor and the data collected by the distance sensor are obtained through the Sensor Hub to analyze the real-time posture characteristics and motion characteristics of the terminal, that is, the motion mode the terminal is currently in, and the corresponding voice application is triggered once the specific action is satisfied.
In addition, in the embodiment of the present invention the posture features and motion characteristics of the terminal can be obtained with just the acceleration sensor and the distance sensor, without other additional sensors, so the cost is low. The filtering, normalization and smooth fitting used in data pre-processing yield acceleration data that is closer to reality, from which the precise motion mode of the terminal can be analyzed. According to test reports, the recognition success rate of the existing algorithm library for terminal motion modes is 90%, whereas the recognition success rate of the algorithm of this embodiment is 96%; the misrecognition rate of the existing algorithm library reaches a staggering 70%, whereas that of the above algorithm of this embodiment is only 20%. Therefore, the way of recognizing the terminal's posture features and motion characteristics through the acceleration sensor and the distance sensor provided by the embodiment of the present invention achieves lower cost and better accuracy.
Embodiment 2:
This embodiment provides a voice application trigger control device, which is arranged in a terminal and may specifically be arranged in the sensor hub module (Sensor Hub) of the terminal. Through this voice application trigger control device, the present invention detects the user's motion through the motion sensor until the condition for waking up the voice application is satisfied. The embodiment of the present invention requires no additional hardware cost and needs only the data of a three-axis acceleration sensor plus a distance sensor to realize motion recognition; the data analysis is simple, and power consumption can be greatly reduced while the always-on requirement of the system is still met.
As shown in FIG. 6, the voice application trigger control device in this embodiment includes:
a data acquisition module 61, configured to acquire terminal motion data; in this embodiment, the frequency at which the data acquisition module 61 acquires the motion data collected by the motion sensor may be the same as or different from the frequency at which the motion sensor collects the data. For example, the data may be acquired only after the motion sensor has collected a certain amount of data.
a data processing module 62, configured to determine the current motion mode of the terminal according to the motion data acquired by the data acquisition module 61;
a wake-up control module 63, configured to wake up the processor of the terminal to execute the voice application when the current motion mode of the terminal matches the motion mode corresponding to triggering a certain voice application.
In this embodiment, the corresponding motion modes may be set in advance for each voice application to be controlled in the terminal. It should be understood that the controlled object of this embodiment may also be any other type of application in the terminal besides voice applications. In this embodiment, the voice applications and the motion modes may be in one-to-one correspondence, or a plurality of associated voice applications may correspond to one motion mode. In this embodiment, a motion mode is represented by a motion mode vector, and the motion mode vector includes motion elements and posture elements of the terminal; compared with existing schemes that wake up a voice application by collecting and matching audio data, collecting and analyzing motion data is faster and simpler, so the power consumption is lower and the accuracy is better.
The data acquisition module 61 in this embodiment is configured specifically to acquire the data collected by the acceleration sensor and the distance sensor and to hand it over to the data processing module 62 for analysis and processing.
The data processing module 62 includes a data pre-processing unit 621 and a feature extraction unit 622. The data pre-processing unit 621 is configured to perform separation processing on the terminal acceleration vector to obtain the actual gravity acceleration vector and the actual linear acceleration vector of the terminal; the specific process is as follows:
the data pre-processing unit 621 filters the terminal acceleration vector to separate out the theoretical gravity acceleration vector and the theoretical linear acceleration vector.
The data pre-processing unit 621 may use various filter functions H(raw) = [Gravity, Acceleration] to separate the terminal acceleration vector.
After separating out the theoretical gravity acceleration vector Gravity: [G_x, G_y, G_z] and the theoretical linear acceleration vector Acceleration: [A_x, A_y, A_z], the data pre-processing unit 621 pre-processes the theoretical gravity acceleration vector Gravity: [G_x, G_y, G_z] to obtain the actual gravity acceleration vector; the specific pre-processing includes:
calculating a scalar scale of the theoretical gravity acceleration vector [G_x, G_y, G_z];
multiplying the theoretical gravity acceleration vector [G_x, G_y, G_z] by the scalar scale to obtain the actual gravity acceleration vector Gravity′ = scale*[G_x, G_y, G_z].
The data pre-processing unit 621 then subtracts the actual gravity acceleration vector Gravity′ from the terminal acceleration vector to obtain the actual linear acceleration vector.
In this embodiment, the data pre-processing unit 621 may also pre-process the obtained actual linear acceleration vector, for example by smooth fitting with least squares to remove irregular points from the data, which does not destroy the shape of the data and makes it closer to the real situation.
After the data pre-processing unit 621 obtains the actual gravity acceleration vector and the actual linear acceleration vector, the feature extraction unit 622 can extract the corresponding posture features and motion features, that is, obtain the corresponding motion elements and posture elements.
In this embodiment, the posture element of the terminal includes the angle change amount of the terminal, and the feature extraction unit 622 obtains the posture element according to the actual gravity acceleration vector, which specifically includes:
obtaining the angular velocity of the terminal's current angle change according to the variation between the currently obtained actual gravity acceleration vector and previously obtained actual gravity acceleration vectors (for example, the actual gravity acceleration vector obtained the last time or several previous times);
then obtaining the angle change amount of the terminal according to the obtained angular velocity and the acquisition period of the acceleration sensor.
As indicated above, the motion elements in this embodiment include at least one of the maximum value, minimum value, amplitude, change period, peak value and valley value of the linear acceleration. The feature extraction unit 622 obtains these motion elements from the obtained actual linear acceleration vector. For example, the maximum and minimum values of the linear acceleration corresponding to the actual linear acceleration vectors collected during the detection period can be calculated, and the actual linear accelerations collected during the detection period can be plotted as a waveform function to obtain the amplitude, change period, peak value, valley value, and so on.
As described above, the motion data in this embodiment may further include at least one of the terminal-to-user distance value collected by the distance sensor and the dwell time of the terminal at the user's ear; correspondingly, the motion elements in this embodiment may also include at least one of the distance between the terminal and the user and the dwell time of the terminal at the user's ear. The feature extraction unit 622 is also configured to extract these values from the data collected by the distance sensor and add them.
After the motion mode vector of the terminal's current motion mode is obtained through the above process, the wake-up control module 63 determines whether it matches the motion mode vector of the motion mode corresponding to a voice application, which may specifically include:
the wake-up control module 63 calculates the similarity between the motion mode vector of the terminal's current motion mode and the motion mode vector of the motion mode corresponding to the voice application, and when the similarity is greater than a preset similarity threshold, it determines that the current motion mode of the terminal matches the motion mode corresponding to triggering that voice application.
In this embodiment, the process of calculating the similarity between two vectors is not described in detail here; any similarity calculation method may be used.
The specific setting of the similarity threshold in this embodiment can also be chosen flexibly according to the specific application scenario.
In the embodiment of the present invention, the above functions of the data acquisition module 61, the data processing module 62 and the wake-up control module 63 can be implemented by the Sensor Hub chip, that is, they can be constructed in the Sensor Hub. In this embodiment, while the terminal is asleep, the voice application trigger control device in the Sensor Hub obtains the acceleration data collected by the acceleration sensor and the data collected by the distance sensor to analyze the real-time posture characteristics and motion characteristics of the terminal, that is, the motion mode the terminal is currently in, and triggers the corresponding voice application once the specific action is satisfied.
In addition, the voice application trigger control device of the embodiment of the present invention can obtain the posture features and motion characteristics of the terminal with just the acceleration sensor and the distance sensor, without other additional sensors, so the cost is low. The filtering, normalization and smooth fitting used in data pre-processing yield acceleration data that is closer to reality, from which the precise motion mode of the terminal can be analyzed.
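To make the division of labor between the modules of FIG. 6 concrete, the following non-normative Python sketch shows one way the data acquisition module 61, the data processing module 62 (with pre-processing unit 621 and feature extraction unit 622) and the wake-up control module 63 could be composed; the class and method names are hypothetical, and the processing callables are injected rather than implemented here.

```python
from dataclasses import dataclass
from typing import Callable, Dict, Sequence


@dataclass
class DataAcquisitionModule:              # module 61
    read_sensors: Callable[[], dict]      # returns raw accelerometer + distance-sensor data

    def acquire(self) -> dict:
        return self.read_sensors()


@dataclass
class DataProcessingModule:               # module 62 = pre-processing unit 621 + feature extraction unit 622
    preprocess: Callable[[dict], dict]                    # unit 621: separate gravity/linear acceleration, smooth
    extract_features: Callable[[dict], Sequence[float]]   # unit 622: build the motion mode vector

    def motion_mode(self, raw: dict) -> Sequence[float]:
        return self.extract_features(self.preprocess(raw))


@dataclass
class WakeupControlModule:                # module 63
    registered_modes: Dict[str, Sequence[float]]
    similarity: Callable[[Sequence[float], Sequence[float]], float]
    wake_processor: Callable[[str], None]
    threshold: float = 0.9                # example value only

    def maybe_wake(self, current_mode: Sequence[float]) -> None:
        for app, mode in self.registered_modes.items():
            if self.similarity(current_mode, mode) > self.threshold:
                self.wake_processor(app)
                return
```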
Obviously, those skilled in the art should understand that the modules or steps of the above embodiments of the present invention can be implemented by a general-purpose computing device; they may be concentrated on a single computing device or distributed over a network composed of multiple computing devices. Optionally, they may be implemented by program code executable by the computing device, so that they may be stored in a computer storage medium (ROM/RAM, magnetic disk, optical disk) and executed by the computing device, and in some cases the steps shown or described may be performed in an order different from that herein; alternatively, they may be separately fabricated into individual integrated circuit modules, or a plurality of the modules or steps may be fabricated into a single integrated circuit module. Therefore, the present invention is not limited to any particular combination of hardware and software.
The above is a further detailed description of the embodiments of the present invention in combination with specific implementations, and the specific implementation of the present invention shall not be considered limited to these descriptions. For those of ordinary skill in the technical field to which the present invention belongs, several simple deductions or substitutions may be made without departing from the concept of the present invention, and all of these shall be regarded as falling within the scope of protection of the present invention.
An embodiment of the present invention also provides a storage medium. Optionally, in this embodiment, the above storage medium may be set to store program code for performing the following steps:
S1, acquiring motion data of the terminal;
S2, determining the current motion mode of the terminal according to the motion data;
S3, when the current motion mode of the terminal matches the motion mode corresponding to triggering a certain voice application, waking up the processor of the terminal to execute the voice application.
Optionally, in this embodiment, the above storage medium may include, but is not limited to, various media that can store program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a mobile hard disk, a magnetic disk, or an optical disk.
Optionally, for specific examples in this embodiment, reference may be made to the examples described in the above embodiments and optional implementations, which are not repeated here.
Obviously, those skilled in the art should understand that the modules or steps of the present invention described above can be implemented by a general-purpose computing device; they may be centralized on a single computing device or distributed over a network composed of multiple computing devices. Optionally, they may be implemented by program code executable by the computing device, so that they may be stored in a storage device and executed by the computing device, and in some cases the steps shown or described may be performed in an order different from that herein; alternatively, they may be separately fabricated into individual integrated circuit modules, or a plurality of the modules or steps may be fabricated into a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The above are only preferred embodiments of the present invention and are not intended to limit the present invention; for those skilled in the art, the present invention may have various modifications and changes. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention.
Industrial Applicability
According to the voice application trigger control method, device, terminal and storage medium provided by the embodiments of the present invention, the sensor hub module (Sensor Hub) of the terminal is used directly to acquire the motion data of the terminal, the current motion mode of the terminal is then determined according to the collected motion data, and, when the current motion mode of the terminal matches the motion mode corresponding to triggering a certain voice application, the processor of the terminal is woken up to execute the voice application. Since the sensor hub module is built into all kinds of smart terminals, no additional chip needs to be provided for data collection and analysis, and the motion sensor is likewise a standard component of smart terminals, so the solution is lower in cost and more versatile than existing voice wake-up schemes. In addition, the sensor hub module is inherently low-power, and what is collected and analyzed in the present invention is motion data, whose analysis is faster and simpler than that of audio data, so system power consumption can be further reduced and user satisfaction improved.

Claims (11)

  1. A voice application trigger control method, applied to a sensor hub module of a terminal, comprising:
    acquiring motion data of the terminal;
    determining a current motion mode of the terminal according to the motion data;
    when the current motion mode of the terminal matches a motion mode corresponding to triggering a certain voice application, waking up a processor of the terminal to execute the voice application.
  2. The voice application trigger control method according to claim 1, wherein the method further comprises setting a corresponding motion mode for each voice application to be controlled, one motion mode being represented by one motion mode vector, the motion mode vector comprising motion elements and posture elements of the terminal.
  3. The voice application trigger control method according to claim 2, wherein the motion data comprises a terminal acceleration vector collected by an acceleration sensor, and determining the current motion mode of the terminal according to the motion data comprises:
    performing separation processing on the terminal acceleration vector to obtain an actual gravity acceleration vector and an actual linear acceleration vector of the terminal;
    determining motion elements of the terminal's current motion mode according to the obtained actual linear acceleration vector, and obtaining posture elements of the terminal's current motion mode according to the actual gravity acceleration vector.
  4. The voice application trigger control method according to claim 3, wherein performing separation processing on the terminal acceleration vector data to obtain the actual gravity acceleration vector and the actual linear acceleration vector of the terminal comprises:
    filtering the terminal acceleration vector to separate out a theoretical gravity acceleration vector and a theoretical linear acceleration vector;
    pre-processing the theoretical gravity acceleration vector to obtain the actual gravity acceleration vector;
    subtracting the actual gravity acceleration vector from the terminal acceleration vector to obtain the actual linear acceleration vector.
  5. The voice application trigger control method according to claim 3, wherein the posture elements comprise an angle change amount of the terminal, and obtaining the posture elements according to the actual gravity acceleration vector comprises:
    obtaining an angular velocity of the terminal's current angle change according to the variation between a currently obtained actual gravity acceleration vector and a previously obtained actual gravity acceleration vector;
    obtaining the angle change amount of the terminal according to the angular velocity and the acquisition period of the acceleration sensor.
  6. The voice application trigger control method according to claim 4, wherein pre-processing the theoretical gravity acceleration vector to obtain the actual gravity acceleration vector comprises:
    calculating a scalar scale of the theoretical gravity acceleration vector [G_x, G_y, G_z];
    multiplying the theoretical gravity acceleration vector [G_x, G_y, G_z] by the scalar scale to obtain the actual gravity acceleration vector Gravity′ = scale*[G_x, G_y, G_z].
  7. The voice application trigger control method according to any one of claims 2 to 4, wherein determining whether the current motion mode of the terminal matches the motion mode corresponding to triggering a certain voice application comprises:
    calculating a similarity between the motion mode vector of the terminal's current motion mode and the motion mode vector of the motion mode corresponding to the voice application;
    when the similarity is greater than a preset similarity threshold, determining that the current motion mode of the terminal matches the motion mode corresponding to triggering the voice application.
  8. The voice application trigger control method according to any one of claims 2 to 4, wherein the motion elements comprise at least one of a maximum value, a minimum value, an amplitude, a change period, a peak value and a valley value of the linear acceleration.
  9. The voice application trigger control method according to claim 8, wherein the motion elements further comprise at least one of a distance value between the terminal and a user and a dwell time of the terminal at the user's ear;
    the motion data further comprises at least one of the terminal-to-user distance value collected by a distance sensor and the dwell time value of the terminal at the user's ear.
  10. A voice application trigger control device, applied in a sensor hub module of a terminal, comprising:
    a data acquisition module, configured to acquire motion data of the terminal;
    a data processing module, configured to determine a current motion mode of the terminal according to the motion data;
    a wake-up control module, configured to wake up a processor of the terminal to execute a voice application when the current motion mode of the terminal matches a motion mode corresponding to triggering the voice application.
  11. A terminal, comprising the voice application trigger control device according to claim 10.
PCT/CN2016/097714 2016-07-20 2016-08-31 语音应用触发控制方法、装置及终端 WO2018014432A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610572887.0 2016-07-20
CN201610572887.0A CN107643908A (zh) 2016-07-20 2016-07-20 语音应用触发控制方法、装置及终端

Publications (1)

Publication Number Publication Date
WO2018014432A1 true WO2018014432A1 (zh) 2018-01-25

Family

ID=60991684

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/097714 WO2018014432A1 (zh) 2016-07-20 2016-08-31 语音应用触发控制方法、装置及终端

Country Status (2)

Country Link
CN (1) CN107643908A (zh)
WO (1) WO2018014432A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111831959A (zh) * 2020-03-05 2020-10-27 北京嘀嘀无限科技发展有限公司 运动数据处理方法、装置、终端和计算机可读存储介质

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108519895A (zh) * 2018-03-30 2018-09-11 四川斐讯信息技术有限公司 一种智能设备的控制方法和系统
CN111124507A (zh) * 2019-11-18 2020-05-08 珠海格力电器股份有限公司 一种语音设备及其唤醒方法、唤醒装置、存储介质

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103561156A (zh) * 2013-09-24 2014-02-05 北京光年无限科技有限公司 一种通过位移唤醒语音助手的方法
CN104035559A (zh) * 2014-06-04 2014-09-10 小米科技有限责任公司 控制移动终端的方法及装置
CN204116902U (zh) * 2014-02-10 2015-01-21 美的集团股份有限公司 对家用电器语音控制的语音控制端及控制终端
US20150100323A1 (en) * 2013-10-04 2015-04-09 Panasonic Intellectual Property Corporation Of America Wearable terminal and method for controlling the same
CN104536558A (zh) * 2014-10-29 2015-04-22 三星电子(中国)研发中心 一种智能指环和控制智能设备的方法
CN104571529A (zh) * 2015-01-28 2015-04-29 锤子科技(北京)有限公司 一种应用程序唤醒方法以及移动终端
CN105678222A (zh) * 2015-12-29 2016-06-15 浙江大学 一种基于移动设备的人体行为识别方法

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102179811B1 (ko) * 2012-12-03 2020-11-17 엘지전자 주식회사 포터블 디바이스 및 음성 인식 서비스 제공 방법
SE537579C2 (sv) * 2013-04-11 2015-06-30 Crunchfish Ab Bärbar enhet nyttjandes en passiv sensor för initiering av beröringsfri geststyrning
US9970768B2 (en) * 2013-12-20 2018-05-15 Fca Us Llc Vehicle information/entertainment management system
CN105045394A (zh) * 2015-08-03 2015-11-11 歌尔声学股份有限公司 一种可穿戴式电子终端中预设功能的启动方法和装置

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103561156A (zh) * 2013-09-24 2014-02-05 北京光年无限科技有限公司 一种通过位移唤醒语音助手的方法
US20150100323A1 (en) * 2013-10-04 2015-04-09 Panasonic Intellectual Property Corporation Of America Wearable terminal and method for controlling the same
CN204116902U (zh) * 2014-02-10 2015-01-21 美的集团股份有限公司 对家用电器语音控制的语音控制端及控制终端
CN104035559A (zh) * 2014-06-04 2014-09-10 小米科技有限责任公司 控制移动终端的方法及装置
CN104536558A (zh) * 2014-10-29 2015-04-22 三星电子(中国)研发中心 一种智能指环和控制智能设备的方法
CN104571529A (zh) * 2015-01-28 2015-04-29 锤子科技(北京)有限公司 一种应用程序唤醒方法以及移动终端
CN105678222A (zh) * 2015-12-29 2016-06-15 浙江大学 一种基于移动设备的人体行为识别方法

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111831959A (zh) * 2020-03-05 2020-10-27 北京嘀嘀无限科技发展有限公司 运动数据处理方法、装置、终端和计算机可读存储介质

Also Published As

Publication number Publication date
CN107643908A (zh) 2018-01-30

Similar Documents

Publication Publication Date Title
US10353476B2 (en) Efficient gesture processing
CN105184325B (zh) 一种移动智能终端
Wang et al. Fall detection based on dual-channel feature integration
EP2802255B1 (en) Activity classification in a multi-axis activity monitor device
JP6064280B2 (ja) ジェスチャを認識するためのシステムおよび方法
Juefei-Xu et al. Gait-id on the move: Pace independent human identification using cell phone accelerometer dynamics
CN109276255B (zh) 一种肢体震颤检测方法及装置
WO2017050140A1 (zh) 一种人体动作识别方法、识别用户动作的方法和智能终端
Thiemjarus et al. A study on instance-based learning with reduced training prototypes for device-context-independent activity recognition on a mobile phone
CN111288986B (zh) 一种运动识别方法及运动识别装置
Jensen et al. Classification of kinematic swimming data with emphasis on resource consumption
CN109840480B (zh) 一种智能手表的交互方法及交互系统
KR101418333B1 (ko) 사용자 동작 인식 장치 및 그 방법
US11620995B2 (en) Voice interaction processing method and apparatus
WO2018014432A1 (zh) 语音应用触发控制方法、装置及终端
CN108847941B (zh) 身份认证方法、装置、终端及存储介质
CN106662970A (zh) 一种设置指纹识别器中断阈值的方法、装置和终端设备
Fernandez-Lopez et al. Optimizing resources on smartphone gait recognition
CN107533371A (zh) 使用影响手势的用户接口控制
CN111803902B (zh) 泳姿识别方法、装置、可穿戴设备及存储介质
Iyer et al. Generalized hand gesture recognition for wearable devices in IoT: Application and implementation challenges
CN111796663B (zh) 场景识别模型更新方法、装置、存储介质及电子设备
Pipanmaekaporn et al. Mining Acceleration Data for Smartphone-based Fall Detection
KR101958334B1 (ko) 노이즈를 고려한 동작 인식 방법 및 장치
CN111797656A (zh) 人脸关键点检测方法、装置、存储介质及电子设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16909359

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16909359

Country of ref document: EP

Kind code of ref document: A1