CN108595003A - Function control method and related device


Info

Publication number
CN108595003A
Authority
CN
China
Prior art keywords
gesture
user
wearable device
control instruction
combination
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810367319.6A
Other languages
Chinese (zh)
Inventor
郭富豪 (Guo Fuhao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201810367319.6A priority Critical patent/CN108595003A/en
Publication of CN108595003A publication Critical patent/CN108595003A/en
Pending legal-status Critical Current

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/0304 Detection arrangements using opto-electronic means

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Telephone Function (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

This application discloses a function control method and related device. The wearable device includes a photoelectric sensor and a processor, wherein the photoelectric sensor is used to recognize a plurality of user gestures, at least two of which come from different hands; the processor combines the plurality of user gestures into a user gesture combination according to the chronological order of the gesture recognition times and, when the user gesture combination matches a set gesture combination, executes the function corresponding to the set gesture combination. The embodiments of the present application thereby realize intelligent control of a mobile terminal.

Description

Function control method and related equipment
Technical Field
The present application relates to the field of electronic technologies, and in particular, to a function control method and related device.
Background
Currently, with the increasing popularity of mobile terminals such as smart phones, the smart phone has become an inseparable part of its user's daily life. Through a smart phone, the user can send messages, make calls, browse web pages, and so on. A wearable device can be connected to the mobile terminal through wireless technologies such as Bluetooth, and possesses functions such as voice calls and audio playback.
Disclosure of Invention
The embodiments of the present application provide a function control method and related device, which achieve intelligent control over a mobile terminal.
In a first aspect, an embodiment of the present application provides a wearable device, which includes a photoelectric sensor and a processor, wherein:
the photoelectric sensor is used for recognizing a plurality of user gestures, wherein at least two user gestures in the plurality of user gestures are from different hands;
the processor is used for combining the plurality of user gestures into a user gesture combination according to the sequence of gesture recognition time; and executing the function corresponding to the set gesture combination under the condition that the user gesture combination is matched with the set gesture combination.
In a second aspect, an embodiment of the present application provides a function control method, applied to a wearable device including a photosensor, including:
identifying, by the photoelectric sensor, a plurality of user gestures, at least two of which are from different hands;
combining the plurality of user gestures into a user gesture combination according to the sequence of the gesture recognition time;
and executing the function corresponding to the set gesture combination under the condition that the user gesture combination is matched with the set gesture combination.
In a third aspect, an embodiment of the present application provides a function control apparatus, which is applied to a wearable device, and includes a gesture obtaining unit, a gesture processing unit, and a function executing unit, where:
the gesture acquiring unit is used for identifying a plurality of user gestures, wherein at least two user gestures are from different hands;
the gesture processing unit is used for combining the plurality of user gestures into a user gesture combination according to the sequence of gesture recognition time;
and the function execution unit is used for executing the function corresponding to the set gesture combination under the condition that the user gesture combination is matched with the set gesture combination.
In a fourth aspect, embodiments of the present application provide a wearable device, including a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and the programs include instructions for performing the steps of any of the methods of the second aspect of the embodiments of the present application.
In a fifth aspect, the present application provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program for electronic data exchange, where the computer program makes a computer perform part or all of the steps described in any one of the methods in the second aspect of the present application.
In a sixth aspect, the present application provides a computer program product, wherein the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to perform some or all of the steps described in any one of the methods of the second aspect of the present application. The computer program product may be a software installation package.
It can be seen that, in the embodiment of the present application, the wearable device recognizes a plurality of user gestures through the photoelectric sensor, combines the plurality of user gestures into a user gesture combination, and executes the function corresponding to a set gesture combination when the user gesture combination matches that set gesture combination. This realizes intelligent control of the mobile terminal; in addition, because the trigger is a gesture combination rather than a single simple gesture, the problem of false triggering is avoided.
These and other aspects of the present application will be more readily apparent from the following description of the embodiments.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a schematic structural diagram of a function control system according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of a wearable device provided in an embodiment of the present application;
fig. 3A is a schematic flow chart of a function control method according to an embodiment of the present application;
FIG. 3B is a schematic diagram of an example provided by an embodiment of the present application;
FIG. 3C is a schematic diagram of another example provided by an embodiment of the present application;
FIG. 3D is a schematic diagram of another example provided by an embodiment of the present application;
FIG. 3E is a schematic diagram of another example provided by an embodiment of the present application;
FIG. 3F is a schematic diagram of another example provided by an embodiment of the present application;
FIG. 3G is a schematic diagram of another example provided by an embodiment of the present application;
FIG. 3H is a schematic diagram of another example provided by an embodiment of the present application;
FIG. 4 is a flow chart illustrating another method for controlling functions provided by embodiments of the present application;
fig. 5 is a schematic structural diagram of another wearable device provided in the embodiment of the present application;
fig. 6 is a schematic structural diagram of a function control device according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only partial embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The following are detailed below.
The terms "first," "second," "third," and "fourth," etc. in the description and claims of this application and in the accompanying drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The following describes embodiments of the present application in detail.
Referring to fig. 1, fig. 1 is a schematic structural diagram of a function control system according to an embodiment of the present application, in which the function control system includes: a mobile terminal and a wearable device. The wearable device is connected with the mobile terminal through wireless technologies such as Bluetooth or infrared.
The Mobile terminal may include various handheld devices, vehicle-mounted devices, wearable devices, computing devices or other processing devices connected to a wireless modem with wireless communication functions, as well as various forms of User Equipment (UE), Mobile Stations (MS), terminal equipment (terminal device), and so on.
The wearable device may include a wireless headset, a smart bracelet, a smart ear stud, a smart button, a smart ring, smart glasses, a smart helmet, and the like, and has functions such as answering incoming calls, audio playback, audio capture, and tapping-signal acquisition.
The working principle of the function control system of the embodiment of the present application is as follows: the wearable device recognizes a plurality of user gestures through its photoelectric sensor, where at least two of the user gestures come from different hands and each user gesture corresponds to a gesture recognition time; the wearable device combines the plurality of user gestures into a user gesture combination according to the chronological order of the gesture recognition times; if the user gesture combination matches a preset set gesture combination, the wearable device executes the function corresponding to that set gesture combination. A minimal sketch of this flow is given below.
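The following Python fragment is illustrative only: UserGesture, recognize_gestures, set_combinations, and execute are hypothetical names standing in for the sensor output, the preset combination table, and the executed function, none of which the patent specifies at code level.

    from dataclasses import dataclass

    @dataclass
    class UserGesture:
        name: str             # e.g. "snap", "listen"
        hand: str             # "left" or "right"
        recognized_at: float  # gesture recognition time, in seconds

    def control_once(recognize_gestures, set_combinations, execute):
        """One pass of the flow: recognize, combine in time order, match, execute."""
        gestures = recognize_gestures()  # from the photoelectric sensor (stub)
        # At least two of the recognized gestures must come from different hands.
        if len({g.hand for g in gestures}) < 2:
            return
        # Combine into a user gesture combination in chronological order.
        combo = tuple(g.name for g in sorted(gestures, key=lambda g: g.recognized_at))
        # Execute the corresponding function only on a match with a set combination.
        if combo in set_combinations:
            execute(set_combinations[combo])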
At present, some wearable devices only support voice calls and audio playback. Some emerging smart wearable devices can realize intelligent control, but they do so through light sensing: for example, the light intensities on the near-ear side and the outer-ear side are detected from the received optical signals and compared, an electrical signal is generated according to the comparison result, and the electrical signal is sent to the mobile terminal so that the mobile terminal controls the corresponding function. Because light is strongly affected by the environment, such as weather, indoor lighting and outdoor lighting, the comparison result based on light intensity is not accurate enough, which leads to erroneous intelligent control and low effectiveness.
In the embodiments of the present application, intelligent control over the mobile terminal is achieved through a user gesture combination. In addition, a matching step is performed before control is triggered, and at least two user gestures in the combination come from different hands, which improves control accuracy and avoids the problem of accidental triggering.
Referring to fig. 2, fig. 2 is a schematic structural diagram of a wearable device 200 according to an embodiment of the present application, where the wearable device 200 includes: the photoelectric sensor 10, the processor 20, the memory 30, the tapping signal acquiring device 40, the camera 50, the communication interface 60, the microphone 70 and the earpiece 80, and the photoelectric sensor 10, the memory 30, the tapping signal acquiring device 40, the camera 50, the communication interface 60, the microphone 70 and the earpiece 80 are connected with the processor 20, wherein:
a photoelectric sensor 10 for recognizing a plurality of user gestures, at least two of which are from different hands;
a processor 20, configured to combine the plurality of user gestures into a user gesture combination according to a sequence of gesture recognition times; and executing the function corresponding to the set gesture combination under the condition that the user gesture combination is matched with the set gesture combination.
Further, the wearable device 200 further comprises a battery (not shown) for supplying power to the wearable device 200.
The communication interface 60 is configured to pair the wearable device 200 with a mobile terminal and to transmit signals, such as control instructions and audio information, to the mobile terminal.
The tapping signal acquiring device 40 is configured to acquire a tapping signal input by the user. The tapping signal includes a tapping-action combination composed of a plurality of tapping actions; the tapping actions may all be simple tapping actions, may all be complex tapping actions, or may include both, which is not limited herein. The tapping signal acquiring device 40 includes at least one of the following: a capacitive sensor, an acceleration sensor, a pressure sensor, a gyroscope, and the like.
The processor 20 is a control center of the wearable device, connects various parts of the entire wearable device by using various interfaces and lines, and performs various functions and processes of the wearable device by running or executing software programs and/or modules stored in the memory 30 and calling data stored in the memory 30, thereby performing overall monitoring of the wearable device. Alternatively, the processor 20 may integrate an application processor, which primarily handles operating systems, user interfaces, application programs, etc., and a modem processor, which primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor.
The memory 30 may be used to store software programs and modules, and the processor 20 executes various functional applications and data processing of the wearable device by running the software programs and modules stored in the memory 30. The memory may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required for at least one function, and the like; the data storage area may store data created according to the use of the wearable device, and the like. Further, the memory 30 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another volatile solid-state storage device.
In another implementation, the wearable device may include control circuitry (not shown), which may include storage and processing circuitry. The storage and processing circuitry may be memory, such as hard disk drive memory, non-volatile memory (e.g., flash memory or other electronically programmable read-only memory used to form a solid state drive, etc.), volatile memory (e.g., static or dynamic random access memory, etc.), etc., and embodiments of the present application are not limited thereto. Processing circuitry in the storage and processing circuitry may be used to control operation of the wearable device. The processing circuitry may be implemented based on one or more microprocessors, microcontrollers, digital signal processors, baseband processors, power management units, audio codec chips, application specific integrated circuits, display driver integrated circuits, and the like.
The storage and processing circuitry may be used to run software in the wearable device, such as a sensor switching application, an application for establishing a communication connection with the mobile terminal device, an application for breaking the communication connection with the mobile terminal device, a communication application with the mobile terminal, an audio playback application, and the like. The software may be used to perform some control operations, such as recognizing gestures based on the gesture sensor, capturing image information based on the camera, capturing audio based on audio components, playing audio based on audio components, and other functions in the wearable device, and the embodiments of the present application are not limited thereto.
In another implementation, the wearable device may include input-output circuitry (not shown). The input-output circuitry may be used to enable the wearable device to input and output data, i.e., to allow the wearable device to receive data from an external device and also to output data from the wearable device to an external device. The input-output circuitry may include analog and digital input-output interface circuitry, and wireless communication circuitry based on radio frequency signals and/or optical signals. The wireless communication circuitry in the input-output circuitry may include radio-frequency transceiver circuitry, power amplifier circuitry, low noise amplifiers, switches, and filters. For example, the wireless communication circuitry in the input-output circuitry may include circuitry to support Near Field Communication (NFC) by transmitting and receiving near-field coupled electromagnetic signals; for example, the input-output circuitry may include a near field communication antenna and a near field communication transceiver.
In another implementation, the wearable device may include an audio component. The audio component may be used to provide audio input and output functionality for the wearable device. Audio components in a wearable device may include speakers, microphones, buzzers, tone generators, and other components for generating and detecting sound.
The wearable device may further include other input-output units (not shown), which may include buttons, joysticks, click wheels, scroll wheels, touch pads, keypads, cameras, light emitting diodes and other status indicators, etc.
It can be seen that, in the embodiment of the present application, the wearable device recognizes a plurality of user gestures through the photoelectric sensor, combines the plurality of user gestures into a user gesture combination, and executes the function corresponding to a set gesture combination when the user gesture combination matches that set gesture combination. This realizes intelligent control of the mobile terminal; in addition, because the trigger is a gesture combination rather than a single simple gesture, the problem of false triggering is avoided.
In an embodiment of the present application, the tapping signal acquiring device 40 is configured to acquire a tapping signal input by the user on the wearable device; the processor is further configured to start the photoelectric sensor when the tapping signal matches a set tapping signal; or,
a processor 20, further configured to activate the photoelectric sensor upon detecting a pressing operation by the user at a specified position of the wearable device.
Wherein the wearable device 200 further comprises a pressure sensor (not shown), the pressing operation being detectable by the pressure sensor, which is arranged below the designated position.
In an embodiment of the present application, the camera 50 is configured to collect image information;
the processor 20 is further configured to determine, according to the image information, whether an obstruction exists in front of the camera, and to prompt the user to remove the obstruction when one exists in front of the camera.
In an embodiment of the application, in terms of executing the function corresponding to the setting gesture combination, the processor 20 is specifically configured to:
determining a control instruction corresponding to the set gesture combination, and sending the control instruction through the communication interface 60 to at least one mobile terminal connected to the wearable device, where the control instruction includes control information and is used to instruct the at least one mobile terminal to execute the control information.
In an embodiment of the present application, the communication interface 60 is further configured to receive response information sent by the mobile terminal for the control instruction;
a processor 20, further configured to convert the response information into audio;
and an earpiece 80 for playing the audio.
It should be noted that, the specific implementation process of the present embodiment may refer to the specific implementation process described in the following method, and is not described here.
Referring to fig. 3A, fig. 3A is a schematic flowchart of a function control method according to an embodiment of the present application, applied to the wearable device shown in fig. 2, the method including:
step 301: the wearable device identifies a plurality of user gestures through the photoelectric sensor, wherein at least two user gestures exist in the plurality of user gestures and are from different hands, and each user gesture corresponds to one gesture identification time.
The photoelectric sensor comprises a camera, which may be an ordinary camera, an infrared camera, or the like. The photoelectric sensor may also include other sensors capable of recognizing user gestures. The position of the photoelectric sensor on the wearable device is set as required, so that user gestures can be conveniently recognized.
The plurality of user gestures may all be non-contact gestures, may all be contact gestures, or may include both non-contact and contact gestures, which is not limited herein.
Further, the photoelectric sensor includes a camera, and the wearable device recognizes the plurality of user gestures through the photoelectric sensor as follows: the wearable device starts the camera to capture video, obtaining video information, and then recognizes the video information to obtain the user gestures in it. Recognizing user gestures in video information can be realized with existing image recognition technology, which is not described here.
It should be noted that, while the position of the wearable device is unchanged, the position of its camera is also unchanged, so the wearable device can determine whether different user gestures come from different hands according to the direction of each gesture. For example, assume the wearable device is a wireless headset worn in the user's left ear. If the user first makes a finger-snapping gesture at the left ear with the left hand, the camera of the wireless headset recognizes a snapping gesture whose fingers point forward relative to the left ear; if the user then makes a finger-snapping gesture at the left ear with the right hand, the camera recognizes a snapping gesture whose fingers point backward relative to the left ear. Although both are snapping gestures, the finger directions differ, so it can be determined that the two gestures come from different hands, as sketched below.
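As an illustration of this direction-based inference, the sketch below assumes the recognizer reports a coarse finger direction ("forward" or "backward") relative to the worn ear; the direction-to-hand table is an assumption drawn from the wireless-headset example, not something the patent pins down.

    def infer_hand(finger_direction: str, worn_ear: str = "left") -> str:
        """Map the reported finger direction to the hand that made the gesture.

        With the headset in the left ear, a left-hand snap points forward and
        a right-hand snap points backward (per the example above); the
        right-ear rows are the mirrored assumption.
        """
        direction_to_hand = {
            ("left", "forward"): "left",
            ("left", "backward"): "right",
            ("right", "forward"): "right",
            ("right", "backward"): "left",
        }
        return direction_to_hand[(worn_ear, finger_direction)]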
Further, in one implementation, any two user gestures adjacent in gesture recognition time are different. For example, assume there are 3 user gestures: user gesture 1 with gesture recognition time 1, user gesture 2 with gesture recognition time 2, and user gesture 3 with gesture recognition time 3, where time 1 is adjacent to time 2 and time 2 is adjacent to time 3; then user gesture 1 differs from user gesture 2, and user gesture 2 differs from user gesture 3.
Alternatively, two user gestures adjacent in gesture recognition time may be the same: in the same example, user gesture 1 may be the same as user gesture 2, and user gesture 2 the same as user gesture 3.
Step 302: the wearable device combines the plurality of user gestures into a user gesture combination according to the sequence of the gesture recognition time.
For example, if the plurality of user gestures are 3 user gestures: user gesture 1 (gesture recognition time 1), user gesture 2 (gesture recognition time 2), and user gesture 3 (gesture recognition time 3), where time 1 is earlier than time 2 and time 2 is earlier than time 3, then the wearable device combines them into the user gesture combination user gesture 1 - user gesture 2 - user gesture 3.
Step 303: When the user gesture combination matches a set gesture combination, the wearable device executes the function corresponding to the set gesture combination.
If the user gesture combination is the same as the set gesture combination, the two match; otherwise, they do not. For example, assume the user gesture combination is user gesture 1 - user gesture 2 - user gesture 3; if the set gesture combination is also user gesture 1 - user gesture 2 - user gesture 3, the user gesture combination matches the set gesture combination.
Further, different setting gesture combinations correspond to different functions, for example, setting gesture combination 1 corresponds to function 1, setting gesture combination 2 corresponds to function 2, setting gesture combination 1 is different from setting gesture combination 2, and function 1 is different from function 2.
It can be seen that, in the embodiment of the present application, the wearable device recognizes a plurality of user gestures through the photoelectric sensor, combines the plurality of user gestures into a user gesture combination, and executes the function corresponding to a set gesture combination when the user gesture combination matches that set gesture combination. This realizes intelligent control of the mobile terminal; in addition, because the trigger is a gesture combination rather than a single simple gesture, the problem of false triggering is avoided.
In an embodiment of the application, before the wearable device recognizes a plurality of user gestures through the photoelectric sensor, the method further includes:
the method comprises the steps that a wearable device acquires a tapping signal input by a user on the wearable device; and starting the photoelectric sensor under the condition that the knocking signal is matched with a set knocking signal.
The tapping signal includes at least one of: the number of taps, the tapping frequency, the tapping sound, and the tapping action. The tapping action may be a single action or a plurality of actions; when there are a plurality of tapping actions, at least two of them differ from each other.
Furthermore, the set tapping signal is input in advance by the user according to a set tapping action and a set tapping frequency. The set tapping action is a simple one, such as tapping twice with one finger.
Specifically, if the photoelectric sensor stayed on the whole time the wearable device is powered on, power consumption would be excessive. Therefore, in the embodiment of the present application, the photoelectric sensor is started only when needed, which saves power; in addition, triggering the start of the photoelectric sensor through a tapping action makes human-computer interaction more engaging. A minimal sketch of the matching that gates the start-up follows.
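The sketch below assumes the tapping signal is reduced to a tap count, a tapping frequency, and a sequence of action labels; these attributes and the frequency tolerance are illustrative choices, since the text only lists what a tapping signal may include.

    from dataclasses import dataclass

    @dataclass
    class TapSignal:
        count: int        # number of taps
        frequency: float  # taps per second
        actions: tuple    # tap-action labels, e.g. ("tap", "tap")

    def tap_matches(observed: TapSignal, preset: TapSignal,
                    freq_tolerance: float = 0.5) -> bool:
        """True if the observed tapping signal matches the set tapping signal."""
        return (observed.count == preset.count
                and observed.actions == preset.actions
                and abs(observed.frequency - preset.frequency) <= freq_tolerance)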
In an embodiment of the application, before the wearable device recognizes a plurality of user gestures through the photoelectric sensor, the method further includes:
the wearable device activates the photoelectric sensor when a pressing operation of a user for a specified position of the wearable device is detected.
Assuming that the wearable device is a wireless headset, the specified position may be a certain position of an earbud housing of the wireless headset, or a certain position of an earstem housing of the wireless headset, which is not limited herein.
Specifically, if the photoelectric sensor stayed on the whole time the wearable device is powered on, power consumption would be excessive. Therefore, in the embodiment of the present application, the photoelectric sensor is started only when needed, which saves power; in addition, triggering the start of the photoelectric sensor by pressing the specified position is convenient and fast, improving user experience.
In an embodiment of the application, before the wearable device recognizes a plurality of user gestures through the photoelectric sensor, the method further includes:
the wearable equipment acquires voice information input by a user;
the wearable device parses the voice information to obtain at least one keyword;
and when at least one of the parsed keywords matches a set target keyword, the wearable device starts the photoelectric sensor.
For example, assuming that the photoelectric sensor is a camera and the voice information input by the user is "start the camera", the keywords obtained by parsing include "start" and "camera". If the set target keywords include "start", "open", "turn on", "camera", and the like, then 2 keywords in the user's voice information match set target keywords, and the wearable device starts the camera.
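A sketch of this keyword gate, using the example above; the target keyword set and the upstream speech-parsing step are assumptions.

    TARGET_KEYWORDS = {"start", "open", "turn on", "camera"}  # set target keywords (assumed)

    def should_start_sensor(parsed_keywords) -> bool:
        """Start the photoelectric sensor if at least one parsed keyword
        matches a set target keyword."""
        return any(keyword in TARGET_KEYWORDS for keyword in parsed_keywords)

    # "start the camera" parses to {"start", "camera"}: two matches, so the sensor starts.
    assert should_start_sensor({"start", "camera"})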
Specifically, if the photoelectric sensor stayed on the whole time the wearable device is powered on, power consumption would be excessive. Therefore, in the embodiment of the present application, the photoelectric sensor is started only when needed, which saves power; in addition, triggering the start of the photoelectric sensor by voice is convenient and fast, improving user experience.
In an embodiment of the present application, the photoelectric sensor includes a camera, and after the wearable device activates the photoelectric sensor, the method further includes:
the wearable device collects image information through the camera;
the wearable device judges, according to the image information, whether an obstruction exists in front of the camera;
if an obstruction exists in front of the camera, the wearable device prompts the user to remove the obstruction;
if no obstruction exists in front of the camera, the wearable device recognizes the plurality of user gestures through the photoelectric sensor.
Further, the wearable device prompts the user to remove the obstruction as follows: the wearable device acquires prompt information associated with the obstruction, the prompt information being used to prompt the user to remove the obstruction, and plays the prompt information. For example, if the obstruction is hair, the associated prompt information may be "please move your hair away"; if the obstruction is a collar, the associated prompt information may be "please move your collar away".
Further, the wearable device judges from the image information whether an obstruction exists in front of the camera as follows: the wearable device recognizes the image information to obtain at least one object in it; if at least one of these objects matches a set target object, it is judged that an obstruction exists in front of the camera; otherwise, it is judged that no obstruction exists in front of the camera.
Specifically, if the wearable device is a wireless headset, the objects that typically block its camera when the user wears it are hair, a collar, a hat, an earring, and the like, so whether an obstruction exists in front of the camera can be determined by judging whether the captured image information contains such objects.
The wearable device may prompt the user to remove the obstruction through its earpiece, or in other ways, which is not limited herein.
Therefore, before recognizing user gestures, the wearable device first judges whether an obstruction exists in front of the camera, and recognizes user gestures only when there is none, which improves the accuracy of gesture recognition. A minimal sketch of this check follows.
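The sketch assumes an unspecified image-recognition step has already produced object labels from the image information; the label set and the prompt wording are illustrative.

    # Objects that typically block the camera of a worn wireless headset (per the text).
    OBSTRUCTION_LABELS = {"hair", "collar", "hat", "earring"}

    def find_obstruction(detected_objects):
        """Return the first detected object matching a set target object, or None."""
        for label in detected_objects:
            if label in OBSTRUCTION_LABELS:
                return label
        return None

    def check_then_recognize(detected_objects, play_prompt, recognize_gestures):
        """Prompt removal if an obstruction is found; otherwise recognize gestures."""
        obstruction = find_obstruction(detected_objects)
        if obstruction is not None:
            play_prompt(f"please move your {obstruction} away")  # assumed wording
        else:
            recognize_gestures()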
In an embodiment of the application, a specific implementation manner of the wearable device executing the function corresponding to the set gesture combination includes:
the wearable device determines a control instruction corresponding to the set gesture combination, and sends the control instruction to at least one mobile terminal connected with the wearable device, wherein the control instruction comprises control information, and the control instruction is used for instructing the at least one mobile terminal to execute the control information.
The different control instructions are described below through examples.
Example 1: increasing the volume.
The user gesture combination combined in step 302 is a finger-snapping gesture followed by a listening gesture, for example a snap made first with the left hand and then a listening gesture made with the right hand. The control information included in the corresponding control instruction is to increase the volume, so after receiving the control instruction sent by the wearable device, the mobile terminal increases the volume, as shown in fig. 3B.
Further, how much the volume is increased may be preset by the user, such as by 1, 2, or 3 volume bars.
Alternatively, how much the volume is increased is specified by the wearable device through the control instruction: in addition to the control information, the control instruction carries the amount of volume increase, which is determined based on the loudness of the sound produced by the snapping gesture. The louder the snap, the larger the volume increase; the quieter the snap, the smaller the increase.
Example 2: decreasing the volume.
The user gesture combination combined in step 302 is a finger-snapping gesture followed by an ear-covering gesture, for example a snap made first with the left hand and then an ear-covering gesture made with the right hand. The control information included in the corresponding control instruction is to decrease the volume, so after receiving the control instruction sent by the wearable device, the mobile terminal decreases the volume, as shown in fig. 3C.
Further, how much the volume is decreased may be preset by the user, such as by 1, 2, or 3 volume bars.
Alternatively, how much the volume is decreased is specified by the wearable device through the control instruction, which carries the amount of volume decrease in addition to the control information; the amount is again determined based on the loudness of the snap. The louder the snap, the larger the decrease; the quieter the snap, the smaller the decrease. A sketch of such a loudness-to-step mapping follows.
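Only the monotonic relationship (louder snap, larger volume change) is stated in the text; the decibel thresholds below are illustrative assumptions.

    def volume_steps_from_snap(loudness_db: float) -> int:
        """Map snap loudness to the number of volume steps carried in the
        control instruction: louder snap, larger change (thresholds assumed)."""
        if loudness_db >= 70.0:
            return 3
        if loudness_db >= 55.0:
            return 2
        return 1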
Example 3: making a call.
The user gesture combination combined in step 302 is a phone-dialing gesture followed by a number gesture, for example a dialing gesture made first with the left hand and then a number gesture made with the right hand. The control information in the corresponding control instruction is to call the user corresponding to the number gesture, so after receiving the control instruction sent by the wearable device, the mobile terminal places the call, as shown in fig. 3D.
Example 4: hanging up a call.
The user gesture combination combined in step 302 is a finger-snapping gesture followed by a hang-up gesture, for example a snap made first with the left hand and then a hang-up gesture made with the right hand. The control information in the corresponding control instruction is to hang up the current call, so after receiving the control instruction sent by the wearable device, the mobile terminal hangs up the current call, as shown in fig. 3E.
Example 5: playing music.
The user gesture combination combined in step 302 is a finger-snapping gesture followed by a number gesture, for example a snap made first with the left hand and then a number gesture made with the right hand. The control information in the corresponding control instruction is to start the music player corresponding to the number gesture, so after receiving the control instruction sent by the wearable device, the mobile terminal starts that music player to play music, as shown in fig. 3F.
Example 6: stopping music.
The user gesture combination combined in step 302 is a finger-snapping gesture followed by another finger-snapping gesture, for example a snap made first with the left hand and then a snap made with the right hand. The control information in the corresponding control instruction is to stop the currently playing music, so after receiving the control instruction sent by the wearable device, the mobile terminal stops the currently playing music, as shown in fig. 3G.
Example 7: sending a distress short message.
The user gesture combination combined in step 302 is a letter-S gesture, a letter-O gesture, and a letter-S gesture, for example the first letter-S gesture made with the left hand, the letter-O gesture made with the right hand, and the final letter-S gesture made with the left hand. The control information in the corresponding control instruction is to send a preset distress short message to a designated user, so after receiving the control instruction sent by the wearable device, the mobile terminal sends the preset distress short message to the designated user, as shown in fig. 3H. The seven examples amount to a lookup from set gesture combinations to control information, sketched below.
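In the sketch, the gesture labels are shorthand for the gestures in figs. 3B-3H and the control-information strings are illustrative; the patent does not fix a concrete encoding.

    SET_GESTURE_COMBINATIONS = {
        ("snap", "listen"):    "increase volume",           # Example 1, fig. 3B
        ("snap", "cover_ear"): "decrease volume",           # Example 2, fig. 3C
        ("dial", "number"):    "call the numbered contact", # Example 3, fig. 3D
        ("snap", "hang_up"):   "hang up current call",      # Example 4, fig. 3E
        ("snap", "number"):    "play music via player",     # Example 5, fig. 3F
        ("snap", "snap"):      "stop current music",        # Example 6, fig. 3G
        ("S", "O", "S"):       "send preset distress SMS",  # Example 7, fig. 3H
    }

    def control_information_for(user_combo):
        """Control information for a matched user gesture combination, or
        None when no set gesture combination matches."""
        return SET_GESTURE_COMBINATIONS.get(tuple(user_combo))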
In an embodiment of the application, after the wearable device sends the control instruction to the mobile terminal in communication with the wearable device, the method further includes:
the wearable device receives response information sent by the mobile terminal aiming at the control instruction;
the wearable device converts the response information into audio and plays the audio.
Specifically, there are multiple types of control instructions. In some cases the wearable device can complete the control instruction on its own; in others the instruction must be completed together with the mobile terminal, such as answering a call, making a call, or playing audio. In the latter cases, the wearable device receives response information for the control instruction from the mobile terminal, and converts it into audio and plays it so that the user can conveniently receive it. For example, if the control information of the control instruction is to call a certain number and the response information is the text "the call has been placed", that text is converted into the corresponding audio. A minimal sketch of this response path follows.
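In the sketch, synthesize and play stand in for a text-to-speech engine and the earpiece output, neither of which the patent specifies.

    def handle_response(response_text: str, synthesize, play):
        """Convert the mobile terminal's textual response into audio and play it.

        Example: response_text == "the call has been placed" is synthesized
        and played through the earpiece.
        """
        audio = synthesize(response_text)  # hypothetical text-to-speech step
        play(audio)                        # hypothetical earpiece playback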
The embodiment of the present application also provides another more detailed method flow, as shown in fig. 4, which is applied to the wearable device shown in fig. 2, and the method includes:
step 401: a wearable device acquires a tapping signal input by a user on the wearable device.
Step 402: and under the condition that the knocking signal is matched with a set knocking signal, the wearable device starts the photoelectric sensor.
Step 403: the wearable device collects image information through the camera.
Step 404: The wearable device judges, according to the image information, whether an obstruction exists in front of the camera.
If there is an obstruction in front of the camera, step 405 is performed.
If there is no obstruction in front of the camera, step 406 is performed.
Step 405: the wearable device prompts the user to remove the obstruction.
Step 406: the wearable device identifies a plurality of user gestures through the electro-optical sensor, wherein at least two of the user gestures are from different hands.
Step 407: the wearable device combines the plurality of user gestures into a user gesture combination according to the sequence of the gesture recognition time.
Step 408: When the user gesture combination matches a set gesture combination, the wearable device determines the control instruction corresponding to the set gesture combination, where the control instruction includes control information and is used to instruct the at least one mobile terminal to execute the control information.
Step 409: the wearable device sends a control instruction to at least one mobile terminal connected with the wearable device.
Step 410: The wearable device receives response information sent by the mobile terminal for the control instruction.
Step 411: the wearable device converts the response information into audio and plays the audio.
It should be noted that, for the specific implementation of the steps of the method shown in fig. 4, reference may be made to the specific implementation of the foregoing method embodiment, and details are not repeated here.
Consistent with the embodiments shown in fig. 3A and fig. 4, please refer to fig. 5, fig. 5 is a schematic structural diagram of a wearable device provided in an embodiment of the present application, and as shown, the wearable device includes a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and the programs include instructions for:
identifying a plurality of user gestures, at least two of which are from different hands;
combining the plurality of user gestures into a user gesture combination according to the sequence of the gesture recognition time;
and executing the function corresponding to the set gesture combination under the condition that the user gesture combination is matched with the set gesture combination.
In an embodiment of the application, before recognizing the plurality of user gestures, the program includes instructions for further performing the steps of:
acquiring a tapping signal input by the user on the wearable device, and starting the photoelectric sensor when the tapping signal matches a set tapping signal; or,
the method includes initiating the photoelectric sensor upon detecting a pressing operation of a user for a specified location of the wearable device.
In an embodiment of the application, after activating the photosensor, the program includes instructions for further performing the steps of:
acquiring image information through a camera;
judging, according to the image information, whether an obstruction exists in front of the camera;
and prompting the user to remove the obstruction when an obstruction exists in front of the camera.
In an embodiment of the application, in terms of executing the function corresponding to the set gesture combination, the program includes instructions specifically configured to:
determining a control instruction corresponding to the set gesture combination, and sending the control instruction to at least one mobile terminal connected with the wearable device, wherein the control instruction comprises control information, and the control instruction is used for instructing the at least one mobile terminal to execute the control information.
In an embodiment of the application, after sending the control instruction to the mobile terminal in communication with the wearable device, the program includes instructions for further performing the following steps:
receiving response information sent by the mobile terminal aiming at the control instruction;
converting the response information into audio, and playing the audio.
It should be noted that, for the specific implementation process of the present embodiment, reference may be made to the specific implementation process described in the above method embodiment, and a description thereof is omitted here.
Referring to fig. 6, fig. 6 is a function control device provided in an embodiment of the present application, and is applied to the wearable device shown in fig. 2, and the function control device includes a gesture obtaining unit 601, a gesture processing unit 602, a function executing unit 603, a tapping signal obtaining unit 604, a starting unit 605, an image capturing unit 606, a determining unit 607, a prompting unit 608, a receiving unit 609, a converting unit 610, and a playing unit 611, where:
a gesture obtaining unit 601, configured to recognize a plurality of user gestures, where at least two of the user gestures are from different hands;
a gesture processing unit 602, configured to combine the multiple user gestures into a user gesture combination according to a sequence of gesture recognition times;
a function executing unit 603, configured to execute a function corresponding to the set gesture combination when the user gesture combination matches the set gesture combination.
In an embodiment of the present application, the tapping signal obtaining unit 604 is configured to obtain a tapping signal input by the user on the wearable device; the starting unit 605 is configured to start the photoelectric sensor when the tapping signal matches a set tapping signal.
In an embodiment of the present application, the starting unit 605 starts the photoelectric sensor when detecting a pressing operation of a user on a specified position of the wearable device.
In an embodiment of the present application, the image collecting unit 606 is configured to collect image information through a camera;
the judging unit 607 is configured to judge, according to the image information, whether an obstruction exists in front of the camera;
a prompting unit 608, configured to prompt the user to remove the obstruction when an obstruction exists in front of the camera.
In an embodiment of the application, in terms of executing the function corresponding to the setting gesture combination, the function executing unit 603 is specifically configured to:
determining a control instruction corresponding to the set gesture combination, and sending the control instruction to at least one mobile terminal connected with the wearable device, wherein the control instruction comprises control information, and the control instruction is used for instructing the at least one mobile terminal to execute the control information.
In an embodiment of the present application, the receiving unit 609 is configured to receive response information sent by the mobile terminal for the control instruction;
a converting unit 610 for converting the response information into audio;
the playing unit 611 is configured to play the audio.
It should be noted that the wearable device described in the embodiments of the present application is presented in the form of functional units. The term "unit" as used herein is to be understood in its broadest possible sense, and the objects used to implement the functions described by each "unit" may be, for example, an application-specific integrated circuit (ASIC), a single circuit, a processor (shared, dedicated, or chipset) and memory that execute one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that provide the described functionality.
The gesture obtaining unit 601, the gesture processing unit 602, the function executing unit 603, the starting unit 605, the image capturing unit 606, the determining unit 607 and the converting unit 610 may be processors or controllers, the prompting unit 608 and the playing unit 611 may be receivers or speakers, and the tapping signal obtaining unit 604 may be a tapping signal obtaining device.
Embodiments of the present application also provide a computer storage medium, wherein the computer storage medium stores a computer program for electronic data exchange, the computer program enabling a computer to perform part or all of the steps of any one of the methods as described in the above method embodiments, and the computer includes a wearable device.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of any of the methods as described in the above method embodiments. The computer program product may be a software installation package, the computer comprising a wearable device.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative; for instance, the division into units is only one kind of logical functional division, and other divisions may be used in practice: multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be in electrical or other forms.
The units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as a stand-alone product, it may be stored in a computer-readable memory. Based on such understanding, the technical solution of the present application, in essence, or the part of it that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory and including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned memory includes various media capable of storing program code, such as a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by related hardware instructed by a program, and the program may be stored in a computer-readable memory, which may include: a flash memory disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disc, and the like.
The foregoing detailed description of the embodiments of the present application illustrates the principles and implementations of the present application; the above description of the embodiments is provided only to help understand the method and core concept of the present application. Meanwhile, a person skilled in the art may, according to the idea of the present application, make changes to the specific implementations and the application scope. In view of the above, the content of this specification should not be construed as limiting the present application.

Claims (13)

1. A wearable device, comprising a photoelectric sensor and a processor, wherein:
the photoelectric sensor is used for recognizing a plurality of user gestures, wherein at least two user gestures in the plurality of user gestures are from different hands;
the processor is used for combining the plurality of user gestures into a user gesture combination according to the sequence of gesture recognition times, and for executing, when the user gesture combination matches a set gesture combination, the function corresponding to the set gesture combination.
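To make the combining-and-matching step of claim 1 concrete, here is a minimal Python sketch; the gesture labels, the hand tags, and the strict sequence-equality matching rule are assumptions of the sketch rather than limitations of the claim.

```python
from dataclasses import dataclass


@dataclass
class RecognizedGesture:
    name: str             # e.g. "fist" or "palm" (illustrative labels)
    hand: str             # "left" or "right"
    recognized_at: float  # gesture recognition time, in seconds


def matches_set_combination(gestures, set_combination):
    """Combine recognized gestures into a combination ordered by
    recognition time, then test it against a preset combination."""
    ordered = sorted(gestures, key=lambda g: g.recognized_at)
    combination = [(g.hand, g.name) for g in ordered]
    return combination == set_combination


# Two gestures from different hands: left fist first, right palm second
observed = [
    RecognizedGesture("palm", "right", recognized_at=1.8),
    RecognizedGesture("fist", "left", recognized_at=0.9),
]
preset = [("left", "fist"), ("right", "palm")]
if matches_set_combination(observed, preset):
    print("execute the function bound to this gesture combination")
```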
2. The wearable device of claim 1, further comprising a tapping signal acquisition device, wherein:
the tapping signal acquisition device is used for acquiring a tapping signal input by a user on the wearable device; the processor is further used for starting the photoelectric sensor under the condition that the tapping signal matches a set tapping signal; or,
the processor is further configured to start the photoelectric sensor upon detecting a pressing operation by the user at a specified position of the wearable device.
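As an illustrative reading of claim 2, a tapping signal can be summarized by the intervals between successive taps and compared against a preset pattern with a small timing tolerance. The representation and the tolerance value below are assumptions of this sketch.

```python
def matches_set_tapping_signal(tap_times, set_intervals, tolerance=0.15):
    """Compare the intervals between successive taps (in seconds)
    against a preset pattern, allowing a per-interval tolerance."""
    if len(tap_times) != len(set_intervals) + 1:
        return False
    intervals = [b - a for a, b in zip(tap_times, tap_times[1:])]
    return all(abs(got - want) <= tolerance
               for got, want in zip(intervals, set_intervals))


# A double tap roughly 0.3 s apart starts the photoelectric sensor
if matches_set_tapping_signal([0.00, 0.31], set_intervals=[0.30]):
    print("start the photoelectric sensor")
```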
3. The wearable device of claim 2, wherein the photoelectric sensor comprises a camera, wherein:
the camera is used for collecting image information;
the processor is further used for judging, according to the image information, whether an obstruction exists in front of the camera, and for prompting the user to remove the obstruction when an obstruction exists in front of the camera.
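One plausible heuristic for the judgment in claim 3: a lens covered by an obstruction usually produces a dark, low-contrast frame. The NumPy sketch below checks mean brightness and variance against thresholds; both threshold values are assumptions, not taught by this application.

```python
import numpy as np


def camera_is_obstructed(frame, brightness_threshold=40.0,
                         variance_threshold=100.0):
    """Heuristically judge whether an obstruction covers the camera:
    an occluded lens tends to yield a dark, low-contrast frame."""
    gray = frame.mean(axis=2) if frame.ndim == 3 else frame  # to grayscale
    return gray.mean() < brightness_threshold and gray.var() < variance_threshold


# Simulated dark, nearly uniform frame, as a covered lens might produce
frame = np.full((240, 320, 3), 12, dtype=np.uint8)
if camera_is_obstructed(frame):
    print("please remove the obstruction in front of the camera")
```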
4. The wearable device according to any of claims 1-3, further comprising a communication interface, wherein in performing the function corresponding to the set gesture combination, the processor is specifically configured to:
determining a control instruction corresponding to the set gesture combination, and sending the control instruction to at least one mobile terminal connected with the wearable device through the communication interface, wherein the control instruction comprises control information, and the control instruction is used for instructing the at least one mobile terminal to execute the control information.
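To illustrate claim 4's determine-and-send step, the sketch below maps a matched gesture combination to control information and transmits it to connected terminals as a JSON line over a TCP socket; the table contents, the JSON wire format, and the use of TCP in place of the wearable device's actual communication interface are all assumptions.

```python
import json
import socket

# Illustrative table mapping set gesture combinations to control information
CONTROL_TABLE = {
    (("left", "fist"), ("right", "palm")): {"target": "music", "op": "pause"},
    (("left", "palm"), ("right", "fist")): {"target": "call", "op": "answer"},
}


def send_control_instruction(combination, terminals):
    """Determine the control information for a matched gesture combination
    and send it to each connected mobile terminal as one JSON line."""
    control_info = CONTROL_TABLE[combination]
    payload = (json.dumps({"control": control_info}) + "\n").encode("utf-8")
    for host, port in terminals:  # e.g. [("192.168.1.20", 9000)]
        with socket.create_connection((host, port), timeout=2.0) as sock:
            sock.sendall(payload)
```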
5. The wearable device of claim 4, further comprising an earpiece, wherein:
the communication interface is further used for receiving response information sent by the mobile terminal aiming at the control instruction;
the processor is further used for converting the response information into audio;
the earpiece is used for playing the audio.
6. A function control method, applied to a wearable device comprising a photoelectric sensor, the method comprising:
identifying, by the photoelectric sensor, a plurality of user gestures, wherein at least two of the plurality of user gestures are from different hands;
combining the plurality of user gestures into a user gesture combination according to the sequence of the gesture recognition time;
and executing, when the user gesture combination matches a set gesture combination, the function corresponding to the set gesture combination.
7. The method of claim 6, wherein before the identifying of the plurality of user gestures by the photoelectric sensor, the method further comprises:
acquiring a tapping signal input by a user on the wearable device, and starting the photoelectric sensor under the condition that the tapping signal matches a set tapping signal; or,
starting the photoelectric sensor upon detecting a pressing operation by the user at a specified position of the wearable device.
8. The method of claim 7, wherein the photoelectric sensor comprises a camera, and wherein after the starting of the photoelectric sensor, the method further comprises:
acquiring image information through the camera;
judging, according to the image information, whether an obstruction exists in front of the camera;
and prompting the user to remove the obstruction when an obstruction exists in front of the camera.
9. The method according to any one of claims 6-8, wherein the executing of the function corresponding to the set gesture combination comprises:
determining a control instruction corresponding to the set gesture combination, and sending the control instruction to at least one mobile terminal connected with the wearable device, wherein the control instruction comprises control information, and the control instruction is used for instructing the at least one mobile terminal to execute the control information.
10. The method of claim 9, wherein after the sending of the control instruction to the at least one mobile terminal connected with the wearable device, the method further comprises:
receiving response information sent by the mobile terminal aiming at the control instruction;
converting the response information into audio, and playing the audio.
11. A function control apparatus, applied to a wearable device, comprising a gesture acquiring unit, a gesture processing unit and a function executing unit, wherein:
the gesture acquiring unit is used for identifying a plurality of user gestures, wherein at least two of the plurality of user gestures are from different hands;
the gesture processing unit is used for combining the plurality of user gestures into a user gesture combination according to the sequence of gesture recognition times;
and the function executing unit is used for executing, when the user gesture combination matches a set gesture combination, the function corresponding to the set gesture combination.
12. A wearable device, comprising a processor, a memory, a communication interface, and one or more programs stored in the memory and configured to be executed by the processor, the one or more programs comprising instructions for performing the steps in the method according to any one of claims 6-10.
13. A computer-readable storage medium storing a computer program for electronic data exchange, wherein the computer program causes a computer to perform the method according to any one of claims 6-10.
CN201810367319.6A 2018-04-23 2018-04-23 Function control method and relevant device Pending CN108595003A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810367319.6A CN108595003A (en) 2018-04-23 2018-04-23 Function control method and relevant device

Publications (1)

Publication Number Publication Date
CN108595003A true CN108595003A (en) 2018-09-28

Family

ID=63614754

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810367319.6A Pending CN108595003A (en) 2018-04-23 2018-04-23 Function control method and relevant device

Country Status (1)

Country Link
CN (1) CN108595003A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9141194B1 (en) * 2012-01-04 2015-09-22 Google Inc. Magnetometer-based gesture sensing with a wearable device
CN103516854A (en) * 2012-06-15 2014-01-15 三星电子株式会社 Terminal apparatus and control method thereof
CN105159709A (en) * 2015-08-26 2015-12-16 小米科技有限责任公司 Application starting method and apparatus, and intelligent terminal
CN106557672A (en) * 2015-09-29 2017-04-05 北京锤子数码科技有限公司 The solution lock control method of head mounted display and device
CN107465912A (en) * 2016-06-03 2017-12-12 中兴通讯股份有限公司 A kind of imaging difference detection method and device
CN106054650A (en) * 2016-07-18 2016-10-26 汕头大学 Novel intelligent household system and multi-gesture control method thereof
CN106502416A (en) * 2016-11-09 2017-03-15 华南理工大学广州学院 A kind of driving simulation system of Intelligent Recognition bimanual input and its control method

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111820896A (en) * 2020-07-24 2020-10-27 上海茂声智能科技有限公司 Active interaction intelligent body fat scale system
CN111880714A (en) * 2020-07-31 2020-11-03 Oppo广东移动通信有限公司 Page control method and related device
CN111880714B (en) * 2020-07-31 2022-05-17 Oppo广东移动通信有限公司 Page control method and related device
CN112115791A (en) * 2020-08-18 2020-12-22 北京嘀嘀无限科技发展有限公司 Image recognition method and device, electronic equipment and computer-readable storage medium
CN112256135A (en) * 2020-10-30 2021-01-22 Oppo广东移动通信有限公司 Equipment control method and device, equipment and storage medium
CN112541450A (en) * 2020-12-18 2021-03-23 Oppo广东移动通信有限公司 Context awareness function control method and related device
CN112799572A (en) * 2021-01-28 2021-05-14 维沃移动通信有限公司 Control method, control device, electronic equipment and storage medium
CN113407028A (en) * 2021-06-24 2021-09-17 上海科技大学 Multi-user motion gesture control method and device, intelligent sound box and medium
CN113407028B (en) * 2021-06-24 2023-07-18 上海科技大学 Multi-user motion gesture control method and device, intelligent sound box and medium
WO2023249911A1 (en) * 2022-06-21 2023-12-28 Apple Inc. Occlusion classification and feedback

Similar Documents

Publication Publication Date Title
CN108710615B (en) Translation method and related equipment
CN108595003A (en) Function control method and relevant device
CN110413134B (en) Wearing state detection method and related equipment
EP3562130B1 (en) Control method at wearable apparatus and related apparatuses
CN108886653B (en) Earphone sound channel control method, related equipment and system
CN106447836B (en) A kind of door control terminal binding method and relevant device
US10630826B2 (en) Information processing device
CN107978316A (en) The method and device of control terminal
CN109561420B (en) Emergency help-seeking method and related equipment
CN106888327B (en) Voice playing method and device
CN109067965B (en) Translation method, translation device, wearable device and storage medium
CN107743178B (en) Message playing method and mobile terminal
CN108897516B (en) Wearable device volume adjustment method and related product
CN108024128A (en) Control method, device, terminal device and the storage medium that Bluetooth music plays
CN108566221B (en) Call control method and related equipment
CN108600887B (en) Touch control method based on wireless earphone and related product
CN108668018B (en) Mobile terminal, volume control method and related product
CN106126171B (en) A kind of sound effect treatment method and mobile terminal
CN108810261B (en) Antenna switching method in call and related product
CN109040425B (en) Information processing method and related product
CN110415495B (en) Equipment loss processing method and related equipment
CN107645604B (en) Call processing method and mobile terminal
CN117881008A (en) Bluetooth connection method, device, equipment and storage medium
CN117751585A (en) Control method and device of intelligent earphone, electronic equipment and storage medium
CN115348349A (en) Application program control method and related device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20180928