CN108429956B - Wireless earphone, control operation method and related product - Google Patents

Wireless earphone, control operation method and related product

Info

Publication number
CN108429956B
CN108429956B (application CN201810385548.0A)
Authority
CN
China
Prior art keywords
ear
facial
user
determining
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810385548.0A
Other languages
Chinese (zh)
Other versions
CN108429956A (en)
Inventor
郭富豪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201810385548.0A priority Critical patent/CN108429956B/en
Publication of CN108429956A publication Critical patent/CN108429956A/en
Application granted granted Critical
Publication of CN108429956B publication Critical patent/CN108429956B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00Details of transducers, loudspeakers or microphones
    • H04R1/10Earpieces; Attachments therefor ; Earphones; Monophonic headphones
    • H04R1/1041Mechanical or electronic switches, or control elements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2201/00Details of transducers, loudspeakers or microphones covered by H04R1/00 but not provided for in any of its subgroups
    • H04R2201/10Details of earpieces, attachments therefor, earphones or monophonic headphones covered by H04R1/10 but not provided for in any of its subgroups

Abstract

The embodiments of the application disclose a wireless earphone, a control operation method, and related products. The method includes: when a function wake-up operation for the wireless earphone is detected, acquiring a facial action of the user; determining a preset function instruction matching the facial action, the preset function instructions including volume adjustment instructions and power on/off control instructions; and executing the control operation corresponding to the preset function instruction. The embodiments enable hands-free operation of the wireless earphone, which helps improve the convenience and intelligence of wireless earphone control.

Description

Wireless earphone, control operation method and related product
Technical Field
The application relates to the technical field of wireless earphones, in particular to a wireless earphone, a control operation method and a related product.
Background
With the widespread use of electronic devices (e.g., smartphones), demand for the external devices they support (e.g., earphones) has grown, and earphones of various kinds have become common devices for listening to media. Wireless earphones have become dominant because the cords of wired earphones are easily damaged, leading to short service lives and high replacement costs.
At present, because a wireless headset has no headset cord, it cannot be controlled by an in-line remote; moreover, there is no physical connection between the wireless headset and the electronic device, and the two occupy different positions. How to control the various functions of the headset quickly and conveniently has therefore become a problem to be solved urgently.
Disclosure of Invention
The embodiment of the application provides a wireless earphone, a control operation method and a related product, so that manual operation of the wireless earphone is avoided, and convenience and intelligence of wireless earphone control are improved.
In a first aspect, embodiments of the present application provide a wireless headset, including a processor, and a motion sensor and a memory connected to the processor, wherein,
the memory is used for storing preset function instructions, and the preset function instructions comprise volume adjusting instructions and power on/off control instructions;
the processor is configured to acquire a facial action of the user through the motion sensor when a function wake-up operation for the wireless headset is detected, to determine the preset function instruction matching the facial action, and to execute the control operation corresponding to the preset function instruction.
In a second aspect, an embodiment of the present application provides a control operation method, which is applied to a wireless headset, and the method includes:
when a function wake-up operation for the wireless headset is detected, acquiring a facial action of a user;
determining a preset function instruction matched with the facial action, wherein the preset function instruction comprises a volume adjusting instruction and a power-on/power-off control instruction;
and executing the control operation corresponding to the preset functional instruction.
In a third aspect, an embodiment of the present application provides a control operation method, which is applied to an electronic device, where the electronic device is communicatively connected to a wireless headset, and the method includes:
when user ear information sent by the wireless earphone is received, determining facial actions of a user according to the ear information, wherein the ear information is information acquired when the wireless earphone detects function awakening operation aiming at the wireless earphone;
determining a preset function instruction matched with the facial action, wherein the preset function instruction comprises a volume adjusting instruction and a power-on/power-off control instruction;
and executing corresponding control operation according to the preset function instruction to control the wireless earphone.
In a fourth aspect, an embodiment of the present application provides a control operation device, which is applied to a wireless headset, and includes an obtaining unit, a determining unit, and an executing unit, wherein,
the acquisition unit is used for acquiring the facial action of a user when the function awakening operation aiming at the wireless earphone is detected;
the determining unit is used for determining a preset function instruction matched with the facial action, and the preset function instruction comprises a volume adjusting instruction and an on/off control instruction;
and the execution unit is used for executing the control operation corresponding to the preset functional instruction.
In a fifth aspect, the present invention provides a control operation device, which is applied to an electronic device, the electronic device being in communication connection with a wireless headset, the control operation device including a processing unit, a determining unit and an executing unit, wherein,
the processing unit is used for determining the facial action of a user according to the ear information when receiving the ear information of the user sent by the wireless earphone, wherein the ear information is acquired when the wireless earphone detects a function awakening operation aiming at the wireless earphone;
the determining unit is used for determining a preset function instruction matched with the facial action, and the preset function instruction comprises a volume adjusting instruction and an on/off control instruction;
and the execution unit is used for executing corresponding control operation according to the preset function instruction to control the wireless earphone.
In a sixth aspect, embodiments of the present application provide a wireless headset, including a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and the programs include instructions for performing the steps of any of the methods of the second aspect of the embodiments of the present application.
In a seventh aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and the program includes instructions for executing the steps in any of the methods in the third aspect of the embodiment of the present application.
In an eighth aspect, the present application provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program for electronic data exchange, where the computer program enables a computer to perform part or all of the steps described in any of the methods of the second aspect of the present application or to perform part or all of the steps described in any of the methods of the third aspect of the present application.
In a ninth aspect, embodiments of the present application provide a computer program product, wherein the computer program product comprises a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to perform some or all of the steps as described in any of the methods of the second aspect of the embodiments of the present application or to perform some or all of the steps as described in any of the methods of the third aspect of the embodiments of the present application. The computer program product may be a software installation package.
It can be seen that, in the embodiments of the present application, the wireless headset first acquires the user's facial action when a function wake-up operation for the headset is detected, then determines the preset function instruction matching that action, where the preset function instructions include volume adjustment instructions and power on/off control instructions, and finally executes the corresponding control operation. By determining the matching preset instruction from the acquired facial action and performing the corresponding volume adjustment or power on/off control, the wireless headset enables hands-free operation. Moreover, because the facial action is derived from ear information acquired by the headset itself, the electronic device does not need to capture facial actions with a camera, which improves the convenience and intelligence of wireless headset control. In addition, the facial action is acquired only after a function wake-up operation is detected, which avoids misoperation caused by the user's unconscious facial movements and further improves the intelligence of the control operation.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a schematic structural diagram of a wireless headset according to an embodiment of the present disclosure;
FIG. 2 is a schematic flow chart diagram illustrating a method of controlling operations according to an embodiment of the present disclosure;
fig. 3 is a schematic diagram of a network architecture for communication between a wireless headset and an electronic device according to an embodiment of the present application;
FIG. 4 is a schematic flow chart diagram of another control operation method provided by the embodiment of the application;
FIG. 5 is a schematic diagram illustrating an interaction flow of a method for controlling operations according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of a wireless headset according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 8 is a block diagram of functional units of a control operation device according to an embodiment of the present application;
fig. 9 is a block diagram of functional units of another control operation device provided in the embodiment of the present application;
fig. 10 is a schematic structural diagram of another wireless headset according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," and the like in the description and claims of the present application and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The electronic device according to the embodiment of the present application may include various handheld devices, vehicle-mounted devices, wearable devices, computing devices or other processing devices connected to a wireless modem, and various forms of User Equipment (UE), Mobile Stations (MS), terminal devices (terminal device), and the like. For convenience of description, the above-mentioned apparatuses are collectively referred to as electronic devices.
The following describes embodiments of the present application in detail.
Referring to fig. 1, fig. 1 is a schematic structural diagram of a wireless headset 100 according to an embodiment of the present application. The wireless headset 100 includes a processor 110, and a motion sensor 120 and a memory 130 coupled to the processor 110; the wireless headset 100 further includes a speaker 140, a microphone 150, and a radio frequency system 160 coupled to the processor 110, wherein,
the memory 130 is configured to store preset function instructions, where the preset function instructions include a volume adjustment instruction and an on/off control instruction;
the processor 110 is configured to acquire a facial action of the user through the motion sensor 120 when a function wake-up operation for the wireless headset is detected, to determine the preset function instruction matching the facial action, and to execute the control operation corresponding to the preset function instruction.
The wireless headset 100 may be used alone or in pairs, which is not limited herein. When used in pairs, the wireless headset 100 includes a first wireless headset and a second wireless headset, each having the same functional structure as the wireless headset 100.
The wireless headset 100 may include one or more motion sensors 120, which are configured to acquire dynamic data of the user's ear so that the processor 110 can determine the user's facial movement from that data. "Motion sensor" is therefore a general term for any sensor capable of acquiring such dynamic ear data; the motion sensors 120 may include, for example, a speed sensor, an acceleration sensor, an infrared sensor, an ultrasonic sensor, and the like, which is not limited herein.
The speaker 140 and the microphone 150 are necessary components of the wireless headset, the speaker 140 is used for playing audio data for a user to listen to, and the microphone 150 is used for collecting voice data of the user, converting the voice data into an electric signal and sending the electric signal.
The processor 110 includes an application processor and a baseband processor, the processor 110 is a control center of the wireless headset 100, connects various parts of the whole wireless headset by using various interfaces and lines, and performs various functions of the wireless headset 100 and processes data by running or executing software programs and/or modules stored in the memory 130 and calling data stored in the memory 130, thereby performing overall monitoring of the wireless headset 100. The application processor mainly processes an operating system, audio data and the like, and the baseband processor mainly processes wireless communication.
The memory 130 may be used for storing software programs and modules, and the processor 110 executes the various functional applications and data processing of the wireless headset 100 by running the software programs and modules stored in the memory 130. The memory 130 may mainly include a program storage area and a data storage area: the program storage area may store an operating system, an application program required for at least one function, and the like; the data storage area may store data created during use of the wireless headset, and the like. Further, the memory 130 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
In a specific implementation, the electronic device may control the motion sensor 120 to operate in a low power mode in a static state and in a high frequency mode in a moving state, so as to reduce power consumption.
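The static/moving duty-cycling mentioned here can be sketched as a small sampling-rate scheduler. This is an illustrative sketch only: the sampling rates, the movement threshold, and the class name are assumptions, not values given in the patent.

```python
# Illustrative sketch of duty-cycling a motion sensor: low-power sampling while
# the ear is static, high-frequency sampling once movement is detected.
# All numeric values are assumed for illustration.

LOW_POWER_HZ = 5     # assumed sampling rate in the static state
HIGH_FREQ_HZ = 100   # assumed sampling rate in the moving state

class MotionSensorScheduler:
    def __init__(self) -> None:
        self.rate_hz = LOW_POWER_HZ  # start in the low-power mode

    def on_sample(self, magnitude: float, moving_threshold: float = 0.1) -> int:
        """Update the sampling rate based on the latest motion magnitude."""
        self.rate_hz = HIGH_FREQ_HZ if magnitude > moving_threshold else LOW_POWER_HZ
        return self.rate_hz
```

In practice the rate change would be pushed to the sensor hardware; here only the bookkeeping is shown.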
It can be seen that, in the embodiments of the present application, the wireless headset first acquires the user's facial action when a function wake-up operation for the headset is detected, then determines the preset function instruction matching that action, where the preset function instructions include volume adjustment instructions and power on/off control instructions, and finally executes the corresponding control operation. By determining the matching preset instruction from the acquired facial action and performing the corresponding volume adjustment or power on/off control, the wireless headset enables hands-free operation. Moreover, because the facial action is derived from ear information acquired by the headset itself, the electronic device does not need to capture facial actions with a camera, which improves the convenience and intelligence of wireless headset control. In addition, the facial action is acquired only after a function wake-up operation is detected, which avoids misoperation caused by the user's unconscious facial movements and further improves the intelligence of the control operation.
In one possible example, in terms of the acquiring facial movements of the user, the motion sensor 120 is specifically configured to: when detecting the ear movement of a user, acquiring the posture change information of the ear outline of the user;
the processor 110 is specifically configured to: determining the facial motion of the user according to the posture change information of the ear contour.
In this possible example, in determining the facial movement of the user from the pose change information of the ear profile, the processor 110 is specifically configured to: determine the position parameter of the ear contour corresponding to the pose change information of the ear contour; obtain the face region matching the position parameter; and determine the facial movement of the user from that face region.
In one possible example, the motion sensor 120 comprises an infrared sensor, which, in terms of the acquisition of the facial movements of the user, is specifically configured to: when detecting the ear movement of a user, acquiring the target ear canal shape of the user;
the processor 110 is specifically configured to: query, using the target ear canal shape as the query key, the mapping relation between preset ear canal shapes and facial actions stored in the memory 130, and determine the facial action corresponding to the target ear canal shape.
In one possible example, the wireless headset 100 comprises a first wireless headset and a second wireless headset, and in determining the preset function instruction matching the facial movement, the processor 110 is specifically configured to: determine the degree to which a first facial action acquired by the first wireless headset matches a second facial action acquired by the second wireless headset; and, when the matching degree is detected to exceed a preset matching-degree threshold stored in the memory 130, determine the preset function instruction, matched with the first facial action, for controlling the wireless headset.
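One plausible way to compute such a matching degree is to compare feature vectors extracted from the two earphones' sensor data, for example by cosine similarity against the stored threshold. This is a hedged sketch: the vector representation, the function names, and the 0.9 threshold are assumptions, not specified by the patent.

```python
import math

def matching_degree(first_action: list[float], second_action: list[float]) -> float:
    """Cosine similarity between two facial-action feature vectors.
    The feature-vector representation is an assumption for illustration."""
    dot = sum(a * b for a, b in zip(first_action, second_action))
    norm = math.sqrt(sum(a * a for a in first_action)) * \
           math.sqrt(sum(b * b for b in second_action))
    return dot / norm if norm else 0.0

def matched(first_action: list[float], second_action: list[float],
            threshold: float = 0.9) -> bool:
    """True when the two earphones agree on the facial action, i.e. the
    matching degree exceeds the preset threshold (assumed value 0.9)."""
    return matching_degree(first_action, second_action) > threshold
```

With this design, a control operation fires only when both earphones observe a consistent facial action, which filters out one-sided sensor noise.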
Referring to fig. 2, fig. 2 is a flowchart illustrating a control operation method according to an embodiment of the present application, applied to the wireless headset shown in fig. 1, where the control operation method includes:
s201, when a wireless earphone detects a function awakening operation aiming at the wireless earphone, acquiring a facial action of a user;
the function awakening operation is a trigger operation for awakening the function of executing the control operation through the facial action, the wireless headset is indicated to acquire the facial action of the user currently through the function awakening operation, and then the corresponding control operation is executed through the facial action, so that the error control caused by the daily facial action of the user can be avoided.
For example, when the wireless headset comprises a first wireless headset and a second wireless headset, and the first wireless headset detects a tap operation by the user, the first wireless headset acquires the facial action and determines the corresponding control operation from it to control the first wireless headset.
The facial action includes the user's facial expression, a position change of any part of the face, and the like; for example, the facial action may be blinking, opening the mouth wide, smiling, frowning, and so on, which is not limited herein.
The facial action of the user may be acquired in various ways: for example, different facial expressions may be determined from slight movements of the user's ear, or from changes in the shape and size of the user's ear canal, which is not limited herein.
S202, the wireless earphone determines a preset function instruction matched with the facial action, wherein the preset function instruction comprises a volume adjusting instruction and a power-on/power-off control instruction;
the wireless headset stores the corresponding relationship between facial movements and function instructions, and different facial movements correspond to different function instructions, for example, the volume increasing instruction corresponds to a big mouth, the volume is increased by 5% when the big mouth is opened, the volume decreasing instruction corresponds to blinking, the volume is decreased by 8% when the blinking is performed, the power-on/power-off control instruction corresponds to smiling, and the like, which is not limited herein.
S203, the wireless earphone executes the control operation corresponding to the preset function instruction.
It can be seen that, in the embodiments of the present application, the wireless headset first acquires the user's facial action when a function wake-up operation for the headset is detected, then determines the preset function instruction matching that action, where the preset function instructions include volume adjustment instructions and power on/off control instructions, and finally executes the corresponding control operation. By determining the matching preset instruction from the acquired facial action and performing the corresponding volume adjustment or power on/off control, the wireless headset enables hands-free operation. Moreover, because the facial action is derived from ear information acquired by the headset itself, the electronic device does not need to capture facial actions with a camera, which improves the convenience and intelligence of wireless headset control. In addition, the facial action is acquired only after a function wake-up operation is detected, which avoids misoperation caused by the user's unconscious facial movements and further improves the intelligence of the control operation.
In one possible example, the obtaining facial movements of the user includes:
when detecting the ear movement of a user, acquiring the posture change information of the ear outline of the user;
determining the facial motion of the user according to the posture change information of the ear contour.
The wireless earphone detects ear movement through a speed sensor among its motion sensors; when ear movement is detected by the speed sensor, the posture change information of the user's ear contour is acquired.
The posture change information of the ear contour includes the ear contour position corresponding to the change, for example, the posture of the earlobe changes, the top of the ear contour changes, or the posture of the whole ear changes. It also includes the manner of the change, for example, the whole ear stretching downwards because the mouth opens wide, or the area above the ear shaking continuously because the eyes blink repeatedly; this is not limited herein.
The facial movement of the user may be determined from the posture change information of the ear contour in various ways: for example, the wireless headset may prestore a correspondence between posture change information and facial movements and determine the facial movement from that correspondence, or it may determine the face region corresponding to the posture change information and determine the facial movement from that region; this is not limited herein.
Therefore, in this example, when the wireless earphone detects ear movement, it acquires the posture change information of the user's ear contour and determines the facial action from it, so the user can operate the wireless earphone anytime, anywhere, and in any scenario, which improves the convenience of controlling it.
In this possible example, the determining the facial motion of the user according to the pose change information of the ear profile includes:
determining position parameters of the ear contour corresponding to the posture change information of the ear contour;
acquiring a face region matched with the position parameters;
determining the facial motion of the user from the facial region.
The position parameter of the ear contour is the location of the change corresponding to the posture change information, such as the earlobe, the top of the ear, or the ear as a whole, which is not limited herein.
Different position parameters correspond to different areas of the face; for example, the earlobe corresponds to the mouth area below the nose, and the top of the ear corresponds to the eye area, and so on.
Each face area corresponds to one or more facial actions. For example, the top of the ear corresponds to the eye area, whose candidate facial expressions may be blinking and raising the eyebrows; whether the action is blinking or raising the eyebrows can then be determined from the manner of change in the posture change information.
For example, suppose the position parameter corresponding to the posture change information is the earlobe. The face region matching the earlobe is the mouth region, so the user's facial action is either opening the mouth wide or smiling; the manner of change in the posture change information then determines that it is opening the mouth wide. The preset function instruction corresponding to opening the mouth wide is the volume-increase instruction, so the number of mouth-opens is obtained, the volume adjustment value is determined from that count, and the volume is increased accordingly.
Therefore, in this example, the wireless headset determines the face area from the position parameter of the ear contour corresponding to the posture change information, which narrows the range of candidate facial actions; determining the facial action from that face area helps increase the speed of facial action determination and improves the timeliness of the wireless headset's control operation.
In one possible example, the obtaining facial movements of the user includes:
when detecting the ear movement of a user, acquiring the target ear canal shape of the user;
and querying a preset mapping relation between ear canal shapes and facial actions with the target ear canal shape as the query key, to determine the facial action corresponding to the target ear canal shape.
The wireless earphone can acquire the shape of the user's ear canal through a thermal imaging system based on an infrared sensor. When it detects ear movement, the wireless earphone continuously acquires the ear canal shape multiple times and determines the target ear canal shape from the change trend of those samples: the target ear canal shape is the final form of the change, that is, the ear canal shape just before the user's ear canal returns to its usual shape.
The wireless earphone stores the preset mapping relation between ear canal shapes and facial actions; after the target ear canal shape is determined, this mapping relation is queried with the target ear canal shape to obtain the corresponding facial action.
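The selection of the target ear canal shape from repeated samples, followed by the mapping lookup, can be sketched as below. The shape labels, the `resting` sentinel, and the shape-to-action mapping are all assumptions for illustration; the patent only specifies that the target shape is the last changed form before the canal returns to its usual shape.

```python
# Hypothetical mapping between ear canal shapes and facial actions.
SHAPE_TO_ACTION = {"narrowed": "smile", "widened": "open_mouth_wide"}

def target_shape(samples, resting="resting"):
    """samples: chronological ear canal shapes sampled during ear movement.
    The target is the last non-resting shape, i.e. the final form of the
    change before the canal returns to its usual shape."""
    changed = [s for s in samples if s != resting]
    return changed[-1] if changed else resting

def facial_action(samples):
    """Query the preset shape-to-action mapping with the target shape."""
    return SHAPE_TO_ACTION.get(target_shape(samples))
```

For example, a sample run `["resting", "narrowed", "narrowed", "resting"]` yields the target shape `"narrowed"`, which the mapping resolves to a smile.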
It can be seen that, in this example, the wireless earphone determines the facial action from the user's ear canal shape. Because the acquisition of the ear canal shape is free of interference factors and captures the overall shape of the ear, determining the facial action from the ear canal shape helps improve the accuracy of facial action acquisition and thus the accuracy of the wireless earphone's control operation.
In one possible example, the wireless headset includes a first wireless headset and a second wireless headset, and the determining the preset function instructions that match the facial movements includes:
determining a matching degree of a first facial action acquired by the first wireless earphone and a second facial action acquired by the second wireless earphone;
and when the matching degree is detected to be greater than a preset matching degree threshold, determining the preset function instruction that matches the first facial action and controls the wireless earphone.
The preset matching degree threshold may be, for example, 80% or 90%, which is not limited herein.
The preset function instruction here is a control operation that the first wireless earphone and the second wireless earphone need to execute at the same time, for example turning the wireless earphone on or off. When the first wireless earphone is worn on the left ear, the second wireless earphone on the right ear, and the user's facial action is a smile, both the first facial action acquired by the first wireless earphone through the left ear and the second facial action acquired by the second wireless earphone through the right ear are facial actions for the user's smile; when the matching degree of the first and second facial actions is greater than the preset matching degree threshold, the power-off instruction corresponding to the smile is executed.
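The agreement check between the two earphones can be sketched as follows. This is a sketch under assumptions: the patent does not say how the matching degree is computed, so cosine similarity over assumed feature vectors stands in for it here, with the 0.8 threshold taken from the 80% example above.

```python
import math

def matching_degree(first, second):
    """Cosine similarity between two facial-action feature vectors
    (an assumed stand-in for the patent's matching degree)."""
    dot = sum(a * b for a, b in zip(first, second))
    norm = (math.sqrt(sum(a * a for a in first))
            * math.sqrt(sum(b * b for b in second)))
    return dot / norm if norm else 0.0

def shared_instruction(first, second, instruction, threshold=0.8):
    """Return the shared preset instruction only when both earphones'
    facial actions agree above the preset matching degree threshold."""
    return instruction if matching_degree(first, second) > threshold else None
```

With identical vectors the power-off instruction fires; with orthogonal vectors it is suppressed, which is the misoperation guard this example describes.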
It can be seen that, in this example, the wireless headset acquires the first facial action and the second facial action through the two earphones respectively, then obtains the matching degree between them, and executes the corresponding control operation only when the matching degree is greater than the preset matching degree threshold, which helps further improve the accuracy of the wireless headset's control operation and avoid misoperation.
In one possible example, the wireless headset includes a first wireless headset and a second wireless headset, and the determining the preset function instructions that match the facial movements includes:
when the facial action is acquired by the first wireless earphone, determining the preset function instruction that matches the facial action and controls the first wireless earphone; or,
when the facial action is acquired by the second wireless earphone, determining the preset function instruction that matches the facial action and controls the second wireless earphone.
The facial action acquired by the first wireless earphone is used to control the first wireless earphone, and the facial action acquired by the second wireless earphone is used to control the second wireless earphone. The two earphones can acquire facial actions at the same time: the first wireless earphone acquires the facial action on the side it is worn, and the second wireless earphone acquires the facial action on its side. Because the acquired facial actions may differ, the control operations executed by the first wireless earphone and the second wireless earphone may be different or the same.
For example, if the first wireless earphone is worn on the left ear and the second wireless earphone on the right ear, and the first wireless earphone acquires the facial action of blinking the left eye while the second wireless earphone acquires the action of raising the right eyebrow, then the first wireless earphone can perform the control operation matching the blink and the second wireless earphone can perform the control operation matching the eyebrow raise.
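The per-side dispatch in the example above can be sketched minimally as below. The action-to-instruction tables are illustrative assumptions; the patent does not bind particular actions to particular instructions.

```python
# Hypothetical per-side action tables: each earphone resolves the facial
# action detected on its own side against its own instruction table.
LEFT_ACTIONS = {"blink": "previous_track"}
RIGHT_ACTIONS = {"raise_eyebrow": "next_track"}

def dispatch(side, action):
    """Return the instruction the earphone on `side` should execute for
    `action`, or None if that side has no matching preset instruction."""
    table = LEFT_ACTIONS if side == "left" else RIGHT_ACTIONS
    return table.get(action)
```

Because each side consults only its own table, a left-eye blink and a right-eyebrow raise can trigger different operations simultaneously, which is the differentiated control this example describes.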
It can be seen that, in this example, the first wireless headset and the second wireless headset can be controlled differentially through the different facial actions each acquires, which helps improve the flexibility and practicability of the wireless headset's control operation.
Referring to fig. 3, fig. 3 is a schematic diagram of a network architecture for communication between an electronic device and a wireless headset according to an embodiment of the present disclosure. The network architecture shown in fig. 3 may include a wireless headset 100 and an electronic device 200 as shown in fig. 1. The wireless headset 100 may be communicatively coupled to the electronic device 200 via a wireless network (e.g., Bluetooth, infrared, or WiFi). It should be noted that the number of wireless headsets 100 may be one or two, which is not limited in the embodiment of the present application. In the network architecture shown in fig. 3, when the wireless headset 100 detects that a function of the wireless headset 100 is woken up, it acquires the ear information of the user and sends it to the electronic device 200. After receiving the ear information sent by the wireless headset 100, the electronic device 200 determines the facial action of the user according to the ear information, determines a preset function instruction matched with the facial action, the preset function instruction including a volume adjustment instruction and a power-on/off control instruction, and executes the corresponding control operation according to the preset function instruction to control the wireless headset.
Referring to fig. 4, fig. 4 is a flowchart illustrating a method for controlling an operation according to an embodiment of the present application, applied to an electronic device, where the electronic device is communicatively connected to the wireless headset shown in fig. 1, as shown, the method includes:
S401, when receiving the user's ear information sent by the wireless earphone, the electronic device determines the facial action of the user according to the ear information, wherein the ear information is information acquired when the wireless earphone detects a function wake-up operation for the wireless earphone.
S402, the electronic device determines a preset function instruction matched with the facial action, wherein the preset function instruction comprises a volume adjusting instruction and a power-on/power-off control instruction.
And S403, the electronic device executes corresponding control operation according to the preset function instruction to control the wireless earphone.
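Steps S401 to S403 on the electronic-device side can be sketched as a single handler. This is a minimal sketch: the ear-information encoding, the lookup tables, and the `send_to_headset` callback are placeholders standing in for the device's actual decoding and transport, which the patent does not specify.

```python
# Hypothetical lookup tables standing in for the device's recognition logic.
ACTION_BY_EAR_INFO = {"earlobe_down": "open_mouth_wide", "ear_top_up": "blink"}
INSTRUCTION_BY_ACTION = {"open_mouth_wide": "volume_up", "blink": "power_off"}

def handle_ear_info(ear_info, send_to_headset):
    """S401-S403: determine the facial action from the received ear
    information, match the preset instruction, and execute it by sending
    the control operation back to the wireless headset."""
    action = ACTION_BY_EAR_INFO.get(ear_info)        # S401: determine facial action
    instruction = INSTRUCTION_BY_ACTION.get(action)  # S402: match preset instruction
    if instruction:
        send_to_headset(instruction)                 # S403: execute control operation
    return instruction
```

A usage example: collecting the sent instructions in a list shows `"earlobe_down"` resolving through both tables to a volume-up command.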
It can be seen that, in the embodiment of the present application, the electronic device first determines the user's facial action according to the ear information when receiving the user's ear information sent by the wireless headset, the ear information being information acquired when the wireless headset detects a function wake-up operation for the wireless headset; secondly, it determines the preset function instruction matched with the facial action, the preset function instruction including a volume adjustment instruction and a power-on/off control instruction; finally, it executes the corresponding control operation according to the preset function instruction to control the wireless headset. Thus, the electronic device determines the facial action from the ear information acquired by the wireless headset, determines the matched preset function instruction from the facial action, and executes the corresponding volume adjustment or power on/off operation, which helps realize hands-free operation of the wireless headset. Moreover, because the facial action is obtained from the ear information acquired by the wireless headset, the electronic device does not need to start a camera, which helps reduce the power consumption of the electronic device and improve both the convenience of controlling the wireless headset and the intelligence of the wireless headset. In addition, the user's ear information is acquired only when a function wake-up operation for the wireless headset is detected, which helps avoid misoperation of the wireless headset and the electronic device caused by unconscious facial actions of the user and improves the intelligence of the electronic device's control operation.
In one possible example, the ear information includes posture change information of an ear contour of the user acquired by the wireless headset when detecting ear motion, and the determining the facial action of the user according to the ear information includes:
determining position parameters of the ear contour corresponding to the posture change information of the ear contour;
acquiring a face region matched with the position parameters;
determining the facial motion of the user from the facial region.
It can be seen that, in this example, the electronic device acquires the posture change information of the user's ear contour sent by the wireless headset and determines the facial action from that information, so the user can operate the wireless headset anytime, anywhere, and in any scene, which helps improve the convenience of the wireless headset's control operation. Moreover, determining the face area from the position parameter of the ear contour corresponding to the posture change information narrows the range of candidate facial actions; determining the facial action from that face area helps increase the speed of facial action determination and improves the timeliness of the wireless headset's control operation.
In one possible example, the ear information includes a target ear canal shape of the user obtained by the wireless headset when detecting ear motion, and the determining the facial actions of the user according to the ear information includes:
and querying a preset mapping relation between ear canal shapes and facial actions with the target ear canal shape as the query key, to determine the facial action corresponding to the target ear canal shape.
It can be seen that, in this example, the electronic device determines the facial action from the ear canal shape sent by the wireless earphone. Because the acquisition of the ear canal shape is free of interference factors and captures the overall shape of the ear, determining the facial action from the ear canal shape helps improve the accuracy of facial action acquisition and thus the accuracy of the wireless earphone's control operation.
In one possible example, the wireless headset includes a first wireless headset and a second wireless headset, and the determining the preset function instructions that match the facial movements includes:
determining a matching degree between a first facial action, which is the facial action determined according to the ear information sent by the first wireless headset, and a second facial action, which is the facial action determined according to the ear information sent by the second wireless headset;
and when the matching degree is detected to be greater than a preset matching degree threshold, determining the preset function instruction that matches the first facial action and controls the wireless earphone.
It can be seen that, in this example, the electronic device obtains the first facial action and the second facial action through the two earphones respectively, then obtains the matching degree between them, and executes the corresponding control operation only when the matching degree is greater than the preset matching degree threshold, which helps further improve the accuracy of the wireless earphone's control operation and avoid misoperation.
In one possible example, the wireless headset includes a first wireless headset and a second wireless headset, and the determining the preset function instructions that match the facial movements includes:
when the user's ear information is determined to have been sent by the first wireless earphone, determining the preset function instruction that matches the facial action and controls the first wireless earphone; or,
when the user's ear information is determined to have been sent by the second wireless earphone, determining the preset function instruction that matches the facial action and controls the second wireless earphone.
As can be seen, in this example, the electronic device can differentially control the first wireless headset and the second wireless headset through the different facial actions each acquires, which helps improve the flexibility and practicability of the wireless headset's control operation.
Referring to fig. 5, fig. 5 is an interaction flowchart illustrating a control operation method according to an embodiment of the present disclosure, and is applied to the wireless headset and the electronic device shown in fig. 1. As shown in the figure, the control operation method comprises the following steps:
S501, when the wireless earphone detects a function wake-up operation for the wireless earphone, it acquires the ear information of the user and sends the ear information to the electronic device.
S502, the electronic device receives the ear information sent by the wireless earphone, and determines the facial action of the user according to the ear information.
S503, the electronic device determines a preset function instruction matched with the facial action, wherein the preset function instruction comprises a volume adjusting instruction and a power-on/power-off control instruction.
S504, the electronic device executes corresponding control operation according to the preset function instruction to control the wireless earphone.
It can be seen that, in the embodiment of the present application, the electronic device first determines the user's facial action according to the ear information when receiving the user's ear information sent by the wireless headset, the ear information being information acquired when the wireless headset detects a function wake-up operation for the wireless headset; secondly, it determines the preset function instruction matched with the facial action, the preset function instruction including a volume adjustment instruction and a power-on/off control instruction; finally, it executes the corresponding control operation according to the preset function instruction to control the wireless headset. Thus, the electronic device determines the facial action from the ear information acquired by the wireless headset, determines the matched preset function instruction from the facial action, and executes the corresponding volume adjustment or power on/off operation, which helps realize hands-free operation of the wireless headset. Moreover, because the facial action is obtained from the ear information acquired by the wireless headset, the electronic device does not need to start a camera, which helps reduce the power consumption of the electronic device and improve both the convenience of controlling the wireless headset and the intelligence of the wireless headset. In addition, the user's ear information is acquired only when a function wake-up operation for the wireless headset is detected, which helps avoid misoperation of the wireless headset and the electronic device caused by unconscious facial actions of the user and improves the intelligence of the electronic device's control operation.
Referring to fig. 6 in accordance with the embodiment shown in fig. 2, fig. 6 is a schematic structural diagram of a wireless headset according to an embodiment of the present application, and as shown in the figure, the wireless headset includes a processor, a memory, a communication interface, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the processor, and the programs include instructions for performing the following steps;
when a function wake-up operation for the wireless headset is detected, acquiring a facial action of a user;
determining a preset function instruction matched with the facial action, wherein the preset function instruction comprises a volume adjusting instruction and a power-on/power-off control instruction;
and executing the control operation corresponding to the preset functional instruction.
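The three program steps above can be sketched as a small headset-side controller. This is a hedged sketch: `read_action` and `execute` are stubs standing in for the ear-information sensor and the control layer, and the instruction table is an assumption, since the patent names only volume adjustment and power on/off as instruction types.

```python
# Hypothetical table of preset function instructions.
INSTRUCTIONS = {"open_mouth_wide": "volume_up", "smile": "power_off"}

class HeadsetController:
    def __init__(self, read_action, execute):
        self.read_action = read_action  # stub: acquires the facial action from ear info
        self.execute = execute          # stub: performs the control operation

    def on_wake(self):
        """Run the three program steps after a function wake-up operation:
        acquire the facial action, match the preset instruction, execute it."""
        action = self.read_action()
        instruction = INSTRUCTIONS.get(action)
        if instruction:
            self.execute(instruction)
        return instruction
```

Wiring the stubs to a fixed action and a list shows the flow: a detected smile is matched to the power-off instruction and executed.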
It can be seen that, in the embodiment of the present application, the wireless headset first acquires the facial action of the user when detecting a function wake-up operation for the wireless headset, then determines the preset function instruction matched with the facial action, the preset function instruction including a volume adjustment instruction and a power-on/off control instruction, and finally executes the control operation corresponding to the preset function instruction. Thus, the wireless earphone determines the matched preset function instruction from the acquired facial action and executes the corresponding volume adjustment or power on/off operation, which helps realize hands-free operation of the wireless earphone. Moreover, the facial action is obtained from the ear information acquired by the wireless earphone itself, without the electronic device acquiring it through a camera, which helps improve the convenience of the wireless earphone's control operation and the intelligence of the wireless earphone. In addition, the facial action of the user is acquired only when a function wake-up operation for the wireless earphone is detected, which helps avoid misoperation of the wireless earphone caused by unconscious facial actions of the user and improves the intelligence of the wireless earphone's control operation.
In one possible example, in terms of the obtaining facial movements of the user, the instructions in the program are specifically configured to: when detecting the ear movement of a user, acquiring the posture change information of the ear outline of the user; and determining the facial motion of the user according to the posture change information of the ear contour.
In this possible example, in the determining the facial action of the user according to the posture change information of the ear contour, the instructions in the program are specifically configured to: determine position parameters of the ear contour corresponding to the posture change information of the ear contour; acquire a face region matched with the position parameters; and determine the facial action of the user from the face region.
In one possible example, in the acquiring the facial action of the user, the instructions in the program are specifically configured to: acquire the target ear canal shape of the user when ear movement of the user is detected; and query a preset mapping relation between ear canal shapes and facial actions with the target ear canal shape as the query key, to determine the facial action corresponding to the target ear canal shape.
In one possible example, the wireless headset comprises a first wireless headset and a second wireless headset, and in the determining the preset function instruction that matches the facial action, the instructions in the program are specifically configured to: determine a matching degree between a first facial action acquired by the first wireless earphone and a second facial action acquired by the second wireless earphone; and, when the matching degree is detected to be greater than a preset matching degree threshold, determine the preset function instruction that matches the first facial action and controls the wireless earphone.
In accordance with the embodiments shown in fig. 4 and 5, please refer to fig. 7, fig. 7 is a schematic structural diagram of an electronic device provided in an embodiment of the present application, the electronic device being communicatively connected to a wireless headset, and as shown, the electronic device includes a processor, a memory, a communication interface, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the processor, and the programs include instructions for performing the following steps;
when user ear information sent by the wireless earphone is received, determining facial actions of a user according to the ear information, wherein the ear information is information acquired when the wireless earphone detects function awakening operation aiming at the wireless earphone;
determining a preset function instruction matched with the facial action, wherein the preset function instruction comprises a volume adjusting instruction and a power-on/power-off control instruction;
and executing corresponding control operation according to the preset function instruction to control the wireless earphone.
It can be seen that, in the embodiment of the present application, the electronic device first determines the user's facial action according to the ear information when receiving the user's ear information sent by the wireless headset, the ear information being information acquired when the wireless headset detects a function wake-up operation for the wireless headset; secondly, it determines the preset function instruction matched with the facial action, the preset function instruction including a volume adjustment instruction and a power-on/off control instruction; finally, it executes the corresponding control operation according to the preset function instruction to control the wireless headset. Thus, the electronic device determines the facial action from the ear information acquired by the wireless headset, determines the matched preset function instruction from the facial action, and executes the corresponding volume adjustment or power on/off operation, which helps realize hands-free operation of the wireless headset. Moreover, because the facial action is obtained from the ear information acquired by the wireless headset, the electronic device does not need to start a camera, which helps reduce the power consumption of the electronic device and improve both the convenience of controlling the wireless headset and the intelligence of the wireless headset. In addition, the user's ear information is acquired only when a function wake-up operation for the wireless headset is detected, which helps avoid misoperation of the wireless headset and the electronic device caused by unconscious facial actions of the user and improves the intelligence of the electronic device's control operation.
In one possible example, the ear information includes posture change information of the user's ear contour obtained by the wireless headset when detecting ear movement, and in the determining the facial action of the user according to the ear information, the instructions in the program are specifically configured to: determine position parameters of the ear contour corresponding to the posture change information of the ear contour; acquire a face region matched with the position parameters; and determine the facial action of the user from the face region.
In one possible example, the ear information includes a target ear canal shape of the user obtained by the wireless headset when detecting ear movement, and in the determining the facial action of the user according to the ear information, the instructions in the program are specifically configured to: query a preset mapping relation between ear canal shapes and facial actions with the target ear canal shape as the query key, to determine the facial action corresponding to the target ear canal shape.
In one possible example, the wireless headset comprises a first wireless headset and a second wireless headset, and in the determining the preset function instruction that matches the facial action, the instructions in the program are specifically configured to: determine a matching degree between a first facial action, which is the facial action determined according to the ear information sent by the first wireless headset, and a second facial action, which is the facial action determined according to the ear information sent by the second wireless headset; and, when the matching degree is detected to be greater than a preset matching degree threshold, determine the preset function instruction that matches the first facial action and controls the wireless headset.
In one possible example, the wireless headset comprises a first wireless headset and a second wireless headset, and in the determining the preset function instruction that matches the facial action, the instructions in the program are specifically configured to: when the user's ear information is determined to have been sent by the first wireless earphone, determine the preset function instruction that matches the facial action and controls the first wireless earphone; or, when the user's ear information is determined to have been sent by the second wireless earphone, determine the preset function instruction that matches the facial action and controls the second wireless earphone.
The above description has introduced the solution of the embodiment of the present application mainly from the perspective of the method-side implementation process. It will be appreciated that the wireless headset, in order to perform the above-described functions, includes corresponding hardware structures and/or software modules for performing the respective functions. Those of skill in the art will readily appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments provided herein can be implemented as hardware or as a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends upon the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiment of the present application, the wireless headset may be divided into the functional units according to the above method example, for example, each functional unit may be divided corresponding to each function, or two or more functions may be integrated into one processing unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit. It should be noted that the division of the unit in the embodiment of the present application is schematic, and is only a logic function division, and there may be another division manner in actual implementation.
Fig. 8 is a block diagram showing functional units of the control operation device 800 according to the embodiment of the present application. The control operation device 800 is applied to a wireless headset, and the control operation device 800 includes an acquisition unit 801, a determination unit 802, and an execution unit 803, wherein,
the acquiring unit 801 is configured to acquire a facial action of a user when a function wake-up operation for the wireless headset is detected;
the determining unit 802 is configured to determine a preset function instruction matched with the facial action, where the preset function instruction includes a volume adjustment instruction and an on/off control instruction;
the execution unit 803 is configured to execute a control operation corresponding to the preset functional instruction.
It can be seen that, in the embodiment of the present application, the wireless headset first acquires the facial action of the user when detecting a function wake-up operation for the wireless headset, then determines the preset function instruction matched with the facial action, the preset function instruction including a volume adjustment instruction and a power-on/off control instruction, and finally executes the control operation corresponding to the preset function instruction. Thus, the wireless earphone determines the matched preset function instruction from the acquired facial action and executes the corresponding volume adjustment or power on/off operation, which helps realize hands-free operation of the wireless earphone. Moreover, the facial action is obtained from the ear information acquired by the wireless earphone itself, without the electronic device acquiring it through a camera, which helps improve the convenience of the wireless earphone's control operation and the intelligence of the wireless earphone. In addition, the facial action of the user is acquired only when a function wake-up operation for the wireless earphone is detected, which helps avoid misoperation of the wireless earphone caused by unconscious facial actions of the user and improves the intelligence of the wireless earphone's control operation.
In one possible example, in terms of acquiring the facial action of the user, the acquiring unit 801 is specifically configured to: when detecting ear movement of the user, acquire posture change information of the ear contour of the user; and determine the facial action of the user according to the posture change information of the ear contour.
In this possible example, in terms of determining the facial action of the user according to the posture change information of the ear contour, the acquiring unit 801 is specifically configured to: determine position parameters of the ear contour corresponding to the posture change information of the ear contour; obtain a face region matching the position parameters; and determine the facial action of the user from the face region.
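A minimal sketch of this two-stage lookup, under the assumption (mine, not the patent's) that the position parameters and face regions can be represented as discrete labels:

```python
# Hypothetical tables: which face region each ear-contour change position
# tracks, and which facial action each (region, change mode) pair implies.
FACE_REGION_BY_POSITION = {
    "earlobe": "lower_face",    # earlobe movement tracks jaw/mouth activity
    "helix_top": "upper_face",  # upper contour movement tracks brow activity
}

ACTION_BY_REGION_AND_MODE = {
    ("lower_face", "pull_down"): "open_mouth",
    ("upper_face", "shift_up"): "raise_eyebrows",
}

def facial_action_from_contour(position_param, change_mode):
    """Map an ear-contour change position and posture change mode to a facial action."""
    region = FACE_REGION_BY_POSITION.get(position_param)
    if region is None:
        return None  # no face region matches this position parameter
    return ACTION_BY_REGION_AND_MODE.get((region, change_mode))
```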
In one possible example, in terms of acquiring the facial action of the user, the acquiring unit 801 is specifically configured to: when detecting ear movement of the user, acquire the target ear canal shape of the user; and, using the target ear canal shape as a query identifier, query the preset mapping relation between ear canal shapes and facial actions, and determine the facial action corresponding to the target ear canal shape.
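The "query identifier" lookup amounts to a keyed table query. The sketch below assumes the ear canal shape has already been classified into a discrete label; the labels themselves are invented for illustration:

```python
# Hypothetical preset mapping between ear canal shapes and facial actions.
# The ear canal deforms with jaw movement, so its shape can key the lookup.
EAR_CANAL_TO_ACTION = {
    "canal_widened": "open_mouth",
    "canal_narrowed": "clench_jaw",
}

def query_facial_action(target_ear_canal_shape):
    """Use the target ear canal shape as a query identifier into the preset mapping."""
    # Returns None when the shape has no preset entry.
    return EAR_CANAL_TO_ACTION.get(target_ear_canal_shape)
```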
In one possible example, the wireless headset includes a first wireless headset and a second wireless headset, and in terms of determining the preset function instruction matched with the facial action, the determining unit 802 is specifically configured to: determine a matching degree between a first facial action acquired by the first wireless headset and a second facial action acquired by the second wireless headset; and, when detecting that the matching degree is greater than a preset matching degree threshold, determine the preset function instruction matched with the first facial action for controlling the wireless headset.
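Cross-validating the two earphones' captures can be sketched as follows. The similarity measure is a simple stand-in of my own; the patent does not specify how the matching degree is computed, only that it is compared against a preset threshold:

```python
# Hypothetical threshold; the patent only requires "greater than a preset
# matching degree threshold", not a specific value.
MATCH_THRESHOLD = 0.8

def matching_degree(first_features, second_features):
    """Fraction of feature positions on which both earphones' captures agree."""
    agree = sum(a == b for a, b in zip(first_features, second_features))
    return agree / max(len(first_features), 1)

def confirm_action(first_features, second_features, action_of_first):
    """Accept the first earphone's facial action only if both earphones agree."""
    if matching_degree(first_features, second_features) > MATCH_THRESHOLD:
        return action_of_first  # both captures agree: accept the action
    return None                 # disagreement: likely noise, reject
```

Requiring agreement between the two independent captures is what filters out spurious single-ear readings before any control operation is executed.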
The acquiring unit 801 may be a processor or a transceiver, and the determining unit 802 and the executing unit 803 may be processors.
The above description has introduced the solution of the embodiment of the present application mainly from the perspective of the method-side implementation process. It is understood that, in order to realize the above functions, the electronic device comprises corresponding hardware structures and/or software modules for performing the respective functions. Those of skill in the art will readily appreciate that the various illustrative units and algorithm steps described in connection with the embodiments provided herein can be implemented as hardware or a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends upon the particular application and design constraints of the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiment of the present application, the electronic device may be divided into the functional units according to the method example, for example, each functional unit may be divided corresponding to each function, or two or more functions may be integrated into one processing unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit. It should be noted that the division of the unit in the embodiment of the present application is schematic, and is only a logic function division, and there may be another division manner in actual implementation.
Fig. 9 is a block diagram showing functional units of the control operation device 900 according to the embodiment of the present application. The control operation device 900 is applied to an electronic device communicatively connected to a wireless headset, and the control operation device 900 comprises a processing unit 901, a determining unit 902, and an executing unit 903, wherein,
the processing unit 901 is configured to, when receiving user ear information sent by the wireless headset, determine a facial action of a user according to the ear information, where the ear information is information obtained when the wireless headset detects a function wakeup operation for the wireless headset;
the determining unit 902 is configured to determine a preset function instruction matched with the facial action, where the preset function instruction includes a volume adjustment instruction and an on/off control instruction;
the execution unit 903 is configured to execute a corresponding control operation according to the preset function instruction to control the wireless headset.
It can be seen that, in the embodiment of the present application, the electronic device first determines, when receiving user ear information sent by the wireless headset, the facial action of the user according to the ear information, where the ear information is information acquired when the wireless headset detects a function wake-up operation for the wireless headset; secondly, the electronic device determines the preset function instruction matched with the facial action, where the preset function instruction includes a volume adjustment instruction and a power on/off control instruction; finally, it executes the corresponding control operation according to the preset function instruction to control the wireless headset. Thus, the electronic device determines the facial action according to the ear information acquired by the wireless headset, determines the matched preset function instruction according to the facial action, and executes the corresponding operation, such as volume adjustment or power on/off control, which facilitates hands-free operation of the wireless headset. Moreover, the facial action is derived from the ear information acquired by the wireless headset without the electronic device starting its camera, which reduces the power consumption of the electronic device and improves the convenience of wireless headset control operations and the intelligence of the wireless headset. In addition, the user ear information is acquired only when the function wake-up operation for the wireless headset is detected, which helps avoid erroneous operations of the wireless headset and the electronic device caused by unconscious facial actions of the user, further improving the intelligence of the electronic device's control operation.
In one possible example, the ear information includes posture change information of the ear contour of the user acquired by the wireless headset when detecting ear movement, and in terms of determining the facial action of the user according to the ear information, the processing unit 901 is specifically configured to: determine position parameters of the ear contour corresponding to the posture change information of the ear contour; obtain a face region matching the position parameters; and determine the facial action of the user from the face region.
In one possible example, the ear information includes a target ear canal shape of the user acquired by the wireless headset when detecting ear movement, and in terms of determining the facial action of the user according to the ear information, the processing unit 901 is specifically configured to: using the target ear canal shape as a query identifier, query the preset mapping relation between ear canal shapes and facial actions, and determine the facial action corresponding to the target ear canal shape.
In one possible example, the wireless headset includes a first wireless headset and a second wireless headset, and in terms of determining the preset function instruction matched with the facial action, the determining unit 902 is specifically configured to: determine a matching degree between a first facial action, which is determined according to ear information sent by the first wireless headset, and a second facial action, which is determined according to ear information sent by the second wireless headset; and, when detecting that the matching degree is greater than a preset matching degree threshold, determine the preset function instruction matched with the first facial action for controlling the wireless headset.
In one possible example, the wireless headset includes a first wireless headset and a second wireless headset, and in terms of determining the preset function instruction matched with the facial action, the determining unit 902 is specifically configured to: when it is determined that the first wireless headset sent the user ear information, determine the preset function instruction matched with the facial action for controlling the first wireless headset; or, when it is determined that the second wireless headset sent the user ear information, determine the preset function instruction matched with the facial action for controlling the second wireless headset.
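This per-earphone dispatch can be sketched directly; the sender identifiers and the instruction table are illustrative assumptions, not values from the patent:

```python
# Hypothetical dispatch: the determined instruction is targeted at
# whichever earphone supplied the ear information.
def dispatch_instruction(sender_id, facial_action, instruction_table):
    """Return (target earphone, instruction), or None if no instruction matches."""
    instruction = instruction_table.get(facial_action)
    if instruction is None:
        return None
    # Control the earphone that sent the ear information.
    return (sender_id, instruction)
```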
Wherein, the processing unit 901 may be a processor or a transceiver, and the determining unit 902 and the executing unit 903 may be processors.
As shown in fig. 10, fig. 10 is a schematic structural diagram of another wireless headset 1000 disclosed in the embodiment of the present application. The wireless headset 1000 may include control circuitry that may include the storage and processing circuit 1030. The storage and processing circuit 1030 may be a memory, such as a hard disk drive memory, a non-volatile memory (e.g., a flash memory or other electronically programmable read-only memory used to form a solid state drive, etc.), a volatile memory (e.g., a static or dynamic random access memory, etc.), etc., and the embodiments of the present application are not limited thereto. The processing circuitry in the storage and processing circuitry 1030 may be used to control the operation of the wireless headset 1000. The processing circuitry may be implemented based on one or more microprocessors, microcontrollers, digital signal processors, baseband processors, power management units, audio codec chips, application specific integrated circuits, display driver integrated circuits, and the like.
The storage and processing circuit 1030 may be used to run software in the wireless headset 1000, such as Voice over Internet Protocol (VoIP) phone call applications, media playing applications, operating system functions, and the like. Such software may be used to perform control operations such as, for example, ambient light measurements based on an ambient light sensor, proximity sensor measurements based on a proximity sensor, information display functions implemented based on status indicators such as light emitting diodes, touch event detection based on a touch sensor, functions associated with displaying information on multiple (e.g., layered) displays, operations associated with performing wireless communication functions, operations associated with collecting and generating audio signals, control operations associated with collecting and processing button press event data, and other functions in the wireless headset 1000, to name a few.
The wireless headset 1000 may also include input-output circuitry 1040. The input-output circuit 1040 may be used to enable the wireless headset 1000 to input and output data, i.e., to allow the wireless headset 1000 to receive data from an external device and to output data to an external device. The input-output circuit 1040 may further include a sensor 1041. The sensors 1041 may include ambient light sensors, optical and capacitive proximity sensors, touch sensors (e.g., optical and/or capacitive touch sensors, where the touch sensors may be part of a touch display screen or used independently as a touch sensor structure), acceleration sensors, and other sensors.
The wireless headset 1000 may also include an audio component 1042. The audio assembly 1042 may be used to provide audio input and output functionality for the wireless headset 1000. The audio component 1042 in the wireless headset 1000 may include a speaker, microphone, buzzer, tone generator, and other components for generating and detecting sound.
The communication circuit 1043 may be used to provide the wireless headset 1000 with the capability to communicate with external devices. The communication circuitry 1043 may include analog and digital input-output interface circuitry, and wireless communication circuitry based on radio frequency signals and/or optical signals. The wireless communication circuitry in communication circuitry 1043 may include radio-frequency transceiver circuitry, power amplifier circuitry, low noise amplifiers, switches, filters, and antennas. For example, the wireless Communication circuitry in Communication circuitry 1043 may include circuitry to support Near Field Communication (NFC) by transmitting and receiving Near Field coupled electromagnetic signals. For example, communications circuitry 1043 may include a near field communications antenna and a near field communications transceiver. The communications circuitry 1043 may also include a cellular telephone transceiver and antenna, a wireless local area network transceiver circuit and antenna, and so forth.
The wireless headset 1000 may further include a battery, a power management circuit, and other input-output units 1044. Input-output unit 1044 may include buttons, joysticks, click wheels, scroll wheels, touch pads, keypads, keyboards, cameras, light emitting diodes and other status indicators, and the like.
A user may input commands through the input-output units 1044 to control the operation of the wireless headset 1000, and may use the output data of the input-output units 1044 to receive status information and other outputs from the wireless headset 1000.
Embodiments of the present application also provide a computer storage medium, wherein the computer storage medium stores a computer program for electronic data exchange, and the computer program enables a computer to execute part or all of the steps of any one of the methods as described in the above method embodiments.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of any of the methods as described in the above method embodiments. The computer program product may be a software installation package.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, the above-described division of the units is only one type of division of logical functions, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of some interfaces, devices or units, and may be an electric or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit may be stored in a computer-readable memory if it is implemented in the form of a software functional unit and sold or used as a stand-alone product. Based on such understanding, the technical solution of the present application, in essence, or the part thereof contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory, which includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned memory includes various media capable of storing program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing associated hardware, and the program may be stored in a computer-readable memory, which may include: a flash memory disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, and the like.
The foregoing detailed description of the embodiments of the present application has illustrated the principles and implementations of the present application; the above description of the embodiments is only provided to help understand the method and core concept of the present application. Meanwhile, for a person skilled in the art, there may be variations in the specific embodiments and the application scope according to the idea of the present application. In summary, the content of this specification should not be construed as a limitation of the present application.

Claims (15)

1. A wireless headset comprising a processor, and a motion sensor and a memory coupled to the processor, wherein,
the memory is used for storing preset function instructions, and the preset function instructions comprise volume adjusting instructions and power on/off control instructions;
the processor is configured to acquire the facial action of the user according to the user ear dynamic data acquired by the motion sensor when a function wake-up operation for the wireless headset is detected; determine the preset function instruction matched with the facial action; and execute the control operation corresponding to the preset function instruction; wherein, in terms of acquiring the facial action of the user, the motion sensor is specifically configured to: when detecting that the ear of the user moves, acquire posture change information of the ear contour of the user, where the posture change information of the ear contour includes ear contour position information and a posture change mode corresponding to the posture change of the earlobe; and the processor is further specifically configured to: determine corresponding position parameters of the ear contour according to the posture change information of the ear contour, where different position parameters correspond to different face regions; obtain a face region matching the position parameters; and determine the facial action of the face region matched with the position parameters according to the posture change mode, where the position parameters of the ear contour are the change positions corresponding to the posture change information of the ear contour, and the change positions include the position of the earlobe.
2. The wireless headset of claim 1, wherein the motion sensor comprises an infrared sensor, and wherein, in terms of acquiring the facial action of the user, the infrared sensor is specifically configured to: when detecting ear movement of the user, acquire the target ear canal shape of the user;
and the processor is specifically configured to: using the target ear canal shape as a query identifier, query the preset mapping relation between ear canal shapes and facial actions stored in the memory, and determine the facial action corresponding to the target ear canal shape.
3. The wireless headset according to claim 1 or 2, wherein the wireless headset comprises a first wireless headset and a second wireless headset, and wherein, in terms of determining the preset function instruction matched with the facial action, the processor is specifically configured to: determine a matching degree between a first facial action acquired by the first wireless headset and a second facial action acquired by the second wireless headset; and, when detecting that the matching degree is greater than a preset matching degree threshold stored in the memory, determine the preset function instruction matched with the first facial action for controlling the wireless headset.
4. A method of controlling operation, for use with a wireless headset, the method comprising:
when a function wake-up operation for the wireless headset is detected, acquiring the facial action of a user according to user ear dynamic data acquired by a motion sensor;
determining a preset function instruction matched with the facial action, wherein the preset function instruction comprises a volume adjusting instruction and a power-on/power-off control instruction;
executing a control operation corresponding to the preset function instruction, wherein the acquiring the facial action of the user comprises:
when detecting that the ear of the user moves, acquiring posture change information of the ear contour of the user, where the posture change information of the ear contour includes ear contour position information and a posture change mode corresponding to the posture change of the earlobe;
determining corresponding position parameters of the ear contour according to the posture change information of the ear contour, where different position parameters correspond to different face regions;
acquiring a face region matched with the position parameters;
determining the facial action of the user from the face region, comprising: determining the facial action of the face region matched with the position parameters according to the posture change mode.
5. The method of claim 4, wherein the obtaining facial movements of the user comprises:
when detecting the ear movement of a user, acquiring the target ear canal shape of the user;
and using the target ear canal shape as a query identifier, querying the preset mapping relation between ear canal shapes and facial actions, and determining the facial action corresponding to the target ear canal shape.
6. The method of claim 4 or 5, wherein the wireless headsets include a first wireless headset and a second wireless headset, and wherein determining the preset functional instructions that match the facial movements comprises:
determining a matching degree of a first facial action acquired by the first wireless earphone and a second facial action acquired by the second wireless earphone;
and when the matching degree is detected to be larger than a preset matching degree threshold value, determining the preset function instruction which is matched with the first face action and controls the wireless earphone.
7. A method of controlling operation, applied to an electronic device communicatively coupled to a wireless headset, the method comprising:
when receiving user ear information sent by the wireless headset, determining the facial action of the user according to the ear information, where the ear information is information acquired when the wireless headset detects a function wake-up operation for the wireless headset, the ear information includes ear dynamic data and posture change information of the ear contour of the user acquired when detecting ear movement, and the posture change information of the ear contour includes ear contour position information and a posture change mode corresponding to the posture change of the earlobe; and the determining the facial action of the user according to the ear information further includes: determining position parameters of the corresponding ear contour according to the posture change information of the ear contour, where different position parameters correspond to different face regions, obtaining the face region matched with the position parameters, and determining the facial action in the face region matched with the position parameters according to the posture change mode;
determining a preset function instruction matched with the facial action, wherein the preset function instruction comprises a volume adjusting instruction and a power-on/power-off control instruction;
and executing corresponding control operation according to the preset function instruction to control the wireless earphone.
8. The method of claim 7, wherein the ear information comprises a target ear canal shape of the user obtained by the wireless headset when detecting ear motion, and wherein determining facial movements of the user from the ear information comprises:
and using the target ear canal shape as a query identifier, querying the preset mapping relation between ear canal shapes and facial actions, and determining the facial action corresponding to the target ear canal shape.
9. The method of claim 8, wherein the wireless headset comprises a first wireless headset and a second wireless headset, and wherein determining the preset function instructions that match the facial movement comprises:
determining a matching degree between a first facial action, which is determined according to ear information sent by the first wireless headset, and a second facial action, which is determined according to ear information sent by the second wireless headset;
and when the matching degree is detected to be larger than a preset matching degree threshold value, determining the preset function instruction which is matched with the first face action and controls the wireless earphone.
10. The method of any of claims 7-9, wherein the wireless headset comprises a first wireless headset and a second wireless headset, and wherein determining the preset functional instructions that match the facial movements comprises:
when it is determined that the first wireless headset sent the user ear information, determining the preset function instruction matched with the facial action for controlling the first wireless headset; or,
when it is determined that the second wireless headset sent the user ear information, determining the preset function instruction matched with the facial action for controlling the second wireless headset.
11. A control operation device is applied to a wireless earphone and comprises an acquisition unit, a determination unit and an execution unit,
the acquiring unit is configured to acquire the facial action of a user according to user ear dynamic data acquired by a motion sensor when a function wake-up operation for the wireless headset is detected, wherein the acquiring unit is further configured to: when detecting ear movement of the user, acquire posture change information of the ear contour of the user, where the posture change information of the ear contour includes ear contour position information and a posture change mode corresponding to the posture change of the earlobe; determine position parameters of the corresponding ear contour according to the posture change information of the ear contour, where different position parameters correspond to different face regions; obtain a face region matched with the position parameters; and determine the facial action in the face region matched with the position parameters according to the posture change mode;
the determining unit is used for determining a preset function instruction matched with the facial action, and the preset function instruction comprises a volume adjusting instruction and an on/off control instruction;
and the execution unit is used for executing the control operation corresponding to the preset functional instruction.
12. A control operation device is applied to an electronic device which is in communication connection with a wireless earphone and comprises a processing unit, a determination unit and an execution unit,
the processing unit is configured to determine the facial action of the user according to the ear information when receiving the user ear information sent by the wireless headset, where the ear information is information acquired when the wireless headset detects a function wake-up operation for the wireless headset, the ear information includes posture change information of the ear contour of the user acquired by the wireless headset when detecting ear movement, and the posture change information of the ear contour includes ear contour position information and a posture change mode corresponding to the posture change of the earlobe; the processing unit is further configured to: determine the position parameters of the corresponding ear contour according to the posture change information of the ear contour, where different position parameters correspond to different face regions, obtain the face region matched with the position parameters, and determine the facial action in the face region matched with the position parameters according to the posture change mode;
the determining unit is used for determining a preset function instruction matched with the facial action, and the preset function instruction comprises a volume adjusting instruction and an on/off control instruction;
and the execution unit is used for executing corresponding control operation according to the preset function instruction to control the wireless earphone.
13. A wireless headset comprising a processor, a memory, a communication interface, and one or more programs stored in the memory and configured for execution by the processor, the programs comprising instructions for performing the steps in the method of any of claims 4-6.
14. An electronic device comprising a processor, a memory, a communication interface, and one or more programs stored in the memory and configured to be executed by the processor, the programs comprising instructions for performing the steps in the method of any of claims 7-10.
15. A computer-readable storage medium, characterized in that a computer program for electronic data exchange is stored, wherein the computer program causes a computer to perform the method according to any one of claims 4-6 or to perform the method according to any one of claims 7-10.
CN201810385548.0A 2018-04-26 2018-04-26 Wireless earphone, control operation method and related product Active CN108429956B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810385548.0A CN108429956B (en) 2018-04-26 2018-04-26 Wireless earphone, control operation method and related product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810385548.0A CN108429956B (en) 2018-04-26 2018-04-26 Wireless earphone, control operation method and related product

Publications (2)

Publication Number Publication Date
CN108429956A CN108429956A (en) 2018-08-21
CN108429956B true CN108429956B (en) 2021-05-04

Family

ID=63161879

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810385548.0A Active CN108429956B (en) 2018-04-26 2018-04-26 Wireless earphone, control operation method and related product

Country Status (1)

Country Link
CN (1) CN108429956B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110650405A (en) * 2019-10-22 2020-01-03 Oppo(重庆)智能科技有限公司 Wireless earphone control system, method, device and storage medium
CN115665319A (en) * 2022-09-27 2023-01-31 深圳振科智能科技有限公司 Application control method, device, equipment and storage medium based on wireless earphone

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104244157A (en) * 2013-06-14 2014-12-24 奥迪康有限公司 A hearing assistance device with brain-computer interface

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5543929B2 (en) * 2009-02-05 2014-07-09 国立大学法人大阪大学 Input device, wearable computer, and input method
GB201103200D0 (en) * 2011-02-24 2011-04-13 Isis Innovation An optical device for the visually impaired
CN102426768B (en) * 2011-08-16 2016-02-10 海尔集团公司 The system and method for electronic equipment is controlled based on body electric wave
CN204707249U (en) * 2015-06-18 2015-10-14 浙江神灯生物科技有限公司 Smart bluetooth earphone

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104244157A (en) * 2013-06-14 2014-12-24 奥迪康有限公司 A hearing assistance device with brain-computer interface

Also Published As

Publication number Publication date
CN108429956A (en) 2018-08-21

Similar Documents

Publication Publication Date Title
CN109040887B (en) Master-slave earphone switching control method and related product
CN109068206B (en) Master-slave earphone switching control method and related product
CN108810693B (en) Wearable device and device control device and method thereof
CN108710615B (en) Translation method and related equipment
CN108886653B (en) Earphone sound channel control method, related equipment and system
CN108924706A (en) Bluetooth headset method for handover control, bluetooth headset and computer readable storage medium
CN109561420B (en) Emergency help-seeking method and related equipment
CN109067965B (en) Translation method, translation device, wearable device and storage medium
CN108966067A (en) Control method for playing back and Related product
CN107566604B (en) Message reminding control method and user terminal
CN110350935B (en) Audio signal output control method, wearable device and readable storage medium
CN108769850A (en) Apparatus control method and Related product
CN108540669A (en) Wireless headset, the control method based on headset detection and Related product
CN109150221B (en) Master-slave switching method for wearable equipment and related product
CN108897516B (en) Wearable device volume adjustment method and related product
CN108777827A (en) Wireless headset, method for regulation of sound volume and Related product
CN108683790B (en) Voice processing method and related product
CN108429956B (en) Wireless earphone, control operation method and related product
CN108668018B (en) Mobile terminal, volume control method and related product
CN109121034B (en) Master-slave switching method based on volume and related product
CN110072295B (en) Dual-channel communication method, device, first terminal and medium
CN108834013B (en) Wearable equipment electric quantity balancing method and related product
CN109144455B (en) Display control method and related product
CN108882084B (en) Wearable equipment electric quantity balancing method and related product
CN110543231A (en) electronic device control method and related equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant