CN112218196A - Earphone and earphone control method - Google Patents

Earphone and earphone control method

Info

Publication number
CN112218196A
Authority
CN
China
Prior art keywords
signal
user
earphone
ear
sensor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910621183.1A
Other languages
Chinese (zh)
Inventor
刘绍斌
唐强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201910621183.1A priority Critical patent/CN112218196A/en
Priority to PCT/CN2020/093626 priority patent/WO2021004194A1/en
Publication of CN112218196A publication Critical patent/CN112218196A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00 Details of transducers, loudspeakers or microphones
    • H04R1/10 Earpieces; Attachments therefor; Earphones; Monophonic headphones
    • H04R1/1091 Details not provided for in groups H04R1/1008 - H04R1/1083
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/02 Detecting, measuring or recording pulse, heart rate, blood pressure or blood flow; Combined pulse/heart-rate/blood pressure determination; Evaluating a cardiovascular condition not otherwise provided for, e.g. using combinations of techniques provided for in this group with electrocardiography or electroauscultation; Heart catheters for measuring blood pressure
    • A61B5/024 Detecting, measuring or recording pulse rate or heart rate
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/68 Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient
    • A61B5/6801 Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient specially adapted to be attached to or worn on the body surface
    • A61B5/6802 Sensor mounted on worn items
    • A61B5/6803 Head-worn items, e.g. helmets, masks, headphones or goggles
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/68 Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient
    • A61B5/6801 Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient specially adapted to be attached to or worn on the body surface
    • A61B5/6813 Specially adapted to be attached to a specific body part
    • A61B5/6814 Head
    • A61B5/6815 Ear
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/68 Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient
    • A61B5/6801 Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient specially adapted to be attached to or worn on the body surface
    • A61B5/6844 Monitoring or controlling distance between sensor and tissue
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/30 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for calculating health indices; for individual health risk assessment
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00 Details of transducers, loudspeakers or microphones
    • H04R1/10 Earpieces; Attachments therefor; Earphones; Monophonic headphones
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2420/00 Details of connection covered by H04R, not provided for in its groups
    • H04R2420/05 Detection of connection of loudspeakers or headphones to amplifiers

Abstract

The embodiment of the present application provides an earphone and an earphone control method. The earphone includes a speaker, a distance sensor, a respiration sensor and a microprocessor. The speaker is used for emitting sound. The distance sensor is used for detecting the distance between the earphone and the user's ear, and sends a first signal when that distance is less than or equal to a preset distance. The respiration sensor is used for collecting a breathing signal of the user. When the microprocessor receives the first signal, it collects the breathing signal of the user through the respiration sensor and compares the breathing signal with a preset signal; when the breathing signal matches the preset signal, the microprocessor sends a second signal, and the second signal is used to indicate that the earphone is worn on the user's ear. The earphone provided by the embodiment of the present application can detect whether it is worn on the user's ear, which helps enrich the application functions of the earphone.

Description

Earphone and earphone control method
Technical Field
The application relates to the technical field of earphones, in particular to an earphone and an earphone control method.
Background
With the development of society, the mobile phone has become an indispensable communication tool in modern life. As a fast-moving consumer product, the mobile phone is updated quickly, and its key selling points keep improving. As an accessory of the mobile phone, the earphone faces ever higher consumer expectations, and how to improve the functions of the earphone is a current challenge.
Disclosure of Invention
The embodiments of the present application provide an earphone and an earphone control method, which help improve the functions of the earphone.
An embodiment of the present application provides an earphone, the earphone including:
a speaker for emitting sound;
the distance sensor is used for detecting the distance between the earphone and the ear of the user, and the distance sensor sends out a first signal when the distance between the earphone and the ear of the user is smaller than or equal to a preset distance;
the breathing sensor is used for acquiring a breathing signal of a user;
a microprocessor, wherein, in the case that the microprocessor receives the first signal, the microprocessor compares the breathing signal with a preset signal; when the breathing signal matches the preset signal, the microprocessor sends a second signal, and the second signal is used to indicate that the earphone is worn on the user's ear.
The earphone provided by the embodiment of the present application first detects the distance between the earphone and the user's ear through the distance sensor. When the distance between the earphone and the user's ear is less than or equal to the preset distance, the microprocessor controls the respiration sensor to collect the user's breathing signal and compares the breathing signal with the preset signal. When the breathing signal matches the preset signal, the microprocessor determines that the earphone is worn on the user's ear, so that sound can be output through the speaker when the earphone is worn on the user's ear, improving the application functions of the earphone. Moreover, through the double check of detecting both the distance between the earphone and the user's ear and the user's breathing signal, the method provided by the embodiment of the present application can reduce the risk of misjudgment compared with only detecting the distance between the earphone and the user's ear.
An embodiment of the present application further provides an earphone, including:
the distance sensor is used for detecting whether the earphone is worn on the ear of the user, and the distance sensor sends a feedback signal under the condition that the earphone is worn on the ear of the user;
a heart rate sensor for acquiring heart rate parameters of a user;
and the microprocessor controls the heart rate sensor to acquire the heart rate parameters of the user under the condition that the microprocessor receives the feedback signal, and judges the health condition of the user according to the heart rate parameters.
The embodiment of the application further provides an earphone control method, which comprises the following steps:
detecting whether a communication connection is established with a target terminal;
detecting whether the earphone is worn on the ear of the user under the condition that a communication connection with the target terminal has been established;
and in the case of being worn on the ear of the user, receiving the electric signal from the target terminal, and converting the electric signal into a sound signal for output.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the embodiments are briefly described below. Obviously, the drawings in the following description show only some embodiments of the present application, and those skilled in the art can obtain other drawings based on these drawings without creative effort.
Fig. 1 is a block diagram of a first earphone according to an embodiment of the present application.
Fig. 2 is a block diagram of a second earphone according to an embodiment of the present application.
Fig. 3 is a block diagram of a third earphone according to an embodiment of the present application.
Fig. 4 is a schematic diagram of a motion trajectory curve of a headset in space according to an embodiment of the present application.
Fig. 5 is a block diagram of a fourth earphone according to an embodiment of the present application.
Fig. 6 is a block diagram of a fifth earphone according to an embodiment of the present application.
Fig. 7 is a block diagram of a sixth earphone according to an embodiment of the present application.
Fig. 8 is a block diagram of a seventh earphone according to an embodiment of the present application.
Fig. 9 is a block diagram of an eighth headphone according to an embodiment of the present application.
Fig. 10 is a schematic diagram of an earphone interacting with a terminal according to an embodiment of the present application.
Fig. 11 is a schematic flowchart of a first earphone control method according to an embodiment of the present application.
Fig. 12 is a schematic partial flowchart of a headphone control method according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments obtained by a person of ordinary skill in the art without any inventive effort based on the embodiments in the present application are within the scope of protection of the present application.
Referring to fig. 1 and fig. 2 together, the earphone 10 according to the embodiment of the present application includes a speaker 200, a distance sensor 300, a respiration sensor 350, and a microprocessor 400, wherein the speaker 200 is used for emitting sound; the distance sensor 300 is configured to detect a distance between the earphone 10 and an ear of a user, and when the distance between the earphone 10 and the ear of the user is smaller than or equal to a preset distance, the distance sensor 300 sends a first signal; the respiration sensor 350 is used for acquiring a respiration signal of a user; under the condition that the microprocessor 400 receives the first signal, the microprocessor 400 compares the breathing signal with a preset signal, and when the breathing signal is matched with the preset signal, the microprocessor 400 sends a second signal which is used for indicating that the earphone 10 is worn on the ear of the user.
The earphone 10 is a pair of conversion units that receives an electrical signal from a media player or receiver and converts it into audible sound waves through the speaker 200 near the ear. The media player or receiver may be an electronic device, and the electronic device may be any device with an audio playback function, for example, smart devices with an audio playback function such as a tablet computer, a mobile phone, an electronic reader, a remote controller, a Personal Computer (PC), a notebook computer, a vehicle-mounted device, a network television or a wearable device.
In one embodiment, the earphone 10 further comprises a main body portion 100, wherein the main body portion 100 is a housing portion of the earphone 10, and the main body portion 100 forms an integral frame of the earphone 10. The speaker 200 is mounted in the main body 100 and outputs sound. The distance between the earphone 10 and the user's ear can be considered as the distance between the main body portion 100 and the user's ear. The earphone 10 may be considered to be worn on the ear of the user as the main body 100 is worn on the ear of the user.
Microprocessor 400 may be a microchip, i.e., a central processing unit consisting of one or a few large-scale integrated circuits (LSIs). These circuits perform the functions of the control unit and the arithmetic logic unit. The microprocessor 400 can perform operations such as instruction fetching, instruction execution, and information exchange with external memory and logic components; it is the operation and control part of a microcomputer and, together with memory and peripheral circuit chips, can constitute a microcomputer.
The distance sensor 300 includes an optical sensor, an infrared sensor, and an ultrasonic sensor. The optical sensor includes a proximity light sensor. The distance sensor 300 is used for detecting the distance between the main body portion 100 and the user's ear. When the distance between the earphone 10 and the user's ear is less than or equal to the preset distance, the microprocessor 400 further controls the respiration sensor 350 to collect the user's breathing signal and compares the collected breathing signal with the preset signal. When the user's breathing signal matches the preset signal, the earphone 10 is considered to be worn on the user's ear, and at this time, sound can be transmitted to the user through the speaker 200 of the earphone 10. The value range of the preset distance may be 0-10 mm.
The respiration sensor 350 may acquire the user's breathing signal in a single acquisition or in multiple acquisitions. The collection of the breathing signal can be carried out synchronously with its comparison against the preset signal, that is, the breathing signal is compared with the preset signal while it is being collected, so that whether the earphone 10 is worn on the user's ear is determined in real time, which improves the timeliness of wearing detection.
The earphone 10 provided by the embodiment of the present application first detects the distance between the earphone 10 and the user's ear through the distance sensor 300. When the distance between the earphone 10 and the user's ear is less than or equal to the preset distance, the microprocessor 400 controls the respiration sensor 350 to collect the user's breathing signal and compares the breathing signal with the preset signal. When the breathing signal matches the preset signal, the microprocessor 400 determines that the earphone 10 is worn on the user's ear, so that sound can be output through the speaker 200 when the earphone is worn, improving the application functions of the earphone 10. Moreover, with the double check of detecting both the distance between the earphone 10 and the user's ear and the user's breathing signal, the method provided by the embodiment of the present application can reduce the risk of misjudgment compared with only detecting the distance between the earphone 10 and the user's ear.
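The following is a minimal sketch of this two-stage check, assuming hypothetical sensor hooks read_distance_mm() and read_breath_samples() and a simple RMS-based similarity test; none of these names, tolerances or thresholds come from the application.

```python
# Illustrative sketch only: read_distance_mm() and read_breath_samples() are
# hypothetical stand-ins for the distance sensor 300 and respiration sensor 350.
import numpy as np

PRESET_DISTANCE_MM = 10.0          # upper end of the 0-10 mm range in the text

def matches_preset(breath, preset, tol=0.2):
    """Crude similarity test: normalised RMS difference below a tolerance."""
    breath = np.asarray(breath, dtype=float)
    preset = np.asarray(preset, dtype=float)
    n = min(len(breath), len(preset))
    if n == 0:
        return False
    diff = breath[:n] - preset[:n]
    scale = np.max(np.abs(preset[:n])) or 1.0
    return np.sqrt(np.mean(diff ** 2)) / scale < tol

def wearing_detection(read_distance_mm, read_breath_samples, preset_signal):
    """Two-stage check described in the text: distance first, breathing second."""
    # Stage 1: the "first signal" is issued only when the earphone is within
    # the preset distance of the ear.
    if read_distance_mm() > PRESET_DISTANCE_MM:
        return None                          # no first signal, nothing to do
    # Stage 2: collect the breathing signal and compare it with the preset
    # signal before issuing the "second signal".
    breath = read_breath_samples()
    if matches_preset(breath, preset_signal):
        return "SECOND_SIGNAL"               # earphone judged worn on the ear
    return None

# Toy run with stubbed sensors: 5 mm away and a breath that matches the preset.
preset = np.sin(np.linspace(0, 4 * np.pi, 200))
print(wearing_detection(lambda: 5.0, lambda: preset.copy(), preset))  # SECOND_SIGNAL
```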
Further, in an embodiment, the distance sensor 300 includes a first sensing element 310 and a second sensing element 320. The first sensing element 310 is mounted on the front surface of the main body 100, and the second sensing element 320 is mounted on the back surface of the main body 100. The first sensing element 310 may be used for sensing whether the second sensing element 320 is shielded, and the second sensing element 320 may be used for sensing whether the first sensing element 310 is shielded. When the main body 100 is worn on the user's ear, the front surface is attached to the inner wall of the user's ear; at this time, the first sensing element 310 is shielded, and when the second sensing element 320 senses that the first sensing element 310 is shielded, the second sensing element 320 sends out the first signal, which indicates that the earphone 10 is preliminarily judged to be worn on the user's ear. At this point, the microprocessor 400 further collects the user's breathing signal and compares it with the preset signal to confirm whether the earphone 10 is worn on the user's ear, which can enrich the functions of the earphone 10 and reduce the risk of misjudgment.
In one embodiment, the microprocessor 400 is configured to compare waveform parameters of the respiration signal with waveform parameters of a preset signal, wherein the waveform parameters include at least one of frequency, amplitude and period.
The preset signal may be a breathing waveform graph of the user under ideal conditions.
In an embodiment, the preset signal may be an ideal waveform obtained through multiple tests, and the preset signal refers to a waveform obtained when the user is in a stable state and has a good health condition.
In another embodiment, the preset signal may also be a waveform downloaded from a database of authorities to ensure that the acquired preset signal is in compliance with the standard specification.
In yet another embodiment, the preset signal may be obtained after training a neural network model. A large number of breathing signals of users of different genders, ages, heights and weights are obtained and input into the neural network model; the input breathing signals are processed algorithmically against the data in a database, and a result is output. The output result can be regarded as an ideal preset signal that complies with the relevant standards.
Further, the preset signal may be obtained by downloading in real time, or may be obtained by storing in advance. When the preset signal is acquired in a real-time downloading manner, the acquired preset signal can be a newly issued waveform diagram, that is, the acquired preset signal is a waveform diagram conforming to the latest standard. When the preset signal is obtained by pre-storing, the time consumed in the downloading process can be avoided, and the process of comparing the preset signal with the acquired breathing signal can be rapidly completed, so that the wearing detection timeliness of the earphone 10 is improved.
The user's breathing signal is compared with the preset signal; the comparison indices can be the frequency, amplitude, period and other characteristics of the two waveform graphs. The comparison result is then fed back to the microprocessor 400 in real time, and the microprocessor 400 indicates whether the earphone 10 is worn on the user's ear according to the comparison result.
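As an illustration of comparing the frequency, amplitude and period of two waveform graphs, the sketch below estimates these parameters with an FFT and accepts a match within a relative tolerance; the 25% tolerance, the sampling rate and the function names are assumptions, not taken from the application.

```python
import numpy as np

def waveform_params(signal, fs):
    """Estimate dominant frequency (Hz), period (s) and peak amplitude."""
    signal = np.asarray(signal, dtype=float) - np.mean(signal)
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    k = np.argmax(spectrum[1:]) + 1          # skip the DC bin
    freq = freqs[k]
    return {"frequency": freq,
            "period": 1.0 / freq if freq > 0 else float("inf"),
            "amplitude": np.max(np.abs(signal))}

def params_match(measured, preset, rel_tol=0.25):
    """Match when every waveform parameter is within a relative tolerance."""
    return all(abs(measured[k] - preset[k]) <= rel_tol * abs(preset[k])
               for k in ("frequency", "period", "amplitude"))

# Example: a 0.25 Hz preset (about 15 breaths per minute) sampled at 50 Hz.
fs = 50
t = np.arange(0, 40, 1.0 / fs)
preset = waveform_params(np.sin(2 * np.pi * 0.25 * t), fs)
measured = waveform_params(0.9 * np.sin(2 * np.pi * 0.27 * t), fs)
print(params_match(measured, preset))        # True for these tolerances
```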
In another embodiment, the microprocessor 400 is further configured to determine whether the respiration signal has a periodic signal, and if the respiration signal has the periodic signal, the microprocessor 400 intercepts the periodic signal in the respiration signal and compares the intercepted periodic signal with the preset signal.
Specifically, the breathing signal of a person in a calm state fluctuates to a certain extent and should have a periodic characteristic. In order to judge the breathing signal accurately, in this embodiment it is first determined whether a periodic signal exists in the breathing signal. On the premise that a periodic signal exists, the microprocessor 400 intercepts the periodic signal in the breathing signal and then compares the intercepted periodic signal with the preset signal. Because the periodic signal has well-defined amplitude, phase, period and other parameters, the feature comparison can be performed more accurately, which helps improve the accuracy of the comparison between the breathing signal and the preset signal and thus the accuracy of the wearing detection of the earphone 10.
Further, the microprocessor 400 segments the periodic signal according to the number of complete cycles it contains, compares all the segmented breathing signals with the preset signal, and determines that the breathing signal matches the preset signal when the comparison result is greater than a preset percentage; at this time, the earphone 10 is considered to be worn on the user's ear.
Furthermore, after the breathing signal is segmented, signal amplification, filtering and noise reduction are applied to it, and then all the segmented breathing signals are compared with the preset signal.
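A sketch of the cycle-based segmentation and percentage test described above, assuming an autocorrelation check for periodicity, a per-cycle correlation score of 0.7 and a preset percentage of 80%; these values and the helper names are illustrative assumptions rather than values from the application.

```python
import numpy as np

def find_period(signal, fs, min_corr=0.5):
    """Return the period in samples if the signal looks periodic, else None."""
    x = np.asarray(signal, dtype=float) - np.mean(signal)
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    ac /= ac[0] if ac[0] else 1.0
    # The first autocorrelation peak after lag 0 marks one full breathing cycle.
    for lag in range(int(0.5 * fs), len(ac) - 1):
        if ac[lag] > min_corr and ac[lag] >= ac[lag - 1] and ac[lag] >= ac[lag + 1]:
            return lag
    return None

def segments_match(signal, preset_cycle, period, preset_percentage=0.8):
    """Cut the signal into whole cycles and require most cycles to match."""
    x = np.asarray(signal, dtype=float)
    n_cycles = len(x) // period
    if n_cycles == 0:
        return False
    hits = 0
    for i in range(n_cycles):
        seg = x[i * period:(i + 1) * period]
        ref = np.interp(np.linspace(0, 1, len(seg)),
                        np.linspace(0, 1, len(preset_cycle)), preset_cycle)
        c = np.corrcoef(seg, ref)[0, 1]      # shape similarity of one cycle
        hits += c > 0.7
    return hits / n_cycles > preset_percentage
```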
In yet another embodiment, when the distance between the earphone 10 and the user's ear is greater than the preset distance and the speaker 200 is playing the first audio, the microprocessor 400 controls the speaker 200 to stop playing the first audio and controls the speaker 200 to play the second audio, which is used to indicate that the distance between the earphone 10 and the user's ear is greater than the preset distance.
The first audio is different from the second audio. The first audio is the audio currently being played by the speaker 200 of the earphone 10 and may be, for example, music. The second audio is used to prompt the user. When the distance between the earphone 10 and the user's ear is greater than the preset distance, the microprocessor 400 determines that there is a problem with how the earphone 10 is worn on the user's ear; at this time, the microprocessor 400 controls the speaker 200 to play the second audio, which indicates that the distance between the earphone 10 and the user's ear is greater than the preset distance and that the way the earphone 10 is worn needs to be adjusted.
With continued reference to fig. 3 and 4, the earphone 10 further includes a motion sensor 360, the motion sensor 360 is further configured to detect a motion trajectory curve of the earphone 10, the microprocessor 400 compares the motion trajectory curve with a preset trajectory curve, and the respiration sensor 350 is controlled to acquire a respiration signal of the user when the motion trajectory curve is matched with the preset trajectory curve.
Wherein, the motion sensor 360 is used for detecting the motion track information of the earphone 10.
In one embodiment, the characteristic parameters of the motion trajectory curve are compared with the characteristic parameters of a preset trajectory curve, wherein the characteristic parameters include a starting point a, an end point B and an inflection point C of the curve.
Specifically, three feature points are first identified: a start point A, an inflection point C, and an end point B, and the motion trajectory curve of the earphone 10 in space is treated as a planar curve. The motion trajectory curve of the earphone 10 in space forms a coordinate sequence whose elements are of the form (time, x-axis coordinate, y-axis coordinate). If the time when the user's finger touches the earphone 10 is t0 and the time when the user's finger leaves the earphone 10 is tn, then the coordinates of the start point are (t0, x0, y0) and the coordinates of the end point are (tn, xn, yn); the inflection point is then identified. Starting from time t0, a point is taken every 0.01 second, and from the second point onward, each point is compared with the previous point by its coordinate values. For convenience of description, let the coordinates of any two adjacent points be (xx1, yy1) for the former point and (xx2, yy2) for the latter point. The comparison of two adjacent points falls into the following three cases:
(1) If xx2 = xx1, no direction is recorded and the next point is identified; that is, whenever xx2 = xx1, step (1) is repeated for the next point. If xx2 > xx1 and yy2 > yy1, go to step (2); if xx2 < xx1 and yy2 < yy1, go to step (3).
(2) If xx2 > xx1 and yy2 > yy1, record direction 1 (representing that the motion trajectory curve of the earphone 10 in space is rising; the user is considered to be picking up the earphone 10) and continue to identify the next point. If xx2 > xx1 or xx2 = xx1, and yy2 > yy1 or yy2 = yy1, continue to identify the next point. If xx2 < xx1 and yy2 < yy1, record this point as the inflection point xx0 and record direction -1 (representing that the trajectory turns downward; the user is considered to be putting down the earphone 10), and continue to identify the next point: if xx2 < xx1 or xx2 = xx1, and yy2 < yy1 or yy2 = yy1, continue to identify the next point; if xx2 > xx1 and yy2 > yy1, the motion trajectory recognition ends, indicating that the user is picking up the earphone 10 at this time.
(3) If xx2 < xx1 and yy2 < yy1, record direction -1 (representing that the motion trajectory curve of the earphone 10 in space is falling; the user is considered to be putting down the earphone 10) and continue to identify the next point. If xx2 < xx1 or xx2 = xx1, and yy2 < yy1 or yy2 = yy1, continue to identify the next point. If xx2 > xx1 and yy2 > yy1, record this point as the inflection point xx0 and record direction 1 (representing that the trajectory turns upward; the user is considered to be picking up the earphone 10), and continue to identify the next point: if xx2 > xx1 or xx2 = xx1, and yy2 > yy1 or yy2 = yy1, continue to identify the next point; if xx2 < xx1 and yy2 < yy1, the motion trajectory recognition ends, indicating that the user is putting down the earphone 10 at this time.
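The following sketch implements a point-by-point comparison consistent with steps (1) to (3) as reconstructed above, operating on (time, x, y) samples taken every 0.01 second; the function name and return labels are illustrative and not part of the application, and mixed cases (x and y moving in opposite senses) are simply skipped.

```python
def classify_trajectory(points):
    """Return 'pick_up', 'put_down' or None from a list of (t, x, y) samples."""
    direction = 0                            # 1 = rising, -1 = falling, 0 = unknown
    inflection = None                        # (x, y) of the recorded inflection point
    for (_, x1, y1), (_, x2, y2) in zip(points, points[1:]):
        if x2 == x1:                         # step (1): no horizontal movement yet
            continue
        if x2 > x1 and y2 > y1:              # rising segment
            if direction == -1 and inflection is not None:
                return "pick_up"             # rose again after the downward turn
            if direction == -1:
                inflection = (x2, y2)        # step (3): trajectory turns upward
            direction = 1
        elif x2 < x1 and y2 < y1:            # falling segment
            if direction == 1 and inflection is not None:
                return "put_down"            # fell again after the upward turn
            if direction == 1:
                inflection = (x2, y2)        # step (2): trajectory turns downward
            direction = -1
        # mixed cases (x and y moving in opposite senses) are skipped
    return None

# Example: a rise, a dip and a second rise is reported as picking the earphone up.
pts = [(0.00, 0, 0), (0.01, 1, 1), (0.02, 2, 2), (0.03, 1, 1), (0.04, 2, 2)]
print(classify_trajectory(pts))              # pick_up
```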
Further, the microprocessor 400 compares the acquired motion trajectory curve of the earphone 10 with a preset trajectory curve, and determines whether the motion trajectory curve of the earphone 10 is matched with the preset trajectory curve, and under the condition that the motion trajectory curve of the earphone 10 is matched with the preset trajectory curve, the microprocessor 400 compares the breathing signal with a preset signal.
The preset track curve can be a standard curve for a user to take an article under normal conditions.
In an embodiment, the preset trajectory curve may be an ideal curve obtained through multiple tests, and the preset trajectory curve is a standard curve for taking an article when the arm of the user is bent normally.
In another embodiment, the preset trajectory curve may also be a graph downloaded from a database of authorities, thereby ensuring that the obtained preset trajectory curve is in compliance with the standard specification.
In yet another embodiment, the preset trajectory curve may be a trajectory curve obtained after training a neural network model. A large number of trajectory curves of users of different genders, ages and heights, and of article-taking with the left hand and with the right hand, are obtained and input into the neural network model; the input trajectory curves are processed algorithmically against the data in a database, and a result is output. The output result can be regarded as the trajectory curve of a normal person taking an article and complies with the relevant standards.
Further, the preset trajectory curve may be obtained by downloading in real time or may be obtained by storing in advance. When the preset trajectory curve is obtained in a real-time downloading manner, the obtained preset trajectory curve can be a newly issued curve, that is, the obtained preset trajectory curve is a curve conforming to the latest standard. When the preset track curve is obtained by pre-storing, the time consumed in the downloading process can be avoided, and the process of comparing the obtained motion track curve of the earphone 10 with the preset track curve can be rapidly completed, so that the wearing detection timeliness of the earphone 10 is improved.
With reference to fig. 5, the earphone 10 includes a first wearing portion 110 and a second wearing portion 120, the respiration sensor 350 includes a first respiration sensor 351 and a second respiration sensor 352, the first respiration sensor 351 is installed in the first wearing portion 110, the second respiration sensor 352 is installed in the second wearing portion 120, the first respiration sensor 351 collects a first respiration signal of a first user's respiration, the second respiration sensor 352 collects a second respiration signal of a second user's respiration, and the microprocessor 400 sends the second signal when the first respiration signal matches with the preset signal and the second respiration signal matches with the preset signal.
The microprocessor 400 collects the first breathing signal of the first user's breathing through the first wearing portion 110 and collects the second breathing signal of the second user's breathing through the second wearing portion 120, and the microprocessor 400 sends the second signal when the first breathing signal matches the preset signal and the second breathing signal matches the preset signal.
Specifically, in the present embodiment, the earphone 10 is a split-type earphone 10. The earphone 10 includes a first wearing portion 110 and a second wearing portion 120; the first wearing portion 110 and the second wearing portion 120 can be worn on the left and right ears of the same user at the same time, or can be worn on the ears of a first user and a second user, respectively. In this case, the microprocessor 400 includes a first processor 410 and a second processor 420, the first processor 410 being located in the first wearing portion 110 and the second processor 420 in the second wearing portion 120; the first processor 410 is used to acquire the first breathing signal of the first user's breathing, and the second processor 420 is used to acquire the second breathing signal of the second user's breathing. Under the condition that the first breathing signal matches the preset signal and the second breathing signal matches the preset signal, the microprocessor 400 determines that the first wearing portion 110 is worn on the ear of the first user and that the second wearing portion 120 is worn on the ear of the second user.
That is, only on the premise that the first wearing portion 110 is worn on the ear of the first user and the second wearing portion 120 is worn on the ear of the second user, the earphone 10 is determined to be worn on the ear of the user, and at this time, the speaker 200 is controlled to transmit the sound signal to the user, so that it is ensured that both the first user and the second user can hear the complete audio information from the speaker 200, and the first user and the second user can avoid missing the audio information from the speaker 200.
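A minimal sketch of the rule described above for the split earphone: the second signal is issued only when the breathing signals from both wearing portions match the preset signal. The helper matches_preset() stands in for whatever comparison the microprocessor 400 actually performs.

```python
def dual_wear_check(first_breath, second_breath, preset, matches_preset):
    """True only when both the first and the second wearing portion are worn."""
    return matches_preset(first_breath, preset) and \
           matches_preset(second_breath, preset)

# Usage: send_second_signal = dual_wear_check(sig_a, sig_b, preset, matches_preset)
```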
Referring to fig. 6, the earphone 10 includes a first communication module 371, a second communication module 372, a first positioning module 373, and a second positioning module 374, where the first communication module 371, the first positioning module 373, and the microprocessor 400 are all located in the first wearing part 110, the second communication module 372 and the second positioning module 374 are all located in the second wearing part 120, the first positioning module 373 is used to position the first wearing part 110, the second positioning module 374 is used to position the second wearing part 120, and the microprocessor 400 is used to obtain the positioning information of the first wearing part 110 and the positioning information of the second wearing part 120 through the first communication module 371 and the second communication module 372.
Further, the first wearing part 110 and the second wearing part 120 are communicatively connected through the first communication module 371 and the second communication module 372, the headset 10 includes a first memory 510 and a second memory 520, the microprocessor 400 includes a first processor 410 and a second processor 420, the first memory 510 and the first processor 410 are located in the first wearing part 110, the second memory 520 and the second processor 420 are located in the second wearing part 120, the first memory 510 stores a first map, the first positioning module 373 records current first position information of the first wearing part 110 in the first map, the second memory 520 stores a second map, the second positioning module 374 records current second position information of the second wearing part 120 in the second map, the first processor 410 locates the second wearable part 120 according to the obtained second map, and the second processor 420 locates the first wearable part 110 according to the obtained first map.
Specifically, in the present embodiment, the earphone 10 is a split type earphone 10. A communication connection may be established between the first wearing portion 110 and the second wearing portion 120, and the connection between the first wearing portion 110 and the second wearing portion 120 may be a bluetooth connection or a wifi connection.
Since the earphone 10 is a split-type earphone 10 and the first wearing portion 110 and the second wearing portion 120 are separate, the first wearing portion 110 and the second wearing portion 120 are easily lost. For this reason, in the present embodiment, the first memory 510 of the first wearing portion 110 stores a first map in which the current first position information of the first wearing portion 110 is recorded in real time, and the second memory 520 of the second wearing portion 120 stores a second map in which the current second position information of the second wearing portion 120 is recorded in real time. The first processor 410 may call the second map stored in the second memory 520 of the second wearing portion 120 to acquire the second position information of the second wearing portion 120, thereby locating the second wearing portion 120. Likewise, the second processor 420 may call the first map stored in the first memory 510 of the first wearing portion 110 to acquire the first position information of the first wearing portion 110, thereby locating the first wearing portion 110. That is, in the present embodiment, the first wearing portion 110 and the second wearing portion 120 have equal status and can locate each other, which helps solve the problem of losing the first wearing portion 110 or the second wearing portion 120.
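The following sketch illustrates the mutual-positioning idea under the assumption that each wearing portion writes its latest position into a local map store and exposes it to its peer over the Bluetooth or WiFi link; the class, field and method names are hypothetical.

```python
from dataclasses import dataclass
import time

@dataclass
class PositionRecord:
    timestamp: float
    x: float
    y: float

class WearingPortion:
    def __init__(self, name):
        self.name = name
        self.local_map = {}                  # plays the role of the first/second memory
        self.peer = None                     # set once the two portions pair up

    def record_position(self, x, y):
        """Positioning module writes the current position into the local map."""
        self.local_map["current"] = PositionRecord(time.time(), x, y)

    def locate_peer(self):
        """Processor reads the peer's map to locate the other wearing portion."""
        return self.peer.local_map.get("current") if self.peer else None

# Pair the two portions and let each one locate the other.
left, right = WearingPortion("first"), WearingPortion("second")
left.peer, right.peer = right, left
right.record_position(12.5, 3.0)
print(left.locate_peer())                    # position of the second portion
```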
The first memory 510 and the second memory 520 are used to store instructions and data, including but not limited to: map data, temporary data generated when controlling the operation of the headset 10, such as position data of the headset 10, communication data, etc. The first processor 410 may read stored instructions in the first memory 510 to perform corresponding functions, and the second processor 420 may read stored instructions in the second memory 520 to perform corresponding functions. The first and second memories 510 and 520 may include Random Access Memory (RAM) and Non-Volatile Memory (NVM). The nonvolatile Memory unit may include a Hard Disk Drive (Hard Disk Drive, HDD), a Solid State Drive (SSD), a Silicon Disk Drive (SDD), a Read-Only Memory unit (ROM), a Compact Disc Read-Only Memory (CD-ROM), a magnetic tape, a floppy Disk, an optical data storage device, and the like.
With reference to fig. 7, the earphone 10 further includes an image capturing sensor 330, the image capturing sensor 330 is further configured to capture a contour image of an ear of the user, when the image capturing sensor 330 captures the contour image of the ear of the user, the image capturing sensor 330 sends a third signal, and when the microprocessor 400 receives the third signal, the microprocessor 400 captures a breathing signal of the user, and compares the breathing signal with a preset signal.
Specifically, the image capturing sensor 330 is used to capture an image of the current environment of the earphone 10. The image capturing sensor 330 includes one or more of a two-dimensional camera and a three-dimensional camera. For example, a two-dimensional camera may be provided on the side of the earphone 10 facing the user's ear and capture images of that side, i.e., the contour of the user's ear.
As another example, a three-dimensional camera is provided on the side of the headset 10 facing the user's ear, and captures a three-dimensional image of the side of the headset 10 facing the user's ear. The three-dimensional image includes information about the distance from the headset 10 to the user's ear. A stereo camera module or a depth sensing module may be employed as the three-dimensional camera.
Referring to fig. 8, the earphone 10 further includes an environment sensor 380, the environment sensor 380 is further configured to detect a noise parameter of an environment where the earphone 10 is located, and when the noise parameter is greater than a preset parameter, the environment sensor 380 sends a fourth signal, and the microprocessor 400 controls the speaker 200 to increase the volume according to the fourth signal.
Specifically, the environment sensor 380 includes an acoustic sensing element 340, and the acoustic sensing element 340 is configured to sense the intensity of external noise and generate a noise parameter corresponding to the intensity of the noise. When the noise parameter is greater than the preset parameter, the microprocessor 400 controls the speaker 200 to amplify the sound signal, so that the volume output to the user is increased.
That is to say, the earphone 10 in the present embodiment has a sound amplification function and adaptively adjusts the sound output from the speaker 200 to the user by detecting the noise parameter of the external environment. When the noise in the external environment is too loud, the sound output from the speaker 200 to the user's ear is appropriately increased, and when the external environment is relatively quiet, the sound output is appropriately decreased. This gives the earphone 10 a certain degree of intelligence, conforms to ergonomics, and enriches the application functions of the earphone 10.
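A sketch of the noise-adaptive volume behaviour, assuming a numeric noise parameter and simple step adjustments; the thresholds, step size and limits are placeholders rather than values from the application.

```python
PRESET_NOISE = 65.0        # hypothetical threshold, e.g. dB SPL
QUIET_NOISE = 40.0         # below this the environment is treated as quiet

def adjust_volume(current_volume, noise_parameter, step=5, lo=10, hi=100):
    """Return the new speaker volume for the measured ambient noise."""
    if noise_parameter > PRESET_NOISE:       # fourth signal: too noisy, turn up
        return min(current_volume + step, hi)
    if noise_parameter < QUIET_NOISE:        # quiet environment, turn down
        return max(current_volume - step, lo)
    return current_volume                    # otherwise leave the volume alone

print(adjust_volume(50, 80))                 # 55: noisy street
print(adjust_volume(50, 30))                 # 45: quiet room
```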
With continued reference to fig. 9, the present embodiment further provides an earphone 10. The earphone 10 includes a distance sensor 300, a heart rate sensor 390 and a microprocessor 400, where the distance sensor 300 is configured to detect whether the earphone 10 is worn on the user's ear and sends a feedback signal when the earphone 10 is worn on the user's ear; the heart rate sensor 390 is configured to collect heart rate parameters of the user, and when the microprocessor 400 receives the feedback signal, the microprocessor 400 controls the heart rate sensor 390 to collect the heart rate parameters of the user and determines the health condition of the user according to the heart rate parameters.
The earphone 10 is a pair of conversion units that receives an electrical signal from a media player or receiver and converts it into audible sound waves through the speaker 200 near the ear. The media player or receiver may be an electronic device, and the electronic device may be any device with an audio playback function, for example, smart devices with an audio playback function such as a tablet computer, a mobile phone, an electronic reader, a remote controller, a Personal Computer (PC), a notebook computer, a vehicle-mounted device, a network television or a wearable device.
Reference is made to the foregoing description for the distance sensor 300 and the microprocessor 400, which are not repeated herein.
In this embodiment, whether the earphone 10 is worn on the user's ear is detected by the distance sensor 300, and when the earphone 10 is worn on the user's ear, the microprocessor 400 further collects the user's heart rate parameters and then determines the user's health condition according to the heart rate parameters. Monitoring the heart rate parameters only on the premise that the earphone 10 is worn on the user's ear makes the acquired heart rate parameters more accurate and improves the accuracy of judging the user's health condition. That is, the earphone 10 of the present embodiment has a diagnostic function and can monitor the health condition of the user while worn on the user's ear, thereby enriching the application functions of the earphone 10.
In addition, the microprocessor 400 may further obtain the user's breathing signal from the acquired heart rate parameters, compare the breathing signal with the preset signal, and, when the breathing signal matches the preset signal, determine that the earphone 10 is worn on the user's ear. That is, in addition to the diagnostic function, the earphone 10 also has a wearing detection function.
With continued reference to fig. 10, the microprocessor 400 sends the collected heart rate parameters of the user to the terminal device 20 in communication with the headset 10, and the heart rate parameters are displayed on the terminal device 20.
The terminal device 20 may be any device having communication and display functions, for example, smart devices with communication and display functions such as a tablet computer, a mobile phone, an electronic reader, a remote controller, a Personal Computer (PC), a notebook computer, a vehicle-mounted device, a network television or a wearable device.
The terminal device 20 may remotely control the headset 10, and the communication between the terminal device 20 and the headset 10 may be wireless communication, specifically, any one of the following: bluetooth communication, WiFi communication, Zigbee communication, mobile communication, and the like. The terminal device 20 has a display function, and the terminal device 20 displays the heart rate parameters sent by the earphone 10 for the user to view.
Further, the microprocessor 400 is configured to receive user attribute information from the terminal device 20, and compare the collected heart rate parameter of the user with a preset heart rate parameter according to the user attribute information to determine the health condition of the user, where different user attribute information corresponds to different preset heart rate parameters, and the user attribute information includes any one or more of a gender of the user, an age of the user, a height of the user, and a weight of the user.
Specifically, because attribute information such as the user's gender, age, height and weight differs from user to user, the user's attribute information needs to be determined before the collected heart rate parameters are compared; the preset heart rate parameters corresponding to different user attribute information differ slightly. Collecting the user's heart rate parameters on the basis of the user attribute information and then comparing the collected heart rate parameters with the preset parameters can improve the accuracy of the comparison and thus the accuracy of judging the user's health condition.
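As an illustration of this attribute-dependent comparison, the sketch below looks up a preset heart-rate range from the user attribute information and classifies the collected parameter against it; the ranges and offsets are placeholders only, not medical reference values or values from the application.

```python
def preset_heart_rate_range(age, gender):
    """Very coarse resting heart-rate range (bpm) keyed by user attributes."""
    lo, hi = 60, 100                         # generic adult resting range (placeholder)
    if age < 12:
        lo, hi = 70, 120                     # children rest at higher rates (placeholder)
    elif age > 65:
        lo, hi = 55, 95                      # placeholder range for older users
    if gender == "female":
        lo, hi = lo + 2, hi + 2              # placeholder offset only
    return lo, hi

def assess_health(heart_rate_bpm, age, gender):
    """Compare the collected heart-rate parameter with the preset range."""
    lo, hi = preset_heart_rate_range(age, gender)
    if heart_rate_bpm < lo:
        return "below preset range"
    if heart_rate_bpm > hi:
        return "above preset range"
    return "within preset range"

print(assess_health(72, age=30, gender="male"))    # within preset range
```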
With continued reference to fig. 11, the present embodiment further provides a method for controlling the earphone 10, which includes, but is not limited to, S100, S200, and S300, and the following description is provided with respect to S100, S200, and S300.
S100: and detecting whether a communication connection is established with the target terminal.
S200: detecting whether the earphone is worn on the ear of the user under the condition that a communication connection with the target terminal has been established.
S300: and in the case of being worn on the ear of the user, receiving the electric signal from the target terminal, and converting the electric signal into a sound signal for output.
Specifically, in the present embodiment, it is determined whether or not the earphone 10 is in communication with the target terminal, and when the earphone 10 is in communication with the target terminal, it is further detected whether or not the earphone 10 is worn on the ear of the user. When the earphone 10 is worn on the ear of a user, the electric signal sent by the target terminal is received, and then the received electric signal is converted into a sound signal on the earphone 10 to be output to the user, so that the sound on the target terminal can be completely and clearly transmitted to the user, and the problem of omission of the sound signal is prevented.
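The sketch below strings steps S100 to S300 together, assuming hypothetical hooks for the connection check, the wearing check and the electrical-to-sound conversion; it is an illustration of the control flow, not the application's implementation.

```python
def headset_control_loop(is_connected, is_worn, receive_electric_signal,
                         to_sound, speaker_output):
    # S100: detect whether a communication connection with the target terminal exists.
    if not is_connected():
        return "no connection"
    # S200: with the connection established, detect whether the earphone is worn.
    if not is_worn():
        return "not worn"
    # S300: worn and connected, so receive the electrical signal and play it back.
    speaker_output(to_sound(receive_electric_signal()))
    return "playing"

# Toy run with stub hooks standing in for the real hardware and radio layers.
print(headset_control_loop(
    is_connected=lambda: True,
    is_worn=lambda: True,
    receive_electric_signal=lambda: [0.0, 0.2, -0.1],
    to_sound=lambda samples: samples,        # identity "conversion" for the demo
    speaker_output=print))                   # prints the "sound", then 'playing'
```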
With continued reference to fig. 12, in one embodiment, the detecting whether the ear is worn on the user includes, but is not limited to, S210 and S220, which are described below with respect to S210 and S220.
S210: and under the condition that the distance between the earphone 10 and the ear of the user is smaller than or equal to the preset distance, acquiring the respiratory signal of the user, and comparing the respiratory signal with the preset signal.
S220: and under the condition that the breathing signal is matched with the preset signal, judging that the earphone is worn on the ear of the user.
When the distance between the earphone 10 and the user's ear is less than or equal to the preset distance, the user's breathing signal is further collected and compared with the preset signal. When the user's breathing signal matches the preset signal, the earphone 10 is considered to be worn on the user's ear, and at this time, sound can be transmitted to the user through the speaker 200 of the earphone 10. The value range of the preset distance may be 0-10 mm.
The user's breathing signal may be acquired in a single acquisition or in multiple acquisitions. The collection of the breathing signal can be carried out synchronously with its comparison against the preset signal, that is, the breathing signal is compared with the preset signal while it is being collected, so that whether the earphone 10 is worn on the user's ear is determined in real time, which improves the timeliness of wearing detection.
The preset signal may be a breathing waveform graph of the user under ideal conditions.
In an embodiment, the preset signal may be an ideal waveform obtained through multiple tests, and the preset signal refers to a waveform obtained when the user is in a stable state and has a good health condition.
In another embodiment, the preset signal may also be a waveform downloaded from a database of authorities to ensure that the acquired preset signal is in compliance with the standard specification.
In yet another embodiment, the preset signal may be obtained after training a neural network model. A large number of breathing signals of users of different genders, ages, heights and weights are obtained and input into the neural network model; the input breathing signals are processed algorithmically against the data in a database, and a result is output. The output result can be regarded as an ideal preset signal that complies with the relevant standards.
Further, the preset signal may be obtained by downloading in real time, or may be obtained by storing in advance. When the preset signal is acquired in a real-time downloading manner, the acquired preset signal can be a newly issued waveform diagram, that is, the acquired preset signal is a waveform diagram conforming to the latest standard. When the preset signal is obtained by pre-storing, the time consumed in the downloading process can be avoided, and the process of comparing the preset signal with the acquired breathing signal can be rapidly completed, so that the wearing detection timeliness of the earphone 10 is improved.
The user's breathing signal is compared with the preset signal; the comparison indices can be the frequency, amplitude, period and other characteristics of the two waveform graphs. The comparison result is then fed back to the microprocessor 400 in real time, and whether the earphone 10 is worn on the user's ear is indicated according to the comparison result.
The embodiment of the present application provides a control method for the earphone 10. The distance between the earphone 10 and the user's ear is first detected. When the distance between the earphone 10 and the user's ear is less than or equal to the preset distance, the user's breathing signal is collected and compared with the preset signal. When the breathing signal matches the preset signal, it is determined that the earphone 10 is worn on the user's ear, so that sound can be output through the speaker 200 of the earphone 10 when it is worn, improving the application functions of the earphone 10. Moreover, with the double check of detecting both the distance between the earphone 10 and the user's ear and the user's breathing signal, the method provided by the embodiment of the present application can reduce the risk of misjudgment compared with only detecting the distance between the earphone 10 and the user's ear.
The present application also provides a computer readable storage medium storing a computer program for headset control, wherein the computer program for headset control when executed performs: the earphone control method provided by any one of the above embodiments.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of any of the methods as set forth in the above-described headset control method embodiments. The computer program product may be a software installation package and the computer comprises a headset.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, a division of a unit is merely a logical division, and an actual implementation may have another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of some interfaces, devices or units, and may be an electric or other form.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable memory. Based on such understanding, the technical solution of the present application may be substantially implemented or a part of or all or part of the technical solution contributing to the prior art may be embodied in the form of a software product stored in a memory, and including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method of the embodiments of the present application. And the aforementioned memory comprises: various media capable of storing program codes, such as a usb disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing the associated hardware; the program may be stored in a computer-readable memory, which may include a flash disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, and the like.
The foregoing detailed description of the embodiments of the present application has been presented to illustrate the principles and implementations of the present application; the above description of the embodiments is provided only to help understand the method and core concept of the present application. Meanwhile, a person skilled in the art may, based on the idea of the present application, make changes to the specific embodiments and the scope of application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (15)

1. An earphone, characterized in that the earphone comprises:
a speaker for emitting sound;
a distance sensor for detecting a distance between the earphone and an ear of a user, wherein the distance sensor sends a first signal when the distance between the earphone and the ear of the user is less than or equal to a preset distance;
a respiration sensor for acquiring a respiration signal of the user; and
a microprocessor, wherein, when the microprocessor receives the first signal, the microprocessor compares the respiration signal with a preset signal, and when the respiration signal matches the preset signal, the microprocessor sends a second signal, the second signal being used to indicate that the earphone is worn on the ear of the user.
2. The earphone of claim 1, wherein the microprocessor is configured to compare waveform parameters of the respiration signal with waveform parameters of the preset signal, wherein the waveform parameters include at least one of frequency, amplitude, and period.
3. The earphone according to claim 1 or 2, wherein the microprocessor is further configured to determine whether the respiration signal contains a periodic signal, and, when the respiration signal contains a periodic signal, to extract the periodic signal from the respiration signal and compare the extracted periodic signal with the preset signal.
4. The earphone of claim 1, further comprising a motion sensor, wherein the motion sensor is configured to detect a motion trajectory curve of the earphone, and the microprocessor compares the motion trajectory curve with a preset trajectory curve and controls the respiration sensor to collect the respiration signal of the user when the motion trajectory curve matches the preset trajectory curve.
5. The earphone according to claim 4, wherein the microprocessor compares characteristic parameters of the motion trajectory curve with characteristic parameters of the preset trajectory curve, wherein the characteristic parameters comprise a start point, an end point, and an inflection point of the curve.
6. The earphone of claim 1, wherein the earphone comprises a first wearable portion and a second wearable portion, and the respiration sensor comprises a first respiration sensor mounted in the first wearable portion and a second respiration sensor mounted in the second wearable portion; the first respiration sensor collects a first respiration signal of a first user's breathing, the second respiration sensor collects a second respiration signal of a second user's breathing, and the microprocessor sends the second signal when the first respiration signal matches the preset signal and the second respiration signal matches the preset signal.
7. The earphone of claim 6, wherein the earphone comprises a first communication module, a second communication module, a first positioning module, and a second positioning module; the first communication module, the first positioning module, and the microprocessor are located in the first wearable portion, and the second communication module and the second positioning module are located in the second wearable portion; the first positioning module is configured to position the first wearable portion, the second positioning module is configured to position the second wearable portion, and the microprocessor is configured to obtain positioning information of the first wearable portion and the second wearable portion through the first communication module and the second communication module.
8. The earphone of claim 1, further comprising an image capture sensor, wherein the image capture sensor is configured to capture a contour image of the ear of the user and sends a third signal when it captures the contour image of the ear of the user, and wherein, when the microprocessor receives the third signal, the microprocessor acquires the respiration signal of the user and compares the respiration signal with the preset signal.
9. The earphone of claim 1, wherein, when the distance between the earphone and the ear of the user is greater than the preset distance and the speaker is playing first audio, the microprocessor controls the speaker to stop playing the first audio and to play second audio indicating that the distance between the earphone and the ear of the user is greater than the preset distance.
10. The earphone of claim 1, further comprising an environment sensor, wherein the environment sensor is configured to detect a noise parameter of the environment in which the earphone is located and sends a fourth signal when the noise parameter is greater than a preset parameter, and the microprocessor controls the speaker to increase the volume according to the fourth signal.
11. An earphone, characterized in that the earphone comprises:
a distance sensor for detecting whether the earphone is worn on an ear of a user, wherein the distance sensor sends a feedback signal when the earphone is worn on the ear of the user;
a heart rate sensor for collecting heart rate parameters of the user; and
a microprocessor, wherein, when the microprocessor receives the feedback signal, the microprocessor controls the heart rate sensor to collect the heart rate parameters of the user and determines the health condition of the user according to the heart rate parameters.
12. The earphone of claim 11, wherein the microprocessor transmits the collected heart rate parameters of the user to a terminal device in communication with the earphone, and the heart rate parameters are displayed on the terminal device.
13. The earphone of claim 11, wherein the microprocessor is configured to receive user attribute information from a terminal device and to compare the collected heart rate parameters of the user with preset heart rate parameters according to the user attribute information to determine the health condition of the user, wherein different user attribute information corresponds to different preset heart rate parameters, and the user attribute information includes any one or more of the user's gender, age, height, and weight.
14. An earphone control method, applied to an earphone, the method comprising:
detecting whether a communication connection is established with a target terminal;
in the case that a communication connection is established with the target terminal, detecting whether the earphone is worn on an ear of a user;
and in the case that the earphone is worn on the ear of the user, receiving an electrical signal from the target terminal and converting the electrical signal into a sound signal for output.
15. The earphone control method according to claim 14, wherein detecting whether the earphone is worn on the ear of the user comprises:
collecting a respiration signal of the user when the distance between the earphone and the ear of the user is less than or equal to a preset distance, and comparing the respiration signal with a preset signal;
and determining that the earphone is worn on the ear of the user when the respiration signal matches the preset signal.
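
For illustration only, the following Python sketch shows one way the wear-detection flow recited in claims 1-3 and 15 could be realized in software: the respiration comparison runs only after the distance sensor reports a distance at or below the preset distance, and the comparison uses waveform parameters named in claim 2 (frequency and amplitude here; period is the reciprocal of frequency). The sampling rate, preset values, tolerances, and all function names are assumptions of this sketch, not features defined by the claims.

```python
# Illustrative sketch only; thresholds, sampling rate, and names are assumptions.
import numpy as np

SAMPLE_RATE_HZ = 50        # assumed respiration-sensor sampling rate
PRESET_FREQ_HZ = 0.25      # assumed "preset signal": about 15 breaths per minute
PRESET_AMPLITUDE = 1.0     # assumed nominal peak-to-peak amplitude (sensor units)
FREQ_TOLERANCE_HZ = 0.15   # assumed matching tolerances
AMPLITUDE_TOLERANCE = 0.6

def waveform_parameters(signal, fs=SAMPLE_RATE_HZ):
    """Estimate frequency, peak-to-peak amplitude, and period of a respiration signal."""
    signal = np.asarray(signal, dtype=float)
    signal = signal - signal.mean()             # remove DC offset
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    spectrum[0] = 0.0                           # ignore any residual DC component
    frequency = freqs[np.argmax(spectrum)]
    amplitude = signal.max() - signal.min()
    period = 1.0 / frequency if frequency > 0 else float("inf")
    return frequency, amplitude, period

def respiration_matches_preset(signal):
    """Claim-2-style comparison of waveform parameters against the preset signal."""
    frequency, amplitude, _ = waveform_parameters(signal)
    return (abs(frequency - PRESET_FREQ_HZ) <= FREQ_TOLERANCE_HZ
            and abs(amplitude - PRESET_AMPLITUDE) <= AMPLITUDE_TOLERANCE)

def wear_detected(distance_m, preset_distance_m, respiration_signal):
    """Claims 1 and 15: run the respiration check only when the distance condition holds."""
    if distance_m > preset_distance_m:
        return False                            # distance too large: no first signal, no comparison
    return respiration_matches_preset(respiration_signal)

if __name__ == "__main__":
    t = np.arange(0, 20, 1.0 / SAMPLE_RATE_HZ)
    breathing = 0.5 * np.sin(2 * np.pi * 0.25 * t)   # synthetic breath at 15 breaths per minute
    print(wear_detected(distance_m=0.002, preset_distance_m=0.005,
                        respiration_signal=breathing))   # expected: True
```

Claim 3's periodicity check could, for example, be placed in front of respiration_matches_preset by verifying that the signal's autocorrelation shows a clear secondary peak before extracting one period for comparison; the claims do not prescribe a particular method.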
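
Claims 4 and 5 gate respiration sampling on a motion-trajectory match defined by characteristic parameters (start point, end point, inflection point). A minimal sketch, assuming a one-dimensional sampled curve and an absolute-difference tolerance, neither of which is prescribed by the claims:

```python
# Illustrative sketch only; the curve representation and tolerance are assumptions.
import numpy as np

def characteristic_parameters(curve):
    """Start point, end point, and first inflection point of a sampled 1-D curve."""
    curve = np.asarray(curve, dtype=float)
    second_derivative = np.diff(curve, n=2)
    sign_changes = np.where(np.diff(np.sign(second_derivative)) != 0)[0]
    inflection = curve[sign_changes[0] + 1] if sign_changes.size else None
    return curve[0], curve[-1], inflection

def trajectory_matches(motion_curve, preset_curve, tolerance=0.2):
    """Compare the claimed characteristic parameters within an assumed tolerance."""
    m_start, m_end, m_inflection = characteristic_parameters(motion_curve)
    p_start, p_end, p_inflection = characteristic_parameters(preset_curve)
    if (m_inflection is None) != (p_inflection is None):
        return False                  # one curve has an inflection point, the other does not
    pairs = [(m_start, p_start), (m_end, p_end)]
    if m_inflection is not None:
        pairs.append((m_inflection, p_inflection))
    return all(abs(a - b) <= tolerance for a, b in pairs)
```

An accelerometer trajectory is normally three-dimensional; the same per-parameter comparison could be applied to each axis, with the respiration sensor enabled only when all axes match.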
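
For claims 11-13, the health determination depends on preset heart-rate parameters that differ by user attribute information. The sketch below assumes a simple age-banded lookup and illustrative labels; the actual attribute-to-parameter mapping and thresholds are not specified by the claims.

```python
# Illustrative sketch only; the ranges, age bands, and labels are assumptions.
from dataclasses import dataclass

@dataclass
class UserAttributes:
    gender: str
    age: int
    # Height and weight could also feed the lookup; they are omitted here for brevity.

# Assumed preset resting heart-rate ranges in beats per minute, keyed by age band.
PRESET_RANGES = {
    "child": (70, 120),
    "adult": (60, 100),
}

def preset_range(attributes: UserAttributes):
    """Select the preset heart-rate parameters for the given user attribute information."""
    return PRESET_RANGES["child"] if attributes.age < 12 else PRESET_RANGES["adult"]

def health_condition(heart_rate_bpm: float, attributes: UserAttributes) -> str:
    """Compare the collected heart-rate parameter with the attribute-dependent preset range."""
    low, high = preset_range(attributes)
    if heart_rate_bpm < low:
        return "below preset range"
    if heart_rate_bpm > high:
        return "above preset range"
    return "within preset range"

if __name__ == "__main__":
    print(health_condition(72, UserAttributes(gender="female", age=30)))  # within preset range
```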
CN201910621183.1A 2019-07-10 2019-07-10 Earphone and earphone control method Pending CN112218196A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910621183.1A CN112218196A (en) 2019-07-10 2019-07-10 Earphone and earphone control method
PCT/CN2020/093626 WO2021004194A1 (en) 2019-07-10 2020-05-30 Earphone and earphone control method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910621183.1A CN112218196A (en) 2019-07-10 2019-07-10 Earphone and earphone control method

Publications (1)

Publication Number Publication Date
CN112218196A (en) 2021-01-12

Family

ID=74048228

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910621183.1A Pending CN112218196A (en) 2019-07-10 2019-07-10 Earphone and earphone control method

Country Status (2)

Country Link
CN (1) CN112218196A (en)
WO (1) WO2021004194A1 (en)

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108886653B (en) * 2016-04-20 2020-07-28 华为技术有限公司 Earphone sound channel control method, related equipment and system
CN109151669B (en) * 2018-09-30 2021-06-15 Oppo广东移动通信有限公司 Earphone control method, earphone control device, electronic equipment and storage medium
CN109743667B (en) * 2018-12-28 2022-06-10 歌尔科技有限公司 Earphone wearing detection method and earphone

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102316394A (en) * 2010-06-30 2012-01-11 索尼爱立信移动通讯有限公司 Bluetooth equipment and the audio frequency playing method that utilizes this bluetooth equipment
CN105491469A (en) * 2014-09-15 2016-04-13 Tcl集团股份有限公司 Method and system for controlling audio output mode based on wearing earphone state
US20180011682A1 (en) * 2016-07-06 2018-01-11 Bragi GmbH Variable computing engine for interactive media based upon user biometrics
US20180027320A1 (en) * 2016-07-20 2018-01-25 Shenzhen GOODIX Technology Co., Ltd. Headphone and interaction system
CN108076405A (en) * 2016-11-10 2018-05-25 三星电子株式会社 Electronic equipment and its operating method
CN206370927U (en) * 2016-12-01 2017-08-01 佳禾智能科技股份有限公司 A kind of intelligent wireless earphone
CN207573591U (en) * 2017-12-19 2018-07-03 深圳市百泰实业股份有限公司 A kind of ornaments bluetooth headset and earphone system
CN108810693A (en) * 2018-05-28 2018-11-13 Oppo广东移动通信有限公司 Apparatus control method and Related product
CN108810706A (en) * 2018-07-12 2018-11-13 中新工程技术研究院有限公司 A kind of control method, device, computer equipment and the storage medium of earphone sound channel
CN108966087A (en) * 2018-07-26 2018-12-07 歌尔科技有限公司 A kind of wear condition detection method, device and the wireless headset of wireless headset
CN109121059A (en) * 2018-07-26 2019-01-01 Oppo广东移动通信有限公司 Loudspeaker plug-hole detection method and Related product
CN109168102A (en) * 2018-11-22 2019-01-08 歌尔科技有限公司 A kind of earphone wearing state detection method, device and earphone
CN109511034A (en) * 2018-12-13 2019-03-22 歌尔股份有限公司 A kind of wearing detection method, device and the earphone of earphone

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112866876A (en) * 2021-03-15 2021-05-28 西安Tcl软件开发有限公司 In-ear detection method of TWS headset, and computer-readable storage medium
CN112995881A (en) * 2021-05-08 2021-06-18 恒玄科技(北京)有限公司 Earphone, earphone in and out detection method and storage medium of earphone
CN112995881B (en) * 2021-05-08 2021-08-20 恒玄科技(北京)有限公司 Earphone, earphone in and out detection method and storage medium of earphone
CN113573226A (en) * 2021-05-08 2021-10-29 恒玄科技(北京)有限公司 Earphone, earphone in and out detection method and storage medium of earphone

Also Published As

Publication number Publication date
WO2021004194A1 (en) 2021-01-14

Similar Documents

Publication Publication Date Title
US11166104B2 (en) Detecting use of a wearable device
JP6365939B2 (en) Sleep assist system
CN109381165B (en) Skin detection method and mobile terminal
US20180271710A1 (en) Wireless earpiece for tinnitus therapy
US20200163561A1 (en) Electronic device for obtaining blood pressure value using pulse wave velocity algorithm and method for obtaining blood pressure value
US10747337B2 (en) Mechanical detection of a touch movement using a sensor and a special surface pattern system and method
US20130335226A1 (en) Earphone-Based Game Controller and Health Monitor
JP6742380B2 (en) Electronic device
KR20190061681A (en) Electronic device operating in asscociated state with external audio device based on biometric information and method therefor
WO2015112740A2 (en) Methods and systems for snore detection and correction
CN108874130B (en) Play control method and related product
KR20190021113A (en) Electronic device and method for measuring stress thereof
US20180120930A1 (en) Use of Body-Area Network (BAN) as a Kinetic User Interface (KUI)
WO2021004194A1 (en) Earphone and earphone control method
JP2017079807A (en) Biological sensor, biological data collection terminal, biological data collection system, and biological data collection method
CN109238306A (en) Step counting data verification method, device, storage medium and terminal based on wearable device
CN113646027B (en) Electronic device and method for providing information for decompression by the electronic device
JP7401634B2 (en) Server device, program and method
KR20200137460A (en) Electronic device for providing exercise information according to exercise environment
CN115052221A (en) Earphone control method and earphone
CN204971260U (en) Head -mounted electronic equipment
KR20210052874A (en) An electronic device for recognizing gesture of user using a plurality of sensor signals
CN112542030A (en) Intelligent wearable device, method and system for detecting gesture and storage medium
CN108632713B (en) Volume control method and device, storage medium and terminal equipment
CN108922224B (en) Position prompting method and related product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210112