CN110267144B - Information processing method and apparatus, and storage medium


Info

Publication number
CN110267144B
Authority
CN
China
Prior art keywords
current
matching result
ear canal
sound
earphone
Prior art date
Legal status
Active
Application number
CN201910577886.9A
Other languages
Chinese (zh)
Other versions
CN110267144A (en)
Inventor
李应伟
李乐
宋肃
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201910577886.9A priority Critical patent/CN110267144B/en
Publication of CN110267144A publication Critical patent/CN110267144A/en
Application granted granted Critical
Publication of CN110267144B publication Critical patent/CN110267144B/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00Details of transducers, loudspeakers or microphones
    • H04R1/10Earpieces; Attachments therefor ; Earphones; Monophonic headphones
    • H04R1/1091Details not provided for in groups H04R1/1008 - H04R1/1083
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2201/00Details of transducers, loudspeakers or microphones covered by H04R1/00 but not provided for in any of its subgroups
    • H04R2201/10Details of earpieces, attachments therefor, earphones or monophonic headphones covered by H04R1/10 but not provided for in any of its subgroups

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Headphones And Earphones (AREA)
  • Circuit For Audible Band Transducer (AREA)

Abstract

Embodiments of the present application disclose an information processing method and apparatus, and a storage medium. The method includes: when the earphone is in a wearing state, acquiring current ear canal feature information through a sound pressure sensor in the earphone; when an ear canal feature set exists, matching the current ear canal feature information with the ear canal feature set to obtain a matching result, where the ear canal feature set represents ear canal feature information acquired through the sound pressure sensor during historical wearing; and determining the current usage mode corresponding to the matching result from a preset correspondence between matching results and usage modes, where the preset correspondence represents a one-to-one mapping between wearer types and sound playing modes.

Description

Information processing method and apparatus, and storage medium
Technical Field
The present disclosure relates to communications technologies, and in particular, to an information processing method and apparatus, and a storage medium.
Background
At present, when a user listens to sound from a terminal with an earphone, the user plugs the earphone into the corresponding interface of the terminal to hear the sound output by the terminal, and can also control the volume or the playing of the sound through keys on the earphone. To make earphone use more personalized, it has been proposed that the earphone send voiceprint information and/or fingerprint information to the terminal, and that the terminal match the voiceprint information against voiceprint data prestored in the terminal device and/or match the fingerprint information against prestored fingerprint data; if a preset matching threshold is reached, the voiceprint information and/or fingerprint information is judged to be valid data, and the user is then permitted to control the sound through the earphone keys.
However, this implementation requires voiceprint data and/or fingerprint information to be recorded on the terminal in advance, which reduces the intelligence of personalized earphone use.
Disclosure of Invention
The present application provides an information processing method and apparatus, and a storage medium, which can improve the intelligence of personalized earphone use.
The technical scheme of the application is realized as follows:
an embodiment of the present application provides an information processing method, including:
when the earphone is in a wearing state, acquiring current ear canal characteristic information through a sound pressure sensor in the earphone;
when the ear canal feature set exists, matching the current ear canal feature information with the ear canal feature set to obtain a matching result; the ear canal feature set represents ear canal feature information acquired through the sound pressure sensor in a historical wearing process;
determining the current usage mode corresponding to the matching result from a preset correspondence between matching results and usage modes; the preset correspondence represents a one-to-one mapping between wearer types and sound playing modes.
An embodiment of the present application provides an information processing apparatus, the apparatus including: an acquisition unit, a matching unit, and a control unit; wherein:
the acquisition unit is used for acquiring the current ear canal characteristic information through a sound pressure sensor in the earphone when the earphone is in a wearing state;
the matching unit is used for matching the current ear canal feature information with the ear canal feature set to obtain a matching result when the ear canal feature set exists; the ear canal feature set represents ear canal feature information acquired through the sound pressure sensor in a historical wearing process;
the control unit is used for determining the current usage mode corresponding to the matching result from a preset correspondence between matching results and usage modes; the preset correspondence represents a one-to-one mapping between wearer types and sound playing modes.
An embodiment of the present application provides an information processing apparatus, including: a processor, a memory and a communication bus, the memory communicating with the processor through the communication bus, the memory storing one or more programs executable by the processor, the processor performing any of the information processing methods as described above when the one or more programs are executed.
Embodiments of the present application provide a computer-readable storage medium storing one or more programs, which are executable by one or more processors to implement any one of the information processing methods described above.
Embodiments of the present application provide an information processing method and apparatus, and a storage medium. The method includes: acquiring current ear canal feature information through a sound pressure sensor in an earphone, acquiring an ear canal feature set, and matching the current ear canal feature information with the ear canal feature set to obtain a matching result; and determining the current usage mode corresponding to the matching result from a preset correspondence between matching results and usage modes. With this technical solution, the ear canal feature set represents ear canal feature information acquired during historical wearing, so the matching result between the current ear canal feature information and the ear canal feature set can be obtained without manually entering information in advance. The matching result is then used to determine the current usage mode from the preset correspondence between matching results and usage modes; because this correspondence represents a one-to-one mapping between wearer types and sound playing modes, the current usage mode is the sound playing mode corresponding to the type of the earphone's current wearer. In short, the sound playing mode corresponding to the current wearer type can be obtained without manual pre-entry, which improves the intelligence of personalized earphone use.
Drawings
Fig. 1 is a first schematic diagram of an information processing apparatus according to an embodiment of the present disclosure;
fig. 2 is a first flowchart of an information processing method according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of an earphone according to an embodiment of the present disclosure;
fig. 4 is a second flowchart of an information processing method according to an embodiment of the present application;
fig. 5 is a third flowchart of an information processing method according to an embodiment of the present application;
fig. 6 is a fourth flowchart of an information processing method according to an embodiment of the present application;
fig. 7 is a second schematic diagram of an information processing apparatus according to an embodiment of the present application;
fig. 8 is a third schematic diagram of an information processing apparatus according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
Referring to fig. 1, fig. 1 is a schematic structural diagram of an information processing apparatus 1 for implementing an embodiment of the present application, where the information processing apparatus 1 is a terminal capable of playing audio, such as a mobile phone, a notebook computer, a tablet computer (Pad), a desktop computer, and the like. The information processing apparatus shown in fig. 1 is merely an example, and should not bring any limitation to the functions and the range of use of the embodiments of the present application.
As shown in fig. 1, the information processing apparatus 1 may include a processing unit (e.g., a processor) 11, a storage unit (e.g., a memory) 12, a communication bus 13, and an I/O interface 14, wherein the processing unit 11 may perform various appropriate actions and processes according to a program stored in the storage unit 12, the storage unit 12 stores therein various programs and data necessary for the operation of the information processing apparatus 1, and the processing unit 11, the storage unit 12, and the I/O interface 14 are connected through the communication bus 13.
Generally, the following units may be connected to the I/O interface 14: an input unit 15 including, for example, a touch panel, a microphone, and the like, an output unit 16 including, for example, a speaker, a vibrator, and the like, a communication unit 17, and a display unit 18; among them, the communication unit 17 may allow the information processing apparatus 1 to perform wireless or wired communication with other devices to exchange data, for example, the information processing apparatus 1 performs wireless or wired communication with headphones.
Although fig. 1 shows the information processing apparatus 1 having various units, it is to be understood that it is not required to implement or have all the shown units, and that more or less units may be implemented instead or provided.
It should be noted that the embodiment of the present application can be implemented based on the information processing apparatus shown in fig. 1.
Example one
An embodiment of the present application provides an information processing method, as shown in fig. 2, the information processing method includes:
s201, when the earphone is in a wearing state, obtaining current auditory canal characteristic information through a sound pressure sensor in the earphone;
the information processing device detects whether the earphone is in a wearing state after communicating with the earphone, when the earphone is detected to be in the wearing state, an information detection instruction is sent to the earphone, and the earphone responds to the information detection instruction and obtains current auditory canal characteristic information through a sound pressure sensor; wherein the sound pressure sensor comprises a microphone.
In some embodiments, the information processing apparatus directly determines that the headset is in the wearing state when receiving a wearing instruction indicating that the headset is in the wearing state after communicating with the headset.
It should be noted that the sound pressure sensor may be installed at any position of the front cavity or the rear cavity of the earphone, referring to fig. 3, the earphone 3 includes the sound generating unit 30, the space inside the earphone 3 is divided into the front cavity 31 and the rear cavity 32 by the sound generating unit 30, and the sound pressure sensor may be installed at the upper position 311 or the lower position 312 of the front cavity 31, or may be installed at the upper position 321 or the lower position 322 of the rear cavity 32.
In some embodiments, the information processing apparatus plays a first audio through the headset after communicating with the headset, and detects an initial sound pressure signal generated when the first audio is played through a sound pressure sensor in the headset; and judging whether the earphone is in a wearing state or a non-wearing state based on the initial sound pressure signal.
The information processing apparatus acquires the sound intensity value of the first audio, and judges whether the earphone is in a wearing state or a non-wearing state based on the sound intensity value of the first audio and the sound intensity value of the initial sound pressure signal; the first audio may be audio in any frequency band, within the sound intensity range that human ears can tolerate.
It should be noted that, when the earphone is in the wearing state, the first audio is reflected back to the sound pressure sensor because the ear cavity blocks it, so the sound intensity value of the initial sound pressure signal collected by the sound pressure sensor is relatively large; when the earphone is in the non-wearing state, the first audio diffuses without being blocked by the ear cavity, and the sound intensity value of the initial sound pressure signal collected by the sound pressure sensor is relatively small.
In some embodiments, the information processing apparatus computes a degree of reduction of the initial sound pressure signal relative to the first audio, determines that the earphone is in a wearing state when the degree of reduction is not greater than a preset degree threshold, and otherwise determines that the earphone is in a non-wearing state; the preset degree threshold may be obtained from an empirical value, or from the sound intensity value of the first audio and the sound intensity value of a sound pressure signal to be compared, where the sound pressure signal to be compared is detected by the sound pressure sensor when the earphone is not worn while the first audio is played.
For example, the degree of reduction may be the quotient of a first difference value and the sound intensity value of the first audio, where the first difference value is the absolute value of the difference between the sound intensity value of the initial sound pressure signal and the sound intensity value of the first audio.
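As an illustration of the wearing-state check described above, the following Python sketch computes the degree of reduction and compares it with a preset degree threshold; the function name, the example intensity values, and the 0.5 threshold are assumptions for illustration only, not the disclosed implementation.

```python
def detect_wearing_state(first_audio_intensity: float,
                         initial_signal_intensity: float,
                         degree_threshold: float = 0.5) -> bool:
    """Return True if the earphone appears to be worn.

    degree of reduction = |initial - played| / played; when the earphone is
    worn, the ear cavity reflects the first audio back, the initial sound
    pressure signal stays strong, and the reduction is small.
    """
    first_difference = abs(initial_signal_intensity - first_audio_intensity)
    degree_of_reduction = first_difference / first_audio_intensity
    return degree_of_reduction <= degree_threshold

# Strong reflected signal -> worn; weak diffused signal -> not worn
print(detect_wearing_state(70.0, 65.0))  # True  (small reduction)
print(detect_wearing_state(70.0, 20.0))  # False (large reduction)
```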
In some embodiments, the information processing apparatus, after communicating with the earphone, transmits a first ultrasonic wave at a transmission time through an ultrasonic transmitter in the earphone, receives a second ultrasonic wave reflected back by the first ultrasonic wave through an ultrasonic receiver in the earphone, and determines a reception time; and calculating the time difference between the transmitting time and the receiving time, and judging whether the earphone is in a wearing state or a non-wearing state based on the time difference.
In some embodiments, the current ear canal feature information includes a current impedance curve and a current acoustic model, and the ear canal feature set includes an impedance curve set and an acoustic model set; the current impedance curve corresponds to the impedance curve set, and the current acoustic model corresponds to the acoustic model set.
Illustratively, the impedance curve is a curve of ear canal impedance values and sound pressure values; the acoustic model is a model created based on the first ultrasonic wave and the second ultrasonic wave.
In some embodiments, the current ear canal feature information includes a current impedance curve; the information processing apparatus plays audio through the earphone and detects, through the sound pressure sensor in the earphone, the current signal, voltage signal, and sound pressure signal generated while the audio is played, so as to obtain the current impedance curve generated by the earphone based on the current signal, the voltage signal, and the sound pressure signal.
When the information processing apparatus determines that the earphone is in a wearing state and audio is being played through the earphone, it sends an impedance detection instruction to the earphone. In response, the earphone uses the sound pressure sensor to detect, within a first duration, the current signal, voltage signal, and sound pressure signal generated while the audio is played (the sound pressure signal may be a sound pressure value), obtains an ear canal impedance value from the current signal and the voltage signal, generates an impedance curve from the ear canal impedance values and the sound pressure values, namely the current impedance curve, and then sends the current impedance curve to the information processing apparatus. The audio may be a sweep tone, and the first duration may be less than or equal to the duration of the audio played by the earphone.
Illustratively, the earphone detects a current signal generated when the audio is played by using the current digital-to-analog converter and detects a voltage signal generated when the audio is played by using the voltage digital-to-analog converter during the first time period.
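A minimal sketch of how the current impedance curve could be assembled from the sampled signals is given below; the point-wise division of voltage by current, and all names and example values, are assumptions for illustration rather than the disclosed implementation.

```python
def build_impedance_curve(voltages, currents, sound_pressures):
    """Pair each ear canal impedance value (V / I) with the sound pressure
    value sampled at the same moment, yielding the current impedance curve."""
    curve = []
    for v, i, p in zip(voltages, currents, sound_pressures):
        impedance = v / i             # ear canal impedance at this sample
        curve.append((p, impedance))  # (sound pressure value, impedance value)
    return curve

# Samples collected during the first duration while a sweep tone is played
curve = build_impedance_curve(
    voltages=[0.9, 1.0, 1.1],
    currents=[0.010, 0.011, 0.012],
    sound_pressures=[60.0, 62.5, 65.0],
)
print(curve)
```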
It should be noted that, when the earphone is worn, the cavity formed by the ear canal and the earphone differs from user to user, so the impedance curves corresponding to different cavities also differ; users can therefore be distinguished by their impedance curves.
In some embodiments, the current ear canal feature information includes a current acoustic model; the information processing apparatus controls the earphone to emit the first ultrasonic wave and to receive the second ultrasonic wave reflected from the first ultrasonic wave, so as to obtain the current acoustic model generated by the earphone based on the first ultrasonic wave and the second ultrasonic wave.
When the information processing apparatus determines that the earphone is in a wearing state, it sends an acoustic detection instruction to the earphone. In response, the earphone controls the ultrasonic transmitter to transmit the first ultrasonic wave within a second duration, receives through the ultrasonic receiver the second ultrasonic wave reflected from the first ultrasonic wave, generates an acoustic model from the signal parameters of the first ultrasonic wave and the signal parameters of the second ultrasonic wave, namely the current acoustic model, and then sends the current acoustic model to the information processing apparatus. The second duration may be less than or equal to the duration for which the earphone is in the wearing state.
Illustratively, the current acoustic model is generated by taking the signal parameters of the first ultrasonic wave as the model input and the signal parameters of the second ultrasonic wave as the model output.
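One plausible reading of the model described above is a frequency-wise transfer ratio between the reflected and emitted ultrasonic waves; the following sketch is an assumption for illustration and not the disclosed model.

```python
def build_acoustic_model(emitted_params, received_params):
    """Toy acoustic model: per-frequency transfer ratio between the reflected
    (second) ultrasonic wave and the emitted (first) ultrasonic wave."""
    model = {}
    for freq, emitted_amp in emitted_params.items():
        received_amp = received_params.get(freq, 0.0)
        model[freq] = received_amp / emitted_amp
    return model

model = build_acoustic_model(
    emitted_params={21_000: 1.0, 23_000: 1.0, 25_000: 1.0},     # Hz -> amplitude
    received_params={21_000: 0.62, 23_000: 0.55, 25_000: 0.48},
)
print(model)  # cavity-dependent ratios distinguish wearers
```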
It should be noted that, when the earphone is worn, the cavity formed by the ear canal and the earphone differs from user to user, so the ultrasonic acoustic models corresponding to different cavities also differ; users can therefore be distinguished by their ultrasonic acoustic models. In addition, ultrasonic waves are above the audible frequency range of human ears and cannot be perceived by the user, so acquiring the acoustic model does not interfere with the user receiving audio within the audible frequency range from the earphone.
S202, when the ear canal feature set exists, matching the current ear canal feature information with the ear canal feature set to obtain a matching result; the ear canal feature set represents ear canal feature information acquired through a sound pressure sensor in a historical wearing process;
the information processing device detects whether an ear canal feature set exists or not, the ear canal feature set is composed of ear canal feature information acquired through a sound pressure sensor in a historical wearing process, when the ear canal feature set is determined to exist, the ear canal feature set is acquired, the current ear canal feature information is matched with the ear canal feature information in the ear canal feature set, and a matching result is obtained.
In some embodiments, the information processing device detects whether a set of ear canal features is present in the local memory.
In some embodiments, the information processing apparatus detects whether there is an ear canal feature set in the cloud storage.
After receiving the account with which the user logs in to the cloud storage, the information processing apparatus accesses the cloud storage corresponding to the account, detects whether the ear canal feature set exists there, and if so, acquires the ear canal feature set from the cloud storage corresponding to the account.
In some embodiments, the matching result may characterize the current wearer type of the earphone, e.g., the owner of the earphone, a non-owner of the earphone, a typical user of the earphone, etc.
In some embodiments, the ear canal feature set includes at least one piece of historical ear canal feature information, and the matching result includes a first matching result and a second matching result. The information processing apparatus matches the current ear canal feature information with each piece of the at least one piece of historical ear canal feature information to obtain at least one information matching result; obtains a matching degree based on the at least one information matching result; when the matching degree is greater than or equal to a preset matching degree threshold, obtains the first matching result, which indicates that the current wearer of the earphone is the owner of the earphone; and when the matching degree is less than the preset matching degree threshold, obtains the second matching result, which indicates that the current wearer of the earphone is a non-owner of the earphone.
The information processing apparatus matches the current ear canal feature information with each piece of ear canal feature information to obtain an information matching result indicating that the match succeeded or failed, thereby obtaining at least one information matching result; calculates the matching degree from the at least one information matching result; and compares the matching degree with the preset matching degree threshold, determining that the current wearer is the owner of the earphone when the matching degree is not less than the preset matching degree threshold, and otherwise determining that the current wearer is a non-owner of the earphone.
In some embodiments, the information processing apparatus calculates the similarity between the current ear canal feature information and each piece of ear canal feature information, determines that the match succeeds when the similarity is greater than a preset similarity threshold, and otherwise determines that the match fails; the preset similarity threshold may be, for example, 80% or 90%.
In some embodiments, the information processing apparatus counts the total number of results in the at least one information matching result and the number of successful matches among them, divides the number of successful matches by the total number of results to obtain the matching degree, and compares the matching degree with the preset matching degree threshold, which may be, for example, 60% or 70%.
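The matching procedure of this and the preceding paragraphs can be sketched as follows; the similarity measure, the numeric feature representation, and the thresholds are illustrative assumptions rather than the disclosed implementation.

```python
def similarity(a, b):
    """Illustrative similarity (0..1) between two numeric feature vectors."""
    diffs = [abs(x - y) / max(abs(x), abs(y), 1e-9) for x, y in zip(a, b)]
    return 1.0 - sum(diffs) / len(diffs)

def classify_wearer(current_info, ear_canal_feature_set,
                    similarity_threshold=0.80, matching_threshold=0.60):
    """Match the current feature information against every piece of historical
    feature information, compute the matching degree as successes / total, and
    return the first or second matching result."""
    results = [similarity(current_info, hist) > similarity_threshold
               for hist in ear_canal_feature_set]
    matching_degree = sum(results) / len(results)
    return "owner" if matching_degree >= matching_threshold else "non-owner"

history = [[0.61, 0.55, 0.48], [0.60, 0.57, 0.50], [0.30, 0.25, 0.20]]
print(classify_wearer([0.62, 0.56, 0.49], history))  # 'owner' (degree ~0.67)
```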
In some embodiments, after the current ear canal feature information is matched with the ear canal feature set to obtain a matching result, when the information processing apparatus determines that the matching result is the second matching result, that is, the current wearer of the earphone is a non-owner of the earphone, it issues a reminder in a preset reminder manner; the preset reminder manner includes sending reminder information to a contact phone number designated by the user, ringing, vibrating, and the like.
In some embodiments, after the current ear canal feature information is matched with the ear canal feature set to obtain a matching result, when the information processing apparatus determines that the matching result is the second matching result, it acquires the current operation information, and obtains a first reference user type according to the current operation information and a preset correspondence between operation information and user types.
After obtaining the second matching result from the current ear canal feature information and the ear canal feature set, the information processing apparatus further acquires the current operation information and determines, from the preset correspondence between operation information and user types, the first reference user type corresponding to the current operation information; the preset correspondence between operation information and user types includes operation information corresponding to underage users, where underage users include children.
In some embodiments, the current operation information is information about operation targets generated by the user operating the information processing apparatus, and includes at least one of: application information and information about operation objects within an application. The application information may be an application type, for example a mini-game type; the operation object information within an application may be, for example, a children's cartoon.
Illustratively, the preset correspondence between operation information and user types includes the mini-game type and children's cartoons corresponding to underage users. If the information processing apparatus determines that the current operation information includes the mini-game type or a children's cartoon, it determines from the preset correspondence that the first reference user type is an underage user.
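A minimal sketch of such a preset correspondence, using the mini-game and children's cartoon example above, might look as follows; the keys and category names are assumptions for illustration only.

```python
# Assumed preset correspondence between operation information and user types
OPERATION_TO_USER_TYPE = {
    "mini_game": "underage",
    "children_cartoon": "underage",
}

def first_reference_user_type(current_operation_info):
    """Return 'underage' if any item of the current operation information
    maps to an underage user, otherwise 'adult'."""
    for item in current_operation_info:
        if OPERATION_TO_USER_TYPE.get(item) == "underage":
            return "underage"
    return "adult"

print(first_reference_user_type(["children_cartoon"]))  # underage
```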
In some embodiments, after the current ear canal feature information is matched with the ear canal feature set to obtain a matching result, when the matching result is the second matching result, the information processing apparatus obtains a classification parameter based on the ear canal feature set, and obtains a second reference user type according to the current ear canal feature information and the classification parameter.
After obtaining the second matching result from the current ear canal feature information and the ear canal feature set, the information processing apparatus obtains a classification parameter based on the ear canal feature set, and evaluates the current ear canal feature information against the classification parameter to obtain the second reference user type.
In some embodiments, taking the current ear canal feature information as the current impedance curve and the ear canal feature set as the impedance curve set as an example: the cavity formed by an underage user's ear canal and the earphone has a smaller volume than the cavity formed by an adult user's ear canal and the earphone, and sound of the same intensity produces a higher sound pressure in the smaller ear canal volume. Based on this principle, the information processing apparatus determines a sound pressure value as the classification parameter from the sound pressure values of all impedance curves in the impedance curve set; a sound pressure value greater than the classification parameter suggests an underage user, while a sound pressure value not greater than the classification parameter suggests an adult user. If all (or most) of the sound pressure values in the current impedance curve are greater than the classification parameter, the second reference user type is determined to be an underage user; otherwise, it is determined to be an adult user.
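The classification-parameter judgment described above can be sketched as follows, assuming the classification parameter is taken as the mean sound pressure value over the impedance curve set and that an impedance curve is a list of (sound pressure, impedance) pairs; both choices are assumptions, not the disclosed implementation.

```python
def second_reference_user_type(current_curve, impedance_curve_set):
    """Classify the wearer as underage or adult from the sound pressure values.

    A smaller ear canal cavity raises the sound pressure for the same sound
    intensity, so mostly above-average pressures suggest an underage user.
    """
    all_pressures = [p for curve in impedance_curve_set for (p, _z) in curve]
    classification_parameter = sum(all_pressures) / len(all_pressures)
    current_pressures = [p for (p, _z) in current_curve]
    above = sum(p > classification_parameter for p in current_pressures)
    return "underage" if above > len(current_pressures) / 2 else "adult"

adult_curves = [[(60.0, 90.0), (62.0, 92.0)], [(61.0, 91.0), (63.0, 93.0)]]
print(second_reference_user_type([(68.0, 95.0), (70.0, 96.0)], adult_curves))  # underage
```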
In some embodiments, after the current ear canal feature information is matched with the ear canal feature set to obtain a matching result, when the information processing apparatus determines that the matching result is the second matching result, it acquires the current operation information; obtains a first reference user type according to the current operation information and the preset correspondence between operation information and user types; when the first reference user type is an underage user, obtains a classification parameter based on the ear canal feature set; and obtains a second reference user type according to the current ear canal feature information and the classification parameter.
After obtaining the second matching result from the current ear canal feature information and the ear canal feature set, the information processing apparatus further acquires the current operation information and determines, from the preset correspondence between operation information and user types, the first reference user type corresponding to the current operation information; when the first reference user type is an underage user, the second reference user type is further determined, and otherwise the second reference user type is not determined and the current wearer is determined to be an adult user.
S203, determining the current usage mode corresponding to the matching result from the preset correspondence between matching results and usage modes; the preset correspondence represents a one-to-one mapping between wearer types and sound playing modes.
The information processing apparatus takes the usage mode corresponding to the matching result in the preset correspondence between matching results and usage modes as the current usage mode, and then controls sound playing according to the current usage mode. The preset correspondence between matching results and usage modes is a one-to-one correspondence between wearer types, including the owner of the earphone, non-owners of the earphone, and so on, and sound playing modes, which include sound output parameters and/or sound control authorities.
In some embodiments, the preset correspondence between matching results and usage modes is a correspondence between matching results and sound output parameters; the information processing apparatus determines the current sound output parameter corresponding to the matching result from the correspondence between matching results and sound output parameters.
The information processing apparatus takes the sound output parameter corresponding to the matching result in the correspondence between matching results and sound output parameters as the current sound output parameter.
In some embodiments, the preset correspondence between matching results and usage modes is a correspondence between matching results and sound control authorities; the information processing apparatus determines the current sound control authority corresponding to the matching result from the correspondence between matching results and sound control authorities.
The information processing apparatus takes the sound control authority corresponding to the matching result in the correspondence between matching results and sound control authorities as the current sound control authority.
In some embodiments, the preset correspondence between matching results and usage modes is a correspondence between matching results and both sound output parameters and sound control authorities; the information processing apparatus determines the sound output parameter and the sound control authority corresponding to the matching result from this correspondence.
In some embodiments, the preset correspondence between matching results and usage modes includes: a first sound control authority corresponding to the first matching result and a second sound control authority corresponding to the second matching result, where the first sound control authority is greater than the second sound control authority; the first matching result indicates that the current wearer of the earphone is the owner of the earphone, and the second matching result indicates that the current wearer of the earphone is a non-owner of the earphone.
In some embodiments, the first sound control authority corresponding to the owner of the earphone and the second sound control authority corresponding to a non-owner of the earphone may be the authority to control sound playing through the earphone or the authority to control the sound of the information processing apparatus through the earphone, for example controlling playing or pausing of sound, controlling sound volume, and the like.
In some embodiments, the preset correspondence between matching results and usage modes includes: a first sound output parameter corresponding to the owner of the earphone and a second sound output parameter corresponding to a non-owner of the earphone; the first sound output parameter and the second sound output parameter are parameters used when the information processing apparatus plays sound, such as sound volume, sound effect, sound playing duration, and the like.
In some embodiments, to provide a better experience for the owner of the earphone, the first sound output parameter includes at least one of: a high-quality sound effect, a first sound volume, and a first sound playing duration, where the first sound playing duration is the maximum earphone usage duration that does not damage human hearing, and the first sound volume is the sound volume matching the usage preference of the earphone's owner.
Illustratively, during historical wearing, the information processing apparatus obtains the first matching result and determines the corresponding first sound output parameter, which includes an initial sound volume; plays sound through the earphone at the initial sound volume; receives a volume adjustment instruction from the earphone; and determines whether the sound volume indicated by the volume adjustment instruction is consistent with the initial sound volume. If not, the sound volume indicated by the volume adjustment instruction is taken as the first sound volume, and the initial sound volume in the first sound output parameter is replaced with the first sound volume.
In some embodiments, the second sound output parameter includes at least one of: a low-quality sound effect, a second sound volume, and a second sound playing duration. When a non-owner of the earphone uses the earphone on the information processing apparatus, considering that the non-owner may be a child, the second sound volume is set smaller than the first sound volume and the second sound playing duration is set shorter than the first sound playing duration, so as to protect the child's hearing.
In some embodiments, after obtaining the first reference user type according to the current operation information and the preset correspondence between operation information and user types, or after obtaining the second reference user type according to the current ear canal feature information and the classification parameter, when the first reference user type is an underage user or the second reference user type is an underage user, the information processing apparatus determines the current usage mode corresponding to underage users from a second preset correspondence between underage users and usage modes, and controls sound playing according to the current usage mode.
When the first reference user type is an underage user and/or the second reference user type is an underage user, the information processing apparatus determines the current usage mode from the second correspondence and controls sound playing according to the current usage mode.
In some embodiments, the second correspondence contains a third sound output parameter and a third sound control authority corresponding to underage users; the third sound output parameter may be the same as the second sound output parameter, and the third sound control authority is smaller than the second sound control authority, for example an authority that prohibits sound control, under which the information processing apparatus does not respond to sound control instructions received from the earphone.
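Taken together, the correspondences described above between wearer types and usage modes can be pictured as a lookup table such as the following sketch; all concrete volumes, durations, and permission names are assumptions and not values disclosed in this application.

```python
# Assumed lookup table: wearer type -> usage mode (sound output parameters
# and sound control permissions); concrete values are illustrative only.
USAGE_MODES = {
    "owner": {                       # first matching result
        "sound_effect": "high_quality",
        "volume": 70,                # first sound volume (owner preference)
        "max_play_minutes": 120,     # first sound playing duration
        "permissions": {"play_pause", "volume", "track_switch"},
    },
    "non-owner": {                   # second matching result
        "sound_effect": "low_quality",
        "volume": 50,                # smaller than the first sound volume
        "max_play_minutes": 60,      # shorter than the first playing duration
        "permissions": {"play_pause"},
    },
    "underage": {                    # second correspondence (third parameters)
        "sound_effect": "low_quality",
        "volume": 50,
        "max_play_minutes": 60,
        "permissions": set(),        # sound control instructions are ignored
    },
}

def current_usage_mode(wearer_type: str) -> dict:
    """Look up the current usage mode for the matched wearer type."""
    return USAGE_MODES[wearer_type]

print(current_usage_mode("non-owner")["volume"])  # 50
```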
In some embodiments, as shown in fig. 4, after step S203 the information processing method further includes step S204, as follows:
and S204, controlling sound playing according to the current use mode.
When the information processing apparatus obtains the current sound output parameter from the current usage mode, it plays sound according to the current sound output parameter. When the information processing apparatus obtains the current sound control authority from the current usage mode and receives a sound control instruction from the earphone, it determines whether to respond to the instruction according to the current sound control authority: when the sound control instruction is within the current sound control authority, it responds to the instruction and adjusts sound playing accordingly; when the sound control instruction is not within the current sound control authority, it does not respond to the instruction.
In some embodiments, after determining the current sound output parameter corresponding to the matching result from the correspondence between matching results and sound output parameters, the information processing apparatus controls sound playing according to the current sound output parameter.
The information processing apparatus sets the current sound output parameter as its own sound output parameter, and then plays sound according to this parameter whenever sound is played.
In some embodiments, after determining the current sound control authority corresponding to the matching result from the correspondence between matching results and sound control authorities, when the information processing apparatus receives a sound control instruction from the earphone and the current sound control authority indicates that the instruction is allowed to be executed, it adjusts the preset sound output parameter according to the sound control instruction to obtain an adjusted sound output parameter, and controls sound playing according to the adjusted sound output parameter.
The information processing apparatus receives the sound control instruction from the earphone and judges whether the instruction falls within the current sound control authority; when it does, the preset sound output parameter is adjusted according to the sound control instruction; the preset sound output parameter is the current sound output parameter of the information processing apparatus.
Illustratively, the current sound control authority includes a volume adjustment authority. The user operates the volume adjustment key on the earphone, the earphone generates a sound control instruction that is a volume adjustment instruction and sends it to the information processing apparatus, and the information processing apparatus determines that the volume adjustment instruction falls within the volume adjustment authority and adjusts the volume in the current sound output parameter according to the volume adjustment parameter indicated by the instruction.
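A sketch of the authority check and volume adjustment described above follows; the instruction format and permission names are assumptions for illustration only.

```python
def handle_sound_control(instruction, current_permissions, output_params):
    """Apply a sound control instruction only when the current sound control
    authority allows it; otherwise leave the output parameters unchanged."""
    if instruction["type"] not in current_permissions:
        return output_params                       # instruction not responded to
    if instruction["type"] == "volume":
        adjusted = dict(output_params)
        adjusted["volume"] = instruction["value"]  # adjust preset parameter
        return adjusted
    return output_params

params = handle_sound_control(
    {"type": "volume", "value": 40},                # volume adjustment instruction
    {"play_pause", "volume"},                       # current sound control authority
    {"volume": 70, "sound_effect": "high_quality"},
)
print(params)  # volume adjusted to 40
```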
In some embodiments, as shown in fig. 5, after step S201 the information processing method further includes steps S402-S404, as follows:
s402, when the ear canal feature set does not exist, storing the current ear canal feature information;
When the information processing apparatus determines that the ear canal feature set does not exist, the current ear canal feature information is stored in the local memory or in cloud storage.
For example, after a user logs in an account of cloud storage on an information processing device, the information processing device stores current ear canal feature information into the cloud storage corresponding to the account.
It should be noted that more ear canal characteristic information can be stored in the cloud storage than in the local storage.
S403, counting the total number of the stored ear canal feature information;
the information processing device counts the total number of the stored ear canal feature information.
In some embodiments, the information processing apparatus sets the initial value of the total number to 0, and each time it stores the current ear canal feature information it adds 1 to the total number and stores it; the information processing apparatus can then obtain the total number directly.
S404, when the total number is larger than or equal to the preset total number threshold value, obtaining an ear canal feature set based on the stored ear canal feature information.
The information processing apparatus judges whether the total number is not less than the preset total number threshold; when the total number is determined to be not less than the preset total number threshold, the stored ear canal feature information is assembled into the ear canal feature set, which is then stored.
It should be noted that the larger the preset total number threshold, the more accurately the wearer type can be determined based on the ear canal feature set, but the longer it takes to build the ear canal feature set; the preset total number threshold should therefore be set moderately.
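Steps S402-S404 can be sketched as follows, with the preset total number threshold chosen as an assumed moderate value of 10; the function and variable names are illustrative assumptions.

```python
PRESET_TOTAL_THRESHOLD = 10  # assumed "moderate" value per the note above

def store_feature(stored_features, new_feature,
                  threshold=PRESET_TOTAL_THRESHOLD):
    """Store one piece of ear canal feature information (S402); once the total
    number reaches the threshold (S403/S404), return the stored information as
    the ear canal feature set, otherwise return None."""
    stored_features.append(new_feature)
    if len(stored_features) >= threshold:
        return list(stored_features)  # the newly formed ear canal feature set
    return None
```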
Illustratively, an information processing method as shown in fig. 6 includes:
s501, when the information processing device detects that the earphone is in a wearing state for the ith time, obtaining the ith ear canal characteristic information through a sound pressure sensor in the earphone; i is an integer greater than 0;
S502, the information processing apparatus stores the i-th ear canal feature information until i is equal to I, where I is an integer greater than 0;
S503, the information processing apparatus assembles the stored I pieces of ear canal feature information into an ear canal feature set and stores the set in cloud storage;
s504, when the information processing device detects that the earphone is in a wearing state for the (I + 1) th time, acquiring the (I + 1) th ear canal characteristic information through a sound pressure sensor in the earphone;
s505, the information processing device acquires an ear canal feature set from the cloud storage, and matches the I +1 st ear canal feature information with the ear canal feature set to obtain a matching result;
S506, when the information processing apparatus determines that the matching result indicates that the current wearer is the owner of the earphone, it obtains the first sound output parameter and the first sound control authority from the preset correspondence between matching results and usage modes;
S507, when the information processing apparatus determines that the matching result indicates that the current wearer is a non-owner of the earphone, it obtains only the second sound output parameter from the preset correspondence between matching results and usage modes.
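The flow of steps S501-S507 can be summarized in the following sketch; the match_fn callback stands in for the matching of the first embodiment, and all names and the assumed value of I are illustrative assumptions.

```python
def process_wearing_event(feature, storage, match_fn, total_i=10):
    """Sketch of the fig. 6 flow: collect I pieces of feature information
    (S501-S503), then match later wearings against the set (S504-S507)."""
    if "feature_set" not in storage:
        storage.setdefault("features", []).append(feature)          # S502
        if len(storage["features"]) >= total_i:
            storage["feature_set"] = list(storage["features"])      # S503
        return None
    result = match_fn(feature, storage["feature_set"])              # S505
    if result == "owner":                                           # S506
        return {"output": "first_sound_output",
                "authority": "first_sound_control"}
    return {"output": "second_sound_output"}                        # S507
```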
It can be understood that, because the ear canal feature set represents ear canal feature information acquired during historical wearing, the matching result between the current ear canal feature information and the ear canal feature set can be obtained without manually entering information in advance. The matching result is then used to determine the current usage mode from the preset correspondence between matching results and usage modes; since this correspondence represents a one-to-one mapping between wearer types and sound playing modes, the current usage mode is the sound playing mode corresponding to the type of the earphone's current wearer. Thus, the sound playing mode corresponding to the current wearer type can be obtained without manual pre-entry, and sound playing is controlled according to that mode, which improves the intelligence of personalized earphone use.
Example two
Further description is given below based on the same inventive concept as the first embodiment.
An embodiment of the present application provides an information processing apparatus. As shown in fig. 7, the information processing apparatus 6 includes an acquisition unit 61, a matching unit 62, and a control unit 63; wherein:
the acquisition unit 61 is used for acquiring the current ear canal characteristic information through a sound pressure sensor in the earphone when the earphone is in a wearing state;
the matching unit 62 is configured to match the current ear canal feature information with the ear canal feature set to obtain a matching result when the ear canal feature set exists; the ear canal feature set represents ear canal feature information acquired through a sound pressure sensor in a historical wearing process;
the control unit 63 is configured to determine the current usage mode corresponding to the matching result from the preset correspondence between matching results and usage modes; the preset correspondence represents a one-to-one mapping between wearer types and sound playing modes.
In some embodiments, the apparatus further comprises:
the set generating unit is used for storing the current auditory canal characteristic information when the auditory canal characteristic information does not exist after the current auditory canal characteristic information is acquired through a sound pressure sensor in the earphone; counting the total number of the stored ear canal characteristic information; and when the total number is larger than or equal to a preset total number threshold value, obtaining an ear canal feature set based on the stored ear canal feature information.
In some embodiments, the current ear canal feature information includes a current impedance curve and a current acoustic model, and the ear canal feature set includes an impedance curve set and an acoustic model set; the current impedance curve corresponds to the impedance curve set, and the current acoustic model corresponds to the acoustic model set.
In some embodiments, the current ear canal feature information includes a current impedance curve;
the acquisition unit 61 is specifically configured to play audio through the earphone and detect, through the sound pressure sensor in the earphone, the current signal, voltage signal, and sound pressure signal generated while the audio is played, so as to obtain the current impedance curve generated by the earphone based on the current signal, the voltage signal, and the sound pressure signal.
In some embodiments, the current ear canal feature information includes a current acoustic model;
the acquisition unit 61 is specifically configured to control the earphone to emit the first ultrasonic wave and receive the second ultrasonic wave reflected from the first ultrasonic wave, so as to obtain the current acoustic model generated by the earphone based on the first ultrasonic wave and the second ultrasonic wave.
In some embodiments, the preset correspondence between matching results and usage modes is a correspondence between matching results and sound output parameters;
the control unit 63 is specifically configured to determine the current sound output parameter corresponding to the matching result from the correspondence between matching results and sound output parameters.
In some embodiments, the control unit 63 is further configured to control sound playing according to the current sound output parameter after the current usage mode corresponding to the matching result is determined from the preset correspondence between matching results and usage modes.
In some embodiments, the preset correspondence between matching results and usage modes is a correspondence between matching results and sound control authorities;
the control unit 63 is specifically configured to determine the current sound control authority corresponding to the matching result from the correspondence between matching results and sound control authorities.
In some embodiments, the control unit 63 is further configured to, after the current usage mode corresponding to the matching result is determined from the preset correspondence between matching results and usage modes, adjust the preset sound output parameter according to a sound control instruction when the instruction is received from the earphone and the current sound control authority indicates that it is allowed to be executed, obtain the adjusted sound output parameter, and control sound playing according to the adjusted sound output parameter.
In some embodiments, the ear canal feature set comprises at least one historical ear canal feature information, the matching results comprise a first matching result and a second matching result;
the matching unit 62 is specifically configured to match the current ear canal feature information with each ear canal feature information in the at least one piece of historical ear canal feature information to obtain at least one information matching result; obtaining a matching degree based on at least one information matching result; when the matching degree is larger than or equal to a preset matching degree threshold value, obtaining a first matching result, wherein the first matching result indicates that the current wearer of the earphone is the owner of the earphone; and when the matching degree is smaller than a preset matching degree threshold value, obtaining a second matching result, wherein the second matching result indicates that the current wearer of the earphone is a non-owner of the earphone.
In some embodiments, the presetting of the correspondence relationship between the matching result and the usage pattern includes: the first sound control authority corresponding to the first matching result and the second sound control authority corresponding to the second matching result, wherein the first sound control authority is larger than the second sound control authority.
In some embodiments, the matching unit 62 is further configured to, after the current ear canal feature information is matched with the ear canal feature set to obtain a matching result, acquire the current operation information when the matching result is the second matching result, and obtain a first reference user type according to the current operation information and the preset correspondence between operation information and user types.
In some embodiments, the matching unit 62 is further configured to, after the current ear canal feature information is matched with the ear canal feature set to obtain a matching result, obtain a classification parameter based on the ear canal feature set when the matching result is the second matching result, and obtain a second reference user type according to the current ear canal feature information and the classification parameter.
In some embodiments, the control unit 63 is further configured to, after the first reference user type is obtained according to the current operation information and the preset correspondence between operation information and user types, or after the second reference user type is obtained according to the current ear canal feature information and the classification parameter, determine the current usage mode corresponding to underage users from the second preset correspondence between underage users and usage modes when the first reference user type is an underage user or the second reference user type is an underage user, and control sound playing according to the current usage mode.
In some embodiments, the control unit 63 is further configured to control the sound playing according to the current usage mode after determining the current usage mode corresponding to the matching result from the preset correspondence relationship between the matching result and the usage mode.
In practical applications, the obtaining unit 61, the matching unit 62 and the control unit 63 may be implemented by a processor 64 located in the information processing apparatus 6, specifically by a Central Processing Unit (CPU), a Microprocessor Unit (MPU), a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA), or the like.
An embodiment of the present application further provides an information processing apparatus 6, as shown in fig. 8, where the apparatus 6 includes: a processor 64, a memory 65 and a communication bus 66; the memory 65 communicates with the processor 64 through the communication bus 66 and stores one or more programs executable by the processor 64; when the one or more programs are executed, the processor 64 performs any one of the information processing methods according to the foregoing embodiments.
An embodiment of the present application provides a computer-readable storage medium storing one or more programs, where the one or more programs are executable by one or more processors 64; when a program is executed by the processor 64, the information processing method according to the first embodiment is implemented.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus, and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only a preferred embodiment of the present application, and is not intended to limit the scope of the present application.

Claims (30)

1. An information processing method, characterized in that the method comprises:
when the earphone is in a wearing state, acquiring current ear canal characteristic information through a sound pressure sensor in the earphone;
when the ear canal feature set exists, matching the current ear canal feature information with the ear canal feature set to obtain a matching result; the ear canal feature set represents ear canal feature information acquired through the sound pressure sensor in a historical wearing process; the ear canal feature set comprises a set of impedance curves; the impedance curve is a curve of the ear canal impedance value and the sound pressure value; the ear canal impedance value is generated by a voltage signal and a current signal generated when the earphone plays audio;
determining a current use mode corresponding to the matching result from a correspondence between a preset matching result and the use mode; the correspondence between the preset matching result and the use mode represents sound playing modes corresponding one-to-one to wearer types;
when the ear canal feature set does not exist, saving the current ear canal feature information;
counting the total number of pieces of stored ear canal feature information;
when the total number is greater than or equal to a preset total number threshold, obtaining the ear canal feature set based on the stored ear canal feature information; the preset total number threshold is an integer greater than or equal to 2.
2. The method of claim 1, wherein the current ear canal feature information comprises a current impedance curve and a current acoustic model, the ear canal feature set comprises a set of impedance curves and a set of acoustic models, the current impedance curve corresponds to the set of impedance curves, and the current acoustic model corresponds to the set of acoustic models.
3. The method of claim 1, wherein the current ear canal feature information comprises a current impedance curve; and the obtaining the current ear canal feature information through a sound pressure sensor in the earphone comprises:
playing audio through the earphone, and detecting, through the sound pressure sensor in the earphone, a current signal, a voltage signal and a sound pressure signal generated when the audio is played, so as to obtain the current impedance curve generated by the earphone based on the current signal, the voltage signal and the sound pressure signal.
4. The method of claim 1, wherein the current ear canal feature information comprises a current acoustic model; and the obtaining the current ear canal feature information through a sound pressure sensor in the earphone comprises:
controlling the earphone to emit a first ultrasonic wave and receive a second ultrasonic wave reflected from the first ultrasonic wave, so as to obtain the current acoustic model generated by the earphone based on the first ultrasonic wave and the second ultrasonic wave.
5. The method according to claim 1, wherein the correspondence between the preset matching result and the usage pattern is a correspondence between the matching result and a sound output parameter; and the determining the current usage mode corresponding to the matching result from the correspondence between the preset matching result and the usage mode comprises:
and determining the current sound output parameter corresponding to the matching result from the correspondence between the matching result and the sound output parameter.
6. The method according to claim 5, wherein after determining the current usage pattern corresponding to the matching result from the preset correspondence relationship between the matching result and the usage pattern, the method further comprises:
and controlling sound playing according to the current sound output parameter.
7. The method according to claim 1, wherein the correspondence between the preset matching result and the usage pattern is a correspondence between the matching result and a sound control authority; and the determining the current usage mode corresponding to the matching result from the correspondence between the preset matching result and the usage mode comprises:
and determining the current sound control authority corresponding to the matching result from the correspondence between the matching result and the sound control authority.
8. The method according to claim 7, wherein after determining the current usage pattern corresponding to the matching result from the preset correspondence relationship between the matching result and the usage pattern, the method further comprises:
and when a sound control instruction is received from the earphone and the current sound control authority indicates that the sound control instruction is allowed to be executed, adjusting a preset sound output parameter according to the sound control instruction to obtain an adjusted sound output parameter, and controlling sound playing according to the adjusted sound output parameter.
9. The method of claim 1, wherein the ear canal feature set comprises at least one piece of historical ear canal feature information, and the matching result comprises a first matching result and a second matching result; and the matching the current ear canal feature information with the ear canal feature set to obtain a matching result comprises:
matching the current ear canal feature information with each piece of ear canal feature information in the at least one piece of historical ear canal feature information to obtain at least one information matching result;
obtaining a matching degree based on the at least one information matching result;
when the matching degree is greater than or equal to a preset matching degree threshold, obtaining the first matching result, wherein the first matching result indicates that the current wearer of the earphone is the owner of the earphone;
and when the matching degree is less than the preset matching degree threshold, obtaining the second matching result, wherein the second matching result indicates that the current wearer of the earphone is a non-owner of the earphone.
10. The method according to claim 9, wherein the correspondence between the preset matching result and the usage pattern comprises: a first sound control authority corresponding to the first matching result and a second sound control authority corresponding to the second matching result, wherein the first sound control authority is greater than the second sound control authority.
11. The method of claim 9, wherein after said matching the current-time ear canal feature information with the ear canal feature set to obtain a matching result, the method further comprises:
when the matching result is the second matching result, obtaining current operation information;
and obtaining a first reference user type according to the current operation information and the corresponding relation between the preset operation information and the user type.
12. The method of claim 11, wherein after said matching the current-time ear canal feature information with the ear canal feature set to obtain a matching result, the method further comprises:
when the matching result is the second matching result, obtaining a classification parameter based on the ear canal feature set;
and obtaining a second reference user type according to the current ear canal feature information and the classification parameters.
13. The method according to claim 12, wherein after obtaining a first reference user type according to the current operation information and the corresponding relationship between preset operation information and user types, or after obtaining a second reference user type according to the current ear canal feature information and the classification parameters, the method further comprises:
when the first reference user type is an underage user or the second reference user type is an underage user, determining the current usage mode corresponding to the underage user from a second correspondence between preset underage users and usage modes;
and controlling sound playing according to the current use mode.
14. The method according to claim 1, wherein after determining the current usage pattern corresponding to the matching result from the preset correspondence relationship between the matching result and the usage pattern, the method further comprises:
and controlling sound playing according to the current use mode.
15. An information processing apparatus, characterized in that the apparatus comprises: an acquisition unit, a matching unit, a control unit and a set generation unit; wherein,
the acquisition unit is used for acquiring the current ear canal characteristic information through a sound pressure sensor in the earphone when the earphone is in a wearing state;
the matching unit is used for matching the current ear canal feature information with the ear canal feature set to obtain a matching result when the ear canal feature set exists; the ear canal feature set represents ear canal feature information acquired through the sound pressure sensor in a historical wearing process; the ear canal feature set comprises a set of impedance curves; the impedance curve is a curve of the ear canal impedance value and the sound pressure value; the ear canal impedance value is generated by a voltage signal and a current signal generated when the earphone plays audio;
the control unit is used for determining a current usage mode corresponding to the matching result from the correspondence between a preset matching result and the usage mode; the correspondence between the preset matching result and the usage mode represents sound playing modes corresponding one-to-one to wearer types;
the set generating unit is configured to, after the current ear canal feature information is acquired by the sound pressure sensor in the earphone, store the current ear canal feature information when the ear canal feature set does not exist; count the total number of pieces of stored ear canal feature information; and when the total number is greater than or equal to a preset total number threshold, obtain the ear canal feature set based on the stored ear canal feature information; the preset total number threshold is an integer greater than or equal to 2.
16. The apparatus of claim 15, wherein the current ear canal feature information comprises a current impedance curve and a current acoustic model, the ear canal feature set comprises a set of impedance curves and a set of acoustic models, the current impedance curve corresponds to the set of impedance curves, and the current acoustic model corresponds to the set of acoustic models.
17. The apparatus of claim 15, wherein the current ear canal feature information comprises a current impedance curve;
the obtaining unit is specifically configured to play audio through the earphone, and detect, through the sound pressure sensor in the earphone, a current signal, a voltage signal and a sound pressure signal generated when the audio is played, so as to obtain the current impedance curve generated by the earphone based on the current signal, the voltage signal and the sound pressure signal.
18. The apparatus of claim 15, wherein the current ear canal feature information comprises a current acoustic model;
the obtaining unit is specifically configured to control the earphone to emit a first ultrasonic wave and receive a second ultrasonic wave reflected from the first ultrasonic wave, so as to obtain the current acoustic model generated by the earphone based on the first ultrasonic wave and the second ultrasonic wave.
19. The apparatus of claim 15, wherein the correspondence between the preset matching result and the usage pattern is a correspondence between the matching result and a sound output parameter;
and the control unit is specifically configured to determine the current sound output parameter corresponding to the matching result from the correspondence between the matching result and the sound output parameter.
20. The apparatus of claim 18,
and the control unit is further configured to control sound playing according to the current sound output parameter after determining the current usage mode corresponding to the matching result in the corresponding relationship between the preset matching result and the usage mode.
21. The apparatus of claim 15, wherein the correspondence between the preset matching result and the usage pattern is a correspondence between the matching result and a sound control authority;
and the control unit is specifically configured to determine the current sound control authority corresponding to the matching result from the correspondence between the matching result and the sound control authority.
22. The apparatus of claim 21,
the control unit is further configured to, after determining a current usage mode corresponding to the matching result in the corresponding relationship between the preset matching result and the usage mode, adjust a preset sound output parameter according to a sound control instruction when the sound control instruction is received from the earphone and the current sound control authority indicates that execution of the sound control instruction is allowed, to obtain an adjusted sound output parameter, and control sound playing according to the adjusted sound output parameter.
23. The apparatus of claim 15, wherein the ear canal feature set comprises at least one historical ear canal feature information, the match results comprising a first match result and a second match result;
the matching unit is specifically configured to match the current ear canal feature information with each piece of ear canal feature information in the at least one piece of historical ear canal feature information to obtain at least one information matching result; obtain a matching degree based on the at least one information matching result; obtain the first matching result when the matching degree is greater than or equal to a preset matching degree threshold, wherein the first matching result indicates that the current wearer of the earphone is the owner of the earphone; and obtain the second matching result when the matching degree is less than the preset matching degree threshold, wherein the second matching result indicates that the current wearer of the earphone is a non-owner of the earphone.
24. The apparatus of claim 23, wherein the correspondence between the preset matching result and the usage pattern comprises: a first sound control authority corresponding to the first matching result and a second sound control authority corresponding to the second matching result, wherein the first sound control authority is greater than the second sound control authority.
25. The apparatus of claim 23,
the matching unit is further configured to, after the current ear canal feature information is matched with the ear canal feature set to obtain a matching result, obtain current operation information when the matching result is the second matching result; and obtaining a first reference user type according to the current operation information and the corresponding relation between the preset operation information and the user type.
26. The apparatus of claim 25,
the matching unit is further configured to, after the current ear canal feature information is matched with the ear canal feature set to obtain a matching result, obtain a classification parameter based on the ear canal feature set when the matching result is the second matching result; and obtaining a second reference user type according to the current ear canal feature information and the classification parameters.
27. The apparatus of claim 26,
the control unit is further configured to, after the first reference user type is obtained according to the current operation information and the correspondence between preset operation information and user types, or the second reference user type is obtained according to the current ear canal feature information and the classification parameter, determine, from a second correspondence between preset underage users and usage modes, the current usage mode corresponding to the underage user when the first reference user type is an underage user or the second reference user type is an underage user; and control sound playing according to the current usage mode.
28. The apparatus of claim 15,
the control unit is further configured to control sound playing according to the current usage mode after determining the current usage mode corresponding to the matching result in the corresponding relationship between the preset matching result and the usage mode.
29. An information processing apparatus characterized by comprising: a processor, a memory and a communication bus, the memory in communication with the processor through the communication bus, the memory storing one or more computer programs executable by the processor, the processor performing the method of any of claims 1-14 when the one or more computer programs are executed.
30. A computer-readable storage medium, having one or more computer programs stored thereon, the one or more computer programs being executable by one or more processors to implement the method of any one of claims 1-14.
CN201910577886.9A 2019-06-28 2019-06-28 Information processing method and apparatus, and storage medium Active CN110267144B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910577886.9A CN110267144B (en) 2019-06-28 2019-06-28 Information processing method and apparatus, and storage medium

Publications (2)

Publication Number Publication Date
CN110267144A CN110267144A (en) 2019-09-20
CN110267144B true CN110267144B (en) 2021-07-09

Family

ID=67923163

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910577886.9A Active CN110267144B (en) 2019-06-28 2019-06-28 Information processing method and apparatus, and storage medium

Country Status (1)

Country Link
CN (1) CN110267144B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113099358B (en) * 2020-01-08 2023-08-22 北京小米移动软件有限公司 Method and device for adjusting earphone audio parameters, earphone and storage medium
CN111356053A (en) * 2020-03-11 2020-06-30 瑞声科技(新加坡)有限公司 Earphone and wearing state detection method thereof
CN113495713B (en) * 2020-03-20 2024-03-22 北京小米移动软件有限公司 Method and device for adjusting earphone audio parameters, earphone and storage medium
CN112272346B (en) * 2020-11-27 2023-01-24 歌尔科技有限公司 In-ear detection method, earphone and computer readable storage medium
CN112464196B (en) * 2020-12-07 2023-11-24 南昌华勤电子科技有限公司 Bluetooth headset connection method, device and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108429969A (en) * 2018-05-28 2018-08-21 Oppo广东移动通信有限公司 Audio frequency playing method, device, terminal, earphone and readable storage medium storing program for executing
CN108737921A (en) * 2018-04-28 2018-11-02 维沃移动通信有限公司 A kind of control method for playing back, system, earphone and mobile terminal
CN108803859A (en) * 2018-05-28 2018-11-13 Oppo广东移动通信有限公司 Information processing method, device, terminal, earphone and readable storage medium storing program for executing
CN108900694A (en) * 2018-05-28 2018-11-27 Oppo广东移动通信有限公司 Ear line information acquisition method and device, terminal, earphone and readable storage medium storing program for executing
WO2019008390A1 (en) * 2017-07-07 2019-01-10 Cirrus Logic International Semiconductor Limited Methods, apparatus and systems for audio playback

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108763901B (en) * 2018-05-28 2020-09-22 Oppo广东移动通信有限公司 Ear print information acquisition method and device, terminal, earphone and readable storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant