CN112887871B - Headset voice playing method based on permission, headset and storage medium


Info

Publication number: CN112887871B
Application number: CN202110307073.5A
Authority: CN (China)
Prior art keywords: earphone, voice data, voice, area, wearer
Legal status: Active (the legal status is an assumption and is not a legal conclusion)
Other languages: Chinese (zh)
Other versions: CN112887871A
Inventors: 何定, 刘治
Current assignee: Shenzhen Qianan Technology Co., Ltd.
Original assignee: Shenzhen Qianan Technology Co., Ltd.
Application filed by Shenzhen Qianan Technology Co., Ltd.

Classifications

    • H04R 1/10 — Earpieces; attachments therefor; earphones; monophonic headphones (H Electricity › H04 Electric communication technique › H04R Loudspeakers, microphones, gramophone pick-ups or like acoustic electromechanical transducers; deaf-aid sets; public address systems › H04R 1/00 Details of transducers, loudspeakers or microphones)
    • G10L 15/22 — Procedures used during a speech recognition process, e.g. man-machine dialogue (G Physics › G10 Musical instruments; acoustics › G10L Speech analysis or synthesis; speech recognition; speech or voice processing; speech or audio coding or decoding › G10L 15/00 Speech recognition)
    • H04W 4/025 — Services making use of location information using location based information parameters (H04W Wireless communication networks › H04W 4/00 Services specially adapted for wireless communication networks › H04W 4/02 Services making use of location information)
    • H04W 76/14 — Direct-mode setup (H04W Wireless communication networks › H04W 76/00 Connection management › H04W 76/10 Connection setup)
    • G10L 2015/223 — Execution procedure of a spoken command (under G10L 15/22)
    • Y02D 30/70 — Reducing energy consumption in wireless communication networks (Y02D Climate change mitigation technologies in information and communication technologies › Y02D 30/00 Reducing energy consumption in communication networks)

Abstract

This application relates to the field of audio data processing and provides a permission-based earphone voice playing method, an earphone, and a storage medium. The permission-based earphone voice playing method is applied to a first earphone and includes the following steps: acquiring voice data to be played; acquiring a first real-time position of the first earphone and detecting whether the first real-time position meets a preset position condition; if the first real-time position meets the position condition, detecting whether a third wearer has permission to listen to the voice data, where the third wearer is the wearer of a third earphone, and the third earphone is an earphone that is communicatively connected to the first earphone and whose distance from the first earphone is smaller than a distance threshold; and if the third wearer does not have permission to listen to the voice data, processing the voice data and playing the processed voice data. Embodiments of the application can improve the privacy security of voice playing.

Description

Headset voice playing method based on permission, headset and storage medium
This application is a divisional application of the Chinese patent application filed on January 4, 2021, with application number 202110001135.X, entitled "A voice playing method, earphone and storage medium".
Technical Field
The application belongs to the technical field of audio data processing, and in particular relates to a permission-based earphone voice playing method, an earphone, and a storage medium.
Background
Headphones are audio players that use speakers positioned close to the ears to convert voice data into audible sound waves. An earphone lets the wearer listen to sound without disturbing other people; it can also isolate the sound of the surrounding environment, which helps the wearer in noisy settings such as a recording studio, a bar, travel, or sports.
However, current earphone voice playing methods offer low security and are prone to privacy leakage.
Disclosure of Invention
The embodiments of the present application provide a permission-based earphone voice playing method, an earphone, and a storage medium, which can address the problems that existing earphone voice playing methods have low security and are prone to privacy leakage.
A first aspect of the embodiments of the present application provides a permission-based earphone voice playing method, applied to a first earphone, including:
acquiring voice data to be played;
acquiring a first real-time position of the first earphone, and detecting whether the first real-time position meets a preset position condition;
if the first real-time position meets the position condition, playing the voice data;
playing the voice data includes the following steps:
detecting whether a third wearer has permission to listen to the voice data, where the third wearer is the wearer of a third earphone, and the third earphone is an earphone that is communicatively connected to the first earphone and whose distance from the first earphone is smaller than a distance threshold;
and if the third wearer does not have permission to listen to the voice data, processing the voice data and playing the processed voice data.
A second aspect of the embodiments of the present application provides a voice sending method, applied to a second earphone, including:
acquiring voice data;
and establishing a communication connection with a first earphone and sending the voice data to the first earphone, the voice data being played by the first earphone according to the permission-based earphone voice playing method provided in the first aspect.
A third aspect of the embodiments of the present application provides a permission-based earphone voice playing device, configured in a first earphone, including:
an acquisition unit, configured to acquire voice data to be played;
a detection unit, configured to acquire a first real-time position of the first earphone and detect whether the first real-time position meets a preset position condition; and
a playing unit, configured to play the voice data if the first real-time position meets the position condition.
The playing unit is further specifically configured to: detect whether a third wearer has permission to listen to the voice data, where the third wearer is the wearer of a third earphone, and the third earphone is an earphone that is communicatively connected to the first earphone and whose distance from the first earphone is smaller than a distance threshold; and, if the third wearer does not have permission to listen to the voice data, process the voice data and play the processed voice data.
A fourth aspect of the embodiments of the present application provides a voice sending device, configured in a second earphone, including:
an acquisition unit configured to acquire voice data;
the sending unit is configured to establish a communication connection with a first earphone, send voice data to the first earphone, and play the voice data by the first earphone according to the permission-based earphone voice playing method provided in the first aspect.
A fifth aspect of the embodiments of the present application provides an earphone, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the computer program, implements the steps of the permission-based earphone voice playing method provided in the first aspect and/or the voice sending method provided in the second aspect.
A sixth aspect of the embodiments of the present application provides a computer-readable storage medium storing a computer program, where the computer program, when executed by a processor, implements the steps of the permission-based earphone voice playing method provided in the first aspect and/or the voice sending method provided in the second aspect.
A seventh aspect of the embodiments of the present application provides a computer program product which, when run on an earphone, causes the earphone to perform the steps of the permission-based earphone voice playing method provided in the first aspect and/or the voice sending method provided in the second aspect.
In the embodiments of the present application, voice data to be played is acquired; a first real-time position of the first earphone is then acquired, and whether the first real-time position meets a preset position condition is detected; if the first real-time position meets the position condition, the voice data is played. That is, after receiving the voice data, the first earphone does not play it directly; instead, it judges whether its current first real-time position meets the position condition and plays the voice data only when it does. For example, the voice data is played when the first earphone is located in a conference room. This avoids the privacy leakage that would be caused by playing voice data when the first earphone does not meet the position condition, and improves the security of earphone voice playing.
Drawings
In order to explain the technical solutions of the embodiments of the present application more clearly, the drawings required for the embodiments or the description of the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; a person of ordinary skill in the art can obtain other drawings from these drawings without inventive effort.
Fig. 1 is a schematic implementation flow diagram of a method for playing earphone voice based on permission according to an embodiment of the present application;
fig. 2a is a schematic diagram of a headset according to an embodiment of the present application for establishing a connection through other terminals;
fig. 2b is a schematic diagram of the connection between headphones according to an embodiment of the present application;
fig. 3 is a schematic flowchart of a specific implementation of step S102 provided in the embodiment of the present application;
fig. 4 is a schematic flowchart of a specific implementation of step S301 provided in the embodiment of the present application;
fig. 5 is a schematic flowchart of a specific implementation of processing and playing voice data according to area information according to an embodiment of the present application;
FIG. 6 is a schematic view of a third region provided by an embodiment of the present application;
fig. 7 is a schematic diagram of a plurality of headphones according to an embodiment of the present application for establishing communication;
Fig. 8 is a schematic diagram of a first implementation flow of step S103 provided in an embodiment of the present application;
fig. 9 is a schematic diagram of a second implementation flow of step S103 provided in an embodiment of the present application;
fig. 10 is a schematic flowchart of an implementation of a voice sending method according to an embodiment of the present application;
FIG. 11 is a schematic diagram of an implementation flow of performing sound effect adjustment according to an embodiment of the present disclosure;
fig. 12 is a schematic flowchart of an implementation of step S1002 provided in an embodiment of the present application;
fig. 13 is a schematic structural diagram of an earphone voice playing device based on rights according to an embodiment of the present application;
fig. 14 is a schematic structural diagram of a voice sending device according to an embodiment of the present application;
fig. 15 is a schematic structural diagram of an earphone according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present application without making any inventive effort, are intended to be within the scope of the present application.
Headphones are audio players that use speakers positioned close to the ears to convert voice data into audible sound waves. An earphone lets the wearer listen to sound without disturbing other people; it can also isolate the sound of the surrounding environment, which helps the wearer in noisy settings such as a recording studio, a bar, travel, or sports.
However, current earphone voice playing methods offer low security and are prone to privacy leakage.
For example, wearers with impaired hearing often set the earphone to maximum volume; people near the earphone can then still hear the sound it emits, so privacy is compromised.
In order to illustrate the technical solution of the present application, the following description is made by specific examples.
Fig. 1 shows a schematic implementation flowchart of the permission-based earphone voice playing method provided by an embodiment of the present application. The method may be applied to a first earphone, and is applicable to situations where the privacy security of voice playing needs to be improved. The first earphone may be an earbud earphone, an ear-hook earphone, an in-ear earphone, or a headphone.
Specifically, the above-mentioned method for playing the earphone voice based on the rights may include the following steps S101 to S103.
Step S101, obtaining the voice data to be played.
The voice data refers to data that needs to be played by the first earphone. The source and the acquisition mode of the voice data can be selected according to actual conditions.
In some embodiments of the present application, the voice data may be voice data stored in advance in a storage module of the first earphone, or may be voice data sent by another terminal and received after the first earphone establishes a communication connection with that terminal.
The terminal may be a mobile phone, a computer, or a smart watch, and the first earphone may receive voice data sent by another terminal after pairing with it is completed.
Specifically, in some embodiments of the present application, the terminal may also be a second earphone different from the first earphone. That is, the first earphone may establish a communication connection with the second earphone and receive voice data transmitted by the second earphone. Generally, as shown in fig. 2a, when a user A wearing a first earphone 21 talks with a user B wearing a second earphone 22, another terminal such as a mobile phone or a computer is required as an intermediate forwarder. For example, after the mobile phone 24 of user B is paired with the second earphone 22, it obtains the voice data collected by the second earphone 22 and forwards it, over a pre-established communication connection, to the mobile phone 23 of user A, which then sends the voice data to the first earphone 21 for playing. Privacy leakage can easily occur during this forwarding process.
In some embodiments of the present application, as shown in fig. 2b, the first earphone 25 of user A may establish a communication connection directly with the second earphone 26 of user B and receive the voice data sent by the second earphone.
In this case, the first earphone and the second earphone do not need to be connected to a terminal: the voice data collected by the second earphone is not forwarded through a terminal but is received by the first earphone directly over the communication connection between the two earphones, which avoids the privacy leakage that can occur while a terminal forwards the voice data.
Specifically, the way the communication connection is established may be selected according to the actual situation. For example, when the first earphone and the second earphone are both Bluetooth earphones, they may be paired through Bluetooth. As another example, if the first earphone and the second earphone are both equipped with a network module, they may establish a communication connection over a wireless network. In other embodiments of the present application, the user may connect the first earphone and the second earphone in advance to the same control terminal, for example, a mobile phone or a computer, and the control terminal establishes the communication connection between the first earphone and the second earphone.
In other embodiments of the present application, the first earphone and the second earphone may each be equipped with an attitude sensor, based on which they can detect whether both earphones are in a shaking state at the same time. If the first earphone and the second earphone are shaken at the same time, they may establish a communication connection.
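The simultaneous-shake pairing check can be sketched as follows. This is an illustrative sketch only: the patent does not specify a sensor API, a detection algorithm, or thresholds, so `is_shaking`, `should_pair`, and the threshold value are all assumptions.

```python
# Hypothetical sketch of simultaneous-shake pairing; the sensor interface
# and the 15.0 m/s^2 threshold are illustrative assumptions, not from the patent.
def is_shaking(samples, threshold=15.0):
    """Return True if any accelerometer magnitude sample exceeds the threshold."""
    return any(magnitude > threshold for magnitude in samples)

def should_pair(samples_a, samples_b, threshold=15.0):
    """Allow the two earphones to pair only if both report a shake
    within the same sampling window."""
    return is_shaking(samples_a, threshold) and is_shaking(samples_b, threshold)
```

In practice the two earphones would also need to exchange timestamps so that "at the same time" can be verified within some tolerance; that exchange is omitted here.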
Step S102, a first real-time position of the first earphone is obtained, and whether the first real-time position meets a preset position condition is detected.
The first real-time position refers to a current position of the first earphone.
In some embodiments of the present application, if the first earphone is equipped with a global positioning system (GPS), the first earphone may acquire its first real-time location through GPS. In other embodiments of the present application, the first earphone may be equipped with a camera, collect an environmental image through the camera, and compare the environmental image with a pre-stored electronic map to determine the first real-time position of the first earphone in the electronic map.
It should be noted that other ways of obtaining the first real-time position are also suitable for the scheme of the present application, which is not described in detail herein.
In the embodiment of the present application, before playing the voice data, it is required to detect whether the first real-time position meets a preset position condition, so as to determine whether the current state of the first earphone meets the requirement of privacy protection.
The position condition specifies whether a position satisfies the privacy protection requirement. Specifically, the position condition may be whether the first real-time position of the first earphone lies within a certain area that meets the privacy protection requirement, or whether the position of the first earphone relative to other terminals or other people meets the privacy protection requirement, for example, whether the distance between the first real-time position and a second real-time position of another terminal meets the privacy protection requirement.
Step S103, if the first real-time position meets the position condition, the voice data is played.
In the embodiment of the present application, if the first real-time position meets the position condition, it indicates that the first real-time position where the first earphone is located meets the requirement of privacy protection, so that the first earphone can play the voice data.
In some embodiments of the present application, if the first real-time position does not satisfy the position condition, it indicates that the first real-time position where the first earphone is located does not satisfy the requirement of privacy protection, so the first earphone may not play the voice data, or play the processed voice data after processing the voice data.
To illustrate with a practical application scenario: for example, each participant may use a separate earphone during a meeting. To prevent the conference content from leaking, the position condition may be set to whether the first real-time position of the first earphone is located in the conference room. That is, when the first real-time position of the first earphone is located in the conference room, the first earphone satisfies the privacy protection condition and the first real-time position meets the position condition, so the first earphone may play the voice data.
For another example, when a player is playing a singing game, in order to prevent other players from overhearing the track assigned to the player wearing the first earphone, the position condition may be set to whether the distance from the other players is greater than a distance threshold. That is, when the distance between the first real-time position of the first earphone and the positions of the other players is greater than the distance threshold, the first earphone satisfies the privacy protection condition and the first real-time position meets the position condition, so the first earphone may play the voice data. The distance threshold can be adjusted according to the actual situation.
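The branch logic of steps S102 and S103 described above can be sketched as follows. The function `play_voice`, its arguments, and the string-based "processing" are hypothetical stand-ins: the patent leaves the position representation, the permission check, and the voice-processing method unspecified.

```python
def play_voice(real_time_position, allowed_area, nearby_wearers, voice_data):
    """Hedged sketch of steps S102-S103: play only when the position
    condition holds, and process the audio if any nearby connected
    wearer lacks permission to listen.

    real_time_position: hypothetical location label of the first earphone
    allowed_area: set of locations satisfying the position condition
    nearby_wearers: dicts for wearers of connected earphones within the
                    distance threshold, each with an 'authorized' flag
    """
    if real_time_position not in allowed_area:      # step S102: position check
        return None                                 # condition unmet: do not play
    if any(not w["authorized"] for w in nearby_wearers):
        return "processed:" + voice_data            # step S103: play processed audio
    return voice_data                               # play the voice data as-is
```

A return value of `None` models "do not play"; a real implementation would instead process the voice data as described for the case where the position condition is not met.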
In the embodiments of the present application, voice data to be played is acquired; a first real-time position of the first earphone is then acquired, and whether the first real-time position meets a preset position condition is detected; if the first real-time position meets the position condition, the voice data is played. That is, after receiving the voice data, the first earphone does not play it directly; instead, it judges whether its current first real-time position meets the position condition and plays the voice data only when it does. For example, the voice data is played when the first earphone is located in a conference room. This avoids the privacy leakage that would be caused by playing voice data when the first earphone does not meet the position condition, and improves the security of earphone voice playing.
For some elderly wearers, even if the first earphone plays loudly, the problem of privacy leakage does not arise, because the first real-time position of the first earphone they wear satisfies the position condition.
In the embodiments of the present application, the position condition may be set differently depending on the requirements of different scenarios, and correspondingly, the way of detecting whether the first real-time position satisfies the preset position condition may also differ. This is described below with several specific examples.
In some embodiments of the present application, as shown in fig. 3, the above-mentioned detecting whether the first real-time position meets the preset position condition may include the following steps S301 to S303.
Step S301, identify a first wearer of the first earphone, and acquire a first area associated with the first wearer.
Wherein the first wearer refers to a user currently wearing the first earphone. The identification mode of the first wearer can be selected according to actual conditions.
In some embodiments of the present application, each first earphone may be associated in advance with a corresponding wearer. For example, each employee may be issued a corresponding earphone when joining the company. In this case, the first wearer of the first earphone can be identified directly from information such as the identifier or serial number of the first earphone.
In other embodiments of the present application, the first earphone may also identify the first wearer by other means such as voiceprint recognition, face recognition, or personal identifier recognition (e.g., recognizing the job number on an employee's work card).
In embodiments of the present application, upon identification of a first wearer, a first zone associated with the first wearer may be acquired. The first area refers to an area where the first earphone is allowed to play voice data. It should be noted that, each first wearer may have a first area associated therewith, and the first areas associated with different first wearers may be the same or different.
It should be noted that the manner of acquiring the first area may be selected according to the actual situation. For example, in some embodiments of the present application, each item of the first wearer's personal information is associated with an area in advance; after identifying the first wearer, the first earphone can determine the area associated with the first wearer based on the mapping between the personal information and areas, and take that area as the first area. For example, the personal information may be a job number, and the area may be the workstation area corresponding to that job number.
In step S302, it is detected whether the first real-time location is located in the first area.
Specifically, in some embodiments of the present application, the first earphone may store an electronic map, and based on the electronic map, the first earphone may detect whether the first real-time location is located in the first area. In other embodiments of the present application, the first earphone may also determine whether the current first real-time location is located in the first area based on the scene recognition.
In step S303, if the first real-time position is located in the first area, it is determined that the first real-time position meets the position condition.
Based on the above description, each first wearer is associated with a first area. When the first real-time position of the first earphone is located in the first area associated with the first wearer, the current position of the first earphone meets the privacy protection requirement, so it can be confirmed that the first real-time position meets the position condition, and the first earphone plays the voice data.
To illustrate with a specific application scenario: for example, each participant in a meeting typically has a seat in the conference room. If each participant wears a first earphone, i.e., each participant is a first wearer, the first area of each first earphone may be the area within a predetermined distance centered on the seat associated with its first wearer. That is, when the first real-time position of a first wearer's earphone is on or near that wearer's seat, the first real-time position satisfies the position condition, and the first earphone plays the voice data so that the first wearer can hear it at or near the seat.
For another example, when a meeting is held and the conference room is occupied, each participant is typically required to join the meeting from his or her own workstation. Thus, the first area of the first earphone may be the workstation area associated with its first wearer. That is, when the first real-time position of the first wearer's earphone is located in that wearer's workstation area, the first real-time position satisfies the position condition, and the first earphone plays the voice data so that the first wearer can hear it in his or her own workstation area.
In an embodiment of the present application, based on identifying the first wearer of the first earphone, the first area associated with the first wearer can be acquired; whether the first real-time position is located within the first area is then detected, and if it is, it is confirmed that the first real-time position meets the position condition. That is, the first earphone plays voice data only when it is inside the area associated with its first wearer, and different first earphones have their own areas meeting the privacy protection requirement. For example, a first wearer can hear voice data in his or her own workstation area but not in other people's areas, which improves the privacy security of earphone voice playing.
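The first-area membership test of steps S301 and S302 might look like the following sketch, which models the first area as a circle around the wearer's seat. The `FIRST_AREAS` table, the job-number key, the coordinates, and the radius are all illustrative assumptions; the patent does not prescribe an area representation.

```python
import math

def in_first_area(real_time_pos, seat_pos, radius):
    """Step S302 (sketch): is the first real-time position within the first
    area, modelled here as a circle of the given radius around the seat?"""
    dx = real_time_pos[0] - seat_pos[0]
    dy = real_time_pos[1] - seat_pos[1]
    return math.hypot(dx, dy) <= radius

# Hypothetical mapping: job number -> (seat coordinates, allowed radius in metres)
FIRST_AREAS = {"E1024": ((0.0, 0.0), 1.5)}

def position_condition_met(job_number, real_time_pos):
    """Steps S301-S303 (sketch): look up the first area associated with the
    identified wearer and test the first real-time position against it."""
    seat, radius = FIRST_AREAS[job_number]
    return in_first_area(real_time_pos, seat, radius)
```

A workstation area, as in the second example above, could equally be modelled as a rectangle; only the membership test changes.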
In view of the needs of practical applications, the first area associated with the first wearer in step S301 may not be a fixed area. Thus, as shown in fig. 4, in some embodiments of the present application, the above-described acquiring the first region associated with the first wearer may include the following steps S401 to S404.
Step S401, a voice instruction associated with a first wearer is acquired.
The voice command is a command for instructing the first wearer to perform a certain operation. The voice command may be obtained in different ways.
Specifically, in some embodiments of the present application, the voice command may be obtained by recognizing the voice data. For example, when the voice data is sent by a second earphone worn by the conference host, the voice data includes an instruction issued by the conference host to the first wearer, for example, "please help me go to the office to get the conference file", and the first earphone can recognize this voice command from the voice data.
In some embodiments of the present application, the voice command may also be obtained by recognizing voice data collected by a microphone of the first earphone. For example, when the first wearer says aloud "I need to go to the office to get the meeting file," the first earphone can recognize the voice command from the voice data collected by the microphone.
Step S402, according to the voice command, identifying the target position indicated by the voice command.
The target position refers to a position to which the first wearer needs to go in the process of executing the voice instruction. In some embodiments of the present application, the identification manner of the target location may be selected according to the actual situation.
In some embodiments of the present application, the first earphone may identify a keyword related to the target location, and determine the target location indicated by the voice command according to the keyword. For example, if the voice command includes the name of the target location, the target location may be determined according to that name.
If the voice command includes an operation to be executed by the first wearer, the first earphone may also identify a keyword related to the operation content, and determine the target position indicated by the voice command according to the keyword. For example, if the voice command is "get the financial contract", the operation-related keyword "financial contract" can be identified from the voice command, and accordingly the target location associated with the command is the finance office.
Step S403, determining an audible path of the first earphone according to the first real-time position and the target position.
The audible path is a path between the first real-time position and the target position. Specifically, based on an electronic map stored in the first earphone, a path from the first real-time position to the target position can be determined, and this path is the audible path.
Step S404, determining a first area according to the audible path.
In some embodiments of the present application, the first wearer's listening should also be ensured while the first wearer heads to the target location to perform the operation. Thus, after the audible path is determined, the first area may be determined from the audible path; for example, an area within a preset distance range of the audible path may be determined as the first area.
In the embodiment of the application, the voice command associated with the first wearer is acquired, the target position indicated by the voice command is identified, the audible path of the first earphone is then determined according to the first real-time position and the target position, and the first area is further determined according to the audible path. That is, when the first wearer wears the first earphone and goes to the target position to execute the operation corresponding to the voice command, the first earphone can still play the voice data. The first area can thus be adjusted according to the actual requirement, namely the voice command, ensuring the practicality of the first earphone while ensuring privacy security.
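Steps S401 to S404 amount to building a buffer region around the path from the first real-time position to the target position. The following is a minimal sketch under assumed simplifications (a flat 2-D floor plan, the path as a polyline of waypoints, and hypothetical function names; the patent does not specify the geometry):

```python
import math

def point_segment_distance(p, a, b):
    """Shortest distance from point p to segment a-b in 2-D."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    # Project p onto the segment and clamp to its endpoints.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def in_first_area(position, audible_path, preset_distance):
    """True if position lies within preset_distance of any segment
    of the audible path, i.e. inside the sketched first area."""
    return any(
        point_segment_distance(position, audible_path[i], audible_path[i + 1])
        <= preset_distance
        for i in range(len(audible_path) - 1)
    )
```

A production implementation would work on the earphone's stored electronic map (rooms, corridors, obstacles) rather than raw coordinates; this sketch only illustrates the "area within a preset distance of the path" rule.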
In practical applications, the above-mentioned audible path may pass through many pre-planned second areas; for example, the audible path may pass through a front desk, a manager's office, and/or a tea room, etc. In order to further improve privacy security, in some embodiments of the present application, the first earphone may process the voice data before playing it.
Specifically, as shown in fig. 5, the playing of the voice data may include the following steps S501 to S503.
In step S501, a third area in the first area is determined according to the second area planned in advance.
The second areas are pre-planned areas, which may be planned in advance according to information such as function, personnel, etc. For example, an office building may be pre-planned into a plurality of second areas, including a front desk, an office area, a tea room, a conference room, etc.
In some embodiments of the present application, the first earphone may obtain the pre-planned second areas through an electronic map or a design drawing of the current scene. The first earphone may determine a third area in the first area according to the second areas, where a third area is an overlap area of the first area and a second area.
For ease of illustration, fig. 6 provides a schematic view of a scenario in which the pre-planned second areas include a front desk 62, a general manager's office 63, and a restroom 64. After the first earphone determines the audible path 60, an area within a preset distance range of the audible path 60 is determined as a first area 61; the overlap area of the first area 61 and the front desk 62 is a third area 65, the overlap area of the first area 61 and the general manager's office 63 is a third area 66, and the overlap area of the first area 61 and the restroom 64 is a third area 67. When the first wearer walks along the audible path 60, situations may occur, such as the need to give way to others, in which the first wearer enters a third area, i.e., the first real-time position of the first earphone is located in a third area.
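The determination of third areas in step S501 is essentially a region-intersection computation between the first area and each pre-planned second area. A minimal sketch, modeling the areas as axis-aligned rectangles (an illustrative assumption; real planned areas may be arbitrary polygons, and the names are hypothetical):

```python
def rect_overlap(r1, r2):
    """Rectangles as (x1, y1, x2, y2); return their overlap, or None."""
    x1 = max(r1[0], r2[0]); y1 = max(r1[1], r2[1])
    x2 = min(r1[2], r2[2]); y2 = min(r1[3], r2[3])
    if x1 < x2 and y1 < y2:
        return (x1, y1, x2, y2)
    return None  # no overlap

def third_areas(first_area, second_areas):
    """Third areas = overlaps of the first area with each second area."""
    result = {}
    for name, rect in second_areas.items():
        overlap = rect_overlap(first_area, rect)
        if overlap is not None:
            result[name] = overlap
    return result
```

For example, a first area overlapping only the front-desk rectangle yields exactly one third area; second areas entirely outside the first area produce none.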
In step S502, if the first real-time position is located in the third area, the area information of the third area is obtained.
The above-mentioned area information refers to characteristic information of the third area, and may be, for example, a privacy level, the openness of the area, a noise level, and the like. The manner of obtaining the area information may be selected according to the actual situation. For example, in some embodiments of the present application, since the area information of the second areas is generally known, the area information of each third area may be identified as the area information of the second area associated with that third area. In other embodiments of the present application, the area information may also be obtained by the first earphone identifying scene information after the user enters the third area.
In some embodiments of the present application, when the first real-time position is located in the third area, the first earphone may process the voice data according to the acquired area information of the third area.
Step S503, processing the voice data according to the region information, and playing the processed voice data.
In some embodiments of the present application, the voice data may be processed differently according to the specific content of the area information. Further, since different third areas have different area information, the voice data may be processed differently in each third area.
Specifically, in some embodiments of the present application, the area information may include the privacy level of the third area. The privacy level is used to indicate the privacy security of the third area: a high privacy level indicates that the privacy security of the third area is high, and a low privacy level indicates that it is low. Since each second area has known information such as its function and foot traffic, the privacy level of each second area may be determined in advance, and the privacy level of a third area may thus be determined as the privacy level of the corresponding second area. In this case, processing the voice data based on the area information may include: if the privacy level is lower than a preset privacy level, identifying keywords in the voice data, and muting the keywords in the voice data.
The preset privacy level can be adjusted according to actual needs. The above-mentioned keywords refer to words with a high degree of privacy sensitivity. When the privacy level of the third area is lower than the preset privacy level, the third area does not meet the privacy protection requirement, so the words with higher privacy sensitivity in the voice data are muted, and the muted voice data is then played.
Note that if the privacy level of the third area is equal to or higher than the preset privacy level, the third area meets the privacy protection requirement, and the voice data need not be processed.
Continuing with fig. 6: since the front desk 62 has heavy foot traffic and many outside visitors, its privacy level is lower than the preset privacy level, and the privacy level of the third area 65 is determined as the privacy level of the front desk 62; that is, the privacy level of the third area 65 corresponding to the front desk 62 is lower than the preset privacy level. Therefore, when the first real-time position of the first earphone is located in the third area 65, the first earphone needs to identify the keywords in the voice data, mute them, and then play the muted voice data, so as to avoid privacy disclosure in the third area 65. The privacy level of the general manager's office 63 is higher than the preset privacy level, and the privacy level of the third area 66 is determined as the privacy level of the general manager's office 63; that is, the privacy level of the third area 66 corresponding to the general manager's office 63 is higher than the preset privacy level, so when the first real-time position of the first earphone is located in the third area 66, the first earphone can play the voice data directly.
Further, in some embodiments of the present application, if the privacy level is equal to or higher than the preset privacy level, the first earphone may further identify the number of people in the third area, and if the number of people is greater than a preset number, identify the keywords in the voice data and mute them.
The preset number of people refers to the maximum number of people allowed in the third area under the condition that the privacy protection requirement is met, and the specific value of the preset number of people can be adjusted according to actual conditions.
The identification mode of the number of people in the third area can be selected according to actual situations.
For example, in some embodiments of the present application, the number of people in the third area may be counted by means of image recognition. A laser radar may also be used to identify the outlines of persons and determine the number of people in the third area based on those outlines. Alternatively, in other embodiments of the present application, the noise level of the current environment may be collected by the microphone of the first earphone, and the number of people in the third area estimated from that noise level.
If the number of people in the third area is larger than the preset number, the heavy foot traffic in the current third area makes privacy leakage likely, so the first earphone can identify the keywords in the voice data and mute them.
In the embodiment of the application, according to the privacy level of the third area, the first earphone may mute the keywords in the voice data when the privacy level is lower than the preset privacy level, or when the privacy level is equal to or higher than the preset privacy level but the number of people exceeds the preset number, and then play the muted voice data. Privacy disclosure in the third area can thus be avoided, and the privacy security of voice playing is improved.
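The muting decision described above combines the privacy-level check with the headcount check. A minimal sketch with hypothetical names (the thresholds and the "***" placeholder are illustrative assumptions; a real earphone would mute audio segments, not text):

```python
def should_mute_keywords(privacy_level, preset_privacy_level,
                         people_count, preset_people_count):
    """Mute keywords when the area is below the privacy threshold, or
    when it meets the threshold but holds too many people."""
    if privacy_level < preset_privacy_level:
        return True
    return people_count > preset_people_count

def mute_keywords(text, keywords, placeholder="***"):
    """Replace each privacy-sensitive keyword with a placeholder."""
    for kw in keywords:
        text = text.replace(kw, placeholder)
    return text
```

For the fig. 6 scenario, the third area at the front desk (low privacy level) triggers muting regardless of headcount, while the third area in the general manager's office triggers it only when overcrowded.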
In other embodiments of the present application, the area information may further include the noise level of the third area. According to the noise level of the third area, the first earphone may perform noise reduction processing on the voice data, or may increase the volume of the voice data when the noise level of the third area is greater than a preset noise level threshold.
In an embodiment of the present application, if the first real-time position is located in a third area, the first earphone acquires the area information of the third area, processes the voice data according to the area information, and plays the processed voice data. That is, when located in different third areas, the first earphone can apply different processing to the voice data according to their different area information, so the user experience can be improved while privacy security is ensured.
In order to further improve privacy security, if the first real-time position of the first earphone meets the position condition, the voice data may be further processed according to the actual situation before being played.
Specifically, in some embodiments of the present application, playing the voice data may include: acquiring a first permission of the first wearer, processing the voice data according to the first permission, and playing the processed voice data.
In some embodiments of the present application, after the first wearer is identified, a first permission of the first wearer may be acquired; the first permission refers to the first wearer's right to listen to the private information involved in the voice data. That is, the higher the first permission, the more private information the first wearer can listen to; the lower the first permission, the less private information the first wearer can listen to. Thus, processing the voice data according to the first permission may include: identifying keywords in the voice data, and muting the keywords in the voice data according to the first permission. Depending on the first permission, the muting results for the keywords in the voice data may differ.
Specifically, the first earphone may store predetermined minimum permission requirements associated with different keywords, and then compare the minimum permission requirement associated with each keyword with the first permission. If the first permission is equal to or higher than the minimum permission requirement associated with a keyword, the keyword need not be muted; if the first permission is lower than the minimum permission requirement associated with a keyword, the keyword needs to be muted.
To illustrate with a practical application scene: in the process of holding a conference, a plurality of first earphones may be connected with one another. The conference host sends voice data to the first earphones through his or her earphone or mobile phone, and each first earphone processes the voice data according to the first permission of its first wearer. For example, the voice data includes a keyword X, a keyword Y, and a keyword Z, where the minimum permission requirement associated with keyword X is level 5, that of keyword Y is level 3, and that of keyword Z is level 1; level 5 represents a higher privacy importance than level 3, and level 3 a higher privacy importance than level 1. If the first wearer is a staff member, the first permission of the first wearer may be level 2, and the first earphone worn by that wearer needs to mute keyword X and keyword Y. If the first wearer is a manager, the first permission of the first wearer may be level 4, and the first earphone worn by that wearer needs to mute only keyword X.
In the embodiment of the application, for the same voice data, different first earphones can adjust the voice data based on the first permission of the current first wearer, so that users with low permission cannot hear private information of high privacy importance, and the privacy security of voice playing is improved.
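The per-wearer filtering can be sketched as a comparison of each keyword's minimum permission requirement against the wearer's first permission, using the example levels above (function and variable names are illustrative assumptions, and the sketch operates on a word list standing in for recognized speech):

```python
def filter_by_permission(transcript_words, min_levels, wearer_level):
    """Mute any word whose minimum permission requirement exceeds the
    wearer's first permission; other words pass through unchanged."""
    return [
        "***" if min_levels.get(word, 0) > wearer_level else word
        for word in transcript_words
    ]
```

With requirements X=5, Y=3, Z=1, a level-2 staff member hears only Z, while a level-4 manager hears Y and Z, matching the example in the text.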
In other embodiments of the present application, the voice data may include voice data respectively transmitted by a plurality of second headphones.
Specifically, the first earphone may establish communication connection with a plurality of second earphones respectively. For example, as shown in fig. 7, during a conference, headphones (e.g., headphones 71, 72, 73, 74, and 75 in fig. 7) worn by each participant may be connected to each other in communication, where the headphones worn by the speaking participant are second headphones, and during the conference, there may be multiple participants speaking simultaneously, where there are multiple second headphones (e.g., headphones 72, 73, 74, and 75 in fig. 7), and where the headphones 71 (first headphone) of the first wearer (one of the participants) may receive voice data transmitted by the headphones 72, 73, 74, and 75 (multiple second headphones).
At this time, as shown in fig. 8, in some embodiments of the present application, the playing of the voice data may include the following steps S801 to S803.
Step S801, acquiring the priority of the second wearer associated with each voice data.
Wherein the priority refers to the importance of the voice data of the second wearer, and the second wearer is the wearer of the second earphone. In general, the priority may be determined based on information of the second wearer's job level, age, etc.
In some embodiments of the present application, after receiving the respective voice data, the second wearer corresponding to each voice data may be identified based on a pre-established voice database.
In other embodiments of the present application, the second earphone may also transmit the information of the second wearer to the first earphone together with the voice data after identifying the second wearer. For example, information such as the ID of the second wearer, the job level of the second wearer, etc. is transmitted to the first earphone together with the voice data. At this time, the first earphone may determine the priority of the second wearer of each voice data according to the received information.
Step S802, ordering each voice data according to the priority, and obtaining the voice playing sequence.
The above-mentioned voice playing sequence refers to the sequence of playing each voice data.
In some embodiments of the present application, the respective voice data may be ordered based on different priorities, and in general, the higher the priority the voice data, the earlier in the voice playing order, i.e., the higher the priority the voice data is played.
Step S803, playing each voice data according to the voice playing sequence.
To illustrate with a practical application scene: suppose three voice data are acquired during a conference, where the second wearer associated with the first voice data is a manager, the second wearer associated with the second voice data is a department head, and the second wearer associated with the third voice data is a staff member. According to the job levels of the second wearers, the priority of the second wearer associated with the first voice data is greater than that of the second wearer associated with the second voice data, which in turn is greater than that of the second wearer associated with the third voice data. The voice playing sequence is therefore: first the first voice data, then the second voice data, and then the third voice data.
In the embodiment of the application, when multiple voice data are received, the first earphone can sort the voice data by priority to obtain a voice playing sequence, and then play each voice data in that sequence, so that the voice data are played according to their importance, avoiding the situation in which simultaneous playback prevents the first wearer from hearing the voice content. Since the voice playing sequence is determined by the priority of the second wearer from whom each voice data originates, the higher the priority of the second wearer, the earlier the voice data is played, and the first wearer hears the voice data of higher-priority second wearers first.
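The ordering in steps S801 to S803 reduces to a stable sort of the received voice data by speaker priority. A minimal sketch (the tuple representation and function name are illustrative assumptions):

```python
def play_order(voice_items):
    """voice_items: list of (speaker_priority, audio_id) in arrival order.
    Higher priority plays first; the stable sort preserves arrival order
    among items with equal priority."""
    return [audio for _, audio in
            sorted(voice_items, key=lambda item: -item[0])]
```

For the conference example, with priorities manager=3, department head=2, staff member=1, the manager's voice data is placed first regardless of arrival order.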
In other embodiments of the present application, the playing the voice data may further include: detecting whether the first earphone meets a safety condition; if the first earphone does not meet the safety condition, the voice data is processed, and the processed voice data is played.
The safety condition refers to whether the current state of the first earphone meets the privacy protection requirement, and it may relate to the first wearer or to other persons. When the first earphone does not meet the safety condition, the state of the first earphone does not meet the privacy protection requirement, so the voice data needs to be processed before being played. When the first earphone meets the safety condition, the state of the first earphone meets the privacy protection requirement, so the voice data can be played directly.
Specifically, in some embodiments of the present application, the first earphone may acquire identity information of a person whose distance from the first earphone is smaller than a distance threshold, and determine whether the first earphone is in a safe state according to the identity information of the person.
The identity information may include whether the person is an internal person in the current scene, or whether the person has a right to listen to voice, etc., and the distance threshold may be adjusted according to the actual situation.
To illustrate with a specific application scenario: when the first wearer listens to voice data in his or her workstation area, other persons may appear in that area; for example, another employee may come to the first wearer's workstation area to talk with the first wearer, at which point the distance between that employee and the first earphone is less than the distance threshold. If the first wearer uses the earphone at a relatively loud volume, the employee may hear the voice data leaking from the earphone. Therefore, the first earphone can identify other persons in the workstation area and whether such a person has the right to listen to the voice data; if the person does not have that right, it is confirmed that the first earphone does not meet the safety condition. At this point, the first earphone may mute the keywords in the voice data, or may not play the voice data at all.
In particular, in some embodiments of the present application, the person having a distance from the first earpiece that is less than the distance threshold may wear an earpiece that has a communication connection with the first earpiece. At this time, the playing the voice data may include: detecting whether the third wearer has authority to listen to the voice data; if the third wearer does not have the right to listen to the voice data, the voice data is processed, and the processed voice data is played.
The third wearer is the wearer of a third earphone, where the third earphone is an earphone in communication connection with the first earphone whose distance from the first earphone is smaller than the distance threshold.
That is, if the person whose distance from the first earphone is smaller than the distance threshold wears a third earphone, whether the third wearer has the right to listen to the voice data can be detected through the communication connection between the third earphone and the first earphone. The permission can be sent directly from the third earphone to the first earphone, in which case the first earphone can confirm the third wearer's permission without performing identity recognition on the third wearer.
Continuing with the above scenario, if the other employee entering the first wearer's workstation area is also in the conference and wears a third earphone, the third earphone identifies the third wearer (i.e., the employee entering the first wearer's workstation area) when in use and plays the voice data according to whether the third wearer has the right to listen to it. The first earphone may detect whether the third wearer has the right to listen to the voice data through its communication connection with the third earphone: for example, it sends an indication message to the third earphone, receives the permission information of the third wearer that the third earphone returns upon receiving the indication message, and confirms from the permission information whether the third wearer has the right to listen to the voice data.
If the third wearer does not have the right to listen to the voice data, the first earphone does not meet the safety condition, and the first earphone can process the keywords in the voice data and play the processed voice data; the processing may be muting the keywords in the voice data. If the third wearer has the right to listen to the voice data, the first earphone meets the safety condition and can play the voice data directly.
In the embodiment of the application, it is detected whether the third wearer has the right to listen to the voice data; if not, the voice data is processed and the processed voice data is played. Thus, when a third wearer without the right to listen appears in the first area, the voice data is processed to prevent the third wearer from overhearing it, and the privacy security of voice playing is guaranteed.
In other embodiments of the present application, the first earphone may further detect a relative positional relationship between the first earphone and an ear of the first wearer, determine whether the first earphone is in a wearing state based on the relative positional relationship, and if the first earphone is not in the wearing state, confirm that the first earphone does not meet the safety condition.
The relative positional relationship may be obtained in various ways, for example, by means of a laser sensor, image recognition, or the like, or may be determined by detecting a motion trajectory of the wearer by means of an attitude sensor.
Based on the above-mentioned relative positional relationship, it is possible to determine whether the first earphone is in the wearing state. If the first earphone is not in the wearing state, since it may have been lost and picked up by another person, it should be confirmed that the first earphone does not satisfy the safety condition, and the voice data is processed. If the first earphone is in the wearing state, the voice data can be played normally.
In the embodiment of the application, whether the first earphone meets the safety condition is detected; if the first earphone does not meet the safety condition, the voice data is processed, and the processed voice data is played, so that the privacy safety of voice playing can be further improved.
In other embodiments of the present application, the voice data includes a plurality of voice commands, and the playing order of each voice command is different. At this time, as shown in fig. 9, the above-mentioned playing of the voice data may further include the following steps S901 to S903.
In step S901, a target voice command of the plurality of voice commands is identified, and a playing order of the target voice command is obtained.
The target voice command is a voice command associated with the first earphone.
Specifically, after receiving the voice data, the first earphone may identify the voice commands included in the voice data, identify, based on the information of the first wearer, the voice command associated with the first wearer from among those voice commands, and determine it as the target voice command. The play order of the target voice command among the plurality of voice commands can then be acquired.
Step S902, if the playing order of the target voice command is first, playing the target voice command, and sending a command playing prompt message to the next earphone of the first earphone after the playing of the target voice command is completed.
The next earphone of the first earphone is the earphone that needs to play the voice command immediately following the target voice command.
In step S903, if the playing order of the target voice command is not first, the target voice command is played after the command playing prompt message sent by the previous earphone of the first earphone is received; and if the playing order of the target voice command is not last, the command playing prompt message is sent to the next earphone of the first earphone after the playing of the target voice command is completed.
The previous earphone of the first earphone is the earphone that needs to play the voice command immediately preceding the target voice command.
That is, based on the plurality of voice commands, the first headphones may play according to the order of the target voice commands, and for the scene of the plurality of first headphones, each first headphone may play the target voice commands associated with each first headphone in sequence.
To illustrate with an application scenario: suppose the conference host sends out voice data containing three voice commands, namely a voice command D, a voice command E, and a voice command F. The first earphone H worn by user G recognizes that its target voice command is voice command D and that its playing order is first, so the first earphone H plays voice command D and, after finishing, sends the command playing prompt message to the next earphone (the first earphone J worn by user I). The first earphone J recognizes that its target voice command is voice command E and that its playing order is not first, so after receiving the command playing prompt message from the previous earphone (the first earphone H), it plays voice command E; since the playing order of its target voice command is not last, it then sends the command playing prompt message to the next earphone (the first earphone L worn by user K). The first earphone L recognizes that its target voice command is voice command F and that its playing order is not first, so after receiving the command playing prompt message from the previous earphone (the first earphone J), it plays voice command F; since the playing order of its target voice command is last, the playing of the entire voice data is then complete.
In the embodiment of the application, by identifying the target voice command in the plurality of voice commands, the first earphone can only play the target voice command associated with the first earphone, and the whole playing process is in a certain sequence. In the scene of a plurality of earphones, the wearer of each first earphone cannot hear voice instructions of other wearers, and privacy safety of voice playing can be improved. Meanwhile, the playing of each voice command is in a certain sequence, so that the time sequence of a plurality of voice commands is met, and the execution efficiency of the voice commands is improved.
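The hand-off between earphones in steps S901 to S903 behaves like a token passed along the command sequence. A minimal simulation sketch (in a real system each earphone runs independently and the prompt message travels over the wireless link; the names and the event-log representation are illustrative assumptions):

```python
def run_sequence(voice_commands, ownership):
    """voice_commands: commands in playing order; ownership maps each
    command to the earphone that must play it. Each earphone plays only
    its own target command, triggered by a prompt message from the
    previous earphone (the first earphone needs no prompt)."""
    events = []
    for i, cmd in enumerate(voice_commands):
        earphone = ownership[cmd]
        if i > 0:
            prev = ownership[voice_commands[i - 1]]
            events.append(f"{prev}->prompt->{earphone}")
        events.append(f"{earphone} plays {cmd}")
    return events
```

Running it on the example above (D on earphone H, E on J, F on L) reproduces the described exchange: H plays D, prompts J; J plays E, prompts L; L plays F and the sequence ends.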
Fig. 10 is a schematic implementation flow chart of a voice sending method according to an embodiment of the present application. The method may be applied to a second earphone and is suitable for situations where the privacy security of voice playing needs to be improved. The second earphone may be an earbud earphone, an ear-hook earphone, an in-ear earphone, or a headphone.
Specifically, the above-described voice transmission method may include the following steps S1001 to S1002.
In step S1001, voice data is acquired.
The voice data refers to data that the second earphone needs to send to the first earphone. The voice data may be acquired in different ways.
Specifically, the voice data may include at least one of first voice data transmitted by the first terminal and second voice data collected by a microphone on the second earphone. The first terminal is a terminal for establishing communication connection with the second earphone, for example, may be a mobile phone or a computer.
That is, the second earphone may collect voice data with its own microphone, and may also receive voice data transmitted by the first terminal over a communication connection established with the first terminal.
Step S1002, a communication connection is established with the first earphone, and voice data is sent to the first earphone.
In an embodiment of the present application, after acquiring the voice data, the second earphone may establish a communication connection with the first earphone and send the voice data to the first earphone over that connection. After the first earphone receives the voice data sent by the second earphone, it can play the voice data according to the permission-based earphone voice playing method described above with reference to figs. 1 to 9.
In this embodiment of the application, the second earphone can acquire voice data, establish a communication connection with the first earphone, and send the voice data to the first earphone. The first earphone can then play the voice data according to the permission-based earphone voice playing method provided by the present application, so that the voice data is played only when the privacy protection requirements are met, which improves the privacy security of voice playing.
The above-mentioned voice data may contain both the first voice data and the second voice data, that is, the first voice data transmitted by the first terminal and the second voice data collected by the microphone on the second earphone. For example, the wearer of the second earphone may be commenting on a video being played on the first terminal; in that case the voice data acquired by the second earphone may include both the first voice data from the video on the first terminal and the wearer's speech (i.e., the second voice data) collected by the microphone.
If the first voice data and the second voice data are not processed, there may be a large difference between them, and after the voice data are transmitted to the first earphone, the first wearer of the first earphone may be unable to hear the content of one of the two when the first earphone plays the voice data.
Thus, in some embodiments of the present application, as shown in fig. 11, before sending the voice data to the first earphone, the method may further include: step S1101 and step S1102.
In step S1101, a first sound effect of the first voice data and a second sound effect of the second voice data are detected.
A sound effect refers to an attribute related to the playing effect of the voice data, such as loudness, sampling rate, bit rate, number of channels, or sampling precision, and may be obtained from parameters of the sound, such as its amplitude and frequency, during the process of acquiring the voice data.
In step S1102, a sound effect adjustment operation is performed on at least one of the first voice data and the second voice data based on the first sound effect and the second sound effect.
Specifically, when the difference between the first sound effect and the second sound effect is greater than a difference threshold, the second earphone may perform a sound effect adjustment operation on at least one of the first voice data and the second voice data, so that the sound effects of the two become approximately the same.
It should be noted that, if the difference between the first sound effect and the second sound effect is less than or equal to the difference threshold, the second earphone may not perform the sound effect adjustment operation on the first voice data and the second voice data.
The difference threshold may be adjusted according to a specific attribute of the sound effect.
In this embodiment, the first sound effect of the first voice data and the second sound effect of the second voice data are detected, and a sound effect adjustment operation is performed on at least one of the first voice data and the second voice data based on the two sound effects, so that the first sound effect and the second sound effect become as close as possible. As a result, when the first earphone plays the voice data, the first wearer of the first earphone can hear both of the two different voice data.
For example, suppose the first sound effect and the second sound effect refer to the loudness of the voice data. If the loudness of the first voice data is far greater than that of the second voice data, the first wearer may be unable to hear the second voice data when the first earphone receives and plays the voice data. With the method provided by the present application, by adjusting the loudness of the first voice data or of the second voice data, the first wearer can hear both the first voice data and the second voice data.
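Taking loudness as the compared attribute, the adjustment described above might be sketched as follows; the peak-amplitude measure, the default threshold value, and the choice of scaling the quieter stream upward are illustrative assumptions, not details fixed by the disclosure:

```python
def peak_loudness(samples):
    """Peak absolute amplitude of a mono sample list (0.0 for empty input)."""
    return max(abs(s) for s in samples) if samples else 0.0


def adjust_loudness(first, second, diff_threshold=0.5):
    """Scale the quieter stream so both peak loudness values roughly match."""
    l1, l2 = peak_loudness(first), peak_loudness(second)
    if abs(l1 - l2) <= diff_threshold:
        return first, second          # difference small enough: no adjustment
    if l1 > l2 and l2 > 0:
        second = [s * (l1 / l2) for s in second]
    elif l2 > l1 and l1 > 0:
        first = [s * (l2 / l1) for s in first]
    return first, second
```

When the loudness difference is at or below the threshold, both streams pass through unchanged, matching the note above that no adjustment is performed in that case.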
In other embodiments of the present application, if the voice data includes both the first voice data and the second voice data, the second earphone may further process the first voice data and the second voice data to obtain processed voice data in which the left and right channels each correspond to one of the first voice data and the second voice data.
That is, after the first earphone receives the processed voice data and plays it, one of the first voice data and the second voice data is played on the left channel, and the other is played on the right channel.
In this case, the first wearer can choose to wear only one earphone as needed, thereby receiving the voice data of only one channel. This makes it convenient for the first wearer to listen to either the first voice data or the second voice data alone, improving the first wearer's user experience.
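A minimal sketch of this channel assignment, assuming for illustration that the first voice data goes to the left channel and the second to the right (the mapping itself is not fixed by the disclosure):

```python
def merge_to_stereo(first_voice, second_voice):
    """Interleave two mono streams as (left, right) stereo frames,
    zero-padding the shorter stream so both channels have equal length."""
    n = max(len(first_voice), len(second_voice))
    pad = lambda xs: list(xs) + [0.0] * (n - len(xs))
    return list(zip(pad(first_voice), pad(second_voice)))
```

Dropping one element of each frame then recovers a single channel, which corresponds to the wearer keeping only one earbud in.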
In other embodiments of the present application, if the voice data includes the first voice data, the first voice data may contain private content, because it is voice data sent by the first terminal. To prevent privacy disclosure, as shown in fig. 12, in some embodiments of the present application, sending the voice data to the first earphone may further include the following steps S1201 to S1203.
In step S1201, source information associated with the first voice data is acquired.
The source information refers to information related to a sound source of the first voice data. For example, the source information may be information such as a name and a position of a call target. For another example, the source information may refer to information such as a version number and confidentiality of a video being played by the terminal.
Step S1202, determining whether the first voice data is internal information that cannot be sent out according to the source information.
In some embodiments of the present application, the second earphone is connected to the first terminal, and certain situations may arise on the first terminal; for example, the first terminal may receive a phone call, or may play a highly confidential video because of a user's misoperation. To prevent the first voice data of the first terminal (e.g., call content or video content) from being leaked in such cases, the second earphone may determine, based on the source information, whether the first voice data is internal information that cannot be sent outward.
Specifically, the second earphone may determine, according to the position of the call object, whether the conversation content of the call object (i.e., the received first voice data) is an internal message. Alternatively, the second earphone may determine, according to the name of the call object, whether the call object is related to the scene information of the current scene; if not, the first voice data is determined to be internal information that cannot be sent outward. For example, according to the name of the call object, it may be determined whether the call object is a participant in the current conference; if not, the first voice data is determined to be internal information that cannot be sent outward. Alternatively, the second earphone may determine, according to the version number of a video, whether the first voice data of the video is internal information that cannot be sent outward.
In step S1203, if the first voice data is not the internal information, the first voice data is sent to the first earphone.
In some embodiments of the present application, if the first voice data is internal information, it must not be sent to other first earphones; that is, the first voice data is private information that the first wearer of a first earphone must not listen to, so the second earphone does not send it to the first earphone. If the first voice data is not internal information, it can be sent to other first earphones, that is, the first wearer of a first earphone may listen to it, and the second earphone sends it to the first earphone.
In this embodiment of the application, the source information associated with the first voice data is acquired, whether the first voice data is internal information that cannot be sent outward is determined according to the source information, and the first voice data is sent to the first earphone only if it is not internal information. In this way, internal information is never revealed to the first earphone, which improves privacy security.
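Steps S1201 to S1203 can be sketched using the conference-participant rule given as an example above; the dictionary keys and function names here are assumptions for illustration only:

```python
def is_internal(source_info, participants):
    """Treat the first voice data as internal if its caller is not a participant."""
    return source_info.get("caller") not in participants


def maybe_send(first_voice_data, source_info, participants, send):
    """Forward the data to the first earphone only when it is not internal."""
    if is_internal(source_info, participants):
        return False              # internal information is never sent outward
    send(first_voice_data)
    return True
```

Other checks mentioned above (call-object position, video version number) would slot into `is_internal` in the same way.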
It should be noted that the execution body of the permission-based earphone voice playing method and the execution body of the voice sending method may be the same earphone; that is, one earphone may act as a first earphone to receive and play voice data sent by other devices, and may also act as a second earphone to send voice data to other first earphones.
Taking fig. 7 as an example, the headsets 71, 72, 73, 74 and 75 have all established communication connections with one another, and any one of them can act as the first headset to receive voice data sent by the other headsets. Similarly, any one of the headsets 71, 72, 73, 74 and 75 may act as the second headset and send its acquired voice data to the other headsets.
It should be noted that, for simplicity of description, the foregoing method embodiments are all expressed as a series of combinations of actions, but those skilled in the art should understand that the present application is not limited by the order of the actions described, as some steps may be performed in other orders according to the present application.
Fig. 13 is a schematic structural diagram of a permission-based earphone voice playing apparatus 1300 according to an embodiment of the present application, where the permission-based earphone voice playing apparatus 1300 is configured on a first earphone. The permission-based earphone voice playing apparatus 1300 may include: an acquisition unit 1301, a detection unit 1302, and a playing unit 1303.
An obtaining unit 1301, configured to obtain voice data to be played;
a detecting unit 1302, configured to obtain a first real-time position where the first earphone is located, and detect whether the first real-time position meets a preset position condition;
and the playing unit 1303 is configured to play the voice data if the first real-time position meets the position condition.
In some embodiments of the present application, the detection unit 1302 may be specifically configured to: identifying a first wearer of the first earphone and acquiring a first area associated with the first wearer, the first area being an area that allows the first earphone to play the voice data; detecting whether the first real-time location is located within the first region; and if the first real-time position is positioned in the first area, confirming that the first real-time position meets the position condition.
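The position-condition check performed by the detection unit can be sketched as follows, assuming for illustration that the first area is an axis-aligned rectangle; the coordinate representation is an assumption, not part of the disclosure:

```python
def in_area(position, area):
    """position = (x, y); area = (x_min, y_min, x_max, y_max)."""
    x, y = position
    x_min, y_min, x_max, y_max = area
    return x_min <= x <= x_max and y_min <= y <= y_max


def position_condition_met(first_real_time_position, first_area):
    """The position condition holds when the first real-time position
    lies inside the first area associated with the first wearer."""
    return in_area(first_real_time_position, first_area)
```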
In some embodiments of the present application, the detection unit 1302 may be specifically configured to: acquiring a voice instruction associated with the first wearer; identifying a target position pointed to by the voice instruction according to the voice instruction; determining an audible path of the first earphone based on the first real-time position and the target position; and determining the first area according to the audible path.
In some embodiments of the present application, the detection unit 1302 may be specifically configured to: determining a third area in the first area according to a second area planned in advance, wherein the third area is a superposition area of the first area and the second area; if the first real-time position is located in the third area, acquiring area information of the third area; and processing the voice data according to the region information, and playing the processed voice data.
In some embodiments of the present application, the above-mentioned region information includes a privacy level of the third region; the detection unit 1302 may specifically be configured to: and if the privacy level is lower than the preset privacy level, identifying keywords in the voice data, and silencing the keywords in the voice data.
In some embodiments of the present application, the voice data includes voice data respectively transmitted by a plurality of second headphones; the playing unit 1303 may specifically be configured to: acquiring the priority of each second wearer associated with the voice data, the second wearer being a wearer of a second earphone; sequencing each voice data according to the priority to obtain a voice playing sequence; and playing the voice data according to the voice playing sequence.
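The priority-ordered playback above can be sketched as follows; the convention that a smaller number means a higher priority (played earlier) is an assumption for illustration:

```python
def play_order(voice_items):
    """voice_items: list of (second_wearer_priority, voice_data) pairs.
    Returns the voice data sorted into a playing sequence by priority,
    with smaller priority numbers playing first."""
    return [data for _, data in sorted(voice_items, key=lambda item: item[0])]
```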
In some embodiments of the present application, the playing unit 1303 may be specifically configured to: detecting whether the first earphone meets a safety condition; and if the first earphone does not meet the safety condition, processing the voice data and playing the processed voice data.
In some embodiments of the present application, the playing unit 1303 may be specifically configured to: detecting whether a third wearer has authority to listen to the voice data, the third wearer being a wearer of a third earphone, the third earphone being an earphone in communication connection with the first earphone whose distance from the first earphone is smaller than a distance threshold; and if the third wearer does not have the authority to listen to the voice data, processing the voice data and playing the processed voice data.
In some embodiments of the present application, the voice data includes a plurality of voice commands, and the playing order of each voice command is different; the playing unit 1303 may specifically be configured to: identifying a target voice command among the voice commands and acquiring the playing order of the target voice command, the target voice command being the voice command associated with the first earphone; if the playing order of the target voice command is first, playing the target voice command, and sending command playing prompt information to the next earphone of the first earphone after completing playback of the target voice command, the next earphone of the first earphone being the earphone that needs to play the voice command following the target voice command; and if the playing order of the target voice command is not first, playing the target voice command after receiving command playing prompt information sent by the previous earphone of the first earphone, and, if the playing order of the target voice command is not last, sending command playing prompt information to the next earphone of the first earphone after completing playback of the target voice command, the previous earphone of the first earphone being the earphone that needs to play the voice command preceding the target voice command.
It should be noted that, for convenience and brevity of description, the specific working process of the above-mentioned permission-based earphone voice playing apparatus 1300 may refer to the corresponding process of the method described in figs. 1 to 9, and will not be described herein again.
Fig. 14 is a schematic structural diagram of a voice transmitting apparatus 1400 according to an embodiment of the present application, where the voice transmitting apparatus 1400 is configured on a second earphone. The voice transmission apparatus 1400 may include: an acquisition unit 1401, and a transmission unit 1402.
An acquisition unit 1401 for acquiring voice data;
a sending unit 1402, configured to establish a communication connection with a first earphone and send the voice data to the first earphone, where the first earphone plays the voice data according to the permission-based earphone voice playing method described in figs. 1 to 9.
In some embodiments of the present application, the voice data includes at least one of first voice data sent by a first terminal and second voice data collected by a microphone on the second earphone; the first terminal is a terminal which establishes communication connection with the second earphone.
In some embodiments of the present application, the voice data includes first voice data and second voice data, and the voice transmitting apparatus 1400 further includes an audio adjustment unit that may be used to: detecting a first sound effect of the first voice data and a second sound effect of the second voice data; and performing an effect adjustment operation on at least one voice data of the first voice data and the second voice data based on the first effect and the second effect.
In some embodiments of the present application, the voice data includes the first voice data, and the sending unit 1402 may be further configured to: acquiring source information associated with the first voice data; determining whether the first voice data is internal information which cannot be sent outwards according to the source information; and if the first voice data is not the internal information, transmitting the first voice data to the first earphone.
It should be noted that, for convenience and brevity, the specific operation of the voice transmitting apparatus 1400 may refer to the corresponding process of the method described in fig. 10 to 12, which is not described herein again.
Fig. 15 is a schematic diagram of an earphone according to an embodiment of the present application. The earphone 15 may include: a processor 150, a memory 151, and a computer program 152 stored in the memory 151 and executable on the processor 150, such as a permission-based earphone voice playing program. The processor 150, when executing the computer program 152, implements the steps of the above-described embodiments of the permission-based earphone voice playing method, such as steps S101 to S103 shown in fig. 1. Alternatively, the processor 150, when executing the computer program 152, implements the steps of the above-described voice sending method embodiments, such as steps S1001 to S1002 shown in fig. 10.
The processor 150, when executing the computer program 152, implements the functions of the modules/units in the above-described device embodiments, such as the functions of the acquisition unit 1301, the detection unit 1302, and the playing unit 1303 shown in fig. 13. Alternatively, the processor 150 may implement the functions of the modules/units in the above-described embodiments of the apparatus when executing the computer program 152, for example, the functions of the acquisition unit 1401 and the transmission unit 1402 shown in fig. 14.
The computer program may be divided into one or more modules/units, which are stored in the memory 151 and executed by the processor 150 to complete the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions for describing the execution of the computer program in the headset.
For example, the computer program may be divided into an acquisition unit, a detection unit and a playing unit. The acquisition unit is used for acquiring voice data to be played; the detection unit is used for acquiring a first real-time position of the first earphone and detecting whether the first real-time position meets a preset position condition; and the playing unit is configured to play the voice data if the first real-time position meets the position condition. The playing unit is also specifically configured to: detect whether a third wearer has authority to listen to the voice data, the third wearer being a wearer of a third earphone, the third earphone being an earphone in communication connection with the first earphone whose distance from the first earphone is smaller than a distance threshold; and if the third wearer does not have the authority to listen to the voice data, process the voice data and play the processed voice data.
For another example, the computer program may be divided into an acquisition unit and a sending unit. The acquisition unit is used for acquiring voice data; and the sending unit is used for establishing a communication connection with the first earphone and sending the voice data to the first earphone, where the first earphone plays the voice data according to the permission-based earphone voice playing method described above with reference to figs. 1 to 9.
The headset may include, but is not limited to, a processor 150, a memory 151. It will be appreciated by those skilled in the art that fig. 15 is merely an example of a headset and is not meant to be limiting, and that more or fewer components than shown may be included, or that certain components may be combined, or that different components may be included, for example, the headset may also include input and output devices, network access devices, buses, etc.
The processor 150 may be a central processing unit (Central Processing Unit, CPU), but may also be other general purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), field-programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 151 may be an internal storage unit of the earphone, for example, a hard disk or a memory of the earphone. The memory 151 may also be an external storage device of the headset, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash Card (Flash Card) or the like, which are provided on the headset. Further, the memory 151 may also include both an internal storage unit and an external storage device of the earphone. The memory 151 is used to store the computer program and other programs and data required by the headset. The memory 151 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
In the foregoing embodiments, each embodiment is described with its own emphasis. For parts that are not described or illustrated in detail in a particular embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/earphone and method may be implemented in other manners. For example, the above-described apparatus/earphone embodiments are merely illustrative, and for example, the division of the modules or units is merely a logical function division, and there may be another division manner in actual implementation, for example, multiple units or components may be combined or may be integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated modules/units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the present application may implement all or part of the flow of the methods of the above embodiments by instructing the relevant hardware through a computer program, which may be stored in a computer readable storage medium; when executed by a processor, the computer program implements the steps of each of the method embodiments described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth. It should be noted that the content contained in the computer readable medium may be adjusted as required by legislation and patent practice in each jurisdiction; for example, in some jurisdictions, in accordance with legislation and patent practice, computer readable media do not include electrical carrier signals and telecommunications signals.
The above embodiments are only for illustrating the technical solution of the present application, and are not limiting; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the scope of the present application.

Claims (9)

1. A permission-based earphone voice playing method, characterized in that the method is applied to a first earphone and comprises the following steps:
acquiring voice data to be played;
acquiring a first real-time position of the first earphone, and detecting whether the first real-time position meets a preset position condition;
if the first real-time position meets the position condition, playing the voice data;
the playing the voice data comprises the following steps:
detecting whether a third wearer has authority to listen to the voice data; the third wearer is a wearer of a third earphone, the third earphone is an earphone which is in communication connection with the first earphone, and the distance between the third earphone and the first earphone is smaller than a distance threshold;
and if the third wearer does not have the authority to listen to the voice data, processing the voice data and playing the processed voice data.
2. The permission-based earphone voice playing method according to claim 1, wherein the detecting whether the first real-time position meets a preset position condition comprises:
identifying a first wearer of the first earphone and acquiring a first area associated with the first wearer, the first area being an area that allows the first earphone to play the voice data;
detecting whether the first real-time location is located within the first region;
and if the first real-time position is positioned in the first area, confirming that the first real-time position meets the position condition.
3. The permission-based earphone voice playing method of claim 2, wherein the playing of the voice data comprises:
determining a third area within the first area according to a pre-planned second area, the third area being the overlapping area of the first area and the second area;
if the first real-time position is located within the third area, acquiring area information of the third area;
and processing the voice data according to the area information, and playing the processed voice data.
4. The permission-based earphone voice playing method of claim 3, wherein the area information comprises a privacy level of the third area;
and the processing of the voice data according to the area information comprises:
if the privacy level is lower than a preset privacy level, identifying keywords in the voice data and silencing the keywords in the voice data.
5. The permission-based earphone voice playing method of claim 4, wherein if the privacy level is equal to or higher than the preset privacy level, the number of persons in the third area is identified, and if the number of persons is greater than a preset number of persons, keywords in the voice data are identified and the keywords in the voice data are silenced.
6. The permission-based earphone voice playing method of claim 1 or 2, wherein the voice data comprises voice data respectively sent by a plurality of second earphones;
and the playing of the voice data comprises:
acquiring the priority of each second wearer associated with the voice data, a second wearer being the wearer of a second earphone;
sorting the voice data according to the priorities to obtain a voice playing order;
and playing the voice data according to the voice playing order.
7. A voice transmission method, characterized in that it is applied to a second earphone and comprises:
acquiring voice data;
establishing a communication connection with a first earphone and sending the voice data to the first earphone, the first earphone playing the voice data according to the permission-based earphone voice playing method of claim 1.
8. An earphone comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the method of any one of claims 1 to 6, or the steps of the method of claim 7.
9. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 6, or the steps of the method of claim 7.
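The playback gate of claims 1 and 2 can be sketched as follows. This is an illustrative reading only, not the patented implementation: the class `Earphone`, the box-shaped `allowed_area`, the `permissions` map, and the `<processed>` placeholder are all hypothetical choices made for the sake of a runnable example.

```python
# Hypothetical sketch of claims 1-2: play voice data only when the
# earphone's real-time position meets the position condition, and
# substitute processed audio when a nearby third wearer lacks authority.
from dataclasses import dataclass, field

@dataclass
class Earphone:
    wearer: str
    position: tuple          # (x, y) real-time position of this earphone
    allowed_area: tuple      # axis-aligned box (x0, y0, x1, y1) tied to the wearer
    nearby: list = field(default_factory=list)       # connected wearers within the distance threshold
    permissions: dict = field(default_factory=dict)  # wearer -> may listen?

    def in_allowed_area(self) -> bool:
        # Claim 2: the position condition holds when the real-time
        # position lies inside the area associated with the first wearer.
        x, y = self.position
        x0, y0, x1, y1 = self.allowed_area
        return x0 <= x <= x1 and y0 <= y <= y1

    def play(self, voice: str) -> str:
        # Claim 1: do not play at all if the position condition fails.
        if not self.in_allowed_area():
            return ""
        # If any nearby third wearer lacks permission, play a processed
        # (here: fully substituted) version instead of the raw voice data.
        if any(not self.permissions.get(w, False) for w in self.nearby):
            return "<processed>"
        return voice
```

With a no-permission neighbor in range, `play` returns the processed stand-in; granting permission restores the raw audio, and leaving the allowed area suppresses playback entirely.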
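Claims 3 to 5 combine an area overlap with privacy-driven keyword silencing. A minimal sketch, under assumed simplifications: areas are axis-aligned boxes, silencing is token substitution, and `PRESET_PRIVACY` / `PRESET_HEADCOUNT` stand in for the unspecified preset thresholds.

```python
# Hypothetical sketch of claims 3-5: compute the third (overlapping)
# area, then mute keywords based on its privacy level and headcount.

def overlap(a, b):
    """Overlapping (third) area of two boxes (x0, y0, x1, y1), or None."""
    x0, y0 = max(a[0], b[0]), max(a[1], b[1])
    x1, y1 = min(a[2], b[2]), min(a[3], b[3])
    return (x0, y0, x1, y1) if x0 <= x1 and y0 <= y1 else None

PRESET_PRIVACY = 3    # assumed preset privacy level
PRESET_HEADCOUNT = 2  # assumed preset number of persons

def silence_keywords(voice, keywords):
    # Claim 4: replace each identified keyword with silence.
    return " ".join("<mute>" if w in keywords else w for w in voice.split())

def process_voice(voice, keywords, privacy_level, headcount):
    # Claim 4: in a low-privacy area, always silence keywords.
    if privacy_level < PRESET_PRIVACY:
        return silence_keywords(voice, keywords)
    # Claim 5: private enough, but too many people present -> silence too.
    if headcount > PRESET_HEADCOUNT:
        return silence_keywords(voice, keywords)
    return voice
```

The design point is that raw voice data passes through unmodified only when both the privacy level and the headcount tests pass.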
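Claim 6 orders voice data from several second earphones by wearer priority before playback. A sketch under assumed conventions: messages arrive as `(wearer, voice)` pairs, and a smaller rank means higher priority; both conventions are hypothetical.

```python
# Hypothetical sketch of claim 6: sort queued voice data by the
# priority of the second wearer who sent it, then play in that order.

def play_order(messages, priority):
    """messages: list of (wearer, voice); priority: wearer -> rank (lower plays first)."""
    # Python's sort is stable, so arrival order is preserved among
    # messages from wearers of equal priority.
    return [voice for _, voice in
            sorted(messages, key=lambda m: priority[m[0]])]
```

For example, if "amy" outranks "bob", both of bob's queued messages play after amy's, in their original arrival order.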
CN202110307073.5A 2021-01-04 2021-01-04 Headset voice playing method based on permission, headset and storage medium Active CN112887871B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110307073.5A CN112887871B (en) 2021-01-04 2021-01-04 Headset voice playing method based on permission, headset and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110001135.XA CN112351364B (en) 2021-01-04 2021-01-04 Voice playing method, earphone and storage medium
CN202110307073.5A CN112887871B (en) 2021-01-04 2021-01-04 Headset voice playing method based on permission, headset and storage medium

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN202110001135.XA Division CN112351364B (en) 2021-01-04 2021-01-04 Voice playing method, earphone and storage medium

Publications (2)

Publication Number Publication Date
CN112887871A CN112887871A (en) 2021-06-01
CN112887871B true CN112887871B (en) 2023-06-23

Family

ID=74427729

Family Applications (3)

Application Number Title Priority Date Filing Date
CN202110307073.5A Active CN112887871B (en) 2021-01-04 2021-01-04 Headset voice playing method based on permission, headset and storage medium
CN202110001135.XA Active CN112351364B (en) 2021-01-04 2021-01-04 Voice playing method, earphone and storage medium
CN202110307077.3A Active CN112887872B (en) 2021-01-04 2021-01-04 Earphone voice instruction playing method, earphone and storage medium

Country Status (1)

Country Link
CN (3) CN112887871B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112887871B (en) * 2021-01-04 2023-06-23 深圳千岸科技股份有限公司 Headset voice playing method based on permission, headset and storage medium
CN114999489A (en) * 2022-06-28 2022-09-02 歌尔科技有限公司 Wearable device control method and apparatus, terminal device and storage medium
CN116229987B (en) * 2022-12-13 2023-11-21 广东保伦电子股份有限公司 Campus voice recognition method, device and storage medium

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103391118A (en) * 2013-07-23 2013-11-13 广东欧珀移动通信有限公司 Bluetooth headset and audio sharing method by virtue of Bluetooth headsets
CN103650528A (en) * 2011-07-13 2014-03-19 诺基亚公司 Method and apparatus for providing content to an earpiece in accordance with a privacy filter and content selection rule
CN103985405A (en) * 2014-04-18 2014-08-13 青岛尚慧信息技术有限公司 Audio player
CN105323670A (en) * 2014-07-11 2016-02-10 西安Tcl软件开发有限公司 Terminal and directional audio signal sending method
CN106686186A (en) * 2016-12-27 2017-05-17 广东小天才科技有限公司 Wearable device play control method and wearable device
CN106714105A (en) * 2016-12-27 2017-05-24 广东小天才科技有限公司 Wearable equipment playing mode control method and wearable equipment
CN106791024A (en) * 2016-11-30 2017-05-31 广东欧珀移动通信有限公司 Voice messaging player method, device and terminal
CN106851450A (en) * 2016-12-26 2017-06-13 歌尔科技有限公司 A kind of wireless headset pair and electronic equipment
CN107609371A (en) * 2017-09-04 2018-01-19 联想(北京)有限公司 A kind of message prompt method and audio-frequence player device
CN109639908A (en) * 2019-01-28 2019-04-16 上海与德通讯技术有限公司 A kind of bluetooth headset, anti-eavesdrop method, apparatus, equipment and medium
KR20190118171A (en) * 2017-02-14 2019-10-17 아브네라 코포레이션 Method for detecting user voice activity in communication assembly, its communication assembly
US10728655B1 (en) * 2018-12-17 2020-07-28 Facebook Technologies, Llc Customized sound field for increased privacy
CN111709008A (en) * 2020-06-10 2020-09-25 上海闻泰信息技术有限公司 Earphone control method and device, electronic equipment and computer readable storage medium
CN112351364B (en) * 2021-01-04 2021-04-16 深圳千岸科技股份有限公司 Voice playing method, earphone and storage medium

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10721594B2 (en) * 2014-06-26 2020-07-21 Microsoft Technology Licensing, Llc Location-based audio messaging
CN106063235A (en) * 2015-01-23 2016-10-26 华为技术有限公司 Voice playing method and voice playing device
CN106998397B (en) * 2016-01-25 2020-02-07 平安科技(深圳)有限公司 Voice broadcasting method and system for multiple service types
CN106973160A (en) * 2017-03-27 2017-07-21 广东小天才科技有限公司 A kind of method for secret protection, device and equipment
CN107182011B (en) * 2017-07-21 2024-04-05 深圳市泰衡诺科技有限公司上海分公司 Audio playing method and system, mobile terminal and WiFi earphone
CN110162252A (en) * 2019-05-24 2019-08-23 北京百度网讯科技有限公司 Simultaneous interpretation system, method, mobile terminal and server
CN111586515A (en) * 2020-04-30 2020-08-25 歌尔科技有限公司 Sound monitoring method, equipment and storage medium based on wireless earphone
CN111883128A (en) * 2020-07-31 2020-11-03 中国工商银行股份有限公司 Voice processing method and system, and voice processing device

Also Published As

Publication number Publication date
CN112887872A (en) 2021-06-01
CN112887872B (en) 2023-06-23
CN112351364B (en) 2021-04-16
CN112351364A (en) 2021-02-09
CN112887871A (en) 2021-06-01

Similar Documents

Publication Publication Date Title
CN112887871B (en) Headset voice playing method based on permission, headset and storage medium
EP3202160B1 (en) Method of providing hearing assistance between users in an ad hoc network and corresponding system
CN106162413B (en) The Headphone device of specific environment sound prompting mode
JP6193844B2 (en) Hearing device with selectable perceptual spatial sound source positioning
US10388297B2 (en) Techniques for generating multiple listening environments via auditory devices
US10805756B2 (en) Techniques for generating multiple auditory scenes via highly directional loudspeakers
CN106170108B (en) Earphone device with decibel reminding mode
CN102860048A (en) Modifying spatial image of a plurality of audio signals
CN110636402A (en) Earphone device with local call condition confirmation mode
US20210329370A1 (en) Method for providing service using earset
US20160088403A1 (en) Hearing assistive device and system
CN113038337A (en) Audio playing method, wireless earphone and computer readable storage medium
EP2887695B1 (en) A hearing device with selectable perceived spatial positioning of sound sources
US10200795B2 (en) Method of operating a hearing system for conducting telephone calls and a corresponding hearing system
EP2216975A1 (en) Telecommunication device
CN104734829A (en) An audio communication system with merging and demerging communications zones
CN114667744B (en) Real-time communication method, device and system
US11825283B2 (en) Audio feedback for user call status awareness
US11184484B2 (en) Prioritization of speakers in a hearing device system
EP4184507A1 (en) Headset apparatus, teleconference system, user device and teleconferencing method
US20220279305A1 (en) Automatic acoustic handoff
KR102036010B1 (en) Method for emotional calling using binaural sound and apparatus thereof
WO2023286320A1 (en) Information processing device and method, and program
TW202341763A (en) Multi-user voice communication system having broadcast mechanism
CN115361474A (en) Method for auxiliary recognition of sound source in telephone conference

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant