CN115657995A - Sound signal processing method, processing device, intelligent head-mounted equipment and medium

Sound signal processing method, processing device, intelligent head-mounted equipment and medium

Info

Publication number
CN115657995A
Authority
CN
China
Prior art keywords
microphone
target object
controlling
sound signal
sound
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211153614.4A
Other languages
Chinese (zh)
Inventor
黄若舟
安康
吴劼
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Goertek Techology Co Ltd
Original Assignee
Goertek Techology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Goertek Techology Co Ltd filed Critical Goertek Techology Co Ltd
Priority to CN202211153614.4A priority Critical patent/CN115657995A/en
Publication of CN115657995A publication Critical patent/CN115657995A/en
Pending legal-status Critical Current

Landscapes

  • Circuit For Audible Band Transducer (AREA)

Abstract

The application discloses a sound signal processing method and processing device, an intelligent head-mounted device, and a medium. The processing method is applied to an intelligent head-mounted device comprising a device main body on which at least one set of audio devices is arranged, each audio device comprising a microphone and a loudspeaker, the microphone being rotatable relative to the device main body. The method comprises the following steps: acquiring a target sound source from the environment in which the intelligent head-mounted device is located; acquiring the position of a target object, wherein the target object is the target sound source closest to the microphone; controlling the microphone to rotate so that its sound receiving direction faces the position of the target object, and controlling the microphone to collect the sound signal of the target object; and controlling the loudspeaker to play the sound signal of the target object collected by the microphone.

Description

Sound signal processing method, processing device, intelligent head-mounted equipment and medium
Technical Field
The present disclosure relates to the field of electronic product technologies, and in particular, to a method and an apparatus for processing a sound signal, an intelligent head-mounted device, and a medium.
Background
In recent years, with the development of science and technology, intelligent wearable devices have brought great convenience to people's lives, and intelligent head-mounted devices, as one kind of intelligent wearable device, have become increasingly popular. An intelligent head-mounted device can be regarded as a miniature intelligent device that integrates a display screen, a loudspeaker, a microphone, Bluetooth, a lithium battery and the like to provide functions such as multimedia playback, calls, map navigation, and interaction with friends.
In existing intelligent head-mounted devices, the microphone is mounted in a fixed position, which greatly limits its ability to collect sound signals, in turn degrades the playback of those signals by the device, and reduces the user experience.
In view of the above, a new technical solution is needed to solve the above technical problems.
Disclosure of Invention
An object of the present application is to provide a new technical solution for a method and an apparatus for processing a sound signal, an intelligent headset, and a medium.
According to a first aspect of the present application, a sound signal processing method is provided, and the sound signal processing method is applied to an intelligent head-mounted device, where the intelligent head-mounted device includes a device main body, the device main body is provided with at least one set of audio devices, the audio devices include a microphone and a speaker, the microphone and the speaker are arranged correspondingly, and the microphone can rotate relative to the device main body;
the method comprises the following steps:
acquiring a target sound source from the environment where the intelligent head-mounted equipment is located;
acquiring the position of a target object, wherein the target object is a target sound source closest to the microphone;
controlling the microphone to rotate, enabling the sound collecting direction of the microphone to face the position of the target object, and controlling the microphone to collect a sound signal of the target object;
and controlling the loudspeaker to play the sound signal of the target object acquired by the microphone.
Optionally, the apparatus main body is provided with two sets of audio devices, where the two sets of audio devices are a first audio device and a second audio device, respectively, the first audio device is disposed near a first side of the apparatus main body, and the second audio device is disposed near a second side of the apparatus main body; the first audio device comprises a first microphone and a first loudspeaker, the second audio device comprises a second microphone and a second loudspeaker, and both the first microphone and the second microphone can rotate relative to the equipment body;
the acquiring the position of the target object comprises:
acquiring the position of a first target object, wherein the first target object is the target sound source closest to the first microphone; and/or acquiring the position of a second target object, wherein the second target object is the target sound source closest to the second microphone.
Optionally, the controlling the microphone to rotate, so that the sound collecting direction of the microphone faces the position of the target object, and controlling the microphone to collect the sound signal of the target object includes:
controlling the first microphone to rotate, enabling the sound collecting direction of the first microphone to face the position of the first target object, and controlling the first microphone to collect a sound signal of the first target object; and/or controlling the second microphone to rotate, enabling the sound receiving direction of the second microphone to face the position of the second target object, and controlling the second microphone to collect the sound signal of the second target object.
Optionally, in a case where the first target object is different from the second target object, the first target object being close to the first microphone and the second target object being close to the second microphone, the controlling the microphone to rotate, enabling the sound collecting direction of the microphone to face the position of the target object, and controlling the microphone to collect the sound signal of the target object comprises:
controlling the first microphone to rotate, enabling the sound collecting direction of the first microphone to face the position of the first target object, and controlling the first microphone to collect a sound signal of the first target object; and controlling the second microphone to rotate, enabling the sound receiving direction of the second microphone to face the position of the second target object, and controlling the second microphone to collect the sound signal of the second target object.
Optionally, the controlling the speaker to play the sound signal of the target object collected by the microphone includes:
controlling the first loudspeaker to play the sound signal of the first target object acquired by the first microphone; and controlling the second loudspeaker to play the sound signal of the second target object acquired by the second microphone.
Optionally, in a case where the first target object is the same as the second target object and the first target object is equidistant from the first microphone and the second microphone, the controlling the microphone to rotate, enabling the sound collecting direction of the microphone to face the position of the target object, and controlling the microphone to collect the sound signal of the target object comprises:
and controlling the first microphone and the second microphone to rotate, enabling the sound receiving direction of the first microphone and the sound receiving direction of the second microphone to face the position of the first target object, and controlling the first microphone and the second microphone to simultaneously acquire the sound signal of the first target object.
Optionally, the controlling the speaker to play the sound signal of the target object collected by the microphone includes:
controlling the first loudspeaker to play the sound signal of the first target object acquired by the first microphone; and controlling the second loudspeaker to play the sound signal of the first target object acquired by the second microphone.
Optionally, in a case where the first target object is the same as the second target object and the first microphone is closer to the first target object than the second microphone, the controlling the microphone to rotate, enabling the sound collecting direction of the microphone to face the position of the target object, and controlling the microphone to collect the sound signal of the target object comprises:
controlling the first microphone to rotate, enabling the sound collecting direction of the first microphone to face the position of the first target object, and controlling the first microphone to collect a sound signal of the first target object; and controlling the second microphone to detect the sound signal in the environment where the intelligent head-mounted device is located.
Optionally, the controlling the speaker to play the sound signal of the target object collected by the microphone includes:
and controlling the first loudspeaker to play the sound signal of the first target object acquired by the first microphone.
Optionally, the acquiring a target sound source from an environment in which the smart headset is located includes: and filtering sound signals except the target sound source from the environment where the intelligent head-mounted equipment is located.
According to a second aspect of the present application, there is provided an apparatus for processing a sound signal, which is applied to a smart headset, the apparatus including:
the first acquisition module is used for acquiring a target sound source from the environment where the intelligent head-mounted equipment is located;
a second obtaining module, configured to obtain a position of a target object, where the target object is the target sound source closest to the microphone;
the first control module is used for controlling the microphone to rotate, enabling the sound receiving direction of the microphone to face the position of the target object, and controlling the microphone to acquire a sound signal of the target object;
and the second control module is used for controlling the loudspeaker to play the sound signal of the target object acquired by the microphone.
According to a third aspect of the present application, there is provided a smart headset comprising:
a memory for storing executable computer instructions;
a processor for executing the method for processing sound signals according to the first aspect under the control of the executable computer instructions.
According to a fourth aspect of the present application, there is provided a computer-readable storage medium having stored thereon computer instructions which, when executed by a processor, perform the method of processing a sound signal according to the first aspect.
According to the sound signal processing method and device, the intelligent head-mounted device, and the medium provided by the application, the microphone is arranged to be rotatable on the intelligent head-mounted device, and before the sound signal of the target object is collected the microphone is first rotated so that its sound receiving direction faces the position of the target object. This improves the clarity of the sound signal of the target object collected by the microphone and hence the quality of the collected signal, so that the sound signal of the target object subsequently played by the loudspeaker is clearer and the listening experience of the user wearing the intelligent head-mounted device is better.
Other features of the present application and advantages thereof will become apparent from the following detailed description of exemplary embodiments thereof, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate embodiments of the application and together with the description, serve to explain the principles of the application.
Fig. 1a is a schematic structural diagram of an intelligent head-mounted device according to an embodiment of the present application;
fig. 1b is a schematic partial structural diagram of an intelligent head-mounted device according to an embodiment of the present application;
fig. 1c is a schematic diagram illustrating a connection between a first rotation mechanism and a first microphone in an intelligent head-mounted device according to an embodiment of the present application;
fig. 2 is a flow chart illustrating the steps of a sound signal processing method according to an embodiment of the present application;
fig. 3 is a schematic block diagram of a sound signal processing apparatus according to an embodiment of the present application;
fig. 4 is a schematic block diagram of an intelligent headset according to an embodiment of the present application.
Detailed Description
Various exemplary embodiments of the present application will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, the numerical expressions, and numerical values set forth in these embodiments do not limit the scope of the present application unless specifically stated otherwise.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the application, its application, or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
In all examples shown and discussed herein, any particular value should be construed as merely illustrative, and not limiting. Thus, other examples of the exemplary embodiments may have different values.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
< hardware configuration >
Figs. 1a to 1c are schematic structural diagrams of an intelligent headset 1 according to an embodiment of the present application.
In one embodiment, as shown in figs. 1a to 1c, the smart headset 1 includes a device body 10; for example, the smart headset 1 is a pair of smart glasses, and the device body 10 includes two temples 100 and a frame 101, where the two temples 100 are respectively located on two sides of the frame 101. A first audio device is arranged on the first temple 100 and specifically includes a first microphone 20 and a first speaker 21; a second audio device is arranged on the second temple 100 and specifically includes a second microphone 30 and a second speaker 31. A first accommodating hole 40 is formed in the first temple 100, a first rotating mechanism 41 is installed in the first accommodating hole 40 and can rotate within it, and the first microphone 20 is arranged on the first rotating mechanism 41; likewise, a second accommodating hole 50 is formed in the second temple 100, a second rotating mechanism 51 is installed in the second accommodating hole 50 and can rotate within it, and the second microphone 30 is arranged on the second rotating mechanism 51.
The range of rotation of the first rotating mechanism 41 and the second rotating mechanism 51 may be, for example, 0 ° to 180 °; the angle of each rotation may be, for example, 30 °.
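By way of illustration only, a target bearing could be mapped onto such a mechanism by clamping it to the available range and snapping it to the nearest step. The sketch below assumes the 0° to 180° range and 30° step mentioned above; the function name and its interface are hypothetical and not part of the disclosure.

```python
# Minimal sketch (assumption): quantize a requested bearing to the
# 0-180 degree range and 30 degree step size of the rotating mechanism.

def quantize_rotation(target_angle_deg: float,
                      min_deg: float = 0.0,
                      max_deg: float = 180.0,
                      step_deg: float = 30.0) -> float:
    """Clamp the requested angle to the mechanism's range and snap it
    to the nearest reachable multiple of the per-rotation step."""
    clamped = max(min_deg, min(max_deg, target_angle_deg))
    steps = round((clamped - min_deg) / step_deg)
    return min_deg + steps * step_deg


if __name__ == "__main__":
    # A sound source estimated at 47 degrees is served by the nearest
    # reachable position, 60 degrees; out-of-range requests are clamped.
    print(quantize_rotation(47.0))   # 60.0
    print(quantize_rotation(200.0))  # 180.0
```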
< method examples >
Referring to fig. 2, according to an embodiment of the present application, there is provided a sound signal processing method applied to an intelligent headset, the method including:
s101, acquiring a target sound source from the environment where the intelligent head-mounted equipment is located;
in the embodiment of the present application, the smart headset to which the processing method of the sound signal is applied may be, for example, smart glasses; the user can listen to the sound by wearing the intelligent glasses. The smart glasses are provided with at least one set of audio devices, for example, a first audio device is provided on the first temple 100, the first audio device specifically includes a first microphone 20 and a first speaker 21, the first microphone 20 and the first speaker 21 are correspondingly provided, wherein the first microphone 20 can rotate relative to the temple 100.
In step S101, the target sound source may be, for example, a voice signal generated when a person in the environment where the user wearing the smart glasses is located speaks; when someone in the user's surroundings starts speaking, a speech signal is generated, which serves as a target sound source, and the microphone, such as the first microphone 20, is controlled to start acquiring the target sound source. Optionally, when the target sound source is acquired, other sound signals except the target sound source are filtered, for example, sounds generated by vehicles in the environment, and sounds of animals are filtered as noises, so as to improve the quality of acquiring the target sound source.
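The disclosure does not specify how non-target sounds are filtered; as one purely illustrative possibility, the sketch below keeps only signal frames whose energy is concentrated in a typical speech band and suppresses the rest as environmental noise. The frame layout, band limits, and threshold are assumptions, not values taken from the application.

```python
# Illustrative sketch (assumption): retain frames dominated by speech-band
# energy and zero out the remaining frames as environmental noise.
import numpy as np


def filter_non_target(frames: np.ndarray, sample_rate: int,
                      band_hz: tuple = (300.0, 3400.0),
                      energy_ratio: float = 0.6) -> np.ndarray:
    """frames: 2-D array (num_frames, frame_len) of the microphone signal."""
    freqs = np.fft.rfftfreq(frames.shape[1], d=1.0 / sample_rate)
    in_band = (freqs >= band_hz[0]) & (freqs <= band_hz[1])
    spectra = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    total = spectra.sum(axis=1) + 1e-12
    ratio = spectra[:, in_band].sum(axis=1) / total
    keep = ratio >= energy_ratio       # frame is mostly speech-band energy
    return frames * keep[:, None]      # suppress the other frames
```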
S102, acquiring the position of a target object, wherein the target object is a target sound source closest to the microphone;
In step S102, for example, if only one person is speaking in the environment of the user wearing the smart glasses, that person is the target object; if a plurality of people are speaking in the environment but they are all located on the same side of the user and at the same distance from the user, the plurality of people are treated together as the target object; and if a plurality of people are speaking in the environment at different distances from the user, the person closest to the user is taken as the target object.
S103, controlling the microphone to rotate, enabling the sound receiving direction of the microphone to face the position of the target object, and controlling the microphone to collect a sound signal of the target object;
in step S103, for example, if the microphone is the first microphone 20, the first rotating mechanism 41 is used to drive the first microphone 20 to rotate, so that the sound receiving direction of the first microphone 20 faces the position of the target object, and then the first microphone 20 is controlled to collect the sound signal of the target object; the sound receiving direction of the first microphone 20 is adjusted to face the target object, so that the definition of the sound signal of the target object collected by the first microphone 20 can be improved, and the quality of the collected sound signal is higher.
And S104, controlling the loudspeaker to play the sound signal of the target object acquired by the microphone.
In step S104, the speaker is, for example, the first speaker 21, and the sound signal of the target object collected by the first microphone 20 is played by the first speaker 21, so that the user can listen to the sound signal of the target object.
In summary, in the processing method of the sound signal provided in the embodiment of the present application, the microphone is set to be capable of rotating on the smart headset, and before the sound signal of the target object is collected, the microphone is rotated first, so that the sound receiving direction of the microphone faces to the position where the target object is located, which can improve the definition of the sound signal of the target object collected by the microphone, and improve the quality of the collected sound signal; therefore, the definition of the sound signal of the target object played by the subsequent loudspeaker is improved, and the experience of listening by wearing the intelligent head-wearing equipment by a user is better.
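As a minimal sketch of how steps S101 to S104 could fit together for one microphone/speaker pair, the following Python example uses hypothetical Microphone and Speaker stubs standing in for the headset's actual driver interface; it illustrates the flow only and is not the claimed implementation.

```python
# Sketch only: one possible realization of steps S101-S104 for a single
# microphone/speaker pair. Microphone and Speaker are hypothetical stubs.
from dataclasses import dataclass
from typing import List


@dataclass
class SoundSource:
    angle_deg: float    # bearing of the target sound source relative to the microphone
    distance_m: float   # estimated distance from the microphone


class Microphone:
    def rotate_to(self, angle_deg: float) -> None:
        print(f"rotating microphone so its sound receiving direction is {angle_deg} deg")

    def capture(self) -> bytes:
        print("collecting the sound signal of the target object")
        return b"..."   # placeholder for captured audio


class Speaker:
    def play(self, signal: bytes) -> None:
        print(f"playing {len(signal)} bytes collected by the microphone")


def process_sound(mic: Microphone, speaker: Speaker,
                  sources: List[SoundSource]) -> None:
    if not sources:          # S101: target sound sources already separated from noise
        return
    # S102: the target object is the target sound source closest to the microphone.
    target = min(sources, key=lambda s: s.distance_m)
    # S103: rotate the microphone toward the target, then collect its sound signal.
    mic.rotate_to(target.angle_deg)
    signal = mic.capture()
    # S104: play the collected signal through the corresponding speaker.
    speaker.play(signal)


if __name__ == "__main__":
    process_sound(Microphone(), Speaker(),
                  [SoundSource(30.0, 2.5), SoundSource(-45.0, 1.0)])
```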
In one embodiment, the apparatus main body is provided with two sets of audio devices, the two sets of audio devices are respectively a first audio device and a second audio device, the first audio device is arranged near a first side of the apparatus main body, and the second audio device is arranged near a second side of the apparatus main body; the first audio device comprises a first microphone and a first loudspeaker, the second audio device comprises a second microphone and a second loudspeaker, and both the first microphone and the second microphone can rotate relative to the equipment body;
the acquiring the position of the target object comprises:
acquiring the position of a first target object, wherein the first target object is the target sound source closest to the first microphone; and/or acquiring the position of a second target object, wherein the second target object is the target sound source closest to the second microphone.
In this embodiment, two sets of audio devices, namely, a first audio device and a second audio device, are provided on the device main body of the smart glasses; for example, a first audio device is provided on the first temple 100, and a second audio device is provided on the second temple 100; for example, in a state where the smart glasses are worn, the first temple 100 is located on the left side of the wearer, and the second temple 100 is located on the right side of the wearer; the first audio device comprises in particular a first microphone 20 and a first loudspeaker 21, and the second audio device comprises in particular a second microphone 30 and a second loudspeaker 31.
For example, if the target object is unique, the unique target object may be both the first target object and the second target object; that is, the target object is the first target object with respect to the first microphone 20 and the second target object with respect to the second microphone 30.
For example, if there are a plurality of target objects distributed at different positions, the target sound source closest to the first microphone 20 is the first target object, for example located on the left side of the wearer, and the target sound source closest to the second microphone 30 is the second target object, for example located on the right side of the wearer.
In one embodiment, the controlling the microphone to rotate, enabling the sound collecting direction of the microphone to face the position of the target object, and controlling the microphone to collect the sound signal of the target object includes:
controlling the first microphone to rotate, enabling the sound collecting direction of the first microphone to face the position of the first target object, and controlling the first microphone to collect a sound signal of the first target object; and/or controlling the second microphone to rotate, enabling the sound receiving direction of the second microphone to face the position of the second target object, and controlling the second microphone to collect the sound signal of the second target object.
If the target object is unique, for example, only one person in the environment where the wearer of the smart glasses is located is speaking; then, there may be two cases: in the first case, the only target object is both the first target object and the second target object. Then, the first microphone 20 is controlled to rotate to acquire its sound signal toward the unique target object, and the second microphone 30 is controlled to rotate to acquire its sound signal toward the unique target object. And, the first speaker 21 is controlled to play the sound signal collected by the first microphone 20, and the second speaker 31 is controlled to play the sound signal collected by the second microphone 30.
In the second case, it is first determined which of the first microphone 20 and the second microphone 30 is closer to the unique target object, for example, if the first microphone 20 is closer to the unique target object and the second microphone 30 is farther from the unique target object, only the first microphone 20 is controlled to rotate to capture the sound signal of the unique target object, and the first speaker 21 is controlled to play the sound signal captured by the first microphone 20.
If there are multiple target objects, for example, two people in the environment of the wearer of the smart eyewear are speaking and the two people are located at different locations; then, a target sound source (one of the speaking persons) at a close distance from the first microphone 20 is taken as a first target object, and the first microphone 20 is controlled to rotate to acquire a sound signal thereof toward the first target object; a target sound source (another speaking person) at a short distance from the second microphone 30 is set as a second target object, and the second microphone 30 is controlled to rotate to pick up its sound signal toward the second target object. The first speaker 21 is controlled to play the sound signal collected by the first microphone 20, and the second speaker 31 is controlled to play the sound signal collected by the second microphone 30.
In one embodiment, in the case where the first target object is different from the second target object, the first target object being near the first microphone and the second target object being near the second microphone, the controlling the microphone to rotate, enabling the sound collecting direction of the microphone to face the position of the target object, and controlling the microphone to collect the sound signal of the target object comprises:
controlling the first microphone to rotate, enabling the sound receiving direction of the first microphone to face the position of the first target object, and controlling the first microphone to collect a sound signal of the first target object; and controlling the second microphone to rotate, enabling the sound receiving direction of the second microphone to face the position of the second target object, and controlling the second microphone to collect the sound signal of the second target object.
In this embodiment, if there are a plurality of target objects, for example, people who are speaking exist on both sides of the wearer of the smart glasses, then the person who is closer to the first microphone 20 (for example, the person who is located on the left side of the wearer) is regarded as the first target object, and the first microphone 20 is controlled to rotate to capture the sound signal thereof toward the first target object; a person who is closer to the second microphone 30 (e.g., a person located on the right side of the wearer) is set as the second target object, and the second microphone 30 is controlled to turn to pick up its sound signal toward the second target object. And, the first speaker 21 is controlled to play the sound signal collected by the first microphone 20, and the second speaker 31 is controlled to play the sound signal collected by the second microphone 30.
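Continuing the hypothetical interface from the earlier sketch, the case of two different target objects could be handled as follows, with each microphone assigned to the target object nearest to it and each loudspeaker playing back its own microphone's signal.

```python
# Sketch (same hypothetical Microphone/Speaker stubs as the earlier sketch):
# two different target objects, one tracked by each microphone.
def handle_two_targets(first_mic, first_spk, second_mic, second_spk,
                       first_target, second_target) -> None:
    first_mic.rotate_to(first_target.angle_deg)    # e.g. person speaking on the wearer's left
    second_mic.rotate_to(second_target.angle_deg)  # e.g. person speaking on the wearer's right
    first_spk.play(first_mic.capture())            # first speaker plays the first target's signal
    second_spk.play(second_mic.capture())          # second speaker plays the second target's signal
```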
In one embodiment, in the case where the first target object is the same as the second target object and the first target object is equidistant from the first microphone and the second microphone, the controlling the microphone to rotate, enabling the sound collecting direction of the microphone to face the position of the target object, and controlling the microphone to collect the sound signal of the target object comprises:
and controlling the first microphone and the second microphone to rotate, enabling the sound receiving direction of the first microphone and the sound receiving direction of the second microphone to face the position of the first target object, and controlling the first microphone and the second microphone to simultaneously acquire the sound signal of the first target object.
In this embodiment, there is a unique target object, for example, only one person in the environment of the wearer of the smart eyewear is speaking, the speaking person is the only target object, and the first microphone 20 and the second microphone 30 are symmetrically distributed with respect to the speaking person, for example, the speaking person is located directly in front of or behind the wearer; or, although a plurality of people are speaking in the environment where the wearer of the smart glasses is located, the plurality of people are all located right in front of or right behind the wearer, and the speaking sounds of the plurality of people can be regarded as the same sound source, that is, the plurality of speaking people also serve as the only target object.
In the present embodiment, the first microphone 20 and the second microphone 30 are controlled to rotate toward the unique target object and the first microphone 20 and the second microphone 30 are controlled to simultaneously capture the sound signal of the unique target object. And, the first speaker 21 is controlled to play the sound signal collected by the first microphone 20, and the second speaker 31 is controlled to play the sound signal collected by the second microphone 30.
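Under the same hypothetical interface, the equidistant single-target case could look like this, with both microphones turned toward the same target object and each loudspeaker playing its own microphone's signal.

```python
# Sketch (same hypothetical stubs): a single target object equidistant from
# both microphones, e.g. directly in front of or behind the wearer, is
# captured by both microphones at the same time.
def handle_equidistant_target(first_mic, first_spk, second_mic, second_spk,
                              target) -> None:
    first_mic.rotate_to(target.angle_deg)
    second_mic.rotate_to(target.angle_deg)
    first_spk.play(first_mic.capture())    # first speaker plays the first microphone's signal
    second_spk.play(second_mic.capture())  # second speaker plays the second microphone's signal
```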
In one embodiment, in the case where the first target object is the same as the second target object and the first microphone is closer to the first target object than the second microphone, the controlling the microphone to rotate, enabling the sound collecting direction of the microphone to face the position of the target object, and controlling the microphone to collect the sound signal of the target object comprises:
controlling the first microphone to rotate, enabling the sound collecting direction of the first microphone to face the position of the first target object, and controlling the first microphone to collect a sound signal of the first target object; and controlling the second microphone to detect the sound signal in the environment where the intelligent head-mounted device is located.
In this embodiment, there is a unique target object, for example, only one person in the environment of the wearer of the smart eyewear is speaking, this speaking person is the only target object, and the position of this speaking person is biased toward one side of the wearer, for example, on the left side of the wearer; or, although a plurality of people are speaking in the environment where the wearer of the smart glasses is located, the plurality of people are all located on the same side of the wearer, for example, all located on the left side of the wearer, and the speaking sounds of the plurality of people can be regarded as the same sound source, that is, the plurality of speaking people also serve as the only target object.
In the present embodiment, the first microphone 20 closer to the unique target object is controlled to rotate towards the target object and the first microphone 20 is controlled to capture the sound signal of the target object, and the first speaker 21 is controlled to play the sound signal captured by the first microphone 20; meanwhile, the second microphone 30 is controlled to be in the detection mode, that is, the second microphone 30 is controlled to detect the sound signals in the surrounding environment, and whether the target sound source exists is determined.
Similarly, if the only target object is located at the right side of the wearer, that is, the target object is relatively close to the second microphone 30, the second microphone 30 is controlled to rotate towards the target object and the second microphone 30 is controlled to collect the sound signal of the target object, and the second speaker 31 is controlled to play the sound signal collected by the second microphone 30; meanwhile, the first microphone 20 is controlled to be in a detection mode, that is, the first microphone 20 is controlled to detect sound signals in the surrounding environment, and whether a target sound source exists is determined.
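Again using the hypothetical interface from the earlier sketch, the one-sided case could be handled as follows; the enter_detection_mode call is an assumed extension of the stub interface, representing the detection mode described above.

```python
# Sketch (same hypothetical stubs): when the single target object sits closer
# to one microphone, that microphone tracks and captures it while the other
# microphone stays in a detection mode, scanning for new target sound sources.
def handle_one_sided_target(near_mic, near_spk, far_mic, target) -> None:
    near_mic.rotate_to(target.angle_deg)   # e.g. the first microphone when the target is on the left
    near_spk.play(near_mic.capture())      # only the corresponding speaker plays the signal
    far_mic.enter_detection_mode()         # assumed call: keep listening for new target sound sources
```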
< apparatus embodiment >
Referring to fig. 3, according to another embodiment of the present application, there is provided a processing apparatus 200 for a sound signal, the signal processing apparatus 200 being applied to a smart headset, the apparatus including:
a first obtaining module 201, configured to obtain a target sound source from an environment in which the smart headset is located;
a second obtaining module 202, configured to obtain a position of a target object, where the target object is the target sound source closest to the microphone;
the first control module 203 is used for controlling the microphone to rotate, enabling the sound collecting direction of the microphone to face the position of the target object, and controlling the microphone to collect a sound signal of the target object;
a second control module 204, configured to control the speaker to play the sound signal of the target object acquired by the microphone.
In the embodiment of the present application, the smart headset applied to the signal processing apparatus 200 may be, for example, smart glasses; the user can listen to the sound by wearing the intelligent glasses.
For the first obtaining module 201, the target sound source may be, for example, a voice signal generated when a person in the environment where the user wearing the smart glasses is located speaks; when a person in the user's surrounding environment starts speaking, a speech signal is generated, which serves as a target sound source, and a microphone, such as the first microphone 20, is controlled to start acquiring the target sound source. Optionally, when the target sound source is obtained, other sound signals except the target sound source are filtered, for example, sounds generated by vehicles in the environment, and barking sounds of animals are filtered as noises, so as to improve the quality of obtaining the target sound source.
For the second obtaining module 202, for example, if only one person is speaking in the environment of the user wearing the smart glasses, that person is the target object; if a plurality of people are speaking in the environment but they are all located on the same side of the user and at the same distance from the user, the plurality of people are treated together as the target object; and if a plurality of people are speaking in the environment at different distances from the user, the person closest to the user is taken as the target object.
For the first control module 203, for example, the microphone is the first microphone 20, then the first rotating mechanism 41 is used to drive the first microphone 20 to rotate, so that the sound receiving direction of the first microphone 20 faces the position of the target object, and then the first microphone 20 is controlled to collect the sound signal of the target object; the sound receiving direction of the first microphone 20 is adjusted to face the target object, so that the definition of the sound signal of the target object collected by the first microphone 20 can be improved, and the quality of the collected sound signal is higher.
For the second control module 204, the speaker is, for example, the first speaker 21, and the first speaker 21 is used to play the sound signal of the target object collected by the first microphone 20, so that the user can listen to the sound signal of the target object.
According to still another embodiment of the present application, referring to fig. 4, there is provided an intelligent headset 300, the intelligent headset 300 including:
a memory 301 for storing executable computer instructions;
a processor 302, configured to execute the sound signal processing method described above under the control of the executable computer instructions.
< computer-readable storage Medium >
According to still another embodiment of the present application, there is provided a computer-readable storage medium having stored thereon computer instructions which, when executed by a processor, perform the method of processing a sound signal as described above.
The disclosed embodiments may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement aspects of embodiments of the disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives the computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations for embodiments of the present disclosure may be assembly instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as a programmable logic circuit, a Field Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA), can be personalized with state information of the computer-readable program instructions and can execute those instructions to implement aspects of the disclosed embodiments.
Various aspects of embodiments of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. It is well known to those skilled in the art that implementation by hardware, implementation by software, and implementation by a combination of software and hardware are equivalent.
Although some specific embodiments of the present application have been described in detail by way of example, it should be understood by those skilled in the art that the above examples are for illustrative purposes only and are not intended to limit the scope of the present application. It will be appreciated by those skilled in the art that modifications may be made to the above embodiments without departing from the scope and spirit of the present application. The scope of the application is defined by the appended claims.

Claims (13)

1. A processing method of sound signals is applied to intelligent head-mounted equipment and is characterized in that the intelligent head-mounted equipment comprises an equipment main body, at least one set of audio devices are arranged on the equipment main body, each audio device comprises a microphone and a loudspeaker, the microphones and the loudspeakers are arranged correspondingly, and the microphones can rotate relative to the equipment main body;
the method comprises the following steps:
acquiring a target sound source from the environment where the intelligent head-mounted equipment is located;
acquiring the position of a target object, wherein the target object is a target sound source closest to the microphone;
controlling the microphone to rotate, enabling the sound receiving direction of the microphone to face the position of the target object, and controlling the microphone to acquire a sound signal of the target object;
and controlling the loudspeaker to play the sound signal of the target object acquired by the microphone.
2. The method for processing the sound signal according to claim 1, wherein the apparatus main body is provided with two sets of audio devices, the two sets of audio devices are respectively a first audio device and a second audio device, the first audio device is disposed near a first side of the apparatus main body, and the second audio device is disposed near a second side of the apparatus main body; the first audio device comprises a first microphone and a first loudspeaker, the second audio device comprises a second microphone and a second loudspeaker, and both the first microphone and the second microphone can rotate relative to the equipment body;
the acquiring the position of the target object comprises:
acquiring the position of a first target object, wherein the first target object is the target sound source closest to the first microphone; and/or acquiring the position of a second target object, wherein the second target object is the target sound source closest to the second microphone.
3. The method for processing the sound signal according to claim 2, wherein the controlling the microphone to rotate to make the sound collecting direction of the microphone face the position of the target object and to control the microphone to collect the sound signal of the target object comprises:
controlling the first microphone to rotate, enabling the sound collecting direction of the first microphone to face the position of the first target object, and controlling the first microphone to collect a sound signal of the first target object; and/or controlling the second microphone to rotate, enabling the sound receiving direction of the second microphone to face the position of the second target object, and controlling the second microphone to collect the sound signal of the second target object.
4. The method according to claim 3, wherein when the first target object is different from the second target object, and the first target object is close to the first microphone, the second target object is close to the second microphone; the controlling the microphone to rotate, enabling the sound collecting direction of the microphone to face the position of the target object, and controlling the microphone to collect the sound signal of the target object comprises:
controlling the first microphone to rotate, enabling the sound collecting direction of the first microphone to face the position of the first target object, and controlling the first microphone to collect a sound signal of the first target object; and controlling the second microphone to rotate, enabling the sound receiving direction of the second microphone to face the position of the second target object, and controlling the second microphone to collect the sound signal of the second target object.
5. The method for processing the sound signal according to claim 4, wherein the controlling the speaker to play the sound signal of the target object collected by the microphone comprises:
controlling the first loudspeaker to play the sound signal of the first target object acquired by the first microphone; and controlling the second loudspeaker to play the sound signal of the second target object acquired by the second microphone.
6. The method according to claim 3, wherein when the first target object is the same as the second target object and the first target object is the same distance from the first microphone and the second microphone; the controlling the microphone to rotate, enabling the sound collecting direction of the microphone to face the position of the target object, and controlling the microphone to collect the sound signal of the target object comprises:
and controlling the first microphone and the second microphone to rotate, enabling the sound receiving direction of the first microphone and the sound receiving direction of the second microphone to face the position of the first target object, and controlling the first microphone and the second microphone to simultaneously acquire the sound signal of the first target object.
7. The method for processing the sound signal according to claim 6, wherein the controlling the speaker to play the sound signal of the target object collected by the microphone comprises:
controlling the first loudspeaker to play the sound signal of the first target object acquired by the first microphone; and controlling the second loudspeaker to play the sound signal of the first target object acquired by the second microphone.
8. The method according to claim 3, wherein in a case where the first target object is the same as the second target object and the first microphone is closer to the first target object than the second microphone; the controlling the microphone to rotate, enabling the sound collecting direction of the microphone to face the position of the target object, and controlling the microphone to collect the sound signal of the target object comprises:
controlling the first microphone to rotate, enabling the sound collecting direction of the first microphone to face the position of the first target object, and controlling the first microphone to collect a sound signal of the first target object; and controlling the second microphone to detect sound signals in the environment where the intelligent head-mounted equipment is located.
9. The method for processing the sound signal according to claim 8, wherein the controlling the speaker to play the sound signal of the target object collected by the microphone comprises:
and controlling the first loudspeaker to play the sound signal of the first target object acquired by the first microphone.
10. The method for processing the sound signal according to claim 1, wherein the obtaining a target sound source from an environment in which the smart headset is located comprises: and filtering sound signals except for the target sound source from the environment where the intelligent head-mounted equipment is located.
11. A processing device of sound signals is applied to intelligent head-mounted equipment, and is characterized by comprising:
the first acquisition module is used for acquiring a target sound source from the environment where the intelligent head-mounted equipment is located;
a second obtaining module, configured to obtain a position of a target object, where the target object is the target sound source closest to the microphone;
the first control module is used for controlling the microphone to rotate, enabling the sound receiving direction of the microphone to face the position of the target object, and controlling the microphone to acquire a sound signal of the target object;
and the second control module is used for controlling the loudspeaker to play the sound signal of the target object acquired by the microphone.
12. An intelligent headset, comprising:
a memory for storing executable computer instructions;
a processor for performing the method of processing a sound signal according to any one of claims 1-10 under the control of the executable computer instructions.
13. A computer-readable storage medium, having stored thereon computer instructions which, when executed by a processor, perform a method of processing a sound signal according to any one of claims 1 to 10.
CN202211153614.4A 2022-09-21 2022-09-21 Sound signal processing method, processing device, intelligent head-mounted equipment and medium Pending CN115657995A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211153614.4A CN115657995A (en) 2022-09-21 2022-09-21 Sound signal processing method, processing device, intelligent head-mounted equipment and medium

Publications (1)

Publication Number Publication Date
CN115657995A true CN115657995A (en) 2023-01-31

Family

ID=84983643

Country Status (1)

Country Link
CN (1) CN115657995A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination