CN115243134A - Signal processing method and device, intelligent head-mounted equipment and medium - Google Patents

Signal processing method and device, intelligent head-mounted equipment and medium

Info

Publication number
CN115243134A
Authority
CN
China
Prior art keywords
target object
audio signal
acquiring
signal processing
position information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210767544.5A
Other languages
Chinese (zh)
Inventor
童紫薇 (Tong Ziwei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Goertek Inc
Original Assignee
Goertek Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Goertek Inc filed Critical Goertek Inc
Priority to CN202210767544.5A
Publication of CN115243134A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00 Details of transducers, loudspeakers or microphones
    • H04R1/08 Mouthpieces; Microphones; Attachments therefor
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R3/00 Circuits for transducers, loudspeakers or microphones
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2430/00 Signal processing covered by H04R, not provided for in its groups
    • H04R2430/01 Aspects of volume control, not necessarily automatic, in sound systems
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2499/00 Aspects covered by H04R or H04S not otherwise provided for in their subgroups
    • H04R2499/10 General applications
    • H04R2499/15 Transducers incorporated in visual displaying devices, e.g. televisions, computer displays, laptops

Abstract

The application discloses a signal processing method and apparatus, an intelligent head-mounted device and a medium. The signal processing method is applied to the intelligent head-mounted device and comprises the following steps: acquiring position information of a target object, wherein the position information comprises direction information of the target object relative to the intelligent head-mounted device and/or distance information between the target object and the intelligent head-mounted device; judging, according to the position information, whether the target object is located in a target sound pickup area; acquiring an audio signal of the target object under the condition that the target object is located in the target sound pickup area; and amplifying the audio signal of the target object according to a first audio signal to obtain a second audio signal.

Description

Signal processing method and device, intelligent head-mounted equipment and medium
Technical Field
The present application relates to the field of electronic product technologies, and in particular, to a signal processing method and apparatus, an intelligent head-mounted device, and a medium.
Background
In recent years, with the development of science and technology, intelligent wearable devices have brought great convenience to people's lives, and intelligent head-mounted devices, as one kind of intelligent wearable device, have become increasingly popular. An intelligent head-mounted device can be regarded as a miniature smart device that integrates a display screen, a loudspeaker, a microphone, Bluetooth, a lithium battery and the like to provide functions such as multimedia playback, calls, map navigation and social interaction.
Existing intelligent head-mounted devices play sound indiscriminately and can only play audio information stored in their own systems. This greatly limits the usage scenarios of such devices and degrades the user experience.
In view of the above, a new technical solution is needed to solve the above technical problems.
Disclosure of Invention
An object of the present application is to provide a new technical solution for a signal processing method, apparatus, smart headset and medium.
According to a first aspect of the present application, there is provided a signal processing method applied to a smart headset, the method including:
acquiring position information of a target object; wherein the position information comprises direction information of the target object relative to the smart headset and/or distance information between the target object and the smart headset;
judging, according to the position information, whether the target object is located in a target sound pickup area;
under the condition that the target object is located in a target sound pickup area, acquiring an audio signal of the target object;
and amplifying the audio signal of the target object according to a first audio signal to obtain a second audio signal.
Optionally, the method further comprises the step of acquiring the first audio signal,
the acquiring the first audio signal comprises:
and under the condition that a preset audio signal is configured in the intelligent head-mounted device, taking the preset audio signal as the first audio signal.
Optionally, the acquiring the first audio signal further includes:
providing a first configuration interface under the condition that a preset audio signal is not configured in the intelligent head-mounted device;
the hearing test data selected by the first configuration interface is acquired as the first audio signal.
Optionally, the method further comprises:
when the target object is located outside the target sound pickup area, outputting prompt information; the prompt information is used for prompting that the target object is beyond the target sound pickup area.
Optionally, the smart headset comprises at least two first microphones,
the acquiring of the audio signal of the target object comprises:
acquiring the audio signal of the target object through at least one of the at least two first microphones.
Optionally, the smart headset includes a second microphone, and the second microphone is an azimuth directional microphone;
the acquiring of the audio signal of the target object comprises:
and acquiring an audio signal of the target object through the second microphone.
Optionally, the method further comprises:
and acquiring audio signals other than that of the target object, and attenuating the audio signals other than that of the target object.
Optionally, the acquiring the position information of the target object includes: acquiring an eyeball image collected by an eyeball tracking module;
identifying the position information of the user's gaze point from the eyeball image;
displaying a target picture on the intelligent head-mounted device, and acquiring, according to the gaze point position information, the position at which the user's gaze point falls within the target picture;
and acquiring the position information of the target object according to that position.
According to a second aspect of the present application, there is provided a signal processing apparatus applied to a smart headset, the apparatus including:
the acquisition module is used for acquiring the position information of the target object; wherein the position information comprises direction information of the target object relative to the smart headset and/or distance information between the target object and the smart headset;
the judging module is used for judging whether the target object is positioned in a target sound pickup area or not according to the position information;
the signal acquisition module is used for acquiring an audio signal of the target object under the condition that the target object is positioned in a target sound pickup area;
and an amplifying module, configured to amplify the audio signal of the target object according to a first audio signal to obtain a second audio signal.
According to a third aspect of the present application, there is provided a smart headset comprising:
a memory for storing executable computer instructions;
a processor for performing the signal processing method of the first aspect under the control of the executable computer instructions.
According to a fourth aspect of the present application, there is provided a computer-readable storage medium having stored thereon computer instructions which, when executed by a processor, perform the signal processing method of the first aspect.
The signal processing method and apparatus, the intelligent head-mounted device and the medium provided by the embodiments of the present application can perform targeted amplification of an audio signal according to the target object at which the user is gazing. Selective sound amplification according to the user's needs is thereby achieved, which widens the usage scenarios of the intelligent head-mounted device and improves the user's experience of using it.
Further features of the present application and advantages thereof will become apparent from the following detailed description of exemplary embodiments thereof, which is to be read in connection with the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate embodiments of the application and together with the description, serve to explain the principles of the application.
Fig. 1 is a schematic diagram illustrating a hardware configuration of an intelligent head-mounted device according to an embodiment of the present application;
FIG. 2 is a schematic flow chart illustrating steps of a signal processing method according to the present application;
FIG. 3 is a schematic block diagram of a signal processing apparatus according to the present application;
fig. 4 is a schematic block diagram of an intelligent headset according to an embodiment of the present application.
Detailed Description
Various exemplary embodiments of the present application will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, the numerical expressions, and numerical values set forth in these embodiments do not limit the scope of the present application unless specifically stated otherwise.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the application, its application, or uses.
Techniques, methods, and apparatus known to one of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
In all examples shown and discussed herein, any particular value should be construed as merely illustrative, and not limiting. Thus, other examples of the exemplary embodiments may have different values.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
< hardware configuration >
Fig. 1 is a block diagram of a hardware configuration of a smart headset 1000 according to an embodiment of the present disclosure.
In one embodiment, as shown in fig. 1, the smart headset 1000 may include a processor 1100, a memory 1200, an interface device 1300, an input device 1400, a speaker 1500, a microphone 1600, and the like.
The processor 1100 may include, but is not limited to, a central processing unit (CPU), a microcontroller unit (MCU), and the like. The memory 1200 includes, for example, a ROM (read-only memory), a RAM (random access memory), a nonvolatile memory such as a hard disk, and the like. The interface device 1300 includes, for example, various bus interfaces. The input device 1400 includes, for example, a touch screen, a keyboard, a handle, and the like. The smart headset 1000 may output audio information through the speaker 1500 and collect audio information through the microphone 1600.
It should be understood by those skilled in the art that although a plurality of devices of the smart headset 1000 are shown in fig. 1, the smart headset 1000 of the present specification may only refer to some of the devices, and may also include other devices, which are not limited herein.
In this embodiment, the memory 1200 of the smart headset 1000 is configured to store instructions for controlling the processor 1100 to operate to implement or support implementation of the signal processing method according to any of the embodiments. The skilled person can design the instructions according to the solution disclosed in the present specification. How the instructions control the operation of the processor is well known in the art and will not be described in detail herein.
< method examples >
Referring to fig. 2, according to an embodiment of the present application, there is provided a signal processing method applied to a smart headset, the method including:
s101, acquiring position information of a target object; wherein the position information comprises direction information of the target object relative to the smart headset and/or distance information between the target object and the smart headset;
in the embodiment of the present application, the smart headset to which the signal processing method is applied may be, for example, smart glasses; the user can listen to the sound by wearing the intelligent glasses.
In step S101, the target object is, for example, an object at which the eyes of the user wearing the smart glasses are gazing; the target object is located outside the smart glasses, for example a conversation partner talking to the wearer. The smart head-mounted device includes an eyeball tracking module, which may be used when the position information of the target object is acquired. The eyeball tracking module may include, for example, a processor, a camera device and an infrared lamp: the infrared lamp emits an infrared light signal toward the user's eyes; the camera device collects images of the user's eyes illuminated by the infrared light; and the processor tracks the user's eyeballs according to the eye images. The position information of the target object is thereby acquired by eye tracking. The position information may be the direction information of the target object relative to the smart glasses, the distance information between the target object and the smart glasses, or both.
The method of acquiring the position information of the target object by eye tracking specifically includes the following steps:
acquiring an eyeball image collected by the eyeball tracking module, where the eyeball image is an image of the eyeballs of the user currently wearing the smart head-mounted device; identifying the position information of the user's gaze point from the eyeball image; displaying a target picture on the smart head-mounted device and, according to the gaze point position information, determining where the user's gaze point falls within the target picture; and acquiring the position information of the target object according to that gaze position within the target picture.
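As a concrete illustration of the last part of this procedure (mapping the gaze point within the target picture to the target object's direction), the Python sketch below assumes a simple linear, pinhole-style mapping and illustrative camera fields of view; neither the function nor its parameters are given in this application.

```python
def gaze_to_direction(gaze_x, gaze_y, frame_w, frame_h,
                      hfov_deg=90.0, vfov_deg=60.0):
    """Map a gaze point (in pixels of the target picture) to an assumed
    azimuth/elevation of the target object relative to the headset.
    hfov_deg / vfov_deg are illustrative fields of view of the scene camera."""
    # Normalise to [-1, 1], with (0, 0) at the centre of the picture.
    nx = 2.0 * gaze_x / frame_w - 1.0
    ny = 2.0 * gaze_y / frame_h - 1.0
    azimuth_deg = nx * (hfov_deg / 2.0)     # left/right angle
    elevation_deg = -ny * (vfov_deg / 2.0)  # up/down angle (screen y grows downwards)
    return azimuth_deg, elevation_deg

# Example: gaze point slightly right of centre in a 1280x720 picture.
print(gaze_to_direction(800, 360, 1280, 720))  # -> (11.25, 0.0)
```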
S102, judging whether the target object is located in a target sound pickup area or not according to the position information;
In step S102, based on the acquired position information of the target object, it is determined whether the target object is located within a target sound pickup area, for example the pickup range of a microphone provided on the smart glasses: whether the direction of the target object relative to the smart glasses falls within the pickup range of the microphone, and whether the distance of the target object from the smart glasses falls within that range.
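A minimal sketch of this check, assuming the target sound pickup area is described by a maximum azimuth offset and a maximum distance (both thresholds are illustrative, not values given in this application):

```python
def in_pickup_area(azimuth_deg, distance_m,
                   max_azimuth_deg=30.0, max_distance_m=3.0):
    """Return True if the target object lies inside the assumed pickup range
    of the microphone: direction within +/-max_azimuth_deg and distance
    within max_distance_m."""
    return abs(azimuth_deg) <= max_azimuth_deg and distance_m <= max_distance_m

print(in_pickup_area(11.25, 1.5))  # -> True
print(in_pickup_area(75.0, 1.5))   # -> False (direction outside the range)
```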
S103, under the condition that the target object is located in the target sound pickup area, collecting an audio signal of the target object;
in step S103, if the target object is located within the target sound pickup area, acquiring an audio signal of the target object; for example, if the target object is located in the sound pickup range of a microphone provided on smart glasses, the microphone is driven to pick up an audio signal emitted from the target object.
And S104, according to the first audio signal, amplifying the audio signal of the target object to obtain a second audio signal.
In step S104, the acquired audio signal emitted by the target object is amplified so that the resulting second audio signal approaches, or is equal to, the first audio signal.
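The sketch below shows one way such an amplification could be realized block by block, treating the first audio signal as a target level in dBFS; the reference level, the gain ceiling and the RMS-based level estimate are assumptions for illustration, not details specified in this application.

```python
import numpy as np

def amplify_to_reference(target_block, reference_dbfs=-20.0, max_gain_db=30.0):
    """Amplify one block of the target object's audio so that its RMS level
    approaches the reference level (the 'first audio signal'). The gain is
    never negative and is capped at max_gain_db."""
    rms = np.sqrt(np.mean(np.square(target_block))) + 1e-12
    current_dbfs = 20.0 * np.log10(rms)
    gain_db = float(np.clip(reference_dbfs - current_dbfs, 0.0, max_gain_db))
    gain = 10.0 ** (gain_db / 20.0)
    # Clip to the valid sample range after amplification.
    return np.clip(target_block * gain, -1.0, 1.0)

# Example: a quiet sine block is raised towards the -20 dBFS reference.
t = np.arange(16000) / 16000.0
quiet = 0.01 * np.sin(2 * np.pi * 440 * t)
louder = amplify_to_reference(quiet)
```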
A specific application scenario may be, for example: in a conference attended by multiple people, a user wearing the smart glasses can gaze at a speaker and have the speaker's voice amplified, helping the user receive the speaker's speech more clearly.
Another specific application scenario may be, for example: for a hearing-impaired user, the voice of the person communicating with the user can be directionally picked up and amplified based on the user's gaze, thereby providing a hearing-assistance function for the hearing-impaired user.
Optionally, in this application scenario, after the voice of the other party communicating with the user is directionally picked up, the attribute of the target object is determined, for example whether it is a person or a device; the picked-up voice of the other party is then amplified.
Optionally, in this application scenario, before the voice of the other party communicating with the user is directionally picked up and amplified, the decibel level at the microphone is determined first; only if this decibel level is below a decibel threshold is the other party's voice directionally picked up and amplified.
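A minimal sketch of that gating decision, assuming the decibel value is measured as block RMS in dBFS and the threshold is an illustrative figure (a real device would calibrate it to a sound pressure level):

```python
import numpy as np

def needs_amplification(mic_block, threshold_dbfs=-35.0):
    """Return True if the picked-up voice is quiet enough that amplification
    should be applied, i.e. its level is below the assumed threshold."""
    rms = np.sqrt(np.mean(np.square(mic_block))) + 1e-12
    level_dbfs = 20.0 * np.log10(rms)
    return level_dbfs < threshold_dbfs
```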
A further specific application scenario may be, for example: in an environment containing multiple audio sources, the audio information that interests the user can be selectively amplified according to the user's gaze.
In summary, the signal processing method provided by the embodiment of the present application can perform targeted amplification of an audio signal according to the target object at which the user is gazing. Selective sound amplification according to the user's needs is thereby achieved, which widens the usage scenarios of the intelligent head-mounted device and improves the user's experience of using it.
In one embodiment, the method further comprises the step of acquiring the first audio signal,
the acquiring the first audio signal comprises:
and under the condition that a preset audio signal is configured in the intelligent head-mounted device, taking the preset audio signal as the first audio signal.
In this specific example, before the audio signal of the target object is amplified, it is first determined whether a preset audio signal is configured in the smart headset; the preset audio signal is, for example, a pre-stored decibel level to which the audio signal is to be amplified. If a preset audio signal is configured in the smart head-mounted device, the preset audio signal is used as the first audio signal; that is, the audio signal of the target object is amplified according to the preset audio signal to obtain the second audio signal, so that the second audio signal approaches or equals the preset audio signal.
In one embodiment, the obtaining the first audio signal further comprises:
providing a first configuration interface under the condition that a preset audio signal is not configured in the intelligent head-mounted equipment;
the hearing test data selected by the first configuration interface is acquired as the first audio signal.
In this specific example, before the audio signal of the target object is amplified, it is first determined whether a preset audio signal is configured in the smart headset. If no preset audio signal is configured in the smart head-mounted device, a first configuration interface is provided; the first configuration interface may be, for example, a pull-down menu through which the user's hearing test data is selected as the first audio signal. The audio signal of the target object is then amplified according to the user's hearing test data to obtain the second audio signal, so that the second audio signal approaches or matches the user's hearing test data.
Optionally, when no preset audio signal is configured in the smart headset, it is first determined whether the user's hearing test data is configured in the smart headset; if the hearing test data is not configured either, the audio signal of the target object is not amplified but is played at its original level. This avoids amplifying the audio signal of the target object beyond the user's hearing tolerance and harming the user's hearing.
Further, when the audio signal of the target object is amplified according to the user's hearing test data, a corresponding gain is applied to the audio signal of the target object; the gain value is calculated from the user's hearing test data through a WDRC (Wide Dynamic Range Compression) parameter configuration formula.
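This application does not spell out the WDRC parameter configuration formula, so the sketch below only illustrates the general shape of a WDRC gain characteristic (constant gain below a compression knee, compressed gain above it); the parameter values and their derivation from the hearing test data are assumptions for illustration.

```python
def wdrc_gain_db(input_level_db, knee_db=55.0, gain_below_db=20.0,
                 compression_ratio=2.0):
    """Simplified WDRC characteristic: constant gain below the compression
    knee; above it, the output level grows by 1/compression_ratio dB per
    input dB. In a real fitting the three parameters would be derived from
    the user's hearing test data by a prescriptive formula."""
    if input_level_db <= knee_db:
        return gain_below_db
    excess = input_level_db - knee_db
    return gain_below_db - excess * (1.0 - 1.0 / compression_ratio)

print(wdrc_gain_db(50.0))  # -> 20.0 dB (below the knee, full gain)
print(wdrc_gain_db(70.0))  # -> 12.5 dB (compressed above the knee)
```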
In one embodiment, the method further comprises:
outputting prompt information under the condition that the target object is located outside the target sound pickup area; the prompt information is used for prompting that the target object is beyond the target sound pickup area.
In this specific example, after the position information of the target object is acquired, whether the target object is located within the target sound pickup area is determined based on that position information. If the target object is outside the target sound pickup area, for example outside the pickup range of a microphone arranged on the smart glasses, a prompt message is output to notify the user that the selected target object is beyond the microphone's pickup range and that its audio signal cannot be amplified. The prompt message may be, for example, a voice prompt or a text prompt.
If the target object is outside the target sound pickup area, the user can adjust his or her position so that the target object falls within the target sound pickup area.
In one embodiment, the smart headset includes at least two first microphones,
the acquiring of the audio signal of the target object comprises:
acquiring an audio signal of the target object by at least one of at least two first microphones.
In this specific example, a plurality of microphones may be disposed on the smart glasses, with different microphones collecting audio signals from different directions; one or more suitable microphones can then be selected according to the position information of the target object to collect the audio signal.
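A minimal sketch of such a selection, assuming each first microphone has a nominal pickup axis and a fixed beam width (the orientations and the 60-degree width are illustrative):

```python
def select_microphones(target_azimuth_deg, mic_axes_deg, beam_width_deg=60.0):
    """Return the indices of the microphones whose pickup sector covers the
    target object's azimuth."""
    selected = []
    for idx, mic_az in enumerate(mic_axes_deg):
        # Smallest signed angular difference between the mic axis and the target.
        diff = (target_azimuth_deg - mic_az + 180.0) % 360.0 - 180.0
        if abs(diff) <= beam_width_deg / 2.0:
            selected.append(idx)
    return selected

# Two first microphones pointing left (-45 deg) and right (+45 deg).
print(select_microphones(30.0, [-45.0, 45.0]))  # -> [1]
```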
In one embodiment, the smart headset includes a second microphone, and the second microphone is an azimuth directional microphone;
the acquiring of the audio signal of the target object comprises:
and acquiring an audio signal of the target object through the second microphone.
In this specific example, an azimuth-directional microphone may be provided on the smart glasses and, according to the position information of the target object, steered to collect the audio signal from the azimuth that matches that position information.
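One common way to realize electronically steerable, azimuth-directional pickup with several microphone elements is delay-and-sum beam steering; the sketch below assumes a far-field source and elements laid out along one axis of the glasses frame. These are illustrative simplifications, not details taken from this application.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def delay_and_sum(mic_signals, mic_positions_m, target_azimuth_deg, fs):
    """Align and average the microphone signals so that sound arriving from
    the target azimuth (measured from broadside of the array) adds coherently.
    mic_signals: list of equal-length 1-D arrays; mic_positions_m: element
    positions along the array axis in metres; fs: sample rate in Hz."""
    theta = np.deg2rad(target_azimuth_deg)
    out = np.zeros_like(np.asarray(mic_signals[0], dtype=float))
    for sig, x in zip(mic_signals, mic_positions_m):
        # A plane wave from the target direction reaches this element earlier
        # by x*sin(theta)/c; delay it by that amount to align with the origin.
        lead_samples = int(round(x * np.sin(theta) / SPEED_OF_SOUND * fs))
        out += np.roll(np.asarray(sig, dtype=float), lead_samples)
        # np.roll wraps around at the block edges; a real implementation would
        # use fractional-delay filtering on a continuous stream.
    return out / len(mic_signals)
```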
In one embodiment, the method further comprises:
and acquiring audio signals other than that of the target object, and attenuating the audio signals other than that of the target object.
In this specific example, in addition to amplifying the audio signal of the target object, the audio signals other than that of the target object are attenuated, thereby reducing noise while the desired audio signal is amplified.
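A minimal sketch of combining the two signals, with the audio from outside the target object attenuated by a fixed amount before mixing (the 12 dB figure is an illustrative assumption):

```python
import numpy as np

def mix_with_attenuation(target_block, other_block, attenuation_db=12.0):
    """Mix the target object's audio with the remaining (non-target) audio
    after attenuating the latter, so the wanted signal stands out."""
    atten = 10.0 ** (-attenuation_db / 20.0)
    return np.clip(target_block + atten * other_block, -1.0, 1.0)
```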
< apparatus embodiment >
Referring to fig. 3, according to another embodiment of the present application, there is provided a signal processing apparatus 200, the signal processing apparatus 200 being applied to an intelligent headset, the apparatus including:
an obtaining module 201, configured to obtain position information of a target object; wherein the position information comprises direction information of the target object relative to the smart headset and/or distance information between the target object and the smart headset;
the judging module 202 is configured to judge whether the target object is located in a target sound pickup area according to the position information;
the signal acquisition module 203 is used for acquiring an audio signal of the target object under the condition that the target object is located in a target sound pickup area;
and an amplification module 204, configured to amplify the audio signal of the target object according to a first audio signal to obtain a second audio signal.
In the embodiment of the present application, the smart headset to which the signal processing apparatus 200 is applied may be, for example, a pair of smart glasses; the user can listen to sounds by wearing the smart glasses.
For the acquisition module 201, the target object is, for example, an object at which the eyes of the user wearing the smart glasses are gazing. When the position information of the target object is acquired, an eyeball tracking module may be used; the eyeball tracking module may include, for example, a processor, a camera device and an infrared lamp: the infrared lamp emits an infrared light signal toward the user's eyes, the camera device collects images of the user's eyes illuminated by the infrared light, and the processor tracks the user's eyeballs according to the eye images. The position information of the target object is thereby acquired by eye tracking. The position information may be the direction information of the target object relative to the smart glasses, the distance information between the target object and the smart glasses, or both.
The determining module 202 determines, according to the acquired position information of the target object, whether the target object is located in a target sound pickup area, where the target sound pickup area is, for example, the pickup range of a microphone provided on the smart glasses: it checks whether the direction of the target object relative to the smart glasses falls within the pickup range of the microphone, and whether the distance of the target object from the smart glasses falls within that range.
For the signal acquisition module 203, if the target object is located in the target sound pickup area, acquiring an audio signal of the target object; for example, if the target object is located in the sound pickup range of a microphone provided on the smart glasses, the microphone is driven to pick up an audio signal emitted from the target object.
For the amplification module 204, the acquired audio signal emitted by the target object is amplified so that the resulting second audio signal approaches, or is equal to, the first audio signal.
A specific application scenario may be, for example: in a conference attended by multiple people, a user wearing the smart glasses can gaze at a speaker and have the speaker's voice amplified, helping the user receive the speaker's speech more clearly.
Another specific application scenario may be, for example: for a hearing-impaired user, the voice of the person communicating with the user can be directionally picked up and amplified based on the user's gaze, thereby providing a hearing-assistance function for the hearing-impaired user.
A further specific application scenario may be, for example: in an environment containing multiple audio sources, the audio information that interests the user can be selectively amplified according to the user's gaze.
In summary, the signal processing apparatus provided by the embodiment of the present application can perform targeted amplification of an audio signal according to the target object at which the user is gazing. Selective sound amplification according to the user's needs is thereby achieved, which widens the usage scenarios of the intelligent head-mounted device and improves the user's experience of using it.
According to yet another embodiment of the present application, referring to fig. 4, there is provided an intelligent headset 300, the intelligent headset 300 including:
a memory 301 for storing executable computer instructions;
a processor 302, configured to perform the signal processing method described above under the control of the executable computer instructions.
< computer-readable storage Medium >
According to yet another embodiment of the present application, there is provided a computer-readable storage medium having stored thereon computer instructions which, when executed by a processor, perform the signal processing method as described above.
Embodiments of the present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied therewith for causing a processor to implement aspects of embodiments of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device over a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives the computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of embodiments of the present disclosure may be assembly instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as a programmable logic circuit, a field-programmable gate array (FPGA), or a programmable logic array (PLA), can be personalized with state information of the computer-readable program instructions, and this electronic circuitry may execute the computer-readable program instructions to implement aspects of the embodiments of the present disclosure.
Various aspects of embodiments of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. It is well known to those skilled in the art that implementation by hardware, implementation by software, and implementation by a combination of software and hardware are equivalent.
Although some specific embodiments of the present application have been described in detail by way of example, it should be understood by those skilled in the art that the above examples are for illustrative purposes only and are not intended to limit the scope of the present application. It will be appreciated by those skilled in the art that modifications may be made to the above embodiments without departing from the scope and spirit of the present application. The scope of the application is defined by the appended claims.

Claims (11)

1. A signal processing method is applied to intelligent head-mounted equipment and is characterized by comprising the following steps:
acquiring position information of a target object; wherein the position information comprises direction information of the target object relative to the smart headset and/or distance information between the target object and the smart headset;
judging whether the target object is positioned in a target sound pickup area or not according to the position information;
under the condition that the target object is located in a target sound pickup area, acquiring an audio signal of the target object;
and amplifying the audio signal of the target object according to a first audio signal to obtain a second audio signal.
2. The signal processing method of claim 1, further comprising the step of obtaining the first audio signal,
the acquiring the first audio signal comprises:
and under the condition that a preset audio signal is configured in the intelligent head-mounted device, taking the preset audio signal as the first audio signal.
3. The signal processing method of claim 2, wherein the obtaining the first audio signal further comprises:
providing a first configuration interface under the condition that a preset audio signal is not configured in the intelligent head-mounted device;
the hearing test data selected by the first configuration interface is acquired as the first audio signal.
4. The signal processing method of claim 1, further comprising:
outputting prompt information under the condition that the target object is located outside a target sound pickup area; the prompt information is used for prompting that the target object is beyond the target sound pickup area.
5. The signal processing method of claim 1, wherein the smart headset comprises at least two first microphones,
the acquiring of the audio signal of the target object comprises:
collecting the audio signal of the target object through at least one of the at least two first microphones.
6. The signal processing method of claim 1, wherein the smart headset comprises a second microphone, and the second microphone is an azimuth directional microphone;
the acquiring of the audio signal of the target object comprises:
and acquiring an audio signal of the target object through the second microphone.
7. The signal processing method of claim 1, further comprising:
and acquiring audio signals except the target object, and attenuating the audio signals except the target object.
8. The signal processing method according to claim 1, wherein the acquiring the position information of the target object includes:
acquiring an eyeball image acquired by an eyeball tracking module;
identifying and obtaining the position information of the point of regard of the user from the eyeball image;
the intelligent head-mounted equipment displays a target picture, and acquires the distribution position of the gaze point of the eyeball of the user in the target picture according to the gaze point position information;
and acquiring the position information of the target object according to the distribution position.
9. A signal processing device applied to intelligent head-mounted equipment is characterized by comprising:
the acquisition module is used for acquiring the position information of the target object; wherein the position information comprises direction information of the target object relative to the smart headset and/or distance information between the target object and the smart headset;
the judging module is used for judging whether the target object is positioned in a target sound pickup area or not according to the position information;
the signal acquisition module is used for acquiring an audio signal of the target object under the condition that the target object is positioned in a target sound pickup area;
and an amplifying module, configured to amplify the audio signal of the target object according to a first audio signal to obtain a second audio signal.
10. An intelligent head-mounted device, the intelligent head-mounted device comprising:
a memory for storing executable computer instructions;
a processor for performing the signal processing method of any one of claims 1-8 under the control of the executable computer instructions.
11. A computer-readable storage medium, having stored thereon computer instructions which, when executed by a processor, perform the signal processing method of any one of claims 1-8.
CN202210767544.5A 2022-06-30 2022-06-30 Signal processing method and device, intelligent head-mounted equipment and medium Pending CN115243134A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210767544.5A CN115243134A (en) 2022-06-30 2022-06-30 Signal processing method and device, intelligent head-mounted equipment and medium


Publications (1)

Publication Number Publication Date
CN115243134A true CN115243134A (en) 2022-10-25

Family

ID=83672223

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210767544.5A Pending CN115243134A (en) 2022-06-30 2022-06-30 Signal processing method and device, intelligent head-mounted equipment and medium

Country Status (1)

Country Link
CN (1) CN115243134A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117357880A (en) * 2023-12-07 2024-01-09 深圳失重魔方网络科技有限公司 Motion state identification method and system based on intelligent equipment
CN117357880B (en) * 2023-12-07 2024-02-09 深圳失重魔方网络科技有限公司 Motion state identification method and system based on intelligent equipment


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination