CN113380218A - Signal processing method and system, and processing device - Google Patents

Signal processing method and system, and processing device

Info

Publication number: CN113380218A
Authority: CN (China)
Prior art keywords: signal, sound field, field data, processing, server
Legal status: Pending (an assumption, not a legal conclusion)
Application number: CN202010116682.8A
Other languages: Chinese (zh)
Inventor: 余勇
Current Assignee: Alibaba Group Holding Ltd
Original Assignee: Alibaba Group Holding Ltd
Application filed by Alibaba Group Holding Ltd; priority to CN202010116682.8A; published as CN113380218A

Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10K: SOUND-PRODUCING DEVICES; METHODS OR DEVICES FOR PROTECTING AGAINST, OR FOR DAMPING, NOISE OR OTHER ACOUSTIC WAVES IN GENERAL; ACOUSTICS NOT OTHERWISE PROVIDED FOR
    • G10K11/00: Methods or devices for transmitting, conducting or directing sound in general; Methods or devices for protecting against, or for damping, noise or other acoustic waves in general
    • G10K11/16: Methods or devices for protecting against, or for damping, noise or other acoustic waves in general
    • G10K11/175: Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00: Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48: Speech or voice analysis techniques specially adapted for particular use
    • G10L25/51: Speech or voice analysis techniques specially adapted for comparison or discrimination
    • G10L25/66: Speech or voice analysis techniques specially adapted for comparison or discrimination, for extracting parameters related to health condition

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Acoustics & Sound (AREA)
  • Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Signal Processing (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Epidemiology (AREA)
  • Circuit For Audible Band Transducer (AREA)

Abstract

The application discloses a signal processing method and system, and a processing device. The method includes the following steps: a processing device acquires first sound field data of an audio signal to be processed; the processing device obtains an ultrasonic signal corresponding to the first sound field data; and the processing device transmits the ultrasonic signal, where the ultrasonic signal is demodulated into a target audio signal. The method and device solve the technical problem in the related art of poor signal-processing performance when eliminating snoring.

Description

Signal processing method and system, and processing device
Technical Field
The present application relates to the field of signal processing, and in particular, to a signal processing method and system, and a processing device.
Background
People sleeping in the same room or the same bed often have difficulty falling asleep if the sleeping environment is noisy, for example if one or more of them snores.
Existing sleep aids generally rely either on medication or on a worn device. With medication, the user takes a sleep aid before going to bed so as not to be disturbed by others; with a worn device, the snorer wears equipment that reduces the noise the snoring imposes on other people and on the environment.
However, medication affects the body of the person taking it, while a worn device is usually suitable only for a single-snorer scenario, is uncomfortable to wear, and may even disturb the snorer's own sleep.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The embodiments of the present application provide a signal processing method, a signal processing system, and a signal processing device, so as to solve at least the technical problem in the related art of poor signal-processing performance when eliminating snoring.
According to one aspect of the embodiments of the present application, a signal processing method is provided, including: a processing device acquires first sound field data of an audio signal to be processed; the processing device obtains an ultrasonic signal corresponding to the first sound field data; and the processing device transmits the ultrasonic signal, where the ultrasonic signal is demodulated into a target audio signal.
According to another aspect of the embodiments of the present application, a signal processing method is also provided, including: a processing device acquires first sound field data of an audio signal to be processed; the processing device sends the first sound field data to a target device and receives a signal generation algorithm returned by the target device, where the target device sends the first sound field data to a server and receives the signal generation algorithm returned by the server, and the server processes the first sound field data to generate the signal generation algorithm; the processing device generates an ultrasonic signal based on the signal generation algorithm; and the processing device transmits the ultrasonic signal, where the ultrasonic signal is demodulated into a target audio signal.
According to another aspect of the embodiments of the present application, a signal processing method is also provided, including: a target device receives first sound field data of an audio signal to be processed, sent by a processing device; the target device sends the first sound field data to a server and receives a signal generation algorithm returned by the server, where the server processes the first sound field data to generate the signal generation algorithm; and the target device sends the signal generation algorithm to the processing device, where the processing device transmits an ultrasonic signal based on the signal generation algorithm and the ultrasonic signal is demodulated into a target audio signal.
According to another aspect of the embodiments of the present application, a processing device is also provided, including: a signal collection apparatus configured to collect first sound field data of an audio signal to be processed; a processor connected to the signal collection apparatus and configured to obtain an ultrasonic signal corresponding to the first sound field data; and an ultrasonic transmission apparatus connected to the processor and configured to transmit the ultrasonic signal, where the ultrasonic signal is demodulated into a target audio signal.
According to another aspect of the embodiments of the present application, a signal processing system is also provided, including: a processing device configured to acquire first sound field data of an audio signal to be processed; a target device communicatively connected to the processing device and a server, and configured to send the first sound field data to the server and receive a signal generation algorithm returned by the server; and the server, configured to process the first sound field data to generate the signal generation algorithm; the processing device is further configured to generate an ultrasonic signal based on the signal generation algorithm and transmit it, where the ultrasonic signal is demodulated into a target audio signal.
According to another aspect of the embodiments of the present application, a signal processing system is also provided, including: a processing device configured to acquire first sound field data of an audio signal to be processed; and a server communicatively connected to the processing device and configured to process the first sound field data to generate a signal generation algorithm; the processing device is further configured to generate an ultrasonic signal based on the signal generation algorithm and transmit it, where the ultrasonic signal is demodulated into a target audio signal.
According to another aspect of the embodiments of the present application, a storage medium is also provided, including a stored program, where, when the program runs, the device on which the storage medium resides is controlled to execute the signal processing method described above.
According to another aspect of the embodiments of the present application, a computing device is also provided, including a memory and a processor, where the processor runs a program stored in the memory, and the program, when running, executes the signal processing method described above.
According to another aspect of the embodiments of the present application, a signal processing system is also provided, including: a processor; and a memory coupled to the processor and configured to provide the processor with instructions for the following processing steps: acquiring first sound field data of an audio signal to be processed; acquiring an ultrasonic signal corresponding to the first sound field data; and transmitting the ultrasonic signal, where the ultrasonic signal is demodulated into a target audio signal at the user.
According to another aspect of the embodiments of the present application, a signal processing method is also provided, including: a processing device acquires first data of a sensory signal to be processed; the processing device obtains a target sensory signal corresponding to the first data; and the processing device outputs the target sensory signal.
In the embodiments of the present application, after acquiring the first sound field data of the audio signal to be processed, the processing device may obtain an ultrasonic signal corresponding to the first sound field data and transmit it, and the ultrasonic signal can be demodulated into a target audio signal, thereby masking and eliminating the snoring. Notably, because the snoring is masked and eliminated using an ultrasound-based directional sound propagation technique, the masking effect is improved and the snorer's own sleep is not disturbed, which solves the technical problem in the related art of poor signal-processing performance when eliminating snoring.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a block diagram of a hardware structure of a computer terminal for implementing a signal processing method according to an embodiment of the present application;
FIG. 2 is a flow chart of a first method of signal processing according to an embodiment of the present application;
FIG. 3 is a schematic diagram of an application scenario of snore elimination according to an embodiment of the present application;
FIG. 4 is a schematic diagram of another scenario for snore elimination according to an embodiment of the present application;
FIG. 5 is a block diagram of a snore relieving system according to an embodiment of the present application;
FIG. 6 is a block diagram of another snore relieving system according to an embodiment of the present application;
FIG. 7 is a flow chart of a second method of signal processing according to an embodiment of the present application;
FIG. 8 is a flow chart of a third method of signal processing according to an embodiment of the present application;
FIG. 9 is a schematic view of an operation interface of a smart speaker with a screen according to an embodiment of the present application;
FIG. 10 is a schematic diagram of a first signal processing apparatus according to an embodiment of the present application;
FIG. 11 is a schematic diagram of a second signal processing apparatus according to an embodiment of the present application;
FIG. 12 is a schematic diagram of a third signal processing apparatus according to an embodiment of the present application;
FIG. 13 is a schematic view of a processing apparatus according to an embodiment of the present application;
FIG. 14 is a schematic view of another processing apparatus according to an embodiment of the present application;
FIG. 15 is a schematic diagram of an in-ear active snore canceller according to an embodiment of the application;
fig. 16 is a schematic diagram of an active snore eliminator based on a smart speaker according to an embodiment of the present application;
FIG. 17 is a schematic diagram of a signal processing system according to an embodiment of the present application;
FIG. 18 is a schematic diagram of another signal processing system according to an embodiment of the present application;
FIG. 19 is a flow chart of a fourth method of signal processing according to an embodiment of the present application; and
fig. 20 is a block diagram of a computer terminal according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only partial embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
First, some terms appearing in the description of the embodiments of the present application are explained as follows:
Sound field: the region of air in which sound waves exist. The physical quantities that describe a sound field include sound pressure, particle velocity, displacement, and medium density, and they are generally functions of position and time.
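These quantities are related to one another. For example, for a plane progressive wave in air, sound pressure and particle velocity are proportional (a standard acoustics relation, added here for context and not stated in the patent):

```latex
p(\mathbf{x}, t) = \rho_0 \, c \, v(\mathbf{x}, t)
```

where $\rho_0$ is the equilibrium density of the medium and $c$ is the speed of sound.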
Example 1
In accordance with an embodiment of the present application, a signal processing method is provided. It should be noted that the steps shown in the flowcharts of the drawings may be executed in a computer system such as a set of computer-executable instructions, and, although a logical order is shown in the flowcharts, in some cases the steps shown or described may be executed in an order different from the one here.
The method provided by the embodiments of the present application may be executed on a mobile terminal, a computer terminal, or a similar computing device. Fig. 1 shows a block diagram of the hardware structure of a computer terminal (or mobile device) for implementing the signal processing method. As shown in fig. 1, the computer terminal 10 (or mobile device 10) may include one or more processors 102 (shown as 102a, 102b, …, 102n; the processors 102 may include, but are not limited to, a processing device such as a microprocessor (MCU) or a programmable logic device (FPGA)), a memory 104 for storing data, and a transmission device 106 for communication functions. In addition, it may also include: a display, an input/output interface (I/O interface), a Universal Serial Bus (USB) port (which may be included as one of the ports of the I/O interface), a network interface, a power supply, and/or a camera. It will be understood by those skilled in the art that the structure shown in fig. 1 is only an illustration and does not limit the structure of the electronic device. For example, the computer terminal 10 may also include more or fewer components than shown in fig. 1, or have a different configuration from that shown in fig. 1.
It should be noted that the one or more processors 102 and/or other data processing circuitry described above may be referred to generally herein as "data processing circuitry". The data processing circuitry may be embodied, in whole or in part, in software, hardware, firmware, or any combination thereof. Furthermore, the data processing circuitry may be a single stand-alone processing module, or it may be incorporated, in whole or in part, into any of the other elements of the computer terminal 10 (or mobile device). As referred to in the embodiments of the present application, the data processing circuitry acts as a kind of processor control (for example, selection of a variable-resistance termination path connected to the interface).
The memory 104 may be used to store software programs and modules of application software, such as program instructions/data storage devices corresponding to the signal processing method in the embodiment of the present application, and the processor 102 executes various functional applications and data processing by running the software programs and modules stored in the memory 104, so as to implement the signal processing method described above. The memory 104 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the computer terminal 10 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used for receiving or transmitting data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the computer terminal 10. In one example, the transmission device 106 includes a Network adapter (NIC) that can be connected to other Network devices through a base station to communicate with the internet. In one example, the transmission device 106 can be a Radio Frequency (RF) module, which is used to communicate with the internet in a wireless manner.
The display may be, for example, a touch screen type Liquid Crystal Display (LCD) that may enable a user to interact with a user interface of the computer terminal 10 (or mobile device).
It should be noted here that, in some alternative embodiments, the computer device (or mobile device) shown in fig. 1 may include hardware elements (including circuitry), software elements (including computer code stored on a computer-readable medium), or a combination of both. It should also be noted that fig. 1 is only one example, intended to illustrate the types of components that may be present in the computer device (or mobile device) described above.
In the above operating environment, the present application provides a signal processing method as shown in fig. 2. Fig. 2 is a flowchart of a first signal processing method according to an embodiment of the present application. As shown in fig. 2, the method comprises the steps of:
step S202, acquiring first sound field data of an audio signal to be processed by processing equipment;
the signal processing method provided by the embodiment of the application is mainly applied to the application scene of snore elimination, but is not limited to the application scene, and can also be applied to other cited scenes needing to eliminate sound. In the embodiment of the present application, an application scenario as shown in fig. 3 and 4 is taken as an example for explanation.
In the above application scenario, a conventional sleep aid is usually worn by the snorer; it is uncomfortable to wear and may even disturb the snorer's sleep, so its snore-stopping effect is poor. To solve these problems, the present application provides an active snore eliminator, i.e., the processing device in the above steps. In an alternative embodiment, the processing device may be a wearable device worn by a user who does not wish to be disturbed by snoring. For example, as shown in fig. 3, to improve the snore-masking effect, the processing device 10 may be an earphone; optionally, to improve wearing comfort, the processing device 10 may be an in-ear earphone worn in the user's ear canal, i.e., the embodiments of the present application provide an in-ear active snore canceller. In another alternative embodiment, the processing device may be a mobile terminal, a home device, or the like placed near a user who does not wish to be disturbed by snoring. For example, as shown in fig. 4, to reduce implementation cost, the processing device 10 may be a smart speaker, i.e., the embodiments of the present application also provide an active snore eliminator based on a smart speaker. In addition, since the processing device does not need to be worn by the snorer, the snorer's sleep quality is not affected. In an actual application scenario, the choice can be made according to the usage requirements.
The audio signal to be processed in the above step may be an audio signal that the user wishes to eliminate; for example, in a snore-elimination scenario, it may be the snore signal produced by a snorer.
In an alternative embodiment, to ensure the signal-processing effect near the user's ear, a sound collection apparatus may be provided on the processing device, and the first sound field data near the user's ear is collected by this apparatus, where the first sound field data may include, but is not limited to: sound pressure, particle velocity, displacement, medium density, and so on.
Step S204: the processing device obtains an ultrasonic signal corresponding to the first sound field data;
To improve the signal-processing effect, the processing device provided by the embodiments of the present application can process signals using the directional sound propagation technique based on ultrasound. The basic principle is that an audible sound signal is modulated onto an ultrasonic carrier signal and emitted into the air by an ultrasonic transmitter; because of the nonlinear acoustic effect of air, ultrasonic waves of different frequencies interact and self-demodulate as they propagate, generating new sound waves at the sum (sum frequency) and the difference (difference frequency) of the original ultrasonic frequencies. As long as the ultrasonic frequencies are chosen appropriately, the difference-frequency sound wave falls in the audible range of 20 Hz to 20,000 Hz. In this way, directional sound propagation is achieved by virtue of the high directivity of the ultrasonic wave itself.
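As an illustration of this principle, the following sketch amplitude-modulates an audible signal onto a 40 kHz carrier and checks that the difference frequency between the carrier and a sideband falls back into the audible band. The carrier frequency, sample rate, and modulation depth are illustrative assumptions, not values from the patent:

```python
import math

FS = 192_000          # sample rate, high enough for a 40 kHz carrier (assumed)
CARRIER_HZ = 40_000   # a typical parametric-speaker ultrasonic carrier (assumed)
AUDIBLE_LO, AUDIBLE_HI = 20, 20_000

def modulate(audio, carrier_hz=CARRIER_HZ, fs=FS, depth=0.8):
    """Amplitude-modulate an audible signal onto an ultrasonic carrier.

    The nonlinearity of air later self-demodulates the envelope, so the
    difference frequency between carrier and sideband equals the original
    audio frequency and falls back into the audible band.
    """
    return [(1.0 + depth * a) * math.sin(2 * math.pi * carrier_hz * n / fs)
            for n, a in enumerate(audio)]

def difference_frequency(f1_hz, f2_hz):
    """Difference frequency produced by two interacting ultrasonic tones."""
    return abs(f1_hz - f2_hz)

# A 1 kHz audible tone: after modulation its upper sideband sits at 41 kHz.
audio = [math.sin(2 * math.pi * 1000 * n / FS) for n in range(FS // 100)]
tx = modulate(audio)

# The difference frequency 41 kHz - 40 kHz = 1 kHz is audible again.
assert AUDIBLE_LO <= difference_frequency(CARRIER_HZ, CARRIER_HZ + 1000) <= AUDIBLE_HI
```

In a real parametric speaker the demodulation is performed by the nonlinearity of the air itself; the code only models the transmit side.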
In an alternative embodiment, a processor is arranged in the processing device. After the sound collection apparatus collects the first sound field data, the processor may process it, determine the amplitude and phase of the snoring sound, and then generate a corresponding ultrasonic signal based on that amplitude and phase.
In another alternative embodiment, the processing device is provided with a processor and a communication module. After the sound collection apparatus collects the first sound field data, the data can be sent to a cloud server through the communication module; the cloud server processes the first sound field data, the processing device receives the data returned by the cloud server through the communication module, and the processor can then generate the corresponding ultrasonic signal based on the returned data.
The first approach reduces the number of devices deployed in the application scenario and is more flexible; however, it increases the amount of processing performed on the first sound field data, consumes more of the processing device's resources, and raises its cost. The second approach reduces the processing device's computational load and cost, but because data must be exchanged with the cloud server, a target device must additionally be deployed in the application scenario for data forwarding when the processing device is a wearable or home device. In an actual application scenario, the choice can be made according to the usage requirements.
In step S206, the processing device transmits an ultrasonic signal, wherein the ultrasonic signal is demodulated into a target audio signal.
In a snore-elimination scenario, sounds such as music or spoken stories can be played beside the user's ears to cover the snoring, thereby masking and eliminating it; alternatively, another snore sound with the same amplitude as, and opposite phase to, the snoring (a phase difference of 180 degrees) can be played at the user's ear, and the superposition of the two cancels the snoring. Accordingly, the target audio signal in the above steps may be a sound such as music or spoken stories, or another snore sound with the same amplitude as and opposite phase to the snoring. In the embodiments of the present application, the target audio signal is taken to be a snore sound as an example.
In an alternative embodiment, the collected sound field data of the snoring can be processed to generate another snore sound, and an ultrasonic signal carrying that sound is then generated by modulation. Using the directional sound propagation technique based on ultrasound, the processing device may transmit the ultrasonic signal toward the vicinity of the user's ear, where it self-demodulates into the other snore sound.
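The superposition idea above can be verified numerically: a sinusoid plus its equal-amplitude, 180°-shifted copy sums to (numerical) silence. The 90 Hz snore fundamental, the sample rate, and the buffer length below are illustrative assumptions:

```python
import math

FS = 48_000  # sample rate (illustrative)

def tone(freq_hz, n_samples, amp=1.0, phase=0.0, fs=FS):
    """A pure tone standing in for one frequency component of the snore."""
    return [amp * math.sin(2 * math.pi * freq_hz * k / fs + phase)
            for k in range(n_samples)]

# Snore component and the emitted counter-sound: same amplitude,
# opposite phase (a phase difference of pi radians, i.e. 180 degrees).
snore = tone(90, 480)
anti = tone(90, 480, phase=math.pi)

# At the listener's ear the two superpose; the residual is (numerically) zero.
residual_peak = max(abs(s + a) for s, a in zip(snore, anti))
```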
Based on the scheme provided by the embodiments of the present application, after acquiring the first sound field data of the audio signal to be processed, the processing device can obtain the ultrasonic signal corresponding to the first sound field data and transmit it, and the ultrasonic signal can be demodulated into the target audio signal, thereby masking and eliminating the snoring. Notably, because the snoring is masked and eliminated using an ultrasound-based directional sound propagation technique, the masking effect is improved and the snorer's own sleep is not disturbed, which solves the technical problem in the related art of poor signal-processing performance when eliminating snoring.
In the above embodiments of the present application, the processing device obtains the ultrasonic signal corresponding to the first sound field data as follows: the processing device sends the first sound field data to the server and receives a signal generation algorithm returned by the server, where the server processes the first sound field data to generate the signal generation algorithm; the processing device then generates the ultrasonic signal based on the signal generation algorithm.
For example, still taking the snore-elimination scenario as an example: to reduce the cost of the processing device and increase deployment flexibility, the processing device may be connected to a cloud server; the cloud server processes the first sound field data, and the processing device is only responsible for collecting the first sound field data and generating the ultrasonic signal.
Optionally, for wearable devices such as earphones, or for home devices other than the smart speaker, which cannot communicate with the internet directly, a target device may be deployed in the application scenario to relay data between the cloud server and the processing device. That is, the processing device sends the first sound field data to the target device and receives the signal generation algorithm returned by the target device, where the target device sends the first sound field data to the server and receives the signal generation algorithm returned by the server.
The target device in the above step may be a smart speaker deployed in the application scenario, or a user's mobile terminal such as a smartphone, tablet computer, handheld computer, or notebook computer, but is not limited to these; it may also be another device. The processing device may connect to the target device via Bluetooth Low Energy (BLE), but is not limited to this and may connect in other ways.
The signal generation algorithm in the above step may be an algorithm for generating the target audio signal. For example, when the target audio signal is a snore sound, it may be an anti-sound-field algorithm that generates a target audio signal with the same amplitude as, and the opposite phase to, the audio signal to be processed. In an alternative embodiment, the cloud server may extract the sound field features of the first sound field data through modeling and run the anti-sound-field computation on the extracted features, thereby obtaining the signal generation algorithm.
In an optional embodiment, the processing device may collect the snore sound field near the user's ear through a sound collection device and upload the collected data to the cloud server through the internet. The cloud server processes the snore sound field to generate the signal generation algorithm. The processing device executes the signal generation algorithm and transmits ultrasonic signals, forming a sound signal with the same amplitude as and the opposite phase to the snore near the user's ear, thereby shielding and eliminating the snore.
In another optional embodiment, the processing device may collect the snore sound field near the user's ear through a sound collection device and send it to the smart speaker over BLE, and the smart speaker uploads the collected data to the cloud server through the internet. The cloud server processes the snore sound field to generate the signal generation algorithm; the smart speaker downloads the algorithm from the cloud server and forwards it to the processing device over BLE. The processing device executes the signal generation algorithm and transmits ultrasonic signals, forming a sound signal with the same amplitude as and the opposite phase to the snore near the user's ear, thereby shielding and eliminating the snore.
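The relay role played by the smart speaker in this embodiment can be sketched with in-memory queues standing in for the BLE and internet links. All names here are illustrative assumptions, not the actual protocol:

```python
from queue import Queue

def speaker_relay(ble_up: Queue, net_up: Queue, net_down: Queue, ble_down: Queue) -> None:
    """One relay round: forward sound field data up, forward the algorithm down."""
    net_up.put(ble_up.get())      # earphone -> speaker -> cloud (sound field data)
    ble_down.put(net_down.get())  # cloud -> speaker -> earphone (generation algorithm)

# Simulated round trip: the earphone uploads over BLE, the cloud's reply
# is pre-filled here to keep the sketch single-threaded.
ble_up, net_up, net_down, ble_down = Queue(), Queue(), Queue(), Queue()
ble_up.put({"sound_field": [0.1, -0.2]})
net_down.put({"algorithm": "anti_sound_field_v1"})
speaker_relay(ble_up, net_up, net_down, ble_down)
```

A real deployment would replace the queues with a BLE GATT link on one side and an HTTPS session on the other, but the data flow is the same.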
In the above embodiment of the present application, after the processing device transmits the ultrasonic signal, the method further includes the steps of: the processing device acquires second sound field data of the processed audio signal; the processing device sends the second sound field data to a server, wherein the server is configured to update the signal generation algorithm based on the second sound field data.
For example, still taking the snore elimination scenario as an example: to further improve the snore shielding effect, after transmitting the ultrasonic signal the processing device may measure the residual snore sound field (that is, acquire the above-mentioned second sound field data) and feed it back to the cloud server, which then updates the signal generation algorithm.
Optionally, for wearable devices such as earphones, or home devices other than the smart speaker, the processing device sends the second sound field data to the target device, and the target device is configured to send the second sound field data to the server.
It should be noted that the processing device may perform snore shielding and elimination again according to the updated signal generation algorithm, repeating the above scheme until the snore is completely shielded.
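The measure-update-emit loop described above can be sketched as a closed feedback loop. The halving residual below is a stand-in for a real acoustic measurement, and all names are hypothetical:

```python
def cancel_until_masked(measure_residual, refine, emit, threshold=0.01, max_rounds=20):
    """Repeat cancellation with a refined algorithm until the residual
    snore level drops below the threshold (or the round budget runs out)."""
    for _ in range(max_rounds):
        residual = measure_residual()
        if residual < threshold:
            break
        emit(refine(residual))

# Stub environment: each refined emission halves the residual sound field.
state = {"residual": 1.0}
cancel_until_masked(
    measure_residual=lambda: state["residual"],
    refine=lambda r: r / 2,
    emit=lambda level: state.update(residual=level),
)
```

The round budget guards against oscillation if the acoustic environment changes faster than the algorithm converges.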
In the above embodiments of the present application, the acquiring, by the processing device, the ultrasonic signal corresponding to the first sound field data includes: the processing equipment processes the first sound field data to generate a signal generation algorithm; the processing device generates an ultrasonic signal based on a signal generation algorithm.
For example, still taking the snore elimination scenario as an example: the processing device may collect the snore sound field near the user's ear through a sound collection device, process the snore sound field to generate the signal generation algorithm, then execute the algorithm and transmit ultrasonic signals, forming a sound signal with the same amplitude as and the opposite phase to the snore near the user's ear, thereby shielding and eliminating the snore.
In the above embodiment of the present application, after the processing device transmits the ultrasonic signal, the method further includes the steps of: the processing device acquires second sound field data of the processed audio signal; the processing device updates the signal generation algorithm based on the second sound field data.
For example, still taking the snore elimination scenario as an example: to further improve the snore shielding effect, after transmitting the ultrasonic signal the processing device may measure the residual snore sound field (that is, acquire the above-mentioned second sound field data) and update the signal generation algorithm based on that residual.
Similarly, the processing device may perform the snore masking and elimination again according to the updated signal generation algorithm, and repeat the above scheme until the snore is completely masked.
A preferred embodiment of the present application will be described in detail below with reference to fig. 3 and 5, taking an intelligent speaker and an in-ear active snore eliminator as examples. As shown in fig. 3, when a couple sleeps, if one of them snores, the other can wear the in-ear active snore eliminator to sleep undisturbed, and the placement position of the intelligent sound box can be chosen according to the user's needs. As shown in fig. 5, the in-ear active snore eliminator 10 connects to the smart speaker 20 through BLE and uploads the collected snore sound field data; the intelligent sound box uploads the collected snore sound field data to the cloud 30; the cloud processes the snore sound field data to generate a signal generation algorithm; the intelligent sound box downloads the cloud algorithm and sends it to the in-ear active snore eliminator over BLE; the in-ear active snore eliminator receives and executes the cloud algorithm, emitting modulated ultrasonic signals that form sound signals with the same amplitude as and the opposite phase to the snore sound field beside the user's ears, thereby shielding and eliminating the snore; this scheme is executed cyclically until the snore is fully shielded.
Therefore, the in-ear active snore eliminator provided by this embodiment of the application has the advantages of comfortable wearing, good shielding effect, low cost, and flexible deployment, and does not affect the snorer's sleep.
A preferred embodiment of the present application will be described in detail below with reference to fig. 4 and 6, taking an active snore eliminator based on a smart speaker as an example. As shown in fig. 4, when a couple sleeps, if one of them snores, the smart-speaker-based active snore eliminator can be placed near the other, and its position can be chosen according to the user's needs. As shown in fig. 6, the smart-speaker-based active snore eliminator 10 connects to the cloud 30 through WIFI and uploads the collected snore sound field data; the cloud processes the snore sound field data to generate a signal generation algorithm; the active snore eliminator downloads the cloud algorithm, executes it, and emits modulated ultrasonic signals that form sound signals with the same amplitude as and the opposite phase to the snore sound field beside the user's ears, thereby shielding and eliminating the snore; this scheme is executed cyclically until the snore is fully shielded.
Therefore, the smart-speaker-based active snore eliminator provided by this embodiment of the application has the advantages of good shielding effect, low cost, and flexible deployment, and does not affect the snorer's sleep.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present application.
Example 2
According to the embodiment of the application, a signal processing method is further provided.
Fig. 7 is a flowchart of a second signal processing method according to an embodiment of the present application. As shown in fig. 7, the method includes the steps of:
step S702, acquiring first sound field data of an audio signal to be processed by processing equipment;
the signal processing method provided by the embodiment of the application is mainly applied to the application scene of snore elimination, but is not limited to the application scene, and can also be applied to other cited scenes needing to eliminate sound. In the embodiment of the present application, an application scenario as shown in fig. 3 is taken as an example for explanation.
The processing device in the above step may be an active snore eliminator that cannot connect directly to the cloud server. In an alternative embodiment, it may be a wearable device worn by a user who does not want to be affected by snoring. For example, as shown in fig. 3, the processing device 10 may be an earphone, which improves the snore shielding effect; optionally, the processing device 10 may be an in-ear earphone worn in the user's ear canal, which improves wearing comfort. In another alternative embodiment, the processing device may be a home device that cannot connect directly to the cloud server and is placed near the user who does not want to be affected by snoring.
The audio signal to be processed in the above step may be an audio signal that the user wishes to eliminate, for example, in an application scenario of snore elimination, the audio signal to be processed may be a snore signal sent by a snorer. The first sound field data may include, but is not limited to: sound pressure, particle vibration velocity, displacement or medium density, etc.
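A hypothetical container for such first sound field data might look like the following. The field names merely mirror the quantities listed above and are not part of the claimed method:

```python
from dataclasses import dataclass

@dataclass
class SoundFieldData:
    """Illustrative snapshot of a measured sound field."""
    sound_pressure_pa: list    # sound pressure samples, in pascals
    particle_velocity: list    # particle vibration velocity samples
    sample_rate_hz: int = 48_000  # assumed capture rate

snapshot = SoundFieldData(
    sound_pressure_pa=[0.02, -0.01],
    particle_velocity=[1e-4, -5e-5],
)
```

Displacement and medium density could be added as further optional fields in the same way.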
Step S704, the processing device sends first sound field data to a target device and receives a signal generation algorithm returned by the target device, wherein the target device is used for sending the first sound field data to a server and receiving the signal generation algorithm returned by the server, and the server is used for processing the first sound field data and generating the signal generation algorithm;
the target device in the above steps may be an intelligent sound box deployed in the application scene, or a mobile terminal of the user, such as a smart phone, tablet computer, palmtop computer, or notebook computer, but is not limited thereto and may be other devices. The processing device may connect to the target device through Bluetooth Low Energy (BLE), but may also connect in other manners. The signal generation algorithm may be an algorithm for generating the target audio signal; for example, when the audio signal to be processed is a snore, the algorithm may be an anti-sound-field algorithm, by which a target audio signal having the same amplitude as, and the opposite phase to, the audio signal to be processed can be generated.
Step S706, the processing equipment generates an ultrasonic signal based on a signal generation algorithm;
in step S708, the processing device transmits an ultrasonic signal, which is demodulated into a target audio signal.
The target audio signal in the above step may be music, a story voice, or a sound signal with the same amplitude as and the opposite phase to the snore.
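The ultrasonic signal is typically produced by amplitude-modulating the audible target signal onto an ultrasonic carrier, which the air then self-demodulates back into audible sound (the parametric-speaker principle). A minimal sketch, with an assumed 40 kHz carrier and 192 kHz sample rate (both illustrative, not taken from the application):

```python
import numpy as np

FS = 192_000         # sample rate (assumption)
CARRIER_HZ = 40_000  # ultrasonic carrier frequency (assumption)

def modulate(audio: np.ndarray, depth: float = 0.8) -> np.ndarray:
    """Amplitude-modulate an audible signal onto an ultrasonic carrier."""
    t = np.arange(len(audio)) / FS
    carrier = np.sin(2 * np.pi * CARRIER_HZ * t)
    # Normalise the audio and shift it so the modulation envelope stays positive.
    envelope = 1.0 + depth * audio / np.max(np.abs(audio))
    return envelope * carrier

tone = np.sin(2 * np.pi * 200 * np.arange(FS // 100) / FS)  # 10 ms of a 200 Hz tone
ultrasound = modulate(tone)
```

The modulation depth trades distortion in the demodulated audio against output level; real parametric speakers also pre-process the envelope to reduce that distortion.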
In the above embodiment of the present application, after the processing device transmits the ultrasonic signal, the method further includes the steps of: the processing device acquires second sound field data of the processed audio signal; and the processing device sends the second sound field data to the target device, wherein the target device is used for sending the second sound field data to the server, and the server is also used for updating the signal generation algorithm based on the second sound field data.
For example, still taking the snore elimination scenario as an example: to further improve the snore shielding effect, after transmitting the ultrasonic signal the processing device may measure the residual snore sound field (that is, acquire the above-mentioned second sound field data) and feed it back to the cloud server, which then updates the signal generation algorithm.
It should be noted that the preferred embodiments described in the above examples of the present application are the same as the schemes, application scenarios, and implementation procedures provided in example 1, but are not limited to the schemes provided in example 1.
Example 3
According to the embodiment of the application, a signal processing method is further provided.
Fig. 8 is a flowchart of a third signal processing method according to an embodiment of the present application. As shown in fig. 8, the method includes the steps of:
step S802, the target device receives first sound field data of the audio signal to be processed, which is sent by the processing device;
the signal processing method provided by this embodiment of the application is mainly applied in the snore elimination scenario, but is not limited thereto and can also be applied to other scenarios in which a sound needs to be eliminated. In this embodiment, the application scenario shown in fig. 3 is taken as an example.
The target device may be an intelligent sound box deployed in the application scene, or a mobile terminal of the user, such as a smart phone, tablet computer, palmtop computer, or notebook computer, but is not limited thereto and may be other devices. The processing device in the above step may be an active snore eliminator that cannot connect directly to the cloud server; in an alternative embodiment, it may be a wearable device worn by a user who does not want to be affected by snoring. For example, as shown in fig. 3, the processing device 10 may be an earphone, which improves the snore shielding effect; optionally, the processing device 10 may be an in-ear earphone worn in the user's ear canal, which improves wearing comfort. The processing device may connect to the target device through Bluetooth Low Energy, but may also connect in other manners. In another alternative embodiment, the processing device may be a home device that cannot connect directly to the cloud server and is placed near the user who does not want to be affected by snoring.
The audio signal to be processed in the above step may be an audio signal that the user wishes to eliminate, for example, in an application scenario of snore elimination, the audio signal to be processed may be a snore signal sent by a snorer. The first sound field data may include, but is not limited to: sound pressure, particle vibration velocity, displacement or medium density, etc.
Step S804, the target device sends the first sound field data to a server and receives a signal generation algorithm returned by the server, wherein the server is used for processing the first sound field data and generating the signal generation algorithm;
the signal generation algorithm may be an algorithm for generating the target audio signal; for example, when the audio signal to be processed is a snore, the algorithm may be an anti-sound-field algorithm, by which a target audio signal having the same amplitude as, and the opposite phase to, the audio signal to be processed can be generated.
Step S806, the target device sends a signal generation algorithm to the processing device, wherein the processing device is configured to transmit an ultrasonic signal based on the signal generation algorithm, and the ultrasonic signal is demodulated into a target audio signal.
The target audio signal in the above step may be music, a story voice, or a sound signal with the same amplitude as and the opposite phase to the snore.
In the above embodiment of the present application, after the target device sends the signal generation algorithm to the processing device, the method further includes the following steps: the target device receives second sound field data of the processed audio signal sent by the processing device; the target device sends the second sound field data to the server, wherein the server is used for updating the signal generation algorithm based on the second sound field data.
For example, still taking the snore elimination scenario as an example: to further improve the snore shielding effect, after the processing device transmits the ultrasonic signal, it may measure the residual snore sound field (that is, acquire the above-mentioned second sound field data) and feed it back to the cloud server, which then updates the signal generation algorithm.
In the above-described embodiments of the present application, before the target apparatus receives the first sound field data of the audio signal to be processed sent by the processing apparatus, the method further includes the steps of: the target equipment outputs prompt information, wherein the prompt information is used for confirming whether the audio signal to be processed is processed or not; after receiving the confirmation information corresponding to the prompt information, the target device establishes a connection with the processing device based on the confirmation information.
For example, still taking the snore elimination scenario as an example: to further reduce the battery consumption of the processing device, the user may decide whether to trigger snore elimination through the smart speaker or the mobile terminal. In an optional embodiment, the intelligent sound box may output a prompt on its display or by voice broadcast, asking the user whether to trigger snore elimination. If the user confirms, either on the display screen of the intelligent sound box or by voice, the intelligent sound box establishes a BLE connection with the processing device upon receiving the confirmation, and snore shielding and elimination then proceed according to the above scheme.
As shown in fig. 9, taking an intelligent sound box with a screen as an example for explanation, prompt information of whether snore shielding and elimination are needed or not may be displayed in a display screen, and if a user determines that snore elimination is needed, a yes button may be clicked, so that the snore elimination scheme provided by the embodiment of the present application is adopted to perform snore shielding and elimination; if the user determines that snore elimination is not required, the "no" button can be clicked to operate in the normal mode.
In the above embodiment of the present application, after receiving a processing instruction for processing an audio signal to be processed, a target device outputs prompt information; and/or in the case that the preset time is up, the target device outputs prompt information.
The processing instruction in the above step may be an instruction generated by the user operating the target device. In an alternative embodiment, when the user is about to sleep, the processing instruction may be generated by voice or touch control; after receiving it, the target device determines that the user is ready to sleep and may output a prompt to confirm whether snore elimination is required.
As shown in fig. 9, still taking the smart sound box with a screen as an example for explanation, the user may switch the state of the smart sound box by clicking the sleep mode displayed on the display screen, at this time, the smart sound box determines that the user is ready to sleep, further displays the prompt information of "whether to perform snore shielding and elimination", and waits for the user to select "yes" or "no".
The preset time in the above steps may be a sleeping time preset by the user. In an alternative embodiment, when the preset time is reached, the target device determines that the user is ready to sleep, and may further output a prompt message to confirm whether snore elimination is required by the user.
As shown in fig. 9, still taking the smart sound box with a screen as an example for explanation, a user may set an alarm clock by clicking a setting button on a display screen, and when the alarm clock arrives, the smart sound box determines that the user is ready to sleep, further displays a prompt message of "whether snore shielding and elimination is needed", and waits for the user to select "yes" or "no".
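The two trigger conditions above (an explicit processing instruction, or the preset bedtime arriving) can be sketched as a single predicate; the names and types are illustrative assumptions:

```python
import datetime

def should_prompt(got_instruction: bool, now: datetime.time, bedtime: datetime.time) -> bool:
    """Prompt the user when either trigger condition holds:
    an explicit instruction, or the preset sleeping time has arrived."""
    return got_instruction or now >= bedtime

# With a 22:00 bedtime, the speaker prompts at 22:30 even without an instruction.
prompt_now = should_prompt(False, datetime.time(22, 30), datetime.time(22, 0))
```

Either branch only produces the prompt; the BLE connection to the processing device is established separately, after the user confirms.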
It should be noted that the preferred embodiments described in the above examples of the present application are the same as the schemes, application scenarios, and implementation procedures provided in example 1, but are not limited to the schemes provided in example 1.
Example 4
According to an embodiment of the present application, there is also provided a signal processing apparatus for implementing the signal processing method, the apparatus being integrated in a processing device, as shown in fig. 10, the apparatus 800 including: a first acquisition module 802, a second acquisition module 804, and a transmission module 806.
The first obtaining module 802 is configured to obtain first sound field data of an audio signal to be processed; the second obtaining module 804 is configured to obtain an ultrasonic signal corresponding to the first sound field data; the transmitting module 806 is configured to transmit an ultrasonic signal, wherein the ultrasonic signal is demodulated into a target audio signal.
It should be noted here that the first obtaining module 802, the second obtaining module 804 and the transmitting module 806 correspond to steps S202 to S206 in embodiment 1, and the three modules are the same as the corresponding steps in the implementation example and the application scenario, but are not limited to the disclosure in embodiment 1. It should be noted that the above modules may be operated in the computer terminal 10 provided in embodiment 1 as a part of the apparatus.
In the above embodiments of the present application, the second obtaining module includes: a communication unit and a generation unit.
The communication unit is used for sending the first sound field data to the server and receiving a signal generation algorithm returned by the server, wherein the server is used for processing the first sound field data and generating the signal generation algorithm; the generating unit is used for generating an ultrasonic signal based on a signal generating algorithm.
In the above embodiments of the present application, the communication unit is configured to send the first sound field data to the target device and receive a signal generation algorithm returned by the target device, where the target device is configured to send the first sound field data to the server and receive the signal generation algorithm returned by the server.
In the above embodiment of the present application, the apparatus further includes: the third acquisition module and the sending module.
The third acquisition module is used for acquiring second sound field data of the processed audio signal; the sending module is used for sending the second sound field data to the server, wherein the server is used for updating the signal generation algorithm based on the second sound field data.
In the above embodiments of the present application, the second obtaining module includes: a processing unit and a generating unit.
The processing unit is used for processing the first sound field data to generate a signal generation algorithm; the generating unit is used for generating an ultrasonic signal based on a signal generating algorithm.
In the above embodiment of the present application, the apparatus further includes: the device comprises a third acquisition module and an updating module.
The third acquisition module is used for acquiring second sound field data of the processed audio signal; the update module is configured to update the signal generation algorithm based on the second sound field data.
It should be noted that the preferred embodiments described in the above examples of the present application are the same as the schemes, application scenarios, and implementation procedures provided in example 1, but are not limited to the schemes provided in example 1.
Example 5
According to an embodiment of the present application, there is also provided a signal processing apparatus for implementing the signal processing method, the apparatus being integrated in a processing device, as shown in fig. 11, the apparatus 900 including: a first acquisition module 902, a communication module 904, a generation module 906, and a transmission module 908.
The first obtaining module 902 is configured to obtain first sound field data of an audio signal to be processed; the communication module 904 is configured to send the first sound field data to a target device, and receive a signal generation algorithm returned by the target device, where the target device is configured to send the first sound field data to a server, and receive the signal generation algorithm returned by the server, and the server is configured to process the first sound field data to generate a signal generation algorithm; the generating module 906 is configured to generate an ultrasonic signal based on a signal generating algorithm; the transmitting module 908 is used for transmitting an ultrasonic signal, wherein the ultrasonic signal is demodulated into a target audio signal.
It should be noted here that the first obtaining module 902, the communication module 904, the generating module 906, and the transmitting module 908 correspond to steps S702 to S708 in embodiment 2, and the four modules are the same as the corresponding steps in the implementation example and the application scenario, but are not limited to the disclosure in embodiment 2. It should be noted that the above modules may be operated in the computer terminal 10 provided in embodiment 1 as a part of the apparatus.
In the above embodiment of the present application, the apparatus further includes: the device comprises a second acquisition module and a sending module.
The second acquisition module is used for acquiring second sound field data of the processed audio signal; the sending module is used for sending the second sound field data to the target device, wherein the target device is used for sending the second sound field data to the server, and the server is used for updating the signal generation algorithm based on the second sound field data.
It should be noted that the preferred embodiments described in the above examples of the present application are the same as the schemes, application scenarios, and implementation procedures provided in example 1, but are not limited to the schemes provided in example 1.
Example 6
According to an embodiment of the present application, there is also provided a signal processing apparatus for implementing the signal processing method, the apparatus being integrated in a target device, as shown in fig. 12, the apparatus 1000 including: a first receiving module 1002, a communication module 1004, and a first transmitting module 1006.
The first receiving module 1002 is configured to receive first sound field data of an audio signal to be processed, which is sent by a processing device; the communication module 1004 is configured to send the first sound field data to a server, and receive a signal generation algorithm returned by the server, where the server is configured to process the first sound field data to generate the signal generation algorithm; the first transmitting module 1006 is configured to transmit the signal generating algorithm to a processing device, wherein the processing device is configured to transmit an ultrasonic signal based on the signal generating algorithm, and the ultrasonic signal is demodulated into a target audio signal.
It should be noted here that the first receiving module 1002, the communication module 1004, and the first sending module 1006 correspond to steps S802 to S806 in embodiment 3, and the three modules are the same as the corresponding steps in the implementation example and application scenario, but are not limited to the disclosure in embodiment 3. It should be noted that the above modules may be operated in the computer terminal 10 provided in embodiment 1 as a part of the apparatus.
In the above embodiment of the present application, the apparatus further includes: the second receiving module and the second sending module.
The second receiving module is used for receiving second sound field data of the processed audio signal sent by the processing equipment; the second sending module is configured to send the second sound field data to a server, where the server is configured to update the signal generation algorithm based on the second sound field data.
In the above embodiment of the present application, the apparatus further includes: the device comprises an output module and a connecting module.
The output module is used for outputting prompt information, wherein the prompt information is used for confirming whether to process the audio signal to be processed; the connection module is used for establishing connection with the processing equipment based on the confirmation information after receiving the confirmation information corresponding to the prompt information.
In the above embodiment of the present application, the output module is further configured to output, by the target device, a prompt message after receiving a processing instruction for processing the audio signal to be processed; and/or in the case that the preset time is up, the target device outputs prompt information.
It should be noted that the preferred embodiments described in the above examples of the present application are the same as the schemes, application scenarios, and implementation procedures provided in example 1, but are not limited to the schemes provided in example 1.
Example 7
According to an embodiment of the present application, there is also provided a processing apparatus, as shown in fig. 13, the processing apparatus 10 including: a signal acquisition device 112, a processor 114 and an ultrasonic transmitting device 116, wherein the processor is connected with the signal acquisition device and the ultrasonic transmitting device.
The signal acquisition device is used for acquiring first sound field data of an audio signal to be processed; the processor is used for acquiring an ultrasonic signal corresponding to the first sound field data; the ultrasonic wave emitting device is used for emitting an ultrasonic wave signal, wherein the ultrasonic wave signal is demodulated into a target audio signal.
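Elsewhere in this application the demodulated target audio signal is described as having the same amplitude as the audio signal to be processed and the opposite phase. A minimal numpy sketch of that relationship follows; the sample rate and snore frequency are illustrative assumptions, not values from this application:

```python
import numpy as np

SAMPLE_RATE = 16_000  # Hz; an assumed capture rate, not specified in this application

def anti_phase(frame: np.ndarray) -> np.ndarray:
    """Return a signal with the same amplitude as the input but the opposite phase."""
    return -frame

# Toy snore component at 90 Hz (frequency chosen only for illustration).
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
snore = 0.5 * np.sin(2 * np.pi * 90 * t)
cancel = anti_phase(snore)
residual = snore + cancel  # ideal superposition at the user's ear
```

In this idealized superposition the residual is exactly zero; in practice the cancellation quality depends on how well the acquired sound field data characterizes the snore.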
The signal acquisition device may be a sound pickup device, the processor may be a Central Processing Unit (CPU), and the ultrasonic transmitting device may be an ultrasonic transmitter.
In the above-described embodiment of the present application, as shown in fig. 13, the processing apparatus further includes: and the communication module 118 is connected with the processor.
The communication module is used for sending the first sound field data to the server and receiving a signal generation algorithm returned by the server, wherein the server is used for processing the first sound field data and generating the signal generation algorithm; the processor is also configured to generate an ultrasonic signal based on a signal generation algorithm.
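Generating the ultrasonic signal from a cancellation waveform can be sketched as conventional amplitude modulation onto an ultrasonic carrier, relying on the self-demodulation of directional ultrasound in air. The carrier frequency and sample rate below are assumptions for illustration, not values given in this application:

```python
import numpy as np

FS = 192_000         # sample rate high enough to represent a 40 kHz carrier (assumption)
CARRIER_HZ = 40_000  # a typical parametric-speaker carrier frequency (assumption)

def generate_ultrasonic(audio: np.ndarray, fs: int = FS, fc: float = CARRIER_HZ) -> np.ndarray:
    """Amplitude-modulate a cancellation waveform onto an ultrasonic carrier.

    Air nonlinearity self-demodulates such a signal back into audible sound,
    which is the directional-propagation effect relied on here.
    """
    t = np.arange(audio.size) / fs
    carrier = np.sin(2 * np.pi * fc * t)
    return (1.0 + audio) * carrier  # DSB-AM with a unit carrier offset

t = np.arange(FS // 10) / FS               # 100 ms of signal
audio = 0.3 * np.sin(2 * np.pi * 200 * t)  # toy 200 Hz cancellation waveform
ultra = generate_ultrasonic(audio)
```

This is only one conventional modulation scheme; the actual signal generation algorithm is computed at the server and is not specified here.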
The communication module may be a bluetooth communication module, and may be connected to the target device through a bluetooth low energy mode, but is not limited thereto, and may also be connected to the target device through other manners.
In the above embodiments of the present application, the communication module is configured to send the first sound field data to the target device and receive a signal generation algorithm returned by the target device, where the target device is configured to send the first sound field data to the server and receive the signal generation algorithm returned by the server.
In the above embodiments of the present application, the signal collecting device is further configured to collect second sound field data of the processed audio signal; the communication module is further configured to send the second sound field data to a server, wherein the server is configured to update the signal generation algorithm based on the second sound field data.
In the above-described embodiment of the present application, as shown in fig. 14, the processor 114 includes: an algorithm generation module and a signal generation module.
The algorithm generation module is used for processing the first sound field data to generate a signal generation algorithm; the signal generation module is used for generating an ultrasonic signal based on a signal generation algorithm.
In the above embodiments of the present application, as shown in fig. 14, the processor 114 further includes: and the algorithm updating module 1146 is connected with the signal acquisition device.
The signal acquisition device is also used for acquiring second sound field data of the processed audio signal; the algorithm updating module is used for updating the signal generating algorithm based on the second sound field data.
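How an algorithm updating module might use the second sound field data can be sketched as a single-gain LMS-style update driven by the residual; this is an illustrative stand-in, not the algorithm of this application:

```python
import numpy as np

def update_gain(gain: float, residual: np.ndarray, reference: np.ndarray,
                mu: float = 0.05) -> float:
    """One LMS-style step on a single cancellation gain.

    For residual = snore - gain * reference, the gradient of the mean squared
    residual with respect to gain is -2 * mean(residual * reference).
    """
    grad = -2.0 * np.mean(residual * reference)
    return gain - mu * grad

rng = np.random.default_rng(0)
reference = rng.standard_normal(1_000)  # mic estimate of the snore field
snore = 0.8 * reference                 # true field: a scaled copy (toy model)
gain = 0.0
for _ in range(200):
    residual = snore - gain * reference  # plays the role of "second sound field data"
    gain = update_gain(gain, residual, reference)
```

After the loop the gain converges toward the true scale factor, mirroring how repeated feedback of the processed sound field refines the signal generation algorithm.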
A preferred embodiment of the present application will be described in detail below with reference to fig. 3 and fig. 15, taking a smart speaker and an in-ear active snore eliminator as examples. As shown in fig. 15, the in-ear active snore eliminator may be composed of four parts: an algorithm operation and feedback CPU, a BLE module, a MIC (Microphone), and an ultrasonic transmitter. The MIC is used for acquiring the surrounding snore sound field data; the BLE module is used for sending the snore sound field data acquired by the MIC and receiving the cloud algorithm; the CPU is used for executing the cloud algorithm to generate the modulated ultrasonic signal; and the ultrasonic transmitter is used for transmitting the ultrasonic signal, which forms beside the user's ear a sound signal with the same amplitude as the snore and the opposite phase, thereby shielding and eliminating the snore.
A preferred embodiment of the present application will be described in detail below with reference to fig. 4 and fig. 16, taking an active snore eliminator based on a smart speaker as an example. As shown in fig. 16, the active snore eliminator based on the smart speaker may be composed of four parts: an algorithm operation and feedback CPU, a WIFI module, a MIC, and an ultrasonic transmitter. The MIC is used for acquiring the surrounding snore sound field data; the WIFI module is used for sending the snore sound field data acquired by the MIC and receiving the cloud algorithm; the CPU is used for executing the cloud algorithm to generate the modulated ultrasonic signal; and the ultrasonic transmitter is used for transmitting the ultrasonic signal, which forms beside the user's ear a sound signal with the same amplitude as the snore and the opposite phase, thereby shielding and eliminating the snore.
It should be noted that the preferred embodiments described in the above examples of the present application are the same as the schemes, application scenarios, and implementation procedures provided in example 1, but are not limited to the schemes provided in example 1.
Example 8
According to an embodiment of the present application, there is also provided a signal processing system, as shown in fig. 17, including: a processing device 10, a target device 20, and a server 30, the target device being communicatively coupled to the processing device and the server.
The processing equipment is used for acquiring first sound field data of an audio signal to be processed; the target equipment is used for sending the first sound field data to the server and receiving a signal generation algorithm returned by the server; the server is used for processing the first sound field data to generate a signal generation algorithm; the processing device is further configured to generate an ultrasonic signal based on the signal generation algorithm and transmit the ultrasonic signal, wherein the ultrasonic signal is demodulated into a target audio signal.
The signal processing method provided by the embodiment of the application is mainly applied to the snore-elimination application scenario, but is not limited thereto, and can also be applied to other scenarios in which sound needs to be eliminated. In the embodiment of the present application, the application scenario shown in fig. 3 is taken as an example for explanation.
The processing device may be an active snore eliminator that cannot be directly connected to the cloud server. In an alternative embodiment, the processing device may be a wearable device worn by a user who does not wish to be affected by snoring. For example, as shown in fig. 3, the processing device 10 may be an earphone, which improves the snore shielding effect; optionally, the processing device 10 may be an in-ear earphone worn in the ear canal of the user, which improves wearing comfort. In another alternative embodiment, the processing device may be a home device that cannot be directly connected to the cloud server and is placed near the user who does not want to be affected by snoring.
The target device may be a smart speaker deployed in the application scenario, or may be a mobile terminal of the user, such as a smart phone, a tablet computer, a palmtop computer, or a notebook computer, but is not limited thereto and may also be another device. The processing device may connect to the target device in a Bluetooth Low Energy (BLE) mode, but is not limited thereto and may connect to the target device in other manners.
In the above embodiment of the present application, the processing device is further configured to obtain second sound field data of the processed audio signal; the target device is also used for sending second sound field data to the server; the server is further configured to update the signal generation algorithm based on the second sound field data.
A preferred embodiment of the present application will be described in detail below with reference to fig. 3 and fig. 5, taking a smart speaker and an in-ear active snore eliminator as examples. As shown in fig. 5, the in-ear active snore eliminator 10 may be connected to the smart speaker 20 through BLE and upload the collected snore sound field data; the smart speaker uploads the collected snore sound field data to the cloud 30 for computation; the cloud processes the snore sound field data to generate the algorithm; the smart speaker downloads the cloud algorithm, connects to the in-ear active snore eliminator through BLE, and sends the cloud algorithm; the in-ear active snore eliminator receives the complete cloud algorithm, executes it, and emits the modulated ultrasonic signal, which forms a sound signal beside the user's ear with the same amplitude as the snore sound field and the opposite phase, thereby shielding and eliminating the snore; the above process is repeated until the snore is shielded.
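The cyclic flow above (collect, upload, compute, download, emit, repeat) can be sketched with placeholder stubs; every function name here is illustrative, and the cloud computation is reduced to ideal phase inversion:

```python
def collect_sound_field():
    """MIC stub: one frame of snore sound field data."""
    return [0.4, -0.2, 0.3]

def cloud_generate_algorithm(sound_field):
    """Server stub: derive a cancellation algorithm from the uploaded data."""
    return lambda frame: [-x for x in frame]  # same amplitude, opposite phase

def snore_masked(residual, threshold=1e-9):
    """True once the residual sound field is negligible."""
    return all(abs(x) < threshold for x in residual)

def run_until_masked(max_cycles=10):
    """Repeat collect -> upload -> download -> emit until the snore is shielded."""
    residual = collect_sound_field()
    for _ in range(max_cycles):
        data = collect_sound_field()                # MIC acquires the sound field
        algorithm = cloud_generate_algorithm(data)  # BLE/cloud round trip (stubbed)
        cancel = algorithm(data)                    # modulated ultrasonic output
        residual = [a + b for a, b in zip(data, cancel)]  # superposition at the ear
        if snore_masked(residual):
            break
    return residual

residual = run_until_masked()
```

With ideal inversion the loop terminates after the first cycle; in practice the residual drives further algorithm updates at the cloud.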
It should be noted that the preferred embodiments described in the above examples of the present application are the same as the schemes, application scenarios, and implementation procedures provided in example 1, but are not limited to the schemes provided in example 1.
Example 9
According to an embodiment of the present application, there is also provided a signal processing system, as shown in fig. 18, including: a processing device 10 and a server 30, the processing device being communicatively connected to the server.
The processing equipment is used for acquiring first sound field data of an audio signal to be processed; the server is used for processing the first sound field data to generate a signal generation algorithm; the processing device is further configured to generate an ultrasonic signal based on the signal generation algorithm and transmit the ultrasonic signal, wherein the ultrasonic signal is demodulated into a target audio signal.
The signal processing method provided by the embodiment of the application is mainly applied to the snore-elimination application scenario, but is not limited thereto, and can also be applied to other scenarios in which sound needs to be eliminated. In the embodiment of the present application, the application scenario shown in fig. 4 is taken as an example for explanation.
The processing device may be an active snore eliminator that can be directly connected to the cloud server, such as a mobile terminal or a smart speaker, placed near the user who does not want to be affected by snoring. For example, as shown in fig. 4, to reduce implementation costs, the processing device may be a smart speaker.
In the above embodiment of the present application, the processing device is further configured to obtain second sound field data of the processed audio signal; the server is further configured to update the signal generation algorithm based on the second sound field data.
A preferred embodiment of the present application will be described in detail below with reference to fig. 4 and fig. 6, taking an active snore eliminator based on a smart speaker as an example. As shown in fig. 4, when a couple sleeps, if one of them snores, the other can place an active snore eliminator based on a smart speaker nearby, and the position of the snore eliminator can be determined according to the needs of the user. As shown in fig. 6, the active snore eliminator 10 based on a smart speaker may be connected to the cloud 30 through WIFI and upload the collected snore sound field data; the cloud processes the snore sound field data to generate the algorithm; the active snore eliminator based on the smart speaker downloads the cloud algorithm, executes it, and emits the modulated ultrasonic signal, which forms a sound signal beside the user's ear with the same amplitude as the snore sound field and the opposite phase, thereby shielding and eliminating the snore; the above process is executed cyclically until the snore is shielded.
It should be noted that the preferred embodiments described in the above examples of the present application are the same as the schemes, application scenarios, and implementation procedures provided in example 1, but are not limited to the schemes provided in example 1.
Example 10
According to an embodiment of the present application, there is also provided a signal processing system including:
a processor; and
a memory coupled to the processor for providing instructions to the processor for processing the following processing steps: acquiring first sound field data of an audio signal to be processed; acquiring an ultrasonic signal corresponding to the first sound field data; and transmitting an ultrasonic signal, wherein the ultrasonic signal is demodulated into a target audio signal at a user.
It should be noted that the preferred embodiments described in the above examples of the present application are the same as the schemes, application scenarios, and implementation procedures provided in example 1, but are not limited to the schemes provided in example 1.
Example 11
According to the embodiment of the application, a signal processing method is further provided.
Fig. 19 is a flowchart of a fourth signal processing method according to an embodiment of the present application. As shown in fig. 19, the method includes the steps of:
step 1902, a processing device acquires first data of a sensory signal to be processed;
the sensory signal to be processed in the above steps may be a sensory signal that the user wishes to eliminate, including but not limited to: the sense signal to be processed may be a snore signal from a snore person in a snore-eliminating reference scene, for example, an audio signal corresponding to auditory sense, an odor signal corresponding to olfactory sense, a touch signal corresponding to tactile sense, an image signal corresponding to visual sense, and the like. Wherein the first data may be data capable of characterizing the sensory signal to be processed, for example, for audio signals, the first data may be sound field data, including but not limited to: sound pressure, particle vibration velocity, displacement or medium density, etc.
Step 1904, the processing device obtains a target sensory signal corresponding to the first data;
the target sensory signal in the above step may be a signal capable of eliminating the sensory signal to be processed, and may be obtained by analyzing the sensory signal to be processed, for example, by means of signal superposition, for example, generating a signal completely opposite to the sensory signal to be processed; the method can also be realized by signal coverage, for example, a new signal is generated and directly covered on the sensory signal to be processed, and the sensory signal to be processed is submerged.
Alternatively, in the case that the sensory signal to be processed is an audio signal to be processed, the target sensory signal may be an ultrasonic signal.
Step 1906, the processing device outputs the target sensory signal.
Optionally, the output mode may be determined according to the specific type of the target sensory signal. For example, the ultrasonic signal may be transmitted directly; the scent signal may be emitted by a scent generator; the touch signal may be generated by a vibration sensor or the like; and the image signal may be displayed on a display screen.
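The type-dependent output routing described above can be sketched as a simple dispatch table; the channel names below are illustrative, not part of this application:

```python
def output_target_signal(kind: str) -> str:
    """Pick an output channel for a target sensory signal (names are illustrative)."""
    channels = {
        "audio": "ultrasonic transmitter",
        "scent": "scent generator",
        "touch": "vibration device",
        "image": "display screen",
    }
    if kind not in channels:
        raise ValueError(f"unsupported sensory signal type: {kind!r}")
    return channels[kind]
```

An unknown type raises an error rather than silently discarding the signal, which matches the requirement that the output mode be determined by the signal's specific type.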
It should be noted that the preferred embodiments described in the above examples of the present application are the same as the schemes, application scenarios, and implementation procedures provided in example 1, but are not limited to the schemes provided in example 1.
Example 12
The embodiment of the application can provide a computer terminal, and the computer terminal can be any one computer terminal device in a computer terminal group. Optionally, in this embodiment, the computer terminal may also be replaced with a terminal device such as a mobile terminal.
Optionally, in this embodiment, the computer terminal may be located in at least one network device of a plurality of network devices of a computer network.
In this embodiment, the computer terminal may execute program codes of the following steps in the signal processing method: acquiring first sound field data of an audio signal to be processed; acquiring an ultrasonic signal corresponding to the first sound field data; and transmitting an ultrasonic signal, wherein the ultrasonic signal is demodulated into a target audio signal, the amplitude of the target audio signal is the same as that of the audio signal to be processed, and the phase of the target audio signal is opposite to that of the audio signal to be processed.
Alternatively, fig. 20 is a block diagram of a computer terminal according to an embodiment of the present application. As shown in fig. 20, the computer terminal a may include: one or more processors 1502 (only one of which is shown), and a memory 1504.
The memory may be configured to store software programs and modules, such as program instructions/modules corresponding to the signal processing method and apparatus in the embodiments of the present application, and the processor executes various functional applications and data processing by running the software programs and modules stored in the memory, so as to implement the signal processing method. The memory may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory may further include memory remotely located from the processor, and these remote memories may be connected to terminal a through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The processor can call the information and application program stored in the memory through the transmission device to execute the following steps: acquiring first sound field data of an audio signal to be processed; acquiring an ultrasonic signal corresponding to the first sound field data; and transmitting an ultrasonic signal, wherein the ultrasonic signal is demodulated into a target audio signal, the amplitude of the target audio signal is the same as that of the audio signal to be processed, and the phase of the target audio signal is opposite to that of the audio signal to be processed.
Optionally, the processor may further execute the program code of the following steps: the method comprises the steps of sending first sound field data to target equipment and receiving a signal generation algorithm returned by the target equipment, wherein the target equipment is used for sending the first sound field data to a server and receiving the signal generation algorithm returned by the server, and the server is used for processing the first sound field data and generating the signal generation algorithm; an ultrasonic signal is generated based on a signal generation algorithm.
Optionally, the processor may further execute the program code of the following steps: acquiring second sound field data of the processed audio signal after transmitting the ultrasonic signal; and sending the second sound field data to the target device, wherein the target device is used for sending the second sound field data to the server, and the server is used for updating the signal generation algorithm based on the second sound field data.
Optionally, the processor may further execute the program code of the following steps: processing the first sound field data to generate a signal generation algorithm; an ultrasonic signal is generated based on a signal generation algorithm.
Optionally, the processor may further execute the program code of the following steps: acquiring second sound field data of the processed audio signal after transmitting the ultrasonic signal; and updating the signal generation algorithm based on the second sound field data.
The processor can call the information and application program stored in the memory through the transmission device to execute the following steps: acquiring first sound field data of an audio signal to be processed; the method comprises the steps of sending first sound field data to target equipment and receiving a signal generation algorithm returned by the target equipment, wherein the target equipment is used for sending the first sound field data to a server and receiving the signal generation algorithm returned by the server, and the server is used for processing the first sound field data and generating the signal generation algorithm; generating an ultrasonic signal based on a signal generation algorithm; and transmitting an ultrasonic signal, wherein the ultrasonic signal is demodulated into a target audio signal, the amplitude of the target audio signal is the same as that of the audio signal to be processed, and the phase of the target audio signal is opposite to that of the audio signal to be processed.
Optionally, the processor may further execute the program code of the following steps: acquiring second sound field data of the processed audio signal after transmitting the ultrasonic signal; and sending the second sound field data to the target device, wherein the target device is used for sending the second sound field data to the server, and the server is also used for updating the signal generation algorithm based on the second sound field data.
The processor can call the information and application program stored in the memory through the transmission device to execute the following steps: receiving first sound field data of an audio signal to be processed, which is sent by processing equipment; sending the first sound field data to a server, and receiving a signal generation algorithm returned by the server, wherein the server is used for processing the first sound field data to generate the signal generation algorithm; and sending a signal generation algorithm to a processing device, wherein the processing device is used for transmitting an ultrasonic signal based on the signal generation algorithm, the ultrasonic signal is demodulated into a target audio signal, the amplitude of the target audio signal is the same as that of the audio signal to be processed, and the phase of the target audio signal is opposite to that of the audio signal to be processed.
Optionally, the processor may further execute the program code of the following steps: after sending the signal generation algorithm to the processing device, receiving second sound field data of the processed audio signal sent by the processing device; and sending the second sound field data to a server, wherein the server is used for updating the signal generation algorithm based on the second sound field data.
Optionally, the processor may further execute the program code of the following steps: before receiving first sound field data of an audio signal to be processed, which is sent by processing equipment, outputting prompt information, wherein the prompt information is used for confirming whether the audio signal to be processed is processed; after receiving the confirmation information corresponding to the prompt information, establishing connection with the processing device based on the confirmation information.
Optionally, the processor may further execute the program code of the following steps: outputting prompt information after receiving a processing instruction for processing an audio signal to be processed; and/or outputting prompt information under the condition that the preset time is up.
By adopting the embodiment of the application, a scheme for signal processing is provided. Snore shielding and elimination are performed by an ultrasound-based directional sound propagation technology, which improves the snore shielding effect without affecting the snorer's sleep, thereby solving the technical problem in the related art that the signal processing effect for snore elimination is poor.
It can be understood by those skilled in the art that the structure shown in fig. 20 is only illustrative, and the computer terminal may also be a terminal device such as a smart phone (e.g., an Android phone, an iOS phone, etc.), a tablet computer, a palmtop computer, a Mobile Internet Device (MID), a PAD, and the like. Fig. 20 does not limit the structure of the above electronic device. For example, the computer terminal A may include more or fewer components (e.g., network interfaces, display devices, etc.) than shown in fig. 20, or have a configuration different from that shown in fig. 20.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing hardware associated with the terminal device, where the program may be stored in a computer-readable storage medium, and the storage medium may include: a flash disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, and the like.
Example 13
Embodiments of the present application also provide a storage medium. Alternatively, in this embodiment, the storage medium may be configured to store program codes executed by the signal processing method provided in the foregoing embodiment.
Optionally, in this embodiment, the storage medium may be located in any one of computer terminals in a computer terminal group in a computer network, or in any one of mobile terminals in a mobile terminal group.
Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps: acquiring first sound field data of an audio signal to be processed; acquiring an ultrasonic signal corresponding to the first sound field data; and transmitting an ultrasonic signal, wherein the ultrasonic signal is demodulated into a target audio signal, the amplitude of the target audio signal is the same as that of the audio signal to be processed, and the phase of the target audio signal is opposite to that of the audio signal to be processed.
Optionally, the storage medium is further configured to store program codes for performing the following steps: the method comprises the steps of sending first sound field data to target equipment and receiving a signal generation algorithm returned by the target equipment, wherein the target equipment is used for sending the first sound field data to a server and receiving the signal generation algorithm returned by the server, and the server is used for processing the first sound field data and generating the signal generation algorithm; an ultrasonic signal is generated based on a signal generation algorithm.
Optionally, the storage medium is further configured to store program codes for performing the following steps: acquiring second sound field data of the processed audio signal after transmitting the ultrasonic signal; and sending the second sound field data to the target device, wherein the target device is used for sending the second sound field data to the server, and the server is used for updating the signal generation algorithm based on the second sound field data.
Optionally, the storage medium is further configured to store program codes for performing the following steps: processing the first sound field data to generate a signal generation algorithm; an ultrasonic signal is generated based on a signal generation algorithm.
Optionally, the storage medium is further configured to store program codes for performing the following steps: acquiring second sound field data of the processed audio signal after transmitting the ultrasonic signal; and updating the signal generation algorithm based on the second sound field data.
Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps: acquiring first sound field data of an audio signal to be processed; the method comprises the steps of sending first sound field data to target equipment and receiving a signal generation algorithm returned by the target equipment, wherein the target equipment is used for sending the first sound field data to a server and receiving the signal generation algorithm returned by the server, and the server is used for processing the first sound field data and generating the signal generation algorithm; generating an ultrasonic signal based on a signal generation algorithm; and transmitting an ultrasonic signal, wherein the ultrasonic signal is demodulated into a target audio signal, the amplitude of the target audio signal is the same as that of the audio signal to be processed, and the phase of the target audio signal is opposite to that of the audio signal to be processed.
Optionally, the storage medium is further configured to store program codes for performing the following steps: acquiring second sound field data of the processed audio signal after transmitting the ultrasonic signal; and sending the second sound field data to the target device, wherein the target device is used for sending the second sound field data to the server, and the server is also used for updating the signal generation algorithm based on the second sound field data.
Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps: receiving first sound field data of an audio signal to be processed, which is sent by processing equipment; sending the first sound field data to a server, and receiving a signal generation algorithm returned by the server, wherein the server is used for processing the first sound field data to generate the signal generation algorithm; and sending a signal generation algorithm to a processing device, wherein the processing device is used for transmitting an ultrasonic signal based on the signal generation algorithm, the ultrasonic signal is demodulated into a target audio signal, the amplitude of the target audio signal is the same as that of the audio signal to be processed, and the phase of the target audio signal is opposite to that of the audio signal to be processed.
Optionally, the storage medium is further configured to store program codes for performing the following steps: after sending the signal generation algorithm to the processing device, receiving second sound field data of the processed audio signal sent by the processing device; and sending the second sound field data to a server, wherein the server is used for updating the signal generation algorithm based on the second sound field data.
Optionally, the storage medium is further configured to store program codes for performing the following steps: before receiving first sound field data of an audio signal to be processed, which is sent by processing equipment, outputting prompt information, wherein the prompt information is used for confirming whether the audio signal to be processed is processed; after receiving the confirmation information corresponding to the prompt information, establishing connection with the processing device based on the confirmation information.
Optionally, the storage medium is further configured to store program codes for performing the following steps: outputting prompt information after receiving a processing instruction for processing an audio signal to be processed; and/or outputting prompt information under the condition that the preset time is up.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
In the above embodiments of the present application, each embodiment is described with its own emphasis; for parts not described in detail in one embodiment, reference may be made to the related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology may be implemented in other ways. The apparatus embodiments described above are merely illustrative. For example, the division into units is only a division by logical function, and other divisions are possible in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through interfaces, units, or modules, and may be electrical or take another form.
The units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units; they may be located in one place or distributed across a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the essence of the technical solution of the present application, or the part that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.
The foregoing is only a preferred embodiment of the present application. It should be noted that those skilled in the art can make several improvements and modifications without departing from the principle of the present application, and these improvements and modifications should also fall within the protection scope of the present application.

Claims (27)

1. A signal processing method, comprising:
a processing device acquires first sound field data of an audio signal to be processed;
the processing device acquires an ultrasonic signal corresponding to the first sound field data;
the processing device transmits the ultrasonic signal, wherein the ultrasonic signal is demodulated into a target audio signal.
2. The method of claim 1, wherein the processing device acquiring the ultrasonic signal corresponding to the first sound field data comprises:
the processing device sends the first sound field data to a server and receives a signal generation algorithm returned by the server, wherein the server is configured to process the first sound field data to generate the signal generation algorithm;
the processing device generates the ultrasonic signal based on the signal generation algorithm.
3. The method of claim 2, wherein the processing device sending the first sound field data to a server and receiving the signal generation algorithm returned by the server comprises:
the processing device sends the first sound field data to a target device and receives the signal generation algorithm returned by the target device, wherein the target device is configured to send the first sound field data to the server and receive the signal generation algorithm returned by the server.
4. The method of claim 2, wherein after the processing device transmits the ultrasonic signal, the method further comprises:
the processing device acquires second sound field data of the processed audio signal;
the processing device sends the second sound field data to the server, wherein the server is configured to update the signal generation algorithm based on the second sound field data.
5. The method of claim 1, wherein the processing device acquiring the ultrasonic signal corresponding to the first sound field data comprises:
the processing device processes the first sound field data to generate a signal generation algorithm;
the processing device generates the ultrasonic signal based on the signal generation algorithm.
6. The method of claim 5, wherein after the processing device transmits the ultrasonic signal, the method further comprises:
the processing device acquires second sound field data of the processed audio signal;
the processing device updates the signal generation algorithm based on the second sound field data.
7. A signal processing method, comprising:
a processing device acquires first sound field data of an audio signal to be processed;
the processing device sends the first sound field data to a target device and receives a signal generation algorithm returned by the target device, wherein the target device is used for sending the first sound field data to a server and receiving the signal generation algorithm returned by the server, and the server is used for processing the first sound field data and generating the signal generation algorithm;
the processing device generates an ultrasonic signal based on the signal generation algorithm;
the processing device transmits the ultrasonic signal, wherein the ultrasonic signal is demodulated into a target audio signal.
8. The method of claim 7, wherein after the processing device transmits the ultrasonic signal, the method further comprises:
the processing device acquires second sound field data of the processed audio signal;
and the processing device sends the second sound field data to the target device, wherein the target device is configured to send the second sound field data to the server, and the server is further configured to update the signal generation algorithm based on the second sound field data.
9. A signal processing method, comprising:
a target device receives first sound field data of an audio signal to be processed, the first sound field data being sent by a processing device;
the target device sends the first sound field data to a server and receives a signal generation algorithm returned by the server, wherein the server is used for processing the first sound field data and generating the signal generation algorithm;
the target device sends the signal generation algorithm to the processing device, wherein the processing device is configured to transmit an ultrasonic signal based on the signal generation algorithm, and the ultrasonic signal is demodulated into a target audio signal.
10. The method of claim 9, wherein after the target device transmits the signal generation algorithm to the processing device, the method further comprises:
the target device receives second sound field data of the processed audio signal sent by the processing device;
the target device sends the second sound field data to the server, wherein the server is configured to update the signal generation algorithm based on the second sound field data.
11. The method of claim 9, wherein prior to the target device receiving the first sound field data of the audio signal to be processed sent by the processing device, the method further comprises:
the target device outputs prompt information, wherein the prompt information is used to confirm whether to process the audio signal to be processed;
after receiving the confirmation information corresponding to the prompt information, the target device establishes connection with the processing device based on the confirmation information.
12. The method of claim 11, wherein:
the target device outputs the prompt information after receiving a processing instruction for processing the audio signal to be processed; and/or
the target device outputs the prompt information when a preset time is reached.
13. A processing device, comprising:
a signal acquisition device, configured to acquire first sound field data of an audio signal to be processed;
a processor, connected to the signal acquisition device and configured to acquire an ultrasonic signal corresponding to the first sound field data; and
an ultrasonic transmitting device, connected to the processor and configured to transmit the ultrasonic signal, wherein the ultrasonic signal is demodulated into a target audio signal.
14. The processing device of claim 13, wherein the processing device further comprises:
a communication module, connected to the processor and configured to send the first sound field data to a server and receive a signal generation algorithm returned by the server, wherein the server is configured to process the first sound field data to generate the signal generation algorithm;
wherein the processor is further configured to generate the ultrasonic signal based on the signal generation algorithm.
15. The processing device of claim 14, wherein the communication module is further configured to send the first sound field data to a target device and receive the signal generation algorithm returned by the target device, wherein the target device is configured to send the first sound field data to the server and receive the signal generation algorithm returned by the server.
16. The processing device of claim 14, wherein:
the signal acquisition device is further configured to acquire second sound field data of the processed audio signal; and
the communication module is further configured to send the second sound field data to the server, wherein the server is configured to update the signal generation algorithm based on the second sound field data.
17. The processing device of claim 13, wherein the processor comprises:
an algorithm generation module, connected to the signal acquisition device and configured to process the first sound field data to generate a signal generation algorithm; and
a signal generation module, connected to the algorithm generation module and the ultrasonic transmitting device and configured to generate the ultrasonic signal based on the signal generation algorithm.
18. The processing device of claim 17, wherein:
the signal acquisition device is further configured to acquire second sound field data of the processed audio signal; and
the processor further comprises an algorithm update module configured to update the signal generation algorithm based on the second sound field data.
19. A signal processing system comprising:
a processing device, configured to acquire first sound field data of an audio signal to be processed;
a target device, communicatively connected to the processing device and to a server, and configured to send the first sound field data to the server and receive a signal generation algorithm returned by the server; and
the server, configured to process the first sound field data to generate the signal generation algorithm;
wherein the processing device is further configured to generate an ultrasonic signal based on the signal generation algorithm and transmit the ultrasonic signal, and the ultrasonic signal is demodulated into a target audio signal.
20. The system of claim 19, wherein,
the processing device is further configured to obtain second sound field data of the processed audio signal;
the target device is further configured to send the second sound field data to the server;
the server is further configured to update the signal generation algorithm based on the second sound field data.
21. A signal processing system comprising:
a processing device, configured to acquire first sound field data of an audio signal to be processed; and
a server, communicatively connected to the processing device and configured to process the first sound field data to generate a signal generation algorithm;
wherein the processing device is further configured to generate an ultrasonic signal based on the signal generation algorithm and transmit the ultrasonic signal, and the ultrasonic signal is demodulated into a target audio signal.
22. The system of claim 21, wherein,
the processing device is further configured to obtain second sound field data of the processed audio signal;
the server is further configured to update the signal generation algorithm based on the second sound field data.
23. A storage medium comprising a stored program, wherein, when the program runs, a device in which the storage medium is located is controlled to perform the signal processing method of any one of claims 1 to 12.
24. A computing device, comprising: a memory and a processor for executing a program stored in the memory, wherein the program when executed performs the signal processing method of any one of claims 1 to 12.
25. A signal processing system comprising:
a processor; and
a memory, coupled to the processor and configured to provide the processor with instructions for the following processing steps: acquiring first sound field data of an audio signal to be processed; acquiring an ultrasonic signal corresponding to the first sound field data; and transmitting the ultrasonic signal, wherein the ultrasonic signal is demodulated into a target audio signal at the user's location.
26. A signal processing method, comprising:
a processing device acquires first data of a sensory signal to be processed;
the processing equipment acquires a target sensory signal corresponding to the first data;
the processing device outputs the target sensory signal.
27. The method of claim 26, wherein, when the sensory signal to be processed is an audio signal to be processed, the target sensory signal is an ultrasonic signal.
CN202010116682.8A 2020-02-25 2020-02-25 Signal processing method and system, and processing device Pending CN113380218A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010116682.8A CN113380218A (en) 2020-02-25 2020-02-25 Signal processing method and system, and processing device


Publications (1)

Publication Number Publication Date
CN113380218A true CN113380218A (en) 2021-09-10

Family

ID=77569370

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010116682.8A Pending CN113380218A (en) 2020-02-25 2020-02-25 Signal processing method and system, and processing device

Country Status (1)

Country Link
CN (1) CN113380218A (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103020102A * 2011-09-22 2013-04-03 Clarion Co., Ltd. Information terminal, server device, search system, and corresponding search method
CN104318919A * 2014-10-22 2015-01-28 Shanghai Feixun Data Communication Technology Co., Ltd. Environmental noise elimination method and system, and mobile terminal
CN104918177A * 2014-03-12 2015-09-16 Sony Corporation Signal processing apparatus, signal processing method, and program
CN105157204A * 2015-10-19 2015-12-16 Gree Electric Appliances Inc. of Zhuhai Noise reduction method and system, electronic expansion valve, and air conditioner
KR20160096007A * 2015-09-30 2016-08-12 Seoul National University R&DB Foundation Sound Collecting Terminal, Sound Providing Terminal, Sound Data Processing Server and Sound Data Processing System using thereof
CN106098053A * 2016-06-01 2016-11-09 Anrui Armor Materials (Wuhu) Technology Co., Ltd. Active noise reduction device
CN106713343A * 2017-01-09 2017-05-24 Qingdao Jinsite Electronics Co., Ltd. Method for intelligent analysis of communication protocols based on cloud computing
CN109525918A * 2018-11-10 2019-03-26 Dongguan Huarui Electronic Technology Co., Ltd. Processing method for sharing earphone audio signals
CN110223668A * 2019-05-10 2019-09-10 Shenzhen Fenda Technology Co., Ltd. Loudspeaker, snore isolation method, and storage medium
CN110753291A * 2019-10-31 2020-02-04 Langshi (Shenzhen) Technology Co., Ltd. Noise reduction device and noise reduction method for an indoor switch


Similar Documents

Publication Publication Date Title
US11559252B2 (en) Hearing assistance device incorporating virtual audio interface for therapy guidance
KR102179043B1 (en) Apparatus and method for detecting abnormality of a hearing aid
US9426585B2 (en) Binaural hearing aid system and a method of providing binaural beats
WO2014153246A2 (en) Sleep management implementing a wearable data-capable device for snoring-related conditions and other sleep disturbances
WO2010135179A1 (en) Method and apparatus for enhancing the generation of three-dimensional sound in headphone devices
US20190028789A1 (en) Earphones for Measuring and Entraining Respiration
CN103874000A (en) Hearing instrument
US11523231B2 (en) Methods and systems for assessing insertion position of hearing instrument
US20140369538A1 (en) Assistive Listening System
CN107609371B (en) Message prompting method and audio playing device
CN113613156A (en) Wearing state detection method and device, headset and storage medium
US9706316B2 (en) Method of auditory training and a hearing aid system
EP3021599A1 (en) Hearing device having several modes
CN113380218A (en) Signal processing method and system, and processing device
US20230000395A1 (en) Posture detection using hearing instruments
CN108882112B (en) Audio playing control method and device, storage medium and terminal equipment
KR102250547B1 (en) An implantable hearing aid with energy harvesting and external charging
CN105641900B (en) A kind of respiratory state based reminding method and electronic equipment and system
US20220192541A1 (en) Hearing assessment using a hearing instrument
CN113873382A (en) Headset control method, headset, and computer-readable storage medium
CN114630223B (en) Method for optimizing functions of hearing-wearing device and hearing-wearing device
US20230283967A1 (en) Bone conduction and air conduction hearing aid switching device and method thereof
US20230164545A1 (en) Mobile device compatibility determination
WO2023021794A1 (en) Sound signal processing method, program, and sound signal processing device
US20230328500A1 (en) Responding to and assisting during medical emergency event using data from ear-wearable devices

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40058135

Country of ref document: HK