CN111930551B - Method and system for transmitting information by sound - Google Patents

Method and system for transmitting information by sound

Info

Publication number
CN111930551B
CN111930551B CN202011020229.3A CN202011020229A CN111930551B
Authority
CN
China
Prior art keywords
amplitude
audio file
preset
point
data embedding
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011020229.3A
Other languages
Chinese (zh)
Other versions
CN111930551A (en)
Inventor
陈洋
张鲲鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hansong Nanjing Technology Co ltd
Original Assignee
Hansong Nanjing Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hansong Nanjing Technology Co ltd filed Critical Hansong Nanjing Technology Co ltd
Priority to CN202110089317.7A priority Critical patent/CN112860468B/en
Priority to CN202011020229.3A priority patent/CN111930551B/en
Publication of CN111930551A publication Critical patent/CN111930551A/en
Application granted granted Critical
Publication of CN111930551B publication Critical patent/CN111930551B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/0703Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation
    • G06F11/079Root cause analysis, i.e. error or fault diagnosis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/30Monitoring
    • G06F11/34Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment
    • G06F11/3466Performance evaluation by tracing or monitoring
    • G06F11/3476Data logging
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/20Administration of product repair or maintenance

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Quality & Reliability (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Tourism & Hospitality (AREA)
  • Operations Research (AREA)
  • General Business, Economics & Management (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Strategic Management (AREA)
  • Computer Hardware Design (AREA)
  • Economics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Signal Processing For Digital Recording And Reproducing (AREA)
  • Reverberation, Karaoke And Other Acoustics (AREA)

Abstract

The embodiments of this specification disclose a method and a system for transmitting information by sound. An audio playing device with a data processing function converts log information usable for fault analysis into binary information, processes an audio file to be played according to a preset encoding rule based on the binary information, and plays the processed audio file. A server terminal receives the recording file uploaded by a user terminal, extracts the binary information from it according to a preset decoding rule, and converts the binary information back into the log information.

Description

Method and system for transmitting information by sound
Technical Field
The present disclosure relates to the field of information technology, and more particularly, to a method and system for transmitting information using sound.
Background
An audio playback device may malfunction during daily use. Users often find it difficult to determine the cause of a fault or to repair the device themselves, and need to provide relevant device information so that technical staff can perform a preliminary fault analysis and then feed back the cause of the fault, maintenance suggestions, and the like, or provide on-site repair service.
It is therefore desirable to provide a scheme that makes it convenient for users to obtain maintenance service for an audio playback device.
Disclosure of Invention
One embodiment of the present specification provides a method for transmitting information by sound. The method is executed by an audio playing device with a data processing function and comprises the following steps: acquiring log information, the log information being usable for fault analysis; converting the log information into binary information; and processing an audio file according to the binary information and a preset encoding rule, and playing the processed audio file, so that: a user terminal can obtain a recording file by recording the played processed audio file and upload the recording file to a server terminal; and the server terminal can extract the binary information from the recording file according to a preset decoding rule matched with the encoding rule and convert the extracted binary information back into the log information. The audio file waveform has data embedding points; each data embedding point is an extreme point that, before processing, lies outside a first amplitude interval formed by a preset amplitude upper limit and a preset amplitude lower limit. Before processing, the maximum value corresponding to a data embedding point is smaller than a first threshold between the amplitude maximum and the amplitude upper limit, and the minimum value corresponding to a data embedding point is larger than a second threshold between the amplitude minimum and the amplitude lower limit. The encoding rule includes: each symbol of the binary information corresponds to one data embedding point, and the audio file waveform is adjusted so that the peak or trough at each data embedding point corresponding to a preset symbol moves into the first amplitude interval, the preset symbol being 0 or 1.
One embodiment of the present specification provides a system for transmitting information by sound. The system is implemented on an audio playing device with a data processing function and comprises: a log information acquisition module, configured to acquire log information usable for fault analysis; a first conversion module, configured to convert the log information into binary information; and a processing module, configured to process an audio file according to the binary information and a preset encoding rule and to play the processed audio file, so that: a user terminal can obtain a recording file by recording the played processed audio file and upload the recording file to a server terminal; and the server terminal can extract the binary information from the recording file according to a preset decoding rule matched with the encoding rule and convert the extracted binary information back into the log information. The audio file waveform has data embedding points; each data embedding point is an extreme point that, before processing, lies outside a first amplitude interval formed by a preset amplitude upper limit and a preset amplitude lower limit. Before processing, the maximum value corresponding to a data embedding point is smaller than a first threshold between the amplitude maximum and the amplitude upper limit, and the minimum value corresponding to a data embedding point is larger than a second threshold between the amplitude minimum and the amplitude lower limit. The encoding rule includes: each symbol of the binary information corresponds to one data embedding point, and the audio file waveform is adjusted so that the peak or trough at each data embedding point corresponding to a preset symbol moves into the first amplitude interval, the preset symbol being 0 or 1.
One embodiment of the present specification provides an apparatus for transmitting information by sound, comprising a processor and a storage device, the storage device being configured to store instructions; when the processor executes the instructions, the method for transmitting information by sound according to any embodiment of the present specification is implemented.
One embodiment of the present specification provides a method for extracting information from sound. The method is executed by a server terminal and comprises the following steps: receiving a recording file uploaded by a user terminal, the recording file being obtained by recording the playback of an audio file processed according to a preset encoding rule; extracting binary information from the recording file according to a preset decoding rule matched with the encoding rule; and converting the binary information into log information. The audio file waveform has data embedding points; each data embedding point is an extreme point that, before processing, lies outside a first amplitude interval formed by a preset amplitude upper limit and a preset amplitude lower limit, with the maximum value corresponding to a data embedding point smaller than a first threshold between the amplitude maximum and the amplitude upper limit, and the minimum value corresponding to a data embedding point larger than a second threshold between the amplitude minimum and the amplitude lower limit. The recording file waveform has a data extraction point corresponding to each data embedding point on the audio file waveform. The decoding rule includes: determining the data extraction points on the recording file waveform based on the audio file waveform before processing; each symbol of the binary information corresponds to one data extraction point, and for each data extraction point it is judged whether its amplitude is within a second amplitude interval corresponding to the first amplitude interval; if so, the symbol corresponding to that data extraction point is determined to be a preset symbol, and if not, a non-preset symbol, the preset symbol being 0 or 1.
One embodiment of the present specification provides a system for extracting information from sound. The system is implemented on a server terminal and comprises: a recording file receiving module, configured to receive a recording file uploaded by a user terminal, the recording file being obtained by recording the playback of an audio file processed according to a preset encoding rule; an information extraction module, configured to extract binary information from the recording file according to a preset decoding rule matched with the encoding rule; and a second conversion module, configured to convert the binary information into log information. The audio file waveform has data embedding points; each data embedding point is an extreme point that, before processing, lies outside a first amplitude interval formed by a preset amplitude upper limit and a preset amplitude lower limit, with the maximum value corresponding to a data embedding point smaller than a first threshold between the amplitude maximum and the amplitude upper limit, and the minimum value corresponding to a data embedding point larger than a second threshold between the amplitude minimum and the amplitude lower limit. The recording file waveform has a data extraction point corresponding to each data embedding point on the audio file waveform.
The decoding rule includes: determining the data extraction points on the recording file waveform based on the audio file waveform before processing; each symbol of the binary information corresponds to one data extraction point, and for each data extraction point it is judged whether its amplitude is within a second amplitude interval corresponding to the first amplitude interval; if so, the symbol corresponding to that data extraction point is determined to be a preset symbol, and if not, a non-preset symbol, the preset symbol being 0 or 1.
One embodiment of the present specification provides an apparatus for extracting information from sound, comprising a processor and a storage device, the storage device being configured to store instructions; when the processor executes the instructions, the method for extracting information from sound according to any embodiment of the present specification is implemented.
Drawings
The present description will be further explained by way of exemplary embodiments, which will be described in detail by way of the accompanying drawings. These embodiments are not intended to be limiting, and in these embodiments like numerals are used to indicate like structures, wherein:
FIG. 1 is a schematic diagram of an application scenario for transferring log information using sound, according to some embodiments of the present description;
FIG. 2 is a schematic diagram of an application scenario for communicating information using sound, according to some embodiments of the present description;
FIG. 3 is a waveform diagram of an audio file before processing according to some embodiments of the present description;
FIG. 4 is a schematic comparison of audio file waveforms before and after processing according to some embodiments of the present description;
FIG. 5 is a waveform diagram of an audio recording file according to some embodiments of the present description;
FIG. 6 is a block diagram of a system for communicating information using sound, according to some embodiments of the present description;
FIG. 7 is a block diagram of a system for extracting information from sound in accordance with some embodiments of the present description;
FIG. 8 is a block diagram of a system for communicating information using sound according to further embodiments of the present disclosure;
FIG. 9 is a block diagram of a system for extracting information from sound in accordance with further embodiments of the present description.
Detailed Description
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings used in the description of the embodiments will be briefly described below. It is obvious that the drawings in the following description are only examples or embodiments of the present description, and that for a person skilled in the art, the present description can also be applied to other similar scenarios on the basis of these drawings without inventive effort. Unless otherwise apparent from the context, or otherwise indicated, like reference numbers in the figures refer to the same structure or operation.
It should be understood that "system", "device", "unit", and/or "module" as used herein is a way of distinguishing components, elements, parts, or assemblies at different levels. These words may, however, be replaced by other expressions that accomplish the same purpose.
As used in this specification, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly dictates otherwise. In general, the terms "comprises" and "comprising" merely indicate that the explicitly identified steps and elements are included; these steps and elements do not form an exclusive list, and a method or apparatus may also include other steps or elements.
Flow charts are used in this specification to illustrate operations performed by systems according to its embodiments. It should be understood that these operations need not be performed exactly in the order shown; steps may instead be processed in reverse order or simultaneously. Moreover, other operations may be added to these processes, or one or more steps may be removed from them.
Fig. 1 is a schematic diagram of an application scenario for transferring log information by sound according to some embodiments of the present description. As shown in fig. 1, the system 100 may include an audio playing device 110, an audio recording device 120, a user terminal 130, a service terminal 140, and a network 150.
The audio playing device 110 has a basic sound playing function, and also has a certain data processing function, that is, a function of processing an audio file to be played to transmit information. In some embodiments, the audio playback device 110 may be a smart speaker.
The audio playing device 110 may obtain log information that can be used for fault analysis, convert the log information into binary information, process an original (before-processing) audio file according to a preset encoding rule based on the binary information, and play the processed audio file. The user can record the played processed audio file with the audio recording device 120 and upload the resulting recording file to the server 140 through the user terminal 130. The server 140 may extract the binary information corresponding to the log information from the received recording file according to a decoding rule matched with the encoding rule and convert (restore) the extracted binary information into the log information.
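As a minimal sketch of the binary conversion step (not part of the patent; the function names and the choice of UTF-8 byte encoding are assumptions), the log text can be turned into a bit string on the playing device and restored on the server:

```python
def log_to_bits(log_text: str) -> str:
    # One UTF-8 byte becomes eight symbols ("bits") of binary information.
    return "".join(f"{byte:08b}" for byte in log_text.encode("utf-8"))

def bits_to_log(bits: str) -> str:
    # Inverse step performed at the server terminal: regroup symbols into
    # bytes and decode them back into log text.
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("utf-8")
```

The round trip `bits_to_log(log_to_bits(text))` returns the original text, which is the property the decoding side relies on.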
In this way, by recording and uploading the recording file, the user provides log information usable for fault analysis to the server, so that technical staff can perform a preliminary fault analysis based on the log information and then feed back the cause of the fault, maintenance suggestions, and the like, or prepare according to the preliminary conclusions (for example, by gathering maintenance tools and components that may be needed) before visiting the user's home to provide repair service.
In addition, the processed audio file may be an audio file played by the audio playback device in everyday use, such as a song file. The audio file is processed on the principle of affecting its original sound as little as possible; that is, the listening experience is preserved as far as possible while the audio file is played to transmit information.
It should be noted that the log information may include the complete log content or only part of it. In some embodiments, the complete log content may be divided into N segments (N being an integer not less than 2), with each segment used to process one audio file. By playing the N processed audio files in turn and recording each, N recording files are obtained, from which the N log segments can be extracted one by one; splicing the N segments then yields the complete log content.
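The segmentation-and-splicing scheme above can be sketched as follows (illustrative only; the equal-size segment rule is an assumption, since the text does not fix one):

```python
import math

def split_log(log_text: str, n: int) -> list[str]:
    # Divide the complete log content into up to n segments (n >= 2);
    # each segment is then embedded into its own audio file.
    size = math.ceil(len(log_text) / n)
    return [log_text[i:i + size] for i in range(0, len(log_text), size)]

def splice_log(segments: list[str]) -> str:
    # Recover the complete log by splicing the segments extracted from
    # the recording files, in playback order.
    return "".join(segments)
```

Keeping the segments in playback order is what makes the splice equal the original log.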
In some embodiments, audio recording device 120 may be integrated into user terminal 130. For example, the user terminal 130 may be a mobile phone, a computer, etc. integrated with a microphone. In some embodiments, audio recording device 120 may be an external device to user terminal 130. In some embodiments, audio recording device 120 may be a device that supports data export, such as a recording pen, and user terminal 130 may obtain a record file exported from audio recording device 120.
In some embodiments, the user terminal 130 may include various types of computing devices, such as a smart phone, a tablet, a laptop, a desktop computer, and so on.
The server 140 may include various types of computing devices, such as a smart phone, a tablet, a laptop, a desktop computer, a server, and so forth. A server may be a stand-alone server or a server group, and the group may be centralized or distributed. In some embodiments, the server may be regional or remote. In some embodiments, the server may run on a cloud platform. For example, the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a decentralized cloud, an internal cloud, or the like, or any combination thereof.
The network 150 connects the components of the system 100 to enable communication between them (e.g., between the user terminal 130 and the server 140). The network between the parts of system 100 may include wired and/or wireless networks. For example, network 150 may include a cable network, a wired network, a fiber-optic network, a telecommunications network, an intranet, the Internet, a local area network (LAN), a wide area network (WAN), a wireless local area network (WLAN), a metropolitan area network (MAN), a public switched telephone network (PSTN), a Bluetooth network, a ZigBee network, near-field communication (NFC), an in-device bus, an in-device line, a cable connection, or the like, or any combination thereof. The network connection between any two parts may use one or several of the above.
FIG. 2 is a schematic diagram of an application scenario for communicating information using sound, according to some embodiments of the present description. The system 200 may include an audio playback device 210 and a receiving device 220. The receiving device 220 may be integrated with an audio recording module as shown in fig. 2, and may also support an external audio recording device. The receiving device 220 also has certain data processing functions, i.e., functions for extracting information from the audio file. For details of the audio playing device 210, reference may be made to the related description of the audio playing device 110 in fig. 1.
In some embodiments, sound may be used to transfer information such as control instructions or a Wi-Fi password (wireless network password) between the audio playback device 210 and a nearby receiving device 220. For example, a smart speaker may control other nearby electrical devices (e.g., a television or an air conditioner): the user presses a key on the smart speaker associated with a control instruction for the receiving device 220, which is very convenient. As another example, with the help of the smart speaker, a Wi-Fi password can be sent automatically to a nearby user terminal (such as a mobile phone, tablet, or laptop) whose recording function is enabled, so that the user obtains the password without typing or scanning it. As described above, the audio playing device 210 may convert the information to be transmitted (such as a control instruction or a Wi-Fi password) into binary information, process an original (before-processing) audio file according to a preset encoding rule based on the binary information, and play the processed audio file. The receiving device 220 near the audio playing device 210 may record the played processed audio file, extract the binary information corresponding to the transmitted information from the recording according to a decoding rule matched with the encoding rule, and convert (restore) the extracted binary information into the transmitted information.
The following illustrates, in conjunction with fig. 3, 4, and 5, encoding rules for embedding information into an audio file and decoding rules for extracting information from a sound recording file.
FIG. 3 is a waveform diagram of an audio file before processing according to some embodiments described herein. FIG. 4 is a schematic diagram comparing audio file waveforms before and after processing according to some embodiments of the present description. In the figure, the horizontal axis represents time and the vertical axis represents amplitude, which may be in dB.
As shown in fig. 3, a data embedding point (black dot) may be selected on the audio file waveform before processing to transfer each bit (bit, hereinafter referred to as symbol) of binary information.
It will be appreciated that each symbol of the binary information may correspond to a data embedding point on the audio file waveform before processing. In addition, the order of the symbols in the binary information may match the time order of the data embedding points on that waveform. That is, if the earliest data embedding point on the waveform is recorded as the 1st, the subsequent ones are recorded in chronological order as the 2nd, the 3rd, and so on, and the nth symbol of the binary information corresponds to the nth data embedding point. Taking fig. 3 as an example, the binary information includes at least 6 bits, and the data embedding points D1-D6 correspond one-to-one to the 1st through 6th symbols of the binary information.
Furthermore, the relationship of the symbols of the binary information and the data embedding point may be considered as part of the encoding rule.
In some embodiments, an extreme point on the audio file waveform may be used as a data embedding point, and different symbols are "embedded" by giving the amplitude of the data embedding point (the extremum at that point) different characteristics before and after processing.
In some embodiments, the amplitude interval may be preset, and then an extreme point outside the preset amplitude interval before processing may be selected as the data embedding point. For example, an upper limit of the amplitude (forming an amplitude interval with "zero") may be set, and a maximum point with an amplitude greater than the upper limit may be selected as the data embedding point. For another example, as shown in fig. 3, an upper amplitude limit b and a lower amplitude limit a may be set to form an amplitude interval (a, b), and maximum points (e.g., D1, D2, D4, and D6) having an amplitude greater than the upper limit b and minimum points (e.g., D3 and D5) having an amplitude less than the lower limit a may be selected as the data embedding points.
For example only, the amplitude maximum may be reduced by a preset value to obtain the amplitude upper limit. Similarly, the amplitude minimum may be increased by a preset value to obtain the amplitude lower limit. In some embodiments, the preset value used to obtain the upper/lower amplitude limit may lie within [3 dB, 6 dB]. It should be understood that the preset value used for the upper limit and the preset value used for the lower limit may be the same or different.
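A rough sketch of selecting data embedding points under these rules (not from the patent; the function name, the simple neighbor-comparison test for extrema, and operating directly on an amplitude array are all assumptions, and a real implementation would work on decoded PCM samples):

```python
import numpy as np

def find_embedding_points(wave, preset_db=4.0,
                          first_threshold=None, second_threshold=None):
    # Derive the first amplitude interval (a, b) by shrinking the waveform's
    # maximum/minimum by preset_db (the text suggests 3 dB to 6 dB), then
    # select extreme points outside (a, b) but within the first/second
    # thresholds as data embedding points.
    wave = np.asarray(wave, dtype=float)
    b = wave.max() - preset_db          # amplitude upper limit
    a = wave.min() + preset_db          # amplitude lower limit
    hi = wave.max() if first_threshold is None else first_threshold
    lo = wave.min() if second_threshold is None else second_threshold
    points = []
    for i in range(1, len(wave) - 1):
        is_max = wave[i - 1] < wave[i] > wave[i + 1]
        is_min = wave[i - 1] > wave[i] < wave[i + 1]
        if (is_max and b < wave[i] < hi) or (is_min and lo < wave[i] < a):
            points.append(i)
    return points, (a, b)
```

With the default thresholds, the strict inequalities exclude the global extrema, matching the rule that embedding points must stay below the first threshold and above the second.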
In some embodiments, as shown in FIG. 3, an extreme point on the audio file waveform prior to processing that is later than a first time (t = t 1) may be selected as the data embedding point, the first time t1 being later than the start time (t = 0) of the audio file waveform. It will be appreciated that the maximum/minimum amplitude after the first time t1 may be decreased/increased by a preset value, resulting in an upper/lower amplitude limit.
It should be noted that the amplitude of the sound signal, from the played audio file to the recorded file, may be scaled by a certain ratio (hereinafter called the gain ratio, denoted G1/G2). That is, when recording and playback start simultaneously and noise is ignored, the waveform of the recording file is a longitudinal stretching of the waveform of the played audio file by that ratio, and any amplitude interval on the audio file waveform maps to an amplitude interval on the recording file waveform stretched by the same ratio. For example, the amplitude interval (0, b) on the audio file waveform corresponds to the amplitude interval (0, b·G1/G2) on the recording file waveform. Likewise, the amplitude interval (a, b) on the audio file waveform corresponds to the amplitude interval (a·G1/G2, b·G1/G2) on the recording file waveform. For the sake of distinction, in this specification an amplitude interval on the audio file waveform is called a first amplitude interval, and the corresponding amplitude interval on the recording file waveform a second amplitude interval.
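The interval mapping can be written down directly (a hedged sketch; the gain ratio G1/G2 is taken as known here, although the text does not state at this point how it would be estimated in practice):

```python
def to_second_interval(first_interval, g1, g2):
    # The recording file's waveform is the played waveform stretched
    # longitudinally by the gain ratio G1/G2, so the first amplitude
    # interval (a, b) maps to the second interval (a*G1/G2, b*G1/G2).
    a, b = first_interval
    ratio = g1 / g2
    return (a * ratio, b * ratio)
```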
Accordingly, the encoding rule may include: and adjusting the waveform of the audio file to enable the peak or the trough of the data embedding point corresponding to the preset code element to move to a first amplitude interval, wherein the preset code element is 0 or 1.
It should be understood that whether a peak or trough lies within an amplitude interval is judged by its extremum: when the extremum of the peak/trough lies within the interval, the peak/trough is considered to lie within it. Thus the audio file waveform is adjusted so that the amplitude (extremum) of each data embedding point corresponding to the preset symbol changes from outside the first amplitude interval to inside it, while the amplitude of every data embedding point corresponding to a non-preset symbol remains outside the first amplitude interval. On this basis a matching decoding rule can be set; see fig. 5 and its related description for details.
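Under these assumptions, the matching decoding check can be sketched as follows (illustrative only; how the data extraction points are located on the recording waveform is left out):

```python
def extract_bits(rec_wave, extraction_points, second_interval,
                 preset_symbol="0"):
    # Matching decoding rule: a data extraction point whose amplitude lies
    # inside the second amplitude interval carries the preset symbol; any
    # other amplitude carries the non-preset symbol.
    a2, b2 = second_interval
    other = "1" if preset_symbol == "0" else "0"
    return "".join(preset_symbol if a2 < rec_wave[i] < b2 else other
                   for i in extraction_points)
```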
Suppose the preset symbol is 0. Referring to fig. 4, if the binary information corresponding to the log information is "101101", the peak at data embedding point D2 may be moved down into the first amplitude interval (a, b), and the trough at data embedding point D5 moved up into it.
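A hedged sketch of the encoding adjustment, using the "target threshold" variant described later in this section (the threshold positions at 25% and 75% of the interval are assumptions; only the single extremum sample is moved, whereas a real implementation would reshape the whole peak or trough smoothly):

```python
def embed_bits(wave, embedding_points, bits, interval, preset_symbol="0"):
    # Each embedding point carrying the preset symbol has its extremum set
    # to a fixed value inside (a, b): a third threshold for maxima and a
    # smaller fourth threshold for minima. Non-preset symbols are left
    # untouched, so their extrema stay outside the interval.
    a, b = interval
    third = a + 0.75 * (b - a)    # hypothetical target for maxima
    fourth = a + 0.25 * (b - a)   # hypothetical target for minima
    out = list(wave)
    for bit, i in zip(bits, embedding_points):
        if bit == preset_symbol:
            out[i] = third if out[i] > b else fourth
    return out
```

Feeding the result to the decoding check recovers the embedded bit string.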
It should be noted that, on the one hand, too large an adjustment of the amplitude (extremum) of a data embedding point greatly affects the listening experience of the audio file. Once the amplitude upper/lower limits have been determined, using a maximum point with too large a value, or a minimum point with too small a value, as a data embedding point makes the required adjustment correspondingly large. For this reason, only maximum points that are smaller than a first threshold, or minimum points that are larger than a second threshold, before processing may be selected as data embedding points, where the first threshold lies between the amplitude maximum and the amplitude upper limit, and the second threshold between the amplitude minimum and the amplitude lower limit.
On the other hand, if the adjustment value of the amplitude (extremum) of a data embedding point is too small, the adjusted extremum lies too close to a boundary value of the first amplitude interval (that is, the adjusted maximum is close to the interval's upper limit, or the adjusted minimum is close to its lower limit), so that, under the influence of noise, the binary information corresponding to the log information may not be accurately extracted.
In view of the above, in some embodiments, the extremum (the amplitude of a data embedding point corresponding to the preset symbol) may be adjusted by a preset ratio of the difference between its pre-adjustment value and a boundary value of the first amplitude interval. For example, for a data embedding point that is a maximum point, the adjustment value is the preset ratio of the difference between the pre-adjustment amplitude and the amplitude upper limit. For another example, for a data embedding point that is a minimum point, the adjustment value is the preset ratio of the difference between the amplitude lower limit and the pre-adjustment amplitude. The preset ratio is not less than 0.5 and not more than 1.
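The ratio-based adjustment can be sketched as follows. The helper is hypothetical and applies the rule exactly as stated (adjustment value = preset ratio × distance to the relevant amplitude limit); where the adjusted value lands relative to (a, b) depends on the chosen ratio and the starting amplitude.

```python
def adjust_embedding_point(amplitude, a, b, ratio=0.8):
    """Adjust one embedding point's extremum toward the first interval (a, b).

    For a maximum point (above the upper limit b), the adjustment value is
    ratio * (amplitude - b), applied downward; for a minimum point (below
    the lower limit a), it is ratio * (a - amplitude), applied upward.
    The text requires 0.5 <= ratio <= 1.
    """
    if amplitude > b:                       # maximum point: move down
        return amplitude - ratio * (amplitude - b)
    elif amplitude < a:                     # minimum point: move up
        return amplitude + ratio * (a - amplitude)
    return amplitude                        # already inside the interval
```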
In still other embodiments, for data embedding points that are maximum points, the amplitude of the data embedding point corresponding to the preset symbol may be adjusted to a third threshold within the first amplitude interval; for data embedding points that are minimum points, the amplitude may be adjusted to a fourth threshold within the first amplitude interval, where the third threshold is greater than the fourth threshold. Furthermore, the third/fourth threshold may be determined by testing. That is, several candidate values of the third/fourth threshold may be set, and the probability of accurately extracting the binary information corresponding to the log information (which may be called the decoding success rate) may be tested under each candidate, the candidate with the highest decoding success rate being chosen as the third/fourth threshold.
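The candidate-testing procedure could be sketched as below, where `trial_fn` is a hypothetical stand-in for one full encode, play/record, decode round that reports whether extraction succeeded.

```python
def pick_best_threshold(candidates, trial_fn, trials=100):
    """Pick the candidate threshold with the highest decoding success rate.

    trial_fn(candidate) runs one encode -> play/record -> decode round
    using that candidate and returns True on a successful extraction
    (hypothetical helper, not defined by the patent).
    """
    best, best_rate = None, -1.0
    for c in candidates:
        rate = sum(trial_fn(c) for _ in range(trials)) / trials
        if rate > best_rate:
            best, best_rate = c, rate
    return best, best_rate
```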
FIG. 5 is a waveform diagram of an audio record file according to some embodiments of the present description. The audio record file waveform has a data extraction point corresponding to each data embedding point on the audio file waveform. Accordingly, referring to the foregoing, each symbol of the binary information corresponds to one data extraction point. Referring to FIGS. 3 and 5, the data embedding points D1-D6 correspond one-to-one to the data extraction points E1-E6, and the 1st to 6th symbols of the binary information correspond one-to-one to the data extraction points E1-E6.
It will be appreciated that the user may start recording before the (processed) audio file is played to ensure that the complete (processed) audio file is recorded. In some embodiments, the user terminal 130 may prompt the user to start recording in advance.
In some embodiments, the data extraction points on the audio record file waveform may be determined based on the pre-processed audio file waveform.
As described above, each data embedding point on the pre-processed audio file waveform corresponds to a time later than the first time t1. To locate, on the audio record file waveform, the second time t2 corresponding to the first time t1 of the audio file, the device (e.g., the server 140 or the receiving device 220) may store the time interval (t1-t0) from the opening time of the audio file (denoted t0) to the first time. On the audio record file waveform, the amplitude before the opening time is approximately zero, i.e., it does not exceed a set threshold; based on this characteristic, the device can locate the opening time (denoted t0') on the audio record file waveform. Since t1-t0 = t2-t0', the device can extend the opening time t0' by the time interval (t1-t0) so as to locate the second time t2 corresponding to the first time t1. It will be appreciated that the opening time of the audio file may be later than its starting time; e.g., a song may have a quiet prelude. Of course, referring to figs. 3-5 in combination, the opening time of the audio file may also be its starting time (t = 0), in which case t2 = t0' + t1.
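A minimal sketch of locating the opening time t0' and the second time t2 in a recording, assuming the recording is a sequence of samples and that "approximately zero" means below a small silence threshold; all names and the threshold value are illustrative.

```python
def locate_second_time(recording, sample_rate, t1_minus_t0, silence_threshold=0.01):
    """Locate t2, the instant in the recording matching the file's first time t1.

    Samples recorded before the processed audio file becomes audible are
    close to zero; the first sample whose magnitude exceeds the silence
    threshold marks the opening time t0'.  Then t2 = t0' + (t1 - t0).
    """
    for i, s in enumerate(recording):
        if abs(s) > silence_threshold:
            t0_prime = i / sample_rate   # opening time on the recording, in seconds
            return t0_prime + t1_minus_t0
    return None  # no audible audio found in the recording
```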
Locating the starting time/opening time helps locate the data extraction point corresponding to each data embedding point. For each data embedding point, the device may store the time interval from the starting time/opening time of the audio file to a time within the period covered by the peak or trough where that data embedding point is located, and extend the starting time/opening time located on the audio record file waveform by the same interval, thereby locating the corresponding data extraction point.
Locating the second time t2 helps determine the gain ratio G1/G2. The device may record the amplitude corresponding to the first time t1 on the audio file waveform, and calculate the ratio of that amplitude to the amplitude corresponding to the second time t2 on the audio record file waveform, yielding the gain ratio G1/G2.
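The gain-ratio computation might look like the following sketch, assuming the second time t2 has already been located on the recording; the helper name and sample indexing are illustrative.

```python
def gain_ratio(amp_at_t1, recording, sample_rate, t2):
    """Gain ratio G1/G2 between the audio file and the recording.

    amp_at_t1 is the stored amplitude at the first time t1 on the audio
    file waveform; the amplitude at the matching second time t2 on the
    recording waveform supplies the denominator.
    """
    amp_at_t2 = recording[int(round(t2 * sample_rate))]
    return amp_at_t1 / amp_at_t2
```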
To match the aforementioned encoding rule, the decoding rule may include: for each data extraction point, judging whether the amplitude of the data extraction point is within a second amplitude interval; if so, the symbol corresponding to the data extraction point is determined to be the preset symbol, and if not, a non-preset symbol.
In addition, the manner of determining the data extraction points, and the correspondence between each bit of the extracted binary information and the data extraction points, may also be regarded as part of the decoding rule.
In some embodiments, the device may store the boundary value(s) of the first amplitude interval (the upper limit, or both the upper and lower limits), and may scale the first amplitude interval by the gain ratio G1/G2 to obtain the second amplitude interval before judging whether the amplitude of a data extraction point falls within it. Referring to figs. 3 and 5 in combination, the first amplitude interval is (a, b) and the second amplitude interval is (a × G1/G2, b × G1/G2). Assume, without loss of generality, that the preset symbol is 0. Referring to fig. 5, the amplitudes of data extraction points E2 and E5 fall within the second amplitude interval (a × G1/G2, b × G1/G2), so the 2nd and 5th symbols of the binary information are extracted as 0 and the remaining symbols as 1, yielding the binary information "101101".
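The interval-scaling decoding step can be sketched as follows, with the preset symbol taken as "0" and the gain ratio applied to the interval boundaries exactly as described above; the function name is illustrative.

```python
def decode_symbols(extraction_amplitudes, a, b, g_ratio, preset_symbol="0"):
    """Recover the binary string from amplitudes at the data extraction points.

    The first interval (a, b) is scaled by the gain ratio G1/G2 to obtain
    the second interval; an amplitude inside the second interval decodes
    to the preset symbol, anything else to the other symbol.
    """
    lo, hi = a * g_ratio, b * g_ratio          # second amplitude interval
    other = "1" if preset_symbol == "0" else "0"
    return "".join(
        preset_symbol if lo < amp < hi else other
        for amp in extraction_amplitudes
    )
```

With a = 0.2, b = 0.6 and a gain ratio of 2, the second interval is (0.4, 1.2), and amplitudes with only the 2nd and 5th points inside it decode to "101101", matching the worked example above.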
In some embodiments, the device may store the boundary value(s) of the first amplitude interval (the upper limit, or both the upper and lower limits), and may instead scale the amplitude of the data extraction point according to the gain ratio G1/G2 and then judge whether the scaled amplitude is within the first amplitude interval, which is equivalent to judging whether the amplitude of the data extraction point is within the second amplitude interval.

Fig. 6 is a block diagram of a system for transmitting information by sound, according to some embodiments of the present description. The system 600 may be implemented on the audio playback device 110. As shown in fig. 6, the system 600 may include a log information acquisition module 610, a first conversion module 620, and a processing module 630.
The log information acquisition module 610 may be configured to acquire log information that can be used for fault analysis.
The first conversion module 620 may be used to convert the log information into binary information.
The processing module 630 may be configured to process the audio file to be played according to the binary information and a preset encoding rule, and to play the processed audio file.
FIG. 7 is a block diagram of a system for extracting information from sound, according to some embodiments of the present description. The system 700 may be implemented on the server 140. As shown in fig. 7, the system 700 may include a recording file receiving module 710, an information extraction module 720, and a second conversion module 730.
The recording file receiving module 710 may be configured to receive a recording file uploaded by the user terminal 130, where the recording file is obtained (for example, by the audio recording device 120) by recording a played audio file that was processed according to a preset encoding rule.
The information extraction module 720 may be configured to extract binary information from the audio record file according to a preset decoding rule, where the decoding rule matches the encoding rule.
The second conversion module 730 may be configured to convert the extracted binary information into log information.
Fig. 8 is a block diagram of a system for communicating information using sound according to further embodiments of the present disclosure. The system 800 may be implemented on the audio playback device 210. As shown in fig. 8, the system 800 may include an information obtaining module 810 to be transmitted, a binary information obtaining module 820, and a processing module 830.
The to-be-transmitted information obtaining module 810 may be configured to obtain information to be transmitted. In some embodiments, the information to be communicated may include control instructions, Wi-Fi passwords, and the like.
The binary information obtaining module 820 may be configured to obtain binary information based on the information to be transmitted. In some embodiments, the information to be communicated may itself be binary information. In some embodiments, the information to be communicated may be converted to binary information.
The processing module 830 may be configured to process the audio file to be played according to the information to be transmitted and a preset encoding rule, and play the processed audio file.
FIG. 9 is a block diagram of a system for extracting information from sound in accordance with further embodiments of the present description. System 900 may be implemented on receiving device 220. As shown in fig. 9, the system 900 may include a sound recording file obtaining module 910, an information extracting module 920, and an information obtaining module 930 to be transmitted.
The sound recording file obtaining module 910 may be configured to obtain a recording file. The recording file may be obtained by recording a played audio file that was processed according to a preset encoding rule; the receiving device 220 may have an integrated audio recording module and/or support an external audio recording device.
The information extraction module 920 may be configured to extract binary information from the audio record file according to a preset decoding rule, where the decoding rule matches the encoding rule.
The to-be-transmitted information obtaining module 930 may be configured to obtain the to-be-transmitted information based on the extracted binary information. In some embodiments, the extracted binary information itself may be the information to be transferred. In some embodiments, the extracted binary information may be converted (restored) into the information to be transmitted.
For more details on fig. 6-9 and their modules, reference may be made to fig. 1-5 and their associated description.
It should be understood that the system and its modules shown in FIG. 2 may be implemented in a variety of ways. For example, in some embodiments, the system and its modules may be implemented in hardware, software, or a combination of software and hardware. The hardware portion may be implemented using dedicated logic; the software portion may be stored in a memory and executed by a suitable instruction execution system, such as a microprocessor or specially designed hardware. Those skilled in the art will appreciate that the methods and systems described above may be implemented using computer-executable instructions and/or embodied in processor control code, such code being provided, for example, on a carrier medium such as a diskette, CD- or DVD-ROM, a programmable memory such as read-only memory (firmware), or a data carrier such as an optical or electronic signal carrier. The system and its modules in this specification may be implemented not only by hardware circuits such as very large scale integrated circuits or gate arrays, semiconductors such as logic chips and transistors, or programmable hardware devices such as field programmable gate arrays and programmable logic devices, but also by software executed by various types of processors, or by a combination of the above hardware circuits and software (e.g., firmware).
It should be noted that the above description of the system and its modules is for convenience of description only and does not limit the present disclosure to the illustrated embodiments. It will be appreciated by those skilled in the art that, having understood the principle of the system, the modules may be combined in various ways or constituted as a subsystem connected with other modules without departing from this principle. For example, in some embodiments, the log information acquisition module 610 and the first conversion module 620 may be different modules in a system, or one module may implement the functions of both. For another example, in some embodiments, the information extraction module 720 and the second conversion module 730 may be two modules or may be combined into one module. Such variations are within the scope of the present disclosure.
It should be noted that the description of the flow in this specification is for illustration and description only, and does not limit the scope of the application of this specification. Various modifications and alterations to the flow may occur to those skilled in the art, given the benefit of this description. However, such modifications and variations are intended to be within the scope of the present description.
The beneficial effects that may be brought by the embodiments of the present description include, but are not limited to: (1) transmitting information by sound and extracting information from sound; (2) by selecting suitable data embedding points, controlling the adjustment value of the extremum, and the like, the influence on the listening experience of the audio file can be minimized, i.e., information can be transmitted while the audio file keeps playing normally; (3) the user can provide log information usable for fault analysis to the server simply by recording and uploading a recording file, so that technicians can perform a preliminary fault analysis based on the log information and then feed back the fault cause, maintenance suggestions, and the like, or prepare according to the preliminary conclusion (e.g., prepare the maintenance tools and components that may be needed) before making an on-site visit to provide maintenance service; (4) the user can conveniently control other nearby electrical appliances (such as a television, an air conditioner, and the like) by means of a smart speaker; (5) a smart speaker can automatically send the Wi-Fi password to a nearby user terminal (such as a mobile phone, tablet computer, or notebook computer) whose recording function is enabled, so that the user can obtain the Wi-Fi password without manually entering it, scanning a code, or the like. It should be noted that different embodiments may produce different advantages; in different embodiments, any one or a combination of the above advantages, or any other advantages, may be obtained.
Having thus described the basic concept, it will be apparent to those skilled in the art that the foregoing detailed disclosure is to be considered merely illustrative and not restrictive of the embodiments herein. Various modifications, improvements and adaptations to the embodiments described herein may occur to those skilled in the art, although not explicitly described herein. Such modifications, improvements and adaptations are proposed in the embodiments of the present specification and thus fall within the spirit and scope of the exemplary embodiments of the present specification.
Also, this specification uses specific words to describe its embodiments. Reference throughout this specification to "one embodiment", "an embodiment", and/or "some embodiments" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the specification. Therefore, it is emphasized and should be appreciated that two or more references to "an embodiment", "one embodiment", or "an alternative embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, certain features, structures, or characteristics of one or more embodiments of the specification may be combined as appropriate.
Moreover, those skilled in the art will appreciate that aspects of the embodiments of the present description may be illustrated and described in terms of several patentable species or situations, including any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof. Accordingly, aspects of the embodiments of the present description may be carried out entirely by hardware, entirely by software (including firmware, resident software, micro-code, etc.), or by a combination of hardware and software. The above hardware or software may be referred to as a "data block", "module", "engine", "unit", "component", or "system". Furthermore, aspects of the embodiments of the present specification may be embodied as a computer product, including computer readable program code, on one or more computer readable media.
The computer storage medium may comprise a propagated data signal with the computer program code embodied therewith, for example, on baseband or as part of a carrier wave. The propagated signal may take any of a variety of forms, including electromagnetic, optical, etc., or any suitable combination. A computer storage medium may be any computer-readable medium that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code located on a computer storage medium may be propagated over any suitable medium, including radio, cable, fiber optic cable, RF, or the like, or any combination of the preceding.
Computer program code required for the operation of various portions of the embodiments of the present description may be written in any one or more programming languages, including object-oriented programming languages such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB.NET, and Python, conventional procedural programming languages such as C, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP, and ABAP, dynamic programming languages such as Python, Ruby, and Groovy, or other programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or processing device. In the latter scenario, the remote computer may be connected to the user's computer through any form of network, such as a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet), or in a cloud computing environment, or as a service, such as software as a service (SaaS).
In addition, unless explicitly stated in the claims, the order of processing elements and sequences, use of numbers and letters, or use of other names in the embodiments of the present specification are not intended to limit the order of the processes and methods in the embodiments of the present specification. While various presently contemplated embodiments of the invention have been discussed in the foregoing disclosure by way of example, it is to be understood that such detail is solely for that purpose and that the appended claims are not limited to the disclosed embodiments, but, on the contrary, are intended to cover all modifications and equivalent arrangements that are within the spirit and scope of the embodiments herein. For example, although the system components described above may be implemented by hardware devices, they may also be implemented by software-only solutions, such as installing the described system on an existing processing device or mobile device.
Similarly, it should be noted that in the preceding description of embodiments of the specification, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more embodiments of the invention. This method of disclosure, however, is not intended to imply that more features are required than are expressly recited in the claims. Indeed, the embodiments may be characterized as having less than all of the features of a single embodiment disclosed above.
Each patent, patent application, patent application publication, and other material cited in this specification, such as articles, books, specifications, publications, documents, and the like, is hereby incorporated by reference in its entirety, except for any application history document that is inconsistent with or conflicts with the contents of this specification, and except for any document that would limit the broadest scope of the claims of this specification (whether currently or later appended). It should be noted that if there is any inconsistency or conflict between the descriptions, definitions, and/or use of terms in the materials accompanying this specification and the contents of this specification, the descriptions, definitions, and/or use of terms in this specification shall prevail.
Finally, it should be understood that the embodiments described herein are merely illustrative of the principles of the embodiments of the present disclosure. Other variations are possible within the scope of the embodiments of the present description. Thus, by way of example, and not limitation, alternative configurations of the embodiments of the specification can be considered consistent with the teachings of the specification. Accordingly, the embodiments of the present description are not limited to only those embodiments explicitly described and depicted herein.

Claims (9)

1. A method for communicating information by sound, the method being performed by an audio playback device having a data processing function, comprising:
acquiring log information, wherein the log information can be used for fault analysis;
converting the log information into binary information;
according to the binary information, processing the audio file according to a preset coding rule, and playing the processed audio file so as to: the user side can obtain a recording file obtained by recording and playing the processed audio file and upload the recording file to the server side; the server side can extract binary information from the audio file according to a preset decoding rule, and convert the extracted binary information into log information, wherein the decoding rule is matched with the encoding rule;
the waveform of the audio file is provided with a data embedding point, the data embedding point is an extreme point, the data embedding point is positioned outside a first amplitude interval formed by a preset amplitude upper limit and a preset amplitude lower limit before processing, the maximum value corresponding to the data embedding point is smaller than a first threshold value between the amplitude maximum value and the amplitude upper limit before processing, and the minimum value corresponding to the data embedding point is larger than a second threshold value between the amplitude minimum value and the amplitude lower limit before processing; the encoding rule includes: each code element of the binary information corresponds to a data embedding point, the waveform of the audio file is adjusted to enable the peak or the trough of the data embedding point corresponding to a preset code element to move to the first amplitude interval, and the preset code element is 0 or 1.
2. The method of claim 1, wherein each data embedding point on the pre-processed audio file waveform corresponds to a time later than a first time, the first time later than a start time of the audio file waveform.
3. The method as claimed in claim 1, wherein the adjusting the audio file waveform such that the peak or the trough of the data embedding point corresponding to the preset symbol is moved to be within the first amplitude interval comprises:
adjusting the amplitude of the data embedding point corresponding to the preset code element according to the difference value between the amplitude of the data embedding point corresponding to the preset code element and the boundary value of the first amplitude interval and a preset proportion;
wherein, for the data embedding point belonging to the maximum value point, the adjustment value of the amplitude of the data embedding point is the preset proportion of the difference between the amplitude before adjustment and the amplitude upper limit; and for the data embedding point belonging to the minimum value point, the adjustment value of the amplitude of the data embedding point is the preset proportion of the difference between the amplitude lower limit and the amplitude before adjustment, and the preset proportion is not less than 0.5 and not more than 1.
4. The method as claimed in claim 1, wherein the adjusting the audio file waveform such that the peak or the trough of the data embedding point corresponding to the preset symbol is moved to be within the first amplitude interval comprises:
for the data embedding points belonging to the maximum value point, adjusting the amplitude of the data embedding point corresponding to the preset code element to be a third threshold value in the first amplitude interval; for the data embedding points belonging to the minimum value points, adjusting the amplitude of the data embedding point corresponding to the preset code element to be a fourth threshold value in the first amplitude interval; the third threshold is greater than the fourth threshold.
5. A system for communicating information by sound, the system being implemented on an audio playback device having data processing capabilities, comprising:
the log information acquisition module is used for acquiring log information which can be used for fault analysis;
the first conversion module is used for converting the log information into binary information;
and the processing module is used for processing the audio file according to the binary information and a preset coding rule and playing the processed audio file so as to enable: the user side can obtain a recording file obtained by recording and playing the processed audio file and upload the recording file to the server side; the server side can extract binary information from the audio file according to a preset decoding rule, and convert the extracted binary information into log information, wherein the decoding rule is matched with the encoding rule;
the waveform of the audio file is provided with a data embedding point, the data embedding point is an extreme point, the data embedding point is positioned outside a first amplitude interval formed by a preset amplitude upper limit and a preset amplitude lower limit before processing, the maximum value corresponding to the data embedding point is smaller than a first threshold value between the amplitude maximum value and the amplitude upper limit before processing, and the minimum value corresponding to the data embedding point is larger than a second threshold value between the amplitude minimum value and the amplitude lower limit before processing; the encoding rule includes: each code element of the binary information corresponds to a data embedding point, the waveform of the audio file is adjusted to enable the peak or the trough of the data embedding point corresponding to a preset code element to move to the first amplitude interval, and the preset code element is 0 or 1.
6. An apparatus for communicating information using sound, comprising a processor and a storage device for storing instructions which, when executed by the processor, implement the method of any one of claims 1 to 4.
7. A method for extracting information from sound, the method being performed by a server and comprising:
receiving a recording file uploaded by a user side, wherein the recording file is obtained by recording and playing an audio file processed according to a preset coding rule;
extracting binary information from the audio record file according to a preset decoding rule, wherein the decoding rule is matched with the encoding rule;
converting the binary information into log information;
the waveform of the audio file is provided with a data embedding point, the data embedding point is an extreme point, the data embedding point is positioned outside a first amplitude interval formed by a preset amplitude upper limit and a preset amplitude lower limit before processing, the maximum value corresponding to the data embedding point is smaller than a first threshold value between the amplitude maximum value and the amplitude upper limit, and the minimum value corresponding to the data embedding point is larger than a second threshold value between the amplitude minimum value and the amplitude lower limit; the waveform of the audio file has a data extraction point corresponding to each data embedding point on the waveform of the audio file; the decoding rule includes: determining data extraction points on the audio file waveform based on the pre-processed audio file waveform; each code element of the binary information corresponds to a data extraction point, and for each data extraction point, whether the amplitude of the data extraction point is within a second amplitude interval corresponding to the first amplitude interval is judged, if yes, the code element corresponding to the data extraction point is determined to be a preset code element, and if not, the code element is determined to be a non-preset code element, and the preset code element is 0 or 1.
8. A system for extracting information from sound, the system implemented on a server side comprising:
the recording file receiving module is used for receiving a recording file uploaded by a user side, wherein the recording file is obtained by recording and playing an audio file processed according to a preset coding rule;
the information extraction module is used for extracting binary information from the sound recording file according to a preset decoding rule, and the decoding rule is matched with the coding rule;
the second conversion module is used for converting the binary information into log information;
the waveform of the audio file has data embedding points; each data embedding point is an extreme point that, before processing, lies outside a first amplitude interval formed by a preset amplitude upper limit and a preset amplitude lower limit; the maximum value corresponding to a data embedding point is smaller than a first threshold value between the amplitude maximum value and the amplitude upper limit, and the minimum value corresponding to a data embedding point is larger than a second threshold value between the amplitude minimum value and the amplitude lower limit; the waveform of the audio file has a data extraction point corresponding to each data embedding point; the decoding rule includes: determining the data extraction points on the audio file waveform based on the pre-processed audio file waveform; each code element of the binary information corresponds to one data extraction point, and for each data extraction point, judging whether the amplitude of the data extraction point falls within a second amplitude interval corresponding to the first amplitude interval; if so, the code element corresponding to the data extraction point is determined to be a preset code element, and if not, a non-preset code element, the preset code element being 0 or 1.
9. An apparatus for extracting information from sound, comprising a processor and a storage device storing instructions which, when executed by the processor, implement the method of claim 7.
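The coding and decoding rules recited in the claims can be illustrated with a short sketch: local extreme points of the waveform serve as embedding/extraction points, a preset code element is written by moving an extreme point into the first amplitude interval, and decoding checks whether each extraction point's amplitude falls within the corresponding interval. The function names, the interval bounds, and the simple clamping step are all hypothetical illustrations, not the patented implementation; in particular, the claims' first/second threshold constraints and pre-processing step are omitted for brevity.

```python
import numpy as np

def find_extreme_points(waveform):
    """Indices of local extrema (maxima and minima) of a 1-D waveform."""
    d = np.diff(waveform)
    # a sign change in the first difference marks an extreme point
    return np.where(np.sign(d[:-1]) != np.sign(d[1:]))[0] + 1

def embed_bits(waveform, bits, upper=0.6, lower=-0.6):
    """Write each preset code element ('1' here) by clamping the
    corresponding extreme point into the amplitude interval [lower, upper].
    Extreme points are assumed to lie outside the interval beforehand."""
    out = waveform.copy()
    points = find_extreme_points(out)
    for idx, bit in zip(points, bits):
        if bit == 1 and not (lower <= out[idx] <= upper):
            out[idx] = upper if out[idx] > 0 else lower
    return out

def decode_bits(waveform, n_bits, upper=0.6, lower=-0.6):
    """Read one code element per extraction point: an amplitude inside
    the interval decodes as the preset code element 1, otherwise 0."""
    points = find_extreme_points(waveform)
    return [1 if lower <= waveform[i] <= upper else 0 for i in points[:n_bits]]
```

A round trip on a toy waveform whose extremes all start outside the interval recovers the embedded bits; a real implementation would also have to keep the modified samples as extrema and survive the record-and-playback channel the claims describe.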
CN202011020229.3A 2020-09-25 2020-09-25 Method and system for transmitting information by sound Active CN111930551B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110089317.7A CN112860468B (en) 2020-09-25 2020-09-25 Method and system for transmitting information by sound
CN202011020229.3A CN111930551B (en) 2020-09-25 2020-09-25 Method and system for transmitting information by sound

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011020229.3A CN111930551B (en) 2020-09-25 2020-09-25 Method and system for transmitting information by sound

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202110089317.7A Division CN112860468B (en) 2020-09-25 2020-09-25 Method and system for transmitting information by sound

Publications (2)

Publication Number Publication Date
CN111930551A CN111930551A (en) 2020-11-13
CN111930551B true CN111930551B (en) 2021-01-08

Family

ID=73334134

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202110089317.7A Active CN112860468B (en) 2020-09-25 2020-09-25 Method and system for transmitting information by sound
CN202011020229.3A Active CN111930551B (en) 2020-09-25 2020-09-25 Method and system for transmitting information by sound

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202110089317.7A Active CN112860468B (en) 2020-09-25 2020-09-25 Method and system for transmitting information by sound

Country Status (1)

Country Link
CN (2) CN112860468B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1859584A (en) * 2005-11-14 2006-11-08 华为技术有限公司 Video frequency broadcast quality detecting method for medium broadcast terminal device
US20080091805A1 (en) * 2006-10-12 2008-04-17 Stephen Malaby Method and apparatus for a fault resilient collaborative media serving array
JP2010136234A (en) * 2008-12-08 2010-06-17 Mitsubishi Electric Corp Wireless communication apparatus
CN107084754A (en) * 2017-04-27 2017-08-22 深圳万发创新进出口贸易有限公司 A kind of transformer fault detection device

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3991249B2 (en) * 1998-07-15 2007-10-17 ソニー株式会社 Encoding apparatus and encoding method, decoding apparatus and decoding method, information processing apparatus and information processing method, and recording medium
CN101964202B (en) * 2010-09-09 2012-03-28 南京中兴特种软件有限责任公司 Audio data file playback processing method mixed with multiple encoded formats
CN105225683B (en) * 2014-06-18 2019-11-05 中兴通讯股份有限公司 Audio frequency playing method and device
CN104991936B (en) * 2015-07-03 2018-04-13 广州市动景计算机科技有限公司 A kind of target information acquisition, method for pushing and device
CN108024120B (en) * 2016-11-04 2020-04-17 上海动听网络科技有限公司 Audio generation, playing and answering method and device and audio transmission system
CN108735223B (en) * 2017-04-14 2020-08-07 北大方正集团有限公司 Method and system for embedding and extracting digital watermark of audio file
CN108964787B (en) * 2018-07-06 2021-02-19 南京航空航天大学 Information broadcasting method based on ultrasonic waves
CN110309662A (en) * 2019-06-10 2019-10-08 广东云立方互动科技有限公司 Acoustic signal processing method, electronic equipment, server and storage medium


Also Published As

Publication number Publication date
CN112860468A (en) 2021-05-28
CN112860468B (en) 2022-05-10
CN111930551A (en) 2020-11-13

Similar Documents

Publication Publication Date Title
US9336773B2 (en) System and method for standardized speech recognition infrastructure
US6775651B1 (en) Method of transcribing text from computer voice mail
US10643620B2 (en) Speech recognition method and apparatus using device information
CN102842306A (en) Voice control method and device as well as voice response method and device
CN107274895B (en) Voice recognition device and method
US20210248225A1 (en) Contactless user authentication method
CN108305618A (en) Voice acquisition and search method, intelligent pen, search terminal and storage medium
JP2014176033A (en) Communication system, communication method and program
CN110364155A (en) Voice control error-reporting method, electric appliance and computer readable storage medium
CN111930551B (en) Method and system for transmitting information by sound
CN105498168A (en) Method and device for controlling treadmill through voices
CN109147791A (en) A kind of shorthand system and method
CN106528715A (en) Audio content checking method and device
CN103730117A (en) Self-adaptation intelligent voice device and method
CN116403591A (en) Speech enhancement method, apparatus and computer readable storage medium
CN204440902U (en) Terminal intelligent tone testing system
JP2016180918A (en) Voice recognition system, voice recognition method, and program
CN109859763A (en) A kind of intelligent sound signal type recognition system
CN114095883B (en) Fixed telephone terminal communication method, device, computer equipment and storage medium
CN110853651A (en) Voice voting method, voting content verification method and system thereof
CN106297775A (en) Speech recognition equipment and method
CN114666706B (en) Sound effect enhancement method, device and system
CN114179083B (en) Leading robot voice information generation method and device and leading robot
CN117238275B (en) Speech synthesis model training method and device based on common sense reasoning and synthesis method
CN110956964B (en) Method, apparatus, storage medium and terminal for providing voice service

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: No.8, Kangping street, Jiangning Economic and Technological Development Zone, Nanjing, Jiangsu, 211106

Patentee after: Hansang (Nanjing) Technology Co.,Ltd.

Address before: No.8, Kangping street, Jiangning Economic and Technological Development Zone, Nanjing, Jiangsu, 211106

Patentee before: HANSONG (NANJING) TECHNOLOGY CO.,LTD.