WO2015106576A1 - Audio-based data tag distribution system and method - Google Patents
Audio-based data tag distribution system and method
- Publication number
- WO2015106576A1 WO2015106576A1 PCT/CN2014/086606 CN2014086606W WO2015106576A1 WO 2015106576 A1 WO2015106576 A1 WO 2015106576A1 CN 2014086606 W CN2014086606 W CN 2014086606W WO 2015106576 A1 WO2015106576 A1 WO 2015106576A1
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- audio
- data
- processing device
- audio data
- processor
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
- G06F3/165—Management of the audio stream, e.g. setting of volume, audio stream path
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/40—Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
- G06F16/43—Querying
- G06F16/432—Query formulation
- G06F16/433—Query formulation using audio data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06K—GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
- G06K19/00—Record carriers for use with machines and with at least a part designed to carry digital markings
- G06K19/06—Record carriers for use with machines and with at least a part designed to carry digital markings characterised by the kind of the digital marking, e.g. shape, nature, code
- G06K19/067—Record carriers with conductive marks, printed circuits or semiconductor circuit elements, e.g. credit or identity cards also with resonating or responding marks without active components
- G06K19/07—Record carriers with conductive marks, printed circuits or semiconductor circuit elements, e.g. credit or identity cards also with resonating or responding marks without active components with integrated circuit chips
- G06K19/0723—Record carriers with conductive marks, printed circuits or semiconductor circuit elements, e.g. credit or identity cards also with resonating or responding marks without active components with integrated circuit chips the record carrier comprising an arrangement for non-contact communication, e.g. wireless communication circuits on transponder cards, non-contact smart cards or RFIDs
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/28—Constructional details of speech recognition systems
- G10L15/30—Distributed recognition, e.g. in client-server systems, for mobile phones or network applications
Definitions
- the present invention relates to the field of computer application technologies, in particular to computer techniques for audio processing, and more particularly to an audio-based data tag distribution system and method.
- the data tag refers to tagged data information that can be read by a specific device, such as NFC, RFID, a barcode, or a two-dimensional code; in the present application it mainly refers to barcode and two-dimensional code patterns that can be transmitted over a network.
- the object of the present invention is to overcome the above shortcomings of the prior art by providing a data tag, corresponding to the information to be propagated, that is associated with one or more audio tracks; the associated audio data may be the background music itself or may be mixed into the background music.
- the viewer or customer can recognize the background music through a mobile phone or similar device and then obtain the specific information over the network, so that a viewer can obtain product information intuitively without the performance being hindered, and a customer can quickly and easily obtain a large amount of product information.
- the audio-based data tag distribution system of the present invention is constituted as follows:
- the system includes an audio processing device, an audio playback device, a client, and a data tag server.
- the audio processing device is configured to define the audio data according to a preset rule, and associate a part, some parts, or all of the audio data with the preset one or more data tag information to form associated audio data.
- the audio processing device is further configured to receive user-side audio data, and search for one or more data tag information associated with the received user-side audio data;
- the audio playback device is coupled to the audio processing device, and is configured to read the associated audio data and to output the associated audio data in the form of sound waves;
- the user end includes a microphone, a sound-wave digitizing processor connected to the microphone, and a network module connected to the sound-wave digitizing processor,
- the microphone is configured to receive the sound waves output by the audio playback device, the sound-wave digitizing processor converts the sound waves received by the microphone into digitized client audio data, and the network module connects to the audio processing device through a network and transmits the client audio data to the audio processing device,
- the user end is further configured to acquire, through the network, the data information corresponding to an obtained data tag
- the audio processing device includes a high audio processor and a mixing processor.
- the high audio processor is configured to associate high-audio data that is inaudible to the human ear with one or more pieces of data tag information; the high audio processor is further configured to receive client audio data and, according to the received client audio data, to look up the one or more pieces of data tag information associated with it;
- the mixing processor is coupled to the high audio processor and stores the audible audio data that needs to be played; the high-audio data that cannot be heard by the human ear is mixed with this audible audio data to form the associated audio data.
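As a minimal sketch of the mixing step described above (the function name and fixed gain are illustrative assumptions, not part of the patent), an inaudible high-frequency tag tone can be summed into the audible track sample by sample with clamping:

```python
def mix(audible, high_freq_tag, tag_gain=0.05):
    """Sum an inaudible high-frequency tag tone into the audible track,
    clamping each mixed sample to the normalized [-1.0, 1.0] range."""
    n = max(len(audible), len(high_freq_tag))
    mixed = []
    for i in range(n):
        a = audible[i] if i < len(audible) else 0.0
        b = high_freq_tag[i] if i < len(high_freq_tag) else 0.0
        mixed.append(max(-1.0, min(1.0, a + tag_gain * b)))
    return mixed
```

A real mixing processor would operate on streamed PCM buffers; this only illustrates the additive-mixing idea.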
- the audio frequency of the high-audio data that cannot be heard by the human ear ranges from 16 kHz to 20 kHz.
- when there are multiple segments of inaudible high-audio data, their audio frequencies differ from one another.
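The patent does not disclose how tags are encoded in the 16-20 kHz band; one plausible reading, sketched below, is that each tag gets a distinct carrier frequency which the receiver recovers from the dominant spectral peak above 16 kHz. All tag names and frequencies here are invented for illustration:

```python
import numpy as np

SAMPLE_RATE = 44100  # Hz; consumer microphones typically reach 20 kHz

# Hypothetical assignment of tag IDs to distinct carriers in the
# inaudible 16-20 kHz band (different segments use different frequencies).
TAG_FREQS = {"tag_A": 16500.0, "tag_B": 17500.0, "tag_C": 18500.0}

def embed_tag(tag_id, duration=0.5, amplitude=0.05):
    """Generate the high-frequency tone that carries the given tag."""
    t = np.arange(int(SAMPLE_RATE * duration)) / SAMPLE_RATE
    return amplitude * np.sin(2 * np.pi * TAG_FREQS[tag_id] * t)

def detect_tag(samples):
    """Locate the dominant spectral peak above 16 kHz and return the
    tag whose carrier frequency is closest to it."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / SAMPLE_RATE)
    band = freqs >= 16000.0
    peak = freqs[band][np.argmax(spectrum[band])]
    return min(TAG_FREQS, key=lambda tag: abs(TAG_FREQS[tag] - peak))
```

Restricting the search to frequencies above 16 kHz means the audible background music does not interfere with detection.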
- the audio processing device includes an audio segmentation processor that stores the audio data to be played, segments it, and associates each segment of audio data with one or more preset pieces of data tag information to form the associated audio data; the audio segmentation processor is further configured to receive client audio data and to look up the one or more pieces of data tag information associated with what is received.
- the audio segmentation processor segments the audio data according to syllables of audio data.
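The patent does not specify its syllable-segmentation algorithm; a common stand-in is splitting at low-energy gaps. The sketch below, with illustrative frame size and threshold, shows that approach:

```python
def segment_by_energy(samples, frame_len=1024, threshold=0.01):
    """Split audio into segments wherever the per-frame mean energy
    drops below a threshold (a rough proxy for syllable boundaries)."""
    segments, current = [], []
    for i in range(0, len(samples), frame_len):
        frame = samples[i:i + frame_len]
        energy = sum(s * s for s in frame) / len(frame)
        if energy < threshold:
            if current:               # a gap closes the current segment
                segments.append(current)
                current = []
        else:
            current.extend(frame)
    if current:
        segments.append(current)
    return segments
```

Each returned segment would then be associated with its own data tag information, as the text describes.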
- the audio-based data tag distribution system further includes a wireless network access device, which connects to the audio processing device, the client, and the data tag server respectively via wireless signals.
- the wireless network access device is a 2G/3G/4G wireless transceiver and the wireless signal is a 2G/3G/4G signal; or the wireless network access device is a wireless router and the wireless signal is a WIFI signal.
- the wireless network access device has a device identification code indicating its location, and is configured to send, after receiving the client audio data, the client audio data together with the device identification code to the audio processing device; the audio processing device further includes a device-identifier resolution server, configured to confirm, by parsing the device identification code, whether the client that sent the audio data is connected to a wireless network access device linked to this audio processing device, thereby determining the client's location; if so, the audio processing device looks up the one or more pieces of data tag information associated with the received client audio data.
- the audio playback device includes a power amplifier and a loudspeaker; the input of the power amplifier is connected to the output of the audio processing device, and the output of the power amplifier is connected to the input of the loudspeaker.
- the present invention also provides an audio-based data label issuing method implemented by the system, the method comprising the following steps:
- the audio processing device defines the audio data according to a preset rule, and associates a part, several parts, or all of the audio data with one or more preset pieces of data tag information to form associated audio data;
- the audio playback device reads the associated audio data from the audio processing device, and outputs the associated audio data in the form of sound waves;
- the user terminal receives the sound wave output by the audio playback device through the microphone, and converts the sound wave received by the microphone into digitized user terminal audio data through the sound wave digitizing processor;
- the user terminal sends the user terminal audio data to the audio processing device through a network module
- the audio processing device searches for one or more data tag information associated with the received client audio data, and transmits the data tag information to the data tag server;
- the data tag server sends a corresponding data tag to the user terminal according to the data tag information received from the audio processing device;
- the user end acquires the data information corresponding to the data label from the server corresponding to the label through the network according to the obtained data label.
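Steps (4) through (7) above amount to two lookups: the audio processing device maps the client's audio to tag information, and the data tag server maps tag information to a concrete tag. A minimal in-memory sketch of that chain (all fingerprints, tag names, and the URL are invented placeholders):

```python
# Invented stand-ins for the two server-side stores.
AUDIO_TAG_INDEX = {"fingerprint_123": "tag_info_A"}            # audio processing device
TAG_STORE = {"tag_info_A": "QR:http://example.com/product/A"}  # data tag server

def audio_processing_device(client_audio_fingerprint):
    """Step (5): look up the tag information associated with the audio."""
    return AUDIO_TAG_INDEX.get(client_audio_fingerprint)

def data_tag_server(tag_info):
    """Step (6): return the concrete data tag, e.g. a QR-code payload."""
    return TAG_STORE.get(tag_info)

def client_request(fingerprint):
    """Steps (4)-(7) from the client's point of view."""
    tag_info = audio_processing_device(fingerprint)
    return data_tag_server(tag_info) if tag_info is not None else None
```

In the real system these lookups happen on separate machines over the network; the dictionaries only illustrate the data flow.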
- the audio processing device includes a high-audio processor and a mixing processor, and the step (1) specifically includes the following steps:
- the high audio processor associates one or more segments of high-audio data that are inaudible to the human ear with one or more pieces of data tag information;
- the mixing processor mixes the one or more segments of inaudible high-audio data with the audible audio data that needs to be played, forming the associated audio data.
- the step (5) is specifically:
- the high audio processor searches for one or more data tag information associated with the received client audio data, and transmits the data tag information to the data tag server.
- the audio frequency of the high-audio data that cannot be heard by the human ear ranges from 16 kHz to 20 kHz.
- when there are multiple segments of inaudible high-audio data, their audio frequencies differ from one another.
- the audio processing device includes an audio segmentation processor, and the step (1) specifically includes the following steps:
- the audio segmentation processor segments the audio data to be played, and associates each piece of audio data with a preset one or more data tag information to form associated audio data.
- the step (5) is specifically:
- the audio segmentation processor searches for one or more data tag information associated with the received client audio data, and transmits the data tag information to the data tag server.
- the audio segmentation processor segments the audio data according to a syllable of audio data that needs to be played.
- the system further includes a wireless network access device, which connects to the audio processing device, the client, and the data tag server respectively via wireless signals; step (4) is specifically:
- the user terminal sends the user terminal audio data to the audio processing device through the wireless network access device through a network module;
- step (6) is specifically:
- the data tag server transmits a corresponding data tag to the user terminal via the wireless network access device according to the data tag information received from the audio processing device.
- the wireless network access device has a device identifier for indicating a location
- the audio processing device further includes a device identifier resolution server
- the step (41) is specifically:
- the device-identifier resolution server of the audio processing device determines whether the received device identification code is the same as that of a wireless network access device connected to the audio processing device; if so, the method proceeds to step (5); if not, the method exits.
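Step (41)'s check reduces to comparing the received identification code against the set of access points the audio processing device is connected to, then continuing with the tag lookup of step (5). A sketch with invented SSIDs and function names:

```python
# Invented SSIDs of the access points this audio processing device serves.
CONNECTED_ACCESS_POINTS = {"STORE_WIFI_01", "STORE_WIFI_02"}

def handle_client_audio(device_id, audio_fingerprint, tag_index):
    """Step (41) then step (5): verify the access point, then look up
    the tag information; return None (exit the method) on a mismatch."""
    if device_id not in CONNECTED_ACCESS_POINTS:
        return None
    return tag_index.get(audio_fingerprint)
```

The location check thus acts as a gate: clients outside the venue's wireless network never reach the tag lookup.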
- the audio playback device includes a power amplifier and a loudspeaker
- the step (2) specifically includes the following steps:
- the power amplifier reads the associated audio data from the audio processing device, and performs power amplification processing on the associated audio data;
- with the audio-based data tag distribution system and method of the invention, since the system includes an audio processing device, an audio playback device, a client, and a data tag server, the audio processing device can associate the data tag corresponding to the information to be propagated with one or more audio tracks; the associated audio data is then played as background music through the audio playback device, and the user recognizes the background music through a client such as a mobile phone or tablet computer.
- the client audio data is sent to the audio processing device over the network; after the audio processing device finds the corresponding data tag information, the data tag server sends the data tag to the client, which can then obtain the product information or other data corresponding to the tag.
- the associated audio data can serve as background music, enabling a viewer to obtain product information intuitively without hindering a performance, or enabling a customer to quickly obtain a large amount of product information; the audio-based data tag distribution system of the present invention has a simple structure and a wide range of applications, and the method is simple to implement at a relatively low cost.
- FIG. 1 is a schematic structural diagram of an audio-based data label issuing system of the present invention.
- FIG. 2 is a schematic diagram showing the steps of a method for distributing an audio-based data tag according to the present invention.
- FIG. 3 is a schematic diagram of an audio-based data label distribution system and method of the present invention implemented in an additional high-audio manner in practical applications.
- FIG. 4 is a schematic diagram of the audio-based data label issuing system and method of the present invention implemented in a background music segmentation definition manner in practical applications.
- FIG. 1 is a schematic structural diagram of an audio-based data label issuing system of the present invention.
- the audio-based data tag distribution system includes an audio processing device, an audio playback device, a client, and a data tag server.
- the audio processing device is configured to define the audio data according to a preset rule, and associate a part, some parts, or all of the audio data with the preset one or more data tag information to form associated audio data.
- the audio processing device is further configured to receive user-side audio data, and search for one or more data tag information associated with the received user-side audio data;
- the audio playback device is coupled to the audio processing device, and is configured to read the associated audio data and to output the associated audio data in the form of sound waves;
- the user end includes a microphone, a sound-wave digitizing processor connected to the microphone, and a network module connected to the sound-wave digitizing processor,
- the microphone is configured to receive the sound waves output by the audio playback device, the sound-wave digitizing processor converts the sound waves received by the microphone into digitized client audio data, and the network module connects to the audio processing device through a network and transmits the client audio data to the audio processing device,
- the user end is further configured to acquire, through the network, the data information corresponding to an obtained data tag
- the audio-based data label issuing method implemented by the system described in this embodiment, as shown in FIG. 2, includes the following steps:
- the audio processing device defines the audio data according to a preset rule, and associates a part, several parts, or all of the audio data with one or more preset pieces of data tag information to form associated audio data;
- the audio playback device reads the associated audio data from the audio processing device, and outputs the associated audio data in the form of sound waves;
- the user terminal receives the sound wave output by the audio playback device through the microphone, and converts the sound wave received by the microphone into digitized user terminal audio data through the sound wave digitizing processor;
- the user terminal sends the user terminal audio data to the audio processing device through a network module
- the audio processing device searches for one or more data tag information associated with the received client audio data, and transmits the data tag information to the data tag server;
- the data tag server sends a corresponding data tag to the user terminal according to the data tag information received from the audio processing device;
- the user end acquires the data information corresponding to the data label from the server corresponding to the label through the network according to the obtained data label.
- the audio processing device comprises a high audio processor and a mixing processor.
- the high-audio processor is used to associate high-audio data that is inaudible to the human ear with one or more pieces of data tag information; it is further configured to receive client audio data and to look up the one or more pieces of data tag information associated with the received client audio data. The mixing processor is connected to the high-audio processor and stores the audible audio data that needs to be played, so that the inaudible high-audio data can be mixed with that audible audio data to form the associated audio data.
- the audio frequency of the high audio data that cannot be heard by the human ear ranges from 16 kHz to 20 kHz. When there are a plurality of high-audio data that cannot be heard by the human ear, the audio frequencies between the pieces of high-audio data are different.
- the step (1) specifically includes the following steps:
- the high audio processor associating one or more pieces of high audio data that are inaudible to a human ear with one or more data tag information;
- the mixing processor mixes the one or more segments of inaudible high-audio data with the audible audio data that needs to be played, forming the associated audio data.
- step (5) is specifically:
- the high audio processor searches for one or more data tag information associated with the received client audio data, and transmits the data tag information to the data tag server.
- the audio processing device includes an audio segmentation processor that stores the audio data that needs to be played, segments it, and associates each segment of audio data with one or more preset pieces of data tag information to form the associated audio data
- the audio segmentation processor is further configured to receive client audio data and to look up the associated one or more pieces of data tag information according to what is received.
- the audio segmentation processor preferably segments the audio data based on the syllables of the audio data.
- the step (1) specifically includes the following steps:
- the audio segmentation processor segments the audio data to be played, and associates each piece of audio data with a preset one or more data tag information to form associated audio data.
- step (5) is specifically:
- the audio segmentation processor searches for one or more data tag information associated with the received client audio data, and transmits the data tag information to the data tag server.
- the system further comprises a wireless network access device, wherein the wireless network access device is respectively connected to the audio processing device, the client and the data tag server by wireless signals.
- the wireless network access device may be a 2G/3G/4G wireless transceiver, in which case the corresponding wireless signal is a 2G/3G/4G wireless signal.
- the wireless network access device can also select a wireless router, and the corresponding wireless signal is a WIFI signal.
- the step (4) is specifically:
- the user terminal sends the user terminal audio data to the audio processing device through the wireless network access device through a network module;
- the step (6) is specifically as follows:
- the data tag server transmits a corresponding data tag to the user terminal via the wireless network access device according to the data tag information received from the audio processing device.
- the wireless network access device has a device identification code (such as the SSID of a wireless router) used to indicate, or serve as, location information, and the wireless network access device is configured to send, after receiving the client audio data, the client audio data and the device identification code to the audio processing device;
- the audio processing device further includes a device-identifier resolution server, configured to confirm, by parsing the device identification code, whether the client that sent the audio data is connected to a wireless network access device linked to this audio processing device, thereby determining the client's location; if so, the audio processing device looks up the one or more pieces of data tag information associated with the received client audio data.
- step (41) is specifically:
- the device-identifier resolution server of the audio processing device determines whether the received device identification code is the same as that of a wireless network access device connected to the audio processing device; if so, the method proceeds to step (5); if not, the method exits.
- the audio playback device includes a power amplifier and a loudspeaker; the input of the power amplifier is connected to the output of the audio processing device, and the output of the power amplifier is connected to the input of the loudspeaker.
- the step (2) specifically includes the following steps:
- the power amplifier reads the associated audio data from the audio processing device, and performs power amplification processing on the associated audio data;
- the present invention may generally include two schemes as shown in FIGS. 3 and 4.
- Option 1, as shown in Figure 3: using professional audio processing and mixing equipment, the audio electronic tag is placed in the 16 kHz to 20 kHz band, beyond the normal hearing range of the human ear, without affecting the viewer/customer's enjoyment of the music/song/sound effects.
- the data tag corresponding to the audio electronic tag is pushed to the smart terminal App of the viewer/customer through the WIFI network to display/display the related information.
- the mic/microphone audio receiving range of smart terminals such as mobile phones and tablet computers ranges from 20 Hz to 20 kHz.
- the nominal hearing range of the human ear is 20 Hz to 20 kHz, but the range actually heard in everyday life is roughly 90 Hz to 15.1 kHz.
- with age and other factors, the actual hearing range, which is determined by physiological structure, becomes much narrower than 90 Hz to 15.1 kHz, with the loss occurring mainly at high frequencies.
- a dedicated server resolves the audio electronic tags according to the defined correspondence between audio electronic tags and data tags.
- the smart terminal launches a dedicated App, and uses the microphone/microphone of the smart terminal to collect audio electronic tags of 16 kHz to 20 kHz.
- the app of the smart terminal collects the audio electronic tag through the microphone, connects to the dedicated server through the WIFI network for parsing, obtains the data tag corresponding to the electronic tag, realizes the release and application of the data tag, and enables the user to obtain the corresponding information.
- Solution 2 As shown in FIG. 4, the existing music/song/sound effects are segmented, and each segment corresponds to a corresponding audio electronic tag.
- the WIFI network is used to determine the location/location, the audio electronic tag and the location/location are combined, and the corresponding data tag is pushed to the viewer/customer's smart terminal App through the WIFI network to display/display the related information.
- the location determination in the above solution is mainly based on the WIFI network (for the specific meaning and method of "using the device identification code of the WIFI or other wireless network access device as the determined location", refer to application No. 201310460760.6, entitled "Data tag carrier information application and processing system and method").
- the present invention can also adopt the 3G/4G base station positioning technology of the mobile/unicom/telecom network, and the two can be used in combination.
- each brand store has its own corresponding WIFI network and its corresponding background music/song/sound effect.
- the customer uses a smart terminal to open a specific App and connect to the WIFI network, receives the segmented background music/song/sound effect through the terminal's microphone to obtain the audio electronic tag, and the combination of the audio electronic tag and the location (WIFI) is sent to the definition-resolution dedicated server for parsing to obtain the corresponding data tag.
- the customer's smart terminal obtains a data tag corresponding to the site, and the data tag pushes the site related information to the smart terminal App for the user to browse/operate.
- the audience/customers use their own smart terminals to open a specific App and connect to the WIFI network, and receive the segmented background music/songs through the terminal's microphone to obtain the audio electronic tag; the combination of the audio electronic tag and the location (WIFI) is sent to the definition-resolution dedicated server for parsing to obtain the corresponding data tag.
- the smart terminal of the viewer/customer obtains a data tag corresponding to the scene, and the data tag pushes the relevant information on the site to the smart terminal App for viewer/customer to browse/operate.
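In Solution 2 the same audio electronic tag can resolve to different data tags at different venues, because the resolution key combines the tag with the WIFI-derived location. A minimal sketch of that combined lookup (frequencies, SSIDs, and payloads are invented):

```python
# Invented resolution table keyed by (carrier frequency in Hz, venue SSID):
# the same tone means different things at different venues.
RESOLUTION_TABLE = {
    (16500, "BRAND_STORE_A"): "QR:/menu/a",
    (16500, "BRAND_STORE_B"): "QR:/menu/b",
}

def resolve(carrier_freq, ssid):
    """Combine the detected audio electronic tag with the location to
    obtain the corresponding data tag, or None if unknown."""
    return RESOLUTION_TABLE.get((carrier_freq, ssid))
```

Keying on the pair lets every brand store reuse the same small set of tones while still pushing store-specific information.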
- the application of the present invention is not limited to the specific embodiments described above; the invention can obviously also be used to present performance content, dialogue, lyrics, and the like to the audience during performances such as dramas and musicals, or, in tourist attractions, to use background music together with the location information of the access point to present attraction information and visiting-route information to tourists.
- with the audio-based data tag distribution system and method of the invention, since the system includes an audio processing device, an audio playback device, a client, and a data tag server, the audio processing device can associate the data tag corresponding to the information to be propagated with one or more audio tracks; the associated audio data is then played as background music through the audio playback device, and the user recognizes the background music through a client such as a mobile phone or tablet computer.
- the client audio data is sent to the audio processing device over the network; after the audio processing device finds the corresponding data tag information, the data tag server sends the data tag to the client, which can then obtain the product information or other data corresponding to the tag.
- the associated audio data can serve as background music, enabling a viewer to obtain product information intuitively without hindering a performance, or enabling a customer to quickly obtain a large amount of product information; the audio-based data tag distribution system of the present invention has a simple structure and a wide range of applications, and the method is simple to implement at a relatively low cost.
Abstract
Description
Claims (22)
- An audio-based data tag distribution system, characterized by comprising: an audio processing device for defining audio data according to preset rules such that a part, several parts, or all of the audio data is associated with one or more preset items of data tag information to form associated audio data, the audio processing device further being configured to receive client audio data and to look up, according to the received client audio data, the one or more items of data tag information associated with it; an audio playback device connected to the audio processing device for reading the associated audio data and outputting the associated audio data in the form of sound waves; a client comprising a microphone, a sound-wave digitization processor connected to the microphone, and a network module connected to the sound-wave digitization processor, the microphone being configured to receive the sound waves output by the audio playback device, the sound-wave digitization processor converting the sound waves received by the microphone into digitized client audio data, and the network module being connected to the audio processing device via a network and sending the client audio data to the audio processing device, the client further being configured to obtain, via the network and according to the obtained data tag, the data information corresponding to the data tag; and a data tag server connected via the network to the audio processing device and the client respectively, for sending the corresponding data tag to the client according to the data tag information received from the audio processing device.
- The audio-based data tag distribution system according to claim 1, characterized in that the audio processing device comprises: a high-frequency audio processor for associating one or more segments of high-frequency audio data inaudible to the human ear with one or more items of data tag information, the high-frequency audio processor further being configured to receive client audio data and to look up, according to the received client audio data, the one or more items of data tag information associated with it; and a mixing processor connected to the high-frequency audio processor and storing the audible audio data to be played, for mixing the one or more segments of inaudible high-frequency audio data with the audible audio data to be played, thereby forming the associated audio data.
- The audio-based data tag distribution system according to claim 2, characterized in that the frequency range of the high-frequency audio data inaudible to the human ear is 16 kHz to 20 kHz.
- The audio-based data tag distribution system according to claim 2, characterized in that the multiple segments of high-frequency audio data inaudible to the human ear differ from one another in audio frequency.
- The audio-based data tag distribution system according to claim 1, characterized in that the audio processing device comprises: an audio segmentation processor storing the audio data to be played, for segmenting the audio data and associating each segment of audio data with one or more preset items of data tag information to form associated audio data, the audio segmentation processor further being configured to receive client audio data and to look up, according to the received client audio data, the one or more items of data tag information associated with it.
- The audio-based data tag distribution system according to claim 5, characterized in that the audio segmentation processor segments the audio data according to the syllables of the audio data.
- The audio-based data tag distribution system according to any one of claims 1 to 6, characterized in that the system further comprises a wireless network access device connected by wireless signals to the audio processing device, the client, and the data tag server respectively.
- The audio-based data tag distribution system according to claim 7, characterized in that the wireless network access device is a 2G/3G/4G wireless transceiver and the wireless signal is a 2G/3G/4G wireless signal.
- The audio-based data tag distribution system according to claim 7, characterized in that the wireless network access device is a wireless router and the wireless signal is a WIFI signal.
- The audio-based data tag distribution system according to claim 7, characterized in that the wireless network access device has a device identification code representing its location and is configured, upon receiving the client audio data, to send the client audio data together with the device identification code to the audio processing device; the audio processing device further comprises a device-identification-code resolution server for confirming, by resolving the device identification code, whether the client that sent the client audio data is connected to the wireless network access device connected to this audio processing device, so as to determine the client's location; if so, the audio processing device looks up, according to the received client audio data, the one or more items of data tag information associated with it.
- The audio-based data tag distribution system according to claim 1, characterized in that the audio playback device comprises a power amplifier and a loudspeaker, the input of the power amplifier being connected to the output of the audio processing device, and the output of the power amplifier being connected to the input of the loudspeaker.
- An audio-based data tag distribution method implemented with the system according to claim 1, characterized in that the method comprises the following steps: (1) the audio processing device defines audio data according to preset rules such that a part, several parts, or all of the audio data is associated with one or more preset items of data tag information, forming associated audio data; (2) the audio playback device reads the associated audio data from the audio processing device and outputs the associated audio data in the form of sound waves; (3) the client receives, via its microphone, the sound waves output by the audio playback device, and converts the sound waves received by the microphone into digitized client audio data via its sound-wave digitization processor; (4) the client sends the client audio data to the audio processing device via its network module; (5) the audio processing device looks up, according to the received client audio data, the one or more items of data tag information associated with it, and sends the data tag information to the data tag server; (6) the data tag server sends the corresponding data tag to the client according to the data tag information received from the audio processing device; (7) the client obtains, via the network and from the server corresponding to the tag, the data information corresponding to the data tag.
- The audio-based data tag distribution method according to claim 12, characterized in that the audio processing device comprises a high-frequency audio processor and a mixing processor, and step (1) specifically comprises the following steps: (11-1) the high-frequency audio processor associates one or more segments of high-frequency audio data inaudible to the human ear with one or more items of data tag information; (11-2) the mixing processor mixes the one or more segments of inaudible high-frequency audio data with the audible audio data to be played, forming the associated audio data.
- The audio-based data tag distribution method according to claim 13, characterized in that step (5) is specifically: (51) the high-frequency audio processor looks up, according to the received client audio data, the one or more items of data tag information associated with it, and sends the data tag information to the data tag server.
- The audio-based data tag distribution method according to claim 13, characterized in that the frequency range of the high-frequency audio data inaudible to the human ear is 16 kHz to 20 kHz.
- The audio-based data tag distribution method according to claim 15, characterized in that the multiple segments of high-frequency audio data inaudible to the human ear differ from one another in audio frequency.
- The audio-based data tag distribution method according to claim 12, characterized in that the audio processing device comprises an audio segmentation processor, and step (1) specifically comprises the following step: (12) the audio segmentation processor segments the audio data to be played and associates each segment of audio data with one or more preset items of data tag information, forming associated audio data.
- The audio-based data tag distribution method according to claim 17, characterized in that step (5) is specifically: (52) the audio segmentation processor looks up, according to the received client audio data, the one or more items of data tag information associated with it, and sends the data tag information to the data tag server.
- The audio-based data tag distribution method according to claim 17, characterized in that the audio segmentation processor segments the audio data to be played according to the syllables of the audio data.
- The audio-based data tag distribution method according to any one of claims 12 to 19, characterized in that the system further comprises a wireless network access device connected by wireless signals to the audio processing device, the client, and the data tag server respectively; step (4) is specifically: (41) the client sends the client audio data via its network module through the wireless network access device to the audio processing device; and step (6) is specifically: (61) the data tag server sends the corresponding data tag through the wireless network access device to the client according to the data tag information received from the audio processing device.
- The audio-based data tag distribution method according to claim 20, characterized in that the wireless network access device has a device identification code representing its location and the audio processing device further comprises a device-identification-code resolution server; step (41) specifically comprises: (41-1) upon receiving the client audio data, the wireless network access device sends the client audio data together with the device identification code to the audio processing device; (41-2) the device-identification-code resolution server of the audio processing device judges whether the received device identification code is identical to the device identification code of the wireless network access device connected to this audio processing device; if so, the method proceeds to step (5); if not, the method exits.
- The audio-based data tag distribution method according to claim 12, characterized in that the audio playback device comprises a power amplifier and a loudspeaker, and step (2) specifically comprises the following steps: (2-1) the power amplifier reads the associated audio data from the audio processing device and performs power amplification on the associated audio data; (2-2) the loudspeaker outputs the power-amplified associated audio data in the form of sound waves.
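Steps (4)–(5) of the claimed method (the server-side lookup of a data tag from received client audio data) can be sketched for the high-frequency-carrier embodiment: locate the strongest spectral peak in the inaudible 16–20 kHz band and map it back to a tag ID. This is an illustrative sketch under assumed parameters (sample rate, carrier base and spacing), not the patented implementation; a real system would add windowing, noise thresholds, and the device-identification-code check of claim 21.

```python
import numpy as np

SAMPLE_RATE = 44100          # assumed sample rate of the client audio data
BASE_HZ, STEP_HZ = 16000, 500  # assumed tag-to-carrier mapping; must match the mixing side

def recover_tag(samples):
    """Recover the tag ID from digitized client audio data by finding the
    dominant carrier frequency in the inaudible 16-20 kHz band."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), 1 / SAMPLE_RATE)
    band = (freqs >= BASE_HZ) & (freqs <= 20000)
    peak_hz = freqs[band][np.argmax(spectrum[band])]
    return int(round((peak_hz - BASE_HZ) / STEP_HZ))
```

The recovered tag ID would then be forwarded to the data tag server, which pushes the corresponding data tag back to the client (steps (5)–(6)).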
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP14878964.7A EP3098725A4 (en) | 2014-01-20 | 2014-09-16 | Audio-based data label distribution system and method |
RU2016123401A RU2676031C2 (ru) | 2014-01-20 | 2014-09-16 | Система и способ передачи информации с помощью тегов данных на основе частоты звукового диапазона |
US14/418,886 US9569716B2 (en) | 2014-01-20 | 2014-09-16 | System and method for distributing audio-based data tags |
JP2016543055A JP2017504892A (ja) | 2014-01-20 | 2014-09-16 | 音声周波数をベースとするデータタグ配布システム及び方法 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410026089.9A CN104794127B (zh) | 2014-01-20 | 2014-01-20 | 基于音频的数据标签发布系统及方法 |
CN201410026089.9 | 2014-01-20 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2015106576A1 true WO2015106576A1 (zh) | 2015-07-23 |
Family
ID=53542368
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2014/086606 WO2015106576A1 (zh) | 2014-01-20 | 2014-09-16 | 基于音频的数据标签发布系统及方法 |
Country Status (6)
Country | Link |
---|---|
US (1) | US9569716B2 (zh) |
EP (1) | EP3098725A4 (zh) |
JP (1) | JP2017504892A (zh) |
CN (1) | CN104794127B (zh) |
RU (1) | RU2676031C2 (zh) |
WO (1) | WO2015106576A1 (zh) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105513512A (zh) * | 2015-12-30 | 2016-04-20 | 华益(天津)文化传播有限公司 | 一种文化传播用智能设备 |
CN108833403A (zh) * | 2018-06-11 | 2018-11-16 | 颜彦 | 一种具有嵌入式代码移植的融媒体信息发布生成方法 |
CN110855802B (zh) * | 2020-01-15 | 2022-02-11 | 广州欧赛斯信息科技有限公司 | 职教诊改系统的数据分片分发存储方法、装置及服务器 |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040254792A1 (en) * | 2003-06-10 | 2004-12-16 | Bellsouth Intellectual Proprerty Corporation | Methods and system for creating voice files using a VoiceXML application |
CN1629970A (zh) * | 2003-12-15 | 2005-06-22 | 国际商业机器公司 | 用于在音频文件内表示内容的方法和系统 |
CN1909704A (zh) * | 2006-08-30 | 2007-02-07 | 钟杨 | 移动终端设备获取用户接口的方法以及移动终端装置 |
CN101187974A (zh) * | 2007-12-06 | 2008-05-28 | 深圳华为通信技术有限公司 | 二维码的应用方法和装置 |
CN102402542A (zh) * | 2010-09-14 | 2012-04-04 | 腾讯科技(深圳)有限公司 | 一种视频标签方法及系统 |
CN103281401A (zh) * | 2013-06-19 | 2013-09-04 | 安科智慧城市技术(中国)有限公司 | 展厅及其导览方法 |
CN104517136A (zh) | 2013-09-30 | 2015-04-15 | 曲立东 | 数据标签载体信息应用与处理系统及方法 |
Family Cites Families (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4179669A (en) * | 1978-06-05 | 1979-12-18 | Bose Corporation | Amplifying and equalizing |
US7562392B1 (en) * | 1999-05-19 | 2009-07-14 | Digimarc Corporation | Methods of interacting with audio and ambient music |
JP2003523571A (ja) * | 2000-02-15 | 2003-08-05 | オーツー・マイクロ・インク | ポータブル電子機器のオーディオコントローラ |
JP2001297106A (ja) * | 2000-04-13 | 2001-10-26 | Sony Corp | データベース作成方法 |
JP2002169579A (ja) * | 2000-12-01 | 2002-06-14 | Takayuki Arai | オーディオ信号への付加データ埋め込み装置及びオーディオ信号からの付加データ再生装置 |
WO2006126462A1 (ja) * | 2005-05-24 | 2006-11-30 | Matsushita Electric Industrial Co., Ltd. | 放送システム、ラジオ受信機、電子機器及びラジオ放送用変復調方法 |
KR100782055B1 (ko) * | 2005-05-25 | 2007-12-04 | (주)뮤레카 | 오디오유전자를 이용한 음악관련 정보 제공방법 및 시스템 |
US8249559B1 (en) * | 2005-10-26 | 2012-08-21 | At&T Mobility Ii Llc | Promotion operable recognition system |
US7684991B2 (en) * | 2006-01-05 | 2010-03-23 | Alpine Electronics, Inc. | Digital audio file search method and apparatus using text-to-speech processing |
CN101047722A (zh) * | 2006-03-30 | 2007-10-03 | 腾讯科技(深圳)有限公司 | 媒体文件推送系统及方法 |
US20070286358A1 (en) * | 2006-04-29 | 2007-12-13 | Msystems Ltd. | Digital audio recorder |
CN101115124B (zh) * | 2006-07-26 | 2012-04-18 | 日电(中国)有限公司 | 基于音频水印识别媒体节目的方法和装置 |
US20080262928A1 (en) * | 2007-04-18 | 2008-10-23 | Oliver Michaelis | Method and apparatus for distribution and personalization of e-coupons |
US8713593B2 (en) * | 2010-03-01 | 2014-04-29 | Zazum, Inc. | Detection system and method for mobile device application |
JP2012155706A (ja) * | 2011-01-07 | 2012-08-16 | Yamaha Corp | 情報提供システム、携帯端末装置、識別情報解決サーバおよびプログラム |
JP2013008109A (ja) * | 2011-06-22 | 2013-01-10 | Yamaha Corp | 文書投稿支援システム、携帯端末装置および文書投稿支援プログラム |
CN104160714A (zh) * | 2012-03-02 | 2014-11-19 | 雅马哈株式会社 | 内容提供系统、内容提供方法、内容编辑装置、内容解析系统、以及播送站id放音装置 |
JP5812910B2 (ja) * | 2012-03-22 | 2015-11-17 | 富士通エフ・アイ・ピー株式会社 | 認証装置及び認証方法 |
KR102317364B1 (ko) * | 2012-05-01 | 2021-10-25 | 엘아이에스엔알, 인크. | 콘텐츠 전달 및 관리를 위한 시스템 및 방법 |
US20130318114A1 (en) * | 2012-05-13 | 2013-11-28 | Harry E. Emerson, III | Discovery of music artist and title by broadcast radio receivers |
US9305559B2 (en) * | 2012-10-15 | 2016-04-05 | Digimarc Corporation | Audio watermark encoding with reversing polarity and pairwise embedding |
JP6125837B2 (ja) * | 2012-12-28 | 2017-05-10 | ヤフー株式会社 | 情報提供システムおよび情報提供方法 |
CN103152106B (zh) * | 2013-03-13 | 2015-11-25 | 荆效民 | 基于音频的超声波信息推送方法及系统 |
2014
- 2014-01-20 CN CN201410026089.9A patent/CN104794127B/zh active Active
- 2014-09-16 EP EP14878964.7A patent/EP3098725A4/en not_active Withdrawn
- 2014-09-16 WO PCT/CN2014/086606 patent/WO2015106576A1/zh active Application Filing
- 2014-09-16 RU RU2016123401A patent/RU2676031C2/ru active
- 2014-09-16 US US14/418,886 patent/US9569716B2/en not_active Expired - Fee Related
- 2014-09-16 JP JP2016543055A patent/JP2017504892A/ja active Pending
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040254792A1 (en) * | 2003-06-10 | 2004-12-16 | Bellsouth Intellectual Proprerty Corporation | Methods and system for creating voice files using a VoiceXML application |
CN1629970A (zh) * | 2003-12-15 | 2005-06-22 | 国际商业机器公司 | 用于在音频文件内表示内容的方法和系统 |
CN1909704A (zh) * | 2006-08-30 | 2007-02-07 | 钟杨 | 移动终端设备获取用户接口的方法以及移动终端装置 |
CN101187974A (zh) * | 2007-12-06 | 2008-05-28 | 深圳华为通信技术有限公司 | 二维码的应用方法和装置 |
CN102402542A (zh) * | 2010-09-14 | 2012-04-04 | 腾讯科技(深圳)有限公司 | 一种视频标签方法及系统 |
CN103281401A (zh) * | 2013-06-19 | 2013-09-04 | 安科智慧城市技术(中国)有限公司 | 展厅及其导览方法 |
CN104517136A (zh) | 2013-09-30 | 2015-04-15 | 曲立东 | 数据标签载体信息应用与处理系统及方法 |
Non-Patent Citations (1)
Title |
---|
See also references of EP3098725A4 |
Also Published As
Publication number | Publication date |
---|---|
CN104794127A (zh) | 2015-07-22 |
EP3098725A4 (en) | 2017-10-04 |
CN104794127B (zh) | 2018-03-13 |
EP3098725A1 (en) | 2016-11-30 |
RU2676031C2 (ru) | 2018-12-25 |
US20160314391A1 (en) | 2016-10-27 |
US9569716B2 (en) | 2017-02-14 |
JP2017504892A (ja) | 2017-02-09 |
RU2016123401A (ru) | 2018-02-28 |
RU2016123401A3 (zh) | 2018-06-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20130301392A1 (en) | Methods and apparatuses for communication of audio tokens | |
US20100093393A1 (en) | Systems and Methods for Music Recognition | |
CN103152480B (zh) | 利用移动终端进行到站提示的方法和装置 | |
JP6580132B2 (ja) | メディアコンテンツに関連付けられた情報を提供する方法および装置 | |
JP2007164659A (ja) | 音楽情報を利用した情報配信システム及び情報配信方法 | |
JP2013008109A (ja) | 文書投稿支援システム、携帯端末装置および文書投稿支援プログラム | |
CN104601202A (zh) | 基于蓝牙技术实现文件搜索的方法、终端及蓝牙设备 | |
CN106572241A (zh) | 一种信息展示方法和装置 | |
WO2015106576A1 (zh) | 基于音频的数据标签发布系统及方法 | |
CN109145226A (zh) | 内容推送方法和装置 | |
US20100222072A1 (en) | Systems, methods and apparatus for providing information to a mobile device | |
JP2016005268A (ja) | 情報伝送システム、情報伝送方法、及びプログラム | |
CN101855850B (zh) | 有效率地获得无线分享音频文件和信息 | |
JP2011248118A (ja) | 音響通信方法を用いたホームページ誘導方法およびシステム | |
CN104038772A (zh) | 生成铃声文件的方法及装置 | |
JP2012216185A (ja) | 情報処理装置、情報処理方法、及びプログラム | |
CN204009894U (zh) | 基于音频的数据标签发布系统 | |
CN105448296B (zh) | 信息分发方法和装置以及信息接收方法和装置 | |
CN108289245A (zh) | 自动媒体信息播放方法 | |
JP6114249B2 (ja) | 情報送信装置および情報送信方法 | |
JP6701756B2 (ja) | 情報提供システム、情報提供装置、および、情報提供方法 | |
JP6195506B2 (ja) | 情報提供装置、情報提供方法、情報提供プログラム、端末装置および情報要求プログラム | |
WO2020246205A1 (ja) | プログラム、端末装置および端末装置の動作方法 | |
JP6866947B2 (ja) | 情報提供システム、および、情報提供方法 | |
CN203104777U (zh) | 利用广播系统实现将位置信号传送到移动通信终端的通信装置 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 14418886 Country of ref document: US |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 14878964 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2016543055 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
REEP | Request for entry into the european phase |
Ref document number: 2014878964 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2014878964 Country of ref document: EP |
|
ENP | Entry into the national phase |
Ref document number: 2016123401 Country of ref document: RU Kind code of ref document: A |