WO2015106576A1 - 基于音频的数据标签发布系统及方法 - Google Patents

基于音频的数据标签发布系统及方法 Download PDF

Info

Publication number
WO2015106576A1
Authority
WO
WIPO (PCT)
Prior art keywords
audio
data
processing device
audio data
processor
Prior art date
Application number
PCT/CN2014/086606
Other languages
English (en)
French (fr)
Inventor
曲立东
Original Assignee
曲立东
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 曲立东 filed Critical 曲立东
Priority to EP14878964.7A priority Critical patent/EP3098725A4/en
Priority to RU2016123401A priority patent/RU2676031C2/ru
Priority to US14/418,886 priority patent/US9569716B2/en
Priority to JP2016543055A priority patent/JP2017504892A/ja
Publication of WO2015106576A1 publication Critical patent/WO2015106576A1/zh

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16Sound input; Sound output
    • G06F3/165Management of the audio stream, e.g. setting of volume, audio stream path
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/40Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F16/43Querying
    • G06F16/432Query formulation
    • G06F16/433Query formulation using audio data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16Sound input; Sound output
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06KGRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K19/00Record carriers for use with machines and with at least a part designed to carry digital markings
    • G06K19/06Record carriers for use with machines and with at least a part designed to carry digital markings characterised by the kind of the digital marking, e.g. shape, nature, code
    • G06K19/067Record carriers with conductive marks, printed circuits or semiconductor circuit elements, e.g. credit or identity cards also with resonating or responding marks without active components
    • G06K19/07Record carriers with conductive marks, printed circuits or semiconductor circuit elements, e.g. credit or identity cards also with resonating or responding marks without active components with integrated circuit chips
    • G06K19/0723Record carriers with conductive marks, printed circuits or semiconductor circuit elements, e.g. credit or identity cards also with resonating or responding marks without active components with integrated circuit chips the record carrier comprising an arrangement for non-contact communication, e.g. wireless communication circuits on transponder cards, non-contact smart cards or RFIDs
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/28Constructional details of speech recognition systems
    • G10L15/30Distributed recognition, e.g. in client-server systems, for mobile phones or network applications

Definitions

  • the present invention relates to the field of computer application technologies, in particular to computer techniques for audio processing, and specifically to an audio-based data tag distribution system and method.
  • the data tag refers to tagged data information that can be read by a specific device, including NFC, RFID, barcode, and two-dimensional code, etc. In the present application, it mainly refers to a barcode and a two-dimensional code pattern that can be transmitted through a network.
  • the object of the present invention is to overcome the above shortcomings of the prior art by providing an audio-based data tag distribution system and method in which the data tag corresponding to the information to be disseminated is associated with one or more audio segments; the associated audio data may be the background music itself or high-frequency audio within the background music that is inaudible to the human ear. After the viewer or customer identifies the background music through a mobile phone or similar device, the specific information can be obtained over the network, so that the viewer can obtain product information intuitively without the performance being disturbed, and a large amount of product information can also be delivered to customers quickly and conveniently.
  • to achieve the above object, the audio-based data tag distribution system of the present invention is constituted as follows:
  • the system includes an audio processing device, an audio playback device, a client, and a data tag server.
  • the audio processing device is configured to define the audio data according to a preset rule, and associate a part, some parts, or all of the audio data with the preset one or more data tag information to form associated audio data.
  • the audio processing device is further configured to receive client audio data and to look up the one or more items of data tag information associated with the received client audio data;
  • the audio playback device is connected to the audio processing device and is configured to read the associated audio data and output it in the form of sound waves;
  • the client includes a microphone, a sound wave digitizing processor connected to the microphone, and a network module connected to the sound wave digitizing processor. The microphone receives the sound waves output by the audio playback device, the sound wave digitizing processor converts the sound waves received by the microphone into digitized client audio data, and the network module connects to the audio processing device through a network and sends the client audio data to the audio processing device.
  • the client is further configured to obtain, over the network, the data information corresponding to an obtained data tag; the data tag server is connected via the network to the audio processing device and to the client, and sends the corresponding data tag to the client according to the data tag information received from the audio processing device.
  • the audio processing device includes a high audio processor and a mixing processor.
  • the high-audio processor is configured to associate one or more segments of high-audio data that are inaudible to the human ear with one or more items of data tag information; the high-audio processor is further configured to receive client audio data and to look up the one or more items of data tag information associated with the received client audio data;
  • the mixing processor is connected to the high-audio processor and stores the audible audio data to be played; it mixes the one or more segments of inaudible high-audio data with the audible audio data to be played, forming the associated audio data.
  • the audio frequency of the high-audio data that cannot be heard by the human ear ranges from 16 kHz to 20 kHz.
  • when there are multiple segments of inaudible high-audio data, the segments use different audio frequencies.
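  • As an illustration of this arrangement, the following is a minimal sketch, assuming a simple tag-ID-to-frequency mapping (the mapping, sample rate, and tone level are assumptions for illustration; the patent only requires that different tags use different frequencies in the 16 kHz to 20 kHz band), of how a mixing processor could add an inaudible high-frequency tone as the audio electronic tag:

```python
import numpy as np

SAMPLE_RATE = 48_000                       # must be high enough to carry 16-20 kHz content
TAG_FREQS = {"TAG_001": 17_000.0,          # hypothetical tag-ID -> tone frequency map;
             "TAG_002": 18_500.0}          # different tags use different frequencies

def mix_in_tag(audible: np.ndarray, tag_id: str, level: float = 0.02) -> np.ndarray:
    """Mix a low-level, inaudible high-frequency tone for `tag_id` into audible audio."""
    t = np.arange(len(audible)) / SAMPLE_RATE
    tone = level * np.sin(2 * np.pi * TAG_FREQS[tag_id] * t)
    return np.clip(audible + tone, -1.0, 1.0)   # keep samples in the valid [-1, 1] range
```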
  • alternatively, the audio processing device includes an audio segmentation processor that stores the audio data to be played, segments that audio data, and associates each segment with one or more preset items of data tag information to form associated audio data; the audio segmentation processor is further configured to receive client audio data and to look up the one or more items of data tag information associated with the received client audio data.
  • the audio segmentation processor segments the audio data according to the syllables of the audio data.
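  • A rough sketch of such syllable-oriented segmentation follows, assuming that sustained low-energy frames approximate syllable boundaries (the frame length, silence threshold, and round-robin tag assignment are assumptions for illustration; the patent only states that segmentation follows the syllables of the audio):

```python
import numpy as np

def segment_audio(samples: np.ndarray, sample_rate: int,
                  frame_ms: float = 20.0, silence_db: float = -40.0):
    """Split audio at sustained low-energy frames, a rough proxy for syllable boundaries."""
    frame = int(sample_rate * frame_ms / 1000)
    n = len(samples) // frame
    energy = np.array([np.mean(samples[i*frame:(i+1)*frame] ** 2) for i in range(n)])
    db = 10 * np.log10(energy + 1e-12)
    quiet = db < silence_db                       # frames treated as segment boundaries
    segments, start = [], 0
    for i in range(1, n):
        if quiet[i] and not quiet[i - 1]:         # entering a quiet gap: close the segment
            segments.append(samples[start*frame:i*frame])
            start = i
    segments.append(samples[start*frame:])
    return segments

def associate_tags(segments, tag_ids):
    """Pair each audio segment with a preset data-tag ID (round-robin if fewer IDs)."""
    return {i: tag_ids[i % len(tag_ids)] for i in range(len(segments))}
```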
  • the audio-based data tag distribution system further includes a wireless network access device, which connects respectively, by wireless signals, to the audio processing device, the client, and the data tag server.
  • the wireless network access device is a 2G/3G/4G wireless transceiver and the wireless signal is a 2G/3G/4G wireless signal; alternatively, the wireless network access device is a wireless router and the wireless signal is a WIFI signal.
  • the wireless network access device has a device identification code indicating its location; after receiving the client audio data, the wireless network access device sends the client audio data together with the device identification code to the audio processing device. The audio processing device further includes a device identification code resolution server, which parses the device identification code to confirm whether the client that sent the client audio data is connected to the wireless network access device connected to this audio processing device, thereby determining the client's location; if so, the audio processing device looks up the one or more items of data tag information associated with the received client audio data.
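  • A minimal sketch of this device-identification-code check follows, assuming the audio processing device is configured with the identifier (for example, the SSID) of the wireless network access device it is connected to; the names and the lookup callback are assumptions for illustration:

```python
LOCAL_ACCESS_POINT_ID = "SHOP-A-WIFI"      # hypothetical SSID of the connected access device

def handle_client_audio(received_device_id: str, client_audio: bytes, lookup_fn):
    """Only look up tag information when the client used the access device at this location."""
    if received_device_id != LOCAL_ACCESS_POINT_ID:
        return None                        # client is elsewhere: do not proceed
    return lookup_fn(client_audio)         # find the data tag info associated with the audio
```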
  • the audio playback device includes a power amplifier and a loudspeaker; the input of the power amplifier is connected to the output of the audio processing device, and the output of the power amplifier is connected to the input of the loudspeaker.
  • the present invention also provides an audio-based data tag distribution method implemented with the above system, comprising the following steps (a minimal client-side sketch of this flow is given after step (7)):
  • (1) the audio processing device defines the audio data according to preset rules, associating part, several parts, or all of the audio data with one or more preset items of data tag information to form associated audio data;
  • (2) the audio playback device reads the associated audio data from the audio processing device and outputs it in the form of sound waves;
  • (3) the client receives the sound waves output by the audio playback device through its microphone, and the sound wave digitizing processor converts the sound waves received by the microphone into digitized client audio data;
  • (4) the client sends the client audio data to the audio processing device through the network module;
  • (5) the audio processing device looks up the one or more items of data tag information associated with the received client audio data and sends the data tag information to the data tag server;
  • (6) the data tag server sends the corresponding data tag to the client according to the data tag information received from the audio processing device;
  • (7) the client obtains, over the network, the data information corresponding to the data tag from the server corresponding to that tag.
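  • A minimal client-side sketch of steps (3) to (7) follows, assuming the audio processing device and the data tag server expose simple HTTP endpoints and that a WAV file stands in for the live microphone capture; the URLs and response field names are assumptions for illustration:

```python
import requests

AUDIO_PROCESSING_URL = "http://audio-processor.example/lookup"   # hypothetical endpoint

def fetch_tag_data(wav_path: str) -> bytes:
    # (3)/(4): send the digitized client audio data to the audio processing device
    with open(wav_path, "rb") as f:
        resp = requests.post(AUDIO_PROCESSING_URL, files={"audio": f}, timeout=10)
    resp.raise_for_status()
    # (5)/(6): the matched tag info is forwarded to the data tag server, which
    # returns the data tag (e.g. a QR-code payload or URL) to the client
    tag = resp.json()["data_tag"]                  # hypothetical response field
    # (7): the client fetches the information the data tag points to
    return requests.get(tag["content_url"], timeout=10).content   # hypothetical field
```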
  • when the audio processing device includes a high-audio processor and a mixing processor, step (1) specifically includes the following steps:
  • (11-1) the high-audio processor associates one or more segments of high-audio data that are inaudible to the human ear with one or more items of data tag information;
  • (11-2) the mixing processor mixes the one or more segments of inaudible high-audio data with the audible audio data to be played, forming the associated audio data.
  • in this case, step (5) is specifically:
  • (51) the high-audio processor looks up the one or more items of data tag information associated with the received client audio data and sends the data tag information to the data tag server.
  • the audio frequency of the high-audio data that cannot be heard by the human ear ranges from 16 kHz to 20 kHz.
  • when there are multiple segments of inaudible high-audio data, the segments use different audio frequencies.
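  • A sketch of how the high-audio processor might decide which tag tone is present in the received client audio follows: the spectral magnitude at each candidate tag frequency is measured and the strongest tone standing clearly above the noise floor wins. The frequency table and threshold ratio are assumptions for illustration:

```python
import numpy as np

TAG_FREQS = {"TAG_001": 17_000.0, "TAG_002": 18_500.0}   # hypothetical tag-to-frequency map

def detect_tag(samples: np.ndarray, sample_rate: int, min_ratio: float = 5.0):
    """Return the tag ID whose tone dominates the 16-20 kHz band, or None if none does."""
    spectrum = np.abs(np.fft.rfft(samples * np.hanning(len(samples))))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    noise_floor = np.median(spectrum) + 1e-12
    best_tag, best_mag = None, 0.0
    for tag_id, f in TAG_FREQS.items():
        mag = spectrum[np.argmin(np.abs(freqs - f))]      # bin nearest the candidate tone
        if mag > min_ratio * noise_floor and mag > best_mag:
            best_tag, best_mag = tag_id, mag
    return best_tag
```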
  • when the audio processing device includes an audio segmentation processor, step (1) specifically includes the following step:
  • (12) the audio segmentation processor segments the audio data to be played and associates each segment with one or more preset items of data tag information to form associated audio data.
  • in this case, step (5) is specifically:
  • (52) the audio segmentation processor looks up the one or more items of data tag information associated with the received client audio data and sends the data tag information to the data tag server.
  • the audio segmentation processor segments the audio data to be played according to its syllables.
  • when the system further includes a wireless network access device that connects respectively, by wireless signals, to the audio processing device, the client, and the data tag server, step (4) is specifically:
  • (41) the client sends the client audio data through the network module, via the wireless network access device, to the audio processing device;
  • and step (6) is specifically:
  • (61) the data tag server sends the corresponding data tag to the client via the wireless network access device according to the data tag information received from the audio processing device.
  • when the wireless network access device has a device identification code indicating its location and the audio processing device further includes a device identification code resolution server, step (41) specifically includes the following steps:
  • (41-1) after receiving the client audio data, the wireless network access device sends the client audio data together with the device identification code to the audio processing device;
  • (41-2) the device identification code resolution server of the audio processing device determines whether the received device identification code is the same as that of the wireless network access device connected to this audio processing device; if so, the method proceeds to step (5), otherwise the method exits.
  • when the audio playback device includes a power amplifier and a loudspeaker, step (2) specifically includes the following steps:
  • (2-1) the power amplifier reads the associated audio data from the audio processing device and performs power amplification on it;
  • (2-2) the loudspeaker outputs the power-amplified associated audio data in the form of sound waves.
  • With the audio-based data tag distribution system and method of the invention, because the system includes an audio processing device, an audio playback device, a client, and a data tag server, the audio processing device can associate the data tag corresponding to the information to be disseminated with one or more audio segments, and the associated audio data is then played as background music through the audio playback device. After the user identifies the background music through a client such as a mobile phone or tablet computer, the client audio data is sent over the network to the audio processing device; once the audio processing device finds the corresponding data tag information, the data tag server sends the data tag to the client, and the client can then obtain the product information or other data corresponding to the data tag.
  • Simply by playing the associated audio data as background music, viewers can obtain product information intuitively without the performance being disturbed, and customers can quickly obtain a large amount of product information; moreover, the audio-based data tag distribution system of the present invention has a simple structure and a wide range of applications, and the method is simple to implement at a comparatively low cost.
  • FIG. 1 is a schematic structural diagram of an audio-based data label issuing system of the present invention.
  • FIG. 2 is a schematic diagram showing the steps of a method for distributing an audio-based data tag according to the present invention.
  • FIG. 3 is a schematic diagram of the audio-based data tag distribution system and method of the present invention implemented, in practical applications, by adding high-frequency audio.
  • FIG. 4 is a schematic diagram of the audio-based data tag distribution system and method of the present invention implemented, in practical applications, by segment-wise definition of the background music.
  • Referring to FIG. 1, in one embodiment the audio-based data tag distribution system includes an audio processing device, an audio playback device, a client, and a data tag server.
  • the audio processing device is configured to define the audio data according to a preset rule, and associate a part, some parts, or all of the audio data with the preset one or more data tag information to form associated audio data.
  • the audio processing device is further configured to receive client audio data and to look up the one or more items of data tag information associated with the received client audio data;
  • the audio playback device is connected to the audio processing device and is configured to read the associated audio data and output it in the form of sound waves;
  • the client includes a microphone, a sound wave digitizing processor connected to the microphone, and a network module connected to the sound wave digitizing processor. The microphone receives the sound waves output by the audio playback device, the sound wave digitizing processor converts the sound waves received by the microphone into digitized client audio data, and the network module connects to the audio processing device through a network and sends the client audio data to the audio processing device.
  • the client is further configured to obtain, over the network, the data information corresponding to an obtained data tag; the data tag server is connected via the network to the audio processing device and to the client, and sends the corresponding data tag to the client according to the data tag information received from the audio processing device.
  • the audio-based data tag distribution method implemented with the system of this embodiment, as shown in FIG. 2, includes the following steps:
  • (1) the audio processing device defines the audio data according to preset rules, associating part, several parts, or all of the audio data with one or more preset items of data tag information to form associated audio data;
  • (2) the audio playback device reads the associated audio data from the audio processing device and outputs it in the form of sound waves;
  • (3) the client receives the sound waves output by the audio playback device through its microphone, and the sound wave digitizing processor converts the sound waves received by the microphone into digitized client audio data;
  • (4) the client sends the client audio data to the audio processing device through the network module;
  • (5) the audio processing device looks up the one or more items of data tag information associated with the received client audio data and sends the data tag information to the data tag server;
  • (6) the data tag server sends the corresponding data tag to the client according to the data tag information received from the audio processing device;
  • (7) the client obtains, over the network, the data information corresponding to the data tag from the server corresponding to that tag.
  • in a preferred embodiment, the audio processing device comprises a high-audio processor and a mixing processor.
  • The high-audio processor associates one or more segments of high-audio data that are inaudible to the human ear with one or more items of data tag information; it is further configured to receive client audio data and to look up the one or more items of data tag information associated with the received client audio data. The mixing processor is connected to the high-audio processor and stores the audible audio data to be played; it mixes the one or more segments of inaudible high-audio data with the audible audio data to be played, forming the associated audio data.
  • The audio frequency of the inaudible high-audio data ranges from 16 kHz to 20 kHz. When there are multiple segments of inaudible high-audio data, the segments use different audio frequencies.
  • In the method implemented with the system of this preferred embodiment, step (1) specifically includes the following steps:
  • (11-1) the high-audio processor associates one or more segments of high-audio data that are inaudible to the human ear with one or more items of data tag information;
  • (11-2) the mixing processor mixes the one or more segments of inaudible high-audio data with the audible audio data to be played, forming the associated audio data;
  • and step (5) is specifically:
  • (51) the high-audio processor looks up the one or more items of data tag information associated with the received client audio data and sends the data tag information to the data tag server.
  • In another preferred embodiment, the audio processing device includes an audio segmentation processor that stores the audio data to be played, segments that audio data, and associates each segment with one or more preset items of data tag information to form associated audio data; the audio segmentation processor is further configured to receive client audio data and to look up the one or more items of data tag information associated with the received client audio data. The audio segmentation processor preferably segments the audio data according to its syllables.
  • In the method implemented with the system of this preferred embodiment, step (1) specifically includes the following step:
  • (12) the audio segmentation processor segments the audio data to be played and associates each segment with one or more preset items of data tag information to form associated audio data;
  • and step (5) is specifically:
  • (52) the audio segmentation processor looks up the one or more items of data tag information associated with the received client audio data and sends the data tag information to the data tag server.
  • In a further preferred embodiment, the system further comprises a wireless network access device that connects respectively, by wireless signals, to the audio processing device, the client, and the data tag server. The wireless network access device may be a 2G/3G/4G wireless transceiver, in which case the wireless signal is a 2G/3G/4G signal; alternatively, the wireless network access device may be a wireless router, in which case the wireless signal is a WIFI signal.
  • In the method implemented with this embodiment, step (4) is specifically:
  • (41) the client sends the client audio data through the network module, via the wireless network access device, to the audio processing device;
  • and step (6) is specifically:
  • (61) the data tag server sends the corresponding data tag to the client via the wireless network access device according to the data tag information received from the audio processing device.
  • In a more preferred embodiment, the wireless network access device has a device identification code (such as the SSID of a wireless router) used to indicate, or serve as, location information; after receiving the client audio data, the wireless network access device sends the client audio data together with the device identification code to the audio processing device;
  • and the audio processing device further includes a device identification code resolution server, which parses the device identification code to confirm whether the client that sent the client audio data is connected to the wireless network access device connected to this audio processing device, thereby determining the client's location; if so, the audio processing device looks up the one or more items of data tag information associated with the received client audio data.
  • In the method implemented with this embodiment, step (41) is specifically:
  • (41-1) after receiving the client audio data, the wireless network access device sends the client audio data together with the device identification code to the audio processing device;
  • (41-2) the device identification code resolution server of the audio processing device determines whether the received device identification code is the same as that of the wireless network access device connected to this audio processing device; if so, the method proceeds to step (5), otherwise the method exits.
  • In an alternative embodiment, the audio playback device includes a power amplifier and a loudspeaker; the input of the power amplifier is connected to the output of the audio processing device, and the output of the power amplifier is connected to the input of the loudspeaker.
  • In the method implemented with this embodiment, step (2) specifically includes the following steps:
  • (2-1) the power amplifier reads the associated audio data from the audio processing device and performs power amplification on it;
  • (2-2) the loudspeaker outputs the power-amplified associated audio data in the form of sound waves.
  • In practical applications, the present invention may generally be implemented in the two schemes shown in FIGS. 3 and 4.
  • Scheme 1: As shown in FIG. 3, professional audio processing and mixing equipment is used, and audio in the 16 kHz to 20 kHz band, beyond the normal hearing range of the human ear, serves as the audio electronic tag. Without affecting the viewer's or customer's enjoyment of the music, songs, or sound effects, the data tag corresponding to the audio electronic tag is pushed over the WIFI network to the viewer's or customer's smart-terminal App, where the related information is displayed.
  • Studies show that the high-frequency playback of professional audio equipment can reach 5 kHz to 20 kHz, and the microphone receiving range of smart terminals such as mobile phones and tablet computers is 20 Hz to 20 kHz.
  • In theory the hearing range of the human ear is 20 Hz to 20 kHz, but the range actually heard in everyday life is roughly 90 Hz to 15.1 kHz. With increasing age and for other reasons, the actual hearing range becomes much narrower than 90 Hz to 15.1 kHz, mainly at the high-frequency end; hearing declines markedly with age and is determined by the physiology of the ear.
  • Audio in the 16 kHz to 20 kHz band, beyond the everyday hearing range of the human ear, is used as the electronic tag: with professional audio processing and mixing equipment, the 16 kHz to 20 kHz components of the audio originally being played are removed, the defined 16 kHz to 20 kHz tag audio is mixed into the existing music, songs, or sound effects, and the result is played through professional audio equipment. A dedicated server is used to resolve the definitions that map audio electronic tags to data tags.
  • Meanwhile, the smart terminal launches a dedicated App and uses its microphone to capture the 16 kHz to 20 kHz audio electronic tag.
  • After capturing the audio electronic tag through the microphone, the smart terminal's App connects to the dedicated server over the WIFI network for resolution, obtains the data tag corresponding to the electronic tag, and thereby completes the distribution and application of the data tag so that the user obtains the corresponding information.
  • Scheme 2: As shown in FIG. 4, the existing music, songs, or sound effects are divided into segments, and each segment corresponds to an audio electronic tag. The WIFI network is used to determine the venue/location; the audio electronic tag is combined with the venue/location, and the corresponding data tag is pushed over the WIFI network to the viewer's or customer's smart-terminal App, where the related information is displayed.
  • In the above scheme the venue/location determination is based mainly on the WIFI network (for the specific meaning and method of "using the device identification code of a WIFI or other wireless network access device to determine location" in the present application, reference may be made to application No. 201310460760.6, entitled "Data tag carrier information application and processing system and method").
  • The present invention may also adopt 3G/4G base-station positioning on the China Mobile/Unicom/Telecom networks, and the two approaches may be used in combination.
  • The range of applications is wide. In large stores and shopping malls, for example, each brand shop has its own WIFI network and its own background music, songs, or sound effects. The customer opens the specific App on a smart terminal to connect to the WIFI network, receives through the terminal's microphone the audio electronic tag of the segmented background music, songs, or sound effects, and the combination of the audio electronic tag and the venue/location (WIFI) is sent to the dedicated definition-resolution server for parsing to obtain the corresponding data tag, as sketched below. The customer's smart terminal thus obtains the data tag corresponding to the venue, and the data tag pushes the on-site information to the smart-terminal App for the customer to browse and operate.
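  • A minimal sketch of this client query follows, assuming the definition-resolution server accepts the detected segment tag together with the WIFI SSID as the venue/location; the endpoint and field names are assumptions for illustration:

```python
import requests

def resolve_segment_tag(segment_tag_id: str, wifi_ssid: str) -> dict:
    """Combine the audio electronic tag with the venue/location (WIFI SSID) and resolve it."""
    payload = {"audio_tag": segment_tag_id, "location_ssid": wifi_ssid}
    resp = requests.post("http://resolver.example/resolve",   # hypothetical endpoint
                         json=payload, timeout=10)
    resp.raise_for_status()
    return resp.json()   # the data tag and the on-site information it points to
```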
  • At a fashion show or product launch, while models walk the runway, viewers or customers use their own smart terminals to open the specific App and connect to the WIFI network, receive through the terminal's microphone the audio electronic tag of the segmented background music, songs, or sound effects, and the combination of the audio electronic tag and the venue/location (WIFI) is sent to the dedicated definition-resolution server for parsing to obtain the corresponding data tag. The viewer's or customer's smart terminal thus obtains the data tag corresponding to the scene, and the data tag pushes the on-site information to the smart-terminal App for the viewer or customer to browse and operate.
  • The application of the present invention is not limited to the specific embodiments described above; the invention can obviously also be used, for example, to introduce the programme, dialogue, or lyrics to the audience during performances such as plays and musicals, or, in tourist attractions, to use the background music together with the location information of the access points to present scenic-spot information and visiting routes to tourists.
  • With the audio-based data tag distribution system and method of the invention, because the system includes an audio processing device, an audio playback device, a client, and a data tag server, the audio processing device can associate the data tag corresponding to the information to be disseminated with one or more audio segments, and the associated audio data is then played as background music through the audio playback device. After the user identifies the background music through a client such as a mobile phone or tablet computer, the client audio data is sent over the network to the audio processing device; once the audio processing device finds the corresponding data tag information, the data tag server sends the data tag to the client, and the client can then obtain the product information or other data corresponding to the data tag.
  • Simply by playing the associated audio data as background music, viewers can obtain product information intuitively without the performance being disturbed, and customers can quickly obtain a large amount of product information; moreover, the audio-based data tag distribution system of the present invention has a simple structure and a wide range of applications, and the method is simple to implement at a comparatively low cost.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computer Hardware Design (AREA)
  • Microelectronics & Electronic Packaging (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Acoustics & Sound (AREA)
  • Telephonic Communication Services (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Reverberation, Karaoke And Other Acoustics (AREA)

Abstract

An audio-based data tag distribution system and method. The system includes an audio processing device, an audio playback device, a client, and a data tag server. In the method, the audio processing device associates the data tag corresponding to the information to be disseminated with audio, the audio playback device plays the associated audio data as background music, and after the client identifies the background music, the client obtains over the network the data corresponding to the data tag, such as product information.

Description

基于音频的数据标签发布系统及方法 技术领域
本发明涉及计算机应用技术领域,特别涉及进行音频处理的计算机技术领域,具体是指一种基于音频的数据标签发布系统及方法。
背景技术
在时装发布秀或产品发布会等场合,在模特穿着展示时装走台或产品展示时,台下的观众通常希望能够更为具体地了解时装和产品的信息,但是如果通过背景解说词进行介绍,对观众来说不够直观也不够具体,很难掌握产品的全部信息;而如果通过显示屏进行显示,用户也无法专心欣赏台上的表演或产品演示,同样存在欠缺之处。因此如何能让观众完整直观地获得时装等产品的信息,又同时不妨碍观众欣赏表演或演示成为亟待解决的问题。
另外,在大型卖场、购物中心等场合,每个品牌的商店都希望路过的顾客能够多留意自己商店的产品,同时尽可能地将完整的产品信息告诉顾客,这对于销售人员来说是极大的考验,虽然可以通过发放宣传资料或销售人员直接介绍来完成,但这样的信息传递方式信息量十分有限,销售效果也不好。因此同样需要一种方法来解决如何及时快捷地将大量信息传递给顾客的问题。
数据标签指可通过特定设备读取的标签化的数据信息,包括NFC、RFID、条形码和二维码等等,在本申请中,主要指能够通过网络传递的条形码和二维码图形等。
发明内容
本发明的目的是克服了上述现有技术中的缺点,提供一种将需要传播的信息所对应的数据标签与一段或多端音频关联,关联音频数据可以是背景音乐本身或背景音乐中人耳无法听到的高音频,观众或顾客通过手机等设备识别背景音乐后,即可通过网络获取特定的传播信息,从而能使观众直观地获得产品信息,又不妨碍欣赏表演;同时也可以快捷地将大量商品信息传递给顾客的基于音频的数据标签发布系统及方法。
为了实现上述的目的,本发明的基于音频的数据标签发布系统具有如下构成:
该系统包括音频处理设备、音频播放设备、用户端和数据标签服务器。
其中,音频处理设备用以将音频数据按照预设的规则进行定义,使所述的音频数据中的一部分、若干部分或全部与预设的一个或多个数据标签信息关联,形成关联音频数据,该音频处理设备还用以接收用户端音频数据,并根据接收到的用户端音频数据查找与之关联的一个或多个数据标签信息;音频播放设备连接所述的音频处理设备,用以读取所述的关联音频数据,并将所述的关联音频数据以声波形式输出;用户端包括麦克风、与麦克风连接的声波数字化处理器以及与所述的声波数字化处理器连接的网络模块,所述的麦克风用以接收所述的音频播放设备输出的声波,所述的声波数字化处理器将麦克风接收的声波转换为数字化的用户端音频数据,所述的网络模块通过网络连接所述的音频处理设备,并将所述的用户端音频数据发送到所述的音频处理设备;该用户端还用以根据获得的数据标签通过网络获取所述的数据标签对应的数据信息;数据标签服务器通过网络分别连接所述的音频处理设备和用户端,用以根据从所述的音频处理设备接收到的数据标签信息将相应的数据标签发送至所述的用户端。
该基于音频的数据标签发布系统中,所述的音频处理设备包括高音频处理器和混音处理器。
其中,高音频处理器用以将一段或多段人耳无法听到的高音频数据与一个或多个数据标签信息关联,该高音频处理器还用以接收用户端音频数据,并根据接收到的用户端音频数据查找与之关联的一个或多个数据标签信息;混音处理器连接于所述的高音频处理器,并存储有需要播放的人耳能够听到的音频数据,用以将所述的一段或多段人耳无法听到的高音频数据与所述的需要播放的人耳能够听到的音频数据进行混音处理,形成所述的关联音频数据。
该基于音频的数据标签发布系统中,所述的人耳无法听到的高音频数据的音频频率范围为16kHz至20kHz。
该基于音频的数据标签发布系统中,所述的多段人耳无法听到的高音频数据之间的音频频率不同。
该基于音频的数据标签发布系统中,所述的音频处理设备包括:音频分段处理器,存储有需要播放的音频数据,用以将该音频数据分段,将每段音频数据与预设的一个或多个数据标签信息关联,形成关联音频数据,该音频分段处理器还用以接收用户端音频数据,并根据接收到的用户端音频数据查找与之关联的一个或多个数据标签信息。
该基于音频的数据标签发布系统中,所述的音频分段处理器根据音频数据的音节将所述的音频数据分段。
该基于音频的数据标签发布系统还包括无线网络接入设备,所述的无线网络接入设备分 别通过无线信号连接所述的音频处理设备、用户端和数据标签服务器。
该基于音频的数据标签发布系统中,所述的无线网络接入设备为2G/3G/4G无线信号收发设备,所述的无线信号为2G/3G/4G无线信号;或者,所述的无线网络接入设备为无线路由器,所述的无线信号为WIFI信号。
该基于音频的数据标签发布系统中,所述的无线网络接入设备具有用以表示位置的设备识别码,所述的无线网络接入设备用以在接收到所述的用户端音频数据后,将用户端音频数据以及所述的设备识别码发送到所述的音频处理设备;所述的音频处理设备还包括设备识别码解析服务器,用以通过解析所述的设备识别码确认发出所述的用户端音频数据的用户端是否与连接该音频处理设备的无线网络接入设备连接,以确定用户端位置,若是,则音频处理设备根据接收到的用户端音频数据查找与之关联的一个或多个数据标签信息。
该基于音频的数据标签发布系统中,所述的音频播放设备包括功放和音响,所述的功放的输入端连接所述的音频处理设备的输出端,所述的功放的输出端连接所述的音响的输入端。
本发明还提供一种利用所述的系统实现的基于音频的数据标签发布方法,该方法包括以下步骤:
(1)所述的音频处理设备将音频数据按照预设的规则进行定义,使所述的音频数据中的一部分、若干部分或全部与预设的一个或多个数据标签信息关联,形成关联音频数据;
(2)所述的音频播放设备从所述的音频处理设备读取所述的关联音频数据,并将所述的关联音频数据以声波形式输出;
(3)所述的用户端通过麦克风接收音频播放设备输出的声波,并通过声波数字化处理器将麦克风接收的声波转换为数字化的用户端音频数据;
(4)所述的用户端通过网络模块将所述的用户端音频数据发送到所述的音频处理设备;
(5)所述的音频处理设备根据接收到的用户端音频数据查找与之关联的一个或多个数据标签信息,并将所述的数据标签信息发送到所述的数据标签服务器;
(6)所述的数据标签服务器根据从所述的音频处理设备接收到的数据标签信息将相应的数据标签发送至所述的用户端;
(7)所述的用户端根据获得的数据标签通过网络从与所述标签对应的服务器获取所述的数据标签对应的数据信息。
该基于音频的数据标签发布方法中,所述的音频处理设备包括高音频处理器和混音处理器,所述的步骤(1)具体包括以下步骤:
(11-1)所述的高音频处理器将一段或多段人耳无法听到的高音频数据与一个或多个数 据标签信息关联;
(11-2)所述的混音处理器将所述的一段或多段人耳无法听到的高音频数据与所述的需要播放的人耳能够听到的音频数据进行混音处理,形成所述的关联音频数据。
该基于音频的数据标签发布方法中,所述的步骤(5)具体为:
(51)所述的高音频处理器根据接收到的用户端音频数据查找与之关联的一个或多个数据标签信息,并将所述的数据标签信息发送到所述的数据标签服务器。
该基于音频的数据标签发布方法中,所述的人耳无法听到的高音频数据的音频频率范围为16kHz至20kHz。
该基于音频的数据标签发布方法中,所述的多段人耳无法听到的高音频数据之间的音频频率不同。
该基于音频的数据标签发布方法中,所述的音频处理设备包括音频分段处理器,所述的步骤(1)具体包括以下步骤:
(12)所述的音频分段处理器将需要播放的音频数据分段,将每段音频数据与预设的一个或多个数据标签信息关联,形成关联音频数据。
该基于音频的数据标签发布方法中,所述的步骤(5)具体为:
(52)所述的音频分段处理器根据接收到的用户端音频数据查找与之关联的一个或多个数据标签信息,并将所述的数据标签信息发送到所述的数据标签服务器。
该基于音频的数据标签发布方法中,所述的音频分段处理器根据需要播放的音频数据的音节将该音频数据分段。
该基于音频的数据标签发布方法中,该系统还包括无线网络接入设备,所述的无线网络接入设备分别通过无线信号连接所述的音频处理设备、用户端和数据标签服务器,所述的步骤(4)具体为:
(41)所述的用户端通过网络模块将所述的用户端音频数据经过所述的无线网络接入设备发送到所述的音频处理设备;
且所述的步骤(6)具体为:
(61)所述的数据标签服务器根据从所述的音频处理设备接收到的数据标签信息将相应的数据标签经过所述的无线网络接入设备发送至所述的用户端。
该基于音频的数据标签发布方法中,所述的无线网络接入设备具有用以表示位置的设备识别码,所述的音频处理设备还包括设备识别码解析服务器,所述的步骤(41)具体为:
(41-1)所述的无线网络接入设备在接收到所述的用户端音频数据后,将用户端音频数 据以及所述的设备识别码发送到所述的音频处理设备;
(41-2)所述的音频处理设备的设备识别码解析服务器判断接收到的设备识别码是否与该音频处理设备所连接的无线网络接入设备的设备识别码相同,若是,则进入步骤(5),若否,这退出该方法。
该基于音频的数据标签发布方法中,所述的音频播放设备包括功放和音响,所述的步骤(2)具体包括以下步骤:
(2-1)所述的功放从所述的音频处理设备读取所述的关联音频数据,并对所述的关联音频数据进行功率放大处理;
(2-2)所述的音响从将所述的经功率放大处理的关联音频数据以声波形式输出。
采用了该发明的基于音频的数据标签发布系统及方法,由于其系统包括音频处理设备、音频播放设备、用户端和数据标签服务器。从而可以利用音频处理设备将需要传播的信息所对应的数据标签与一段或多端音频关联,而后通过音频播放设备将关联音频数据作为背景音乐播放,用户通过手机、平板电脑等用户端识别背景音乐后,通过网络将用户端音频数据发送至音频处理设备,音频处理设备查找到相应的数据标签信息后,由数据标签服务器将数据标签发送到用户端,用户端就能够进一步地获得数据标签对应的产品信息等数据。利用本发明的这一系统和方法,只要将关联音频数据作为背景音乐播放就能使观众直观地获得产品信息,又不妨碍欣赏表演;或者使顾客快捷地获得大量商品信息,且本发明的基于音频的数据标签发布系统结构简单,应用范围广泛,其方法实现方式简便,实现成本也相当低廉。
附图说明
图1为本发明的基于音频的数据标签发布系统的结构示意图。
图2为本发明的基于音频的数据标签发布方法的步骤示意图。
图3为本发明的基于音频的数据标签发布系统及方法在实际应用中以附加高音频方式实现的示意图。
图4为本发明的基于音频的数据标签发布系统及方法在实际应用中以背景音乐分段定义方式实现的示意图。
具体实施方式
为了能够更清楚地理解本发明的技术内容,特举以下实施例详细说明。
请参阅图1所示,为本发明的基于音频的数据标签发布系统的结构示意图。
在一种实施方式中,该基于音频的数据标签发布系统包括音频处理设备、音频播放设备、 用户端和数据标签服务器。
其中,音频处理设备用以将音频数据按照预设的规则进行定义,使所述的音频数据中的一部分、若干部分或全部与预设的一个或多个数据标签信息关联,形成关联音频数据,该音频处理设备还用以接收用户端音频数据,并根据接收到的用户端音频数据查找与之关联的一个或多个数据标签信息;音频播放设备连接所述的音频处理设备,用以读取所述的关联音频数据,并将所述的关联音频数据以声波形式输出;用户端包括麦克风、与麦克风连接的声波数字化处理器以及与所述的声波数字化处理器连接的网络模块,所述的麦克风用以接收所述的音频播放设备输出的声波,所述的声波数字化处理器将麦克风接收的声波转换为数字化的用户端音频数据,所述的网络模块通过网络连接所述的音频处理设备,并将所述的用户端音频数据发送到所述的音频处理设备;该用户端还用以根据获得的数据标签通过网络获取所述的数据标签对应的数据信息;数据标签服务器通过网络分别连接所述的音频处理设备和用户端,用以根据从所述的音频处理设备接收到的数据标签信息将相应的数据标签发送至所述的用户端。
利用该实施方式所述的系统实现的基于音频的数据标签发布方法,如图2所示,包括以下步骤:
(1)所述的音频处理设备将音频数据按照预设的规则进行定义,使所述的音频数据中的一部分、若干部分或全部与预设的一个或多个数据标签信息关联,形成关联音频数据;
(2)所述的音频播放设备从所述的音频处理设备读取所述的关联音频数据,并将所述的关联音频数据以声波形式输出;
(3)所述的用户端通过麦克风接收音频播放设备输出的声波,并通过声波数字化处理器将麦克风接收的声波转换为数字化的用户端音频数据;
(4)所述的用户端通过网络模块将所述的用户端音频数据发送到所述的音频处理设备;
(5)所述的音频处理设备根据接收到的用户端音频数据查找与之关联的一个或多个数据标签信息,并将所述的数据标签信息发送到所述的数据标签服务器;
(6)所述的数据标签服务器根据从所述的音频处理设备接收到的数据标签信息将相应的数据标签发送至所述的用户端;
(7)所述的用户端根据获得的数据标签通过网络从与所述标签对应的服务器获取所述的数据标签对应的数据信息。
在一种较优选的实施方式中,所述的音频处理设备包括高音频处理器和混音处理器。
其中,高音频处理器用以将一段或多段人耳无法听到的高音频数据与一个或多个数据标 签信息关联,该高音频处理器还用以接收用户端音频数据,并根据接收到的用户端音频数据查找与之关联的一个或多个数据标签信息;混音处理器连接于所述的高音频处理器,并存储有需要播放的人耳能够听到的音频数据,用以将所述的一段或多段人耳无法听到的高音频数据与所述的需要播放的人耳能够听到的音频数据进行混音处理,形成所述的关联音频数据。所述的人耳无法听到的高音频数据的音频频率范围为16kHz至20kHz。当具有多段人耳无法听到的高音频数据时,各段高音频数据之间的音频频率不同。
在利用该较优选的实施方式所述的系统实现的基于音频的数据标签发布方法中,所述的步骤(1)具体包括以下步骤:
(11-1)所述的高音频处理器将一段或多段人耳无法听到的高音频数据与一个或多个数据标签信息关联;
(11-2)所述的混音处理器将所述的一段或多段人耳无法听到的高音频数据与所述的需要播放的人耳能够听到的音频数据进行混音处理,形成所述的关联音频数据。
且所述的步骤(5)具体为:
(51)所述的高音频处理器根据接收到的用户端音频数据查找与之关联的一个或多个数据标签信息,并将所述的数据标签信息发送到所述的数据标签服务器。
在另一种较优选的实施方式中所述的音频处理设备包括音频分段处理器,该音频分段处理器存储有需要播放的音频数据,用以将该音频数据分段,将每段音频数据与预设的一个或多个数据标签信息关联,形成关联音频数据,该音频分段处理器还用以接收用户端音频数据,并根据接收到的用户端音频数据查找与之关联的一个或多个数据标签信息。其音频分段处理器优选根据音频数据的音节将所述的音频数据分段。
在利用该较优选的实施方式所述的系统实现的基于音频的数据标签发布方法中,所述的步骤(1)具体包括以下步骤:
(12)所述的音频分段处理器将需要播放的音频数据分段,将每段音频数据与预设的一个或多个数据标签信息关联,形成关联音频数据。
且所述的步骤(5)具体为:
(52)所述的音频分段处理器根据接收到的用户端音频数据查找与之关联的一个或多个数据标签信息,并将所述的数据标签信息发送到所述的数据标签服务器。
在进一步优选的实施方式中,该系统还包括无线网络接入设备,所述的无线网络接入设备分别通过无线信号连接所述的音频处理设备、用户端和数据标签服务器。该无线网络接入设备可以是2G/3G/4G无线通信的信号收发设备,相应的所述的无线信号为2G/3G/4G无线信 号。该无线网络接入设备也可以选择无线路由器,相应的无线信号则为WIFI信号。
在利用该进一步优选的实施方式所述的系统实现的基于音频的数据标签发布方法中,所述的步骤(4)具体为:
(41)所述的用户端通过网络模块将所述的用户端音频数据经过所述的无线网络接入设备发送到所述的音频处理设备;
所述的步骤(6)具体为:
(61)所述的数据标签服务器根据从所述的音频处理设备接收到的数据标签信息将相应的数据标签经过所述的无线网络接入设备发送至所述的用户端。
在更优选的实施方式中,所述的无线网络接入设备具有用以表示或作为位置信息的设备识别码(如无线路由器的SSID),所述的无线网络接入设备用以在接收到所述的用户端音频数据后,将用户端音频数据以及所述的设备识别码发送到所述的音频处理设备;
且所述的音频处理设备还包括设备识别码解析服务器,用以通过解析所述的设备识别码确认发出所述的用户端音频数据的用户端是否与连接该音频处理设备的无线网络接入设备连接,以确定用户端位置,若是,则音频处理设备根据接收到的用户端音频数据查找与之关联的一个或多个数据标签信息。
在利用该更优选的实施方式所述的系统实现的基于音频的数据标签发布方法中,所述的步骤(41)具体为:
(41-1)所述的无线网络接入设备在接收到所述的用户端音频数据后,将用户端音频数据以及所述的设备识别码发送到所述的音频处理设备;
(41-2)所述的音频处理设备的设备识别码解析服务器判断接收到的设备识别码是否与该音频处理设备所连接的无线网络接入设备的设备识别码相同,若是,则进入步骤(5),若否,这退出该方法。
在一种可供选择的实施方式中,所述的音频播放设备包括功放和音响,所述的功放的输入端连接所述的音频处理设备的输出端,所述的功放的输出端连接所述的音响的输入端。
在利用该可供选择的实施方式所述的系统实现的基于音频的数据标签发布方法中,所述的步骤(2)具体包括以下步骤:
(2-1)所述的功放从所述的音频处理设备读取所述的关联音频数据,并对所述的关联音频数据进行功率放大处理;
(2-2)所述的音响从将所述的经功率放大处理的关联音频数据以声波形式输出。
在实际应用中,本发明大致可以包括如图3和图4所示的两种方案。
方案1:如图3所示,使用专业音频处理和混音设备,以超过人耳正常接收范围的16kHz~20kHz作为音频电子标签,在不影响观众/顾客欣赏音乐/歌曲/音响效果的情况下,通过WIFI网络将音频电子标签对应的数据标签推送到观众/客户的智能终端App中,进行相关信息的显示/展示。
研究表明,专业音响的高频段音频播放可达到5000~20kHz。智能终端,如手机、平板电脑等的话筒/麦克风音频接收范围在20Hz~20kHz。
理论上人耳的听音范围在20Hz~20kHz,但实际生活中人耳真正能听到的范围是在90Hz~15.1kHz之间。人耳随着年龄的增大和其他原因,实际的听力范围要远远小于90Hz~15.1kHz(主要在高频方面)。随着年纪增大下降明显,人的听力范围是有生理结构所决定的。
利用超出实际生活中人耳听音范围的16kHz~20kHz音频作为电子标签,通过专业音频处理和混音设备,将原先播放的音频中16kHz~20kHz之间音频删除,并将定义的16kHz~20kHz音频作为电子标签通过混音设备加入现有的音乐/歌曲/音响效果中,再通过专业音响设备进行播放。
采用专用的音频电子标签对应数据标签的定义解析专属服务器。同时,智能终端启动专用的App,使用智能终端的话筒/麦克风采集16kHz~20kHz的音频电子标签。智能终端的App通过话筒采集后音频电子标签,通过WIFI网络连接专属服务器进行解析,获取电子标签对应的数据标签,实现数据标签的发布应用,使用户获得相应的信息。
方案2:如图4所示,将现有的音乐/歌曲/音响效果进行分段处理,每个分段对应相应的音频电子标签。结合WIFI网络确定场所/位置,组合音频电子标签和场所/位置,通过WIFI网络将与之对应的数据标签推送到观众/客户的智能终端App中,进行相关信息的显示/展示。
以上方案中位置/场所确定以WIFI网络为主(本申请中所涉及的“利用WIFI或其它无线网络接入设备的设备识别码作为确定位置”的具体含义及方法可参考申请号为201310460760.6,名为“数据标签载体信息应用与处理系统及方法”确定。),同时,本发明也可采用移动/联通/电信网络的3G/4G基站定位技术,两者更可相结合进行使用。
本发明的应用范围相当广泛,如在大型卖场、购物中心,每个品牌专卖店都有各自对应的WIFI网络和各自对应的背景音乐/歌曲/音响效果。顾客使用智能终端,打开特定App程序连接WIFI网络,通过智能终端的话筒/麦克风接收获取分段的背景音乐/歌曲/音响效果的音频电子标签,音频电子标签和场所/位置(WIFI)组合连接到定义解析专属服务器进行解析,获取与之相对应的数据标签。顾客的智能终端获取到与现场对应的数据标签,数据标签将现场相关信息推送到智能终端App上,供顾客浏览/操作。
时装发布秀/产品发布会,模特穿着展示时装走台时,观众/客户使用各自的智能终端,打开特定App程序连接WIFI网络,通过智能终端的话筒/麦克风接收获取分段的背景音乐/歌曲/音响效果的音频电子标签,音频电子标签和场所/位置(WIFI)组合连接到定义解析专属服务器进行解析,获取与之相对应的数据标签。观众/顾客的智能终端获取到与现场对应的数据标签,数据标签将现场相关信息推送到智能终端App上,供观众/顾客浏览/操作。
当然本发明的应用不限于上述的具体实施例中,显而易见的本发明也可以用于例如话剧、歌舞剧的演出时向观众介绍演出内容或对白、唱词等信息,或在旅游景区利用背景音乐和接入点的位置信息向游客介绍景点信息和参观路径信息等等。
采用了该发明的基于音频的数据标签发布系统及方法,由于其系统包括音频处理设备、音频播放设备、用户端和数据标签服务器。从而可以利用音频处理设备将需要传播的信息所对应的数据标签与一段或多端音频关联,而后通过音频播放设备将关联音频数据作为背景音乐播放,用户通过手机、平板电脑等用户端识别背景音乐后,通过网络将用户端音频数据发送至音频处理设备,音频处理设备查找到相应的数据标签信息后,由数据标签服务器将数据标签发送到用户端,用户端就能够进一步地获得数据标签对应的产品信息等数据。利用本发明的这一系统和方法,只要将关联音频数据作为背景音乐播放就能使观众直观地获得产品信息,又不妨碍欣赏表演;或者使顾客快捷地获得大量商品信息,且本发明的基于音频的数据标签发布系统结构简单,应用范围广泛,其方法实现方式简便,实现成本也相当低廉。
在此说明书中,本发明已参照其特定的实施例作了描述。但是,很显然仍可以作出各种修改和变换而不背离本发明的精神和范围。因此,说明书和附图应被认为是说明性的而非限制性的。

Claims (22)

  1. 一种基于音频的数据标签发布系统,其特征在于,包括:
    音频处理设备,用以将音频数据按照预设的规则进行定义,使所述的音频数据中的一部分、若干部分或全部与预设的一个或多个数据标签信息关联,形成关联音频数据,该音频处理设备还用以接收用户端音频数据,并根据接收到的用户端音频数据查找与之关联的一个或多个数据标签信息;
    音频播放设备,连接所述的音频处理设备,用以读取所述的关联音频数据,并将所述的关联音频数据以声波形式输出;
    用户端,包括麦克风、与麦克风连接的声波数字化处理器以及与所述的声波数字化处理器连接的网络模块,所述的麦克风用以接收所述的音频播放设备输出的声波,所述的声波数字化处理器将麦克风接收的声波转换为数字化的用户端音频数据,所述的网络模块通过网络连接所述的音频处理设备,并将所述的用户端音频数据发送到所述的音频处理设备;该用户端还用以根据获得的数据标签通过网络获取所述的数据标签对应的数据信息;
    数据标签服务器,通过网络分别连接所述的音频处理设备和用户端,用以根据从所述的音频处理设备接收到的数据标签信息将相应的数据标签发送至所述的用户端。
  2. 根据权利要求1所述的基于音频的数据标签发布系统,其特征在于,所述的音频处理设备包括:
    高音频处理器,用以将一段或多段人耳无法听到的高音频数据与一个或多个数据标签信息关联,该高音频处理器还用以接收用户端音频数据,并根据接收到的用户端音频数据查找与之关联的一个或多个数据标签信息;
    混音处理器,连接于所述的高音频处理器,并存储有需要播放的人耳能够听到的音频数据,用以将所述的一段或多段人耳无法听到的高音频数据与所述的需要播放的人耳能够听到的音频数据进行混音处理,形成所述的关联音频数据。
  3. 根据权利要求2所述的基于音频的数据标签发布系统,其特征在于,所述的人耳无法听到的高音频数据的音频频率范围为16kHz至20kHz。
  4. 根据权利要求2所述的基于音频的数据标签发布系统,其特征在于,所述的多段人耳无法听到的高音频数据之间的音频频率不同。
  5. 根据权利要求1所述的基于音频的数据标签发布系统,其特征在于,所述的音频处理设备包括:
    音频分段处理器,存储有需要播放的音频数据,用以将该音频数据分段,将每段音频数据与预设的一个或多个数据标签信息关联,形成关联音频数据,该音频分段处理器还用以接收用户端音频数据,并根据接收到的用户端音频数据查找与之关联的一个或多个数据标签信息。
  6. 根据权利要5所述的基于音频的数据标签发布系统,其特征在于,所述的音频分段处理器根据音频数据的音节将所述的音频数据分段。
  7. 根据权利要1至6中任一项所述的基于音频的数据标签发布系统,其特征在于,该系统还包括无线网络接入设备,所述的无线网络接入设备分别通过无线信号连接所述的音频处理设备、用户端和数据标签服务器。
  8. 根据权利要求7所述的基于音频的数据标签发布系统,其特征在于,所述的无线网络接入设备为2G/3G/4G无线信号收发设备,所述的无线信号为2G/3G/4G无线信号。
  9. 根据权利要求7所述的基于音频的数据标签发布系统,其特征在于,所述的无线网络接入设备为无线路由器,所述的无线信号为WIFI信号。
  10. 根据权利要求7所述的基于音频的数据标签发布系统,其特征在于,所述的无线网络接入设备具有用以表示位置的设备识别码,所述的无线网络接入设备用以在接收到所述的用户端音频数据后,将用户端音频数据以及所述的设备识别码发送到所述的音频处理设备;
    所述的音频处理设备还包括设备识别码解析服务器,用以通过解析所述的设备识别码确认发出所述的用户端音频数据的用户端是否与连接该音频处理设备的无线网络接入设备连接,以确定用户端位置,若是,则音频处理设备根据接收到的用户端音频数据查找与之关联的一个或多个数据标签信息。
  11. 根据权利要求1所述的基于音频的数据标签发布系统,其特征在于,所述的音频播放设备包括功放和音响,所述的功放的输入端连接所述的音频处理设备的输出端,所述的功放的输出端连接所述的音响的输入端。
  12. 一种利用权利要求1所述的系统实现的基于音频的数据标签发布方法,其特征在于,所述的方法包括以下步骤:
    (1)所述的音频处理设备将音频数据按照预设的规则进行定义,使所述的音频数据中的一部分、若干部分或全部与预设的一个或多个数据标签信息关联,形成关联音频数据;
    (2)所述的音频播放设备从所述的音频处理设备读取所述的关联音频数据,并将所述的关联音频数据以声波形式输出;
    (3)所述的用户端通过麦克风接收音频播放设备输出的声波,并通过声波数字化处理器 将麦克风接收的声波转换为数字化的用户端音频数据;
    (4)所述的用户端通过网络模块将所述的用户端音频数据发送到所述的音频处理设备;
    (5)所述的音频处理设备根据接收到的用户端音频数据查找与之关联的一个或多个数据标签信息,并将所述的数据标签信息发送到所述的数据标签服务器;
    (6)所述的数据标签服务器根据从所述的音频处理设备接收到的数据标签信息将相应的数据标签发送至所述的用户端;
    (7)所述的用户端根据获得的数据标签通过网络从与所述标签对应的服务器获取所述的数据标签对应的数据信息。
  13. 根据权利要求12所述的基于音频的数据标签发布方法,其特征在于,所述的音频处理设备包括高音频处理器和混音处理器,所述的步骤(1)具体包括以下步骤:
    (11-1)所述的高音频处理器将一段或多段人耳无法听到的高音频数据与一个或多个数据标签信息关联;
    (11-2)所述的混音处理器将所述的一段或多段人耳无法听到的高音频数据与所述的需要播放的人耳能够听到的音频数据进行混音处理,形成所述的关联音频数据。
  14. 根据权利要求13所述的基于音频的数据标签发布方法,其特征在于,所述的步骤(5)具体为:
    (51)所述的高音频处理器根据接收到的用户端音频数据查找与之关联的一个或多个数据标签信息,并将所述的数据标签信息发送到所述的数据标签服务器。
  15. 根据权利要求13所述的基于音频的数据标签发布方法,其特征在于,所述的人耳无法听到的高音频数据的音频频率范围为16kHz至20kHz。
  16. 根据权利要求15所述的基于音频的数据标签发布方法,其特征在于,所述的多段人耳无法听到的高音频数据之间的音频频率不同。
  17. 根据权利要求12所述的基于音频的数据标签发布方法,其特征在于,所述的音频处理设备包括音频分段处理器,所述的步骤(1)具体包括以下步骤:
    (12)所述的音频分段处理器将需要播放的音频数据分段,将每段音频数据与预设的一个或多个数据标签信息关联,形成关联音频数据。
  18. 根据权利要求17所述的基于音频的数据标签发布方法,其特征在于,所述的步骤(5)具体为:
    (52)所述的音频分段处理器根据接收到的用户端音频数据查找与之关联的一个或多个数据标签信息,并将所述的数据标签信息发送到所述的数据标签服务器。
  19. 根据权利要求17所述的基于音频的数据标签发布方法,其特征在于,所述的音频分段处理器根据需要播放的音频数据的音节将该音频数据分段。
  20. 根据权利要求12至19中任一项所述的基于音频的数据标签发布方法,其特征在于,该系统还包括无线网络接入设备,所述的无线网络接入设备分别通过无线信号连接所述的音频处理设备、用户端和数据标签服务器,所述的步骤(4)具体为:
    (41)所述的用户端通过网络模块将所述的用户端音频数据经过所述的无线网络接入设备发送到所述的音频处理设备;
    所述的步骤(6)具体为:
    (61)所述的数据标签服务器根据从所述的音频处理设备接收到的数据标签信息将相应的数据标签经过所述的无线网络接入设备发送至所述的用户端。
  21. 根据权利要求20所述的基于音频的数据标签发布方法,其特征在于,所述的无线网络接入设备具有用以表示位置的设备识别码,所述的音频处理设备还包括设备识别码解析服务器,所述的步骤(41)具体为:
    (41-1)所述的无线网络接入设备在接收到所述的用户端音频数据后,将用户端音频数据以及所述的设备识别码发送到所述的音频处理设备;
    (41-2)所述的音频处理设备的设备识别码解析服务器判断接收到的设备识别码是否与该音频处理设备所连接的无线网络接入设备的设备识别码相同,若是,则进入步骤(5),若否,这退出该方法。
  22. 根据权利要求12所述的基于音频的数据标签发布方法,其特征在于,所述的音频播放设备包括功放和音响,所述的步骤(2)具体包括以下步骤:
    (2-1)所述的功放从所述的音频处理设备读取所述的关联音频数据,并对所述的关联音频数据进行功率放大处理;
    (2-2)所述的音响从将所述的经功率放大处理的关联音频数据以声波形式输出。
PCT/CN2014/086606 2014-01-20 2014-09-16 基于音频的数据标签发布系统及方法 WO2015106576A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
EP14878964.7A EP3098725A4 (en) 2014-01-20 2014-09-16 Audio-based data label distribution system and method
RU2016123401A RU2676031C2 (ru) 2014-01-20 2014-09-16 Система и способ передачи информации с помощью тегов данных на основе частоты звукового диапазона
US14/418,886 US9569716B2 (en) 2014-01-20 2014-09-16 System and method for distributing audio-based data tags
JP2016543055A JP2017504892A (ja) 2014-01-20 2014-09-16 音声周波数をベースとするデータタグ配布システム及び方法

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201410026089.9A CN104794127B (zh) 2014-01-20 2014-01-20 基于音频的数据标签发布系统及方法
CN201410026089.9 2014-01-20

Publications (1)

Publication Number Publication Date
WO2015106576A1 true WO2015106576A1 (zh) 2015-07-23

Family

ID=53542368

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2014/086606 WO2015106576A1 (zh) 2014-01-20 2014-09-16 基于音频的数据标签发布系统及方法

Country Status (6)

Country Link
US (1) US9569716B2 (zh)
EP (1) EP3098725A4 (zh)
JP (1) JP2017504892A (zh)
CN (1) CN104794127B (zh)
RU (1) RU2676031C2 (zh)
WO (1) WO2015106576A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105513512A (zh) * 2015-12-30 2016-04-20 华益(天津)文化传播有限公司 一种文化传播用智能设备
CN108833403A (zh) * 2018-06-11 2018-11-16 颜彦 一种具有嵌入式代码移植的融媒体信息发布生成方法
CN110855802B (zh) * 2020-01-15 2022-02-11 广州欧赛斯信息科技有限公司 职教诊改系统的数据分片分发存储方法、装置及服务器

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040254792A1 (en) * 2003-06-10 2004-12-16 Bellsouth Intellectual Proprerty Corporation Methods and system for creating voice files using a VoiceXML application
CN1629970A (zh) * 2003-12-15 2005-06-22 国际商业机器公司 用于在音频文件内表示内容的方法和系统
CN1909704A (zh) * 2006-08-30 2007-02-07 钟杨 移动终端设备获取用户接口的方法以及移动终端装置
CN101187974A (zh) * 2007-12-06 2008-05-28 深圳华为通信技术有限公司 二维码的应用方法和装置
CN102402542A (zh) * 2010-09-14 2012-04-04 腾讯科技(深圳)有限公司 一种视频标签方法及系统
CN103281401A (zh) * 2013-06-19 2013-09-04 安科智慧城市技术(中国)有限公司 展厅及其导览方法
CN104517136A (zh) 2013-09-30 2015-04-15 曲立东 数据标签载体信息应用与处理系统及方法

Family Cites Families (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4179669A (en) * 1978-06-05 1979-12-18 Bose Corporation Amplifying and equalizing
US7562392B1 (en) * 1999-05-19 2009-07-14 Digimarc Corporation Methods of interacting with audio and ambient music
JP2003523571A (ja) * 2000-02-15 2003-08-05 オーツー・マイクロ・インク ポータブル電子機器のオーディオコントローラ
JP2001297106A (ja) * 2000-04-13 2001-10-26 Sony Corp データベース作成方法
JP2002169579A (ja) * 2000-12-01 2002-06-14 Takayuki Arai オーディオ信号への付加データ埋め込み装置及びオーディオ信号からの付加データ再生装置
WO2006126462A1 (ja) * 2005-05-24 2006-11-30 Matsushita Electric Industrial Co., Ltd. 放送システム、ラジオ受信機、電子機器及びラジオ放送用変復調方法
KR100782055B1 (ko) * 2005-05-25 2007-12-04 (주)뮤레카 오디오유전자를 이용한 음악관련 정보 제공방법 및 시스템
US8249559B1 (en) * 2005-10-26 2012-08-21 At&T Mobility Ii Llc Promotion operable recognition system
US7684991B2 (en) * 2006-01-05 2010-03-23 Alpine Electronics, Inc. Digital audio file search method and apparatus using text-to-speech processing
CN101047722A (zh) * 2006-03-30 2007-10-03 腾讯科技(深圳)有限公司 媒体文件推送系统及方法
US20070286358A1 (en) * 2006-04-29 2007-12-13 Msystems Ltd. Digital audio recorder
CN101115124B (zh) * 2006-07-26 2012-04-18 日电(中国)有限公司 基于音频水印识别媒体节目的方法和装置
US20080262928A1 (en) * 2007-04-18 2008-10-23 Oliver Michaelis Method and apparatus for distribution and personalization of e-coupons
US8713593B2 (en) * 2010-03-01 2014-04-29 Zazum, Inc. Detection system and method for mobile device application
JP2012155706A (ja) * 2011-01-07 2012-08-16 Yamaha Corp 情報提供システム、携帯端末装置、識別情報解決サーバおよびプログラム
JP2013008109A (ja) * 2011-06-22 2013-01-10 Yamaha Corp 文書投稿支援システム、携帯端末装置および文書投稿支援プログラム
CN104160714A (zh) * 2012-03-02 2014-11-19 雅马哈株式会社 内容提供系统、内容提供方法、内容编辑装置、内容解析系统、以及播送站id放音装置
JP5812910B2 (ja) * 2012-03-22 2015-11-17 富士通エフ・アイ・ピー株式会社 認証装置及び認証方法
KR102317364B1 (ko) * 2012-05-01 2021-10-25 엘아이에스엔알, 인크. 콘텐츠 전달 및 관리를 위한 시스템 및 방법
US20130318114A1 (en) * 2012-05-13 2013-11-28 Harry E. Emerson, III Discovery of music artist and title by broadcast radio receivers
US9305559B2 (en) * 2012-10-15 2016-04-05 Digimarc Corporation Audio watermark encoding with reversing polarity and pairwise embedding
JP6125837B2 (ja) * 2012-12-28 2017-05-10 ヤフー株式会社 情報提供システムおよび情報提供方法
CN103152106B (zh) * 2013-03-13 2015-11-25 荆效民 基于音频的超声波信息推送方法及系统

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040254792A1 (en) * 2003-06-10 2004-12-16 Bellsouth Intellectual Proprerty Corporation Methods and system for creating voice files using a VoiceXML application
CN1629970A (zh) * 2003-12-15 2005-06-22 国际商业机器公司 用于在音频文件内表示内容的方法和系统
CN1909704A (zh) * 2006-08-30 2007-02-07 钟杨 移动终端设备获取用户接口的方法以及移动终端装置
CN101187974A (zh) * 2007-12-06 2008-05-28 深圳华为通信技术有限公司 二维码的应用方法和装置
CN102402542A (zh) * 2010-09-14 2012-04-04 腾讯科技(深圳)有限公司 一种视频标签方法及系统
CN103281401A (zh) * 2013-06-19 2013-09-04 安科智慧城市技术(中国)有限公司 展厅及其导览方法
CN104517136A (zh) 2013-09-30 2015-04-15 曲立东 数据标签载体信息应用与处理系统及方法

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3098725A4

Also Published As

Publication number Publication date
CN104794127A (zh) 2015-07-22
EP3098725A4 (en) 2017-10-04
CN104794127B (zh) 2018-03-13
EP3098725A1 (en) 2016-11-30
RU2676031C2 (ru) 2018-12-25
US20160314391A1 (en) 2016-10-27
US9569716B2 (en) 2017-02-14
JP2017504892A (ja) 2017-02-09
RU2016123401A (ru) 2018-02-28
RU2016123401A3 (zh) 2018-06-27

Similar Documents

Publication Publication Date Title
US20130301392A1 (en) Methods and apparatuses for communication of audio tokens
US20100093393A1 (en) Systems and Methods for Music Recognition
CN103152480B (zh) 利用移动终端进行到站提示的方法和装置
JP6580132B2 (ja) メディアコンテンツに関連付けられた情報を提供する方法および装置
JP2007164659A (ja) 音楽情報を利用した情報配信システム及び情報配信方法
JP2013008109A (ja) 文書投稿支援システム、携帯端末装置および文書投稿支援プログラム
CN104601202A (zh) 基于蓝牙技术实现文件搜索的方法、终端及蓝牙设备
CN106572241A (zh) 一种信息展示方法和装置
WO2015106576A1 (zh) 基于音频的数据标签发布系统及方法
CN109145226A (zh) 内容推送方法和装置
US20100222072A1 (en) Systems, methods and apparatus for providing information to a mobile device
JP2016005268A (ja) 情報伝送システム、情報伝送方法、及びプログラム
CN101855850B (zh) 有效率地获得无线分享音频文件和信息
JP2011248118A (ja) 音響通信方法を用いたホームページ誘導方法およびシステム
CN104038772A (zh) 生成铃声文件的方法及装置
JP2012216185A (ja) 情報処理装置、情報処理方法、及びプログラム
CN204009894U (zh) 基于音频的数据标签发布系统
CN105448296B (zh) 信息分发方法和装置以及信息接收方法和装置
CN108289245A (zh) 自动媒体信息播放方法
JP6114249B2 (ja) 情報送信装置および情報送信方法
JP6701756B2 (ja) 情報提供システム、情報提供装置、および、情報提供方法
JP6195506B2 (ja) 情報提供装置、情報提供方法、情報提供プログラム、端末装置および情報要求プログラム
WO2020246205A1 (ja) プログラム、端末装置および端末装置の動作方法
JP6866947B2 (ja) 情報提供システム、および、情報提供方法
CN203104777U (zh) 利用广播系统实现将位置信号传送到移动通信终端的通信装置

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 14418886

Country of ref document: US

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14878964

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2016543055

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

REEP Request for entry into the european phase

Ref document number: 2014878964

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 2014878964

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 2016123401

Country of ref document: RU

Kind code of ref document: A